Comments
Just got my new PC yesterday, and ran some tests today...
This is with the full scene, no modifications, and OptiX acceleration turned on (a quick scaling check on these numbers follows after the list).
System - Windows 8.1, 8 Core (16 threads) 5960X, 32GB RAM, 2 x Titan X 12GB, 2 x Titan 6GB
CPU + 2 Titan X + 2 Titan = 1 min 13 sec
2 Titan X = 1 min 28 sec
2 Titan = 2 min 21 sec
2 Titan X + 2 Titan = 59 sec
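As a side note (not part of the post above), here is a minimal Python sketch that takes the quoted times and checks how closely the mixed 2x Titan X + 2x Titan run matches the ideal sum of the two pairs' individual rates; the names and numbers are simply the ones reported above.

```python
# Minimal sketch: convert the reported times to seconds and compare the
# combined 2x Titan X + 2x Titan run against ideal rate addition.
times_s = {
    "2 Titan X": 1 * 60 + 28,            # 88 s
    "2 Titan": 2 * 60 + 21,              # 141 s
    "2 Titan X + 2 Titan": 59,           # 59 s
}

# With perfect scaling, the combined rate is the sum of the individual rates.
ideal_s = 1.0 / (1.0 / times_s["2 Titan X"] + 1.0 / times_s["2 Titan"])
actual_s = times_s["2 Titan X + 2 Titan"]
print(f"ideal combined: {ideal_s:.1f} s, measured: {actual_s} s "
      f"({ideal_s / actual_s:.0%} scaling efficiency)")
# -> roughly 54 s ideal vs 59 s measured, i.e. about 92% scaling efficiency.
```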
System: Win 8, 32 GB, CPU 2xAMD Opteron 6274 (32 cores), GPU GTX 670.
Qt Version: 4.8.6
Full scene (all spheres).
CPU only (memory; Optix on): 18 min 09 sec (Received update to 05000 iterations after 1084.697s).
CPU only (memory; Optix off): 22 min 53 sec (Received update to 05000 iterations after 1370.290s).
CPU only (speed; Optix on): 17 min 15 sec (Received update to 05000 iterations after 1032.766s).
CPU only (speed; Optix off): 18 min 46 sec (Received update to 05000 iterations after 1125.874s).
GPU only (memory; Optix on): 7 min 13 sec (Received update to 05000 iterations after 430.680s).
GPU only (memory; Optix off): 10 min 44 sec (Received update to 05000 iterations after 642.256s).
GPU only (speed; Optix on): 7 min 34 sec (Received update to 05000 iterations after 451.353s).
GPU only (speed; Optix off): 8 min 01 sec (Received update to 05000 iterations after 477.924s).
CPU+GPU (memory; Optix on): 6 min 3 sec (Received update to 05000 iterations after 358.997s).
CPU+GPU (memory; Optix off): 8 min 20 sec (Received update to 05000 iterations after 496.136s).
CPU+GPU (speed; Optix on): 5 min 48 sec (Received update to 05000 iterations after 345.763s).
CPU+GPU (speed; Optix off): 6 min 16 sec (Received update to 05000 iterations after 372.977s).
It looks like OptiX should always be on; the "memory" optimization is good for GPU and the "speed" optimization for CPU. Try to find the right combination for your setup. (A small log-parsing sketch follows below.)
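Since several posts in this thread quote the same "Received update to 05000 iterations after …s" log line, here is a hedged Python sketch for pulling those timings out of a Daz Studio log and turning them into iterations per second. It only assumes the wording quoted in parentheses above; adjust the pattern if your log differs.

```python
import re

# Extract "Received update to 05000 iterations after 1084.697s" style snippets
# and report iterations per second for easy side-by-side comparison.
PATTERN = re.compile(r"Received update to (\d+) iterations after ([\d.]+)s")

def iteration_rates(log_text):
    """Yield (iterations, seconds, iterations/sec) for each matching line."""
    for match in PATTERN.finditer(log_text):
        iters, secs = int(match.group(1)), float(match.group(2))
        yield iters, secs, iters / secs

sample = "CPU+GPU (speed; Optix on): Received update to 05000 iterations after 345.763s."
for iters, secs, rate in iteration_rates(sample):
    print(f"{iters} iterations in {secs:.1f} s -> {rate:.2f} it/s")
# -> 5000 iterations in 345.8 s -> 14.46 it/s
```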
Replaced my 3 year old primary computer with an HP preconfigured factory refurb:
Test scene rendered with the default 5000 iterations, scene unmodified, with CPU + GPU + OptiX Prime acceleration enabled.
NEW System Specs: i7-4790K 4.00 GHz | 32 GB RAM | GTX 980 4GB 2048 cores
Total Rendering Time: 3 minutes 26.70 seconds
OLD System Specs: i7-3930K 3.20 GHz | 16 GB RAM | GT 640 3GB 144 cores | GT 430 1GB 93 cores
Total Rendering Time: 17 minutes 18.84 seconds
My setup:
4 x Xeon E5-4650 (64 cores in total)
128gb DDR3 RAM @ 1600 MHz (ECC)
2 X Quadro K6000
Time: 17 seconds.
I know this may be off topic, but I was also trying to compare 3ds Max Iray results with those of DAZ Studio. Interesting indeed.
http://www.maxforums.org/threads/iray_gpu_cpu_comparison_test/0001.aspx
Note that that is a very old thread, and it isn't fair to compare Iray 2015 to earlier versions. LOL. So if you are going to compare DAZ Studio's Iray to 3ds Max, make sure you are comparing against 3ds Max 2016.
Could you please tell us what your OS and motherboard are, and which version of DAZ runs on that?
I think it can't be running Windows, due to the limit of at most 2 physical CPUs.
So is it Unix/Linux? I thought DAZ wasn't Linux software...
I am using Max 2016!
Supermicro X9QRi-F+, Windows Server 2012 R2.
Maybe I am doing something wrong.
(1) GTX 980 ti
(1) GTX 770
i5 4670
16gb ram
2 minutes 8 seconds, which seems awfully fast compared to some of these systems.
Not that I am complaining!
Different machines will yield different results. I've found good cooling will also speed up the process.
Hi. I have a question: would Daz Studio 4.8 running with an SLI pair of 560 Tis be good enough in terms of onboard memory?
Maybe we need to start benchmarking again with newer versions and now some of us have upgraded our hardware.
Xeon 1650
2xM4000 Quadro
1 GTX 980 Ti
Total Rendering Time: 2 minutes 18.18 seconds (with CPU)
Total Rendering Time: 2 minutes 33.18 seconds (without CPU)
I owe this thread, because it inspired me to find out for myself. I think I may not have optimized my system, since it's supposed to be much faster than you guys' 2x GTX Titan X setups. But that's it! My iMac universe is over. To complete this test suite, my late-2012 iMac took around an hour and a half!
Nvidia 660M Overclocked Core 1050mhz Mem 3000mhz
Total Rendering Time: 25 minutes 28.50 seconds
Intel Core i7 3630QM Overclocked To 3.3ghz
Total Rendering Time: 44 minutes 2.60 seconds
CPU + GPU as above
Total Rendering Time: 21 minutes 32.77 seconds
Nvidia 660M Overclocked Core 1085mhz Mem 3000mhz
Total Rendering Time: 24 minutes 30.66 seconds
Nvidia 660M Stock Clock Core 950mhz Mem 2500mhz
Total Rendering Time: 30 minutes 34.49 seconds
All with Optix Prime ON and FULL scene. Cheers :)
670 2gb, i5 4690 3.5 ghz, 16 gb ram, win 10
gpu only, optix on: 80% took 55 seconds, 90% 6 minutes, 100% ~8 minutes (I missed the exact number when it finished.) This was the original scene with all the balls.
I had a modest overclock on the gpu, as I originally built this pc purely to be a mid tier gaming machine. That was before I discovered this Daz thing. Obviously 2 gb is rather limiting for Daz. I had considered upgrading to a 970, but the strange ram situation with that card put me off. So I've decided to wait it out for 1000 series which is due soon. I am optimistic that the 1000 series (or whatever they call it) will be much better for 3d apps like Daz than the 900 series is. That is in large part due to the new memory Pascal is slated to use, and nvidia has stated they will make cards with as much as 16 gb for consumers, 32 for servers. Plus the memory is much faster, too. This memory should do wonders for Daz!
I'm thinking nvidia will release a standard 4 gb model for the 1070 and 1080 that will mirror the launch prices of the 970/980. Then they'll have 8 and possibly 16 gb versions at premium prices. Hopefully those premium prices aren't too high, but I have a bad feeling they will be, especially for any 16 gb model. I don't want to even think about how much the 32 gb server models might cost. I've read reports that production of the new RAM is slow, and this will likely affect both the price and availability of these cards.
Nvidia 950M Overclocked Core 1250mhz Mem 2350mhz (DDR3)
Total Rendering Time: 11 minutes 7.42 seconds
Nvidia 950M Stock Core 1125mhz Mem 2000 mhz (DDR3)
Total Rendering Time: 12 minutes 16.87 seconds
All with Optix Prime ON and FULL scene. Cheers :)
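A quick back-of-the-envelope check (mine, not the poster's) on the 950M numbers above: how closely does the render speedup track the core overclock? The figures are taken straight from the two results just posted.

```python
# Compare the relative clock increase with the relative render speedup.
stock_clock_mhz, oc_clock_mhz = 1125, 1250
stock_time_s = 12 * 60 + 16.87            # 736.87 s
oc_time_s = 11 * 60 + 7.42                # 667.42 s

clock_gain = oc_clock_mhz / stock_clock_mhz - 1   # ~ +11.1%
speed_gain = stock_time_s / oc_time_s - 1         # ~ +10.4%
print(f"core clock +{clock_gain:.1%}, render speed +{speed_gain:.1%}")
# The speedup tracks the core overclock almost 1:1 (the memory clock was also
# raised), which suggests this scene is mostly compute-limited on the 950M.
```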
Thanks to all the people that have posted benchmarks here, it's been very useful.
I'm thinking of getting a single GTX 970 then get a second one later.
This would give me 3,328 cores which seems good.
Curious to know more about the memory thing on the 970 and if it really is an issue.
Cheers :)
Asus 970 mobo, AMD 8-core CPU OC'd to 4.0GHz, 16GB DDR3 RAM (1600), EVGA 670 2GB 256-bit. The results don't make any sense to me: CPU/GPU takes longer than GPU only.
Results are at 90%, I'm impatient, lol! In future, I'll do 100%.
Cpu only = 8 min 57 secs
Gpu only = 4 min 13 secs
Cpu/Gpu = 5 min 31 secs
I've upgraded from 2x M4000 to 2x Titan X, and these are the results.
CPU Brand String: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
with CPU round 1
Total Rendering Time: 1 minutes 48.78 seconds
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX TITAN X): 1704 iterations, 41.892s init, 64.626s render
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 2 (GeForce GTX TITAN X): 1518 iterations, 40.344s init, 66.491s render
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 1 (GeForce GTX 980 Ti): 1485 iterations, 39.766s init, 66.980s render
with CPU round 2
Total Rendering Time: 1 minutes 47.1 seconds
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX TITAN X): 1715 iterations, 41.022s init, 64.530s render
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 2 (GeForce GTX TITAN X): 1504 iterations, 40.533s init, 64.262s render
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 1 (GeForce GTX 980 Ti): 1491 iterations, 38.308s init, 66.526s render
NO CPU
Total Rendering Time: 1 minutes 56.78 seconds
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX TITAN X): 2566 iterations, 32.697s init, 82.435s render
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 2 (GeForce GTX TITAN X): 2434 iterations, 32.125s init, 82.308s render
NO CPU
Total Rendering Time: 1 minutes 31.49 seconds
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX TITAN X): 1746 iterations, 30.437s init, 58.509s render
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 2 (GeForce GTX TITAN X): 1633 iterations, 31.141s init, 58.157s render
Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 1 (GeForce GTX 980 Ti): 1621 iterations, 30.878s init, 58.270s render
I wonder why my times are so much longer than necro__boy's
(2 Titan X = 1 min 28 sec). Well, I actually upgraded to try it for myself.
There may be something different between my Titan Xs and his. I don't know.
BTW, I also turned OptiX acceleration on (and I wonder why I shouldn't). A per-device throughput sketch for the logs above follows.
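For what it's worth, the per-device statistics quoted above can be summarized in one place. This is a hedged Python sketch that only assumes the "CUDA device N (name): X iterations, Ys init, Zs render" wording shown in the log excerpts; it converts each device's numbers into iterations per second so the Titan Xs and the 980 Ti can be compared directly.

```python
import re

# Parse Iray's per-device statistics lines into iterations per second.
STATS = re.compile(
    r"CUDA device (\d+) \((.+?)\): (\d+) iterations, ([\d.]+)s init, ([\d.]+)s render"
)

def device_rates(log_text):
    """Yield (device name, iterations per second of pure render time)."""
    for _dev, name, iters, _init_s, render_s in STATS.findall(log_text):
        yield name, int(iters) / float(render_s)

log = """\
CUDA device 0 (GeForce GTX TITAN X): 1746 iterations, 30.437s init, 58.509s render
CUDA device 2 (GeForce GTX TITAN X): 1633 iterations, 31.141s init, 58.157s render
CUDA device 1 (GeForce GTX 980 Ti): 1621 iterations, 30.878s init, 58.270s render
"""
for name, rate in device_rates(log):
    print(f"{name}: {rate:.1f} iterations/s")
# -> about 29.8, 28.1 and 27.8 it/s respectively for this run.
```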
Because GPUs manage rendering jobs better than CPUs!
4mins 32 seconds
GeForce GTX 670
Intel Core i7 3820 CPU @3.60GHz 32GB Ram Win 7
Optix Prime on GPU
Maybe I missed something. My iMac's GPU is a GTX 680, and it took one hour and thirty-seven minutes to complete it at 100%. I may try it in a VirtualBox Windows 7 VM to accomplish what you did.
Shanina, I was very surprised at how the GTX 670 performed compared to newer cards. I just purchased mine new 2 weeks ago. I thought the store rep was trying to dump an older-model card on me. Thus far I'm happy and impressed.
awesomefb, I am too... I've had mine for about 18 months and wasn't sure how it would handle Iray, but I'm very surprised and happy with the results.
It is interesting to see how it compares with the newer cards.
Awesome, the card has performed well for you. Continue to enjoy it!
With all respect, I guess some of us would like to see a clip showing your card's performance. If it's not too much bother, please show it. I think it would change many things. I for one won't pay this much to render just 3 minutes faster than a GTX 670. :-D
I got a new GTX 980 Ti, yesterday. My full PC specs:
CPU: 3.5 GHz Core i7-4770K
GPU 1: 6 GB GeForce GTX 780 (Display Adapter)
GPU 2: 6 GB GeForce GTX 980 Ti
RAM: 32 GB
O/S: Windows 7 Pro 64-bit
And my new numbers:
CPU: 25 minutes 30.89 seconds
GPU 1: 3 minutes 49.97 seconds
GPU 2: 2 minutes 39.41 seconds
CPU + GPU 1: 3 minutes 48.48 seconds
CPU + GPU 2: 2 minutes 40.75 seconds
CPU + GPU 1 & 2: 2 minutes 9.91 Seconds
Conclusion: Buying a dedicated GPU for rendering was absolutely worth it. In addition to the speed boost, I can finally use all of my PC's available resources to render high-resolution images while retaining full multi-tasking functionality without any system lag.
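To put those numbers in perspective, here is a small Python sketch (not part of the post) that expresses each configuration as a speedup over CPU-only and compares the dual-GPU run with the ideal sum of the two cards' individual rates; all times are copied from the list above.

```python
# Speedups relative to CPU-only, plus an ideal-scaling check for the GPU pair.
times_s = {
    "CPU only": 25 * 60 + 30.89,
    "GTX 780": 3 * 60 + 49.97,
    "GTX 980 Ti": 2 * 60 + 39.41,
    "780 + 980 Ti": 2 * 60 + 9.91,
}

cpu_s = times_s["CPU only"]
for name, t in times_s.items():
    print(f"{name:<13} {t:7.1f} s  {cpu_s / t:4.1f}x vs CPU")

ideal_s = 1.0 / (1.0 / times_s["GTX 780"] + 1.0 / times_s["GTX 980 Ti"])
print(f"ideal dual-GPU time ~{ideal_s:.0f} s vs {times_s['780 + 980 Ti']:.0f} s measured")
# -> the 980 Ti alone is ~9.6x the CPU and the pair ~11.8x; the measured pair
#    falls short of the ~94 s ideal, presumably because per-render setup time
#    does not shrink as more devices are added.
```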
I'm using a Mac with an NVIDIA GeForce GT 755M 1GB GDDR5, and it's SO SLOW for Iray rendering. This Iray starter scene took almost 2 hours.
I intend to buy a Windows machine with a good CUDA-capable NVIDIA card; which of these is the best? It's going to be used ONLY for DAZ 3D RENDERING, nothing else, because I prefer to do my work on my Mac. So I don't know if the i5/i7 choice is that important.
- Core i5 4690, GTX 960 2GB, 8GB RAM
- Core i7 4790, GTX 980 4GB, 16GB RAM
- GTX 780
- Core i5 4690, GTX 970 4GB, 8GB 1600MHz RAM
thank you!
CPU RAM and GPU CUDA cores/VRAM are the most influential factors for rendering times. CPU speed is less important. Hard drive/SSD speed is not that important. The artist is the slowest part of the render equation.
That said, prefer more CPU RAM to a faster CPU (e.g. get an i5 if the price difference from an i7 gets you 32 GB of RAM). Get at least 4 GB on the GPU, and buy as many CUDA cores as you can afford. Buying two lower-end GPUs instead of one higher-end GPU is not usually cost-effective.
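Applying that advice to the Maxwell-era cards asked about a few posts up, here is a rough Python sketch. The core counts and VRAM sizes are approximate reference-card figures (the GTX 980's 2048 cores and the 970's 1664 match numbers quoted earlier in this thread), and the Kepler GTX 780 is left out because CUDA core counts aren't directly comparable across GPU generations.

```python
# Rough heuristic from the advice above: require at least 4 GB of VRAM,
# then prefer whichever card has the most CUDA cores.
cards = {
    # name: (CUDA cores, VRAM in GB), approximate reference-card figures
    "GTX 960": (1024, 2),
    "GTX 970": (1664, 4),
    "GTX 980": (2048, 4),
}

MIN_VRAM_GB = 4
viable = {name: spec for name, spec in cards.items() if spec[1] >= MIN_VRAM_GB}
best = max(viable, key=lambda name: viable[name][0])
print(f"meets the 4 GB floor: {sorted(viable)}; most CUDA cores: {best}")
# -> of the cards listed, the GTX 980 offers the most cores with enough VRAM.
```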