Comments
Oh wow... thanks for confirming. The scenes I have are interiors with no end to them. I've sat down long after starting a render only to have my mouth agape, watching them still hammer my machine for so long just to get somewhere decent.
Dood! I'd love to see your scores.
I'm contemplating an upgrade. It doesn't sound like the Titan Xp will go much faster; however, the extra RAM may be a factor for me. If I can get by without it, then the 1080 Ti sounds very good.
Hi, could you do the tests again without using the Titan for display, and with it set to TCC mode?
See under "Windows Driver Model" for how to do that: http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2014
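For anyone looking for the actual how-to: on Windows you can flip a headless card between driver models with NVIDIA's nvidia-smi tool from an elevated command prompt (the device index below is just an example, list yours first, and a reboot is needed for the change to take effect):

    nvidia-smi -L            (list GPUs and their indices)
    nvidia-smi -i 1 -dm 1    (set device 1 to TCC; 0 switches it back to WDDM)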
Hi, same here. Could you put one Titan in TCC mode (the one without a display attached)?
Thanks
Ok! I'll do it tonight and post. :)
Did one test again, as requested, with the display connected to the GTX 980 and the Titan X (Pascal) set to TCC mode.
After a loooong reboot and driver installation:
OptiX on and only the Titan X: 2 min 11 sec
No difference at all.
Viewport set to Iray, OptiX on and only the Titan X: 1 min 51 sec
Set it back to WDDM so I can read the temperatures and boost speeds with GPU-Z.
With my 3x 1080 Tis I hit 90% convergence with GPU + OptiX Prime at 29.5 seconds.
Not bad. :)
Is this with the viewport set to Iray or OpenGL? Just asking, because loading the scene to the GPU takes about 20 sec on my rig.
The Titan is at 90% after 26 sec (6 sec pure rendering time), but the remaining 10% takes about 2 min 5 sec. That's without the GPU load.
For Titan, do you mean it is at 90% convergence after 26 seconds, or 90% of the progress (the yellow bar) to 90% convergence? I could try a 99% convergence...
The scene defaults to the standard Rendering Convergence Ratio of 95%.
So most of the timings are basically "render till done", i.e. until convergence reaches 95% or the yellow progress bar reaches 100% (which should be the same).
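For anyone wondering why that last stretch drags: here's a toy model in Python (purely illustrative, not how Iray actually works) where each pixel needs a different number of passes, so the slowest pixels dominate once you push the convergence target up:

    import random

    random.seed(1)
    # Toy assumption: passes needed per pixel vary a lot (think glossy interiors).
    needed = [random.expovariate(1 / 200) for _ in range(10_000)]

    def passes_to(target):
        passes = 0
        while sum(n <= passes for n in needed) / len(needed) < target:
            passes += 1
        return passes

    print("passes to 95% converged:", passes_to(0.95))  # roughly 600
    print("passes to 99% converged:", passes_to(0.99))  # roughly 900+, a long tail

In this toy, pushing from 95% to 99% costs about half again as many passes, which lines up with the "remaining 10% takes 2 minutes" observation above.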
I was expecting 3x 1080 TI to be around...40-50 seconds.
Yep, that's about right once you factor in the GPU load time.
What's the time at the default 95% render setting saved within the scene?
It's not a Tesla or Quadro, but it let me set it. I'm running Windows 10 x64. But Daz Studio 4.9.166 wouldn't load with the non-connected card in TCC mode. It just froze on the splash screen (Pascal system).
For those big rigs, this bench seems a bit "outdated", or too light to show the benefit of additional cards.
It'd be interesting to see a heavier render test scene, for example with long (complex) hair (very time consuming), a proper set with lights, and a heavy outfit. The render size is also too small.
The scene's loading time into the GPU is too long compared to the render time (20-30 s of loading versus 30 s+ of rendering).
Problem: we don't have many free (runtime) options to build such a bench...
52 seconds with the GPU load. I think that's the bottleneck on this test right now.
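That tracks with simple math. A quick back-of-envelope in Python (the load and single-GPU render times here are rough assumptions based on this thread, with idealized linear GPU scaling):

    LOAD = 20.0          # assumed fixed seconds to load the scene onto the GPUs
    RENDER_1GPU = 90.0   # assumed pure render time on a single card

    for gpus in (1, 2, 3, 4):
        total = LOAD + RENDER_1GPU / gpus
        print(f"{gpus} GPU(s): ~{total:.0f} s")  # 110, 65, 50, 42 s

Going from 3 to 4 cards only buys about 8 seconds here, because the fixed load time dominates on a scene this light.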
Thank you, guys and girls, for posting benchmarks. I have an old Dell XPS 435mt (i7-920) that is essentially stock. I was able to put a GT 730 4GB in it, and that was slow but worked for Iray. I put a GTX 1060 6GB in it a year later and that was much better; it actually made DAZ Studio fun again. The only problem was that the hardware around it couldn't keep up: I only had 6GB of RAM in the XPS, plus the original hard drive and only PCIe 2.0. So in between renders it would take 3-10 minutes for the memory buffers to clear. FRUSTRATING!!!

It was either upgrade the XPS rig, which would have fixed the memory-buffer purging, or build a new Ryzen 5. I went with a Ryzen 5 1600. It will take a few more days before I can build it, but I also sold the GTX 1060 and bought a ZOTAC GTX 1070 Mini. So here are some benchmarks for the EVGA 1060 6GB and the ZOTAC 1070 Mini 8GB on the XPS 435mt's i7-920. Oh, and I know the CPU just bottlenecks the GPU, so I didn't even bench that.
EVGA ACX 2.0 GTX 1060 6GB - $249
4 minutes 52.74 seconds, OptiX on
ZOTAC GeForce GTX 1070 Mini, ZT-P10700G-10M, 8GB - $329
3 minutes 30.10 seconds, OptiX on
Pretty much matches other benchmarks with more expensive CPUs as far as I can tell. I would say this old XPS i7-920 would have worked well with a RAM upgrade (to 24GB) and an HDD-to-SSD swap. Not nearly as good as a new rig, but good for weekend fun. The GTX 1060 6GB was awesome: with just an HDRI and 2-3 figures, a render would take 25-40 minutes, and that includes the 4-7 minutes it took my CPU to send all the info to the GPU. Not bad in my opinion. I will post Ryzen 5 benches with the 1070 Mini in a few days.
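Out of curiosity, the raw numbers from those two runs work out like this (simple arithmetic in Python on the times and prices above):

    t_1060 = 4 * 60 + 52.74   # 292.74 s
    t_1070 = 3 * 60 + 30.10   # 210.10 s
    print(f"speedup:     {t_1060 / t_1070:.2f}x")  # ~1.39x faster
    print(f"price ratio: {329 / 249:.2f}x")        # ~1.32x the cost

So the 1070 Mini came out slightly ahead on performance per dollar for this scene.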
Must be nice. I don't know what I am doing wrong, but I have yet to get an Iray render done, lol...
Malander "Must be nice I don't know what I am doing wrong, but I have yet to get an IRAY render done lol..."
What computer system are you using?
What do you have under the hood?
We need to know what hardware you are running before anyone can help you get Iray going (and remember that AMD and Intel GPUs will not run Iray; you would be stuck in CPU-only mode).
While stand-alone renders are fine for output, did you know there is a relatively unknown setting that greatly speeds up the real-time Iray viewport? I just discovered it by accident in another thread, and what a revelation.
The default for Edit -> Preferences -> Interface -> Display Optimization is None. If you have one of the new Pascal cards or a 900-series card, change it to Best.
You should now be able to do smooth, and fast viewport camera moves in Photoreal mode.
I also set Draw Settings -> Drawing -> Response Threshold to between 1000 and 3000, and Manipulation Resolution to 1/2.
Try it!
Just for fun: 4 years ago I began my Daz adventure with this video card, a GT 630 with 96 CUDA cores. Imagine rendering with that card. Scary slow, rofl!! I used it for 2 years, then upgraded to a GTX 670 in 2015.
@Leonides02
Just so I'm clear, did you first have Iray turned on in your viewport? Or was your viewport switched to the basic shader? These tests should be done with Iray active in the viewport first.
Also, are you on air or water?
-P
When I switched over to Daz Studio from Poser I was running an old GeForce GTX 260, but that was long before Iray was a thing.
My first Iray render was on a GTX 760, and the slowest card I've run it on was a GTX 560M.
Currently I'm running a GTX 1060 6GB in my desktop and a Quadro K5000M 4GB in my laptop for Iray rendering.
Iray was not on in the viewport. When I do that, I get my original time of 29.5 seconds.
I'm on air (no room for water cooling, these cards are huge), but my temperatures don't ever get higher than 84 degrees and my clock is always about 1840 MHz.
That is freaking insane.
Ok, I just upgraded my quad (4) 780 Tis to dual (2) 1080 Tis, and my render times have actually improved, lol (with the help of OptiX).
So with a dual setup, it appears that OptiX helps here.
Wow, I just ran the Octane benchmark on my two 1080 Tis and got a score of 402. That is equivalent to four (4) 980s! lol.
Can't wait to see when you get the 3rd 1080 Ti in there. Gotta put that water-cooling system to good use!
There's no mention of that requirement in SY's original post. Why would it make a difference on render time?
Agreed! I believe the test should be run without Iray set in the viewport, and with the auxiliary viewport turned off. The reason people switch to the Iray viewport first is that it improves their render time for this benchmark: it gives up to a 30-second improvement, because it pre-loads some of the scene onto the graphics card, so when you hit render you have already shaved off part of the measured time. I believe about half of the people in this thread run it one way and half the other.
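To put a number on that (using the 3x 1080 Ti timings reported earlier in this thread; illustrative only):

    cold = 52.0   # render started from an OpenGL viewport (scene load included)
    warm = 29.5   # render started with Iray already active in the viewport
    print(f"load time hidden by pre-loading: ~{cold - warm:.1f} s")  # ~22.5 s

which is right in that "up to 30 seconds" range.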
On a side note, has anyone seen or thought of using something like this NVIDIA GPU crypto-mining rig as a Daz 3D render farm? It has 8 P106-100 graphics cards without video out, enclosed in a small form factor. If usable, it would probably finish this benchmark in 5 seconds.
https://videocardz.com/newz/first-look-at-pascal-based-gpu-cryptocurrency-mining-station