Great work @RayDAnt :D
5x would be a great improvement!! :D
This is good information to know, and I think it is fine to have a post about it here because it helps get the word out. However, as awesome as this is, the benchmark thread is for posting benchmarks and discussing them. Further discussion about RTX coming to Iray should either go to the GPU discussion thread or get its own thread.
You're right @outrider42 :D
But, in the end, I think this thread is not that helpful for benchmarks anymore!
Too much has changed since its opening, and the methodology is too disorganized to actually help us.
I think the best solution will be for RayDAnt to open that new benchmarking thread once these RTX features are implemented. But, most importantly, we need to filter out the valid tests and put them all in the same place, maybe with a nice graph that compares the performance of different GPUs!
I still don't think it would be a great idea to mix them up with CPUs, though. It's already confusing enough to account for multi-GPU systems, IMHO.
Daz Studio Public Beta 4.11
SY scene, stock scene as downloaded.
1x RTX 2080 Ti
Optix On: 44.8s
Optix Off: 51.8s
2x RTX 2080 Ti
Optix On: 26.1s
Optix Off: 29.9s
3x RTX 2080 Ti
Optix On: 20.1s
Optix Off: 22.1s
Public Beta 4.11
1x Quadro RTX 4000
Sickleyield's Starter Scene
OptiX On
Total Rendering Time: 1 minute 40.51 seconds
5000 iterations
FYI in case anyone's wondering, Daz Studio 4.11.0.366 Beta (the most recently released version) uses the exact same version of Iray as 4.11.0.335 did, meaning that pure rendering performance between the two versions is identical - i.e. benchmark results for '335 will also hold for '366.
ETA: And Nvidia driver 430.64 is within margin of error of 430.39 performance-wise too.
The issue isn't so much the specific benchmarking scene(s) being used (the OG Sickleyield scene is still my go-to benchmark any time something changes, regardless of its thoroughly antiquated content - because it's fast). The problem is how little info, other than just a Total Rendering Time statistic (plus an OptiX on/off note more recently), people have been in the habit of posting as their results. That's not their fault - the OP only ever specified TRT and graphics card model as things to report. And it's not Sickleyield's fault either - knowing exactly which stats need to be included for benchmark results to stay relevant years after they're recorded is something you can only learn through experience.
Unless a set of results includes the following bare minimum of surrounding platform info:
- Operating system version
- Daz Studio version
- Nvidia driver version
those results are useless once an additional version or two of any of these things comes and goes. For reference, since the start of this thread, Windows has gone through numerous major/minor upgrades (as well as version obsolescences), as has Daz Studio (4.10 final didn't come out until December of 2017 - this thread has been kicking around since early 2015), and - of course - Nvidia drivers have gone through hundreds if not thousands of tweaks/upgrades.
Still no one with a GTX 1660 or 1660 Ti?
They're a really good value.
PS: you probably already know this, but there's this thing called "Octanebench", that has basically every GPU benchmarked: https://www.cgdirector.com/octanebench-benchmark-results/
Are these results comparable in Iray?
Thanks for the answer :D
Don't they also have some RTX capabilities? Won't they at least improve a little bit?
It's not possible to ignore the price difference between a 375€ 2060 and a 220€ 1660!
And when do you think this support will come?
No, the 1660 and 1660ti have no RTX features at all. No ray tracing cores, no tensor cores. That is why they are cheaper.
We do not know when RTX support is coming to Daz. All we know is that it *is* coming. Nvidia announced that a new Iray will be shipping with RTX support in May. And well, May is half way over, so that could be any day now! However, the people at Daz must implement the new Iray into Daz Studio, and that takes a little time. It usually takes Daz about 1 or 2 months to release a new version of Daz Studio once they receive the new Iray.
So all I can do is guess that Iray RTX could be coming around July-August if Nvidia is on time.
Keep watching for any announcements that Nvidia has released a new Iray SDK. That is the big key. Daz cannot do anything until this happens. Once the new Iray is released, then we will know there is a finish line in sight.
I know it may be tough to jump from the 1660 to a 2060. I have been there. But I truly do believe that saving the money for RTX will prove to be worth doing once RTX support comes. If you are on the fence, just keep waiting, and keep saving while you wait. Once it does come, we can do all the testing and see once and for all if RTX is indeed worth the money. Until then we can only speculate. I personally believe that RTX will be worth the wait, but that is my opinion. Everything else that has adopted RTX has seen big gains.
I think we need to do something like that Octanebench but for Iray!
Honestly, I'd find it much more useful than a 42-page thread with random info on random versions xD
That's good, I can wait until August! Today my PC+ membership expires, so I'll save something up (I hope).
Maybe that's why they still haven't released Daz 4.11!
You surely know more about this than I do, but in April they said that ray tracing was coming to Pascal and GTX 16-series GPUs too: https://www.tomshardware.com/reviews/nvidia-pascal-ray_tracing-tested,6085.html
It's not full hardware ray tracing, but it does something, I guess!
Thank you, now I understand! :)
Intel i9-7900X @ default frequency
Asus Rog Rampage VI Extreme mobo
64GB DDR4 @ 3200MHz
Windows 10 Pro version 1809 build 17763.503
Nvidia driver version 430.64
Daz build: 4.11.366 Beta
All GPUs are running at default frequency
OptiX on:
trial 1: 105.819s.
OptiX off: 151.398s.
OptiX on: 116.702s.
OptiX off: 163.403s.
Optix on: 117.870s.
Optix off: 165.396s.
OptiX on: 30.022s
OptiX off: 42.054s
OptiX on: 40.847s
OptiX on: 62.189s
OptiX off: 76.030s
Optix on: 32.271s
Optix off: 39.157s
Optix on: 142.725s
Optix on: 134.941s
Take-home message: Using SickleYield's benchmark, the 2 Titan RTX's were about 2 seconds slower than the 4 Pascal cards. Using Outrider's benchmark, the 2 Titan RTX's were about 8 seconds faster than the 4 Pascal cards.
Interestingly, Spectra3DX's 2080 Tis are outperforming the Titan RTXs by a good margin. I'm curious how Spectra3DX's GPUs are clocked. Also, the NVLink bridge shows up tomorrow. I expect a slight increase in render times due to overhead (as demonstrated in V-Ray), but I haven't found anything definitive on whether memory pooling via NVLink works in Iray. When the bridge shows up tomorrow I'll try to push a 25+ GB scene and see what happens.
Memory pooling should absolutely work out of the box with your two Titan RTXs via NVLink as long as you go and enable TCC mode in your drivers (see this post/the posts around it for an in-depth discussion/step-by-step process on how to do that.) Memory pooling without TCC mode enabled (TCC mode is currently limited to Quadro and Titan cards) is where the big mystery lies.
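For reference, the driver-model switch itself is typically done with nvidia-smi from an elevated command prompt - something along the lines of nvidia-smi -i <GPU index> -dm 1 (on the driver versions I've seen, 1 = TCC and 0 = WDDM), followed by a reboot. The exact syntax can vary between driver releases, so check nvidia-smi's built-in help output first.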
Tell me, please: if there are two or more graphics cards, is the scene (8 GB) loaded into the memory of all of them (8/8/8/8), or only one (8/0/0/0)?
The full scene will be loaded onto each graphics card, so your 8/8/8/8 example would be correct.
If you are using NVLink, however, which supposedly can pool memory, then the two graphics cards will essentially combine their memory into a common block. HOWEVER, this will slow down memory performance, as cards will need to access memory across the NVLink for larger scenes. That's what the folks over at the OTOY forums have observed.
Of course, NVLink is NOT required for multi-GPU rendering in Daz Iray, and neither is SLI, for that matter. Just keep in mind that the card with the least memory will set the upper limit on the size of a scene. It is possible to mix and match cards for Iray rendering, as you can see in various results in this thread. Essentially, the more CUDA cores you can throw at a render, the faster the render will generally be.
This is incorrect. In a non memory-pooled setup, any one card lacking enough onboard memory to handle a scene will simply sit out the rendering while any remaining, capable devices finish the render. A lower capacity card will never interfere with a higher capacity card's ability to function.
Indeed. Lets say you have 3 cards. A 1080ti with 11GB, a 1060 with 6GB, and a 1050 with 2GB.
If you create a scene that takes about 5GB, then the 1050 will not run. But the 1060 will, and will combine its CUDA cores with the 1080ti to render faster than the 1080ti would alone.
If you create a scene that is about 9GB, then only the 1080ti will run, and the other two cards will have no benefit.
So while you can combine cards of different specs, it is recommended to use cards that are reasonably close in VRAM size if you plan on going the multi-GPU route. For example, a 1070 and a 1080 pair well since they both have 8GB of memory. Of course, people can do what they want; there are no rules set in stone about this.
Hi, everybody. Sorry for my English.
I put all the data in one table.
Using multiple graphics cards
Based on those results, I made a table with which you can quickly estimate which graphics card is the better buy and what performance to expect:
To use it, you need to:
1. Go to the table (link)
2. File - Make a copy
3. Edit the prices in the "price" column (right now it contains prices from my city)
4. Select the cards you are interested in from the drop-down lists
5. The table then shows the overall performance of the selected graphics cards (it lines up with the real tests) - a rough sketch of the math is below
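For anyone curious, here is a minimal sketch of the kind of estimate such a table can make. This is an assumption on my part: it treats multi-GPU Iray throughput as roughly additive, converting each card's single-card benchmark time into a rate and summing the rates; the card names and times are just examples pulled from results in this thread:

```python
# Hedged sketch - assumes multi-GPU Iray throughput is roughly additive.
# Times are example single-card results from this thread, in seconds.
bench_seconds = {
    "RTX 2080 Ti": 44.8,   # 1x 2080 Ti, OptiX on, SY scene (posted earlier in the thread)
    "GTX 1070 Ti": 147.0,  # ~2:27 single-card result (posted later in the thread)
}

def estimated_combined_time(cards):
    """Convert each card's time to a rate, sum the rates, invert back to a time."""
    total_rate = sum(1.0 / bench_seconds[card] for card in cards)
    return 1.0 / total_rate

# Example: a 2080 Ti + 1070 Ti pair lands around 34 seconds by this estimate.
print(round(estimated_combined_time(["RTX 2080 Ti", "GTX 1070 Ti"]), 1))
```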
I wasn't being entirely clear on this. In order for a particular card to participate in the render, the scene needs to be able to fit inside that card's memory.
As I've never personally mixed and matched cards with different VRAM sizes, I wasn't sure whether Daz would just bypass those cards if the scene couldn't fit in the smaller ones, or drop to CPU only. Since Daz loves to drop to CPU only at times, particularly when you are repeating renders on a scene that fit in previous passes, I was leaning towards a full CPU-only drop, where you would need to uncheck the boxes for the smaller GPUs to avoid CPU only. It's good to know that I won't need to do that if I ever mix cards with different RAM amounts, though.
Thank you for the clarification!
Good to have a real life usage case perspective on this. I'm not disagreeing with you at all on these points. Even mixing cards from different generations, even with the same VRAM amounts, may cause increased instability I would think.
That being said, we probably should get back on topic re: only posting render benchmarks, and move this discussion to the other thread if necessary, since the goal is to not overly clutter this thread. I do appreciate y'all's experience on this topic though!
https://www.daz3d.com/forums/discussion/321401/general-gpu-testing-discussion-from-benchmark-thread
Awesome effort. Although there are some key factors seemingly missing from your analysis which - if left uncontrolled for - will lead to statistically significant inaccuracies in your graphs:
1. Daz Studio version tested.
Unless all of the datapoints you are using from this thread came from EXACTLY the same version of Daz Studio (e.g. 4.11.0.236 vs. 4.11.0.366 vs. 4.10.0.123), there is a potential margin of error in people's results of anywhere from about 4 seconds across the various 4.11.0.XXX variants to as much as around 55 seconds if 4.10.0.123 is included as well. And since many of the performance differences you've tabulated between cards are within the 1-4 second range (especially on the higher end), this is a potential problem for making charts like this (it's one of the main reasons I haven't yet produced one myself.)
2. Daz Studio Total Rendering Time vs. GPU/CPU scene rendering time.
When Sickleyield first started this thread 4+ years ago, s/he decided to use Total Rendering Time as the base statistic for reporting relative rendering performance across devices/scenarios. Technically speaking, Total Rendering Time - as reported by Daz Studio during/after the completion of a render - is NOT just a measure of device rendering time. It is an overall measure of how long it takes Daz Studio to load/process the assets of a scene, initialize the Iray rendering plugin, transfer those assets to that plugin for rendering, wait for the plugin to finish the rendering process, and finally save the final render as output by the Iray plugin for review by the user. In other words, Total Rendering Time includes overhead unrelated to graphics card rendering performance of anywhere from a couple of seconds (as exemplified by my totally solid-state Titan RTX/i7-8700K rendering machine) to as much as a whole minute or longer (on some spinning-disk machines.)
4+ years ago this potentially minute+ offset between Device Rendering Time vs. Total Rendering Time wasn't much of an issue (at least for single card benchmarks) since most rendering devices needed tens of minutes to complete Sickleyield's scene. And having an extra 30 seconds give-or-take added to that most likely wasn't going to change anything to the extent of, say, rearranging the order of things in a graph of relative rendering performance between different GPU models. However, now that we are at the point where many cards are capable of finishing Sickleyield's scene in a couple of minutes or less and differences between many cards are within mere seconds of each other, this 30+ second margin-of-error is a huge problem. It's why most of the benchmarking numbers in this thread are currently (unfortunately) useless going forward. And the only way to remedy this is for people to report Device Rendering Statistics like this
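(For illustration only - the exact wording changes between Daz Studio/Iray versions, and the device name and iteration count here are placeholders - the relevant lines in the Daz Studio log look roughly like this:)

```
Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (TITAN RTX): 1800 iterations, 2.282s init, 570.178s render
```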
where 570.178 seconds is an actual measure of device rendering performance you can base calculations on, rather than Total Rendering Time like this:
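(Again approximate - this is the familiar figure reported in the render progress window and at the end of the log:)

```
Total Rendering Time: 9 minutes 51.42 seconds
```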
where 9 minutes 51.42 seconds isn't. Notice that 9 minutes 51.42 seconds is 591.42 seconds, or 21.242 seconds more than 570.178, meaning that this TRT statistic is off by 21.242 seconds. And this difference isn't going to scale with render time - i.e. the same scene rendered to 10 iterations in <10 seconds is still going to carry the same overhead of 21.242 seconds, making the card look roughly 3x less efficient at rendering than it really is.
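To lay out the same arithmetic in one place (a quick sketch using the numbers above; the 10-second figure is just the hypothetical short render from the previous sentence):

```python
# Numbers taken from the example above.
total_rendering_time = 9 * 60 + 51.42  # "Total Rendering Time: 9 minutes 51.42 seconds" = 591.42 s
device_render_time = 570.178           # device statistics render time, in seconds

overhead = total_rendering_time - device_render_time
print(overhead)  # ~21.242 s of scene loading / plugin init / image saving overhead

# The overhead stays roughly constant, so a hypothetical ~10 s device render on
# the same machine would still report a TRT of ~31 s - about 3x the real figure.
short_render = 10.0
print((short_render + overhead) / short_render)  # ~3.12
```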
ETA:
There is a fix for this - start a new benchmarking thread where the OP's directions are to report Device Statistics rather than (or in addition to) Total Rendering Time as a person's benchmarking results. I actually already have just such a thread pretty much ready to post at a moment's notice. I'm just waiting for concrete news regarding when to expect RT Core support to debut, since that will dictate how complex a benchmarking scene needs to be to be meaningful on Turing-level hardware.
I can't see the images!
Copy the images from the "Iray Starter Scene" sheet of the table.
Heya, where did you get the results for a 1070 Ti?
Because I get 2:27-2:20 with it; 3:00 looks strange :0 That's a 30+% difference.
It makes me doubt the remaining results =\
OptiX ON, GPU only (1x 1070 Ti)
2:27 if you open the scene for the first time
2:20 +/- if you render the same scene a second time
OptiX ON, CPU + GPU (2x 2687W + 1x 1070 Ti)
2:08
Win 8.1 Pro
DAZ 4.11 Pro Public Build
430.64 Game Ready Driver
Z9PE D8 WS, 2687W X2, 128GB RAM
ZOTAC GTX 1070 TI AMP! Extreme
That is, my system is quite old, so I do not think the problem is the drivers or anything else.
Or are only reference models of the video cards used in the results? If so, sorry.