Comments
Nope, I checked. Seems to me too that they've fixed some kind of a reflection/refraction bug but I don't see it in the logs, or at least I don't know what I'm looking for.
Wasn't the 'Dual Lobe Specular' new to the Beta? That might cause that difference.
No, that's in 4.10 too.
Used GTX 1080 vs new RTX 2060?
The 2060 has way fewer CUDA cores, but it has the tensor stuff.
Which is a better buy?
Keep in mind, Turing-era CUDA cores are significantly more powerful than Pascal ones. With that said, 2060s only go up to 6GB of VRAM, which imo is pretty low for modern workloads - especially for CG rendering.
Raytracing is an open-ended iterative process in which the key statistics for measuring performance are render time and convergence % - not iterations. Iteration counts are only a useful measure of performance when using the same version of rendering software on different pieces of hardware. DAZ_Rawb's benchmark is convergence-limited (unlike, for example, SickleYield's benchmark, which is iteration-count-limited due to its scene composition.) So judging by the comparison renders and stats you posted for your 1080ti on different versions of DS/Iray with that particular benchmark, the answer to your question seems pretty obvious to me: Iray version 2018.0.1 (the current version in the DS beta) has been changed to do a much more comprehensive job of faithfully raytracing an image (reaching a higher convergence % and with greater accuracy) in fewer, if computationally more complex, iterations than Iray version 2017.0.1 (the current version in DS official.) I.e. it's become better software.
Don't forget to keep in mind that all current benchmarks for Turing-based 20XX cards are only representative of how well they perform at raytracing without utilizing any of their dedicated raytracing hardware. Once APIs like OptiX get updated and rendering engines like Iray (hopefully) adopt full support, we will be able to see what the Turing architecture is truly capable of, performance-wise, for raytracing in comparison to its ancestors like Pascal. Until then, all the 20XX series benchmarks you see posted in places like this thread are gonna be under-reporting reality - and drastically so, if NVidia's new gigaray terminology is to be believed.
On a different note, for those still confused/curious about OptiX Prime's current role in DS/Iray as well as why it is slated to not be "getting RTX support" here's some information I've stumbled on recently that you may find interesting:
Meanwhile, here's the most in-depth technical explanation I've been able to find (so far) for what an RT Core in the Turing architecture actually is and does:
So in other words, OptiX Prime isn't "getting RTX support" because RT Cores are OptiX Prime - just in an exponentially higher performing dedicated hardware form.
I've rendered the test scene from SY:
CPU + GPU: 11m 12s to finish (from Texture Shaded preview, OptiX off, memory optimization)
GPU only: 32m 22s to finish (from Texture Shaded preview, OptiX ON, memory optimization)
From Iray preview with speed optimization the scene goes over my VRAM.
My system specs:
I guess I need an upgrade! xD
What seems strange to me are the other benchmarks posted here with a 1060 6GB that take 6.5 minutes, when the difference in CUDA cores should be only about 10%!
Is there a simple list of average benchmarks? It would be really useful for deciding which GPU to buy, since this thread is too long and contains too many comments other than benchmarks... something like "GPU only: 1050 in 15 min, 1060 in 10 min, 1070 in 7 min, 1080, 2060, 2070, 2080..." etc. would be nice.
I suppose you're right. The 4.11 version of the render looks a bit more converged compared to the 4.10 version even if it has fewer iterations. But I think we need to devise a new scene to understand how exactly it has changed. Two versions of it, where one is time-limited and one is iteration-limited, and then compare the results for both Turing and Pascal cards. That way we can nail down how fast the new generation of cards really is compared to the old.
True too. But when will it happen, though? Other big renderers are already following through. You can already combine two cards in Octane to get twice the memory (albeit with a bit of a performance hit). Not sure if the RT cores are speeding up their renders already, but they're definitely working on it. Nothing on Iray though, and I can't find any news about it. Their blog hasn't been updated since September 8th.
There has been a change relating to normal maps, which I believe is one reason 4.11 is slower.
The 1080 has more RAM; expensive paperweights don't render at all (which is what a card becomes when the scene drops to CPU).
It's down to whether you feel you need the extra RAM; personally, 8GB should be the minimum recommended for Studio. Sure, it can work, and work well, with less, but we're talking about a much better user experience imo.
6GB RAM vs 3GB RAM in the 1060s is a significant difference. DAZ's minimum spec for Iray is 4GB. The 1060 3GB is too small.
I think it might be something else, a bug I might have stumbled upon. I'll post in much more detail in the next post.
I've just created a new benchmark scene with two variations for testing. It's a simple Cornell box, used in typical raytracing tests to match in-render lighting with real-world lighting. One benchmark is set to a 1000-iteration maximum, the other to a two-minute time limit. The point is to have a scene without any textures in it, so that it loads and starts rendering as fast as possible, and completes fast enough that we don't waste too much time testing. You can find the two scenes attached down below.
I've rendered the scenes solely on my Aorus GTX 1080 Ti G11, here are the results:
And below you can see the different test result images (right-click and open in a new tab for greater detail, and swap between tabs to see the differences).
The first two images show the 1000-iteration test in Daz 4.10 (first image) versus Daz 4.11 (second image).
And the images below show the two-minute test in Daz 4.10 (first image) versus Daz 4.11 (second image).
There are some slight differences in the images themselves, especially where the lights are reflected directly off the surfaces of the objects, and the SSS-enabled objects show some slight variations in the way they scatter light too. Surprisingly though, 4.11 Beta is actually rendering faster in both cases on my 1080 Ti, as you can see, unlike the other scenes that we've tested before. I was wondering what was up and spent the afternoon investigating.
What I've found is this: there's some kind of bug, or an intended feature that I'm not understanding, when it comes to bump and normal maps. In the Scene tab of both scenes, there's a hidden sphere shaded with the Iray Leather Surface that comes with the default Daz resources. I made a slight modification, putting the normal map into the bump slot as well for testing purposes, so that it loads correctly for everyone. And I've had some interesting results when I render the scene with the Leather Sphere set to visible in DAZ Studio 4.11 Beta:
Those are some weird results, as you can see. The render slows down significantly only when both maps are set on the shader. At first I thought it was maybe because I was using the same map for both, so I went online, searched Google for a leather bump map, and set that in place of the normal one. I still got the same results. This is incredibly weird to me. And I do not experience the same kind of slowdown in 4.10. Unfortunately, right now it's really late and I can't start testing the scene with characters instead. But I would greatly appreciate it if some of you guys performed the same tests to see if you get the same results.
Well... I didn't know I would discover Daz when I first built my PC, and now I'm saving up, but a new GPU is not cheap... especially if these Daz sales keep eating my wallet xD
Anyway, I assure you that, even if it's very uncomfortable, you can achieve something with just 3GB! ^^
Aala, I decided to do some digging of my own through the Iray changelogs from the DS 4.11 beta thread, and although I'm no computer graphics expert (my CS degree was in HCI - not CGI, unfortunately) I'm fairly certain pretty much every graphical anomaly/difference in renderer performance observed here between the Iray renderer version found in DS 4.10 and that found in the current beta is just the result of documented bug fixes/feature upgrades. It's just a question of knowing what to look for. E.g.:
The changelog for Iray 2017.1 beta, build 296300.616 mentions the following:
Most of the benchmarking scenes discussed in this thread feature reflections and/or refractions resulting from interactions between formal light sources and backplates (light emissive textures like HDRI skydomes.) And although the changelog only mentions it as an issue for solid color backplates, it stands to reason that this would also be a noticeable issue with high intensity light sources originating as part of a backplate (as is the case with the very noticeably incorrectly reflected/refracted light source found in Outrider42’s benchmarking scene under DS 4.10) since such spots are technically just a solid color grouping of pixels.
Similarly, regarding what you’ve observed with DS 4.11 beta taking longer to render with that bump/normal map combination present, the changelog for Iray 2017.1.4, build 296300.6298 contains the following:
The implication being that previous Iray versions (including the one found in DS 4.10) were just ignoring the existence of traditional bump maps any time a normal map (which is just a 3d upgrade of a bump map) was also given for a particular object. The version of Iray found in the current DS beta isn’t slower at this than the version in DS 4.10. The Iray version in DS 4.10 is just finishing it faster because it isn’t actually doing the computational work.
Any chance that we can talk you into benching the 2990WX on its own? For comparison purposes, of course!
Yeah, it'll be slower, but this way we can compare the results to, say, a 16-core 1950X/2950X...
The older/shorter benchmark should be fine if you are game.
You carefully omitted part of the quote you referred to. Here it is in full.
"In conclusion, despite the Volta microarchitecture with Tensor Cores has only been recently released, it is possible to program Tensor Cores for HPC applications using three approaches and achieve considerable performance boost at the cost of decreased precision. We expect the programming interfaces for NVIDIA Tensor Cores to evolve and allow increased expressiveness and performance. In particular, we noted that the measured Tensor Core maximum performance is still 74% the theoretical peak, leaving room for improvement. While the precision loss due to mixed precision might be an obstacle for the uptake of NVIDIA Tensor Cores in HPC, we showed that it is possible to decrease it at the cost of increased computations. For all these reasons, it is very likely that HPC applications will strongly benefit from using of NVIDIA Tensor Cores. In the future we will focus on testing Tensor Cores in real-world HPC applications, such as Nek5000 [23], [24] or Fast Multipole Method-accelerated FFT [25]. "
Note those key words that you gloss over. In spite of the drawbacks, they STILL CONCLUDE HPC will benefit from Tensor. Not just a little; they specifically use the term "strongly". And that is the bottom line. You can pick through this with a fine-tooth comb as much as you want to; that does not change the final conclusion they made.
Besides, the whole point I was making was not just HPC. My point is that Tensor can be used for ANYTHING. Daz Studio and Iray are not HPC applications. You even state yourself that Tensor can be used for image processing... is a render not image processing?
---------------------------------------------------------------
Daz 4.11 has no drivers for Turing. None. Daz 4.11 released in July. The actual Iray plugin in 4.11, Iray 2018.0.1, build 302800.3495, further predates that. This update contains drivers for Volta, and mentions Volta by name, but not Turing. This strongly suggests that the Iray plugin we see in 4.11 is not yet optimized for Turing, and that includes the OptiX Prime acceleration. That is why you see weird times with OptiX enabled on Turing GPUs. OptiX Prime indeed may not add RT core support, but it could be optimized better, further increasing the performance over Pascal. But we do not know for certain. I assume it will, and that you will see further speed enhancements. Support for Tensor could change how Iray behaves. And if Iray gets the full OptiX plugin, things could get very interesting indeed. The best case would be for Iray to offer both options to its users. The full OptiX will be slower for anything before Turing, but with Turing, it should be quite fast.
I have said it before, but it is a very bad idea to compare render times based on convergence percentage. Regardless of what you set the convergence to, convergence can differ from run to run, giving different iteration counts. If you have different iteration counts, then almost certainly your render times will differ as well. Moreover, the way Iray calculates convergence can change between SDK plugin updates! So what Daz 4.10 with Iray 2017 deems 100% converged may in fact be different from what Daz 4.11 with Iray 2018 deems 100% converged. This is why my bench caps at 5000 iterations and runs all the way through to that 5000 number. Any other benchmark you make NEEDS TO HAVE AN ITERATION CAP in order to properly compare times. This is a limitation of the SY bench: it does not always run to the 5000 cap, it can end much sooner. This can skew results. Daz_RAWB's bench does not take this into consideration, either. In the last post, Daz 4.11 ran over 1,000 fewer iterations than 4.10. That basically invalidates that benchmark. A better solution would be to cap the iteration count at perhaps 1500 or less and then compare the render times.
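To illustrate why the cap matters (the numbers below are made up, not anyone's actual results), runs are only comparable once they cover the same number of iterations on the same Iray version; then seconds per iteration is a fair metric:

```python
def seconds_per_iteration(total_seconds: float, iterations: int) -> float:
    """Normalize a capped benchmark run to seconds per iteration."""
    return total_seconds / iterations

# Two runs of the SAME scene on the SAME Iray version, different cards
# (hypothetical numbers for illustration only):
card_a = seconds_per_iteration(total_seconds=1500.0, iterations=5000)
card_b = seconds_per_iteration(total_seconds=900.0, iterations=5000)
print(f"Card B is {card_a / card_b:.2f}x faster per iteration.")
```

Across different Iray versions even this breaks down, since the per-iteration workload itself can change - which is exactly the problem with comparing 4.10 and 4.11 numbers.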
Very true, I started with less. My first Iray card was an NVIDIA 640 with 2GB RAM and 384 CUDA cores, on a 2010 ASUS CM5571 with a Core 2 Duo E5584 and 6GB of 1666 RAM. You are starting off better than I was.
When I upgraded to a 1070 Ti it made a world of difference. Once you get a better card, your render times will drop significantly.
Eheh, the time will come when we'll render in real time at a much higher quality! :D
For now, I'd be happy to see the Batch Renderer on sale xD
Seems like it, but this looks like a simple bug fix that doesn't affect performance (or might even be improving it).
Okay, this is what I don't understand: why would having both the bump and normal maps active produce such a big slowdown, while having only one of them is so much faster? It has got to be a bug. Have you tested the scene with your Titan RTX? It would be interesting to see the results.
Why are you using both?
A bump map is just the old basic version, and normal maps are the improved newer version.
Bump Map / Normal Maps
Bump Maps are the most basic form of this, as they are only one channel of information (greyscale). This is sort of like a height map (darker being deeper, brighter being higher). It heavily relies on the existing normal information, and combines with it to make the result look bumpier than it really is. It's an older concept, limited in what it can do, and as a result is mostly just good for very basic and minor types of surface bumps.
Normal maps are a fancier form of bump mapping. They use the red, green, and blue channels of the image to control the direction of each pixel's normal. They still rely on the original vertex normal information, but let you define/fake a surface more accurately than a simpler bump map can.
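For anyone curious how the two relate under the hood, here's a minimal sketch (Python/numpy; encoding conventions and axis signs vary between engines, so treat the details as illustrative assumptions) of deriving the normal map that a greyscale bump/height map implies:

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Derive a tangent-space normal map from a greyscale height (bump) map.

    height: 2D array in [0, 1]. Returns an (H, W, 3) array in [0, 1],
    encoded the usual way (0.5, 0.5, 1.0 == a flat, unperturbed surface).
    """
    # Finite-difference slopes of the height field along x and y.
    dzdx = np.gradient(height, axis=1) * strength
    dzdy = np.gradient(height, axis=0) * strength
    # Per-pixel normal is (-dz/dx, -dz/dy, 1), normalized to unit length.
    n = np.dstack((-dzdx, -dzdy, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Remap from [-1, 1] to [0, 1] for storage as an ordinary RGB image.
    return n * 0.5 + 0.5
```

This is also why a bump map is "one channel" and a normal map is three: the single height channel only encodes the slopes implicitly, while the normal map stores the resulting direction outright.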
I would have noticed in 4.10 and before if bump/normal didn't stack, so I'd rule out that.
Yeah, I don't see this leading to a big performance difference either. However, it does lead to a visual difference in the final render, which imo is an equally valid part of what you brought up concerning rendering variances between DS 4.10 and 4.11 beta.
A bump map is a collection of one-dimensional offsets applied to the shading normals at points across the surfaces of an object. A normal map is a collection of three-dimensional offsets applied to the shading normals at points across the surfaces of an object. Having both active at once on an object in a scene (especially when that object happens to be a sphere) means introducing at least two extra calculations to the process of shading each and every point on that object's surface. And those sorts of micro-calculations add up.
I would if I could, but it's impossible to render ANYTHING in DS 4.10 with a Turing card due to the lack of Volta support. :(
I am planning on doing a full gauntlet of the 1000 iterations version and adding that to my earlier post of collected benchmark stats in case people find it useful. You can never really have too many benchmarks imo.
Who said anything about them not stacking? The Iray change log in question (repeated below for sake of convenience) states:
Weighted averaging is obviously a much more mathematically complex process than simply stacking (summing) things together. And we aren't even talking about bump/normal map interactions in general - just in cases where both are used on a sphere (since that was the specific use case in Aala's benchmarking scene where a seemingly mysterious performance hit was observed.)
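Iray's actual implementation isn't public, so purely as a loose illustration (the function names and weights below are made up), here's the difference between stacking two per-pixel normal perturbations and blending them with a weighted average:

```python
import numpy as np

def combine_stacked(n_bump: np.ndarray, n_normal: np.ndarray) -> np.ndarray:
    """'Stacking': sum the two per-pixel normals, then renormalize."""
    combined = n_bump + n_normal
    return combined / np.linalg.norm(combined, axis=-1, keepdims=True)

def combine_weighted(n_bump: np.ndarray, n_normal: np.ndarray,
                     w_bump: float = 0.5, w_normal: float = 0.5) -> np.ndarray:
    """Weighted average: extra per-pixel multiplies before renormalizing."""
    combined = w_bump * n_bump + w_normal * n_normal
    return combined / np.linalg.norm(combined, axis=-1, keepdims=True)
```

Per pixel the extra work is tiny, but repeated for every sample of every iteration it's exactly the kind of micro-cost that can surface in render times.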
I'm hugely confused, Ted. Half the test results on the same cards are reporting vastly different render times, and the other half are reporting times where they didn't let the render go to 100% convergence at 5000 iterations.
It's very hard to compare apples with apples in these cases. Everyone knows the last 5% of a render is the slowest, so unless you let it finish, the times are meaningless for the purpose of comparison.
For what it's worth, here are mine on an RTX 2080 (non-OC) on an older system... and I did conduct the test as it was meant to be conducted.
Oh I see. Apologies, I thought you meant a bump map might be completely ignored when a normal map is used.
This is not the thread for another acrimonious argument over hardware - please drop the subject if the posts cannot be kept civil.
What does convergence actually mean? When does a pixel count as converged?
Periodically during the rendering process, as Iray completes new iterations of a scene, it goes through and compares the newly computed value of each and every pixel to what it was the last time it checked, and if the two values are within a certain threshold of difference from each other (as defined by the "Rendering Quality" parameter found under Render Settings in Daz Studio) it flags that pixel as "converged". Iray then keeps a running tally of how many pixels in the current intermediately rendered image are flagged as "converged" versus the pixel count as a whole, and uses this ratio ("Rendering Converged Ratio" in Render Settings) as one of the key parameters for determining when a render is complete. It's actually a very elegant way of solving what is technically an unsolvable problem (determining a point at which a theoretically endless process of iterative computation like raytracing can fairly be said to be "complete".)
So in other words,
Q: When does a pixel count as converged?
A: When the value of "Rendering Quality" says so.
What does this all actually mean? The default value of "Rendering Quality" in all versions of DS (afaik) is 10. And I've never come across anyone changing it. Increasing it (it has a max value of 10000...) leads to an increase in per-pixel rendering quality at the cost of a linear increase in rendering time (i.e. a render at RQ=10 taking ten minutes to "converge" will take twenty minutes to "converge" at RQ=20.)
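For the curious, here's a toy sketch of the bookkeeping described above (Python/numpy; Iray's internals aren't public, so the names and the exact threshold formula are assumptions for illustration only):

```python
import numpy as np

def converged_ratio(prev_frame: np.ndarray,
                    new_frame: np.ndarray,
                    rendering_quality: float = 10.0) -> float:
    """Fraction of pixels flagged 'converged' between two checks.

    Assumes the per-pixel threshold tightens in inverse proportion to
    Rendering Quality, which would produce the linear time scaling
    described above.
    """
    threshold = 0.01 / rendering_quality  # assumed relationship
    per_pixel_delta = np.abs(new_frame - prev_frame).max(axis=-1)
    return float((per_pixel_delta < threshold).mean())

# Rendering stops once this ratio reaches the "Rendering Converged
# Ratio" setting found under Render Settings.
```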
Otherwise there really isn't much more (in terms of concrete things) to say on the matter - other than to point out that convergence % can be used as a valid performance benchmark (as long as people stick to the same value for Iray's "Rendering Quality" parameter.)
What can I say - 3d rendering is a cryptic business...
ETA: clarity. Plus better info.