Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • outrider42 said:

    You could maybe wait just for the 4080 to release since it is just days away. I'm not saying to buy one, but rather its release might help just a little bit. New cards have to compete with used cards on eBay, too. I don't think the 4080 will be received very well, but it could still shift prices a little by simply being there. Retailers probably have to make room for these, so that means something has to go. There could be some better sales on the 3060 as a result.

    Not a bad idea, but I won't be holding my breath on that, looking at the prices retailers have listed for the 4080, which approach the 4090's (yea, price gouging all the way down).  I have no doubt they'll sell them all, too.

  • So my 3060 arrived today (Gigabyte GeForce RTX 3060 12GB GAMING OC V2).  I loaded a test scene that's kind-of on the edge of the 8GB memory limit and let it run for each card:

    Time to complete 1500 iterations:

    2070 2482.564s
    3060 1631.518s

    So the 3060 is around 1.52 times faster for Iray than the (vanilla) 2070 was.  The extra 4GB will come in handy too.  Quite happy with this purchase!
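
    For the record, the speedup is just the ratio of the two render times above, e.g. in Python:

        t_2070 = 2482.564   # seconds to 1500 iterations, vanilla 2070
        t_3060 = 1631.518   # seconds to 1500 iterations, RTX 3060 12GB
        print(round(t_2070 / t_3060, 2))   # -> 1.52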

  • outrider42 Posts: 3,679
    edited November 2022
    Glad you are happy, your research pays off. And like I said, if you kept the 2070 and can run both at the same time you will see an even greater performance boost. Iray scales very well with multiple cards, so you could be looking at a nearly 2.5x increase in scenes where you can run both cards.
    Post edited by outrider42 on
  • I have a 3060 and a 1070 Ti. How do I make them work together?

  • beidok55 said:

    I have a 3060 and a 1070 Ti. How do I make them work together?

    Just plug them both in; only one of them needs to be connected to a monitor.  The software will detect both cards, and you'll be able to select them both in render settings.  They don't share memory, btw, so the memory limit will be the lower of the two cards.

  • Pickle Renderer said:

    beidok55 said:

    I have a 3060 and a 1070 Ti. How do I make them work together?

    Just plug them both in; only one of them needs to be connected to a monitor.  The software will detect both cards, and you'll be able to select them both in render settings.  They don't share memory, btw, so the memory limit will be the lower of the two cards.

    Each card is managed separately: both will be used if the scene fits on both; if it is too big for the smaller card but not the larger, the smaller card is dropped while the larger continues; and if it is too big for both, both are dropped and the render falls back to the CPU or stops, depending on settings.
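
    To put that fallback logic in concrete terms, here is a rough sketch of the behaviour described above (illustrative Python only, not Iray's actual code; the VRAM figures are placeholders):

        def pick_render_devices(scene_vram_gb, gpus, cpu_fallback=True):
            # Each card is evaluated on its own: a GPU stays in the render
            # only if the whole scene fits in that GPU's own VRAM.
            usable = [g["name"] for g in gpus if g["vram_gb"] >= scene_vram_gb]
            if usable:
                return usable
            # Both dropped: fall back to CPU or stop, depending on settings.
            return ["CPU"] if cpu_fallback else []

        gpus = [{"name": "RTX 3060", "vram_gb": 12}, {"name": "GTX 1070 Ti", "vram_gb": 8}]
        print(pick_render_devices(6, gpus))    # ['RTX 3060', 'GTX 1070 Ti']
        print(pick_render_devices(10, gpus))   # ['RTX 3060'] - the 1070 Ti is dropped
        print(pick_render_devices(14, gpus))   # ['CPU'] - too big for both cards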

  • outrider42 Posts: 3,679
    beidok55 said:

    I have a 3060 and a 1070 Ti. How do I make them work together?

    Bear in mind your PC needs to be able to handle the extra power and heat of running 2 GPUs. Plus your motherboard needs to have a 2nd PCIe slot for a GPU; not all do. Physical space may become an issue as well. If your PC is a prebuilt, odds are it was built with the one GPU it has in mind. As long as your PC can handle it, adding a 2nd GPU is super easy.

    So you will want to do some research on what you have first.
  • Yes, you need a good power supply to handle it.  There are power supply calculators online (they're vaguely accurate, just giving you a ballpark for what you need).  One of the cards will block the airflow for the other, so they're likely to run at a lower clock in general (blower-style cards are better for airflow), but two will still be better than one, just not precisely a + b.  IDK how server farms do thermal management.  There are some crazy setups out there.
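
    For what it's worth, the kind of ballpark those calculators give works out roughly like this (a sketch only; the wattages are approximate board-power figures, not measurements):

        # Crude PSU sizing: sum the component board power, then add headroom
        # for transient spikes and to keep the PSU near its efficiency sweet spot.
        gpu_watts = [170, 180]      # e.g. RTX 3060 + GTX 1070 Ti (approximate)
        cpu_watts = 125             # placeholder CPU package power
        rest_of_system = 75         # drives, fans, RAM, motherboard (rough guess)
        headroom = 1.3              # ~30% margin
        recommended = (sum(gpu_watts) + cpu_watts + rest_of_system) * headroom
        print(f"Recommended PSU: ~{recommended:.0f} W")   # ~715 W with these numbers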

  • outrider42 Posts: 3,679

    I've never had any real problems with thermals with 2 cards, and I have been using 2 cards for a long time. I put the smaller card on top, which gives the other more space. The card on top might run warmer, but I haven't had it thermal throttle. I use an aggressive fan curve to keep them cool enough. My EVGA 1080ti used almost 250W and so did the MSI 1080ti. The EVGA had the weaker cooler, it was only a Black model. It would hit 84C when gaming if I used it, but only 74C when rendering Iray. That is plenty good enough to stay in spec and avoid much throttling. 

    Right now I have the EVGA 3060 Black on top, with the monster Founder's 3090 under it. The 3060 performs very similar to the EVGA 1080ti, which makes sense given they are both Black models. The cooler is designed for a specific thermal spec. Since the 3060 is a 2 slot card, it leaves plenty of space between them. If you get much bigger that might start pushing it, but as long as the fans are doing their job and the case is getting air flow, it can be fine.

    When using the Black cards by themselves, they still hit a similar temp.

    As for the card on the bottom, it has never been an issue at all. My MSI was running in the low 60s, and sometimes even 50s. The 3090 does the same.

    I will say that after installing my 3090 I just left my case side panel off. It didn't make a huge difference, maybe a couple of degrees. It doesn't affect performance. At least not normally. Maybe on long renders it might let one of the cards stay just a step higher, but probably not enough to really alter render speed.

  • Thanks everyone.
    daz 4.16.0.3
    3060 + 1070 Ti = 2 minutes 33.53 seconds
    3060 = 3 minutes 21.60 seconds
    1070 Ti = 9 minutes 7.73 seconds

     

    2022-11-16 11:31:31.570 Total Rendering Time: 2 minutes 33.53 seconds

    2022-11-16 11:31:34.010 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (NVIDIA GeForce GTX 1070 Ti): 489 iterations, 2.493s init, 148.462s render

    2022-11-16 11:31:34.010 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3060): 1311 iterations, 2.581s init, 148.400s render

     

    2022-11-14 23:33:13.084 Total Rendering Time: 3 minutes 21.60 seconds

    2022-11-14 23:33:19.262 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3060): 1800 iterations, 4.064s init, 195.246s render

     

    2022-11-12 01:51:27.382 Total Rendering Time: 9 minutes 7.73 seconds

    2022-11-12 01:54:25.800 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce GTX 1070 Ti): 1800 iterations, 2.060s init, 543.199s render
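
    For anyone comparing contributions per card, those per-device log lines can be parsed directly; a small sketch (the regex is a guess at the log format shown above, adjust as needed):

        import re

        log_lines = [
            "CUDA device 1 (NVIDIA GeForce GTX 1070 Ti): 489 iterations, 2.493s init, 148.462s render",
            "CUDA device 0 (NVIDIA GeForce RTX 3060): 1311 iterations, 2.581s init, 148.400s render",
        ]
        pattern = re.compile(r"\((.+?)\):\s*(\d+) iterations,.*?([\d.]+)s render")
        for line in log_lines:
            name, iters, secs = pattern.search(line).groups()
            print(f"{name}: {int(iters) / float(secs):.2f} iterations/s")
        # In this run the 3060 delivered roughly 73% of the combined 1800 iterations.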

  • Can't wait for 4080 numbers here. Too bad the 4000 series still isn't officially supported yet.

  • outrider42 Posts: 3,679

    For what it's worth, Puget has benchmarked the 4080 with other, more popular rendering engines and software.

    In both RTX and pure CUDA modes the 4080 is between 40% and 50% faster than a 3090ti in Vray. How well this translates to Iray is yet to be seen. Given that the 4090 isn't hitting anywhere near where it should be in Iray, the 4080 might even struggle to beat a 3090ti. It would be nice to see a more taxing scene done with the 4090, 4080, and 3080-series cards to see if the scaling is different from our little benchmark. But nobody benchmarks Iray besides us.

    I must admit though it is funny to see the 4080 more than tripling the 2080ti. I suppose one could argue the 2080ti also cost $1200, but I still don't think that gives the 4080 a pass. I would just take the extra $400 and buy a 4090. I know that is not an easy thing to say, but if you are ready to drop $1200, you may as well do the extra $400 and get a better card with 8GB more VRAM. The 4080 Founder's Edition uses the EXACT same cooler as the 4090, so it is not any smaller at all! Many 3rd party cards are also using a similar cooler on the 4080 as they did with the 4090, so many of these are still huge cards. So you are not saving any space dropping to a 4080. I suppose the monster cooler will be pleasantly overkill considering it uses 100 Watts less, so there's that.  But I do not see any purpose in the 4080 at its launch price point. If the 4080 had been $900 or even maybe $1000 it might be alright. Even those prices are vastly marked up over the 3080's $700. The 4080 is just too cut down compared to the 4090, but still costs over $1000. Maybe it is different in some regions.

    However, it is ultimately up to each user to make their own decisions. Maybe it does make sense for some people. As long as they understand what they are getting, which is what we try to do here in this thread, that is fine. But if it was me I would be firmly in the 4090 or bust camp. I am a sucker for VRAM, though.

  • skyeshots Posts: 148
    edited November 2022

    outrider42 said:

    I must admit though it is funny to see the 4080 more than tripling the 2080ti. I suppose one could argue the 2080ti also cost $1200, but I still don't think that gives the 4080 a pass. I would just take the extra $400 and buy a 4090. I know that is not an easy thing to say, but if you are ready to drop $1200, you may as well do the extra $400 and get a better card with 8GB more VRAM. 

    True, but the 4080 will do well for most other uses. Following your logic though, if you are already spending $1200, just drop the extra $8,000 and get the 'fully armed and operational' 48GB A6000-ADA. 

    Post edited by skyeshots on
  • outrider42 Posts: 3,679

    skyeshots said:

    outrider42 said:

    I must admit though it is funny to see the 4080 more than tripling the 2080ti. I suppose one could argue the 2080ti also cost $1200, but I still don't think that gives the 4080 a pass. I would just take the extra $400 and buy a 4090. I know that is not an easy thing to say, but if you are ready to drop $1200, you may as well do the extra $400 and get a better card with 8GB more VRAM. 

    True, but the 4080 will do well for most other uses. Following your logic though, if you are already spending $1200, just drop the extra $8,000 and get the 'fully armed and operational' 48GB A6000-ADA. 

    Really?

  • skyeshots said:

    outrider42 said:

    I must admit though it is funny to see the 4080 more than tripling the 2080ti. I suppose one could argue the 2080ti also cost $1200, but I still don't think that gives the 4080 a pass. I would just take the extra $400 and buy a 4090. I know that is not an easy thing to say, but if you are ready to drop $1200, you may as well do the extra $400 and get a better card with 8GB more VRAM. 

    True, but the 4080 will do well for most other uses. Following your logic though, if you are already spending $1200, just drop the extra $8,000 and get the 'fully armed and operational' 48GB A6000-ADA. 

    I see why NVIDIA does this.  People all over the place talk about paying $1200 but apart from 0 day (0 microsecond actually) sales you won't be able to get one for $1200.  It'll be a partner card for $100 - $400 more than MSRP.  MSRP is basically fiction.

  • skyeshots Posts: 148
    edited November 2022

    outrider42 said:

    skyeshots said:

    outrider42 said:

    I must admit though it is funny to see the 4080 more than tripling the 2080ti. I suppose one could argue the 2080ti also cost $1200, but I still don't think that gives the 4080 a pass. I would just take the extra $400 and buy a 4090. I know that is not an easy thing to say, but if you are ready to drop $1200, you may as well do the extra $400 and get a better card with 8GB more VRAM. 

    True, but the 4080 will do well for most other uses. Following your logic though, if you are already spending $1200, just drop the extra $8,000 and get the 'fully armed and operational' 48GB A6000-ADA. 

    Really?

    Depends if you can obtain a bulk discount. Budget $9K per A6000-ADA GPU. Big price jump from the previous pro-gen 48 GB cards. 18,176 CUDA cores @ 300 watts each. I have a couple on order, though it will be a while yet before they arrive for testing. My guess is a slightly lower score here than the 4090.

    Post edited by skyeshots on
  • outrider42 Posts: 3,679
    edited November 2022

    The 4090 offers better price-to-performance than the 4080. This gap gets even wider with heavy ray tracing, and, well, Iray tends to like ray tracing a lot. So the "cost per iteration" of the 4080 might very well be far worse than the 4090's. Usually the cards down the stack offer better performance per dollar compared to the top halo product. The 4080 just isn't a good deal.
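
    As a back-of-the-envelope way to put numbers on that "cost per iteration" idea once measured Iray rates exist (the iteration rates below are placeholders for illustration only, not benchmark results):

        # Placeholder iteration rates - swap in measured values from this thread's benchmark scene.
        cards = {
            "RTX 4080": {"price_usd": 1200, "iters_per_s": 10.0},   # placeholder rate
            "RTX 4090": {"price_usd": 1600, "iters_per_s": 14.0},   # placeholder rate
        }
        for name, c in cards.items():
            print(f"{name}: {c['price_usd'] / c['iters_per_s']:.0f} USD per (iteration/s)")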

    There are partner 4080 models that are basically the same price as the MSRP 4090.

    Buying an A series (former Quadro) doesn't make any sense unless someone desperately wants that VRAM. The 4090 offers more VRAM and is faster than the 4080. The A series may not be any faster than the 4090; if it is, it will be close. So the upsell is just not the same. The price-to-performance ratio goes off the rails with the A series as well. So no, the logic doesn't apply. If a shopper wants the best price-to-performance card...the 4090 is actually the best option with the new generation, and even holds up against some older models.

    However I doubt the 4080 is going to stay at $1200 for too long, and by that I mean the price might drop. While some may think this is unlikely, first of all, 4080s have not sold out yet. Even the scalpers are not buying them all up! My Microcenter has 6 different 4080s in stock, including 17 of just one ASUS model, and 19 of one Zotac. They are not selling too well. These cards should have sold out. In normal times, it usually takes a month or two after launch for stock to normalize. This is abnormal.

    Things have changed since Ampere launched. Miners caused a near unlimited demand, because miners had no limit on how many GPUs they could use. While Iray can use multiple GPUs, it is extremely difficult to use more than a few for any normal user. Even power users have a hard time going beyond that. Most users here have no more than 2 cards. Most only have one. Now that mining has died off like it has, there is a finite limit on the demand for any GPU. Without miners around, Nvidia will be forced to compete whether they want to or not.

    AMD is set to release 2 new cards soon. While they will not beat the 4090, they will beat the 4080 with ease, and they will do so at a cheaper price. So without miners around, Nvidia has to compete with this, as AMD has more momentum coming into this generation than they have had in years, perhaps ever. Then you add the growing negativity towards Nvidia over various issues, you have a recipe for a perfect storm. AMD might even get close to the 4090 in performance, though we don't know for sure until reviews. They might match up in gaming raster performance, but I doubt they will ray trace well at all. So AMD will probably still tank in any game that has ray tracing. But a lot of gamers still don't care much for ray tracing, either. Either way, AMD's most expensive card is going to be $1000. That's $200 cheaper than the 4080 and a whopping $600 cheaper than the 4090. It also uses 95 Watts less power than the 4090.

    All things considered, I don't think anyone needs to worry about prices going up on the 4080. Scalpers don't even want this card, so it is going to be sitting on shelves collecting dust. If it does that, eventually it will have to go on sale.

    The only question mark is supply. Rumors say the 4080 has much more limited stock than the 4090 did. If that is true, that makes the fact that it has not sold out even worse. But it could mean that the cards will eventually sell out simply because there are fewer of them. Even so, I still do not believe the 4080 is going to see any hikes from retailers unless something crazy happens (you never know in this crazy world.)

    The 4090 sells simply because you have certain people who always want the fastest card. Plus its MSRP is only $100 higher than the 3090, compared to the 4080 costing $500 more than the 3080, and the 4080 has a much larger gap between it and the 4090. We are seeing the result of that in sales.

    Post edited by outrider42 on
  • skyeshots Posts: 148
    edited November 2022

    Shipping dates for the A6000-ADA are late December or January. Some sites show it over $9K. I think IT Creations will be selling it at around $7,500. This is a huge increase from the previous generation of the same card series, since the current A6000 cards can be acquired new for under $5K.

    Post edited by skyeshots on
  • skyeshots Posts: 148
    edited November 2022

    outrider42 said:

    The 4090 offers better price-to-performance than the 4080. This gap gets even wider with heavy ray tracing, and, well, Iray tends to like ray tracing a lot. So the "cost per iteration" of the 4080 might very well be far worse than the 4090's. Usually the cards down the stack offer better performance per dollar compared to the top halo product. The 4080 just isn't a good deal.

    There are partner 4080 models that are basically the same price as the MSRP 4090.

    Buying an A series (former Quadro) doesn't make any sense unless someone desperately wants that VRAM. The 4090 offers more VRAM and is faster than the 4080. The A series may not be any faster than the 4090; if it is, it will be close. So the upsell is just not the same. The price-to-performance ratio goes off the rails with the A series as well. So no, the logic doesn't apply. If a shopper wants the best price-to-performance card...the 4090 is actually the best option with the new generation, and even holds up against some older models.

    More to the point, I am personally not a fan of the current pricing schemes at all. Just looking at Ampere, the 3090 vs the A6000 was about 3x the price. Now with Lovelace, we are seeing 5x the cost to move from the 4090 to the RTX-A6000-ADA. For the RTX-A6000-ADA, $6K would be more reasonable. It would also be more logical to see the 4080 priced at $1000 or less, just based on the card specs.

    When we start talking about performance, you are probably right on most points, but this is in theory only. We have no benchmark data here for Iray on the RTX-A6000-ADA or the 4080 (yet). The 4080 might surprise us. Until we have those data points, this is simply not an empirical comparison. This is a unique benchmark that depends heavily on the specific driver and Iray optimization. And, if the 4000 series follows Ampere's trend, we should see the 4060 or 4060 Ti take the lead in terms of cost and performance per dollar, assuming the 4060 does not bear yet another overly inflated price tag.

    And the last point here today, which I will make overtly: for those upgrading to Genesis 9, the 8K models demand much more VRAM per model. 24 GB works well for a dozen or more Genesis 8 characters. With Genesis 9, on the other hand, just one model can crush systems with 6 GB of VRAM. On my machine, loading up a half dozen 8K-textured characters in a moderately complex scene went over the 24 GB line today.

    Since the 24 GB cards simply cannot pull that type of load, I guess what we can do is benchmark a 30 GB scene via CPU rendering, then make a price-vs-performance comparison against a pro-level 48 GB card.

    Post edited by skyeshots on
  • outrider42 Posts: 3,679

    I do not see how the 4080 can possibly overcome its CUDA and RT core cuts compared to the 4090. It would have to be clocked to the moon, and AD102 would have to be terribly inefficient in design. There are some video games where the 4080 is closer to the 4090, but most are bottlenecked by the CPU or game engine. Then there are more taxing games where the difference can be 40%. Additionally, the games that relied most on ray tracing were where some of the largest gaps were observed.

    So unless Iray itself is bottlenecking the hardware somehow (and who knows, maybe it is, given how goofy recent versions of Iray perform), I fully expect to see a wide gap between the 4080 and 4090 in Iray. At least 30%, possibly much more. Some scenes might be able to squeeze them closer, but a dense mesh scene should demonstrate just how far apart they are.

    8K textures on Genesis 9 are up to the PAs. There are several G9 characters that have no 8K textures at all. Textures can also easily and painlessly be resized by users. I can resize an entire folder in roughly 5 to 10 seconds without opening Photoshop or any such editor. I just highlight them and right click, thanks to Windows PowerToys, which I have had installed since like XP. It really does just take seconds to perform the task; you need to have a ton of textures queued to slow it down. We also have Scene Optimizer in this store which can automate the process, too. Aside from that, if a user has multiple models with 8K textures I would argue that they don't really need the 8K textures on most of those models. To fit many characters into a camera shot, you need to zoom out a little. You might have one character close to the camera, but the others will not be. Those other characters don't need 8K. So I don't consider this an issue.
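
    If you would rather script the resizing than use PowerToys or Scene Optimizer, a minimal batch-resize sketch with Pillow (Pillow is an assumption here, and the folder path is a placeholder):

        from pathlib import Path
        from PIL import Image   # pip install Pillow

        src = Path(r"C:\path\to\texture\folder")   # placeholder path
        for f in src.glob("*.jpg"):
            with Image.open(f) as img:
                if max(img.size) > 4096:                    # only touch 8K-class maps
                    img.thumbnail((4096, 4096))             # shrinks in place, keeps aspect ratio
                    img.save(f.with_name(f.stem + "_4k.jpg"), quality=92)   # originals untouched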

    Of more interest is the mesh, which has more vertices than past Genesis models. Using high levels of subdivision can be what pushes a scene over. But once again the question must be asked if the scene really requires this extra mesh density if the actors are not close to the camera. You can probably use base resolution on them. I haven't used it in years, but Decimate is a product designed to decimate a mesh, and has been used by artists making background characters in scenes for a long time. The artist is also free to use past generation models for background characters.

    These things are not new to anyone who has to fight VRAM limitations. As someone who once owned GPUs with 2GB and 4GB of VRAM way back when, I am very familiar with resizing textures and other tricks to optimize a scene. That is just a fact of life for rendering. I will admit the 3090 has allowed me to get a bit lazier about those old habits. But it isn't infallible and can run out of capacity.

    Also, just a note, if anybody has played with AI art generators for long they quickly realize they eat more VRAM than Iray by far. A simple 512 by 512 image can suck down 6GB of data. I don't think many Daz users would want an image that small. Even a 3090 struggles to draw an image of 2000 pixels in size, running out of VRAM. So in this situation, unless better optimizations are made AI users may want more VRAM in their hardware.

    If the user still needs more than 24GB and has no alternative, then there is no real choice, they will want a Quadro class card with more VRAM. Even if the Ada 6000 is faster than a 4090, I don't expect it to be much faster, not a tier faster. Nvidia has to balance several factors. It may have 2000 more cores, but that is only 11% more. The Quadro cards cannot be drawing so much power, and the clock speeds have to come down with it, that will equalize things a lot, which has long been the case with these cards going back years. The 3090 is faster than its A6000 counterpart as well, in spite of having fewer active cores. At the very least, they are pretty close in performance here. Again if you are getting a Quadro level card for Iray, the primary reason is for VRAM rather than raw performance.

    VRAM is not something that can be calculated by a benchmark. You either have enough or you do not. Every user has to be able to understand how much VRAM they might use when buying any card, and decide for themselves if the price is worth paying for more VRAM.

    (I know they are not called Quadro anymore, I just use the term for simplicity.) The new A6000 certainly has a horrible name. They can simply use an extra letter instead of adding the nonsense at the end to identify the product. Call it the AD6000. Or even ADA6000. Ada is a great name with only 3 letters, they should use it.

  • outrider42 said:

    Before 4.20 most of the render times were really about the same with a few exceptions. Each version might add something new, so having a recent version can be helpful.

    4.16 is the last version before they jumped to 4.20. So I would use that one. 

    You can also use the beta and directly compare them. So if you have 4.16 on the main branch, you can still have 4.21 in beta form. I think this is the way to go if possible. The betas update more frequently, so you can keep the beta branch up to date for all the new stuff that comes along, while still keeping 4.16 for its pure rendering speed.

    As for the quality of the render...I just don't see a difference in most pics. The only time I see any difference is when ghost lights are involved, and that is a whole thread in itself. If there are any changes to Iray otherwise, they are so subtle most people are not going to spot them without a magnifying glass. And to be perfectly honest, I thought my renders in 4.16 looked better, too.

    But you don't have to take anyone's word for it. Since you have 4.16, you have the ability to directly make this comparison for yourself and judge it yourself. You can render the same scene in 4.16 and then 4.21 and see how they compare in both time and quality.

    Regardless of the Lovelace optimizations (ETA Dec/Jan), we are still seeing close to a 30% performance drop across the rest of the GPU lineup when moving from Daz 4.16 to 4.20/4.21. I understand that there are tons of new features being added to Daz, but this has been a pretty big slowdown.

    My most recent GPU+CPU bench was done with the current public beta and showed a +12% gain with the CPU enabled vs the GPU alone. This is substantial considering how slow these CPUs are at 3.5 GHz. Looking back at the change logs between 4.16 and 4.21, dozens of image layering features have been added along the way. The change logs for the public beta note that layer loading for layered images is now multi-threaded in the beta.

    I am going to theorize that the new layering features in 4.20/4.21 are part of our 30% performance drop and Daz is working to offset them. In my case, the rig has 112 threads (2 CPU x 28 cores x hyperthreaded), so the result of multi-threading is highly exaggerated. Just something interesting to note in terms of combined GPU/CPU features and something to test for in the upcoming releases.

  • Still no 4080 benchmarks from Daz3D users, quite an unpopular SKU.

  • outrider42 Posts: 3,679

    That's specific to layered images, though. Daz started using LIE on a regular basis with their G9 models. If you load the base G9, the eye textures are all layered and stored in a temp file. If you look up the original textures, the iris is by itself on a transparent PNG, and this gets combined with a sclera on load.

    So that is why LIE has been getting so many updates. Daz probably recognized that LIE needed an overhaul in order to help G9 work better for users.
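
    As a rough illustration of that kind of layered-image composite (not Daz's actual LIE code; the file names are placeholders), layering a transparent iris PNG over a sclera map looks like this with Pillow:

        from PIL import Image   # pip install Pillow

        sclera = Image.open("sclera_base.jpg").convert("RGBA")   # placeholder file names
        iris = Image.open("iris_only.png").convert("RGBA")       # iris on a transparent PNG
        iris = iris.resize(sclera.size)
        combined = Image.alpha_composite(sclera, iris)           # iris layered over the sclera
        combined.convert("RGB").save("eye_combined.jpg")         # flattened result, roughly what LIE caches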

    But this is not part of Iray, and should not be impacting render performance. These only help the performance of the Daz app itself. Even if this does somehow affect Iray, it would only impact performance of renders that have LIE in the scene. Our little benchmark scene has no LIE. You can look back at the past, when Daz altered normal map "efficiency" as they called it; this only impacted scenes that used normal maps, which our benchmark does have. If you did not use any normal maps in your scene, there was no difference from 4.12 to 4.14. Using more normal maps also made the difference bigger. It so happens that normal maps are very common. But since the speed loss of 4.20+ can be observed regardless of LIE being used, I have to say LIE is not the answer.

    If CPU+GPU render speed is improving, I don't think that bodes well to be honest. This is such a parallel task that CPU struggles to match GPU. If CPUs are getting better while GPUs are still bogged down, I wonder what is happening to cause this. The Iray Dev Blog mentions faster material and environment updates in the newest version. It doesn't give context, but if this is for photoreal mode it could be a factor. Bug fixes are also mentioned, who knows, we might be dealing with some sort of bug.

    I am curious, since I have not downloaded every Daz Beta/build, exactly which version did this start happening? It was between 4.16 and 4.20, but weren't some later versions of 4.16 in the beta channel where reports of Iray being slower started?

    I tried testing thin film, as it gets mentioned in the log. I set every surface to thin film on in both versions and rendered. Then I set all surfaces to thin film off and rendered in both versions. The performance difference did not change. I tried it a few times, though it has been a while. There are some MDL changes, but these appear to be just name changes.

  • RayDAnt Posts: 1,134
    edited November 2022

    outrider42 said:

    I am curious, since I have not downloaded every Daz Beta/build, exactly which version did this start happening? It was between 4.16 and 4.20, but weren't some later versions of 4.16 in the beta channel where reports of Iray being slower started?

    For what it's worth, you can see a breakdown of which Daz Studio beta/release versions correspond with which Iray versions in the opening post of this thread here.

    Post edited by RayDAnt on
  • outrider42 Posts: 3,679

    It is also compounded by how the first version of 4.16 had a regression in Iray, which is very weird. When you browse the changelogs, one changelog offers nothing, as it is "sanitized" for some reason (Daz's own word). Another changelog shows mismatched numbers that make it very hard to follow what is going on. It says "Public Build 4.16.1.2" but at the bottom of the note it says "Incremented build number to 4.15.1.97".

    So they called it 4.16 when it wasn't? What was going on?

  • Richard Haseltine Posts: 100,800
    edited November 2022

    outrider42 said:

    It is also compounded by how the first version of 4.16 had a regression in Iray, which is very weird. When you browse the changelogs, one changelog offers nothing, as it is "sanitized" for some reason (Daz's own word). Another changelog shows mismatched numbers that make it very hard to follow what is going on. It says "Public Build 4.16.1.2" but at the bottom of the note it says "Incremented build number to 4.15.1.97".

    So they called it 4.16 when it wasn't? What was going on?

    The version change comes after the other changes, and the Public Build at the top refers to the version that was released, the one those entries are leading up to. Is that the source of the confusion? I think there have also been occasions when the private track has switched from working on the path to the next release to instead work on an interim update for the current release, which may lead to confusing entries if reading through them in sequence.

    Sanitising may be removing background stuff, but it may also be removing secret stuff (Iray, dForce, and dForce hair were all - as I recall - kept out of the change log until there was an announcement, though they must have been worked on before that).

    Post edited by Richard Haseltine on
  • Richard Haseltine said:

    but it may also be removing secret stuff (Iray, dForce, and dForce hair were all - as I recall - kept out of the change log until there was an announcement, though they must have been worked on before that).

    Didn't know that. Adds potential for unknown surprises. Thanks for sharing.

  • skyeshots Posts: 148
    edited November 2022

    outrider42 said:

    But this is not part of Iray, and should not be impacting render performance. These only help the performance of the Daz app itself. Even if this does somehow affect Iray, it would only impact performance of renders that have LIE in the scene. Our little benchmark scene has no LIE. You can look back at the past, when Daz altered normal map "efficiency" as they called it; this only impacted scenes that used normal maps, which our benchmark does have. If you did not use any normal maps in your scene, there was no difference from 4.12 to 4.14. Using more normal maps also made the difference bigger. It so happens that normal maps are very common. But since the speed loss of 4.20+ can be observed regardless of LIE being used, I have to say LIE is not the answer.

    If CPU+GPU render speed is improving, I don't think that bodes well to be honest. This is such a parallel task that CPU struggles to match GPU. If CPUs are getting better while GPUs are still bogged down, I wonder what is happening to cause this. The Iray Dev Blog mentions faster material and environment updates in the newest version. It doesn't give context, but if this is for photoreal mode it could be a factor. Bug fixes are also mentioned, who knows, we might be dealing with some sort of bug.

    I am curious, since I have not downloaded every Daz Beta/build, exactly which version did this start happening? It was between 4.16 and 4.20, but weren't some later versions of 4.16 in the beta channel where reports of Iray being slower started?

    It was interesting to see that CPUs pushed up the iteration rate in my recent measurements on the dual Ice Lake Xeon rig in the current beta (4.21.1.13). Looking at this a bit closer, I saw that in Daz the CPU Load Limit was set to 56. This made sense to me as there are 56 cores total, but that is not really what is going on here. Apparently, this setting refers to threads, not cores.

    [Screenshot: Daz CPU Load Limit scheduler setting]

    Come to find out, Daz has a serious attitude problem with multiple NUMA groups (multiple CPUs accessing different memory banks), so it has only been using one of the CPUs during render. There is some activity on the 2nd chip, no doubt, but scouring the web I found that this is a known issue with Daz (vs Blender, which happily uses all CPUs and cores), and people with multiple CPU sockets tend to run into this in Daz.

    [Screenshot: 2 CPUs - 1 NUMA node active, 2nd inactive]

    In troubleshooting, I did some BIOS reconfiguring and even went so far as to upgrade Windows 11 Pro to Workstation on the rig, all to no avail. The task manager affinity is set to allow full access to both CPUs and all cores for Daz. One of the suggested workarounds is running a separate Daz instance on the 2nd CPU. If anyone has a better solution, I am officially interested. This is what the task manager looks like with affinity checked.

    [Screenshot: Task Manager showing only 1 CPU rendering]
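
    One way to script that "separate instance on the 2nd CPU" workaround from Windows (a sketch only: the install path is the default location and may differ, start /NODE needs a NUMA-aware Windows edition, and -instanceName lets a second copy of Daz Studio run alongside the first):

        import subprocess

        # Launch a second Daz Studio instance bound to NUMA node 1 via cmd's
        # built-in "start /NODE" switch. Path and instance name are assumptions.
        daz_exe = r"C:\Program Files\DAZ 3D\DAZStudio4\DAZStudio.exe"
        subprocess.run(f'start "" /NODE 1 "{daz_exe}" -instanceName node1',
                       shell=True, check=True)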

    Going back to the CPU benefit/penalty we were talking about; I ran a battery of 20 benchmarks today in the beta and filtered the data for the max iteration rate achieved per hardware combination. A conventional/modern CPU has something like 8 cores/16 threads, which is an important ‘normal rig’ baseline. To simulate this, I set the CPU scheduler in Daz to limit the rendering pipeline to 16 threads maximum. I then ran four benchmarks for the hardware combinations below to find a max iteration rate for each setup:

    I also went back to the retail build of 4.21 and ran the same benchmark to find that there was no benefit whatsoever to having the CPU enabled on the current release version of Daz, even with all 26 cores firing.

    To summarize: 1) Layered images have been part of Daz since 2007, and the added gain from the CPU may have something to do with the upcoming threading enhancements (active layering or not). 2) Daz should work on the NUMA issues. 3) It would be great if Daz would allow us mere common folks access to prior Daz builds.

    On the business side of all of this, if I worked for Daz and I knew that I was implementing an update that would hit everyone with a 30% (or more) performance hit on their favorite render engine, I might build in a slider that would allow users to shift hardware/rendering performance values between quality and performance based on the (possibly hidden or confidential) differences between 4.16 and 4.21. 

    Post edited by skyeshots on
  • Kitsumo Posts: 1,215

    I finally broke down and bought a 3060 and I'm happy with the results so far.

    System Configuration
    System/Motherboard: ASUS TUF GAMING B550-PLUS (WI-FI)
    CPU: AMD Ryzen 7 5700G with Radeon Graphics (16) @ 4.673GHz
    GPU: NVIDIA GeForce GTX 1080 Ti
    GPU: NVIDIA GeForce RTX 3060 Lite Hash Rate
    System Memory: 32 GB
    Operating System: Kubuntu 22.04.1 LTS x86_64
    Nvidia Drivers Version: 515.65.01
    Daz Studio Version: 4.20.0.17

    Benchmark Results

    1080ti
    Total Rendering Time: 8 minutes 7.11 seconds
    (NVIDIA GeForce GTX 1080 Ti): 1800 iterations, 1.049s init, 483.990s render

    3060
    Total Rendering Time: 4 minutes 15.27 seconds
    (NVIDIA GeForce RTX 3060): 1800 iterations, 1.139s init, 251.991s render

    Both GPUs
    Total Rendering Time: 2 minutes 53.79 seconds
    (NVIDIA GeForce RTX 3060):    1201 iterations, 1.154s init, 169.876s render
    (NVIDIA GeForce GTX 1080 Ti): 599 iterations, 1.125s init, 170.080s render

    CPU + GPUs
    Total Rendering Time: 2 minutes 59.80 seconds
    (NVIDIA GeForce RTX 3060):    1124 iterations, 2.089s init, 173.710s render
    (NVIDIA GeForce GTX 1080 Ti): 583 iterations, 2.001s init, 173.492s render
    CPU:    93 iterations, 1.620s init, 174.564s render

    So it looks like my scores are a little lower than average, but I think it's understandable with all the translating between DS and Linux. The 5700G being PCI-E gen 3 probably doesn't help. It would be interesting to see how a 5800X3D handles things.

  • outrider42 Posts: 3,679

    DS already has quality settings in place. So users have these quality sliders at their disposal. They also have the built in denoiser as well. I really wish the denoiser had more robust settings to fine tune it. There should be options for various types of architecture, humans, and animals. Right now our denoiser is relatively dumb and handles all content equally. It also seems to have limits on color gradients, especially dark colors.

    The goal behind denoising is to reduce render times. You can render to much lower convergence and let the denoiser fill in the unfinished pixels, resulting in dramatically faster renders. We just need a better denoiser.

    Actually, the denoiser should have its own dedicated menu section, one that is highly visible and easy to access for users. The Iray settings should also be easier to dig into as well.

    The denoiser in general feels like a missed opportunity to me. This thing can be so amazing, but it needs training to perform at its best. Users should be able to train the denoiser with their own renders. After a number of pictures, the denoiser could use this data to better reconstruct images from noise.

    Nvidia has long intended ray tracing to be used with denoising. They take this 2 pronged approach to make ray tracing faster. So I find it strange that Iray's denoiser has seen relatively few changes and enhancements since its introduction. The Intel denoiser actually works better in most cases.

    Still, this doesn't really solve our problem of Iray's mysterious slowdown; it can only mask it at best. One thing that could be helpful is if we could examine other software that uses Iray. After all, if this is only happening in Daz Studio, then that would point to some kind of inefficiency with how the plugin is being handled. The trouble is that the forums for other software that use Iray are quite lifeless. If this is seen in other software, then we know the issue is squarely with Iray.
