Daz Studio Iray - Rendering Hardware Benchmarking

Comments

  • prixat Posts: 1,588

    There's also a new PSU standard, proposed by Intel, (starting at around 1000W) which includes that 600W rail for those future GPUs.

  • chrislb Posts: 100

    RayDAnt said:

    skyeshots said:

    The Dual Ice Lake is a tough build. For the Intel fanboys, Puget advocates sticking with a single workstation-class Xeon. Having just walked the Dual Xeon tightrope, I completely agree with them. Tons of setup and anxiety. 

    Anyone want to guess the RTX 3090 Ti Daz rendering performance?

    20.5 iterations per second (give or take 0.25)

    Some of my 3090 benchmark results are already around 22 iterations per second 

  • chrislb Posts: 100

    outrider42 said:

    Cooling and power will be absolutely critical to any 3090ti build. The card is just insane. I personally do not recommend it, certainly not at its price. The original 3090 was more logical because the 24gb of VRAM makes it stand out; the 3090ti does not offer any additional VRAM. The price and especially power are real concerns here. You have to have a very good power supply to even consider a 3090ti. Maybe 450 Watts doesn't sound so bad by itself, but power spikes could be an issue that lesser power supplies can't handle. These cards can go beyond 500 Watts. We joked about small space heaters with the 3090, but at 500 Watts the 3090ti draws exactly what small space heaters in the US do. So people need to understand that even if they water cool, that heat is still getting displaced into the room. If you go to Walmart and buy a small space heater, it will create the same amount of heat that a 3090ti will eject into your room. The 3090ti is very capable of heating your room.

    I think many people forget two things. One is that many RTX 3000 series cards have a max power draw during Daz rendering that is well below what they draw during gaming. For example, even if I set my power limit to 1000 watts, I rarely see 420 watts on my 3090s with Daz. However, in games or game benchmarks with ray traced lighting, I can see 600 watts of power draw per card.

    The second is that many AIB RTX 3090 gaming cards that have been on the market for a while already have 450-500+ watt power limits. The MSI Suprim X shipped with a 450 watt BIOS. The 3090 Strix shipped with a 480 watt power limit. EVGA started shipping the 3090 FTW3 Ultra with a 500 watt power limit a few months after it was released; that 500 watt BIOS replaced the 450 watt BIOS the card originally shipped with. The 3090 Kingpin shipped with a 520 watt BIOS and has an optional 1000 watt BIOS. Some of the Galax 3090 cards shipped with 500 and/or 1,000 watt BIOSes.

    The new power connector standard eliminates the risk of substandard wiring that many power supplies have on their PCIE 8 pin (or 6+2 pin) connectors. Many lower priced power supplies have wiring that isn't rated for much more than the 150W power draw per 8 pin connector. However, better quality power supplies come with wiring that can handle 250-300+ watts per PCIE 8 pin connector. With the new connector standard, there is no question whether the wiring can handle the power draw of the card.
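The per-connector figures above are just volts times amps across the 12 V pins. A quick back-of-the-envelope sketch; the ~8 A per-pin rating for better terminals is an assumed ballpark, not a spec value:

```python
# Rough per-connector power math for PCIe 8-pin (6+2) cables.
# An 8-pin PCIe connector carries 12 V on three conductors.
VOLTS, HOT_PINS = 12, 3

def connector_watts(amps_per_pin):
    return VOLTS * HOT_PINS * amps_per_pin

# The PCIe spec's 150 W rating works out to a modest current per pin:
spec_amps = 150 / (VOLTS * HOT_PINS)        # about 4.2 A per 12 V pin

# Better terminals/wiring are often rated around 8 A per pin (assumed
# figure), which is where 250-300+ W per connector comes from:
print(f"spec: {spec_amps:.1f} A/pin, quality cable: {connector_watts(8)} W")
```

The point being made in the post: the spec rating leaves lots of margin on good cables, and very little on cheap ones.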

  • outrider42 Posts: 3,679

    It is true that rendering in Iray generally uses less power than a highly demanding game. The reason for this is how video games work: they are constantly shifting data around, in and out of VRAM. Modern games only have a few seconds' worth of data in their VRAM at any given time, and it is constantly getting refreshed. Iray operates completely differently: it loads a scene into VRAM exactly one time, and then processes the information on board the GPU from that memory. This is why we need enough VRAM to hold the scene, or the GPU render fails. Because Iray is not making all of these draw calls like a game does, it ends up using less power than a demanding game.

    However, rendering still uses quite a bit of power. After all, you said rarely 420 Watts, but 420 Watts is still a lot! Like I said, I have a 3090 now, plus a 3060. Before that I had two 1080tis. If I used one 1080ti, I never felt the heat. If I used both, sometimes I could feel the heat, as it was just a bit over 500 Watts. Using a 3090 by itself actually produces a little less heat than double 1080tis, but adding the 3060 pushed it back up, in spite of the 3060 being a very power-friendly card. It is totally noticeable in my room when I render a lot of stuff. I can also feel it when playing a demanding game, even though the 3060 is idle. Games that are not demanding don't even break a sweat. I have been playing some older games and the 3090 is barely used.

    I just ran a test render to double check. I have MSI Afterburner installed and I also have a Watt meter on my PC at the outlet. If I run the 3090 by itself, it will hit 330 Watts peak in Afterburner. My total system draw bounces around 400 to 480 Watts on the wall. My 3090 is a Founder's model, not one of the super over clocked ones. If you add a 100 Watts to that you will easily break 500 Watts of total power. Next gen 4090s (or whatever they are called) may use even more than just 100 Watts additional power.
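If you don't have a wall meter, the GPU's own telemetry gives you the board-power side of this. A sketch that summarizes samples logged with nvidia-smi; the command and flags are real, but the sample readings below are made up purely for illustration:

```python
# Summarize GPU power samples logged at 1-second intervals with:
#   nvidia-smi --query-gpu=power.draw --format=csv,noheader -l 1 > power.log
def parse_watts(lines):
    # Each line looks like "287.51 W"; strip the unit and convert.
    return [float(l.strip().rstrip(" W")) for l in lines if l.strip()]

sample = ["287.51 W", "331.02 W", "329.87 W", "312.40 W"]  # illustrative only
watts = parse_watts(sample)
print(f"peak {max(watts):.0f} W, average {sum(watts) / len(watts):.0f} W")
```

Peak versus average matters here: a card can average well under its limit while still spiking high enough to trip a marginal power supply.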

    You may have heard about the situation where a video game called "New World" was killing 3090s. This came down to several design factors in both the game and the 3090 that created a perfect storm where something could go wrong. This happened with some other GPUs, but the 3090 seemed to be the most impacted. EVGA actually went on record with their investigation and said that it was not the game's fault, it just happened that the game did things in a way that could trigger this (this is kind of humorous, honestly). EVGA blamed bad soldering, and it is surprising that EVGA would blame themselves. However, that doesn't make a lot of sense when multiple brands and even some AMD cards seemed to die playing this game. It seems like playing the game with an uncapped frame rate on a powerful GPU may lead to trouble. But we still do not know exactly why, because GPUs are supposed to be able to handle this workload regardless. Playing a game with an uncapped frame rate is pretty common on PC.

    Keep in mind that Iray by its nature is "uncapped", as it runs the GPU at 100% for the full duration. While Iray may not use as much energy overall as a game can, the fact remains that next generation cards are very likely going way up in power draw. And while some 3rd party cards may have gone a bit crazy with the 3090, the 4090 is set to be even crazier by default. Then you add the 3rd parties on top of that, who will always push harder. There may be 600 Watt cards released next gen, not just 450 Watts, or even 500. Even with Daz Studio, such a card is going to hit near 500 Watts by itself, and you will certainly go over 500 Watts when you factor in total system power.

    I would love to be totally wrong about that. But pretty much every leaker is suggesting that the top cards are going to use a lot of power, and they are stressing that they will use even more than the 3090ti does.

    The Kingpin card is a special case; it is designed purely for enthusiast overclockers, people who like to try breaking records with liquid nitrogen. They have always been outlandish cards with outlandish prices. Some other brands started picking up on this as well, and so they also started adding options for a 1000 Watt BIOS, because overclockers who make the news can be free publicity. But next gen, shipping such a BIOS will be necessary. That is the difference. Most people do not overclock their cards much, if at all, and so such BIOSes are total overkill.

  • rrward Posts: 556

    I manage my heat situation (used to run 3X1080ti+1060, then 2X2080ti+1070ti, now a single A5000) by moving my render box into the laundry room, the coldest room in the house, and running cables through the wall into my office (I also have a remote power switch for it). The excess heat keeps the laundry room warm and my office is spared the heat.

  • RayDAnt Posts: 1,134

    chrislb said:

    RayDAnt said:

    skyeshots said:

    The Dual Ice Lake is a tough build. For the Intel fanboys, Puget advocates sticking with a single workstation-class Xeon. Having just walked the Dual Xeon tightrope, I completely agree with them. Tons of setup and anxiety. 

    Anyone want to guess the RTX 3090 Ti Daz rendering performance?

    20.5 iterations per second (give or take 0.25)

    Some of my 3090 benchmark results are already around 22 iterations per second 

    This is my prediction for an air-cooled Founders Edition 3090 Ti running at stock settings.

  • outrider42 Posts: 3,679

    I did 19.5 iterations per second with my Founders 3090 at stock. A 10% gain would be 21.5, a 20% gain would be 23.4 iterations. I am leaning toward the high side, so around 23 iterations for a stock 3090 Ti, since Iray always gets more out of these cards than gaming does.
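The guesses above are just the measured stock 3090 rate scaled by an assumed generational uplift:

```python
# Scale the measured stock 3090 rate by assumed 3090 Ti uplifts.
base = 19.5                          # stock 3090, iterations per second
for uplift in (0.10, 0.20):
    predicted = base * (1 + uplift)
    print(f"+{uplift:.0%} -> {predicted:.2f} iterations/s")
```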

  • chrislb Posts: 100

    outrider42 said:

    It is true that rendering in Iray generally uses less power than a highly demanding game. The reason for this is how video games work: they are constantly shifting data around, in and out of VRAM. Modern games only have a few seconds' worth of data in their VRAM at any given time, and it is constantly getting refreshed. Iray operates completely differently: it loads a scene into VRAM exactly one time, and then processes the information on board the GPU from that memory. This is why we need enough VRAM to hold the scene, or the GPU render fails. Because Iray is not making all of these draw calls like a game does, it ends up using less power than a demanding game.

    However, rendering still uses quite a bit of power. After all, you said rarely 420 Watts, but 420 Watts is still a lot! Like I said, I have a 3090 now, plus a 3060. Before that I had two 1080tis. If I used one 1080ti, I never felt the heat. If I used both, sometimes I could feel the heat, as it was just a bit over 500 Watts. Using a 3090 by itself actually produces a little less heat than double 1080tis, but adding the 3060 pushed it back up, in spite of the 3060 being a very power-friendly card. It is totally noticeable in my room when I render a lot of stuff. I can also feel it when playing a demanding game, even though the 3060 is idle. Games that are not demanding don't even break a sweat. I have been playing some older games and the 3090 is barely used.

    I just ran a test render to double check. I have MSI Afterburner installed and I also have a Watt meter on my PC at the outlet. If I run the 3090 by itself, it will hit 330 Watts peak in Afterburner. My total system draw bounces around 400 to 480 Watts at the wall. My 3090 is a Founders model, not one of the super overclocked ones. If you add 100 Watts to that, you will easily break 500 Watts of total power. Next gen 4090s (or whatever they are called) may use even more than just 100 Watts of additional power.

    You may have heard about the situation where a video game called "New World" was killing 3090s. This came down to several design factors in both the game and the 3090 that created a perfect storm where something could go wrong. This happened with some other GPUs, but the 3090 seemed to be the most impacted. EVGA actually went on record with their investigation and said that it was not the game's fault, it just happened that the game did things in a way that could trigger this (this is kind of humorous, honestly). EVGA blamed bad soldering, and it is surprising that EVGA would blame themselves. However, that doesn't make a lot of sense when multiple brands and even some AMD cards seemed to die playing this game. It seems like playing the game with an uncapped frame rate on a powerful GPU may lead to trouble. But we still do not know exactly why, because GPUs are supposed to be able to handle this workload regardless. Playing a game with an uncapped frame rate is pretty common on PC.

    Keep in mind that Iray by its nature is "uncapped", as it runs the GPU at 100% for the full duration. While Iray may not use as much energy overall as a game can, the fact remains that next generation cards are very likely going way up in power draw. And while some 3rd party cards may have gone a bit crazy with the 3090, the 4090 is set to be even crazier by default. Then you add the 3rd parties on top of that, who will always push harder. There may be 600 Watt cards released next gen, not just 450 Watts, or even 500. Even with Daz Studio, such a card is going to hit near 500 Watts by itself, and you will certainly go over 500 Watts when you factor in total system power.

    I would love to be totally wrong about that. But pretty much every leaker is suggesting that the top cards are going to use a lot of power, and they are stressing that they will use even more than the 3090ti does.

    The Kingpin card is a special case; it is designed purely for enthusiast overclockers, people who like to try breaking records with liquid nitrogen. They have always been outlandish cards with outlandish prices. Some other brands started picking up on this as well, and so they also started adding options for a 1000 Watt BIOS, because overclockers who make the news can be free publicity. But next gen, shipping such a BIOS will be necessary. That is the difference. Most people do not overclock their cards much, if at all, and so such BIOSes are total overkill.

    420 watts is the peak power draw. It's the occasional spike; it's not a constant 420 watts. Depending on the settings, it's an average of 370-400 watts on my EVGA 3090s and 350-370 watts on my MSI 3090. I just tried my MSI 3090 (with a 450 watt BIOS) in a test render of a 4K resolution scene and it was averaging under 350 watts. If you run monitoring software during rendering, you can often see that it's the stock voltage limit that limits the power draw of 3090s.

     

    When you compare the 3090 to the 1080 ti, the rendering performance and efficiency per watt are vastly different. When I had a pair of 2080 Supers, their render times were about equal to one 3090, and the pair of 2080 Supers used more power than one 3090 for the same render speed. If I remember correctly, the 3090 is 4 to 5 times faster in rendering than a 1080 ti. Sure, the 3090 uses almost as much power as a pair of 1080 ti cards, but it's also performing the same as 4 or 5 1080 ti cards. So in the end, it's quite a bit more efficient.
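That efficiency argument can be made concrete. The 3090 figures below come from this thread (19.5 iterations/s at roughly 330 W under Iray); the 1080 ti rate is back-solved from the "4 to 5 times faster" estimate, so it is an assumption rather than a measurement:

```python
# Iterations per second per watt, using rough figures from this thread.
cards = {
    "1080 Ti": {"its": 4.3,  "watts": 220},   # assumed: ~19.5 / 4.5, low-200s W
    "3090":    {"its": 19.5, "watts": 330},   # stock FE under Iray (this thread)
}
for name, c in cards.items():
    eff = c["its"] / c["watts"]
    print(f"{name}: {eff:.4f} iterations/s per watt")
```

By these numbers the 3090 does roughly three times the work per watt, which is exactly the point: higher absolute draw, but far better efficiency.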

    There is a lot more to the New World game situation than most of the tech media covered. First, the EVGA 3090 FTW3 cards had design flaws in power delivery to begin with. Second, New World wasn't the first game to cause that issue with EVGA RTX 3090 FTW cards; many other DirectX 11 games were causing the same issue as New World. Third, EVGA tried to cover up the earlier issues by saying they were very rare and only happened to a limited number of users, yet the EVGA forums and other forums were filled with people having the same or similar issues with the EVGA 3090 FTW cards that people had with New World. Fourth, the other brands of cards actually had a very limited number of cards that had problems with New World compared to EVGA. The other cards that died when playing New World may have also failed in other games.

    This may be a bit technical and off topic for this thread, however the EVGA 3090 FTW3 (and some of their other FTW3 cards) were not a full custom PCB like the previous FTW cards. The 3090 FTW3 was a reference design with an extra power connector and a few extra power stages added. The original design was for two 8 pin connectors and one 6 pin connector. It was later updated to three 8 pin connectors. However, EVGA did not reprogram the chip on the PCB that controls power management, so the card was acting as if it still had two 8 pin connectors and one 6 pin connector. This left the power unbalanced, which can cause all sorts of issues. The card would draw 90 watts or more through the PCIe x16 slot when the spec says a max power draw of 75 watts through a PCIe x16 slot. In addition to that, the 3090 FTW3 cards would try to draw more power through the 8 pin PCIe connectors than the card was designed for, so some of the power stages were overloaded.

    I had an EVGA RTX 3090 FTW3 that had issues with DX11 games.  The fans would spin up to 100% speed, then the screen would go black.  After rebooting, the card would act normally again while gaming for five to 20 minutes before the same issue happened again.  This would repeat again and again.  If I continued to play, the card would die due to an electrical malfunction.  I had two more replacement 3090 FTW3 Ultra cards that did the same thing.  Similar to many other early 3090 FTW3 owners who had issues and multiple RMAs with no solution, EVGA had me return the third replacement card and issued me a full refund.

    EVGA later updated the power management chip firmware. After that, EVGA also updated the PCB design to what EVGA calls "revision 1.0"; the early cards were PCB revision 0.1. At this point, most of the 3090 FTW3 cards are stable with DirectX 11 games, including New World. However, there are still plenty of revision 0.1 PCB cards out there.

    There have been electrical engineers, and overclockers with an electrical engineering background, who diagnosed other brand/model cards that failed due to the New World game, and some were found to have deficiencies in their power management or power delivery design, or faulty components from the factory. Also, with newer AMD cards, the reference design of the RX 6000 series has worse power filtering than the RX 5000 series. For example, the RX 6900 XT has worse power filtering than the RX 5600 and RX 5700. That kind of shortcut can lead to unstable clock speeds and in some rare cases can damage components.

    Every new generation of GPUs has more transistors. The RTX 3090 GPU has over 28 billion transistors across all parts of the GPU, and over 10,000 CUDA cores. When you keep adding transistors and cores, the power requirements will go up, even with a smaller die size. The RTX 4000 series is rumored to double the number of CUDA cores over the RTX 3000 series. Yet it won't require double the power.

    I'm well aware of the intended use of the Kingpin and Hall of Fame cards. Using the Kingpin RTX 3090s that I have, I still hold one world record benchmark score and several top 5 and top 10 scores. However, it seems that most people who buy Kingpin 3090s lately never push them close to their limits. With this generation, the Kingpin 3090 cards were very reasonably priced; the MSRP was only $100-$200 more than the EVGA RTX 3090 FTW3 Ultra with the same style cooler. Because of the price, many gamers who couldn't get a new graphics card were entering the EVGA queue to get a 3090 Kingpin instead of paying scalper prices, waiting 9+ months, or camping outside of Best Buy all night to get a 3080 or 3090. The default power limit of the Kingpin 3090 isn't much higher than most higher end gaming focused RTX 3090 cards on the market. My point was that the Kingpin, and some versions of the Galax HoF RTX 3090, are widely used by people who aren't competitive overclockers. I've seen maybe two dozen people actually using Kingpin 3090s with any sort of effort to get top 10 benchmark and overclocking scores on the official leaderboards.

    I agree that power limits can and will go up on the highest end cards of the next generation, but I don't think we will see as many 600 watt cards as the current tech media/press is suggesting. The number of people who own computers with power supplies that can handle both a 600 watt card and a high end CPU is rather limited.

    To get back on topic, I ended up selling a bunch of the spare graphics cards I had to gamers who couldn't find an upgrade or replacement. I still have a few around, plus some newer cards, that I plan to run rendering benchmarks on in the near future.

  • chrislb Posts: 100
    edited April 2022

    Has anyone else had their render times increase and iterations per second decrease with the latest version of Daz?

     

  • PerttiA Posts: 10,024

    outrider42 said:

     

    This is coming in either Q3 or Q4 this year. After all, Ampere released in 2020, and will be 2 years old. The next gen is due, you don't need a crystal ball for that. I know that creates a tricky situation for those who are dying for a new GPU now because GPUs are finally starting to come down in price from this horrible market condition and are also in stock. So I can totally understand the difficulty in making a decision, plus for all we know the market could blow up again when the new stuff releases like it did last time. There is so much uncertainty. I do think GPUs will get cheaper in the short term. 

    If one needs a GPU, I would not wait for the next gen, due to price, availability, and the time it will again take before there is proper support/drivers for them.

    Did not regret getting the RTX 2070 Super just before Amperes were released. 

  • outrider42 Posts: 3,679

    The power draw does not have to go up with transistor counts. For generations the top cards from Nvidia were all around 250 Watts, regardless of the chip size or transistor count. They balanced their top GPU through a careful combination of chip size, transistor count, and clockspeeds to run around 250 Watts.

    What changed is competition from AMD. AMD has been pretty much MIA for a decade here. With AMD competing at the top again, Nvidia pushed performance as hard as they could by clocking the cards crazy high. Indeed Ampere is a big jump in efficiency over Turing, as the Samsung 8nm node allowed more transistors than Turing ever could. But it is still not as dense as TSMC 7nm, and to ensure they stayed on top, Nvidia had to push their cards harder than perhaps they planned to. A lot of leaks suggested Nvidia had very different plans for their Ampere lineup until they began to realize that AMD RDNA2 could be a contender. When you look at how sporadic and weird the Ampere product stack is, it really does look like these were not totally planned for. The 3080 was potentially going to be a very different product possibly based on GA103 instead of the 3090's GA102.

    With this frame of mind, it becomes easier to see how this led to GPUs dying. Perhaps EVGA had originally planned on using less power on their cards? But the spec changed after they already had a design in production. That would explain a lot of things.

    But you raise a troublesome point in noting the flaws of the different cards. Engineering a high end GPU that uses a lot of power requires a lot of design work. Some current gen cards already failed at this; can we really be confident that next gen cards, which demand even more power, will be properly engineered to accommodate those demands? At a certain point the heat causes major problems, and you have to expertly manage power... something EVGA failed at. Any engineer will say that "heat is the enemy". The biggest challenge is getting the heat out of the card as fast as possible. You not only have to deal with the GPU chip itself; all of the other components have heat tolerance levels that need to be observed. Even the heat being ejected must be managed; if it gets too hot, you can damage components on the I/O.

    The 3090 running at 330 Watts with Iray is still a big step above what the 1080ti did. I forget right now what one 1080ti ran at during Iray, but it was in the low 200s. So we are looking at around a 100 Watt increase here, even with Iray using less power than gaming. If the "4090" uses 100 more Watts than a 3090 in gaming, then it stands to reason it will do so with Iray as well. Overclocked models would go even higher, at which point you reach space heater territory. I am sure the 4090 will bring wild performance increases, and will offer more performance per Watt than Ampere. I would not be at all surprised if the 4090 reached over 50 iterations per second on our bench scene. So a lot of us would probably still accept that, even in a 450 Watt or more form. But that is still a 450 to 500 Watt card that takes a full 3 slots. And rumors suggest it could be even more. I would probably just use a single card at that point instead of trying to use my 3090 or 3060 with it.

    The silver lining is that the rumors may have been based on 5nm chips. If Nvidia has been building these but then switches to 4nm, that could bring the power down from what the rumors suggest. But still, I think it is clear the 3090ti is a test product to determine how high customers are willing to go.

  • Paper Tiger_493335 Posts: 11
    edited April 2022

    System Configuration
    System/Motherboard: ASUS TUF GAMING X570-PLUS (WI-FI)
    CPU: AMD Ryzen 7 3800X @ stock
    GPU: EVGA RTX 2080 @  stock
    System Memory: Corsair 32GB DDR4 @ 2133
    OS Drive: WD Blue 512GB NVME SSD
    Asset Drive: WD Blue 4tb HDD
    Power Supply: Corsair RM850x
    Operating System: Windows 10 21H2 19044.1586
    Nvidia Drivers Version: 511.65
    Daz Studio Version: 4.20.0.2

    Benchmark Results
    2022-04-06 12:46:57.522 [INFO] :: Finished Rendering
    2022-04-06 12:46:57.559 [INFO] :: Total Rendering Time: 4 minutes 49.13 seconds
    Iteration Rate: (1800 / 284.591) 6.324

    Daz Studio Version: 4.20.0.11

    Benchmark Results
    2022-04-06 13:18:54.526 [INFO] :: Finished Rendering
    2022-04-06 13:18:54.564 [INFO] :: Total Rendering Time: 4 minutes 49.63 seconds
    Iteration Rate: (1800 / 285.741) 6.299


    System Configuration
    System/Motherboard: Gigabyte Z590 Aorus Elite AX
    CPU: Intel i5-11600K @ Stock
    GPU: Asus TUF Gaming RTX 3080 10GB OC @ Stock
    System Memory: Corsair 32GB DDR4 @ 2133
    OS Drive: WD Blue 1TB NVME SSD
    Asset Drive: WD Blue 4tb HDD
    Power Supply: Corsair RM850x
    Operating System: Windows 10 21H2 19044.1586
    Nvidia Drivers Version: 511.65
    Daz Studio Version: 4.20.0.2

    Benchmark Results
    2022-04-06 13:30:48.518 [INFO] :: Finished Rendering
    2022-04-06 13:30:48.544 [INFO] :: Total Rendering Time: 2 minutes 10.71 seconds
    Iteration Rate: (1800 / 126.903) 14.184


    Daz Studio Version: 4.20.0.11

    Benchmark Results
    2022-04-06 13:36:27.701 [INFO] :: Finished Rendering
    2022-04-06 13:36:27.726 [INFO] :: Total Rendering Time: 2 minutes 10.75 seconds
    Iteration Rate: (1800 / 127.276)  14.142

  • outrider42 Posts: 3,679

    Paper Tiger_493335 said:

    System Configuration
    System/Motherboard: ASUS TUF GAMING X570-PLUS (WI-FI)
    CPU: AMD Ryzen 7 3800X @ stock
    GPU: EVGA RTX 2080 @  stock
    System Memory: Corsair 32GB DDR4 @ 2133
    OS Drive: WD Blue 512GB NVME SSD
    Asset Drive: WD Blue 4tb HDD
    Power Supply: Corsair RM850x
    Operating System: Windows 10 21H2 19044.1586
    Nvidia Drivers Version: 511.65
    Daz Studio Version: 4.20.0.2

    Benchmark Results
    2022-04-06 12:46:57.522 [INFO] :: Finished Rendering
    2022-04-06 12:46:57.559 [INFO] :: Total Rendering Time: 4 minutes 49.13 seconds
    Iteration Rate: (1800 / 284.591) 6.324

    Daz Studio Version: 4.20.0.11

    Benchmark Results
    2022-04-06 13:18:54.526 [INFO] :: Finished Rendering
    2022-04-06 13:18:54.564 [INFO] :: Total Rendering Time: 4 minutes 49.63 seconds
    Iteration Rate: (1800 / 285.741) 6.299


    System Configuration
    System/Motherboard: Gigabyte
    CPU: Intel i5-11600K
    GPU: Asus TUF Gaming RTX 3080 OC
    System Memory: Corsair 32GB DDR4 @ 2133
    OS Drive: WD Blue 1TB NVME SSD
    Asset Drive: WD Blue 4tb HDD
    Power Supply: Corsair RM850x
    Operating System: Windows 10 21H2 19044.1586
    Nvidia Drivers Version: 511.65
    Daz Studio Version: 4.20.0.2

    Benchmark Results
    2022-04-06 13:30:48.518 [INFO] :: Finished Rendering
    2022-04-06 13:30:48.544 [INFO] :: Total Rendering Time: 2 minutes 10.71 seconds
    Iteration Rate: (1800 / 126.903) 14.184


    Daz Studio Version: 4.20.0.11

    Benchmark Results
    2022-04-06 13:36:27.701 [INFO] :: Finished Rendering
    2022-04-06 13:36:27.726 [INFO] :: Total Rendering Time: 2 minutes 10.75 seconds
    Iteration Rate: (1800 / 127.276)  14.142

    Thank you for posting a 2080 bench! We haven't had anybody post one in a very long time. The iteration rate of 6.299 is a nice bump up from the 5.75 posted with Iray 2019.

    This post also shows the gen on gen improvement with the 3080 rocking 14.142 iterations, more than doubling the 2080's performance. I believe this is a 12GB 3080, as the 12GB variant has slightly better specs than the 10GB version. I don't know if they even still make the 10GB version. It is certainly a little confusing how Nvidia did things. Can you verify which one you have?

  • Paper Tiger_493335 Posts: 11

    outrider42 said:

    Paper Tiger_493335 said:

    System Configuration
    System/Motherboard: ASUS TUF GAMING X570-PLUS (WI-FI)
    CPU: AMD Ryzen 7 3800X @ stock
    GPU: EVGA RTX 2080 @  stock
    System Memory: Corsair 32GB DDR4 @ 2133
    OS Drive: WD Blue 512GB NVME SSD
    Asset Drive: WD Blue 4tb HDD
    Power Supply: Corsair RM850x
    Operating System: Windows 10 21H2 19044.1586
    Nvidia Drivers Version: 511.65
    Daz Studio Version: 4.20.0.2

    Benchmark Results
    2022-04-06 12:46:57.522 [INFO] :: Finished Rendering
    2022-04-06 12:46:57.559 [INFO] :: Total Rendering Time: 4 minutes 49.13 seconds
    Iteration Rate: (1800 / 284.591) 6.324

    Daz Studio Version: 4.20.0.11

    Benchmark Results
    2022-04-06 13:18:54.526 [INFO] :: Finished Rendering
    2022-04-06 13:18:54.564 [INFO] :: Total Rendering Time: 4 minutes 49.63 seconds
    Iteration Rate: (1800 / 285.741) 6.299


    System Configuration
    System/Motherboard: Gigabyte
    CPU: Intel i5-11600K
    GPU: Asus TUF Gaming RTX 3080 OC
    System Memory: Corsair 32GB DDR4 @ 2133
    OS Drive: WD Blue 1TB NVME SSD
    Asset Drive: WD Blue 4tb HDD
    Power Supply: Corsair RM850x
    Operating System: Windows 10 21H2 19044.1586
    Nvidia Drivers Version: 511.65
    Daz Studio Version: 4.20.0.2

    Benchmark Results
    2022-04-06 13:30:48.518 [INFO] :: Finished Rendering
    2022-04-06 13:30:48.544 [INFO] :: Total Rendering Time: 2 minutes 10.71 seconds
    Iteration Rate: (1800 / 126.903) 14.184


    Daz Studio Version: 4.20.0.11

    Benchmark Results
    2022-04-06 13:36:27.701 [INFO] :: Finished Rendering
    2022-04-06 13:36:27.726 [INFO] :: Total Rendering Time: 2 minutes 10.75 seconds
    Iteration Rate: (1800 / 127.276)  14.142

    Thank you for posting a 2080 bench! We haven't had anybody post one in a very long time. The iteration rate of 6.299 is a nice bump up from the 5.75 posted with Iray 2019.

    This post also shows the gen on gen improvement with the 3080 rocking 14.142 iterations, more than doubling the 2080's performance. I believe this is a 12GB 3080, as the 12GB variant has slightly better specs than the 10GB version. I don't know if they even still make the 10GB version. It is certainly a little confusing how Nvidia did things. Can you verify which one you have?

    I just checked and verified that I have the 10GB version of the RTX 3080.

  • PerttiA Posts: 10,024
    edited April 2022

    Installed the 3060 today;

    System Configuration
    System/Motherboard: MSI X99A SLI PLUS
    CPU:  Intel i7-5820K @ 3.30GHz
    GPU: Asus RTX 3060 @ stock
    System Memory: Kingston 4x16GB DDR4 3200-CL16 @ stock
    OS Drive: Kingston 500GB SSD
    Asset Drive: 4 x Kingston 900GB SSD
    Operating System:  Windows 7 Ultimate SP1
    Nvidia Drivers Version: 471.41
    Daz Studio Version: 4.15.0.2

    Benchmark Results

    Total Rendering Time: 4 minutes 0.2 seconds
    IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3060): 1800 iterations, 4.318s init, 229.220s render
    Iteration Rate: 7.85 iterations per second
    Loading Time: 10.98 seconds

    ----------------------------------

    Previous configuration;

    GPU: Asus RTX 2070 Super @ stock
    Nvidia Drivers Version: 456.38
    Daz Studio Version: 4.15.0.2

    Benchmark Results

    Total Rendering Time: 4 minutes 37.83 seconds
    IRAY   rend info : CUDA device 0 (GeForce RTX 2070 SUPER): 1800 iterations, 9.483s init, 261.482s render
    Iteration Rate: 6.884 iterations per second
    Loading Time: 16.35 seconds

    Post edited by PerttiA on
  • outrider42outrider42 Posts: 3,679

    Thanks for verifying. So it is a 10GB version. 

    And now we have a current 2070 Super bench, too, very nice. Interesting that it ran slightly faster than the 2080, but that is not new. It's worth pointing out that the version of Daz is different, with the 2070 Super on 4.15 and the 2080 on 4.20. But the 2070 Super was scoring better than the 2080 results we had back in 2019 as well. This is in contrast to their gaming performance, where the 2070 Super is very, very close to a 2080 but does not beat it unless it is overclocked enough. Yet it seems to have an advantage in Iray across multiple people's results. Regardless, the whole Turing line gets blown away by Ampere, as the 3060 easily outpaces them both and even beats a 2080ti.

    Which is really wild, because the 2060 could also match pace with a 1080ti from the previous generation, and the 2060 Super cleanly beats it. So two generations in a row have had big leaps in ray tracing, and really makes preRTX hardware feel pretty old now.

    So will the 4000 series keep up that trend? Will a 4060 actually match or beat a 3090 when it comes out? I have to say I think it will. After all, I also said I think a 4090 will top 50 iterations, so it would be logical for a 4060 to hit around 25, which is well above the 3090. Of course I could be wrong, but hey, we have had two generations of GPUs in row do this. Perhaps the better questions is why wouldn't they accomplish this?

  • skyeshotsskyeshots Posts: 148

    rrward said:

    I manage my heat situation (used to run 3X1080ti+1060, then 2X2080ti+1070ti, now a single A5000) by moving my render box into the laundry room, the coldest room in the house, and running cables through the wall into my office (I also have a remote power switch for it). The excess heat keeps the laundry room warm and my office is spared the heat.

    So true, silence is precious. 30 feet away for me, with partial partitions and custom acoustic panels. When I have to get near the beast while it's running, I use 3M Shotgunners.
  • skyeshotsskyeshots Posts: 148

    EVGA 3090 FTW3 vs EVGA 3090 Ti FTW3:

    System/Motherboard: MSI Z490 
    CPU: Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz   
    GPU: EVGA 3090 FTW +1250 Mem
    System Memory: 64 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: Intel 670p M.2 1 TB NVMe
    Asset Drive: (Same) Intel 670p M.2 1 TB NVMe
    Operating System: Win 11 Pro, 21H2
    Nvidia Drivers Version: 511.65 Studio Drivers
    Daz Studio Version: 4.20

    2022-04-07 21:10:28.755 [INFO] :: Finished Rendering
    2022-04-07 21:10:28.795 [INFO] :: Total Rendering Time: 1 minutes 35.32 seconds
    2022-04-07 21:12:12.351 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-04-07 21:12:12.351 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 0.851s init, 92.707s render
    Loading Time: 2.613 Seconds
    Rendering Performance: 19.416 Iterations Per Second

    _________________________________________________________________________________

    System/Motherboard: MSI Z490 
    CPU: Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz  
    GPU: EVGA 3090 Ti FTW +1500 Mem
    System Memory: 64 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: Intel 670p M.2 1 TB NVMe
    Asset Drive: (Same) Intel 670p M.2 1 TB NVMe
    Operating System: Win 11 Pro, 21H2
    Nvidia Drivers Version: 512.16 Studio Drivers
    Daz Studio Version: 4.20

    2022-04-07 21:37:53.118 [INFO] :: Finished Rendering
    2022-04-07 21:37:53.158 [INFO] :: Total Rendering Time: 1 minutes 30.87 seconds
    2022-04-07 21:38:48.994 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-04-07 21:38:48.994 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090 Ti): 1800 iterations, 1.047s init, 88.055s render
    Loading Time: 2.815
    Rendering Performance: 20.442 Iterations Per Second

    These scores are pretty low, as I have had this EVGA 3090 over 20 in earlier Daz versions, but this is as close as I could get today for a direct comparison. The Ti is about 5% faster in this system. The Ti was giving me driver issues, so I could not have both cards in the system at the same time, and I ended up with different driver versions, so it is not a true like-for-like comparison. This might have just been me, but if it keeps up, it is a bit of a deterrent for anyone using Daz who was thinking about mixing a 30xx card and a 3090 Ti. I could not get the Ti drivers to load with the 3090, nor could I get generic 3090 drivers to load with the Ti. Again, perhaps just me, and there could be some workarounds out there. I never really ran into this mixing the 3090s with A5000 or A6000 cards.

    Bottom line: The 4 slot cooler is expensive real estate.

     

     

  • outrider42outrider42 Posts: 3,679

    It does seem like 4.20 is knocking speeds down a bit, and that could certainly be a factor in the 2070 S beating the 2080 in the last couple of posts.

    It is really strange that the drivers would behave that way; it doesn't make any sense why they would be any different, as it is still Ampere. The size of the 3090ti is absolutely its biggest fault of all. Four slots is going to fill many people's cases and prevent them from even thinking about adding another card, at least not without a riser or something. In my case I can pop a 3060 in alongside the 3-slot 3090 without any cramping. I'm not sure I could do that if the 3090 were a 4-slot card. 4-slot cards are not something I anticipated when I got this case, LOL.

  • NotAnArtistNotAnArtist Posts: 384

    Hi! There's a statistic in your "Section 3. Benchmark Results" chart which I cannot make sense of.

    In that section, I noticed this comparison:

    -RTX 3060Ti:  Its 10.648 iteration rate took 2 minutes 53.70 seconds
    -RTX 3080:     Its 12.062 iteration rate took 2 minutes 38.53 seconds

    I don't understand this. The 3060 Ti has 4864 CUDAs and the 3080 has 8960 CUDAs.

    How can a difference of 4096 CUDAs between the two cards result in only 15.17 seconds difference in render times?

    I'm debating whether to super-stretch my funds to get the 3080 or 3080 Ti, but there are good arguments for the 3060 Ti. No time for detail here, but I need render speed and my scenes are always very small, so my only concern (I assume) should be finding the fastest card that I can manage to get the funds for.

    BTW, my Firefox and Chromium both hide the statistics in that section. I had to save the page and edit it in Kompozer to 'uncover' them. If you haven't heard of this problem, then I'll assume my Linux OS is the culprit. No biggie!

    Thanks for your insight here!

  • outrider42outrider42 Posts: 3,679

    The benchmark scene is not full proof. We have people running them on wildly different computers, different drivers, and different versions of Daz. This is actually the biggest cause of variation in some bench times. Daz versions older than 4.14 were a fair bit slower with normal maps, so 4.15 and 4.16 times can be faster on the same hardware than the older versions. The newest version, 4.20, may be dropping performance a little, but is still faster than 4.14 and older. So if possible, try to compare against people with the same version of Daz. Sometimes drivers can impact render speeds, too. And sometimes there may be human error involved; we cannot discount that.

    Plus this is only one scene, and the performance you get may vary, but it can give a general idea of what to expect. Also, when you read these, make sure to look not at the total time but at the rendering time. They are different: the total time includes the time it took to load the scene, which is a different story, as that depends on your storage and other components. The iteration rate is calculated by dividing the 1800 benchmark iterations by the rendering time in seconds. The iteration rate is always going to be different in every scene. In some scenes I can get crazy fast iteration rates, but in others the counter barely moves, and that all depends on what you are putting in your scenes.
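
    For anyone tallying these by hand: the rate is just the 1800 benchmark iterations divided by the render seconds reported in the Iray device-statistics line. A quick Python sketch (the log line below is copied from the RTX 3060 result posted earlier in the thread):

```python
import re

def iteration_rate(log_line, iterations=1800):
    """Pull the render time out of an Iray device-statistics line and
    return iterations per second (iterations / render seconds)."""
    match = re.search(r"([\d.]+)s render", log_line)
    if match is None:
        raise ValueError("no render time found in log line")
    return iterations / float(match.group(1))

line = ("CUDA device 0 (NVIDIA GeForce RTX 3060): "
        "1800 iterations, 4.318s init, 229.220s render")
print(round(iteration_rate(line), 2))  # -> 7.85, matching the posted result
```

    Note this deliberately ignores the init time, which belongs to loading, not rendering.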

    The best thing to do here is to take the test yourself. Then you can compare your hardware against other people's, and this will give you a general baseline. So for example if you get like 4 iterations per second with your current GPU, you can look at the 3080 posted above and expect to see around a 3X speed up in your rendering. It will not be that way for every scene, it can vary some. But with this information you can do the basic math and get an idea of what your current scenes might do with a new GPU.
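
    The "basic math" above is just scaling your current render time by the ratio of the two benchmark iteration rates. A rough sketch, using hypothetical numbers (a 10-minute render on a card benching 4 it/s, moving to a card benching ~14.18 it/s like the 3080 above); actual scenes will vary:

```python
def estimated_render_time(current_time_s, current_rate, new_rate):
    """Scale an observed render time by the ratio of benchmark iteration
    rates. A rough estimate only; real scenes deviate from the benchmark."""
    return current_time_s * (current_rate / new_rate)

# 600 s render at 4 it/s -> roughly 169 s on a 14.184 it/s card (~3.5x speedup)
print(round(estimated_render_time(600, 4.0, 14.184), 1))
```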

    With Iray you use both CUDA and ray tracing cores. Before ray tracing hardware came along, it was very easy to estimate: if GPU X scored 1 iteration per second and GPU Z scored 2, then GPU Z would pretty much ALWAYS be twice as fast as GPU X in almost every scene. But ray tracing cores changed everything. They no doubt made a drastic increase in rendering performance, as the 2080ti could double the previous generation 1080ti's speed. But it was not always exactly twice as fast: sometimes it could be even more, and occasionally it might be slightly less.

    The ray tracing cores perform better as geometry increases. The more geometry, the more of a performance gap you get between the cards. You can get complex geometry without huge scenes...Strand Hair can really push out high poly counts depending on how it is made. Some clothing is very high poly. You can subdivide Genesis to 4 or 5 and make just a single figure take a lot of polys, especially at 5.

    I have had a 1080ti, a 3060 and a 3090. There is unquestionably a performance gap between each of them.

    The other thing is price. The market is still highly elevated right now, with all Nvidia cards selling for well above their MSRP. It has improved a lot in just a few months: 3090s were going for $3000 or more, and now they can be had for about $2000. The MSRP is $1500, though, and most of the 3rd party 3090s should be $1800 or less at this point if times were more normal. Some GPUs still have prices that do not line up with their performance tier.

  • RayDAntRayDAnt Posts: 1,134
    edited April 2022

    NotAnArtist said:

    Hi! There's a statistic in your "Section 3. Benchmark Results" chart which I cannot make sense of.

    In that section, I noticed this comparison:

    -RTX 3060Ti:  Its 10.648 iteration rate took 2 minutes 53.70 seconds
    -RTX 3080:     Its 12.062 iteration rate took 2 minutes 38.53 seconds

    I don't understand this. The 3060 Ti has 4864 CUDAs and the 3080 has 8960 CUDAs.

    How can a difference of 4096 CUDAs between the two cards result in only 15.17 seconds difference in render times?

    You're looking at benchmark passes done using different versions of Daz Studio (DS 4.12 vs 4.14), between which there proved to be a significant jump in iteration rate performance (due to under-the-hood tweaks made by Daz developers to their implementation of the Iray plugin itself) in favor of the newer version. And since the lower tier 3060 Ti card was benched on the newer, better performing application version, its performance appears inflated in comparison to the higher spec'ed 3080 when looking at the stats in that chart.

    Strictly speaking, the benchmark scores in this thread are only directly comparable between different card models if the software versions used to run them are the same (read more about confounding variables eg. here.) I would recommend looking through the most recent pages of this thread for other benches of those cards using more similarly performing versions of software. Due to limitations on both my time and the Daz Forum code itself, I have pretty much given up on keeping that initial performance table updated. Meaning that your best bet at finding useful information at the moment is going through the thread pages doing Ctrl+F for the card model you're looking for. It sucks, but it's all we've got at the moment.

    That, by the way, is also why your browsers are giving you trouble even seeing those tables. They're undoubtedly following current web 2.0+ standards for displaying HTML and CSS. These forums currently do no such thing.

     

    Post edited by RayDAnt on
  • fred9803fred9803 Posts: 1,564

    chrislb said:

    Has anyone else had their render times increase and iterations per second decrease with the latest version of Daz?

    and outrider42

    "The newest version 4.20 may be dropping performance a little bit, but is still faster than 4.14 and older."

    Is this the general consensus, that 4.20 is in fact slower than the previous GR build (4.15)?

  • RayDAntRayDAnt Posts: 1,134
    edited April 2022

    fred9803 said:

    chrislb said:

    Has anyone else had their render times increase and iterations per second decrease with the latest version of Daz?

    and outrider42

    "The newest version 4.20 may be dropping performance a little bit, but is still faster than 4.14 and older."

    Is this the general consensus, that 4.20 is in fact slower than the previous GR build (4.15)?

    Keep in mind that Iray's developers at Nvidia are constantly making under-the-hood tweaks and changes to its rendering engine pipeline, resulting in variations in how much time each rendering iteration takes, as well as in the precision and accuracy of its contribution to the final render. Each new version of Daz Studio then inherits these effects as Daz's developers incorporate new Iray versions.

    This means that a benchmark test such as the one this thread is built around - one that uses an arbitrary number of rendering iterations to objectively gauge performance (out of necessity, since visual quality is a subjective measure) - is only telling part of the story. The ultimate measure of rendering performance is overall visual quality, not iteration rate. It is entirely possible for, eg., one older version of Daz Studio/Iray to be twice as fast at completing this benchmark as a more recent version - but also to produce a final render of noticeably worse visual quality than that same more recent version gives at half the number of this benchmark's iterations.

    The benchmarking you see in this thread is technically only useful for gauging hardware performance, not the rendering speed or visual quality of Iray/Daz Studio itself. For that you'd need a whole different benchmarking thread with a completely different testing methodology (one where apparent visual quality - not iterations - is the fixed statistic.)

    Post edited by RayDAnt on
  • NotAnArtistNotAnArtist Posts: 384

    outrider42 said:

    The benchmark scene is not full proof. We have people running them on wildly different computers, different drivers, and different versions of Daz. This is actually the biggest change in some bench times. Daz versions older than 4.14 were a fair bit slower with normal maps. So 4.15 and 4.16 times can be faster with the same hardware than the older versions. The newest version 4.20 may be dropping performance a little bit, but is still faster than 4.14 and older. So if possible, try to compare people with the same version of Daz. Sometimes drivers can impact render speeds, too. And sometimes there may be human error involved, we cannot discount that.

    Plus this is only one scene, and the performance you get may vary. But it can give a general idea of what you can expect. Also, make sure when you read these to not look at the total time, but rather the rendering time. They are different. The total time includes the time it took to load the scene, which is a different story as that depends on your storage and other components. The iteration rate is calculated by dividing the 1800 iterations by the rendering time. The iteration rate is always going to be different in every scene. In some scenes I can get crazy fast iteration rates, but in others it barely moves, and that all depends on what you are putting in your scenes.

    The best thing to do here is to take the test yourself. Then you can compare your hardware against other people's, and this will give you a general baseline. So for example if you get like 4 iterations per second with your current GPU, you can look at the 3080 posted above and expect to see around a 3X speed up in your rendering. It will not be that way for every scene, it can vary some. But with this information you can do the basic math and get an idea of what your current scenes might do with a new GPU.

    With Iray you use both CUDA and ray tracing cores. Before ray tracing hardware came along, it was very easy to estimate. If GPU X scored 1 iteration and GPU Z scored 2 iterations, then GPU Z would pretty much ALWAYS be twice as fast as GPU X in almost every scene. But ray tracing cores changed everything. They no doubt made a drastic increase in rendering performance, as the 2080ti could double the previous generation 1080ti's speed. But it was not always exactly twice as fast. Sometimes it could be even more, and occasionally it might be slightly less.

    The ray tracing cores perform better as geometry increases. The more geometry, the more of a performance gap you get between the cards. You can get complex geometry without huge scenes...Strand Hair can really push out high poly counts depending on how it is made. Some clothing is very high poly. You can subdivide Genesis to 4 or 5 and make just a single figure take a lot of polys, especially at 5.

    I have had a 1080ti, a 3060 and a 3090. There is unquestionably a performance gap between each of them.

    The other thing is price. The market is still highly elevated right now, with all Nvidia cards selling for well above their MSRP. It has improved a lot in just a few months, 3090s were going for $3000 or more, now they can be had for about $2000. The MSRP is $1500 though, and most of the 3rd party 3090s should be $1800 or less at this point if times were more normal. Some GPUs still have prices that do not line up with their performance tier.

    Well, it's not 'fool proof,' either. This fool didn't think those variables could be that meaningful to that degree. I love science, addicted to quantum physics even, but I need actual scientists to explain it to me and it takes several repeats for it to sink in. So, yeah, thanks for your help here!

    I tend to focus on one point or another that interests me at one time. I hope most people don't do this! It could apparently steer a person off in a costly direction.

    I see RayDAnt has a response too, just below. Now I'm afraid to read it...

  • NotAnArtistNotAnArtist Posts: 384

    RayDAnt said:

    You're looking at benchmark passes done using different versions of Daz Studio (DS 4.12 vs 4.14), between which there proved to be a significant jump in iteration rate performance (due to under-the-hood tweaks made by Daz developers to their implementation of the Iray plugin itself) in favor of the newer version. And since the lower tier 3060 Ti card was benched on the newer, better performing application version, its performance appears inflated in comparison to the higher spec'ed 3080 when looking at the stats in that chart.

    Strictly speaking, the benchmark scores in this thread are only directly comparable between different card models if the software versions used to run them are the same (read more about confounding variables eg. here.) I would recommend looking through the most recent pages of this thread for other benches of those cards using more similarly performing versions of software. Due to limitations on both my time and the Daz Forum code itself, I have pretty much given up on keeping that initial performance table updated. Meaning that your best bet at finding useful information at the moment is going through the thread pages doing Ctrl+F for the card model you're looking for. It sucks, but it's all we've got at the moment.

    That, by the way is also why your browsers are giving you trouble even seeing those tables. They're undoubtedly following current web 2.0+ standards for displaying HTML and CSS.  These forums currently do no such thing.

    Please read my response to outrider42 above. My focus was one obsession, CUDA count, because I need speed for a long-planned list of short animation and small scene ideas. I see the mistake.

    About confounding variables. That's what I call them now, but in a different, more frustrated context;-)
    OK, I get it now. The DEGREE to which the variables can affect things is enormous!

    But maybe I've gotten too old too soon, because I don't remember so many things having so many convoluted effects on what used to be a basic decision towards how to build a machine.
    Last time I built was about 11 years ago and it turned out so good and so fast that... What we really call my problem, then, is hubris!

    *The Linux versions of several things are weak. I prefer an old thing like Kompozer for nearly all writing purposes.
    *My newly updated Firefox 99.0 shows hundreds of errors and lots of warnings in your validator link... I still refuse to use Windows online.

  • skyeshotsskyeshots Posts: 148

    NotAnArtist said:

    Please read my response to outrider42 above. My focus was one obsession, CUDA count, because I need speed for a long-planned list of short animation and small scene ideas. I see the mistake.

    About confounding variables. That's what I call them now, but in a different, more frustrated context;-)
    OK, I get it now. The DEGREE to which the variables can affect things is enormous!

    But maybe I've gotten too old too soon, because I don't remember so many things having so many convoluted effects on what used to be a basic decision towards how to build a machine.
    Last time I built was about 11 years ago and it turned out so good and so fast that... What we really call my problem, then, is hubris!

    *The Linux versions of several things are weak. I prefer an old thing like Kompozer for nearly all writing purposes.
    *My newly updated Firefox 99.0 shows hundreds of errors and lots of warnings in your validator link... I still refuse to use Windows online.

    The equation might look something like: (Hardware Combination x OS Version x Nvidia Driver Version) / Daz Version = Rendering Performance

    Each element in the equation above (and perhaps a few I am missing) contributes a coefficient of variance to your final results. Things like switching from one driver version to another, a new Daz version, or even your Windows Update edition can bend the numbers. The version of Daz that you run probably has the biggest impact, as it may come bundled with a newer Iray renderer package.

    In the bigger setups, a CV at work becomes more evident. Going back to the question from chrislb and fred9803 about 4.20 running slower: it is hard to answer conclusively because Daz does not publish older versions that I know of, so I cannot easily rerun 4.16. I can say that my scores running the 5x A6000 test have dropped from 95.47 iterations per second (Daz 4.16) to 87.26 iterations per second (Daz 4.20), or about 9% slower. While this sounds like a big bummer, at the other end of this they have added faster scene loading, volumetric effects, and greater stability.
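
    That "about 9%" is just the relative change between the two iteration rates; a quick check in Python (numbers from the 5x A6000 runs quoted above):

```python
def percent_change(old, new):
    """Relative change between two benchmark iteration rates, in percent
    (negative means the newer measurement is slower)."""
    return (new - old) / old * 100

# 95.47 it/s on Daz 4.16 vs 87.26 it/s on Daz 4.20
print(round(percent_change(95.47, 87.26), 1))  # -> -8.6, i.e. roughly 9% slower
```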

    If budget is no issue and you want max CUDA cores, go with an A6000 or 3090 Ti. If you have a hint of sensibility, though, you should be good with an RTX A5000 or any consumer 3090 card. The rest of your core system components will factor somewhat into your creative process but have minuscule effects on the final rendering of your animations.

     

  • fred9803fred9803 Posts: 1,564

    There's no doubt that subtle variances, be it a butterfly effect or a ghost in the machine LOL, can have more influence on render times than hardware or software put together.

    I'm sure we've all experienced what we thought would be quick renders render slow, or what we thought would be slow renders render fast. I did a scene today, 3 G8F and 2 G8M and relatively low light, and got the sleeping bag ready for a long wait. The damn thing finished in 21.2 minutes and looked great. Alternatively, I've had comparatively far less heavy scenes (less complicated with respect to lighting, props, people) take 4+ times longer to render out.

    As Lewis Carroll said, "I mark this day with a white stone." Or in my case I save the scene file for later use, because I know it has hit some sort of sweet spot (an optimum point or combination of factors); even when I later choose different characters, props etc., I know it renders fast.

    It might be more productive, at least for me, excluded by price from high-end machinery, to continue on this path, as it appears that GPUs, drivers, DS iterations, etc. can sometimes have little relevance to render times.

     

     

  • outrider42outrider42 Posts: 3,679

    Honestly though, Iray has been remarkably consistent since its original inclusion into Daz Studio. I have been running these benchmarks for years, and each time the results are real close to each other. There have been only a handful of events that changed rendering speed since Daz Studio added Iray in 2015.

    The first seems to be within Nvidia's drivers. Once I got two 1080tis, my times were almost always at the 4 minute mark, with the variance only a few seconds. But at some point all drivers above a certain version were slower. It wasn't just me, and rolling back drivers recovered the speed. We never did figure out why. My render time lost 30 seconds, going to 4 minutes and 30 seconds.

    Then Iray made the big change to Iray RTX. This didn't really change GTX cards too much, but it added full support for RTX cards with ray tracing hardware. The ray tracing cards were already faster, but with this they got a huge boost.

    Then Daz released 4.14. This version of Daz made an internal change without updating Iray. They changed how normal maps were handled, and this made scenes with normal maps render a lot faster than before. The bench scene has some normal maps, and my time went from 4:30 all the way to 3:30 with this update. So this update made Iray potentially faster than it ever was.

    Now we get to 4.20, and numerous people are getting slower times than before. I don't have 4.20, plus I upgraded my hardware, so I can't offer a time with my 1080tis.

    But this is an easy way to break it down. All results from 4.12 and older are mostly with the old OptiX Prime Iray. Iray RTX came along with full OptiX and changed things. Versions 4.14 through 4.16 all enjoyed some faster rendering. 4.20 seems to have lost a little speed.

    So if you look at the benchmarks, pay attention to the version. You really only need to break it down into a few groups.

    You can also consider this. I used two 1080tis for a long time. I have a 3090 and a 3060 now. They stack up like this.

    The 3060 is about like two 1080tis. The 3090 is faster than four 1080tis, almost 5 times faster than one. Using the 3060 and 3090 together is around 6 times faster than a single 1080ti, or more.

    That adds up fast. So all the 3000 series cards between the 3060 and 3090 are going to be between 2 and 4 times faster than a 1080ti. This can vary a little depending on your exact scene, but this will be the general performance.

  • NotAnArtistNotAnArtist Posts: 384
    edited April 2022

    skyeshots said:

    The equation might look something like: (Hardware Combination x OS Version x Nvidia Driver Version) / Daz Version = Rendering Performance

    Each element in the equation above (and perhaps a few I am missing) will populate a coefficient of variance on your final results. Things like switching from one driver version to another, a new Daz version or even your Windows Update editions can bend the numbers. The version of Daz that you run probably has the biggest impact as it may come bundled with a newer IRAY renderer package. 

    In the bigger setups, a CV at work becomes more evident. Going back to the question from chrislb and fred9803 about 4.20 running slower, it is hard to answer conclusively because Daz does not publish older versions that I know of. I cannot easily rerun 4.16. I can say that my scores running the 5x A6000 test have dropped from 95.47 iterations per second (Daz 4.16) to 87.26 iterations per second (Daz 4.20) or about 9% slower. While this sounds like a big bummer, at the other end of this, they have added faster scene loading, volumetric effects and greater stability.

    If budget is no issue and you want max CUDA cores, go with an A6000 or 3090Ti. If you have a hint of sensibility though, you should be good with an RTX A5000 or any consumer 3090 card. The rest of your core system components will factor somewhat on your creative process but have miniscule effects in the final rendering of your animations.

    Budget is a huge issue - my social security is below the poverty line. But I've saved well over the decades and this rendering goal needs to be accomplished "now!" Really sick of waiting.

    outrider42's overview of recent DS releases vs GPU issues is a good, clear breakdown of the balance in simple terms. Very appreciated.

    Whatever the DS software can do now or in the future, I'll just aim for a 2080 Ti in the next few weeks. I can underclock it if it draws more power than I want to spend in electric bills. (Just discovered I bought a surprisingly strong MB and PSU two years ago!! They were in storage in their original unopened boxes! As has been many of our lives for those 2 years).

    This is going to work!

    Anything smaller than a 2080 Ti would probably be regretful when trying to make small animations... "If only I'd spent that little bit more."
    Anything larger would be greedy, because if the trends this thread shows continue, the card will offer more than I'd expected going forward. And I'm moving up from a 1060 6GB in a machine that can't safely render at all! Even there, my larger scenes took no longer than 10 minutes to render, because I hid or edited out anything not needed. So, yes, the 2080 Ti, or even 'just' a 2080, will be fine.

    You folks are greatly appreciated by me and I'm sure many many other struggling ignorants like myself. Thank you!

    Post edited by NotAnArtist on