Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • RayDAnt Posts: 1,134

    PerttiA said:

    RayDAnt said:

    Longer answer: Not really, but having the non-KF version (with iGPU) does open up some potentially very useful alternative multi-GPU rendering system configurations.

    And potential problems.

    No - not remotely, since you can have it and simply not use it.

  • PerttiA Posts: 10,024

    RayDAnt said:

    PerttiA said:

    RayDAnt said:

    Longer answer: Not really, but having the non-KF version (with iGPU) does open up some potentially very useful alternative multi-GPU rendering system configurations.

    And potential problems.

    No - not remotely, since you can have it and simply not use it.

    It's still there, adding to the complexity of the hardware and software environment. Over the years, iGPUs have been the culprit behind countless problems, even in systems that do have a dedicated GPU.

  • skyeshots Posts: 148

    Mart1n71 said:

    Thought I'd add a 2x 3090 benchmark.

    System Configuration
    System/Motherboard: Gigabyte x299x Aorus Master
    CPU: Intel i9-10900X @ 3.70GHz (stock)
    GPU: CUDA device 0 (NVIDIA GeForce RTX 3090) Zotac Trinity v1 stock speed
    GPU: CUDA device 1 (NVIDIA GeForce RTX 3090) Gigabyte Aorus Master v2
    System Memory: Corsair Dominator 128GB DDR4 @ 2133
    OS Drive: Samsung Evo 1 TB SSD
    Asset Drive: Corsair Force MP510 series 1920GB NVMe PCIe M.2
    Power Supply: Corsair AX1600i
    Operating System: Windows 10 Pro Version 21H1 build 19043.1645
    Nvidia Drivers Version: 472.47
    Daz Studio Version: 4.20.0.3

    Benchmark Results
    DAZ_STATS Total Rendering Time: 1 minutes 3.84 seconds
    IRAY_STATS
    CUDA device 0 (NVIDIA GeForce RTX 3090): 830 iterations, 1.161s init, 58.956s render
    CUDA device 1 (NVIDIA GeForce RTX 3090): 970 iterations, 1.373s init, 59.173s render

    Iteration Rate:
    CUDA device 0 14.08 iterations per second
    CUDA device 1 16.39 iterations per second
    Loading Time: 4.67 seconds

    Nice build, very similar to something I had going for a long time. This is a great setup for Daz.

  • skyeshots Posts: 148

    System/Motherboard: SuperMicro X12
    CPU: 2x Xeon Gold 6348 @ Stock 3.5 GHz
    GPU: 5x A6000
    System Memory: 512 GB ECC @ 3200 MHz
    OS Drive: WD SN850 1 TB NVMe
    Asset Drive: 256 GB RAM DRIVE 
    Operating System: Win 11 Pro
    Nvidia Drivers Version: 516.25 DCH
    Daz Studio Version: 4.20.1 public Beta
    PSU: 2x Corsair AX1600

    2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA RTX A6000): 346 iterations, 2.466s init, 24.390s render
    2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (NVIDIA RTX A6000): 358 iterations, 1.749s init, 24.506s render
    2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 2 (NVIDIA RTX A6000): 354 iterations, 1.846s init, 24.407s render
    2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 3 (NVIDIA RTX A6000): 354 iterations, 1.743s init, 24.532s render
    2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 4 (NVIDIA RTX A6000): 348 iterations, 1.781s init, 23.969s render
    2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:                              40 iterations, 1.032s init, 24.351s render
    2022-07-03 23:02:54.694 [INFO] :: Finished Rendering
    2022-07-03 23:02:54.783 [INFO] :: Total Rendering Time: 32.44 seconds

    Loading Time: 7.908 Seconds
    Rendering Performance: 73.38 Iterations Per Second

    Finally got all five A6000 connected with PCIe Gen 4x16. Seems like a small milestone, but it was a long time coming.
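    For anyone who wants to compute these per-device figures from their own logs, here is a minimal sketch. The sample lines are copied from the device statistics above; the regex and variable names are my own, not part of any Daz or Iray API.

```python
import re

# Sample "Device statistics" lines as they appear in the Daz Studio log
# (timestamps and the "Iray [INFO] - ..." prefix stripped for brevity).
log = """\
CUDA device 0 (NVIDIA RTX A6000): 346 iterations, 2.466s init, 24.390s render
CUDA device 1 (NVIDIA RTX A6000): 358 iterations, 1.749s init, 24.506s render
CPU:                              40 iterations, 1.032s init, 24.351s render
"""

# Matches "<device>: <N> iterations, <X>s init, <Y>s render"
pattern = re.compile(
    r"(?P<device>.+?):\s+(?P<iters>\d+) iterations, "
    r"(?P<init>[\d.]+)s init, (?P<render>[\d.]+)s render"
)

rates = {}
for m in pattern.finditer(log):
    # Per-device rate = iterations / render seconds
    rates[m["device"].strip()] = int(m["iters"]) / float(m["render"])

for device, rate in rates.items():
    print(f"{device}: {rate:.2f} iterations per second")
```

    Summing the per-device rates gives the combined figure reported in these posts.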

  • RayDAnt Posts: 1,134

    PerttiA said:

    RayDAnt said:

    PerttiA said:

    RayDAnt said:

    Longer answer: Not really, but having the non-KF version (with iGPU) does open up some potentially very useful alternative multi-GPU rendering system configurations.

    And potential problems.

    No - not remotely, since you can have it and simply not use it.

    It's still there, adding to the complexity of the hardware and software environment.

    Not if it's turned off or disconnected.

     

    Over the years, iGPUs have been the culprit behind countless problems, even in systems that do have a dedicated GPU.

    "countless problems" such as...?

  • PerttiA Posts: 10,024

    RayDAnt said:

    PerttiA said:

    It's still there, adding to the complexity of the hardware and software environment.

    Not if it's turned off or disconnected.

    And what about when the user is too afraid to touch any settings in the BIOS?

    Over the years, iGPUs have been the culprit behind countless problems, even in systems that do have a dedicated GPU.

    "countless problems" such as...?

    It manifests in different ways.

  • outrider42 Posts: 3,679

    PerttiA said:

    RayDAnt said:

    PerttiA said:

    It's still there, adding to the complexity of the hardware and software environment.

    Not if it's turned off or disconnected.

    And what about when the user is too afraid to touch any settings in the BIOS?

    Over the years, iGPUs have been the culprit behind countless problems, even in systems that do have a dedicated GPU.

    "countless problems" such as...?

    It manifests in different ways.

    You can apply these exact words to somebody who does NOT have an iGPU. If somebody is afraid of the BIOS... they are going to need help using their computer regardless. Suggesting people are afraid to use the BIOS is not even a valid point on the subject, because that affects them equally on any potential issue, iGPU or not.

    They can have issues that can "manifest in different ways"? Yeah... so can not having an iGPU. If you have no iGPU and you have a video problem, you are going to have a hard time troubleshooting that issue without a SECOND GPU on hand. How many people have that? You are fine with assuming some people don't like using the BIOS, but then expect them to know how to troubleshoot without an iGPU... these two things rather contradict each other.

    My friend's GPU died. He was without a computer at all for a month while he waited for the painfully slow RMA process from that GPU maker. He didn't have a backup GPU, or a tablet or laptop, so he was pretty much offline for a whole month. If only he had an iGPU, his life would not have been so miserable for that month!

    Another one had a problem trying to upgrade his video card on an older motherboard. Without an iGPU, he was unable to fix the problem on his own. This is a situation that accessing the BIOS would fix, but this person was afraid to do that... which is exactly like your first statement about people who might not want to use the BIOS.

    So I don't see the relevance of those words. Anybody can have any problem with a computer. To cast so much blame on the iGPU is just wrong here, when the presence of an iGPU can in fact be the saving grace of a troubleshooting session.

    Can an iGPU cause an issue? Sure... but you are throwing the baby out with the bathwater here.

    And again, the other issue is that ditching the iGPU is straight up leaving performance on the table for anybody who does content creation. While it may not directly affect Iray, it can in certain configurations.

    Given how much crossover there is between Daz Studio and other content creation software, it is a fair bet that many Daz users also use software that benefits from iGPUs. In that case, telling them to avoid iGPUs is just flat out wrong. iGPUs can also help streamers.

    I combed the comments on both videos, and could only find ONE comment out of hundreds that made any reference to iGPUs potentially being a problem. Otherwise the comments almost universally support iGPUs. The second fellow builds and repairs a lot of computers as well; he owns a shop.

  • skyeshots Posts: 148
    edited July 2022

    przemyslaw.kulesz4 said:

     

    Hi,

    Quick question: will choosing the 12th-generation Intel processor in the "KF" version instead of "K" have any influence on rendering speed in DAZ 3D?

     

     

    iGPUs have come a long way. In the past, onboard graphics were terrible, consuming valuable system RAM and creating havoc with off-brand drivers. Today, they are built into the CPU die and very stable. When I build workstations for business use, I always opt for the iGPU for simplicity and efficiency, especially for laptops.

    For content creation, I skip the iGPU altogether. Discrete GPUs are better for video encoding, rendering, or even image libraries in Lightroom. They are great for taking the load off the chip and keeping it cooler. For content creation, I would suggest spending the money on extra CPU cores or higher clock speeds. I think price might be your bigger concern in this equation, though. You need to look at the overall value for what you do, and the processor you have is a great value. In this class of CPU, I would personally lean toward the i9-12900K for more cores and threads, because that allows more processing overhead and better system responsiveness. This is personal preference though, and much more expensive.

     

    Post edited by skyeshots on
  • Guys, I have a question. I currently have a bog-standard 2070 (MSI). According to the benches, it's 32.6 iterations per dollar per hour, and 4.5 iterations per second. For the 3080 it's 62.1 iterations per dollar per hour, and 12 iterations per second. So for Iray the 3080 is roughly 3x better than the 2070 (in performance terms).

    The question then is as follows: given I want to upgrade, should I wait for a 4080, as they're mere months away, or buy a 3080 and skip the 4xxx series?

  • RayDAnt Posts: 1,134

    Guys, I have a question. I currently have a bog-standard 2070 (MSI). According to the benches, it's 32.6 iterations per dollar per hour, and 4.5 iterations per second. For the 3080 it's 62.1 iterations per dollar per hour, and 12 iterations per second. So for Iray the 3080 is roughly 3x better than the 2070 (in performance terms).

    The question then is as follows: given I want to upgrade, should I wait for a 4080, as they're mere months away, or buy a 3080 and skip the 4xxx series?

    Regardless, you should wait until the 40 series GPUs are already out before buying anything - 30 series prices should see a much bigger drop by then. Since I'm assuming rendering is your main focus, in your place I think I'd go for a 40 series over an existing 30. Especially on the lower end, VRAM capacity is almost certainly going to see a significant bump with the new cards, and that is often the key factor in what makes a GPU good for rendering or not (not raw speed.)
  • Pickle Renderer Posts: 236
    edited July 2022
    Regardless, you should wait until the 40 series GPUs are already out before buying anything - 30 series prices should see a much bigger drop by then. Since I'm assuming rendering is your main focus, in your place I think I'd go for a 40 series over an existing 30. Especially on the lower end, VRAM capacity is almost certainly going to see a significant bump with the new cards, and that is often the key factor in what makes a GPU good for rendering or not (not raw speed.)

    You know, I'm not so sure about the RAM bump. I read they're going to 12GB for the 4080, with the '70 at 10GB. That's 3080 Ti territory. I'm really curious about the Iray performance of the new hardware. They're reporting huge improvements, but they're also reporting huge power use. That's going to be expensive in 2022/2023. If I did buy a 4xxx card I'd probably undervolt it.

    Anyway I think you're right.  I should just wait.

    Post edited by Pickle Renderer on
  • outrider42 Posts: 3,679

    You are asking a difficult question. The question you need to ask first is just how important is an upgrade to you, and how soon. Hardware is always getting better, with new hardware releasing about every 2 years. And of course the past 2 years have been pure chaos, to put it mildly.

    The particular stat you mention can vary wildly, too. I am assuming the cost per iteration is based on MSRP, correct? But we haven't had MSRP pricing in so long that any cost-per-iteration comparison is not valid. Plus, every model can have a different base price. The thing to really look at here is the iteration rate, the general speed of the card in question. Take the benchmark test yourself; the download link to the DUF is in the first post. I believe the 2070 numbers are out of date, since the newer releases of Daz nerfed rendering speeds.

    Then compare your iteration rate versus a 3080 in the same version of Daz. These numbers can vary a little depending on the scenes you build, so keep in mind this is a general number. The actual performance gap may increase or decrease in your scenes. But this bench will give you an idea of what kind of performance a 3080 will give you over the 2070.

    Still, the 3080 should be a big increase over the 2070, while offering a little more VRAM. The Iray performance of Ampere is far greater than any gaming benchmarks indicate because the ray tracing cores are fully utilized. Even games that use ray tracing don't push them like Iray.

    The 4000 series is just around the corner...but how close to that corner are we talking??? The latest rumors say that Nvidia is looking to delay the 4000 launch as long as possible because they are struggling to sell old 3000 stock. They do not want to launch a new product and leave old products collecting dust on store shelves. Some even say it might be December when the 4080 launches, which is 5 months away. Is that worth waiting for? That is up to you to decide. I personally do not believe Nvidia will wait longer than that, AMD is also set to launch new GPUs, and Nvidia does not want AMD to launch first. So AMD could force Nvidia's hand here. They are playing a corporate game of cat and mouse. Ultimately when RTX 4000 launches is actually up to AMD! If AMD announces they are launching in September, Nvidia will respond immediately by doing the same. But if AMD holds off or takes a while, Nvidia will as well.

    BTW, rumors have been suggesting the 4080 will actually have 16GB of VRAM, not just 12. So if this is true, then that would be doubling your current 2070 VRAM.

    There is one other option here, if you have not considered it, you can keep your 2070 and run two GPUs at once. If your PC can handle it, that would give you the biggest performance boost of all. Running a 2070+3080 is WAY faster than running a 3080 alone. This will probably not be as fast as a single 4080 though, and VRAM does not stack, so you would still be limited by the 8GB in your 2070. If the scene exceeded 8GB, the 2070 would not be used. And there is also the crazy chance you could go with a 2070+4080, LOL.

    So you have lots of options. I wouldn't be too concerned about power, I don't think it will be as crazy as some people suggest. But it will be more than you are used to.

  • nonesuch00 Posts: 18,120
    edited July 2022

    System Configuration
    System/Motherboard: Gigabyte B450M DS3H Wifi
    CPU: AMD Ryzen 7 5700G @ 3.80GHz with Radeon Vega 8 Graphics 
    GPU: EVGA GeForce RTX 3060 12GB @ 1.882GHz
    System Memory:  Patriot 2X16GB RAM @ 2.66GHz
    OS Drive: PNY CS2130 2TB NVMe M.2 PCIe SSD
    Asset Drive: same
    Power Supply: SeaSonic 750Watts Gold
    Operating System: MS Windows 11 21H2
    Nvidia Drivers Version: nVidia GEForce 3060 Game Drivers 516.59
    Daz Studio Version: DAZ Studio Pro Public Beta 4.20.1.43
    Optix Prime Acceleration: N/A

    Benchmark Results
    2022-07-21 21:25:13.209 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01781 iterations after 294.084s.
    2022-07-21 21:25:14.121 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01786 iterations after 294.995s.
    2022-07-21 21:25:15.043 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01791 iterations after 295.917s.
    2022-07-21 21:25:15.970 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01796 iterations after 296.844s.
    2022-07-21 21:25:16.752 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 297.626s.
    2022-07-21 21:25:16.753 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.
    2022-07-21 21:25:17.258 [INFO] :: Finished Rendering
    2022-07-21 21:25:17.293 [INFO] :: Total Rendering Time: 4 minutes 59.84 seconds


    Iteration Rate: (DEVICE_ITERATION_COUNT / DEVICE_RENDER_TIME) : 1800 iterations/297.626s = 6.048 iterations per second
    Loading Time: ((TRT_HOURS * 3600 + TRT_MINUTES * 60 + TRT_SECONDS) - DEVICE_RENDER_TIME) seconds : 42.584s
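    The two formulas spelled out above translate directly into code. A minimal sketch of the same arithmetic; the function names are my own choosing, not anything from Daz:

```python
def iteration_rate(iterations: int, render_seconds: float) -> float:
    # DEVICE_ITERATION_COUNT / DEVICE_RENDER_TIME
    return iterations / render_seconds

def loading_time(trt_hours: int, trt_minutes: int, trt_seconds: float,
                 render_seconds: float) -> float:
    # (Total Rendering Time converted to seconds) - DEVICE_RENDER_TIME
    return (trt_hours * 3600 + trt_minutes * 60 + trt_seconds) - render_seconds

# Iteration rate for the log above: 1800 iterations over 297.626s of render
print(f"{iteration_rate(1800, 297.626):.3f} iterations per second")  # 6.048
```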

    +++++

    The PNY GTX 1650 Super 4GB I had last year took about 16 minutes to render the test scene, so that's a decrease in render time of about 11 minutes; in other words, the RTX 3060 12GB renders the scene almost 3 times (300%) faster than the GTX 1650 Super 4GB.

    Special thanks to AgitatedRiot, who gifted me the RTX 3060. I am sort of excited to play with lighting, since the RTX 3060 handles Iray previews in the DS viewport quite easily.

    Post edited by nonesuch00 on
  • Jason Galterio Posts: 2,562

    Newegg knows I have a weak will and they fill my email box up with video card sales.

    Which led to more experiments in an attempt to convince myself I don't need to make the investment.

    System/Motherboard:  PRIME Z390-A
    CPU:  Intel Core i9-9900K CPU @ 3.60GHz
    GPU:  NVIDIA GeForce RTX 2070 SUPER
      NVIDIA GeForce RTX 3080
      NVIDIA GeForce RTX 2080 SUPER
    System Memory:  DDR4-2666 16GB x2 STT
    Operating System:  Windows 10 Pro 64

    Pass 1 (3 GPUs)
    ===============
    2022-07-28 14:23:30.992 [INFO] :: Total Rendering Time: 1 minutes 16.9 seconds
    2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 2 (NVIDIA GeForce RTX 2070 SUPER): 418 iterations, 5.588s init, 67.117s render
    2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080):       971 iterations, 2.861s init, 69.999s render
    2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (NVIDIA GeForce RTX 2080 SUPER): 411 iterations, 2.413s init, 69.275s render

    Iteration Rate: 6 + 14 + 6 = 26 iterations per second
    Loading Time: 7 seconds

    Pass 2 (1 GPU)
    ==============
    2022-07-28 14:33:00.640 [INFO] :: Total Rendering Time: 2 minutes 10.25 seconds
    2022-07-28 14:33:30.176 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-28 14:33:30.176 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080): 1800 iterations, 2.090s init, 126.593s render

    Iteration Rate: 14 iterations per second
    Loading Time: 4 seconds

    Pass 3 (2 GPUs)
    ===============
    2022-07-28 14:35:56.340 [INFO] :: Total Rendering Time: 1 minutes 35.57 seconds
    2022-07-28 14:36:00.372 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-28 14:36:00.372 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080):       1285 iterations, 1.987s init, 91.615s render
    2022-07-28 14:36:00.373 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (NVIDIA GeForce RTX 2080 SUPER): 515 iterations, 2.873s init, 90.983s render

    Iteration Rate: 14 + 6 = 20 iterations per second
    Loading Time: 6 seconds

    In a mostly non-scientific, theoretical way, swapping out the 2070 for a second 3080 should result in:

    Iteration Rate: 14 + 14 + 6 = 34 iterations per second

    New render time should be about 54 seconds + load time, a gain of about 16 seconds.

    If a real scene takes 10x longer than the benchmark scene, I am looking at a current render time of about 16 minutes. Investing in a new card would bring that down to around 9 minutes.

    Can anyone poke some holes in this logic? The gain is just on the edge of being worth it.
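    One way to poke at it: treating the benchmark's 1800 iterations as fixed work shared across devices, predicted render time is just total iterations over the summed rates. A sketch using the rounded per-card rates from the passes above (real scaling loses a little to scheduling overhead, so treat it as an upper bound on throughput):

```python
def predicted_render_seconds(total_iterations: int, device_rates: list) -> float:
    # Iray splits iterations across devices, so combined throughput is
    # approximately the sum of the per-device iteration rates.
    return total_iterations / sum(device_rates)

# Hypothetical upgrade: two 3080s (~14 it/s each) plus the 2080 SUPER (~6 it/s)
print(f"{predicted_render_seconds(1800, [14, 14, 6]):.1f}s")  # ~52.9s + load time
```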

  • outrider42 Posts: 3,679
    edited July 2022

    The biggest hole is that RTX 4000 is coming soon. You will have even more options for performance. While the 4090 might be the only one to launch at first, the prices of existing cards will keep dropping.

    But otherwise, you can never have too much GPU power. It is always going to be a question of how much are you willing to pay for the extra speed. The 3080 would also offer extra VRAM over the 2070, so there is at least that.

    The other caveat is that this bench will not always scale to your exact scenes. The new performance of RT cores is enhanced most by geometry. The more complex your scene gets, the more of a difference the RT cores make. So if your scenes are more complex, then maybe you will actually see more improvement than what you do in the bench. But if your scenes are not that complex, your improvements might be less. It just depends on what you do.

    Since you have different cards, you have a unique chance to test them in your own scenes and see just how they do. You could cap the iteration or convergence to make the tests faster. Then you should have enough information to judge if another 3080 is worth it in your eyes. Or if waiting helps.

    After all, if a 4090 more than doubles the 3090's performance, it might actually be logical to dump the 2000 series cards completely in favor of a 3080 + 4090 (or 4080).

    BTW, which version of Daz Studio are you running?

    Post edited by outrider42 on
  • Jason Galterio Posts: 2,562
    edited July 2022

    DS version is 4.20.0.17, 64 bit.

    I'm all over the place on the decision making here. To be perfectly honest, what I have is more than adequate. The drop on 3XXX prices, plus their availability pushed me to experiment more.

    I wanted to see if my set up could even handle the third card. I'm stuck with the 2080 because nothing else will really fit inside the computer case without a lot of monkeying around.

    The 2070 was gathering dust and I really should think about donating it. Just been burned in the past... The 3080 is on an 850 Power Supply by itself, so I am wasting energy there.

    I've had the 3080 about a month now and I've been particularly impressed with how quiet it runs. Much quieter than both of my 2XXX. Granted that could be just the specific configuration of this 3080 GPU.

    The jump from 2080 to 3080 also made Iray interactive mode much smoother. The 2080 could do it, but it would chug along at times.

    Picking up a 3090 is tempting because of the increased memory.

    Waiting for a 4XXX is also tempting, but I'm concerned about a lack of supply being an issue again. I've been hearing warning bells about low chip availability in the manufacturing sector. Nothing concrete yet, but they usually don't start sounding the alarm unless there is a problem coming.

    Post edited by Jason Galterio on
  • outrider42 Posts: 3,679
    Keep in mind, what I'm saying here is not confirmed officially. But...

    There is no chip supply shortage. There never really was; it was mostly demand, driven in large part by crypto mining. That is not the case now, though it could always change if a new coin pops up to replace Ether. If there is a supply issue, it would be more because Nvidia is trying to decrease supply itself. There are multiple rumors that say Nvidia is trying to cut back production with TSMC. They are getting desperate to sell stock right now, which is why prices are dropping. Prices should keep dropping for the foreseeable future.

    This overstock situation leads to a spot where the old 3000 cards could be competing against 4000 cards. Nvidia doesn't want this, because a 4000 launch would tank the sale value of 3000 series cards when they have so many still remaining. Of course the AIBs don't want this either, and Nvidia may even push back the 4000 launch as a result. They are talking about waiting until next year to launch the 4070, and even the 4080. That would be highly unusual for Nvidia. We might only get a 4090 this year, and that has never happened; Nvidia always launches at least 2 chips very close together, like the 1080 and 1070, the 2080 Ti and 2080, and the 3090 and 3080. If they launch a 4090 only, that would be because they still want to beat AMD at the very top. When AMD chooses to launch will play a role in all this, too. If AMD launches multiple tiers, Nvidia will be forced to respond.

    Either way these are not due to any chip shortages. If anybody claims that, I frankly do not believe them.
  • chrislb Posts: 100

    przemyslaw.kulesz4 said:

     

    Hi,

    Quick question: will choosing the 12th-generation Intel processor in the "KF" version instead of "K" have any influence on rendering speed in DAZ 3D?

    I bought the i7 12700KF version a bit hastily, without much further thought, and have since read various conflicting opinions. Fortunately, I still have a few days to exchange it.

    I ran a test on the scene from this thread. Turning on the CPU shortened the rendering time only marginally.

    Only GPU RTX 3090

    2022-07-03 13:06:11.186 [INFO] :: Finished Rendering
    2022-07-03 13:06:11.202 [INFO] :: Total Rendering Time: 1 minutes 47.14 seconds
    2022-07-03 13:06:54.671 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-03 13:06:54.671 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 0.941s init, 105.076s render


    GPU + CPU RTX 3090 /  i7 12700KF

    2022-07-03 13:04:00.888 [INFO] :: Finished Rendering
    2022-07-03 13:04:00.908 [INFO] :: Total Rendering Time: 1 minutes 46.73 seconds
    2022-07-03 13:04:19.579 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-03 13:04:19.579 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1686 iterations, 0.999s init, 104.134s render
    2022-07-03 13:04:19.579 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: 114 iterations, 0.772s init, 103.805s render

    I repeated the same test on my second platform: Ryzen 9 3900X + Nvidia RTX3070. With a total time of about 3 minutes, the difference was less than 3 seconds.

    Only GPU RTX 3070

    2022-07-03 13:09:26.357 [INFO] :: Finished Rendering
    2022-07-03 13:09:26.394 [INFO] :: Total Rendering Time: 2 minutes 58.22 seconds
    2022-07-03 13:17:01.481 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-03 13:17:01.481 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3070): 1800 iterations, 0.975s init, 174.968s render

    GPU + CPU RTX 3070 /  Ryzen 9 3900X

    2022-07-03 12:58:00.574 [INFO] :: Finished Rendering
    2022-07-03 12:58:00.616 [INFO] :: Total Rendering Time: 2 minutes 55.57 seconds
    2022-07-03 12:59:02.373 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-07-03 12:59:02.374 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3070): 1627 iterations, 1.920s init, 171.289s render
    2022-07-03 12:59:02.374 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:                                     173 iterations, 1.274s init, 170.849s render

    Regards.

    One thing I noticed a while ago is that it's not worth it to use your CPU in renders with most newer GPUs. Even an overclocked 32-core Threadripper struggles to match the performance of a 1080 Ti, and even the 1080 Ti isn't that great in Daz rendering performance compared to most Nvidia RTX 2000 and RTX 3000 series GPUs. Quite often, adding the CPU to the rendering task actually increases render times if it's not a high-core-count CPU.

    I haven't had a chance to test the 5000 series Threadripper CPUs in Daz yet, but I don't expect them to be much better than a 1080 Ti without chilled liquid cooling.

  • Jason Galterio Posts: 2,562
    edited August 2022

    Here is an interesting discovery that is sort of on topic.

    My UPS absolutely does not like having the 3rd video card attached to it.

    The UPS is fairly beefy, a CyberPower unit that I picked up at Costco for ~$200 USD. The only things plugged into it are my desktop (750W PSU), the external video card PSU (850W), and my monitor.

    Now, I'm not an electrical engineer, but the CyberPower website implies that this UPS would only output 600W, which wouldn't even be enough to power the desktop PSU at load. Since this is the first time I am having issues, I have to assume it's the last thing I changed, i.e. adding the 2070 into the mix.

    With the 850 powering only one video card, no issues. With it powering both video cards, no issue until DS started rendering. Then it began beeping to alert me of an issue. I didn't lose power, but the UPS was warning me that it needed to pull from the battery to handle the load.

    I shut down, pulled the power to the 2070, rebooted and rendered the same scene. No issues.

    I will probably shift the monitor to another UPS and see if that will lighten the load. I don't need the UPS batteries overheating when I am away from my desk.

    (Edited to change the price.)

    Post edited by Jason Galterio on
  • nonesuch00 Posts: 18,120

    Jason Galterio said:

    Here is an interesting discovery that is sort of on topic.

    My UPS absolutely does not like having the 3rd video card attached to it.

    The UPS is fairly beefy, a CyberPower unit that I picked up at Costco for ~$200 USD. The only things plugged into it are my desktop (750W PSU), the external video card PSU (850W), and my monitor.

    Now, I'm not an electrical engineer, but the CyberPower website implies that this UPS would only output 600W, which wouldn't even be enough to power the desktop PSU at load. Since this is the first time I am having issues, I have to assume it's the last thing I changed, i.e. adding the 2070 into the mix.

    With the 850 powering only one video card, no issues. With it powering both video cards, no issue until DS started rendering. Then it began beeping to alert me of an issue. I didn't lose power, but the UPS was warning me that it needed to pull from the battery to handle the load.

    I shut down, pulled the power to the 2070, rebooted and rendered the same scene. No issues.

    I will probably shift the monitor to another UPS and see if that will lighten the load. I don't need the UPS batteries overheating when I am away from my desk.

    (Edited to change the price.)

    I've never bought a UPS, but I am surprised they'd even consider selling one that wasn't rated for 1500W.
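    A rough power budget makes the alarm unsurprising. A minimal sketch, assuming illustrative draw numbers (guesses, not measurements of the system above); note that a UPS's watt rating, not its VA rating, is the limit that matters for sustained load:

```python
ups_watt_rating = 600  # assumed continuous output rating of the UPS

# Illustrative draw estimates under a rendering load (guesses, not measured).
# A PSU's label (750W/850W) is its capacity, not its actual draw.
estimated_draw_watts = {
    "desktop (750W PSU, CPU + one GPU rendering)": 400,
    "external PSU (850W, two GPUs rendering)": 450,
    "monitor": 40,
}

total = sum(estimated_draw_watts.values())
print(f"estimated draw {total}W vs UPS rating {ups_watt_rating}W")
if total > ups_watt_rating:
    print("over the rating: the UPS falls back to battery and alarms")
```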

  • aspiringartist Posts: 5
    edited August 2022

    Results for RTX 3060 12 GB (non-TI version) only GPU

    System Configuration
    System/Motherboard: Asrock X570 Phantom Gaming 4
    CPU: Amd Ryzen 3400G @ 3.7GHz (default)
    GPU: Gigabyte GeForce RTX 3060 Eagle OC 12GB GDDR6 (GV-N3060EAGLE OC-12GD 2.0) @  (Core clock 1320 MHz / Boost clock 1807 MHz) (default)
    System Memory: GOODRAM 32GB (2x16GB) 3600MHz CL17 IRDM PRO @2133Mhz (default DR 1066/15/15/15/36/50/1.20V, no XMP enabled)
    OS Drive: Silicon Power XD80 2 TB M.2 2280 PCI-E x4 Gen3 NVMe
    Asset Drive: Same
    Power Supply: Thermaltake Toughpower GF1 750W
    Operating System:  Windows 10 Home 19044.1826
    Nvidia Drivers Version: 516.93 (studio drivers)
    Daz Studio Version: 4.20.0.17

    Benchmark Results
    2022-07-30 10:46:16.909 [INFO] :: Total Rendering Time: 4 minutes 25.25 seconds
    IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3060): 1800 iterations, 2.059s init, 260.058s render

    Iteration Rate: (1800 / 260.058) = 6.92 iterations per second
    Loading Time: (265.25 - 260.058) = 5.192 seconds

     

    Comment:
    With MSRP at $329 that would give 75.50 iterations per dollar per hour, which is still not bad (2nd place in the table, if I'm correct).

    The 3060 Ti is at 96.07 iterations per dollar per hour (1st place), but the major bonus of the 3060 over the 8 GB 3060 Ti is the extra 4 GB for larger scenes (although it is not as easy to hit that bottleneck as I thought earlier; the test scene takes around 5 GB of GPU memory while rendering).
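    For reference, the metrics quoted in this thread can be reproduced from the Iray log values. A minimal sketch using the RTX 3060 numbers posted above (the MSRP is the quoted $329; small differences from the table figures come down to rounding):

    ```python
    # Derive the thread's benchmark metrics from the Iray log values.
    # Inputs are the RTX 3060 figures posted above.
    total_render_time = 4 * 60 + 25.25   # "Total Rendering Time" converted to seconds
    iterations = 1800                    # fixed iteration count of the benchmark scene
    render_seconds = 260.058             # per-device render time from the log
    msrp_usd = 329.0                     # quoted MSRP

    iteration_rate = iterations / render_seconds             # iterations per second
    loading_time = total_render_time - render_seconds        # scene/kernel setup overhead
    iter_per_dollar_hour = iteration_rate * 3600 / msrp_usd  # value metric used in the table

    print(round(iteration_rate, 2))        # ~6.92
    print(round(loading_time, 3))          # ~5.192
    print(round(iter_per_dollar_hour, 2))  # ~75.7 (the post quotes 75.50; rounding differs)
    ```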

    Post edited by aspiringartist on
  • kttttttktttttt Posts: 3

    System Configuration

    System/Motherboard: MacBook Pro (16-inch, 2019) 

    CPU: Intel Core i9-9880H 8-Core CPU @ 2.30GHz

    GPU: N/A 

    System Memory: 16 GB 2667 MHz DDR4

    OS Drive: Macintosh HD 1TB

    Asset Drive: Same

    Power Supply: 96W

    Operating System: macOS 12.5

    Nvidia Drivers Version: N/A

    Daz Studio Version: 4.20.0.2

     

    Benchmark Results

    Total Rendering Time: 1 hours 32 minutes 22.17 seconds

    IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    IRAY:RENDER ::   1.0   IRAY   rend info : CPU: 1800 iterations, 2.905s init, 5535.202s render

    Iteration Rate: (1800 iterations / 5535.202s) = 0.325191 iterations per second

    Loading Time: (1 hour * 3600 + 32 minutes * 60 + 22.17 seconds) - 5535.202s = 5542.17 - 5535.202 = 6.968 seconds

     

     

    CPU MSRP: USD556

    iterations per dollar per hour: 2.1029

    iterations per second: 0.32519


  • chrislbchrislb Posts: 100
    edited August 2022

    nonesuch00 said:

    I've never bought a UPS but I am surprised they'd even consider selling one that wasn't rated for 1500W.

    Most home PC UPS systems are rated for 600-900 watts because most people don't have PCs that draw that much power.  My 1200 watt rated UPS trips overload protection if I run game benchmarks with ray tracing enabled using both RTX 3090s. However, that's because each 3090 is drawing 700 watts.  During 3D rendering they usually don't draw much more than 400 watts, even with an uncapped power limit.  When I had a pair of RTX 2080 Supers with a higher wattage manufacturer's BIOS, they would trip overload on my old 900 watt rated UPS.

    It's hard to find a 120V home UPS system rated for more than 1200 watts.  A 1500 watt 120V UPS is usually a commercial rack mount system or a floor-standing unit the size of a mid tower PC case.
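    As a rough sanity check of the loads being described, a UPS budget can be sketched like this. The wattage figures below are illustrative assumptions based on the draws mentioned in this thread, not measurements:

    ```python
    # Rough UPS load budget. Component draws are illustrative estimates;
    # real draw varies with workload (rendering vs. gaming with ray tracing).
    ups_rating_w = 1200  # continuous output rating of the UPS

    loads_w = {
        "gpu_0 (RTX 3090, rendering)": 400,  # ~400 W each while rendering
        "gpu_1 (RTX 3090, rendering)": 400,
        "cpu_and_motherboard": 250,
        "monitor": 40,
    }

    total_w = sum(loads_w.values())
    headroom_w = ups_rating_w - total_w
    # Rendering fits with ~110 W to spare; a 700 W-per-GPU gaming spike would trip overload.
    print(total_w, headroom_w)  # 1090 110
    ```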

    Post edited by chrislb on
  • Anyone have 4090 scores yet? Or predictions? Perhaps 28 iterations/second?
  • skyeshots said:

    Anyone have 4090 scores yet? Or predictions? Perhaps 28 iterations/second?

    The 4090 is not yet available, anyone who has one is bound by strict NDAs. It is of course quite possible that an Iray update will be needed before the new cards can be used at all.

  • PerttiAPerttiA Posts: 10,024

    chrislb said:

    nonesuch00 said:

    I've never bought a UPS but I am surprised they'd even consider selling one that wasn't rated for 1500W.

    Most home PC UPS systems are rated for 600-900 watts because most people don't have PCs that draw that much power.  My 1200 watt rated UPS trips overload protection if I run game benchmarks with ray tracing enabled using both RTX 3090s. However, that's because each 3090 is drawing 700 watts.  During 3D rendering they usually don't draw much more than 400 watts, even with an uncapped power limit.  When I had a pair of RTX 2080 Supers with a higher wattage manufacturer's BIOS, they would trip overload on my old 900 watt rated UPS.

    It's hard to find a 120V home UPS system rated for more than 1200 watts.  A 1500 watt 120V UPS is usually a commercial rack mount system or a floor-standing unit the size of a mid tower PC case.

    No problem finding one for 240V, I can get a 1900W APC at 1500eur (VAT 24% included) in a week, a 2850W would cost about twice as much.

  • outrider42outrider42 Posts: 3,679

    skyeshots said:

    Anyone have 4090 scores yet? Or predictions? Perhaps 28 iterations/second?

    I already made one guess. Historically, Iray has fared much better with RTX than the video game performance uplift. The 2000 series saw a huge bump, and the 3000 series saw another very large one, both much higher than what games got. The ray tracing cores do a lot more of the work, so the actual uplift is closer to the performance increase of the ray tracing cores themselves.

    The 4090 is supposed to be twice as fast as the 3090, maybe even slightly more, like 2.1x or 2.2x. The ray tracing cores once again see a bigger uplift; the talk is of a 2.2x to 2.5x increase.

    So Iray should easily fall in between these, and likely on the high side. The 3090 does between 18 and 20 iterations per second in this benchmark. Doubling that should give us 36-40 iterations per second. A 2.5x increase would put it right at 50, and I actually think this is very possible.

    None of the big tech outlets benchmark the ultra-niche Iray, but some do bench Octane and V-Ray. Their results should give us a good clue.
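    The projection above can be written out as a quick calculation. The baseline range and multipliers are the rumored figures quoted in this post, not measurements:

    ```python
    # Speculative projection of 4090 Iray rates from a 3090 baseline, using the
    # uplift multipliers discussed above (rumored figures, not benchmarks).
    baseline_3090 = (18.0, 20.0)  # iterations/second range quoted for the 3090
    multipliers = (2.0, 2.5)      # plain doubling .. rumored RT-core uplift

    low = baseline_3090[0] * multipliers[0]   # conservative estimate
    high = baseline_3090[1] * multipliers[1]  # optimistic estimate
    print(low, high)  # 36.0 50.0
    ```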

    But like Richard said, the 4000 series might not even work with Iray for a while. That is one of Iray's biggest failings: you have to update to use a new generation of GPU, and the process of getting that update to consumers is painfully slow. The Iray Dev Team first has to release an updated version of Iray. We can only hope that they already have dev models of Lovelace on hand to get it done before launch, but we don't know. THEN we have to wait for Daz to release an updated version of Daz Studio that uses this new Iray. In other words, we have to go through two separate verification processes in order to get an update.

    And we all know how fast "Daz time" can be. <.<

    Maybe we get lucky and it just works. Oddly it seemed to work with the Titan V pretty quickly back when it launched.

  • RayDAntRayDAnt Posts: 1,134

    I'm fully expecting next-generation cards to work out of the box with Iray, given the lack of any entirely new ASIC "core" processing unit seemingly on the horizon in the GPU world right now (the big GPU compute initiatives, raytraced rendering, AI processing, and conventional rendering, are already addressed by existing solutions). Although obviously only time will tell.

  • outrider42outrider42 Posts: 3,679

    According to rumors, Lovelace is not just a refresh; there is a change in core design from Ampere. Last time, the Iray Dev Team snuck in Ampere support before it released, but there is no guarantee that they repeat that, and no guarantee whatsoever that Daz will quickly release a beta channel with the update, either. Even though the Dev Team put out an update for Ampere before Ampere launched, we still had to wait until October 13 for a Daz Studio beta that added support. The hardware released on September 17, so it took over 3 weeks, almost 4, for Daz Studio to get the update out, even under the very best circumstances, where they had access to the new Iray before the cards launched!

    The fact that the Iray Dev Team even needed to post an update for Ampere proves you cannot simply plug the new cards in and expect them to work. As Lovelace is not just a refresh of Ampere, there is a good chance that support for the 4000 series will not arrive until a solid month after launch.

    Basically, I would not want to place bets on this. It has happened enough times (Daz Studio getting support well after launch) that history shows we cannot make assumptions about it. The key will be watching the Iray updates on their own site, not the Daz forums. If they mention Lovelace support soon after it gets announced, that is a good sign DS will get it quickly. Of course, they cannot post anything about it yet; Lovelace is not official. The days after the announcement will be key.

    If they give no word, we may have to reach out to them and ask them directly if Lovelace will get supported by update, or if it already works.

    So if anybody is selling their current card in order to get a 4000 series card, you might be waiting for that update. It is probably better to keep your current card for at least a little while to make sure you can use the new card you buy.

  • I will report 4090 compatibility & IRAY performance promptly after drop. I am guessing 28 iterations per second, more perhaps after IRAY is updated.
Sign In or Register to comment.