Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • System Configuration
    System/Motherboard: DoradoOC-AMP (Z490 / PCIe 3)
    CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
    GPU: Nvidia RTX 3090
    System Memory: HyperX XMP RGB 32 GB DDR4 @ 3200 MHz
    OS Drive: SSD WD_BLACK 1 TB PCIe NVMe TLC (M.2 2280)
    Asset Drive: Seagate IronWolf 10 TB (SATA 6 Gbit/s, 7,200 RPM)
    Operating System: Windows 10 64-bit 20H2
    Nvidia Drivers Version: 471.96
    Daz Studio Version: 4.15.0.30 Public Build

    Benchmark Results

    2021-09-10 19:04:44.381 Total Rendering Time: 1 minutes 36.43 seconds
    2021-09-10 19:05:09.890 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-09-10 19:05:09.890 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.470s init, 92.906s render

    Iteration Rate: (1800 / 92.906) = 19.37 iterations per second
    Loading Time: (1*60 + 36.43) - 92.906 = 3.524 seconds
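
    For anyone repeating these numbers at home, here is a minimal Python sketch of how the two derived figures above fall out of the Daz log lines. The log lines are copied from this post; the regular expressions and the little script around them are just my own construction based on that format, not any official Daz Studio tooling.

```python
import re

# Log lines copied from the benchmark results above.
daz_line = "2021-09-10 19:04:44.381 Total Rendering Time: 1 minutes 36.43 seconds"
iray_line = ("2021-09-10 19:05:09.890 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : "
             "CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.470s init, 92.906s render")

# Total rendering time reported by Daz Studio, converted to seconds.
m = re.search(r"Total Rendering Time: (?:(\d+) hours? )?(?:(\d+) minutes? )?([\d.]+) seconds", daz_line)
hours, minutes, seconds = (float(g) if g else 0.0 for g in m.groups())
total_s = hours * 3600 + minutes * 60 + seconds

# Iteration count and pure render time reported by Iray for the device.
m = re.search(r"(\d+) iterations, ([\d.]+)s init, ([\d.]+)s render", iray_line)
iterations, render_s = float(m.group(1)), float(m.group(3))

print(f"Iteration Rate: {iterations / render_s:.2f} iterations per second")  # 1800 / 92.906 = 19.37
print(f"Loading Time: {total_s - render_s:.3f} seconds")                     # 96.43 - 92.906 = 3.524
```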

  • Mark_e593e0a5 Posts: 1,594
    edited September 2021

    System Configuration
    System/Motherboard: Apple iMac Pro
    CPU: Intel(R) Xeon(R) W-2150B CPU @ 3.00GHz
    GPU: Gigabyte Aorus RTX3090 Gaming Box (Thunderbolt 3 eGPU)
    System Memory: Built-in 32 GB 2666 MHz DDR4 ECC
    OS Drive: Apple SSD AP2048
    Asset Drive: Network Drive - NAS Synology DS920+
    Operating System: Apple Bootcamp, Windows 10 Pro 21H1, Build 19043.1165
    Nvidia Drivers Version: Studio 471.68
    Daz Studio Version: 4.15.0.30
    Optix Prime Acceleration: default

    Benchmark Results
    Total Rendering Time: 1 minutes 40.93 seconds
    CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 2.164s init, 95.411s render
    Iteration Rate: (1800 / 95.411) = 18.87 iterations per second
    Loading Time: 5.519 seconds

  • KCMustang Posts: 114
    edited September 2021

    System Configuration
    System/Motherboard: Infinity W5-11R7N Laptop PCIe 4
    CPU: Intel(R) Core(TM) i7-11800H CPU @ 2.30GHz
    GPU: Nvidia GeForce RTX3070 Laptop 8GB Max-P
    System Memory: 64 GB 3200 MHz DDR4
    OS Drive: Samsung SSD 980 Pro 1TB
    Asset Drive: Same
    Operating System: Microsoft Windows 10 Home (x64) Build 19042.1165
    Nvidia Drivers Version: 471.96
    Daz Studio Version: 4.15.0.30
    Optix Prime Acceleration: N/A

    Benchmark Results
    Total Rendering Time: 2 minutes 41.27 seconds
    CUDA device 0 (NVIDIA GeForce RTX 3070 Laptop GPU):      1800 iterations, 2.189s init, 157.186s render
    Iteration Rate: (1800 / 157.186) = 11.45 iterations per second
    Loading Time: 4.084 seconds

     

  • KCMustang Posts: 114
    edited September 2021

    And I ran this on my 4 year-old laptop out of curiosity:

    System Configuration
    System/Motherboard: Acer Aspire F5-573G (laptop)
    CPU: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
    GPU: Nvidia GeForce 940MX 4GB
    System Memory: 16 GB 2133 MHz DDR4
    OS Drive: Crucial M4 SSD 128GB
    Asset Drive: Same
    Operating System: Microsoft Windows 10 Home (x64) Build 19042.1165
    Nvidia Drivers Version: 471.11
    Daz Studio Version: 4.15.0.30
    Optix Prime Acceleration: N/A

    Benchmark Results
    Total Rendering Time: 1 hours 4 minutes 49.37 seconds
    CUDA device 0 (NVIDIA GeForce 940MX): 1800 iterations, 7.883s init, 3878.309s render
    Iteration Rate: 0.463 iterations per second
    Loading Time: 11.061 seconds

     

  • RayDAnt Posts: 1,134

    RE: @JamesJAB @Skyeshots @Saxa -- SD

    Finally took the plunge on an RTX A5000 after seeing proof from Igor's Lab that the A5000 (and likely the A6000 as well) is directly compatible with all RTX 3080/3090 Reference (not Founders) Edition waterblocks for custom watercooling. For what it's worth to you. First,

    System Configuration
    System/Motherboard: Gigabyte Z370 Aorus Gaming 7
    CPU: Intel 8700K @ stock (MCE enabled)
    GPU: Nvidia Titan RTX @ stock (custom watercooled)
    GPU: Nvidia RTX A5000 @ stock
    System Memory: Corsair Vengeance LPX 32GB DDR4 @ 3000 MHz
    OS Drive: Samsung Pro 980 512GB NVME SSD
    Asset Drive: Sandisk Extreme Portable SSD 1TB
    Operating System: Windows 10 Pro version 21H1 build 19043
    Nvidia Drivers Version: 471.96 GRD
    Daz Studio Version: 4.15.0.30 64-bit
    Optix Prime Acceleration: N/A


    Titan RTX Solo Results

    Benchmark Results: Titan RTX (WDDM driver mode - used as display device)
    Total Rendering Time: 3 minutes 19.48 seconds
    CUDA device 1 (NVIDIA TITAN RTX): 1800 iterations, 1.901s init, 196.068s render
    Iteration Rate: 9.180 iterations per second
    Loading Time: 3.412 seconds

    Benchmark Results: Titan RTX (WDDM driver mode) 
    Total Rendering Time: 3 minutes 17.94 seconds
    CUDA device 1 (NVIDIA TITAN RTX): 1800 iterations, 2.058s init, 193.673s render
    Iteration Rate: 9.294 iterations per second
    Loading Time: 4.267 seconds

    Benchmark Results: Titan RTX (TCC driver mode)
    Total Rendering Time: 3 minutes 15.32 seconds
    CUDA device 1 (NVIDIA TITAN RTX): 1800 iterations, 2.615s init, 190.427s render
    Iteration Rate: 9.452 iterations per second
    Loading Time: 4.893 seconds

     

    RTX A5000 Solo Results

    Benchmark Results: RTX A5000 (WDDM driver mode - used as display device)
    Total Rendering Time: 1 minutes 53.69 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1800 iterations, 1.941s init, 110.225s render
    Iteration Rate: 16.330 iterations per second
    Loading Time: 3.465 seconds

    Benchmark Results: RTX A5000 (WDDM driver mode) 
    Total Rendering Time: 1 minutes 52.44 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1800 iterations, 1.993s init, 108.904s render
    Iteration Rate: 16.528 iterations per second
    Loading Time: 3.536 seconds

    Benchmark Results: RTX A5000 (TCC driver mode)
    Total Rendering Time: 1 minutes 49.37 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1800 iterations, 2.068s init, 105.757s render
    Iteration Rate: 17.020 iterations per second
    Loading Time: 3.613 seconds

     

    Titan RTX + RTX A5000 Combo Results:

    Benchmark Results: Titan RTX + RTX A5000 (both WDDM driver mode)
    Total Rendering Time: 1 minutes 15.7 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1160 iterations, 1.950s init, 71.427s render
    CUDA device 1 (NVIDIA TITAN RTX): 640 iterations, 2.065s init, 71.389s render
    Iteration Rate: 25.201 iterations per second
    Loading Time: 4.273 seconds

    Benchmark Results: Titan RTX + RTX A5000 (both TCC driver mode)
    Total Rendering Time: 1 minutes 12.96 seconds
    CUDA device 0 (NVIDIA RTX A5000): 1160 iterations, 1.968s init, 69.293s render
    CUDA device 1 (NVIDIA TITAN RTX): 640 iterations, 1.977s init, 69.366s render
    Iteration Rate: 25.949 iterations per second
    Loading Time: 3.594 seconds


    Conclusions:

    Interesting to note that the Titan RTX looks to be almost a perfect performance match for the A4000 these days...

    As indicated, all the above benchmarks were done with the A5000 in its default air-cooled configuration. Am actually quite happy with how well the blower design performs even in the restrictive environment that the Tower 900 presents for air-cooling (it simply isn't designed for that.) But I do plan to convert it over to watercooling eventually, since there is clearly some additional performance to be had out of it. The way to tell if operating temp is adversely affecting a GPU's performance is to watch its frequency stats during a render and notice whether it immediately shoots up to a high number and stays there (no thermal limiting) or whether it fluctuates over time (a small monitoring sketch follows at the end of this post).

    It's also worth mentioning imo that these cards are each operating over a mere 8 lanes of PCI-E 3.0. So if there were any lingering doubts about how little latest-gen/high PCI-E lane count motherboards matter for Iray-type rendering systems... you really are better off putting the money toward more RAM/a faster GPU instead.
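
    For anyone who wants to actually log those frequency stats rather than eyeball an overlay, below is a rough Python sketch of one way to do it with nvidia-smi while a render is running. The one-second interval, the sample count, and the particular fields queried are just choices of mine; adjust to taste.

```python
import subprocess
import time

# Poll SM clock, temperature, and power once per second while a render runs.
# Requires nvidia-smi on the PATH; the fields are standard --query-gpu properties.
QUERY = "timestamp,name,clocks.sm,temperature.gpu,power.draw"

def poll(interval_s: float = 1.0, samples: int = 120) -> None:
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        # One CSV line per GPU; an SM clock that climbs and then sags over the
        # course of a render suggests thermal (or power) limiting is kicking in.
        print(out)
        time.sleep(interval_s)

if __name__ == "__main__":
    poll()
```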

  • outrider42 Posts: 3,679

    Unless the card is running on the high side I really wouldn't bother water cooling it, I think any performance gain would be pretty small in the scope of things. 

    It is pretty cool to see just how much faster the A5000 is over the last generation Titan. It is not even close. For those wondering, the A5000 is similar to a 3080, with a few fewer CUDA cores but 24GB of VRAM.

    The A5000 has 8192 CUDA cores, while the 3080 has 8704.

  • Hey, congratz on your RTX A5000 plunge!

    Huh, good to know that PCI3 vs 4 has basically no diff.
    When I get re-focussed in next month or two, will test that out too.
    Will be interesting to see if PCI4 will make a diff at all in this next hardware lifecycle.

    Will for sure check GPU frequency when I test then as well.
    Will keep your comment in mind.

    If you do add water, would be interested to hear what you decided on and what you did.
    I'm still firmly in the "air is safest" camp, but very open to suggestions.

  • chrislb Posts: 100

    RayDAnt said:

    RE: @JamesJAB @Skyeshots @Saxa -- SD

    Finally took the plunge on an RTX A5000 after seeing proof from Igor's Lab that the A5000 (and likely the A6000 as well) is directly compatible with all RTX 3080/3090 Reference (not Founders) Edition waterblocks for custom watercooling. For what it's worth to you. First,

    The A5000 is a 230 watt card according to Nvidia.  

    https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs21/rtx-a5000/nvidia-rtx-a5000-datasheet.pdf

    With a waterblock, I don't think you will see any significant improvement in render times unless your air cooler temperatures are rather high. The main benefit might be noise reduction. It appears that the A5000's default fan curve keeps the GPU around 78C. You can drop those temperatures with a higher fan speed and more noise. Dropping the GPU temps to 45C with water cooling might get you up to an additional 85-100 MHz in GPU clock speed with the default VBIOS on the card (rough numbers on what that buys you are at the end of this post).

    It may be possible to use certain versions of MSI afterburner and raise the power limit a little along with increasing the GPU clock speed.  However, I doubt that even with that method you will be getting much more than 250 watts power draw at the peak.

    With the A5000 having its 8 pin power connector attached by a pigtail, would it cause any waterblock clearance issues?
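
    For rough scale (back-of-the-envelope arithmetic, not a measurement): 85-100 MHz on a core that spends a render in the 1700-1800 MHz range works out to only about a 5% clock increase, so the render-time improvement from the cooler temperatures alone would likely be in the low single digits of percent.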

  • RayDAnt Posts: 1,134
    edited September 2021

    outrider42 said:

    Unless the card is running on the high side I really wouldn't bother water cooling it, I think any performance gain would be pretty small in the scope of things. 

    Most likely. However, since I have so much watercooling headroom in my system already (right now it's the single Titan RTX being cooled by an entire one of these) and the cost of an additional waterblock + fittings is almost negligible at this point, I'll most likely try it out.

     

    It is pretty cool to see just how much faster the A5000 is over the last generation Titan. It is not even close.

    And at a considerably smaller power budget too - 230 vs. 280 watts. Which might not sound like all that much of a difference by itself. But when you're talking about having two or more of these in a system together... that can easily be the difference between needing a new power supply or not (my 750 watt EVGA G2 is still going strong with this setup.)

     

    Saxa -- SD said:

    Hey, congratz on your RTX A5000 plunge!

    Huh, good to know that PCI3 vs 4 has basically no diff.
    When I get re-focussed in next month or two, will test that out too.
    Will be interesting to see if PCI4 will make a diff at all in this next hardware lifecycle.

    Barring major changes to the way Iray handles data transfer during the rendering process (preloading all render data to GPU memory), the chances of it making any sort of difference are basically zero. Unless Nvidia were to bring back memory pooling between multiple GPUs via the PCI-E bus (it is a little-known fact that prior to the introduction of NVLink with the GP100, Nvidia GPUs already had a system for memory pooling: GPUDirect P2P.) But the chances of that are pretty small imo for the obvious reasons...

     

    If you do add water, would be interested to hear what you decided on and what you did.
    I'm still firmly in the "air is safest" camp, but very open to suggestions.

    Everyone's gonna have their own comfort level when it comes to intentionally bringing water near/in your computing equipment. For me it's being able to have the water loops themselves located below (rather than above or sandwiched in between, as you often see these days) the core components of a PC that makes me comfortable with it. Which is why I am such a big fan of the Tower 900 case specifically for watercooling - since it allows you to do just that (you can see the pre-A5000 version of my implementation of it here.)

     

    chrislb said:

    RayDAnt said:

    RE: @JamesJAB @Skyeshots @Saxa -- SD

    Finally took the plunge on an RTX A5000 after seeing proof from Igor's Lab that the A5000 (and likely the A6000 as well) is directly compatible with all RTX 3080/3090 Reference (not Founders) Edition waterblocks for custom watercooling. For what it's worth to you. First,

    The A5000 is a 230 watt card according to Nvidia.  

    https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs21/rtx-a5000/nvidia-rtx-a5000-datasheet.pdf

    With a waterblock, I don't think you will see any significant improvement in render times unless your air cooler temperatures are rather high. The main benefit might be noise reduction. It appears that the A5000's default fan curve keeps the GPU around 78C. You can drop those temperatures with a higher fan speed and more noise.

    Can confirm that the A5000 likes to keep its core temp in the lower to mid 70s (with clock speeds hovering in the 1650-1850 range) with the default cooling setup. In my particular case, increased computing noise is a major issue (the system doubles as an audio production PC in a home recording studio), so turning up fan speeds is a last resort only.

     

    Dropping the GPU temps to 45C with water cooling might get you up to an additional 85-100 MHz in GPU clock speed with the default VBIOS on the card.

    It may be possible to use certain versions of MSI afterburner and raise the power limit a little along with increasing the GPU clock speed.  However, I doubt that even with that method you will be getting much more than 250 watts power draw at the peak.

    As a rule, I don't mess with voltages on my pc components (other than turning them down when undervolting is possible) for both component longevity and power efficiency reasons. But I'm more than willing to spend the extra buck necessary to get them the best cooling/power delivery subsystems possible so that they perform the best they can while still maintaining spec. Especially in systems where the end goal is to have 2+ high end GPUs in them (where thermal interaction between GPUs starts to become a concern).

     

    With the A5000 having its 8 pin power connector attached by a pigtail, would it cause any waterblock clearance issues?

    Since the Tower 900's vertical layout puts GPUs in a hanging downward (from the rear panel IO end) position, I don't see it as much of a concern in my specific case. But it is something I will be evaluating closely if/when I make the change to watercooling.

     

    ETA: That's another thing I'm really appreciating about this A5000 right now: All that performance - just a single 8-pin power connector.

  • outrider42 Posts: 3,679

    There is one note about the A series (formerly Quadro) cards; they only have Displayport. The gaming models have both Displayport and HDMI 2.1. For most people this is probably no big deal, but if you are using a TV as a monitor, for example an LG OLED, then this does matter. TVs do not have Displayports, so you would need to use a Displayport to HDMI adapter. Doing this will not grant you all HDMI 2.1 features, including Gsync/freesync, which is obviously a serious deal breaker for gamers. You may get 4K at 120Hz, and even HDR if the Displayport is 1.4, but the lack of Gsync hurts. If you are not a gamer, then this is no big deal and you can happily use an OLED TV with the A series with an adapter.

  • RayDAnt Posts: 1,134

    outrider42 said:

    There is one note about the A series (formerly Quadro) cards; they only have Displayport. The gaming models have both Displayport and HDMI 2.1. For most people this is probably no big deal, but if you are using a TV as a monitor, for example an LG OLED, then this does matter. TVs do not have Displayports, so you would need to use a Displayport to HDMI adapter. Doing this will not grant you all HDMI 2.1 features, including Gsync/freesync, which is obviously a serious deal breaker for gamers. You may get 4K at 120Hz, and even HDR if the Displayport is 1.4, but the lack of Gsync hurts. If you are not a gamer, then this is no big deal and you can happily use an OLED TV with the A series with an adapter.

    This did occur to me as a potential limitation as I was first plugging the card in - it's been a long time since I've seen a GPU without at least one HDMI port.

  • outrider42 Posts: 3,679
    edited September 2021

    RayDAnt said:

    outrider42 said:

    There is one note about the A series (formerly Quadro) cards; they only have Displayport. The gaming models have both Displayport and HDMI 2.1. For most people this is probably no big deal, but if you are using a TV as a monitor, for example an LG OLED, then this does matter. TVs do not have Displayports, so you would need to use a Displayport to HDMI adapter. Doing this will not grant you all HDMI 2.1 features, including Gsync/freesync, which is obviously a serious deal breaker for gamers. You may get 4K at 120Hz, and even HDR if the Displayport is 1.4, but the lack of Gsync hurts. If you are not a gamer, then this is no big deal and you can happily use an OLED TV with the A series with an adapter.

    This did occur to me as a potential limitation as I was first plugging the card in - it's been a long time since I've seen a GPU without at least one HDMI port.

    I hadn't really thought of it either, until just recently. I was thinking seriously about getting the A4000. It has the core count of a 3070ti but with 16GB of VRAM. Not only that, but the A4000 is just a single slot card and only 140 Watts on 6 pins, which is pretty incredible for such a high end GPU. I believe it might be the most powerful single slot GPU right now. Plus at around $1200 it is actually decently priced. It is not marked up much over its MSRP (though being Quadro class those are already high), but considering the VRAM and single slot size it really isn't a bad deal.

    However, I realized that it only offers Displayports, and this gave me pause. I do not have an OLED yet, but my plans are to buy the 42" LG OLED when it launches. Yep, a 42" OLED is coming soon guys! It may still be a bit big for a monitor; my current one is just 32". But I don't like to sit directly on top of my screen, I tend to sit pretty far back from it. So the 42" should be fine, and the OLED has so many advantages over LCD based screens. The 42" model should be around $1000. That is way cheaper than the dedicated OLED PC monitors LG is producing, which will have Displayport... but cost $4000. The $4000 OLEDs are a hard pass, but a $1000 42" OLED TV with great gaming features and that OLED performance is hard to pass up. Hardware Unboxed reviewed the 48" LG OLED TV as a gaming monitor and it performed, well, like you'd expect an OLED would. It trounced the competition in many categories.

    **Just a small edit to add that the A series OLEDs (not to be confused with the A series GPUs) are an exception. The LG OLED A series lacks the HDMI 2.1 features and also has a weaker CPU powering it. These are cheaper OLEDs, but they don't perform nearly as well. I am talking about the C series OLEDs here; these have full HDMI 2.1 and better processing. So if this post has anybody considering OLED TVs for their monitors, I want you guys to be aware of these differences!**

    Anyway, it turns out the A4000 would not fit into those OLED plans. I have to admit this really bums me out, because I thought the A4000 would be a great fit for me. If I didn't play games, it could still work, but alas, I do play games.

    So I am kind of torn. I do need the extra VRAM. I think I may go after the A4000 anyway and buy a gaming GPU with HDMI 2.1 down the road for when I do get the OLED and use the GPUs together.

    BTW, since the A4000 is a single slot card, it is possible to cram 4 of these babies into a board that would normally only fit 2 GPUs. Another plus is that 16GB is a more proper amount of VRAM for a computer with 64GB of RAM. The reason I bring this up is that a pair of A4000's should be able to beat the 3090 or A6000, while using the SAME amount of space. The A4000's would use less power than the 3090 as well. And to top it off a pair of A4000's would even cost less than the 3090's current street price. This makes the A4000 a very compelling product, if 16GB is a good fit. That is because the A4000 sadly does NOT support Nvlink, which I find rather disgusting to be honest. There are only 3 Ampere GPUs that support Nvlink: the 3090, A6000 and A5000. That is kind of messed up. Nvidia has been ridiculously stingy with VRAM this generation, with the 3060 being the only exception.

     

  • nonesuch00 Posts: 18,120

    My TV broke last month after 7 years (it is a Sceptre brand, for those that keep count on measures of quality and durability) and I want to know if the 4K OLED TVs are worth the $1500+ more than the regular 4K LED TVs. I know the contrast is more accurate and the peak nits are not as high on OLED vs LED, but that is only words in an article; what does the difference look like in person?

  • As my time allows, am checking this and that. Thanks RayDAnt for the photo of your Tower 900 & watercooling setup. Been revisiting the info I chose then, and will write back when I have things more tidy.

  • outrider42 Posts: 3,679

    nonesuch00 said:

    My TV broke last month after 7 years (it is a Sceptre brand, for those that keep count on measures of quality and durability) and I want to know if the 4K OLED TVs are worth the $1500+ more than the regular 4K LED TVs. I know the contrast is more accurate and the peak nits are not as high on OLED vs LED, but that is only words in an article; what does the difference look like in person?

    We probably shouldn't discuss the screen tech too much in this thread. I mainly wanted to point out the A series' lack of HDMI ports. However, I am a fan of OLED, I have seen many, many screens over the years and the OLEDs always stand out to me. The perfect blacks really do it for me, and while the best LEDs have come close, they still are not there, and all of the things they do to try and control LED backlights only add to the complexity and thus potential problems that such screens can have. OLED isn't perfect though, and there is a possibility of burn in depending on the content you have on screen. There is no such thing as a perfect display. All I can say is that if you have a local big screen store around with good demonstrations, check them out. I'm not talking about Best Buy or Costco, because the store lighting is just too bright to properly show how these things look. You can still look at them this way, though, because they will have OLED and probably QLED near side by side. My Costco has OLEDs by the front door so you can't miss them. They do this because they know the picture is eye catching.

    I can point you to the Hardware Unboxed review of the LG 48" C1 model. This is a gaming focused review, but it covers a lot of ground and shows calibrated versus non calibrated results. It also compares performance directly to other gaming monitors. Since it is about gaming, you will not find content production monitors discussed very much.

    And keep in mind you need a GPU with HDMI to use these as a monitor, and specifically HDMI 2.1 to fully support what the screen can do, many of which are gaming features. Like my 1080ti does not have 2.1, and so I would not be able to use Gsync on these TVs.

  • nonesuch00 Posts: 18,120

    outrider42 said:

    nonesuch00 said:

    My TV broke last month after 7 years (it is a Sceptre brand, for those that keep count on measures of quality and durability) and I want to know if the 4K OLED TVs are worth the $1500+ more than the regular 4K LED TVs. I know the contrast is more accurate and the peak nits are not as high on OLED vs LED, but that is only words in an article; what does the difference look like in person?

    We probably shouldn't discuss the screen tech too much in this thread. I mainly wanted to point out the A series' lack of HDMI ports. However, I am a fan of OLED, I have seen many, many screens over the years and the OLEDs always stand out to me. The perfect blacks really do it for me, and while the best LEDs have come close, they still are not there, and all of the things they do to try and control LED backlights only add to the complexity and thus potential problems that such screens can have. OLED isn't perfect though, and there is a possibility of burn in depending on the content you have on screen. There is no such thing as a perfect display. All I can say is that if you have a local big screen store around with good demonstrations, check them out. I'm not talking about Best Buy or Costco, because the store lighting is just too bright to properly show how these things look. You can still look at them this way, though, because they will have OLED and probably QLED near side by side. My Costco has OLEDs by the front door so you can't miss them. They do this because they know the picture is eye catching.

    I can point you to the Hardware Unboxed review of the LG 48" C1 model. This is a gaming focused review, but it covers a lot of ground and shows calibrated versus non calibrated results. It also compares performance directly to other gaming monitors. Since it is about gaming, you will not find content production monitors discussed very much.

    And keep in mind you need a GPU with HDMI to use these as a monitor, and specifically HDMI 2.1 to fully support what the screen can do, many of which are gaming features. Like my 1080ti does not have 2.1, and so I would not be able to use Gsync on these TVs.

    Cool, thanks. So I need to go to a big city with a Costco; the local Walmart will not have OLEDs, I don't think. I am mainly interested in the difference between QLED and OLED. I came within a hair of buying an OLED 4K laptop recently but decided at such prices I would wait and buy a 4K OLED laptop with an RTX 4000 series Ada Lovelace GPU in another year and a half instead. Which is just as well, as all this stuff adds up to a lot of money.

  • outrider42 Posts: 3,679

    So I finally hit the lotto with Best Buy and bagged a Founder's 3090 at MSRP. It only took about 9 months of trying with the Hot Stock app. I might have been able to get one about 2 months ago; I had the 3090 in my cart and it was in the process, but I had trouble with my Best Buy account login that I needed to get sorted. That was agitating to say the least. But I finally scored. The Founder's is well made, at least, so I am not concerned about that. I plan on undervolting it.

    Anyway, my first test with the bench scene went very well at stock settings. Considering this Founder's edition is not gassing up 3x 8 pin connectors it handles very well.

    Windows 10  20H2

    Ryzen 5800X

    64GB RAM

    Asset Drive Samsung 4TB 870 EVO

    OS Drive 2TB M.2 Inland Platinum

    Daz 4.15.1.72

    Driver 496.13

    2021-10-24 20:28:30.420 Total Rendering Time: 1 minutes 38.77 seconds

    2021-10-24 20:29:05.412 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 4.947s init, 91.729s render

    I still have one of my 1080ti's installed. So here is a unique test, 3090 plus a 1080ti.

    2021-10-24 20:47:30.055 Total Rendering Time: 1 minutes 19.7 seconds

    2021-10-24 20:47:43.123 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1469 iterations, 1.399s init, 75.498s render

    2021-10-24 20:47:43.123 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (NVIDIA GeForce GTX 1080 Ti): 331 iterations, 1.628s init, 75.867s render

    The 1080ti helped, but not a ton as it only gained 19 seconds, LOL. But that is just the gulf between these two cards. It is pretty staggering just how wide the gap is between two generational flagships.

    I plan on doing a number of tests comparing these two cards more in depth. Like I talk about in some previous posts, I want to try to run some tests that demonstrate pure geometry, pure shading, and a combination that might be like a "typical Daz scene" (of course there is no such thing, but I'll try). One question I want to answer is how this benchmark scene compares to so called real world performance. I will make a separate thread for that.
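
    A quick sanity check on those two log excerpts (back-of-the-envelope arithmetic from the numbers above): solo, the 3090 ran 1800 / 91.729 ≈ 19.6 iterations per second. In the combined run it held almost the same pace (1469 / 75.498 ≈ 19.5) while the 1080 Ti added 331 / 75.867 ≈ 4.4, for roughly 23.8 iterations per second in total. The 1080 Ti therefore adds only about 20% more throughput on top of the 3090, which is why the total time only drops by about 19 seconds.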

  • skyeshots Posts: 148

    outrider42 said:

    So I finally hit the lotto with Best Buy and bagged a Founder's 3090 at MSRP. It only took about 9 months of trying with the Hot Stock app. I might have been able to get one about 2 months ago; I had the 3090 in my cart and it was in the process, but I had trouble with my Best Buy account login that I needed to get sorted. That was agitating to say the least. But I finally scored. The Founder's is well made, at least, so I am not concerned about that. I plan on undervolting it.

    Anyway, my first test with the bench scene went very well at stock settings. Considering this Founder's edition is not gassing up 3x 8 pin connectors it handles very well.

    Windows 10  20H2

    Ryzen 5800X

    64GB RAM

    Asset Drive Samsung 4TB 870 EVO

    OS Drive 2TB M.2 Inland Platinum

    Daz 4.15.1.72

    Driver 496.13

    2021-10-24 20:28:30.420 Total Rendering Time: 1 minutes 38.77 seconds

    2021-10-24 20:29:05.412 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 4.947s init, 91.729s render

    I still have one of my 1080ti's installed. So here is a unique test, 3090 plus a 1080ti.

    2021-10-24 20:47:30.055 Total Rendering Time: 1 minutes 19.7 seconds

    2021-10-24 20:47:43.123 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1469 iterations, 1.399s init, 75.498s render

    2021-10-24 20:47:43.123 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (NVIDIA GeForce GTX 1080 Ti): 331 iterations, 1.628s init, 75.867s render

    The 1080ti helped, but not a ton as it only gained 19 seconds, LOL. But that is just the gulf between these two cards. It is pretty staggering just how wide the gap is between two generational flagships.

    I plan on doing a number of tests comparing these two cards more in depth. Like I talk about in some previous posts, I want to try to run some tests that demonstrate pure geometry, pure shading, and a combination that might be like a "typical Daz scene" (of course there is no such thing, but I'll try). One question I want to answer is how this benchmark scene compares to so called real world performance. I will make a separate thread for that.

    Outrider, I am so happy for you to finally score one of these cards, and at a great price! Ampere really makes creating art on the PC so much more enjoyable. You move so much faster from concept to final render. 

    I'm also glad to see RayDAnt break into the new pro-grade lineup. Those are awesome improvements for both of you guys. I'm sure you both can feel the difference when you are in Daz as well - especially when you switch to the IRAY preview mode.

    Using the Quad A6000 has been nice so far, especially for IRAY preview mode. I really do like it. My next curiosity has moved over to the A100 lineup though. What do you guys think would happen in terms of performance in Daz with a card like the A100 (PCIE/40 GB) versus the A6000/A5000 class cards? From benchmarks I have seen, they barely hover over the A6000. But then again, I have not seen an IRAY render speed comparison. Quick math tells me that the A100 should crank out far more iterations per watt, with substantially lower BTU issues, but may fail to impress in terms of render speeds. Watching my VRAM consumption, I think 40 GB is more than enough for most scenes as I have yet to overflow the 48 GB VRAM buffers on the A6000s. The extreme high speed VRAM on the A100 also has me curious about the 'real-time' performance, as that is where most of my time is spent - actually working in Daz.

    Also, do you think these would work OK opposite a few A6000s? Would they mix well in the same system?

  • outrider42 Posts: 3,679
    edited October 2021

    Thanks. I really do feel like I won something after struggling to find one at MSRP for so long. I was worried the whole time something would go wrong, like they gave the card away before I could pick it up, or the card would be defective. So it wasn't until I plugged it in and powered on my PC that I finally felt like celebrating. It does make a difference. I also like the Nvidia cooler. I saw reviews of it, but I was still surprised at how quiet and cool it runs considering it uses 350 Watts. This is an extra boost for rendering, because it stays at a high boost clock for longer. My time on the benchmark is one of the faster 3090 times, and I did no overclocking at all. I did use a custom fan curve (which I always have on), but that was all I used for the test.

    The A100 is a totally different beast. This card is aimed at the pure compute market. In fact it has no ray tracing cores at all. So while this card packs a massive amount of CUDA shaders and Tensor cores, it has no hardware to accelerate ray tracing. This card is also passively cooled because it is intended for server racks.

    While there are no Iray benchmarks, Octane does have the A100 listed on its benchmark chart. Octane is not Iray, but results for Octane often fall in line with Iray; that is, if a card is faster in Octane it is most likely faster in Iray. So if these numbers in Octane are correct, the lack of ray tracing cores seriously hurts the A100, as it ranks far below the 3090 or A6000; indeed the 3080 even scores higher than the A100. So with less VRAM and only passive cooling I would avoid the A100.

    This is copy and pasted from the Octane benchmark. These are average scores; direct links showing every test are included. 3090s are scoring 669, A6000s are scoring 628. However the A100 is "only" scoring 505. The A5000 and 3080 both post higher scores. You want those ray tracing cores.

    1x NVIDIA GeForce RTX 3090 (37 results): 669
    1x NVIDIA GeForce RTX 3080 Ti (51 results): 667
    1x RTX 3090 (825 results): 654
    1x RTX 3080 Ti (10 results): 650
    1x RTX A6000 (2 results): 628
    1x RTX A5000 (13 results): 593
    1x RTX 3080 (455 results): 549
    1x NVIDIA GeForce RTX 3080 (15 results): 542
    1x A100-SXM4-40GB (1 result): 505
    1x A100-PCIE-40GB (3 results): 498

     

  • chrislb Posts: 100

    outrider42 said:

    1x NVIDIA GeForce RTX 3090 (37 results): 669

    The two 3090s scoring 768 (767.90) and 756 in the Octane benchmark linked there are two of my water-cooled 3090 cards. Octane responds much differently to a mild overclock and raised power limit than Daz does.

  • chrislb Posts: 100

    I decided to test my 5950X alone without using the GPU. I've been testing Hydra, which is made by the same person who made Clock Tuner for Ryzen. It's software to optimize the overclocking of AMD 5000 series (Zen 3) CPUs beyond what PBO or Ryzen Master does.

     

    System Configuration

    System/Motherboard: MSI MEG ACE x570

    CPU: AMD Ryzen R9 5950X overclocked with Hydra 1.0C Pro

    GPU: EVGA RTX 3090 Kingpin Hybrid

    System Memory: 64 GB of DDR4 3600 MHz G.Skill Trident Z Neo CAS 16

    OS Drive: 1TB Sabrent Rocket NVMe 4.0 SB-ROCKET-NVMe4-1TB

    Asset Drive: XPG SX 8100 4TB NVMe SSD

    Operating System: Windows 10 Pro build 19043.1288

    Nvidia Drivers Version: N/A CPU test

    Daz Studio Version: 4.15.0.30

     

    Benchmark Results 5950X CPU only:

    2021-10-31 22:07:46.730 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 1192.266s.

    2021-10-31 22:07:46.731 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-10-31 22:07:47.306 Finished Rendering

    2021-10-31 22:07:47.340 Total Rendering Time: 19 minutes 55.16 seconds

    2021-10-31 22:07:50.487 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-10-31 22:07:50.487 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: 1800 iterations, 1.206s init, 1191.052s render

    Iteration Rate: (1800 / 1191.052 seconds) = 1.511 iterations per second

    Loading Time: ((1195.16 seconds) - 1191.052) = 4.108 seconds
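
    For scale against the GPU results earlier in the thread (back-of-the-envelope arithmetic from numbers already posted here): 1800 / 1191.052 ≈ 1.51 iterations per second for the 5950X versus roughly 19.4-19.6 for a single RTX 3090, so CPU-only rendering of this scene takes about 13 times as long.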

  • System Configuration

    System/Motherboard: CyberPowerPC Gaming, ASUS Prime X570-Pro

    GPU: Gigabyte GeForce RTX 3080 Ti @ stock

    System Memory: T-Force VulcanZ 32 GB @ 3200

    OS Drive: 1 TB SSD

    Asset Drive: Same

    Operating System: Windows 10 Pro 21H1 19043.1320 64-bit

    Nvidia Drivers Version: 472.39

    Daz Studio Version: 4.15.0.30 Pro 64-bit

    Optix Prime Acceleration: n/a

     

    Benchmark Results

    DAZ_STATS

    2021-11-02 19:24:57.814 Finished Rendering

    2021-11-02 19:24:57.843 Total Rendering Time: 1 minutes 35.57 seconds

    IRAY_STATS

    2021-11-02 19:25:33.024 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-11-02 19:25:33.025 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3080 Ti): 1800 iterations, 1.587s init, 91.901s render

    Iteration Rate: 19.586 iterations per second

    Loading Time:    3.669 seconds

  • prixat Posts: 1,588

    Has anyone got this with the test scene (revision 3) and this week's beta 4-15-1-91?

    The sphere emits less than 10% of the light and produces a moire pattern in all the shadows!

    [Attached renders: 4-15-1-91-beta.png and 4-15-0-30.png, 900 x 900]
  • RayDAnt Posts: 1,134

    prixat said:

    Has anyone got this with the test scene (revision 3) and this week's beta 4-15-1-91?

    The sphere emits less than 10% of the light and produces a moire pattern in all the shadows!

    According to the official changelog, 4.15.1.087 breaks backwards compatibility with previously imported MDL shader code, which is something this benchmarking scene contains. I'll have to see if I can re-import those bits to get it working as expected again.

    Sure wish Daz's developers would resume updating the various thread titles/etc they themselves set up to help users keep track of when new releases are put out...

  • RayDAnt Posts: 1,134

    @prixat Are you seeing any significant changes to render rates of the benchmarking scene in the latest vs the previous beta release (before the visual changes appeared)?

  • prixat Posts: 1,588
    I get a significant slowdown... but the notes say that's to be expected. This revision of Iray adds about 15 seconds to the previous Iray's 210s render time (rtx 3060).
  • RayDAnt Posts: 1,134

    prixat said:

    I get a significant slowdown... but the notes say that's to be expected. This revision of Iray adds about 15 seconds to the previous Iray's 210s render time (rtx 3060).

    My RTX A5000 is seeing about a 10% slowdown - which sounds about in line with what you're seeing (proportionally speaking.) I think the thing to do for now is to leave the benchmarking scene as is, since the whole point of it is to present a consistent computational load to judge performance on different systems/software versions over an extended period of time, not necessarily to present a pleasing final image (although that is always a plus imo.)

  • prixat Posts: 1,588

    Reading posts in the Beta thread, it looks like Iray changed how it puts Emission through the Cutout Opacity (or something like that).

     

    Inverting the Cutout Opacity on the 'Sphere Light Source' from 0.07 to 0.93 (i.e., to 1 - 0.07) returns the light levels to those of previous versions of Iray.

  • outrider42 said:

    Thanks. I really do feel like I won something after struggling to find one at MSRP for so long. I was worried the whole time something would go wrong, like they gave the card away before I could pick it up, or the card would be defective. So it wasn't until I plugged it in and powered on my PC that I finally felt like celebrating. It does make a difference. I also like the Nvidia cooler. I saw reviews of it, but I was still surprised at how quiet and cool it runs considering it uses 350 Watts. This is an extra boost for rendering, because it stays at a high boost clock for longer. My time on the benchmark is one of the faster 3090 times, and I did no overclocking at all. I did use a custom fan curve (which I always have on), but that was all I used for the test.

    The A100 is a totally different beast. This card is aimed at the pure compute market. In fact it has no ray tracing cores at all. So while this card packs a massive amount of CUDA shaders and Tensor cores, it has no hardware to accelerate ray tracing. This card is also passively cooled because it is intended for server racks.

    While there are no Iray benchmarks, Octane does have the A100 listed on its benchmark chart. Octane is not Iray, but results for Octane often fall in line with Iray; that is, if a card is faster in Octane it is most likely faster in Iray. So if these numbers in Octane are correct, the lack of ray tracing cores seriously hurts the A100, as it ranks far below the 3090 or A6000; indeed the 3080 even scores higher than the A100. So with less VRAM and only passive cooling I would avoid the A100.

    This is copy and pasted from the Octane benchmark. These are average scores; direct links showing every test are included. 3090s are scoring 669, A6000s are scoring 628. However the A100 is "only" scoring 505. The A5000 and 3080 both post higher scores. You want those ray tracing cores.

    Thanks for referring me back to the OTOY bench. Something funny about that benchmark is that when the card launched (before the Ampere 30 series hit the market) the A100 took the top spot in the world for the OTOY bench. The 3090 and its cousins quickly took the crown though after they were released. This is something that kept me from making the leap previously. After reading volumes on the A100, it really is a tensor-laden device, purpose-built for AI and deep learning. If I were currently learning AI or building molecular models, I might be able to justify the cost of admission to bring the card into our IRAY benchmarks here. I believe that it would do better than it did in OTOY relative to the other cards, but still far below much cheaper cards. Looking at other benchmarks for the A100, it shines with code that is built for the tensor cores. This is where it crushes the rest of the Ampere lineup.

    Anyone that happens to hit this page with an A100 (any version) please feel free to post a score.

  • And at a considerably smaller power budget too - 230 vs. 280 watts. Which might not sound like all that much of a difference by itself. But when you're talking about having two or more of these in a system together... that can easily be the difference between needing a new power supply or not (my 750 watt EVGA G2 is still going strong with this setup.)

    ETA: That's another thing I'm really appreciating about this A5000 right now: All that performance - just a single 8-pin power connector.

    Jumping back to this very important point. The cards and EPS cables are truly amazing in terms of efficiency. The A6000s use the same hookups as the A5000s. It makes cable management easier, with better airflow. When I ordered custom cables for my A6000 cards, they were also cheaper than 2 or 3 groups of PCIe cables.

    The real issue with the gaming cards, though, in my opinion, is the thermals - inside and outside the case. Even with the full water loop on my triple 3090 rig, the heat into the room is virtually identical to the heat from the quad A6000 setup. I know this from the power management software in their respective Corsair PSUs. That is 20%+ wasted energy as heat. And you pay for that heat twice: once to create it and then again to cool it with your AC.

    I will throw together a quad A5000 system later this week for some comparisons.
