Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • outrider42 Posts: 3,679
    edited January 2021
    I've always wanted to know just what Iray's texture format is, and whether it compresses all textures. As an extreme example, a jpg can be just 50 KB in size while still being 4K. Does Iray compress this even more? There has to be some kind of curve to this format.

    I know some textures that are very compressed to start with will "bleed" in Iray. This is fixed by adjusting the texture compression setting to a higher number. So perhaps Iray compresses everything by a fixed amount.

    Also, Daz may have altered the equation a bit with 4.14. The speed enhancement observed in 4.14 is related to this vague statement:

    Made use of bump and normal maps together with the NVIDIA Iray renderer more efficient

    "Efficient" can mean different things, including how compression is handled. If I recall correctly I did notice a small increase in VRAM in the same scenes when using 4.14 over previous versions. This only effects normal and bump maps, though, not all maps. If a scene happens to have no normal maps they will not get any speed boost in 4.14 over past versions. So Daz HD character presets that do not load normals wont see a difference in 4.14. I have tested this aspect.

    It is also important to note that while Iray compresses textures going to VRAM, the scene is not compressed in system RAM. So texture sizes will impact how much system RAM a scene uses while you are working in Daz Studio.
  • skyeshots Posts: 148
    edited January 2021

    Running a test with an old 3570K and a 3090..

  • skyeshots Posts: 148

    System/Motherboard: Intel DQ77MK
    CPU: Intel i5-3570K at 3.4 Ghz (stock)
    GPU: MSI RTX 3090 
    System Memory: 16 GB Corsair Dominator Platinum DDR3-2133/XMP
    OS Drive: Intel SATA6 SSD 
    Asset Drive: Crucial SATA6 SSD
    Operating System: Win 10 Pro, 1909
    Nvidia Drivers Version: 460.89 
    Daz Studio Version: 4.15.02

    2021-01-30 23:04:51.334 Saved image: D:\Daz Render Exports\Test 3570 Off with 3090.jpg
    2021-01-30 23:04:51.348 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-01-30 23:04:51.353 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 5.888s init, 92.836s render
    2021-01-30 23:04:29.831 Finished Rendering
    2021-01-30 23:04:29.873 Total Rendering Time: 1 minutes 42.14 seconds
    Loading Time: 9.304 Seconds 
    Device Iteration Rate: 19.3890 iterations per second

    The circa-2012 Intel i5-3570K with an RTX 3090 scored as shown above. With the CPU enabled, the RTX 3090 dropped to 15.3198 iterations per second, further supporting the advice to disable the CPU during rendering. Loading times are substantially longer than in Daz 4.14, but the GPU iteration rate is again higher here in Daz 4.15.

    This should make it clear to anyone with an aging machine, though: if you have an adequate power supply, you can drop in one of these 30-series cards and get great results. You can upgrade the rest of the system later, when Intel finally catches up.
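
    For anyone who wants to double-check the derived numbers in these posts, here is a minimal sketch of the arithmetic used throughout this thread (the pasted log lines and names are just illustrative, taken from the result above):

        import re

        # Paste the "Total Rendering Time" and "Device statistics" lines from your Daz Studio log.
        log = """
        2021-01-30 23:04:29.873 Total Rendering Time: 1 minutes 42.14 seconds
        2021-01-30 23:04:51.353 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 5.888s init, 92.836s render
        """

        # Total wall-clock time reported by Daz Studio (assumes the render took under an hour).
        total = re.search(r"Total Rendering Time: (\d+) minutes ([\d.]+) seconds", log)
        total_seconds = int(total.group(1)) * 60 + float(total.group(2))

        # Iteration count and pure render time from the Iray device statistics line.
        stats = re.search(r"(\d+) iterations, [\d.]+s init, ([\d.]+)s render", log)
        iterations, render_seconds = int(stats.group(1)), float(stats.group(2))

        print(f"Device iteration rate: {iterations / render_seconds:.4f} iterations per second")  # ~19.389
        print(f"Loading time: {total_seconds - render_seconds:.3f} seconds")                      # ~9.304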

  • outrider42 Posts: 3,679

    Yeah, I have tried to point that out often. You only need a new CPU if you use applications that really need a better one. Daz itself will run fine on most PCs regardless of CPU. For Iray the GPU is truly king, to the point where you can have PCs that make no sense to most people but will be fine for Daz and Iray. If you told just about any PC builder that you were going to use a 3090 with a 3570K, they would probably look at you like you are stupid. Some might even say that to your face, and laugh about the idea.

    But Daz Studio and Iray are not typical software. This isn't a video game. The idea of building a "balanced PC" is frankly a waste of money for this software. You can instead put the bulk of your budget into the GPU and render faster than the person who "balanced" their PC. And you may have spent less money in the process.

    If somebody buys an AMD 5950X and an RTX 3080...assuming they got close to MSRP, those two parts alone would cost roughly the price of a 3090. You could instead keep the CPU you already own and put all of that money into a 3090.

  • colcurve Posts: 152

    Which RTX is the most efficient (price/iterations) with regard to Iray currently? Do Octane benchmarks give similar results to Iray benchmarks? OctaneBench seems to recommend the 3080.

  • skyeshots Posts: 148

    colcurve said:

    Which RTX is the most efficient (price/iterations) with regard to Iray currently? Do Octane benchmarks give similar results to Iray benchmarks? OctaneBench seems to recommend the 3080.

    RTX 3060 Ti is (currently) the most efficient in terms of price per iteration for Iray as well as Octane.

  • System/Motherboard: Asus TUF X570 Pro Gaming Wifi
    CPU: AMD Ryzen 7 5800X (PBO)
    GPU: RTX 3090 Founder's Edition (Stock)
    System Memory: 32GB G. Skill Trident Z Neo (3600 MHz)
    OS Drive: Crucial P2 1TB
    Asset Drive: WD WD100EMAZ
    Operating System: Win 10 Pro, 20H2
    Nvidia Drivers Version: 461.40 
    Daz Studio Version: 4.15.02

    2021-01-31 13:38:22.482 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-01-31 13:38:22.954 Saved image: C:\Users\[...]\AppData\Roaming\DAZ 3D\Studio4\temp\render\r.png

    2021-01-31 13:38:22.963 Finished Rendering

    2021-01-31 13:38:22.992 Total Rendering Time: 1 minutes 34.67 seconds

    2021-01-31 13:38:23.008 Loaded image r.png

    2021-01-31 13:38:23.037 Saved image: C:\Users\[...]\AppData\Roaming\DAZ 3D\Studio4\temp\RenderAlbumTmp\Render 1.jpg

    2021-01-31 13:38:39.112 Saved image: C:\Users\[.....]\New folder\Benchnew.png

    2021-01-31 13:38:39.582 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-01-31 13:38:39.582 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 2.043s init, 90.617s render

    Device Iteration Rate: (1800/90.617) = 19.86

    Thought I'd re-bench with the new setup and the new DAZ version/driver. I could probably get higher if I OC'd the GPU; it may also have been thermally throttling, as it was hot from before I started the bench. Definitely digging the new speed updates though. I know when I first benchmarked this scene back in the fall, other 3090 owners had faster speeds than mine (probably more factory OC), so I wouldn't be surprised if there are cards out there easily pushing 20+ iterations per second on this scene. Also, the initialization time definitely seems to be down compared to before -- I wonder if that's due to the PCIe Gen 4 bandwidth.

     

  • outrider42 Posts: 3,679
    Remember that rendering speed is just one factor. VRAM capacity is a major factor, too. You could have the fastest GPU ever, but if you build a scene that is too large for its VRAM, that GPU will do NOTHING. The 3080 has 10GB, which is a step up from the 8GB of past x80 releases. But it is agitating that Nvidia didn't give it more. At least not yet.

    The upcoming 3060 is going to be the value champion for Iray users. The 3060 will have 12GB and an MSRP of $329. So half the price of a 3080 while actually offering 2 additional GB of VRAM. No x60-class card has ever done this. Of course the 3060 will be slower. It has less than half the CUDA cores of the 3080, but it is possible to buy two 3060s, LOL. Iray can do that.

    But the real issue is if you can actually buy any of these cards near the MSRP...or if you can find one at all. The market is pure chaos right now. You have to get lucky to get any GPUs for a reasonable price today.
  • skyeshots said:

    colcurve said:

    Which RTX is the most efficient (price/iterations) with regard to Iray currently? Do Octane benchmarks give similar results to Iray benchmarks? OctaneBench seems to recommend the 3080.

    RTX 3060 Ti is (currently) the most efficient in terms of price per iteration for Iray as well as Octane.

    I think you're right, but I'm not sure it's by much. Some quick napkin math: adjusting for speed gains in 4.15 (from 4.12), I found that my 3090 went from 13.549 to 19.86, an increase of 47%. Assuming this holds true for the 3080 benchmarked before 4.14/4.15 as well, that would give a new speed of 17.589 iterations per second; at the (old?) MSRP of $699, that's .025 iterations/dollar. The 3060 Ti on 4.14 got 10.648 iterations per second, which gives .0266 iterations/dollar. At that point it's fairly similar, and you're also getting an extra 2GB of VRAM, which at these low amounts makes a big difference. Obviously we'd need a 3080 user to re-bench to be sure of these numbers, but personally I would go for a 3080 because 8GB is too small for many scenes, and even 10GB isn't ideal.
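
    If anyone wants to redo this napkin math with their own numbers, here is a tiny sketch of the same calculation (the iteration rates are the figures quoted in this post, and the MSRPs are assumed launch prices - swap in whatever the cards actually cost you):

        # Price efficiency: iterations per second per dollar, using assumed MSRPs.
        cards = {
            # name: (iterations per second on this benchmark scene, assumed price in USD)
            "RTX 3080 (estimated on 4.15)": (17.589, 699),
            "RTX 3060 Ti (on 4.14)":        (10.648, 399),
            "RTX 3090 (on 4.15)":           (19.860, 1499),
        }

        for name, (rate, price) in cards.items():
            print(f"{name}: {rate / price:.4f} iterations/sec per dollar")
        # Roughly 0.0252 for the 3080, 0.0267 for the 3060 Ti, 0.0132 for the 3090.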

    Personally, I can't wait to see how the 3060 12GB turns out. That could be a sweet spot for Daz: a comfortable, though not huge, amount of VRAM at a low price of around $300, with still likely acceptable iteration rates (8 per second, maybe?). I know I will try to get one just to see.

  • testing on new rig, GPU rendering only. 

    CPU: Intel 10900KF 4.4ghz
    GPU: RTX 3090 TUF, slightly OCed
    System Memory: 32GB 
    OS Drive: nvm 1T SSD
    Asset Drive: WD WD100EMAZ
    Operating System: Win 10 Pro
    Nvidia Drivers Version: 461.40 
    Daz Studio Version: 4.15

     

    2021-02-01 20:22:56.189 Total Rendering Time: 1 minutes 35.68 seconds

    2021-02-01 20:22:56.205 Loaded image r.pngg

    2021-02-01 20:23:51.516 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-01 20:23:51.516 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.509s init, 91.921s render

    Day and night difference compared to the RTX 2060 on the old rig.

  • windli3356 said:

    testing on new rig, GPU rendering only. 

    CPU: Intel 10900KF 4.4ghz
    GPU: RTX 3090 TUF, slightly OCed
    System Memory: 32GB 
    OS Drive: nvm 1T SSD
    Asset Drive: WD WD100EMAZ
    Operating System: Win 10 Pro
    Nvidia Drivers Version: 461.40 
    Daz Studio Version: 4.15

     

    2021-02-01 20:22:56.189 Total Rendering Time: 1 minutes 35.68 seconds

    2021-02-01 20:22:56.205 Loaded image r.pngg

    2021-02-01 20:23:51.516 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-01 20:23:51.516 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.509s init, 91.921s render

    Day and night difference compared to the RTX 2060 on the old rig.

    Grats on the new rig!

    If my math is right, you have 3.759 seconds load time with 19.58 iterations per second. That is a huge step up from a 2060.

  • skyeshots said:

    windli3356 said:

    testing on new rig, GPU rendering only. 

    CPU: Intel 10900KF 4.4ghz
    GPU: RTX 3090 TUF, slightly OCed
    System Memory: 32GB 
    OS Drive: nvm 1T SSD
    Asset Drive: WD WD100EMAZ
    Operating System: Win 10 Pro
    Nvidia Drivers Version: 461.40 
    Daz Studio Version: 4.15

     

    2021-02-01 20:22:56.189 Total Rendering Time: 1 minutes 35.68 seconds

    2021-02-01 20:22:56.205 Loaded image r.pngg

    2021-02-01 20:23:51.516 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-01 20:23:51.516 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.509s init, 91.921s render

    Day and night difference compared to the RTX 2060 on the old rig.

    Grats on the new rig!

    If my math is right, you have 3.759 seconds load time with 19.58 iterations per second. That is a huge step up from a 2060.

    Thanks a lot :D Yes, the rendering speed is a nice upgrade, but what's more important is the scene capacity. The way Daz's Iray engine works is not very user friendly: you never know when you are about to run out of VRAM, and the debugging tools don't give you much useful info, so many new users get black-screen Iray failures and get very mad about it (I was one of those folks). My 2060 could hardly handle 3 Gen 8 characters with any high-poly environment and an HDRI; with this card, there doesn't seem to be a limit :D and I'm super happy about it.

  • windli3356 said:

    Thanks a lot :D Yes, the rendering speed is a nice upgrade, but what's more important is the scene capacity. The way Daz's Iray engine works is not very user friendly: you never know when you are about to run out of VRAM, and the debugging tools don't give you much useful info, so many new users get black-screen Iray failures and get very mad about it (I was one of those folks). My 2060 could hardly handle 3 Gen 8 characters with any high-poly environment and an HDRI; with this card, there doesn't seem to be a limit :D and I'm super happy about it.

    Daz3d falls flat in terms of VRAM tracking. This is because add-ons like Iray and OptiX are relatively new to the application. You might try GPU-Z, or Open Hardware Monitor, the latter of which can provide logs. Since DAZ Studio has grown so GPU dependent in recent years, a built-in VRAM usage meter would be nice, even if very basic. Users could then balance character and poly counts/mesh complexities against texture priorities on the fly and from within the app.

  • skyeshots said:

    windli3356 said:

    Thanks a lot :D Yes, the rendering speed is a nice upgrade, but what's more important is the scene capacity. The way Daz's Iray engine works is not very user friendly: you never know when you are about to run out of VRAM, and the debugging tools don't give you much useful info, so many new users get black-screen Iray failures and get very mad about it (I was one of those folks). My 2060 could hardly handle 3 Gen 8 characters with any high-poly environment and an HDRI; with this card, there doesn't seem to be a limit :D and I'm super happy about it.

    Daz3d falls flat in terms of VRAM tracking. This is because add-ons like Iray and OptiX are relatively new to the application. You might try GPU-Z, or Open Hardware Monitor, the latter of which can provide logs. Since DAZ Studio has grown so GPU dependent in recent years, a built-in VRAM usage meter would be nice, even if very basic. Users could then balance character and poly counts/mesh complexities against texture priorities on the fly and from within the app.

    Iray and Octane are self-contained - DS would have no way of pre-calculating how much memory they would need for a particular scene, if that is what you are asking for.

  • ebergerly Posts: 3,255
    edited February 2021

    skyeshots said:

     

    Daz3d falls flat in terms of VRAM tracking. This is because add-ons like Iray and OptiX are relatively new to the application. You might try GPU-Z, or Open Hardware Monitor, the latter of which can provide logs. Since DAZ Studio has grown so GPU dependent in recent years, a built-in VRAM usage meter would be nice, even if very basic. Users could then balance character and poly counts/mesh complexities against texture priorities on the fly and from within the app.

    FYI, GPU-Z does provide logging, and you can use those log files to draw some wonderful graphs in many apps.

    Also, Windows 10 Task Manager provides a HUGE amount of useful GPU data, including

    • Dedicated GPU Memory usage by process/application,
    • System memory use by process/application (very useful in evaluating how your system/GPU's are performing during renders),
    • Realtime usage of each GPU engine (such as CUDA, video encoding, compute engines, etc.) so you can actually find out what is using your GPU VRAM (and not just assume it's DAZ/Iray),
    • GPU temperatures,
    • Overall GPU Dedicated Memory Usage,
    • and much more. 

    Much nicer and more valuable than other apps, IMO. And since it's the OS's job to monitor and assign hardware resources, it is arguably the most accurate, first-hand data.   
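
    If you want a log you can graph rather than just watching Task Manager, here is a rough sketch that polls the card once per second with the nvidia-ml-py (pynvml) Python bindings. This is an external script, not a Daz Studio feature, and it assumes GPU index 0 is the card doing the rendering:

        # Poll VRAM usage, core temperature and board power once per second while a render runs.
        # Requires the nvidia-ml-py package (pip install nvidia-ml-py). Stop it with Ctrl+C.
        import time
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the rendering GPU is device 0

        try:
            while True:
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)               # values in bytes
                temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
                watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0    # reported in milliwatts
                print(f"VRAM {mem.used / 1024**2:.0f} / {mem.total / 1024**2:.0f} MB, "
                      f"{temp} C, {watts:.0f} W")
                time.sleep(1)
        except KeyboardInterrupt:
            pass
        finally:
            pynvml.nvmlShutdown()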

  • PerttiA Posts: 10,024

    outrider42 said:

    I've always wanted to know just what Iray's texture format is, and whether it compresses all textures. As an extreme example, a jpg can be just 50 KB in size while still being 4K. Does Iray compress this even more? There has to be some kind of curve to this format.

     

    The 50 KB file size of the jpg is just the space the file takes on disk. Once the image is opened in whatever program, the compression factor is forgotten and the image reserves memory as uncompressed - a 4096x4096x24-bit image uses 48 MB of memory even if the jpg file is just 50 KB on disk (Width (px) x Height (px) x color depth (bits) / 8 (bits) / 1024^2 = MegaBytes).

    The small file size on disk is achieved with algorithms, which may for example use surrounding pixels to calculate the colors of other pixels, but when the image is opened, each and every pixel needs its own color, and this information reserves RAM. The ultimate case would be an all-black 4K image, which compresses to almost nothing but, when opened, reserves the same 48 MB that any other image with the same pixel size and color depth reserves.
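
    The same arithmetic as a throwaway function, for anyone who wants to plug in their own texture dimensions (the function name is just for illustration):

        def uncompressed_texture_mb(width_px, height_px, bits_per_pixel=24):
            """In-memory size of an image: width x height x color depth / 8 bytes, in MB."""
            return width_px * height_px * bits_per_pixel / 8 / 1024**2

        # A 24-bit 4096x4096 map is 48 MB in RAM no matter how small the jpg is on disk.
        print(uncompressed_texture_mb(4096, 4096))  # 48.0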

  • artphobe said:

    System Configuration
    System/Motherboard: Gigabyte B550M Aorus Pro
    CPU: Ryzen 5 5600x  (stock)
    GPU: EVGA GeForce RTX 3070 FTW3 Ultra 8GB GDDR6 2010 MHz (stock)
    System Memory: Crucial Ballistix Gaming Memory 16 GB (2 x 8 GB) DDR4 3600 MHz C16 (stock)
    OS Drive: Samsung 860 Evo 1TB
    Asset Drive: Samsung 850 Evo 256GB
    Operating System: Windows 10 Pro
    Nvidia Drivers Version: 460.89
    Daz Studio Version: 4.14.0.10 Pro Edition 64bit
    Optix Prime Acceleration: N/A

    Benchmark Results
    2021-01-26 00:02:41.630 Finished Rendering
    2021-01-26 00:02:41.658 Total Rendering Time: 2 minutes 37.61 seconds

    Edit : Will rerun and upload.

     

    Got a 3080 FTW3 Ultra this time.

    2021-02-06 18:15:05.671 Total Rendering Time: 2 minutes 1.43 seconds

    2021-02-06 18:15:07.509 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3080):      1800 iterations, 5.713s init, 113.654s render

     

  • System Configuration
    System/Motherboard: MSI X470 GAMING PLUS
    CPU: AMD Ryzen 7 2700X Processor (8x 3.7GHZ/20MB L3 Cache)
    GPU: ASUS ROG Strix GeForce RTX 3090
    System Memory: G.Skill Aegis 288-Pin DDR4 2666 (4x16GB)
    OS Drive: Samsung 970 EVO PLUS M.2 PCIe NVMe SSD
    Asset Drive: Same
    Operating System: Windows 10 Home version 20H2 build 19042.746
    Nvidia Drivers Version: 461.40 Studio 
    Daz Studio Version: 4.15.0.2
    Optix Prime Acceleration: N/A

    Benchmark Results
    2021-02-06 01:50:39.798 Finished Rendering
    2021-02-06 01:50:39.832 Total Rendering Time: 1 minutes 35.18 seconds
    2021-02-06 01:50:44.235 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-06 01:50:44.235 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090):    1800 iterations, 1.847s init, 91.063s render
    Iteration Rate: 19.76
    Loading Time: 4.11 seconds

  • Drip Posts: 1,191

    Been a while, and there have been some noticeable improvements to both Studio and the NVidia drivers, so I wanted to see how my 2070 was doing.

    System Configuration
    System/Motherboard: BRAND MODEL
    CPU: AMD Ryzen 5 2600x @ stock
    GPU: NVidia 2070 RTX @ stock (if left at defaults)
    System Memory: Corsair LPX 2x16GB @ default
    OS Drive: Samsung EVO 860 500GB
    Asset Drive: Seagate Barracuda 4TB @ 5400rpm
    Operating System: Win 10 Home
    Nvidia Drivers Version: 460.79
    Daz Studio Version: 4.15.0.2
    Optix Prime Acceleration: N/A (Daz Studio 4.12.1.086 or earlier only)

    Benchmark Results
    2021-02-07 17:09:11.377 Finished Rendering
    2021-02-07 17:09:11.416 Total Rendering Time: 5 minutes 56.70 seconds
    2021-02-07 17:09:22.659 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-07 17:09:22.659 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2070):      1800 iterations, 2.832s init, 350.362s render
    Iteration Rate: 5.1375 iterations per second
    Loading Time: 6.08 seconds

  • System/Motherboard: Gigabyte Aorus Master V1.2
    CPU: AMD Ryzen 9 5950X (No PBO)
    GPU: RTX 3090 MSI Suprim OC (Stock)
    System Memory: 64GB G. Skill Trident Z Neo (3600 MHz)
    OS Drive: Samsung 980 Pro NVMe 500GB
    Asset Drive: Samsung 970 Evo NVMe 2TB
    Operating System: Win 10 Pro, 20H2
    Nvidia Drivers Version: 461.40 
    Daz Studio Version: 4.15.02

    Benchmark Results

     

    2021-02-07 21:38:19.511 Finished Rendering

    2021-02-07 21:38:19.549 Total Rendering Time: 1 minutes 32.78 seconds

    2021-02-07 21:38:58.165 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-07 21:38:58.165 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.438s init, 89.257s render

     

  • chrislb Posts: 100

    I decided to see how much changing the GPU clock and VRAM clock improves results with the latest version of Daz and Iray. I increased the GPU clocks and VRAM clocks individually until the results stopped improving. The difference was 1.9525 iterations per second.

     

    System Configuration

    System/Motherboard: MSI MEG X570 ACE

    CPU: AMD R9 3950X @ Stock with PBO +200

    GPU: MSI Gaming X Trio RTX 3090 with MSI Suprim 450 watt BIOS

    System Memory: Corsair Vengeance RGB Pro 64 GB @ 3600 MHz CAS18

    OS Drive: 1TB Sabrent Rocket NVMe 4.0 SB-ROCKET-NVMe4-1TB

    Asset Drive: XPG SX 8100 4TB NVMe SSD

    Operating System: Windows 10 Pro 64 bit Build 19042.789

    Nvidia Drivers Version: 461.40

    Daz Studio Version: 4.15.02

     

    Benchmark Results

     

    MSI 3090 Gaming X Trio w/Suprim 450 Watt BIOS Stock Settings:

    2021-02-08 19:29:24.164 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 90.378s.

    2021-02-08 19:29:24.735 Finished Rendering

    2021-02-08 19:29:24.784 Total Rendering Time: 1 minutes 33.31 seconds

    2021-02-08 19:29:27.301 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-08 19:29:27.301 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.349s init, 89.471s render

     

    Iteration Rate: (1800 / 89.471) = 20.1182 iterations per second

    Loading Time: (93.31 - 89.471) = 3.839 seconds

     

    MSI 3090 Gaming X Trio w/Suprim 450 Watt BIOS +164 MHz GPU +1107 Memory 450W power limit Setting:

    2021-02-08 20:03:20.425 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 82.388s.

    2021-02-08 20:03:20.426 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-02-08 20:03:21.046 Total Rendering Time: 1 minutes 24.96 seconds

    2021-02-08 20:03:24.138 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-08 20:03:24.138 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.004s init, 81.556s render

     

    Iteration Rate: (1800 / 81.556) = 22.0707 iterations per second

    Loading Time: (84.96 - 81.556) = 3.404 seconds

  • chrislb Posts: 100
    edited February 2021

    outrider42 said:

    If you look at the A6000, you will see it actually uses LESS power than the 3090 even though it has more active cores and twice the VRAM. The max power draw on the A6000 caps at 300 Watts, which is high, but not as high as the 350 Watts of the 3090 Founder's Edition, and 3rd party vendors can push the 3090 to nearly 400 Watts. I suppose this could be another consideration in the A6000 versus the 3090.

    With the benchmark in this thread, it seems that most of the 3090 models with 450-500+ watt power limits in their BIOS will rarely hit 420 watts of power draw and often stay under 400 watts. Even when rendering other larger and more complex scenes in Daz, I've rarely seen 430 watts of power draw with those cards. I've also tried some of the XOC BIOS versions available for some of the cards, with basically no power limit, and even when using those to render 4K resolution scenes in Daz, the card rarely draws 430 watts. The same cards, while running the 3DMark DirectX Raytracing feature test, will draw 600-700 watts.

    Also, water cooling the 3090 seems to decrease its power draw.  With my 3090, the max power draw when rendering this benchmark scene was under 390 watts after water cooling the card.  When using the air cooler, the power draw regularly spiked past 420 watts with the same BIOS and power limit settings.  The peak temperature it saw was just above 40C while the peak temperature with the factory air cooler was over 70C.

  • chrislb said:

    I decided to see how much changing the GPU clock and VRAM clock improves results with the latest version of Daz and Iray. I increased the GPU clocks and VRAM clocks individually until the results stopped improving. The difference was 1.9525 iterations per second.

    Can you share the clocks for these scores?
  • outrider42 Posts: 3,679

    Water cooling can affect power draw. The reason cooling can be more efficient is due to how electrons work in silicon. Electron mobility in silicon will indeed decrease as the temperature increases, so this is a real thing. But 30 watts seems like a lot. Are you measuring the entire system? You may be seeing several things combining to add to the overall power draw when on air. Under air, the cooling system itself has to spin the fans hard to keep up, plus all the hot air is simply ejected into the PC case (unless you have a blower type), and this in turn can cause other components to run hotter and their fans to work harder.

    The power ratings are guidelines, and they are generally geared towards gaming and similar applications. Playing video games stresses GPUs very differently than Iray rendering does. With Iray, the entire scene is loaded into VRAM and basically stays there for the whole duration of the render. This means the VRAM is not actually stressed that much and will stay cooler than when playing a video game with an unlocked frame rate. Video games stress VRAM constantly, as they move data in and out of VRAM rapidly. For example, the PS4 only holds roughly 30 seconds' worth of gameplay in its VRAM; everything is constantly streaming in and out of VRAM during play. The new consoles do this even faster, as the new PS5 only holds about a second or so of data.

    So the result is that video games will run much hotter than Iray on the same setup. If you try almost any modern gaming benchmark, like 3DMark, you will see your temps go up a lot more than with Iray, especially on air. In my system, my 1080 Ti will easily be 10C hotter during a game than with Iray. And this goes along with power draw; the power draw numbers can get very high, as you observed. It will depend on the game, but if you play at high or unlocked frame rates you will generally run hotter, and of course benchmarks are unlocked. If you lock the frame rate and your GPU can easily handle it, then it runs cooler.

    The A6000 uses different memory than the 3090. The GDDR6X memory uses more power than standard GDDR6, which is one reason why the A6000 can use less. The other reason may be because the A6000 chips are the very best binned GA102 dies. 

  • Hi everyone, I figured I'd benchmark my 5900X and what I can get out of my FE 3090 with OC. 

    I would caution people here to use HWinfo64 to monitor your GDDR6X memory junction temp. While the core temp was entirely acceptable, Iray did bring it close to 100°C, and with NVLink'd cards/poor air flow (my case is open at the moment), I could see the memory chips overheating with longer renders -- this one only takes a minute and a half, after all. I also put some extra cooling fins and a fan on my card due to reading about high VRAM temps, so I can't help but think that some air-cooled cards with poor air flow may run hot. This may not really be an issue, but thought I'd throw it out there. Might also be better at stock power limits; it was using ~375W IIRC here at 114% PL.

    My 5900X (a 12-core Ryzen 5000 chip) did throttle a bit (temp reached 90C), so frequency varied between 4.2 and 4.3 GHz. My AIO only has one 120mm fan due to RAM stick clearance, so maybe I will reassess when I figure out a better cooling system. I only used PBO; with some manual OC and lower voltage I'm sure it could be a bit better, too.

    System Configuration
    System/Motherboard: ASUS TUF GAMING X570-PLUS (WI-FI) 
    CPU: AMD Ryzen 9 5900X (PBO)
    GPU: NVIDIA GeForce RTX 3090 Founders Edition: 114% Power Limit, Core@2050Mhz, Mem@2550Mhz
    System Memory: G.SKILL Trident Z Neo Series 32GB (2 x 16GB) DDR4 3600 F4-3600C18D-32GTZN
    OS Drive: Crucial CT1000P2 (1TB)
    Asset Drive: WDC WD100EMAZ (10TB)
    Operating System: Win 10 Pro 20H2 Build 19042.804
    Nvidia Drivers Version: 461.40
    Daz Studio Version: 4.15.0.2 Pro


    Benchmark Results
    2021-02-14 13:57:22.174 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 87.120s.
    2021-02-14 13:57:22.178 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.
    2021-02-14 13:57:22.652 Saved image: C:\Users\
    2021-02-14 13:57:22.656 Finished Rendering
    2021-02-14 13:57:22.688 Total Rendering Time: 1 minutes 29.21 seconds

    2021-02-14 13:58:08.316 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-14 13:58:08.316 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.486s init, 85.744s render

    Rendering Performance: 1800/85.744 = 20.99 iterations/sec
    Loading Time: 89.21-85.744= 3.466s

    CPU Rendering

    Benchmark Results

    2021-02-14 14:28:19.172 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Received update to 01800 iterations after 1706.019s.

    2021-02-14 14:28:19.177 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend progr: Maximum number of samples reached.

    2021-02-14 14:28:19.726 Total Rendering Time: 28 minutes 27.54 seconds

    2021-02-14 14:28:19.774 Saved image: 

    2021-02-14 14:28:32.501 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-14 14:28:32.501 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: 1800 iterations, 1.342s init, 1704.667s render

    Rendering Performance: 1800/1704.667 = 1.056 iterations/sec
    Loading Time: 1707.54-1704.667 = 2.873s

     

  • skyeshots Posts: 148
    edited February 2021

    outrider42 said:

    Water cooling can affect power draw. The reason cooling can be more efficient is due to how electrons work in silicon. Electron mobility in silicon will indeed decrease as the temperature increases, so this is a real thing. But 30 watts seems like a lot. Are you measuring the entire system? You may be seeing several things combining to add to the overall power draw when on air. Under air, the cooling system itself has to spin the fans hard to keep up, plus all the hot air is simply ejected into the PC case (unless you have a blower type), and this in turn can cause other components to run hotter and their fans to work harder.

    The power ratings are guidelines, and they are generally geared towards gaming and similar applications. Playing video games stresses GPUs very differently than Iray rendering does. With Iray, the entire scene is loaded into VRAM and basically stays there for the whole duration of the render. This means the VRAM is not actually stressed that much and will stay cooler than when playing a video game with an unlocked frame rate. Video games stress VRAM constantly, as they move data in and out of VRAM rapidly. For example, the PS4 only holds roughly 30 seconds' worth of gameplay in its VRAM; everything is constantly streaming in and out of VRAM during play. The new consoles do this even faster, as the new PS5 only holds about a second or so of data.

    So the result is that video games will run much hotter than Iray on the same setup. If you try almost any modern gaming benchmark, like 3DMark, you will see your temps go up a lot more than with Iray, especially on air. In my system, my 1080 Ti will easily be 10C hotter during a game than with Iray. And this goes along with power draw; the power draw numbers can get very high, as you observed. It will depend on the game, but if you play at high or unlocked frame rates you will generally run hotter, and of course benchmarks are unlocked. If you lock the frame rate and your GPU can easily handle it, then it runs cooler.

    The A6000 uses different memory than the 3090. The GDDR6X memory uses more power than standard GDDR6, which is one reason why the A6000 can use less. The other reason may be because the A6000 chips are the very best binned GA102 dies. 

    The power draw on 30-series cards with a 450-watt BIOS creates excessive heat, especially in multi-GPU setups. The cards with the double 8-pin connectors draw less power, e.g. 350 watts max, and dissipate less heat. The A6000 is listed at 300 watts with a single 8-pin EPS cable.

    It should be here next week, so we can take bets today. -> Which card is faster in Iray? The A6000 or the 3090?

    Post edited by skyeshots on
  • outrider42 Posts: 3,679

    skyeshots said:

    outrider42 said:

    Water cooling can affect power draw. The reason cooling can be more efficient is due to how electrons work in silicon. Electron mobility in silicon will indeed decrease as the temperature increases, so this is a real thing. But 30 watts seems like a lot. Are you measuring the entire system? You may be seeing several things combining to add to the overall power draw when on air. Under air, the cooling system itself has to spin the fans hard to keep up, plus all the hot air is simply ejected into the PC case (unless you have a blower type), and this in turn can cause other components to run hotter and their fans to work harder.

    The power ratings are guidelines, and they are generally geared towards gaming and similar applications. Playing video games stresses GPUs very differently than Iray rendering does. With Iray, the entire scene is loaded into VRAM and basically stays there for the whole duration of the render. This means the VRAM is not actually stressed that much and will stay cooler than when playing a video game with an unlocked frame rate. Video games stress VRAM constantly, as they move data in and out of VRAM rapidly. For example, the PS4 only holds roughly 30 seconds' worth of gameplay in its VRAM; everything is constantly streaming in and out of VRAM during play. The new consoles do this even faster, as the new PS5 only holds about a second or so of data.

    So the result is that video games will run much hotter than Iray on the same setup. If you try almost any modern gaming benchmark, like 3DMark, you will see your temps go up a lot more than with Iray, especially on air. In my system, my 1080 Ti will easily be 10C hotter during a game than with Iray. And this goes along with power draw; the power draw numbers can get very high, as you observed. It will depend on the game, but if you play at high or unlocked frame rates you will generally run hotter, and of course benchmarks are unlocked. If you lock the frame rate and your GPU can easily handle it, then it runs cooler.

    The A6000 uses different memory than the 3090. The GDDR6X memory uses more power than standard GDDR6, which is one reason why the A6000 can use less. The other reason may be because the A6000 chips are the very best binned GA102 dies. 

    The power draw on 30-series cards with a 450-watt BIOS creates excessive heat, especially in multi-GPU setups. The cards with the double 8-pin connectors draw less power, e.g. 350 watts max, and dissipate less heat. The A6000 is listed at 300 watts with a single 8-pin EPS cable.

    It should be here next week, so we can take bets today. -> Which card is faster in Iray? The A6000 or the 3090?

    Well yeah, they draw more power and thus need more pins. There are 3090s that even have THREE eight pin connectors. That is a lot of power, and a lot of heat created as a result. Good for the winter time!

    Nobody has tested the A6000 with Iray, so we cannot say for sure which is faster. Puget tested the A6000, but for some reason did not do the obvious and test it head-to-head against the 3090...talk about short-sighted! They tested rendering, which is an area where Quadro cards do not gain much of an advantage over gaming cards. However, Puget did test the 3090 when it released, so we can compare the numbers from that test to the A6000 results. It is important to note that since the tests were not done at the same time, there may be variances in them. And of course Octane is not Iray, but they are comparable. The 3090 does beat the A6000 in their Octane bench, and by a margin that would be beyond error.

    https://www.pugetsystems.com/labs/articles/Nvidia-RTX-A6000-48GB-Review-Roundup-2063/

    https://www.pugetsystems.com/labs/articles/OctaneRender-2020---NVIDIA-GeForce-RTX-3080-3090-Performance-1890/

    As said before, the A6000 is really only for those that need that much memory, or specifically Quadro features. But Iray does not use any Quadro feature at all. Quadro cards can be placed into larger NVLink groups than the 3090: the 3090 can only be NVLinked in pairs, while the A6000 can link up to 4 together, I believe. The A6000 does use a lot less power, but one could simply undervolt/downclock a 3090 if power is that much of a concern.

    I know the Quadro name is gone, but it is just easier to use because people understand what it means. 

  • chrislb Posts: 100
    edited February 2021

    skyeshots said:

    chrislb said:

    I decided to see how much changing the GPU clock and VRAM clock improves results with the latest version of Daz and Iray. I increased the GPU clocks and VRAM clocks individually until the results stopped improving. The difference was 1.9525 iterations per second.

    Can you share the clocks for these scores?
     

    It was 1949 MHz on the GPU.  I think the +1107 MHz on the VRAM is 10858*2 which would be 21,716 MHz effective rate.

    edit: for some reason the quote function in replies is acting weird for me today.

     

    Post edited by chrislb on
  • chrislb Posts: 100
    edited February 2021

    outrider42 said:

    Water cooling can affect power draw. The reason cooling can be more efficient is due to how electrons work in silicon. Electron mobility in silicon will indeed decrease as the temperature increases, so this is a real thing. But 30 watts seems like a lot. Are you measuring the entire system? You may be seeing several things combining to add to the overall power draw when on air.

    I'm measuring the power draw reported by the card through the PCIe x16 slot and the three 8-pin connectors. When the card is water cooled, it's not powering the three fans and the various LED lights on the card. I've read that the fans alone can be 10+ watts of power draw at max speed. However, I don't know how accurate that estimate is.

    EDIT: Several sources list 12-volt PC fans as drawing 5-10 watts each. With the 3 fans on the card removed because of the waterblock/water cooling, that's a 15-30 watt reduction in power draw from the fans alone.

    Also, I wonder if the GDDR6X uses less wattage when it's running about 20C cooler due to water cooling?

    Thought I'd try out the 3070 FE I got today. I tested it both at stock and overclocked, as seen below. 13.214 iterations per second while OC'd isn't too shabby -- that's around what my 3090 got back on the old DAZ beta with older drivers.

    System Configuration
    System/Motherboard: ASUS TUF GAMING X570-PLUS (WI-FI) 
    CPU: AMD Ryzen 9 5900X (PBO)
    GPU: NVIDIA GeForce RTX 3070 Founders Edition: Stock (test 1) or 109% Power Limit, Core @ 2040Mhz, Mem@7607Mhz 
    System Memory: G.SKILL Trident Z Neo Series 32GB (2 x 16GB) DDR4 3600 F4-3600C18D-32GTZN
    OS Drive: Crucial CT1000P2 (1TB)
    Asset Drive: WDC WD100EMAZ (10TB)
    Operating System: Win 10 Pro 20H2 Build 19042.804
    Nvidia Drivers Version: 461.40
    Daz Studio Version: 4.15.0.2 Pro


    Benchmark Results at Stock

    2021-02-19 17:26:47.467 Total Rendering Time: 2 minutes 34.75 seconds

     

    2021-02-19 17:27:00.742 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-19 17:27:00.742 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3070): 1800 iterations, 2.242s init, 150.290s render

    Rendering Performance: 1800/150.290 = 11.977 iterations/sec
    Loading Time: 154.75s-150.290s= 4.46s

    Benchmark Results Overclocked

    2021-02-19 17:32:18.835 Total Rendering Time: 2 minutes 18.99 seconds

     

    2021-02-19 17:32:44.308 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-19 17:32:44.308 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3070): 1800 iterations, 1.309s init, 136.218s render

     

    Rendering Performance: 1800/136.218 = 13.214 iterations/sec
    Loading Time: 138.99-136.218=2.77s
