Missing a 2070 from the collection? Here's my bench to give people a crude estimate.
System Configuration
System/Motherboard: MSi B450-A Pro
CPU: AMD Ryzen 5 2600x stock
GPU: RTX2070 @ stock
System Memory: Corsair LPX 8GB (one bank died, currently running on only 1 bank) @ default (2166 mhz I think?)
OS Drive: Samsung EVO 860 500GB
Asset Drive: Seagate Barracuda 4TB @ 5400rpm
Operating System: Win 10 Home
Nvidia Drivers Version: 430.86
Daz Studio Version: 4.11.0.383 64-bit
Optix Prime Acceleration: Off
Benchmark Results
DAZ_STATS
2019-07-26 00:04:38.901 Finished Rendering
2019-07-26 00:04:38.951 Total Rendering Time: 8 minutes 23.21 seconds
IRAY_STATS
2019-07-26 00:04:52.779 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
2019-07-26 00:04:52.779 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2070): 1800 iterations, 5.544s init, 493.208s render
Iteration Rate: 1800/493.208 = 3.650 iterations per second *FIXED, thank you*
Loading Time: 503.21 - 493.21 = 10 seconds
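For anyone following along, the two stats above boil down to simple arithmetic on the quoted log lines. A minimal sketch in Python (the function names are my own, not anything from Daz Studio):

```python
# Iteration rate = total iterations / device render time (from the Iray log).
# Loading time   = Total Rendering Time (converted to seconds) - render time.
def iteration_rate(total_iterations, render_seconds):
    return total_iterations / render_seconds

def loading_time(hours, minutes, seconds, render_seconds):
    return hours * 3600 + minutes * 60 + seconds - render_seconds

# Numbers from the run above (8 minutes 23.21 seconds total, 493.208 s render):
print(round(iteration_rate(1800, 493.208), 3))      # 3.65
print(round(loading_time(0, 8, 23.21, 493.208), 2)) # 10.0
```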
Drip, just an FYI: your Iteration Rate stat is actually 3.650 (that's a resounding "Yay!" from me for the wonders of built-in data collection redundancy.) Any chance you'd be up for benching with OptiX and/or in the current 4.12 Beta as well? You finally get the full benefits of raytracing there.
Tick box, hit render.
System Configuration
System/Motherboard: MSi B450-A Pro
CPU: AMD Ryzen 5 2600x stock
GPU: RTX2070 @ stock
System Memory: Corsair LPX 8GB (one bank died, currently running on only 1 bank) @ default (2166 mhz I think?)
OS Drive: Samsung EVO 860 500GB
Asset Drive: Seagate Barracuda 4TB @ 5400rpm
Operating System: Win 10 Home
Nvidia Drivers Version: 430.86
Daz Studio Version: 4.11.0.383 64-bit
Optix Prime Acceleration: On
Benchmark Results
DAZ_STATS
2019-07-26 08:52:43.305 Finished Rendering
2019-07-26 08:52:44.210 Total Rendering Time: 8 minutes 37.38 seconds
IRAY_STATS
2019-07-26 08:52:54.770 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
2019-07-26 08:52:54.770 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2070): 1800 iterations, 4.235s init, 507.814s render
Iteration Rate: 1800/507.814=3.545 iterations per second
Loading Time: 517.38 - 507.81 = 9.57 seconds
I don't have the beta installed, so no results from that I'm afraid.
Not entirely sure, but I *think* the OptiX Prime version was actually a little bit faster to converge for the first ~20%, though it still ended up slightly slower overall. So while the initial iterations might be better in quality, the time per iteration is slightly longer.
I don't want to muddy the waters, so consider this anecdotal... but I wanted to test the new render server, so I threw the benchmark on it and...
ETA that's 4x 2080ti's in there now, sent via DS 4.12 on Iray Server 2.53 (so Iray 317500.2436)
By all means, muddy the waters (post as much benchmarking data/hardware details as you can stand.) 'Tis the reason I created this benchmark/thread in the first place. I can always just not include whatever other data you have if it proves too convoluted to get into the graphs/tables above.
Btw do you run Iray Server on Windows or Linux? Because in case you don't already know, running it under Linux will give you some MAJOR surprise perks.
Currently Windows, since clustering only works between the same OSes... and I wanted to try to get that working with my work rig. Initial attempts saw some bugs, but I didn't know if it was because I was mixing the cards. Now that everything is 2080 Tis between the two machines, I'll try again.
But yeah, it would be nice to try a Linux distro at some point.
There's been LOTS of hay made about whether or not Iray is currently capable of VRAM pooling across multiple GPUs. The full truth of the matter (and this is information I was only able to tease out properly as recently as two weeks ago, after months of exhaustive research) is that it DOES (and has been capable of doing so for years) - but only when Iray Server is run on a Unix-based system, with the addition of NVLink bridges on capable Turing cards (like your 2080 Tis or my Titan RTX.)
The key sticking point for compatibility under other operating systems is apparently the way graphics cards and their individual hardware components are enumerated as devices by the operating system. Both modern versions of Windows and macOS handle I/O operations with onboard graphics memory directly, whereas Unix systems farm the task off to the manufacturer's driver (assuming one exists, of course.) This is key because Nvidia's implementation of memory pooling (what they loosely refer to as enhanced Unified Memory) is enacted at that basic I/O level, making it currently a no-go on macOS or Windows unless Apple/Microsoft start tailoring their own low-level graphics card support systems (e.g. WDDM) to fit Nvidia's proprietary implementation needs...
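If anyone wants to sanity-check whether their cards are actually talking over NVLink before experimenting with pooling, nvidia-smi can report it. A quick sketch (assumes nvidia-smi is on your PATH; output format varies by driver version, so this just dumps the reports rather than parsing them):

```python
import subprocess

# "nvidia-smi nvlink --status" lists per-link state for each GPU;
# "nvidia-smi topo -m" prints the GPU-to-GPU topology matrix (NV# = NVLink).
for args in (["nvidia-smi", "nvlink", "--status"],
             ["nvidia-smi", "topo", "-m"]):
    print("$", " ".join(args))
    result = subprocess.run(args, capture_output=True, text=True)
    print(result.stdout)
```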
System Configuration
System/Motherboard: ASRock Z170 OC Formula
CPU: Intel Core i7-6700 @ 3.40GHz (stock)
GPU: Nvidia GTX 1070 Founders Edition @ stock, Nvidia RTX 2080 Founders Edition @ stock
System Memory: Kingston Hynix 32GB DDR4 @ 2133MHz
OS Drive: Samsung 850 EVO 500GB
Asset Drive: Western Digital Mainstream (WD30EZRZ) 3TB
Operating System: Windows 8.1 6.3.9600 Build 9600
Nvidia Drivers Version: 430.86 Standard
Benchmark Results - GTX 1070 Only
Daz Studio Version: 4.11.0.383 64-bit
Optix Prime Acceleration: Yes
CUDA device 1 (GeForce GTX 1070): 1800 iterations, 4.233s init, 917.842s render
Total Rendering Time: 15 minutes 25.59 seconds
Iteration Rate: (1800/917.842) = 1.961 iterations per second
Loading Time: (0 + 900 + 25.59) - 917.842 = 925.59 - 917.842 = 7.748 seconds
Benchmark Results - RTX 2080 Only
Daz Studio Version: 4.11.0.383 64-bit
Optix Prime Acceleration: Yes
CUDA device 0 (GeForce RTX 2080): 1800 iterations, 4.161s init, 344.199s render
Total Rendering Time: 5 minutes 51.33 seconds
Iteration Rate: (1800/344.199) = 5.229 iterations per second
Loading Time: (0 + 300 + 51.33) - 344.199 = 351.33 - 344.199 = 7.131 seconds
Benchmark Results - GTX 1070 + RTX 2080
Daz Studio Version: 4.11.0.383 64-bit
Optix Prime Acceleration: Yes
CUDA device 1 (GeForce GTX 1070): 475 iterations, 4.603s init, 254.723s render
CUDA device 0 (GeForce RTX 2080): 1325 iterations, 4.605s init, 255.111s render
Total Rendering Time: 4 minutes 22.59 seconds
Iteration Rate: (1800/255.111) = 7.055 iterations per second
Loading Time: (0 + 240 + 22.59) - 255.111 = 262.59 - 255.111 = 7.479 seconds
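Side note on how well those two cards scale together in the 4.11 runs above; a quick back-of-the-envelope check (my own arithmetic, not part of the benchmark output):

```python
rate_1070 = 1800 / 917.842  # ~1.961 it/s alone
rate_2080 = 1800 / 344.199  # ~5.229 it/s alone
rate_both = 1800 / 255.111  # ~7.055 it/s together

ideal = rate_1070 + rate_2080  # ~7.19 it/s if scaling were perfect
print(f"scaling efficiency: {rate_both / ideal:.1%}")     # ~98.1%
print(f"GTX 1070 share of iterations: {475 / 1800:.1%}")  # ~26.4%
```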
Benchmark Results - GTX 1070 Only
Daz Studio Version: 4.12.0.33 Beta 64-bit
Optix Prime Acceleration: Yes
CUDA device 1 (GeForce GTX 1070): 1800 iterations, 4.548s init, 762.203s render
Total Rendering Time: 12 minutes 49.99 seconds
Iteration Rate: (1800/762.203) = 2.361 iterations per second
Loading Time: (0 + 720 + 49.99) - 762.203 = 769.99 - 762.203 = 7.787 seconds
Benchmark Results - RTX 2080 Only
Daz Studio Version: 4.12.0.33 Beta 64-bit
Optix Prime Acceleration: Yes
CUDA device 0 (GeForce RTX 2080): 1800 iterations, 4.234s init, 313.011s render
Total Rendering Time: 5 minutes 20.48 seconds
Iteration Rate: (1800/313.011) = 5.750 iterations per second
Loading Time: (0 + 300 + 20.48) - 313.011 = 320.48 - 313.011 = 7.469 seconds
Benchmark Results - GTX 1070 + RTX 2080
Daz Studio Version: 4.12.0.33 Beta 64-bit
Optix Prime Acceleration: Yes
CUDA device 1 (GeForce GTX 1070): 511 iterations, 4.805s init, 226.737s render
CUDA device 0 (GeForce RTX 2080): 1289 iterations, 4.326s init, 227.118s render
Total Rendering Time: 3 minutes 54.87 seconds
Iteration Rate: (1800/227.118) = 7.925 iterations per second
Loading Time: (0 + 180 + 54.87) - 227.118 = 234.87 - 227.118 = 7.752 seconds
2x2080ti
Wasn't really sure what to do with the maths with 2 cards. Maybe take the highest? (1800/151.359 = 11.892? Seems too high.)
System Configuration
System/Motherboard: Gigabyte ATX DDR3 LGA 1150 SATA DIMM 6Gb/s Motherboard (GA-Z97X-UD3H)
CPU: Intel Core i7-4790 Processor @ stock
GPU: 2 x EVGA GeForce RTX 2080 Ti Black Edition Gaming, 11GB GDDR6 11G-P4-2281-KR @ stock (if matters, they are in NVLINK)
System Memory: Ballistix Sport 32GB Kit (8GBx4) DDR3 1600 MT/s (PC3-12800)
OS Drive: Samsung 840 EVO 500GB 2.5-Inch SATA III Internal SSD
Asset Drive: 2 x Seagate Desktop 3 TB HDD SATA 6 Gb/s NCQ 64MB Cache 7200 RPM, RAID 1
Operating System: Windows 10 Pro Build 17134
Nvidia Drivers Version: 431.36
Daz Studio Version: 4.11.0.383
Optix Prime Acceleration: ON
Benchmark Results
DAZ_STATS
2019-07-27 10:32:17.410 Finished Rendering
2019-07-27 10:32:17.456 Total Rendering Time: 2 minutes 40.97 seconds
IRAY_STATS
2019-07-27 10:34:26.723 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
2019-07-27 10:34:26.723 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 1 (GeForce RTX 2080 Ti): 917 iterations, 5.694s init, 151.359s render
2019-07-27 10:34:26.723 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2080 Ti): 883 iterations, 5.735s init, 150.687s render
Iteration Rate: (1800 / 151.359) = 11.892 iterations per second
Loading Time: ((0 * 3600 + 2 * 60 + 40.97) - 151.359) = 160.97 - 151.359 = 9.611 seconds
Thanks @RayDAnt for the final numbers. I did calc that but thought it was too high. Also, seeing that loading time and discussions, I may switch assets over to my 1TB M.2 850 EVO.
@AelfinMaegik see here for the right template to use in your case (adjusted the wording to hopefully make it more obvious.)
Here's how it works out for your results:
Loading Time: ((0 * 3600 + 2 * 60 + 40.97) - 151.359) = 160.97 - 151.359 = 9.611 seconds
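In other words, with multiple cards you sum the per-device iteration counts (always 1800 for this scene) and divide by the longest device render time. A small sketch of that convention applied to the log lines quoted above (the regex and variable names are my own):

```python
import re

log = """
CUDA device 1 (GeForce RTX 2080 Ti): 917 iterations, 5.694s init, 151.359s render
CUDA device 0 (GeForce RTX 2080 Ti): 883 iterations, 5.735s init, 150.687s render
"""

stats = re.findall(r"(\d+) iterations, [\d.]+s init, ([\d.]+)s render", log)
total_iterations = sum(int(i) for i, _ in stats)  # 917 + 883 = 1800
slowest_render = max(float(t) for _, t in stats)  # 151.359 s
print(f"{total_iterations / slowest_render:.3f} iterations per second")  # 11.892
```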
Same as above, just with 4.12 BETA
Daz Studio Version: 4.12.0.33 BETA
Benchmark Results
DAZ_STATS
2019-07-27 12:06:17.050 Finished Rendering
2019-07-27 12:06:17.096 Total Rendering Time: 2 minutes 25.72 seconds
IRAY_STATS
2019-07-27 12:07:00.784 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
2019-07-27 12:07:00.784 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 1 (GeForce RTX 2080 Ti): 904 iterations, 5.494s init, 136.003s render
2019-07-27 12:07:00.784 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2080 Ti): 896 iterations, 5.289s init, 136.656s render
Iteration Rate: (1800 / 136.656) = 13.172 iterations per second
Loading Time: ((0 * 3600 + 2 * 60 + 25.72) - 136.656) = 9.064 seconds
System Configuration
System/Motherboard: Asus P8Z68-V LE
CPU: Intel Core i7-2600 @ 3.40GHz (Stock)
GPU: EVGA GTX 1070ti SC @ stock, NVIDIA GTX 970 @ stock
System Memory: Kingston 16GB DDR3
OS Drive: SanDisk Ultra 3D SSD
Asset Drive: Western Digital Mainstream (WD30EZRZ) 3TB
Operating System: Windows 10 Home 1903 18362.239
Nvidia Drivers Version: 430.86 Standard
Benchmark Results - GTX 1070ti Only
Daz Studio Version: 4.11.0.383 64-bit
Optix Prime Acceleration: Yes
CUDA device 0 (GeForce GTX 1070 Ti): 1800 iterations, 6.175s init, 770.149s render
Total Rendering Time: 13 minutes 0.20 seconds
Iteration Rate: (1800/770.149) = 2.337 iterations per second
Loading Time: (0 + 780 + 0.20) - 770.149 = 780.20 - 770.149 = 10.051 seconds
Benchmark Results - GTX 1070ti Only
Daz Studio Version: 4.12.0.33 Beta 64-bit
Optix Prime Acceleration: Yes
Total Rendering Time: 10 minutes 58.40 seconds
CUDA device 0 (GeForce GTX 1070 Ti): 1800 iterations, 6.094s init, 648.264s render
Iteration Rate: (1800/648.264) = 2.776 iterations per second
Loading Time: (0 + 600 + 58.40) - 648.264 = 658.40 - 648.264 = 10.136 seconds
Doing these made me realize I needed to update Windows and Nvidia drivers. Not much of a difference, probably within statistical variance.
Operating System: Windows 10 Pro Build 18362 (1903)
Nvidia Drivers Version: 431.60 GRD
Daz Studio Version: 4.12.0.33 BETA
Benchmark Results
DAZ_STATS
2019-07-27 15:53:50.052 Finished Rendering
2019-07-27 15:53:50.087 Total Rendering Time: 2 minutes 24.16 seconds
IRAY_STATS
2019-07-27 15:54:22.007 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
2019-07-27 15:54:22.007 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 1 (GeForce RTX 2080 Ti): 909 iterations, 4.817s init, 135.982s render
2019-07-27 15:54:22.007 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2080 Ti): 891 iterations, 4.546s init, 135.692s render
Iteration Rate: (1800 / 135.982) = 13.237 iterations per second
Loading Time: ((0 * 3600 + 2 * 60 + 24.16) - 135.982) = 144.16 - 135.982 = 8.178 seconds
The DS 4.10.0.123 table looks pretty bare. I'll add the first entry...
System Configuration
System/Motherboard: MSI Meg Creation X399
CPU: AMD Threadripper 1920X @ 4.00 GHz
GPU: Nvidia Titan Xp @ 2050, Nvidia Titan Xp @ 2050, Nvidia Titan X Pascal @ 2050
System Memory: G.Skill Ripjaws 32GB DDR4 @ 3200
OS Drive: Samsung 970 NVMe 512 GB
Asset Drive: Samsung 860 EVO 1TB
Operating System: Windows 10 Home 64-bit 1809
Nvidia Drivers Version: 416.34 Standard
Daz Studio Version: 4.10.0.123 64-bit
Optix Prime Acceleration: On
Benchmark Results
DAZ_STATS
2019-07-27 20:44:42.604 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Received update to 01800 iterations after 146.093s.
2019-07-27 20:44:43.305 Total Rendering Time: 2 minutes 29.56 seconds
IRAY_STATS
2019-07-27 20:45:04.494 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 1 (TITAN Xp): 617 iterations, 13.473s init, 132.657s render
2019-07-27 20:45:04.499 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (TITAN Xp): 620 iterations, 13.420s init, 132.414s render
2019-07-27 20:45:04.499 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 2 (TITAN X (Pascal)): 563 iterations, 13.427s init, 132.636s render
Rendering Performance: 1800 / 132.657 = 13.569 iterations per second
Loading Time: ((0 + 120 + 29.56) - 132.657) = 149.56 - 132.657 = 16.903 seconds
EDIT: I corrected my calculations, and made a couple of other corrections/clarifications.
I believe this should be 13.56 i/s with a load time of 16.903
(Just upgraded from 2x 1080 Ti's to 2x 2080 Ti's, did some benchmarks)
System Configuration
System/Motherboard: Gigabyte Z390 Aorus Master
CPU: Intel i7-9900K @ 4.7 Ghz
GPU: 2x Gigabyte GTX 1080 Ti Windforce OC
System Memory: Corsair Vengeance LPX 64GB DDR4 @ 3000Mhz
OS Drive: Samsung Pro 970 512GB NVME SSD
Asset Drive: Samsung SSD 860 QVO 4TB
Operating System: Windows 10 Pro version 1903
Nvidia Drivers Version: 431.60
------------------------------------------------
Benchmark Results - 2x Gigabyte GTX 1080 Ti Windforce OC
Daz Studio Version: 4.10.0.123
Optix Prime Acceleration: ON
Total Rendering Time: 4 minutes 2.48 seconds
CUDA device 0 (GeForce GTX 1080 Ti): Scene processed in 9.489s, 239.733s render
CUDA device 1 (GeForce GTX 1080 Ti): Scene processed in 9.498s, 239.733s render
Benchmark Results - 2x Gigabyte GTX 1080 Ti Windforce OC
Daz Studio Version: 4.10.0.123
Optix Prime Acceleration: OFF
Total Rendering Time: 4 minutes 27.64 seconds
CUDA device 0 (GeForce GTX 1080 Ti): Scene processed in 9.397s, 265.072s render
CUDA device 1 (GeForce GTX 1080 Ti): Scene processed in 9.406s, 265.072s render
---
Benchmark Results - 2x Gigabyte GTX 1080 Ti Windforce OC
Daz Studio Version: 4.12.0.33 Public Beta
Optix Prime Acceleration: ON
Total Rendering Time: 4 minutes 12.83 seconds
CUDA device 0 (GeForce GTX 1080 Ti): Scene processed in 2.286s, 250.252s render
CUDA device 1 (GeForce GTX 1080 Ti): Scene processed in 2.290s, 250.252s render
Benchmark Results - 2x Gigabyte GTX 1080 Ti Windforce OC
Daz Studio Version: 4.12.0.33 Public Beta
Optix Prime Acceleration: OFF
Total Rendering Time: 4 minutes 12.51 seconds
CUDA device 0 (GeForce GTX 1080 Ti): Scene processed in 1.908s, 250.513s render
CUDA device 1 (GeForce GTX 1080 Ti): Scene processed in 1.915s, 250.513s render
---------------------------------------------------------------------------------------------------------------------------
System Configuration
System/Motherboard: Gigabyte Z390 Aorus Master
CPU: Intel i7-9900K @ 4.7 Ghz
GPU: 2x Gigabyte RTX 2080 Ti Gaming OC 11G
System Memory: Corsair Vengeance LPX 64GB DDR4 @ 3000Mhz
OS Drive: Samsung Pro 970 512GB NVME SSD
Asset Drive: Samsung SSD 860 QVO 4TB
Operating System: Windows 10 Pro version 1903
Nvidia Drivers Version: 431.60
---
Benchmark Results - 2x Gigabyte RTX 2080 Ti Gaming OC 11G
Daz Studio Version: 4.12.0.33 Public Beta
Optix Prime Acceleration: ON
Total Rendering Time: 2 minutes 22.0 seconds
CUDA device 0 (GeForce RTX 2080 Ti): Scene processed in 7.877s (First time render), 139.184s render
CUDA device 1 (GeForce RTX 2080 Ti): Scene processed in 7.880s (First time render), 139.184s render
Benchmark Results - 2x Gigabyte RTX 2080 Ti Gaming OC 11G
Daz Studio Version: 4.12.0.33 Public Beta
Optix Prime Acceleration: OFF
Total Rendering Time: 2 minutes 17.11 seconds
CUDA device 0 (GeForce RTX 2080 Ti): Scene processed in 1.992s (Second time render), 135.201s render
CUDA device 1 (GeForce RTX 2080 Ti): Scene processed in 1.992s (Second time render), 135.201s render
[HomerSimpsonDrool.jpg]
A new day, a new DS Beta/Iray release!
System Configuration
System/Motherboard: Gigabyte Z370 Aorus Gaming 7
CPU: Intel i7-8700K @ stock (MCE enabled)
GPU: Nvidia Titan RTX @ stock (watercooled)
System Memory: Corsair Vengeance LPX 32GB DDR4 @ 3000Mhz
OS Drive: Samsung Pro 970 512GB NVME SSD
Asset Drive: Sandisk Extreme Portable SSD 1TB
Operating System: Windows 10 Pro version 1903
Nvidia Drivers Version: 431.60 GRD WDDM (except where noted)
Daz Studio Version: 4.12.0.042 Beta 64-bit
Optix Prime Acceleration: On*
Benchmark Results - Titan RTX Only (TCC mode enabled)
Total Rendering Time: 3 minutes 56.96 seconds
CUDA device 0 (TITAN RTX): 1800 iterations, 3.209s init, 230.966s render
Iteration Rate: (1800 / 230.966) = 7.793 iterations per second
Loading Time: ((0 + 180 + 56.96) - 230.966) = (236.96 - 230.966) = 5.994 seconds
Benchmark Results - Titan RTX Only
Total Rendering Time: 4 minutes 18.82 seconds
CUDA device 0 (TITAN RTX): 1800 iterations, 3.117s init, 253.092s render
Iteration Rate: (1800 / 253.092) = 7.112 iterations per second
Loading Time: ((0 + 240 + 18.82) - 253.092) = (258.82 - 253.092) = 5.728 seconds
Benchmark Results - Titan RTX + i7-8700K
Total Rendering Time: 4 minutes 11.79 seconds
CUDA device 0 (TITAN RTX): 1800 iterations, 3.263s init, 244.974s render
CPU: 110 iterations, 2.862s init, 246.195s render
Iteration Rate: (1800 / 246.195) = 7.311 iterations per second
Loading Time: ((0 + 240 + 11.79) - 246.195) = (251.79 - 246.195) = 5.595 seconds
Benchmark Results - i7-8700K Only
Total Rendering Time: 1 hours 2 minutes 34.70 seconds
CUDA device 0 (TITAN RTX): 1800 iterations, 2.461s init, 3750.235s render
Iteration Rate: (1800 / 3750.235) = 0.480 iterations per second
Loading Time: ((3600 + 120 + 34.70) - 3750.235) = (3754.70 - 3750.235) = 4.465 seconds
PS: Should get to bringing the main tables up to speed over the next day or so.
So where do we discuss this stuff now?
Just a thought...
Clearly, our often contentious tech discussions are seen as very disruptive, and that's certainly understandable.
So I'm thinking it might be best for all of us interested in those types of tech discussions to just pick up and move to an unrelated discussion website somewhere. I imagine that would be a huge relief for all involved. Not sure where we'd go, but I imagine there are many choices.
If someone decides to start up something new here/somewhere else, drop me a PM and I'll link to it at the top of this thread.
https://www.reddit.com/r/Daz3D/ ?
Maybe? Not sure. Maybe I don't actually want to post with my normal account on that.
WOW! This really got me wondering, since Prime off was faster. I downloaded 4.12.0.42, turned NVLINK/SLI to OFF and tried this, too. I'm really surprised. I am running air cooled, non-OC, and I'm shocked at these numbers. Why is Prime off faster? (I am kinda peeved at myself - I changed two variables here... I updated the beta version AND I turned off NVLink/SLI.)
Daz Studio Version: 4.12.0.42 BETA
Optix Prime Acceleration: ON
Benchmark Results
DAZ_STATS
2019-07-30 21:54:57.007 Finished Rendering
2019-07-30 21:54:57.042 Total Rendering Time: 2 minutes 18.77 seconds
IRAY_STATS
2019-07-30 21:55:31.063 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
2019-07-30 21:55:31.063 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2080 Ti): 882 iterations, 4.569s init, 129.627s render
2019-07-30 21:55:31.063 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 1 (GeForce RTX 2080 Ti): 918 iterations, 4.755s init, 128.987s render
Iteration Rate: (1800 / 129.627) = 13.886 iterations per second
Loading Time: ((0 * 3600 + 2 * 60 + 18.77) - 129.627) = 138.77 - 129.627 = 9.143 seconds
Daz Studio Version: 4.12.0.42 BETA
Optix Prime Acceleration: OFF
Benchmark Results
DAZ_STATS
2019-07-30 22:08:00.921 Finished Rendering
2019-07-30 22:08:00.956 Total Rendering Time: 2 minutes 12.65 seconds
IRAY_STATS
2019-07-30 22:08:17.102 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
2019-07-30 22:08:17.102 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 2080 Ti): 883 iterations, 4.572s init, 123.675s render
2019-07-30 22:08:17.102 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 1 (GeForce RTX 2080 Ti): 917 iterations, 4.722s init, 122.562s render
Iteration Rate: (1800 / 123.675) = 14.554 iterations per second
Loading Time: ((0 * 3600 + 2 * 60 + 12.65) - 123.675) = 132.65 - 123.675 = 8.975 seconds
Because you have RTX. Also, if by chance NVLink was working, that can actually slow performance in NVLink mode. NVLink mode should only be used when you need to pool the VRAM, assuming that is possible. Because of how NVLink works, the cards must talk to each other more and share more data, which slows them down a little bit versus letting them render separately without NVLink.
The Daz Reddit is a ghost town. Last post was 4 DAYS ago, and scrolling just a few posts takes you 27 days back. It is a shame that Daz has to be like the NFL...No Fun League.
I understand your points about NVLink. I totally expected a minor improvement. But why is Prime Acceleration OFF faster than ON?
Yeah, this is extremely interesting.
Logically, neither you nor Aala should be seeing any discernible difference in render times between OptiX Prime On vs OptiX Prime Off tests here because, in the case of rendering with only RTX GPUs on Daz Studio 4.12+, OptiX Prime is disabled internally by Iray and all raytracing is farmed directly out to the RT cores on the cards themselves. Currently the only programmatic difference between OptiX Prime checked/unchecked with 20XX cards in 4.12+ is that one state pops up errors in the log file about OptiX Prime being improperly called for when it isn't supported, versus a clean log file with no extraneous error messages (renders still finish successfully either way.) At least that's been the case with my Titan RTX under 4.12.0.033. Could you post log files (at least the portions starting after "Rendering image") for both an OptiX Prime On and Off test run? I'm gonna try doing the same with my Titan RTX tomorrow on 4.12.0.042 as well, and would love to compare how my logs look against yours.
Btw could you also try OptiX Prime on vs off with just a single 2080Ti activated and see if the same patterns are true? It could be that what both you and Aala are seeing here is a multi-GPU thing. Specifically I'm wondering if perhaps the current DS/Iray version has a bug where having more than one RTX card activated at once is causing Iray to revert back to its pre-RTX GPU era behavior where OptiX Prime is used rather than RTCores for raytracing. Meaning that there may be a DS/Iray bugfix in the near future that could net both you and Aala a noticeable speedup in dual 2080Ti rendering scenarios regardless of whether OptiX Prime is checked on or off..
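If it helps, here's roughly how I'd pull the OptiX-related lines out of the log for comparison. Treat the path below as an assumption - it's the usual Windows default, but check where yours actually lives (you can open the log from Daz Studio's Help menu):

```python
import os
import pathlib

# Usual Windows default location of the Daz Studio log (an assumption -
# adjust for your own installation).
log_path = pathlib.Path(os.environ["APPDATA"]) / "DAZ 3D" / "Studio4" / "log.txt"

with open(log_path, encoding="utf-8", errors="replace") as f:
    for line in f:
        if "optix" in line.lower():
            print(line.rstrip())
```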
That's been what my testing so far has indicated. OptiX Prime still registers as on/off in the log file as it always has in the case of my GTX 1050 laptop, whereas my RTX machine says it's unsupported. I'm assuming that if I had a GTX card in my RTX system as well and had it selected instead, OptiX Prime would once again appear as a valid configuration option in the log file like it has in the past.
So far it seems to be just Embree used for all raytracing functionality in CPU-based rendering on 4.12+ (based, again, on my testing so far.) I remember reading somewhere that speedups in Embree's codebase have pretty much made it independently as good as (if not better than) OptiX Prime ever was at accelerating raytracing. Thus the utilization change.
Would be nice, but I'm pretty sure that's something Iray's developers would have to first develop and then Daz's developers implement in order for us to have use of it. Not inclined to hold my breath on all that happening.