Iray Starter Scene: Post Your Benchmarks!


Comments

  • grinch2901grinch2901 Posts: 1,246

    Another thing to check is that the viewport is set to Iray before pressing render. I'm sure you already know this, but this also impacts render times significantly. 

    Is this standard practice people are using for the benchmarks? I didn't do this when I ran mine. I know it's 30 seconds to as much as a minute faster if I do, but I didn't think most people were doing that. Ideally everyone would follow the same process, whatever it is, so we can get good apples-to-apples comparisons for those trying to make buying decisions.

  • Robert FreiseRobert Freise Posts: 4,481

    Don't know about everyone else but that's how I ran mine

  • junkjunk Posts: 1,362
    edited March 2017

    Another thing to check is that the viewport is set to Iray before pressing render. I'm sure you already know this, but this also impacts render times significantly. 

    Is this standard practice people are using for the benchmarks? I didn't do this when I ran mine. I know it's 30 seconds to as much as a minute faster if I do, but I didn't think most people were doing that. Ideally everyone would follow the same process, whatever it is, so we can get good apples-to-apples comparisons for those trying to make buying decisions.

    Wow, I didn't see the very first post stating to change your viewport to Iray before running. So my benchmarks are without changing the viewport first. Like you said, an apples-to-oranges comparison.

    Post edited by junk on
  • dragotxdragotx Posts: 1,138

    Another thing to check is that the viewport is set to Iray before pressing render. I'm sure you already know this, but this also impacts render times significantly. 

    It does?  Hot digity, I'm gonna have to try that tonight!  Thanks!

  • hphoenixhphoenix Posts: 1,335

    Switching the viewport to use Iray Interactive mode really should NOT be done. This pre-loads and pre-calculates a lot of stuff, and since that load time is affected by the card itself, the number of PCIe lanes it is running on, and the CPU as well, it really isn't the greatest comparison.

    If the posted numbers are based on a fresh start-up of DS, loading the scene, modifying the render settings (OptiX, optimization), and hitting render, then the numbers are very consistent. If you switch to Iray Interactive viewport mode first, the performance could suddenly be wildly different, depending on drivers, etc. And for CPU-only testing, having the viewport in Iray mode could actually cause problems.

    The whole point is to get the numbers from the log file anyway: startup time and rendering time. The total time is typically what gets posted, but really it should only be the rendering time we are concerned with, as startup time has a whole host of other dependencies that could affect it greatly.

    I too had not noticed that blurb in the OP about switching to the Iray viewport beforehand. And I stick to my view that it isn't the correct way to do it.

     

  • grinch2901grinch2901 Posts: 1,246

    I ran it on my GTX 1060, here's what I had vs what I have:

    OLD (GT 750M 2GB) :  Total Rendering Time: 30 minutes 29.77 seconds

    NEW (GTX 1060 6 GB): Total Rendering Time: 4 minutes 59.49 seconds

    outrider42, your prediction of 5 mins for the 6 GB version of the 1060 was way off, 0.51 seconds to be exact. Wait, that's pretty much perfect. Well, it wasn't the 3 mins I was lusting over, but I'm very happy! What a difference!

    To add to the discussion of "preview window in Iray mode or not", the above was without preview mode on.  With it on, all other settings the same, I got this:

    GTX1060 6GB with Iray preview on prior to render: Total Rendering Time: 4 minutes 33.74 seconds

    Not an earth-shattering difference, though it would be really important if you were doing animations, as 25 seconds per frame would add up really fast. But it seems for a static render it's a relatively small change, at least with my hardware.
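    For scale, the saving works out like this (times taken from the post above; the 30 fps animation figure is only an illustration):

```python
off = 4 * 60 + 59.49    # preview off: 4 min 59.49 s
on = 4 * 60 + 33.74     # Iray preview on beforehand: 4 min 33.74 s
saved = off - on        # 25.75 s per frame

print(f"{saved / off:.1%} faster")   # ~8.6%
# For a 1-second animation at 30 fps, the per-frame saving compounds:
print(f"{saved * 30:.0f} seconds saved per second of animation")
```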

  • IllriggerIllrigger Posts: 12
    Illrigger said:

    For reference, I am getting a 1080 Ti on Monday and wanted a reference point from my current setup:

    Core i7 4770k @ 4.4 Ghz
    GTX 780 3GB (overclocked)
    GTX 660 2GB (stock clocked)

    CPU + Both GPUs: 2 minutes 45.75 seconds
    CPU + GTX 780 only: 3 minutes 25.68 seconds

    Following up, was too busy to run this before this morning:

    CPU + GTX 1080 Ti: 2 minutes 11.74 seconds
    CPU + GTX 1080 Ti + GTX 660: 1 minutes 53.41 seconds

    Obviously the benefit of adding in the 660 is pretty low at this point - I really only have it in there as a dedicated PhysX card for gaming, and I don't really need that anymore with the power of a 1080 Ti to play with. I would put the 780 back in as a second card instead, but the backplate on it gets REALLY hot, and my initial testing showed that it was actually causing the 1080 Ti above it to throttle. Maybe someday if I get a new CPU and motherboard with an extra slot's worth of spacing between them, but for now this is what I get.

     

  • mikmodmikmod Posts: 65
    stryfe said:

    Ryzen 1800X @ Stock
    GTX 970 x2
    32GB RAM
    OptiX On


    CPU Only:            Total Rendering Time: 18 minutes 13.51 seconds
    GPU1 Only:          Total Rendering Time: 4 minutes 12.5 seconds
    CPU + GPU1:       Total Rendering Time: 3 minutes 39.45 seconds
    GPU1+2:               Total Rendering Time: 2 minutes 23.47 seconds
    CPU + GPU1+2:    Total Rendering Time: 2 minutes 8.30 seconds
      
    Can't wait for the ASUS Strix 1080 Ti's to come out to finish my build!

    Hm, on my i7 4771 (stock) and GTX 970 (MSI 100ME Edition) I got 4 minutes 21 seconds with GPU only and OptiX on (optimized for memory), on Windows 7 x64, so this model of Ryzen can have a bit of an advantage over Haswell cores ^_^

    On a Zotac 1080 AMP! the time was 2 minutes 51 seconds (the rest of the hardware was the same).

  • SyndarylSyndaryl Posts: 521

    Testing on my slightly wonky old system:

    AMD Phenom II X6 1090T Processor, 3.20Ghz (6 cores)
    16GB RAM, 12800 DDR3
    Zotac GF GTX970 AMP! Extreme 4GB
    Mobo: M4A89GTD PRO/USB3 AMD890GX AM3
    7200RPM HDD

    Clean startup of Studio each time, regular openGL viewport each time, benchmark scene, unmodified:

    CPU + GPU + OptiX Prime:
    hits 90% converged at ~206s @3238 iterations
    hits convergence threshold 293s @4833 iterations

    CPU + GPU:
    hits 90% converged at ~324s @3229 iterations
    hits convergence threshold 460s @4812 iterations

    So even with an AMD CPU, I'm definitely benefiting from OptiX. I was wondering if it mostly benefited Intel chipsets, but apparently not.

    GPU + OptiX Prime:
    hits 90% converged at ~194s @3213 iterations
    hits convergence threshold 279s @4840 iterations

    Takeaway: my wonky, dusty AMD CPU is bottlenecking me. I will keep it unchecked going forward!
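    Those convergence-threshold times translate into a speedup factor like so (a quick sketch using the seconds reported in the post above; the helper name is mine):

```python
def speedup(baseline_s: float, optimized_s: float) -> float:
    """How many times faster the optimized run finished."""
    return baseline_s / optimized_s

# Convergence-threshold times from the post above, in seconds:
print(f"CPU+GPU with OptiX:  {speedup(460, 293):.2f}x")  # ~1.57x
print(f"GPU-only with OptiX: {speedup(460, 279):.2f}x vs the CPU+GPU baseline")
```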

  • SoneSone Posts: 84

    Thought I'd share my new Ryzen PC build results. I'm keeping this as my baseline as I tweak and update things like RAM speeds and BIOS updates... of which there will be plenty, the platform being so new.

    My build all at default settings (Win 10 Home at Balanced Power setting)

    Ryzen 1700 65w plain jane

    Asus Prime X370 mobo

    16gb Corsair @ 2133

    1TB Plextor M.2 x4 SSD boot and DAZ content

    MSI GTX 970 4GB OC model

    Gigabyte GTX 1070 8GB OC model

     

    I ran each bench test in this manner: opened DAZ, loaded the SickleYield scene, ran a first (cold) run, then ran a second run with the scene already loaded, on each hardware config.

     

    Ryzen CPU only

    1st Total Rendering Time: 25 minutes 16.23 seconds

    2nd Total Rendering Time: 24 minutes 48.97 seconds

    1st Total Rendering Time: 21 minutes 31.16 seconds Optix on

    2nd Total Rendering Time: 21 minutes 12.57 seconds Optix on

     

    GTX 970 only

    1st Total Rendering Time: 7 minutes 29.23 seconds

    2nd Total Rendering Time: 7 minutes 18.42 seconds

    1st Total Rendering Time: 4 minutes 32.13 seconds Optix on

    2nd Total Rendering Time: 4 minutes 16.96 seconds Optix on

     

    GTX 1070 only

    1st Total Rendering Time: 5 minutes 11.0 seconds

    2nd Total Rendering Time: 4 minutes 58.57 seconds

    1st Total Rendering Time: 3 minutes 17.36 seconds Optix on

    2nd Total Rendering Time: 3 minutes 3.75 seconds Optix on

     

    GTX 970 and 1070 together

    1st Total Rendering Time: 3 minutes 17.27 seconds

    2nd Total Rendering Time: 3 minutes 4.36 seconds

    1st Total Rendering Time: 2 minutes 5.25 seconds Optix on

    2nd Total Rendering Time: 1 minutes 52.15 seconds Optix on

     

    CPU+970+1070 together

    1st Total Rendering Time: 3 minutes 5.33 seconds

    2nd Total Rendering Time: 2 minutes 52.58 seconds

    1st Total Rendering Time: 2 minutes 3.17 seconds Optix on

    2nd Total Rendering Time: 1 minutes 49.94 seconds Optix on

     

    Looks like a nice boost with Optix on! :)

  • Godless8Godless8 Posts: 25

    https://www.daz3d.com/forums/discussion/162446/gtx-1080-ti-benchmark

    My Machine consists of the following specs:

    GPU: 2x GTX 1080 Ti, both from Inno3D (no SLI; one at 1950 MHz, the second at 1800 MHz)

    CPU: Ryzen 7 1700X @ 3.6 GHz boost

    RAM: Corsair Vengeance 3000 64 GB @ 2111 MHz

    MOBO: ASRock X370 Taichi

    Storage: Samsung 1 TB 850 EVO SSD


    Test1:

    Sickelyield's benchmark:

    https://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks

    using 2 Ti's and optix acceleration. NO CPU

    Completed in 1 min 20s

    ~5000 iterations

    you can see more benchmarks in the first link.

  • Hi,

    no Pascal Titan X in the list yet, so I thought I post the results of testing the CUDA-monster I bought yesterday.

    Dell Precision T3500 with Corsair 850 W PSU, 24 GB DDR3 ECC RAM
    CPU: Xeon X5675 @ 3.07 GHz, 6 cores/12 threads, 95 W TDP
    GPU0: Nvidia Titan X @ 1898 MHz (air), 12 GB, 3584 CUDA cores (Pascal)
    GPU1: Gigabyte GTX 980 G1 Gaming @ 1494 MHz (air), 4 GB, 2048 CUDA cores (Maxwell)

    Optix ON:
    GPU0+GPU1
    Total Rendering Time: 1 minutes 35.53 seconds
    GPU0
    Total Rendering Time: 2 minutes 11.94 seconds
    GPU1
    Total Rendering Time: 3 minutes 38.28 seconds

    Optix OFF:
    GPU0+GPU1
    Total Rendering Time: 2 minutes 25.18 seconds
    GPU0
    Total Rendering Time: 3 minutes 32.73 seconds
    GPU1
    Total Rendering Time: 5 minutes 56.32 seconds

    No CPU testing, would be bottlenecking the GPUs anyway.

    Display connected to GPU0
    Each test with freshly started Daz Studio 4.9.3.116
    No Iray viewport opened.
    Loaded the benchmark (Thank you, Sickleyield!)
    Hit RENDER

    With Viewport set to IRAY (textures and geometries already loaded to the GPUs)
    the first Render (GPU0+GPU1+Optix ON)
    took Total Rendering Time: 1 minutes 16.88 seconds

  • i5 3470 3.2Ghz

    16GB DDR3 Ram

    Gigabyte GTX 1070 8GB Mini ITX OC 

    Crucial 256GB SSD and WD 7200RPM 1TB HDD

    CPU + GPU with OptiX on 3 min 18 Secs 

    GTX 1070 Only 5 min 2 Secs

    GTX 1070 Only with OptiX 3 min 2 Secs

  • junkjunk Posts: 1,362
    ...Pascal Titan X...

    Optix ON:
    GPU0+GPU1
    Total Rendering Time: 1 minutes 35.53 seconds
    GPU0
    Total Rendering Time: 2 minutes 11.94 seconds
    GPU1
    Total Rendering Time: 3 minutes 38.28 seconds

     

    The Titan X is a beast of a card. But there must be some strange driver issue, an intricacy of Daz3D with the hardware, or it's just this particular scene. My two overclocked 1070's came in slightly faster (0.03 seconds) than this configuration of a Titan X + 980. I didn't set the viewport to Iray first, and I always restart the program before testing. Any thoughts, anyone? Perhaps two matched cards are better than mixing and matching CUDA generations?

  • ToborTobor Posts: 2,300

    For all, in the interest of more accurate benchmarking: Do a "holding" render by starting a few iterations, then cancel. Don't close the rendering window. This keeps the geometry and textures in memory. This part has almost nothing to do with the GPU, and is basically entirely processed by the CPU.

    Then, for each subsequent test, use your own stopwatch to time between Iteration 1 and the final iteration. Don't include the initial process of setting up for the render, which likewise heavily involves the CPU.

    Optix on/off may not be much of a telling parameter, as it accelerates ray tracing against triangle geometry, whereas most of the Daz human characters have quad geometry. Depending on your scene it may have a big impact, or it may have none. It would be interesting to do benchmarks with only a base G3 character ... no hair, clothing, set pieces, etc.

    Junk, your observation of the two 1070 SC's is in keeping with others'. The Titan and 980 are mismatched in architecture, and it's been suggested Iray will "clock down" to the slowest card in the system. This may be what's causing the disparity in the results, or it may be the testing procedure.
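    Tobor's procedure (a warm-up "holding" pass, then timing only the iteration phase) can be sketched like this. `one_iteration` is a stand-in for whatever does the work; in Daz Studio itself you would use a stopwatch against the progress display, as he describes:

```python
import time

def time_iterations(one_iteration, n_iters: int) -> float:
    """Time only the iteration loop, excluding scene setup/load,
    which is mostly CPU-bound and not what we want to benchmark."""
    start = time.perf_counter()         # clock starts at iteration 1
    for _ in range(n_iters):
        one_iteration()
    return time.perf_counter() - start  # setup time is never counted

# "Holding" pass: keeps geometry and textures resident; result discarded.
time_iterations(lambda: None, 10)
# Measured pass: only this number gets compared between configurations.
elapsed = time_iterations(lambda: None, 5000)
```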

  • Junk,

    of course two Pascal cards (your 1070s) can be faster than my Pascal/Maxwell setup.

    Your 1800X CPU is much faster than my Xeon: it's 15374 vs. 8556 in Passmark score.

    8 cores/16 threads will load the scene much faster via PCIe to your two GPUs, which are identical, at least concerning bus width and bandwidth.

    As Tobor stated, the Titan and 980 are mismatched in architecture; it's all about different CUDA compute capability (5.2 vs 6.1) and different host-to-device and device-to-host speeds.

    This will slow down the whole system a bit. But Iray does not "clock down" anything.

    The log file states that in any render under 4 GB, where I can use both GPUs, the Titan X computes two thirds of the iterations and the 980 one third. So 11433 GFlops versus 4612 stock makes 16045, plus the OC bonus makes about 17700 GFlops.
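    The arithmetic behind that split checks out (stock GFlops figures as quoted in the post; the ~17700 OC figure is the poster's own estimate, not something verifiable here):

```python
titan_x = 11433   # Pascal Titan X, stock GFlops (as quoted above)
gtx_980 = 4612    # GTX 980, stock GFlops (as quoted above)
total = titan_x + gtx_980

print(total)                       # 16045 GFlops combined
print(round(titan_x / total, 2))  # 0.71, i.e. roughly two thirds of the work
```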

    For me that reduced the rendering time for a 1 sec/30 frames anim from half an hour to 10 minutes.

    And the Titan has to drive my monitor, too.

    Yes it is a beast, and I was lucky to get a used one for 850 EUR from a gamer who needed the bucks for 2x 1080 Ti in SLI.

    If I can acquire a second one, I will. Or I'll swap the 980 for another Pascal card, a 1080 Ti maybe; the specs are almost the same as the Titan X.

  • surrealsurreal Posts: 171

    I had problems when I mixed Titan X (Maxwell) and 1080 (Pascal) cards. I could only get the system to run stable with a max of three cards. The system fell over completely if I swapped in a 1080 Ti beside the Titan X (Maxwell). A recent BIOS update and using four identical cards appear to let the system run reasonably stable, but it has proved impractical, as the four 250 W double-width cards get very hot when rendering for more than a few minutes. I think MEC4D had the right idea, starting with a commercial-grade chassis and identical video hardware.

  • junkjunk Posts: 1,362
    edited April 2017

    8 cores/16 threads will load the scene much faster via pcie to the two gpus, who are identical, at least concerning bus width and bandwith.

    I didn't even think about the CPU affecting overall performance, let alone PCIe generation 1 vs 2 vs 3 loading data to the cards as a factor. So I believe once the scene is loaded to the cards, the GPUs' raw speed shines through. This makes sense, and for very short renders of less than 2 minutes it's hard to gauge. 

    And when it comes to Passmark scores, I have the Ryzen overclocked to where it gets around 19800, with 19859 as a high, on a semi-consistent level. I just love squeezing every bit of performance out of what I have. Also factor in that if the Titan X were OC'd, it would trounce most everything.

    Post edited by junk on
  • DDIGITALDNRDDIGITALDNR Posts: 2
    edited April 2017

    Upgraded my System with New CPU, Mobo and Ram.

     

    AMD Ryzen 5 1600 Overclocked to 3.7Ghz (6 Cores and 12 Threads)

    16GB DDR4 Ram Running at 2933Mhz

    Gigabyte GTX 1070 8GB Mini ITX OC 

    Asus PRIME B350M-A Motherboard

    Crucial 256GB SSD and WD 7200RPM 1TB HDD

    CPU + GPU with OptiX on 2 Min 42 Seconds

    Post edited by DDIGITALDNR on
  • GatorGator Posts: 1,312

    Main rig

    i7-6700K

    48 GB RAM

    2 Titan X (Pascal)

    Opti-X enabled, CPU disabled.  SLI is enabled.  The kid was playing a game and I had too much stuff running to turn it off. 

    1 min 11s
    Render rig

    AMD FX-8320

    32GB RAM

    2 Titan X (Maxwell)

    Opti-X enabled, CPU disabled.  SLI off.

    1 min 54 s.

     

    Interesting how big the discrepancy is. I wonder if the system is slower with the older AMD processor. On bigger scenes the Pascal cards are around 30-40% faster.

     

    Ryzen 1800X @ 3.65 GHz stock, Maxwell GTX Titan X, 64 GB RAM @ 2666 MHz

    GPU Only (OptiX on): 3 minutes 3.27 seconds

    CPU Only: 17 minutes 51.60 seconds

    CPU+GPU: 2 minutes 47.4 seconds

     

  • IsazformsIsazforms Posts: 210

    Nvidia GTX 1060

    GPU only 4:46.

    Is it good enough or not?

  • Richard HaseltineRichard Haseltine Posts: 102,374
    Isazforms said:

    Nvidia GTX 1060

    GPU only 4:46.

    Is it good enough or not?

    For what? As long as you can render what you want to render in an acceptable time, it's good enough; if not, you need to look at optimising your scenes or a hardware upgrade.

  • JamesJABJamesJAB Posts: 1,760
    edited May 2017

    -Desktop-  Dell Precision T7500

    Dual Intel Xeon X5570 @ 2.93 GHZ (2x4 cores 16 threads total) 24GB RAM
    Nvidia Geforce GTX 1060 6GB RAM 

    3 minutes 48 Seconds - CPU/GPU (OptiX on) (CPU holds @ 3.13 GHZ boost throughout entire render)
    4 minutes 17 Seconds - GPU only (OptiX on)
    Included picture is a screencap as the render finished, showing GPU stats and render time.  The break in the middle of the graph is between renders (GPU+CPU on the left and GPU only on the right)

    -Notebook-  Dell Precision M6700

    Core i7-3840QM @ 2.80 GHZ (4 cores 8 threads) 16GB RAM
    Nvidia Quadro K5000M 4GB RAM

    12 minutes 38 seconds - CPU/GPU (OptiX on) (CPU holds @ 3.31 GHZ boost throughout entire render)
    15 minutes 20 Seconds - GPU only (OptiX on)

     

    Render Results.jpg
    2867 x 1754 - 550K
    Post edited by JamesJAB on
  • FletcherFletcher Posts: 63
    edited May 2017

    From my proudly self-built, 2-day-old rig:
    I7-7700K
    Asus Strix Z270E
    Asus Strix 1080 Ti (overclocked out of the box)
    16 GB RAM

    --------------------------------------------------------

    2 min 0.56 secs
    GPU + Optix Memory

    1 min 53.98 secs
    GPU + Optix Speed

    3 min 19.88 secs
    GPU  - Optix Memory

    2 min 16.49 secs
    GPU  - Optix Speed

    With these different settings, is the final render quality the same? To my eyes it is.

    Post edited by Fletcher on
  • Leonides02Leonides02 Posts: 1,379
    edited May 2017

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

    Quick question: Is there an appreciable difference in render time depending on HOW MUCH of a GPU's memory is left?

    Or does it only matter that the scene fits in the GPU's memory?

    Post edited by Leonides02 on
  • TooncesToonces Posts: 919

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

    Quick question: Is there an appreciable difference in render time depending on HOW MUCH of a GPU's memory is left?

    Or does it only matter that the scene fits in the GPU's memory?

    Nope. The scene just has to fit; the rest is CUDA cores.
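    A quick way to sanity-check that a scene will fit is to look at free VRAM. The query flags below are nvidia-smi's standard CSV query interface, but the helper names, the fallback-to-CPU note, and the sample line are my own sketch:

```python
def parse_smi_line(line: str) -> tuple[int, int]:
    """Parse one line of:
      nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits
    e.g. "1234, 11264" -> (used MiB, total MiB)."""
    used, total = (int(field) for field in line.split(","))
    return used, total

def scene_fits(scene_mib: int, used_mib: int, total_mib: int) -> bool:
    # Iray needs the whole scene in VRAM; if it doesn't fit,
    # the card is dropped and rendering falls back to CPU.
    return scene_mib <= total_mib - used_mib

used, total = parse_smi_line("1234, 11264")  # hypothetical 1080 Ti reading
print(scene_fits(8000, used, total))         # True: an 8 GB scene fits
```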

  • SoneSone Posts: 84

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

     

    Holy smokes, that's a lot of CUDA cores! Don't blink or you'll miss the render! :) 

  • KaladinKaladin Posts: 1
    edited May 2017

    Great thread.

    Ryzen 1700 build. (1080 ti + 1070) with Optix on:

    • 1070 MSI gaming X (factory overclocked) - Started with just this card, bought the ti later. 
      • GPU: 3min 6 secs 
    • 1080ti ASUS founders edition 
      • GPU: 2min 
    • 1080ti + 1070
      • GPU only: 1min 10secs
      • GPU + CPU: 1min 12secs 

    Based on the previous reports, the best value-for-money dual-GPU machine would be a 2x 1070 setup (if you can get each for < $350 USD): faster renders than a single 1080 Ti. (I'm biased, since I started with just the single 1070.) 

    For me, the additional $300+ for a dual-1080 Ti setup (a 10-second improvement at most) didn't make sense. 

    Post edited by Kaladin on
  • Leonides02Leonides02 Posts: 1,379
    Sone said:

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

     

    Holy smokes, that's a lot of CUDA cores! Don't blink or you'll miss the render! :) 

    I know! But you know what? It still takes me an hour+ to get to 99.9% convergence on those pesky interior scenes. Argh!
