Iray Starter Scene: Post Your Benchmarks!


Comments

  • Sone Posts: 84
    Sone said:

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

     

    holy smokes that's a lot of CUDA CORES! Don't blink so you see it render! :) 

    I know! But you know what? It still takes me an hour+ to get to 99.9% convergence on those pesky interior scenes. Argh!

     

     

    Oh wow... thanks for confirming that those interiors of mine really are never-ending. I've sat down long after starting a render, mouth agape, watching it still hammer my machine for ages to get somewhere decent.

  • Gator Posts: 1,312

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

    Quick question: Is there an appreciable difference in render time depending on HOW MUCH of a GPU's memory is left?

    Or does it only matter that the scene fits in the GPU's memory?

    Dood!  I'd love to see your scores.

    I'm contemplating an upgrade.  Doesn't sound like the Titan Xp will go much faster, however, the extra RAM may be a factor for me.  If I can get by without it, then the 1080ti sounds very good.

  • Takeo.Kensei Posts: 1,303

    Hi,

    no Pascal Titan X in the list yet, so I thought I'd post the results of testing the CUDA-monster I bought yesterday.

    Dell Precision T3500 with Corsair 850 Watts 24 GB DDR3-ECC-RAM
    CPU:Xeon X5675 @3,07GHz 6 Cores/12 Threads 95 Watt TDP
    GPU0:Nvidia Titan X   @1898MHz  (air) 12GB 3584 CUDA-cores (Pascal)
    GPU1:Gigabyte GTX 980 G1 GAMING  @1494MHz  (air)  4GB 2048 CUDA-cores (Maxwell)

    Optix ON:
    GPU0+GPU1
    Total Rendering Time: 1 minutes 35.53 seconds
    GPU0
    Total Rendering Time: 2 minutes 11.94 seconds
    GPU1
    Total Rendering Time: 3 minutes 38.28 seconds

    Optix OFF:
    GPU0+GPU1
    Total Rendering Time: 2 minutes 25.18 seconds
    GPU0
    Total Rendering Time: 3 minutes 32.73 seconds
    GPU1
    Total Rendering Time: 5 minutes 56.32 seconds

    No CPU testing, would be bottlenecking the GPUs anyway.

    Display connected to GPU0
    Each test with freshly started Daz Studio 4.9.3.116
    No Iray viewport opened.
    Loaded the benchmark (Thank you, Sickleyield!)
    Hit RENDER

    With Viewport set to IRAY (textures and geometries already loaded to the GPUs)
    the first Render (GPU0+GPU1+Optix ON)
    took Total Rendering Time: 1 minutes 16.88 seconds

    Hi, could you do the tests again without using the Titan for display, setting it to TCC mode?

    See under Windows Driver Model to do that http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2014

     

    Main rig

    i7-6700K

    48 GB RAM

    2 Titan X (Pascal)

    Opti-X enabled, CPU disabled.  SLI is enabled.  The kid was playing a game and I had too much stuff running to turn it off. 

    1 min 11s

     

     

    Render rig

    AMD FX-8320

    32GB RAM

    2 Titan X (Maxwell)

    Opti-X enabled, CPU disabled.  SLI off.

    1 min 54 s.

     

    Interesting how big the discrepancy is.  I wonder if the system is slower with the older AMD processor.  On bigger scenes the Pascals are around 30-40% faster.

     

    Hi, same here, could you put one Titan in TCC mode (the one without an attached display)?

     

    Thanks

  • Leonides02 Posts: 1,379

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

    Quick question: Is there an appreciable difference in render time depending on HOW MUCH of a GPU's memory is left?

    Or does it only matter that the scene fits in the GPU's memory?

    Dood!  I'd love to see your scores.

    I'm contemplating an upgrade.  Doesn't sound like the Titan Xp will go much faster, however, the extra RAM may be a factor for me.  If I can get by without it, then the 1080ti sounds very good.

    Ok! I'll do it tonight and post. :)

  • Hi, could you do the tests again without using the Titan for display, setting it to TCC mode?

     

    Did one test again, as requested, with the display connected to the GTX 980 and the Titan X (Pascal) set to TCC mode.

    After a loooong reboot and driver installation:

    Optix on and only the Titan X: 2 min 11 sec

    No difference at all.

    Viewport set to Iray, Optix on and only the Titan X: 1 min 51 sec

    Set it back to WDDM so I can read the temperatures and boost speeds with GPU-Z.

     

     

     

     


     

  • Leonides02 Posts: 1,379

    Just got a new PC with 3x 1080 TI's! Looking forward to giving it a shot.

    Quick question: Is there an appreciable difference in render time depending on HOW MUCH of a GPU's memory is left?

    Or does it only matter that the scene fits in the GPU's memory?

    Dood!  I'd love to see your scores.

    I'm contemplating an upgrade.  Doesn't sound like the Titan Xp will go much faster, however, the extra RAM may be a factor for me.  If I can get by without it, then the 1080ti sounds very good.

     

    With my 3x 1080 TI's I have 90% convergence with GPU+OptiX Prime at 29.5 seconds.

    Not bad. :)

  • With my 3x 1080 TI's I have 90% convergence with GPU+OptiX Prime at 29.5 seconds.

    Not bad. :)

    Is this with the viewport set to Iray or OpenGL?

    Just asking, because loading the scene to the gpu takes about 20 sec on my rig.

    The Titan is at 90% after 26 sec (6 sec of pure rendering time), but the remaining 10% take about 2 min 5 sec.

     

  • Leonides02 Posts: 1,379

    With my 3x 1080 TI's I have 90% convergence with GPU+OptiX Prime at 29.5 seconds.

    Not bad. :)

    Is this with the viewport set to Iray or OpenGL?

    Just asking, because loading the scene to the gpu takes about 20 sec on my rig.

    The Titan is at 90% after 26 sec (6 sec of pure rendering time), but the remaining 10% take about 2 min 5 sec.

     

     

    That's without the GPU load.

    For Titan, do you mean it is at 90% convergence after 26 seconds, or 90% of the progress (the yellow bar) to 90% convergence? I could try a 99% convergence...

  • Toonces Posts: 919

    The scene defaults at standard Rendering Convergence Ratio of 95%.

    So most of the timings are basically -- render till 'done'. I.e., when convergence reaches 95% or when yellow progress bar reaches 100% (should be same).

    I was expecting 3x 1080 TI to be around...40-50 seconds.
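
    Toonces' "render till done" point is also why the last few percent cost so much: Monte Carlo path-tracing noise falls off roughly as 1/sqrt(samples), so each extra "nine" of convergence multiplies the sample count. A rough sketch (the 1/sqrt model and the base_samples scale are illustrative assumptions; Iray's convergence ratio is actually a per-pixel "converged pixels" measure, not a direct noise figure):

```python
def samples_needed(convergence, base_samples=100):
    """Rough model: treat 'convergence' as 1 - remaining noise,
    with noise falling off as 1/sqrt(samples), as is typical of
    Monte Carlo renderers. base_samples is an arbitrary scale."""
    noise_target = 1.0 - convergence
    # noise = 1/sqrt(n)  =>  n proportional to 1/noise^2
    return base_samples / noise_target ** 2

n90 = samples_needed(0.90)    # noise down to 10%
n95 = samples_needed(0.95)    # noise down to 5%
n999 = samples_needed(0.999)  # noise down to 0.1%

print(round(n95 / n90, 1))   # -> 4.0   (95% needs ~4x the samples of 90%)
print(round(n999 / n90))     # -> 10000 (99.9% needs ~10000x the samples)
```

    Under this model, going from 90% to 95% quadruples the work, which lines up with reports in this thread that the tail end of a render dominates the total time.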

  • Leonides02 Posts: 1,379
    Toonces said:

    The scene defaults at standard Rendering Convergence Ratio of 95%.

    So most of the timings are basically -- render till 'done'. I.e., when convergence reaches 95% or when yellow progress bar reaches 100% (should be same).

    I was expecting 3x 1080 TI to be around...40-50 seconds.

    Yep, that's about right once you factor in the GPU load time.

  • Gator Posts: 1,312
    Toonces said:

    The scene defaults at standard Rendering Convergence Ratio of 95%.

    So most of the timings are basically -- render till 'done'. I.e., when convergence reaches 95% or when yellow progress bar reaches 100% (should be same).

    I was expecting 3x 1080 TI to be around...40-50 seconds.

    Yep, that's about right once you factor in the GPU load time.

    What's the time at the default 95% render setting saved within the scene?

  • Gator Posts: 1,312

    Hi,

    no Pascal Titan X in the list yet, so I thought I'd post the results of testing the CUDA-monster I bought yesterday.

    Dell Precision T3500 with Corsair 850 Watts 24 GB DDR3-ECC-RAM
    CPU:Xeon X5675 @3,07GHz 6 Cores/12 Threads 95 Watt TDP
    GPU0:Nvidia Titan X   @1898MHz  (air) 12GB 3584 CUDA-cores (Pascal)
    GPU1:Gigabyte GTX 980 G1 GAMING  @1494MHz  (air)  4GB 2048 CUDA-cores (Maxwell)

    Optix ON:
    GPU0+GPU1
    Total Rendering Time: 1 minutes 35.53 seconds
    GPU0
    Total Rendering Time: 2 minutes 11.94 seconds
    GPU1
    Total Rendering Time: 3 minutes 38.28 seconds

    Optix OFF:
    GPU0+GPU1
    Total Rendering Time: 2 minutes 25.18 seconds
    GPU0
    Total Rendering Time: 3 minutes 32.73 seconds
    GPU1
    Total Rendering Time: 5 minutes 56.32 seconds

    No CPU testing, would be bottlenecking the GPUs anyway.

    Display connected to GPU0
    Each test with freshly started Daz Studio 4.9.3.116
    No Iray viewport opened.
    Loaded the benchmark (Thank you, Sickleyield!)
    Hit RENDER

    With Viewport set to IRAY (textures and geometries already loaded to the GPUs)
    the first Render (GPU0+GPU1+Optix ON)
    took Total Rendering Time: 1 minutes 16.88 seconds

    Hi, could you do the tests again without using the Titan for display, setting it to TCC mode?

    See under Windows Driver Model to do that http://www.migenius.com/products/nvidia-iray/iray-benchmarks-2014

     

    Main rig

    i7-6700K

    48 GB RAM

    2 Titan X (Pascal)

    Opti-X enabled, CPU disabled.  SLI is enabled.  The kid was playing a game and I had too much stuff running to turn it off. 

    1 min 11s

     

     

    Render rig

    AMD FX-8320

    32GB RAM

    2 Titan X (Maxwell)

    Opti-X enabled, CPU disabled.  SLI off.

    1 min 54 s.

     

    Interesting how big the discrepancy is.  I wonder if the system is slower with the older AMD processor.  On bigger scenes the Pascals are around 30-40% faster.

     

    Hi, same here, could you put one Titan in TCC mode (the one without an attached display)?

     

    Thanks

     

    It's not a Tesla or Quadro, but it let me set it.  I'm running Windows 10 x64.  But Daz Studio 4.9.166 wouldn't load with the non-connected card in TCC mode.  It just froze on the splash screen (Pascal system).

  • boisselazon Posts: 458
    edited May 2017

    For those big rigs, this bench seems a bit "outdated", or too light to show the benefit of additional cards.

    It would be interesting to see a heavier render test scene, for example with long (complex) hair (very time-consuming), a set with lighting, and a heavy outfit. The render size is also too small.

    The loading time into the GPU is too long compared to the render time (20-30 s of loading versus 30 s+ of rendering).

    Problem: we don't have many free options (runtime) to do such a bench...

    Post edited by boisselazon on
  • Leonides02 Posts: 1,379
    edited May 2017
    Toonces said:

    The scene defaults at standard Rendering Convergence Ratio of 95%.

    So most of the timings are basically -- render till 'done'. I.e., when convergence reaches 95% or when yellow progress bar reaches 100% (should be same).

    I was expecting 3x 1080 TI to be around...40-50 seconds.

    Yep, that' s about right once you factor in the GPU load time. 

    What's the time at the default 95% render setting saved within the scene?

    52 seconds with the GPU load. I think that's the bottleneck on this test right now.

     

    Post edited by Leonides02 on
  • Ongoing Moment Posts: 78
    edited May 2017

    Thank you, guys and girls, for posting benchmarks. I have an old Dell XPS 435mt i7-920 computer that is essentially stock. I was able to put a GTX 730 4GB in it, and that was slow but worked for Iray. I put a GTX 1060 6GB in it a year later and that was much better; it actually made Daz Studio fun again. The only problem was that the hardware around it couldn't keep up: I had only 6GB of RAM in the XPS, plus the original hard drive and PCIe 2.0. So in between renders it would take 3-10 minutes for the memory buffers to clear. FRUSTRATING!!! It was either upgrade the XPS rig (which would have fixed the memory-buffer purging) or build a new Ryzen 5. I went with a Ryzen 5 1600. It will take a few more days before I can build that, but I also sold the GTX 1060 and bought a ZOTAC GTX 1070 mini. So here are some benchmarks for the EVGA 1060 6GB and the ZOTAC 1070 mini 8GB on the XPS 435mt with its i7-920 CPU. Oh, and I know the CPU just bottlenecks the GPU, so I didn't even bench that.

     

    EVGA ACX 2.0 GTX 1060 6GB - $249

    4 minutes and 52.74 seconds OPTIX ON

    ZOTAC GeForce GTX 1070 Mini, ZT-P10700G-10M, 8GB - $329

    3 minutes and 30.10 seconds Optix On

     

    Pretty much matches other benchmarks with more expensive CPUs as far as I can tell. I would say this old XPS i7-920 would have worked well with a RAM (24GB) and HDD-to-SSD upgrade. Not nearly as good as a new rig, but good for weekend fun. The GTX 1060 6GB was awesome: with just an HDRI and 2-3 figures, rendering would take 25-40 minutes, and that includes the 4-7 minutes it took my CPU to send all the info to the GPU. Not bad in my opinion. I will post Ryzen 5 benches with the 1070 mini in a few days.
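
    Those two timings also make for a quick price-to-performance comparison; a rough sketch using the prices and Optix-on times above (treating the 1060 as the 1.00x baseline):

```python
def bench_seconds(minutes, seconds):
    """Convert a 'X minutes Y seconds' benchmark time to seconds."""
    return minutes * 60 + seconds

# (price in USD, benchmark time in seconds), from the post above
cards = {
    "GTX 1060 6GB": (249, bench_seconds(4, 52.74)),
    "GTX 1070 Mini": (329, bench_seconds(3, 30.10)),
}

baseline = cards["GTX 1060 6GB"][1]
for name, (price, t) in cards.items():
    speedup = baseline / t  # higher is faster, relative to the 1060
    print(f"{name}: {t:.2f}s, {speedup:.2f}x, ${price}")
```

    That works out to roughly 1.39x the speed for about 32% more money, so the two cards land in a similar price-performance bracket on this benchmark.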

    Post edited by Ongoing Moment on
  • Malandar Posts: 776

    Must be nice! I don't know what I am doing wrong, but I have yet to get an Iray render done, lol...

  • Malandar said: "Must be nice! I don't know what I am doing wrong, but I have yet to get an Iray render done, lol..."

     

    What computer system are you using?

  • JamesJAB Posts: 1,760
    Malandar said:

    Must be nice! I don't know what I am doing wrong, but I have yet to get an Iray render done, lol...

    What do you have under the hood?
    We need to know what hardware you are running before anyone can help you get Iray running.  (And remember that AMD and Intel GPUs will not run Iray; you will be stuck in CPU-only mode.)

  • GaryH Posts: 66

    While stand-alone renders are fine for output, did you know there was this relatively unknown setting to greatly speed up your real-time Iray viewport?  I just discovered it by accident in another thread, and what a revelation.

    The default for Edit->Preferences->Interface->Display Optimization is None.  If you have one of these new Pascal cards or a 9-series graphics card, change it to Best.

    You should now be able to do smooth and fast viewport camera moves in Photoreal mode.

    I also set the Draw Settings tab's Drawing -> Response Threshold to between 1000 and 3000 and the Manipulation Resolution to 1/2.

    Try it!

  • AJ2112 Posts: 1,416
    edited May 2017

    Just for fun: 4 years ago, I began my Daz adventure with this video card, a GTX 630 with 96 CUDA cores.  Imagine rendering with this card.  Scary slow, ROFL!!  I used it for 2 years, then upgraded to a GTX 670 in 2015.

       

    GTX 630.jpg (attached image, 1278 x 720)
    Post edited by AJ2112 on
  • PA_ThePhilosopher Posts: 1,039
    edited May 2017
    52 seconds with the GPU load. I think that's the bottleneck on this test right now.

    @Leonides02

    Just so I'm clear, did you first have Iray turned on in your viewport? Or was your viewport switched to the basic shader? These tests should be done with Iray active in the viewport first. 

    Also, are you on air or water?

    -P 

    Post edited by PA_ThePhilosopher on
  • JamesJAB Posts: 1,760
    Awesomefb said:

    Just for fun: 4 years ago, I began my Daz adventure with this video card, a GTX 630 with 96 CUDA cores.  Imagine rendering with this card.  Scary slow, ROFL!!  I used it for 2 years, then upgraded to a GTX 670 in 2015.

       

    When I switched over to Daz Studio from Poser I was running an old GeForce GTX 260, but that was long before Iray was a thing.
    My first Iray render was on a GTX 760, and the slowest card I've run it on was a GTX 560M.
    Currently running a GTX 1060 6GB in my desktop and a Quadro K5000M 4GB in my laptop for Iray rendering.

  • Leonides02 Posts: 1,379
    52 seconds with the GPU load. I think that's the bottleneck on this test right now.

    @Leonides02

    Just so I'm clear, did you first have Iray turned on in your viewport? Or was your viewport switched to the basic shader? These tests should be done with Iray active in the viewport first. 

    Also, are you on air or water?

    -P 

    Iray was not on in the viewport. When I do that, I get my original time of 29.5 seconds.  

    I'm on air (no room for water cooling, these cards are huge), but my temperatures don't ever get higher than 84 degrees and my clock is always about 1840 MHz. 

  • Iray was not on in the viewport. When I do that, I get my original time of 29.5 seconds.  

    I'm on air (no room for water cooling, these cards are huge), but my temperatures don't ever get higher than 84 degrees and my clock is always about 1840 MHz. 

    That is freaking insane.

  • PA_ThePhilosopher Posts: 1,039
    edited June 2017

    Ok, I just upgraded my quad (4) 780 Ti's to dual (2) 1080 Ti's, and my render times have actually improved. lol. (with the help of OptiX)

    • Four 780 Ti's (OptiX off, CPU off) - 1 min 15 sec
    • Two 1080 Ti's (OptiX on, CPU off) - 1 min 0 sec
    • Two 1080 Ti's (OptiX off, CPU off) - 1 min 45 sec

    So with a dual setup, it appears that OptiX helps here.

    Post edited by PA_ThePhilosopher on
  • PA_ThePhilosopher Posts: 1,039
    edited June 2017

    Wow, I just ran the Octane Benchmark on my two 1080 Ti's and got a score of 402. That is equivalent to four (4) 980's! lol.

    Post edited by PA_ThePhilosopher on
  • Toonces Posts: 919

    Can't wait to see when you get the 3rd 1080 in there. Gotta put that water cooling system to good use!

  • Nyghtfall3D Posts: 782
    Just so I'm clear, did you first have Iray turned on in your viewport? Or was your viewport switched to the basic shader? These tests should be done with Iray active in the viewport first. 

    There's no mention of that requirement in SY's original post.  Why would it make a difference on render time?

  • junk Posts: 1,362
    Nyghtfall said:
    Just so I'm clear, did you first have Iray turned on in your viewport? Or was your viewport switched to the basic shader? These tests should be done with Iray active in the viewport first. 

    There's no mention of that requirement in SY's original post.  Why would it make a difference on render time?

    Agreed! I believe the test should be run without Iray set in the viewport and with the auxiliary viewport turned off.  The reason people switch to the Iray viewport first is that it improves their render time for this benchmark: it pre-loads some of the scene onto the graphics card, shaving up to 30 seconds off the reported rendering time.  I believe about half of the people in this thread run it with the Iray viewport and the others without.
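
    The pre-load effect described above can be checked against Takeo.Kensei's Titan X (Pascal) numbers earlier in the thread: 2 min 11.94 sec from a cold start versus 1 min 51 sec with the Iray viewport already active. A quick sketch of the subtraction:

```python
def to_seconds(minutes, seconds):
    """Convert a 'X min Y sec' benchmark time to seconds."""
    return minutes * 60 + seconds

cold = to_seconds(2, 11.94)    # fresh start: scene upload + render
preloaded = to_seconds(1, 51)  # Iray viewport had already loaded the scene

# The difference is roughly the scene-upload time hidden in the "render" number
print(round(cold - preloaded, 2))  # -> 20.94
```

    That matches the ~20 seconds of GPU loading time he reported, which is why cold-start and viewport-preloaded timings in this thread are not directly comparable.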

  • junk Posts: 1,362

    On a side note, has anyone seen or thought of using something like this Nvidia GPU crypto-mining rig as a Daz 3D render farm?  It has 8 P106-100 graphics cards without video out, enclosed in a small form factor.  If usable, it would probably be done in 5 seconds for this benchmark. :)

     

    https://videocardz.com/newz/first-look-at-pascal-based-gpu-cryptocurrency-mining-station
