Iray Starter Scene: Post Your Benchmarks!

Comments

  • System: HP Z440 workstation, 16 GB DDR4 ECC RAM, 1 TB HD, Nvidia Quadro K620 for display and Quadro K1200 for rendering. Various tests:

    CPU + K1200 + OptiX on = 9 min 31 sec, 5000 iterations

    K1200 + OptiX on = 11 min 40 sec, 5000 iterations

    K1200 + K620 + OptiX on = 7 min 9 sec, 5000 iterations

  • nonesuch00 Posts: 18,293

    HP 8450P Elitebook, 3rd Generation i5 with Intel HD Graphics 3000 (4 cores), 16 GB RAM

    Scene rendered as is from the download, no changes.

    2016-12-30 17:56:17.811 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01946 iterations after 7201.491s.
    2016-12-30 17:56:17.819 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Maximum render time exceeded.

    2016-12-30 17:56:18.407 Finished Rendering
    2016-12-30 17:56:18.507 Total Rendering Time: 2 hours 4.2 seconds

    The rendering dialogue claimed to be about 92% converged when the 2 hour time limit ran out.

    iRay Starter Scene.png
    400 x 520 - 230K
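
    For anyone digging through their own logs, the iteration count and elapsed time can be pulled out of progress lines like the ones quoted above. A minimal sketch in Python, assuming that log line format (the log path is a placeholder):

        import re

        # Matches Iray progress lines such as:
        #   "... rend info : Received update to 01946 iterations after 7201.491s."
        PROGRESS = re.compile(r"Received update to (\d+) iterations after ([\d.]+)s")

        def last_progress(log_path):
            """Return (iterations, seconds) from the last progress line in a Daz log."""
            result = None
            with open(log_path, encoding="utf-8", errors="replace") as f:
                for line in f:
                    m = PROGRESS.search(line)
                    if m:
                        result = (int(m.group(1)), float(m.group(2)))
            return result

        iters, secs = last_progress("log.txt")  # placeholder path
        print(f"{iters} iterations in {secs:.0f}s -> {secs / iters:.2f} s/iteration")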
  • Wanderer Posts: 957

    Sorry, TL;DR

    Okay, I was able, at long last, to upgrade my GPU. I have more data that I find interesting and thought I'd share. I'm sorry for the repost of previously reported data, but I thought I'd spare you having to scroll up for the comparison.

    My system stats again. The only things that changed between the first set of tests and the second are the addition of another video card (the 980 Ti), a 16 GB flash drive in fully dedicated ReadyBoost mode (yes, it's using all 16 GB, but I'm not sure whether it affects render times or not), and a switch from 3 independent monitors (1920 x 1080 x 3) to one 3-monitor surround setup (5760 x 1080 x 1):

    System Specs: i5-2500k @ 3.30 GHz, using 3 cores for rendering (according to the Daz log file)
    GeForce GTX 680, compute capability 3.0, 2048MB Total, 1753MB available, display attached (3 monitors x 1920 x 1080), 1536 CUDA cores
    ASRock Z77 Extreme4
    16 GB RAM (don't remember make and model)

    PLUS: Zotac Geforce GTX 980 Ti AMP! Extreme Edition 6GB, 2816 CUDA cores

    Because of the age and limitations of my CPU, I've skipped CPU-only testing.

    PREVIOUS TESTS:

    First Test Render:

    Optimization: Memory
    CPU + GPU + OptiX
    5000 Iterations (4546 GPU + 454 CPU)
    Total Time: 7 min, 36.68 sec


    Second Test Render:

    Optimization: Memory
    CPU + GPU
    5000 Iterations (4518 GPU + 482 CPU)
    Total Time: 9 min, 35.14 sec

    Third Test Render:

    Optimization: Memory
    GPU + OptiX
    5000 Iterations 
    Total Time: 6 min, 50.23 sec

    Fourth Test Render:

    Optimization: Memory
    GPU Only
    5000 Iterations
    Total Time: 9 min, 11.39 sec

    Fifth Test Render:

    Optimization: Speed
    GPU + OptiX
    5000 Iterations
    Total Time: 6 min, 26.12 sec

     

    At that point I had decided that the best time must come from Speed optimization with OptiX on, so I didn't bother to test GPU only in Speed mode with OptiX off. It might actually make a difference, but I'm not going back to redo it now. Here's the added data:

    980 + 680 + OptiX + Speed (680 used for display in all cases; 3-monitor surround at 5760 x 1080)

    Total Time: 1 min 58.47 sec

    980 + 680 + Speed

    Total Time: 1 min 46.79 sec

    980 + 680 + OptiX + Memory

    Total Time: 2 min 2.41 sec

    980 + 680 + Memory

    Total Time: 2 min 47.90 sec

    980 + OptiX + Speed

    Total Time: 2 min 9.98 sec

    980 + Speed

    Total Time: 2 min 16.96 sec

    980 + OptiX + Memory

    Total Time: 2 min 33.13 sec

    980 + Memory

    Total Time: 3 min 34.92 sec

     

    SUMMARY ANALYSIS: 

    When sharing rendering cycles between my CPU and 680 GPU, the render took more time: a set number of iterations appears to be handed off to the CPU, which simply takes longer, so the GPU was essentially slowed down by the CPU's "aid". When running on my 680 alone, I concluded that GPU + OptiX in Speed mode was the fastest option at 6 minutes 26.12 seconds (perhaps erroneously, because I did not think to test Speed mode with OptiX off).

    Then, when I added the new card, running the two cards together in Speed mode with OptiX off proved to be my best option, completing the render in 1 minute 46.79 seconds. Oddly, with the two cards together, OptiX actually slowed the render down unless I was using Memory mode; yet with the 980 alone in Speed mode, OptiX seemed to improve the render time somewhat. In any case, on a system like mine, Memory optimization apparently shouldn't be used without OptiX, while OptiX may be counterproductive with Speed optimization.

    Having said all of that, I'm very curious what a more extensive scene with more costly resources would do, and I'm tempted to re-run the tests with something changed or added to the scene. I found these results both fascinating and unexpected. I will definitely be running both cards, probably with OptiX off in Speed mode, until I learn what circumstances might change that as the best option for me.
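
    For comparison at a glance, here is a quick sketch that converts the times above into seconds and ranks each configuration against the best one. The numbers are copied from this post; the script itself is only illustrative:

        # Reported times, converted to seconds.
        results = {
            "980 + 680 + Speed":          1 * 60 + 46.79,
            "980 + 680 + OptiX + Speed":  1 * 60 + 58.47,
            "980 + 680 + OptiX + Memory": 2 * 60 + 2.41,
            "980 + OptiX + Speed":        2 * 60 + 9.98,
            "980 + Speed":                2 * 60 + 16.96,
            "980 + OptiX + Memory":       2 * 60 + 33.13,
            "980 + 680 + Memory":         2 * 60 + 47.90,
            "980 + Memory":               3 * 60 + 34.92,
            "680 + OptiX + Speed":        6 * 60 + 26.12,
        }

        best = min(results.values())
        for name, secs in sorted(results.items(), key=lambda kv: kv[1]):
            print(f"{name:28} {secs:7.2f}s  ({secs / best:.2f}x the best time)")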

  • jackmangano Posts: 4
    edited January 2017

    That is very interesting, InfiniteSuns. I'm surprised that, with my results, I basically get the best rendering speed with CPU + GPU with OptiX and Speed optimization. But my build isn't the most power-efficient: the GPU pulls 350W-ish (if Iray actually used 100% GPU) and the CPU does god knows how much, for a combined system draw of around 475-530W.

    I'd like to know what a more advanced benchmark scene would do too; I feel like testing this scene with certain configs isn't the most realistic.

    EDIT: Like, let's get a figure with Genesis 3 and 4K textures and normal maps with displacements, transparent high-poly hair... the works. I'm rendering a thing right now and I'm not seeing any GPU load, but the CPU is going insane, while the GPU is holding 5 GB of data. That changes things when workloads fill the VRAM.

    Post edited by jackmangano on
  • Wanderer Posts: 957
    edited January 2017

    Hmm... interesting indeed. What is your CPU?

    Let me run another test with CPU, 2 GPUs, OptiX, and Speed... the works. I'll get back with my results.

     

    EDIT: Okay, I ran the test with CPU + 2 GPUs + OptiX + Speed and the result was: 2 minutes 36.46 seconds.

    For my system, that is slower than 2 GPUs + Speed (no OptiX) at 1 min 46.79 seconds, and 2 GPUs + Speed + OptiX at 1 min 58.47 seconds.

    So, for my tech (and I suspect my CPU, RAM, and motherboard have a huge impact on this), 2 GPUs + Speed alone is the quickest. Perhaps someone could come up with an alternate version of the above scene to push rendering times out and give us more data. It really should be done scientifically, with representatives of differing generations and configurations all present for comparison.

    Post edited by Wanderer on
  • Ran the test using the new release on my laptop.

    Specs:

    ASUS GL502VM-DB71: Core i7-6700HQ, Nvidia GeForce GTX 1060 6GB, 16GB DDR4-2300

    Render Time (GPU Only, OptiX On, All Spheres): 4m 43s, 95%, 4828 iterations

     

     

    [BENCHMARK] 4m43s GPU Only Optix On.png
    400 x 520 - 211K
  • Toonces Posts: 919

    I assume the Beta and Production version of Daz Studio are identical today.

    I'm curious what folks who ran this test scene before today and run it again today (with latest production version) are seeing as a difference in render time.

  • havsm said:

    I assume the Beta and Production version of Daz Studio are identical today.

    I'm curious what folks who ran this test scene before today and run it again today (with latest production version) are seeing as a difference in render time.

    I just re-ran.  My original benchmark was 1:39 on Daz 4.9.2.XX.  On 4.9.3.XX (whatever it is today), my benchmark was 1:48.  Slower by 9 secs.

    i7 4770k
    1x970
    1x980ti

  • bailaowai said:

    I just re-ran.  My original benchmark was 1:39 on Daz 4.9.2.XX.  On 4.9.3.XX (whatever it is today), my benchmark was 1:48.  Slower by 9 secs.

    i7 4770k
    1x970
    1x980ti

    Scratch that, I immediately ran again (just clicked "render" again), and got 1:39.

    I don't remember what I was doing when I originally ran the benchmark quite some time back on 4.9.2, but it's possible / likely I had run it multiple times.  Scene setup is of course faster when you render the same scene again, so that probably accounts for the 10 sec difference.  When I originally ran the bench I probably didn't know that because I had just started to play around with Daz.

    Bottom line is it looks like I'm basically getting the exact same rendering times on this benchmark between 4.9.2 and 4.9.3.

  • bailaowai said:

    Scratch that, I immediately ran again (just clicked "render" again), and got 1:39. ... Bottom line is it looks like I'm basically getting the exact same rendering times on this benchmark between 4.9.2 and 4.9.3.

    Also, I just ran the beta (4.9.3.128) and got 1:39 on the second run.  Same.

    For clarity, for me:

    4.9.2.XX = 1:39
    4.9.3.128 (public beta) = 1:39
    4.9.3.166 (current production build) = 1:39

    i7, 1x970, 1x980ti, GPU only (both), Optix on

  • rinkuchal said:

    Ran the test using the new release on my laptop.

    Specs:

    ASUS GL502VM-DB71: Core i7-6700HQ, Nvidia GeForce GTX 1060 6GB, 16GB DDR4-2300

    Render Time (GPU Only, OptiX On, All Spheres): 4m 43s, 95%, 4828 iterations

    My test with a new video card, a Palit GTX 1060 6 GB.

    System: Intel Core i5 4460, 16 GB

    GPU only + OptiX on: 100% complete, 5000 iterations - total time 4m 40s!!!

    Daz Studio 4.9

     

    I am very happy and surprised with the performance at the same time!

     

     

  • Bunyip02 Posts: 8,784

    Intel Skylake Core i7-6700K CPU, Nvidia GTX 1060 6GB, 32GB 2133MHz DDR4

    CPU+GPU: 4m 9s 95% 4802 iterations

  • Widdershins Studio Posts: 539
    edited January 2017

    Ran the benchmark on my 1070 and this is what I got.

    Optix off. GPU Only
    Total Rendering Time: 5 minutes 25.62 seconds

    Optix on. GPU Only
    Total Rendering Time: 3 minutes 15.56 seconds

    CPU + OptiX made no real difference, btw.

    This card is also driving three screens. I was pleased the heat didn't go up any more than when gaming.

    My card details : https://www.asus.com/Graphics-Cards/TURBO-GTX1070-8G/

    Pretty much the same as with the beta: 3 minutes 11.14 seconds with OptiX in the new DS.

    Post edited by Widdershins Studio on
  • Geminii23 Posts: 1,328

    Core i7-3970X, dual Nvidia GTX 760, 32GB 1333MHz DDR3

    Removed Spheres 8 and 9.

    CPU+GPU+GPU+Optix = 3:10

    I'm happy with that.

    Test.png
    400 x 520 - 190K
  • Rendered the example file; these are my results:

    Asus laptop, Windows 10, 64 GB RAM, Nvidia 1070 8 GB card

    GPU
    Total Rendering Time: 4 minutes 57.58 seconds

    GPU + OptiX (whatever that is)
    Total Rendering Time: 3 minutes 2.40 seconds

  • nonesuch00 Posts: 18,293

    I did run mine (earlier post) with the 4.9.3.166 Public Beta, and it ran out of time at 2 hours. A 2nd Gen i5 with Intel HD Graphics 3000 and 16GB RAM is what's relevant. It got to 92% complete, just short of 2000 iterations. It's a very dark scene.

  • Please, as the 1070/80 cards are all custom (well, all Pascal GPUs but the Titan), don't forget to give the exact reference of your card, because the clocks can vary by 0 to 20%+.

    And don't forget to allow ONLY one test per Daz run. I mean, if you run the test and then retest in the same session, the result is not comparable to a fresh run, because some things are already in your GPU. So, to run the test: exit Daz, run Daz, load the scene, and render (changing the CPU/GPU/OptiX settings before the render if needed).

    Thx to all of you for your participation.
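
    For anyone collating results, here is a small sketch that normalizes the "Total Rendering Time" strings quoted in this thread into seconds, so fresh runs can be compared on one scale (the example strings come from posts in this thread; the code is only illustrative):

        import re

        def to_seconds(s):
            """Convert a Daz 'Total Rendering Time' string to seconds."""
            units = {"hour": 3600, "minute": 60, "second": 1}
            return sum(float(n) * units[u]
                       for n, u in re.findall(r"([\d.]+)\s*(hour|minute|second)s?", s))

        print(to_seconds("3 minutes 7.36 seconds"))  # 187.36
        print(to_seconds("2 hours 4.2 seconds"))     # 7204.2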

  • Artini Posts: 9,672
    edited January 2017

    Just redid the test using Daz Studio 4.9.3.166 (GPU only) on

    Gigabyte GTX 1080 G1 Gaming 8GB VRAM
    i7-3770K @ 3.5 GHz, 32 GB RAM
    Asus P8 Z77-V PRO motherboard
    Optix On
    Nvidia Driver 376.33, Windows 7 Pro 64-bit

    Total Rendering Time: 2 minutes 58.3 seconds (4871 iterations, 14.924s init, 159.108s render -> 0.032664s/iteration)

    Results from previous test using first Daz Studio beta with support for Pascal cards and Nvidia Driver 375.70 were:

    Total Rendering Time: 3 minutes 9.16 seconds (5000 iterations, 13.594s init, 174.160s render -> 0.034832s/iteration)

    I have compared renders from both tests in Gimp (using Subtract/Difference) and they are exactly the same.

    It looks like Iray is better optimized now: it achieves the same result faster, with fewer iterations.
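
    The Subtract/Difference comparison in Gimp can also be scripted. A minimal sketch using Pillow (the file names are placeholders; "identical" here means pixel-identical):

        from PIL import Image, ImageChops

        # Pixel-wise difference of two renders -- same idea as Gimp's Difference mode.
        a = Image.open("render_375_70.png").convert("RGB")  # placeholder file names
        b = Image.open("render_376_33.png").convert("RGB")
        diff = ImageChops.difference(a, b)

        # getbbox() returns None when the difference image is entirely black,
        # i.e. the two renders match pixel for pixel.
        box = diff.getbbox()
        print("identical" if box is None else f"differs within {box}")
        diff.save("diff.png")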

     

    Post edited by Artini on
  • Surprised my 1070 is only a few seconds behind your 1080.

  • junk Posts: 1,362
    edited January 2017

    With my Gigabyte G1 GTX 1070, using Daz 4.9.3.166 Beta, GPU only, OptiX ON,  I am getting:

    1st run = 3 minutes 7.36 seconds
    2nd run = 3 minutes 7.60 seconds
    3rd run = 3 minutes 6.76 seconds

    Exited program and restarted each time.

    4th run = 2 minutes 56.2 seconds (without restarting)

    OVERCLOCKING (GPU overclocked by an additional 120MHz, memory by 350MHz; restarted Daz each time):
    5th run = 2 minutes 53.70 seconds
    6th run = 2 minutes 52.76 seconds

     

    Post edited by junk on
  • Wanderer Posts: 957
    edited January 2017

    Having just run the tests again with the updated Studio and Rendering, I can confirm some things from comparison to my tests from just a few days ago.

    Reminder--System Specs: i5-2500k @ 3.30 GHz, using 3 cores for rendering (according to the Daz log file)
    GeForce GTX 680, compute capability 3.0, 2048MB Total, 1753MB available, display attached (3 monitors x 1920 x 1080), 1536 CUDA cores
    ASRock Z77 Extreme4
    16 GB RAM (don't remember make and model)

    PLUS: Zotac Geforce GTX 980 Ti AMP! Extreme Edition 6GB, 2816 CUDA cores

    First, when I ran the tests before, whether with Speed or Memory, with or without OptiX, there was no discernible difference in quality across the final renders. I said as much in my previous posts, hence my decision not to include all the renders for those test runs. That was before the update, however; when I run the test scene under the new and "improved" version, there is a definite difference in quality. I've run the tests multiple times and can confirm that the effect is repeatable. The first attachment is 3 results from the previous tests to show they are very much the same in quality. EDIT: Okay, I admit I do see some difference in shadowing toward the back of the middle scene--but nothing like the renders after the update.

    Second attachment shows 3 new render results, from left to right, as follows:

    2 GPUs, Memory, OptiX on ----- 2 GPUs, Speed, no OptiX ----- 2 GPUs, Speed, OptiX on

    If you look closely, you can see the differences in lighting, especially across the floor. None of these 3 results are equivalent to the previous renders under the old engine in terms of quality. If you overlay the two images in an image editing program, and switch back and forth, you can see that the lighting is different, especially between hair highlights under both the new and old, and highlights/ghosting along the edge of the pants--especially the outside of the left leg. Or, you could just open them each in a new browser tab, blow them up to let you see the details, then switch back and forth between the tabs to see what I'm talking about yourself. Another strange issue is that, although the log in Daz says they all reached 95% convergence, I watched the log closely and on some renders the window closed when it was still showing less than 90% convergence, say about 89.xx%.

    Second, speaking of time gain: ugh. Before the update to Daz/Iray, my fastest option was 2 GPUs and Speed, no OptiX, coming in well under the 2 minute mark at 1 minute 46.79 seconds. Now, my best runs don't even get back under 2 minutes. Even running the scene multiple times without clearing it from card memory, which the post above mentions with regard to improved speed (at the sacrifice of accuracy, I know), only gains me 1 second on subsequent renders. My best time now is 2 minutes 11.8 seconds, running 2 GPUs, Speed, and OptiX ON. I'm not happy with the new speeds or quality at all.

     

    TestsPrior.png
    1200 x 520 - 390K
    TestsNew.png
    1200 x 520 - 538K
    Post edited by Chohole on
  • nonesuch00 Posts: 18,293
    edited January 2017

    Wanderer said:

    Having just run the tests again with the updated Studio and Rendering, I can confirm some things from comparison to my tests from just a few days ago. ... My best time now is 2 minutes 11.8 seconds, running 2 GPUs, Speed, and OptiX ON. I'm not happy with the new speeds or quality at all.

     

    Another poster said they had a similar problem, but as it turns out they had Nvidia SLI enabled somewhere in their video card driver configuration. Maybe you do too?

    Post edited by Chohole on
  • Toonces Posts: 919

    Hmm, now that you mention it, I do notice the extra fireflies on the floor, especially in the bottom right ball shadow. I just now rendered it by setting threshold to 99% convergence and upping iterations from 5k to 7k and still see more fireflies than before (comparing with a render back last month before I moved to the Beta).

    My hunch is that this is an Nvidia Iray thing, and not a Daz3d thing. I assume any software using iray will see similar issues when moving to the latest version of Iray.

    In render settings > filtering > fireflies, there is a dial for Nominal Luminance which may help if it's an issue. I'll probably stick with the default personally. I assume Nvidia will continue to improve their iray engine over time (in both speed and quality) and Daz will in turn benefit from the improvements.

  • Wanderer Posts: 957
    edited January 2017

     

    nonesuch00 said:

    Another poster said they had a similar problem, but as it turns out they had Nvidia SLI enabled somewhere in their video card driver configuration. Maybe you do too?

    I'm not even sure that would be possible, since I'm using two different generations of video cards and they must be the same to enable SLI. From my understanding, in order to even get that option in the Nvidia control panel you have to be running two of the same cards, and they have to be bridged properly. But thanks for the suggestion; I've looked, and the option isn't there. I'm guessing this might affect anyone who runs 2 non-Pascal GPUs with Speed and no OptiX enabled. Anyone else notice this?

    Post edited by Wanderer on
  • Wanderer Posts: 957
    havsm said:

    Hmm, now that you mention it, I do notice the extra fireflies on the floor, especially in the bottom right ball shadow. I just now rendered it by setting threshold to 99% convergence and upping iterations from 5k to 7k and still see more fireflies than before (comparing with a render back last month before I moved to the Beta).

    My hunch is that this is an Nvidia Iray thing, and not a Daz3d thing. I assume any software using iray will see similar issues when moving to the latest version of Iray.

    In render settings > filtering > fireflies, there is a dial for Nominal Luminance which may help if it's an issue. I'll probably stick with the default personally. I assume Nvidia will continue to improve their iray engine over time (in both speed and quality) and Daz will in turn benefit from the improvements.

    I'm with you on that note. I don't blame Daz Studio, the software, but I am disturbed by it being rolled out before the Iray end was farther along. I expect it will eventually get resolved, but if it's anything like dealing with other Nvidia issues (driver updates anyone?), I won't be holding my breath. Here's to hoping they'll do better in the new year. I wonder what the professional users of DS will be doing, if anything, to compensate for the lighting changes.

  • Wanderer Posts: 957
    edited January 2017

    I'll provide a little more data from further test renders. Each of the attached images is of the same section of the image blown up 500%. In order:

    1. Test Render from before the update (what I've been used to):

    2. Test Render post-update using speed optimization without OptiX (floor shading/lighting very different/as are legs):

    3. Test Render post-update using speed and OptiX  (floor shadows appear a little more defined to me):

    On reconsideration, after looking carefully at the above details, I can't help but wonder if the reason for the difference in the legs between the old renders and the new is that more light is actually reaching those surfaces in the updated Iray. Perhaps that is a good thing. As for the hair highlights, which I don't show here (but can be viewed in my previous post), I'm not sure what is happening there--I can only say it's different. And I still don't like the muted floor effect apparent to me in the Speed render without OptiX, so I probably won't be using that setting.

    OldRender.png
    1238 x 809 - 131K
    NewSpdOnly.png
    1238 x 808 - 134K
    defaultspdopt.png
    1238 x 806 - 133K
    Post edited by Wanderer on
  • Finally home from Houston and ran some benchmarks on my everything rig (net mule, gaming rig, rendering rig, art rig, model railroad layout design workstation, etc., etc., etc.).

    System Specs:

    Intel i5 4670K, Gigabyte Z97X-Gaming 7 MB, EVGA GTX 960 SSC 4GB, 2x 8192MB Patriot Viper DDR3 memory, WD 6400AAKS hard drive with a SanDisk ReadyCache, 1 Hitachi HT721010SLA360 1TB hard drive, Corsair HX1000W PSU, HP DVD1720 optical drive, CoolerMaster CM 690 II case, Samsung SyncMaster P2370 monitor, Windows 10 Professional 64 using Nvidia GeForce Game Ready Driver 376.33 (release date 12/13/16), running the monitor at native resolution (1920x1080).

    Test 1: DS 4.9.2 CPU & GPU checked Optix off, Optimization set to memory (default settings), 8 minutes 40.44 seconds

    Test 2: DS 4.9.2 CPU unchecked & GPU checked, Optix on, Optimization set to speed, 6 minutes 06.44 seconds

    Test 3: DS 4.9.3 CPU unchecked & GPU checked, Optix on, Optimization set to speed, 6 minutes 40.7 seconds

    All tests were run after shutting down and restarting DS and then loading the scene. Not sure why my system is running a little slower with 4.9.3.166.

    My card only has 1024 CUDA cores, but it does have 4GB memory. Still new to this, as I have not done any digital art in several years and am just getting back into it (the last time I used DS it was version 2.3). Looking for suggestions to maximize my render speed, or suggestions for upgrading on a somewhat thin budget (replacing this card or possibly adding a second card). The card is driving the monitor in addition to rendering duties.

     

    TIA

  • Wanderer Posts: 957
    edited January 2017

    Finally home from Houston and ran some benchmarks on my everything rig... My card only has 1024 CUDA cores, but it does have 4GB memory. ... Looking for suggestions to maximize my render speed, or suggestions for upgrading on a somewhat thin budget (replacing this card or possibly adding a second card). The card is driving the monitor in addition to rendering duties.

    Since you haven't been answered by those more informed than myself, and it's been 4 days, I'll offer my two cents. Regarding the change in render times: when I first checked into the Daz update, I seem to remember reading notes somewhere suggesting that the new Iray rendering was going to be slower on the front end but quicker on the back end, for REASONS. No, just kidding: for reasons I'm not entirely understanding. I think with longer renders we might see something different in terms of speed, perhaps.

    As for suggestions for upgrading to improve render times on a budget, I know a little about this, but I'm afraid I'm not going to be able to offer much. For rendering images in Iray, your current preference when shopping for Nvidia cards should be any generation from the 600 series on, but with a card that ends in -80 (e.g. GTX 680, 780, 980, etc.) at least, and preferably with more memory onboard. I don't know anything about Titan or Pascal, but since you mention budget, I don't think I need to consider those options.

    Budget makes this a tougher issue to solve, but if you watch carefully, you may find something decent on sale; prices depend greatly on seasonal considerations and on what has been, or is about to be, released in terms of new cards. One caveat: be certain you are getting what you really need, regardless of how good the deal appears. I'm uncertain how useful an older card like mine would ultimately be to you with its 2 GB RAM limit, as it only helps when the scene does not run to higher memory requirements, and that's entirely dependent on your workflow/usage. You could render out portions of a scene for post-compositing in PSP or Photoshop.

    Also, consider going for 32 gigs of system RAM AFTER you get your card situation straight, because more system RAM never hurts. That's my next goal.

    If you want to see what you can do on a budget, just look at my render times. My 680 was doing very well under the previous Daz compared with newer generations with lower ending numbers (740, 760, etc.). Even up against your 960, my render times are not shabby at all. Problem is they aren't making any more 680s, and you will see if you check online that the vendors offering the last of these in "new" condition appear to know they have something people want, because the prices are outrageous. I was able to add a 980 Ti to my system from Amazon lightly used for a little under $500 (it was sold as new, but arrived in used condition), and that does improve render times somewhat. I now run my displays primarily off my lower-memory card, keeping the 980 Ti completely free to use its full 6 gigs as a stop-gap against having to render on the CPU.

    However, I'm noticing some peculiarities with rendering this way. If I render the same scene repeatedly (not this benchmarking scene--something else), switching back and forth between Speed and Memory optimization, eventually Daz stops clearing my VRAM and defaults to CPU rendering where the GPUs were perfectly capable of doing the job alone previously, and that's with the renders all closed (always close your renders before rendering another image). Then I have to save the scene and restart Daz, which seems to clear the memory. Check that first (under the new Daz anyway; I don't know if it was a problem before): if your scenes go to CPU, it might not be your scene but a memory bug/leak, and you'll only find out if you restart Daz and reload your scene before rendering. I'm using GPU-Z to monitor each of my GPUs and Task Manager to monitor my CPU during renders to watch what's happening. Very informative and helpful.

    Additionally, I'm not sure how much background programs influence your render times, but you might want to check what you have going on behind the scenes. I've had issues with even AV software in the past, but that was back when I primarily ran CPU renders--and my CPU is so old now that I do everything I can to keep renders from falling to it. I don't want to have to upgrade to a new mobo/CPU any sooner than necessary. This PC has worked well for me for about 5 years, with only the motherboard and a couple of hard drives needing replacement (NEVER. buy. WD budget drives--whatever they are currently being sold as. Period.).

    Post edited by Wanderer on
  • outrider42 Posts: 3,679

    Finally home from Houston and ran some benchmarks on my everything rig... My card only has 1024 CUDA cores, but it does have 4GB memory. ... Looking for suggestions to maximize my render speed, or suggestions for upgrading on a somewhat thin budget (replacing this card or possibly adding a second card). The card is driving the monitor in addition to rendering duties.

    You have a 1000 watt PSU, so you can easily add a second card to your setup instead of replacing the 960. That would be the best bang for your buck by far.

    Just remember, CUDA cores will stack, but memory does not. A second 4GB 960 would double your CUDA cores, and that would help a lot. But you can use any combination of Nvidia GPUs. If you can afford a 970, that plus your 960 would be a lot more CUDA cores than a single 980 would have. And of course, a 1070 would rock, with its 8GB of VRAM. The 960 would drop out for scenes exceeding 4GB, but for all scenes under 4GB, the 960 would kick in and add its CUDA cores to the 1070's.
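
    To make the "cores stack, memory doesn't" point concrete, here is a small sketch. The 960's 1024 cores are quoted above; the 1070's core count is from memory and worth double-checking:

        def active_cores(scene_gb, cards):
            """CUDA cores add across GPUs, but VRAM does not: each card must hold
            the whole scene, and a card whose VRAM is too small simply drops out."""
            return sum(cores for vram_gb, cores in cards if vram_gb >= scene_gb)

        # (VRAM in GB, CUDA cores): a 960 plus a 1070.
        # 1024 is quoted in this thread; 1920 is an assumption -- verify it.
        rig = [(4, 1024), (8, 1920)]

        print(active_cores(3.0, rig))  # 2944 -- both cards render the scene
        print(active_cores(6.0, rig))  # 1920 -- the 960 drops out, the 1070 carries on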

  • Wanderer Posts: 957

    outrider42 said:

    You have a 1000 watt PSU, so you can easily add a second card to your setup instead of replacing the 960... Just remember, CUDA cores will stack, but memory does not. A second 4GB 960 would double your CUDA cores, and that would help a lot. But you can use any combination of Nvidia GPUs. If you can afford a 970, that plus your 960 would be a lot more CUDA cores than a single 980 would have.

    Good advice, except the moment you run over 4 gigs, you're asking your CPU to pick up the entire load, and that's easy to do. Granted, 6 gigs doesn't carry my system much farther, but I'd rather run his 960 alongside something better any day than alongside another 960. Nvidia doesn't make it clear how big the difference is, but it's well worth spending a bit more to get an -80.

    I wasn't suggesting that he use a 980 solo; I'm doing alright with that and a 680. His 960 has 1024 CUDA cores according to Amazon, so two of them make 2048. My 980 Ti runs 2816; add my 680 with 1536 CUDA cores, and that will blow two 960s away any day. If he wants to run faster on a budget, almost any -80 from the 600 series on would be better than buying another of the same. I'm quoting figures you can look up on the manufacturer's and vendors' websites.

    Now, I'll step off before this turns into a which-whatever-is-better thread. That isn't my purpose, but since I answered his questions after nobody else did, I thought I'd clarify my response to the added commentary.
