Same Video Cards in 2 different rigs – Iray render times

areg5 Posts: 617
edited February 2020 in Daz Studio Discussion

How much difference do the actual motherboard and RAM make, or are rendering times entirely dependent on the video cards?  I recently built a new rig as a rendering server and pulled my 3 GTX 1080 Tis from the old machine and put them into the new one.  I chose a scene that was a long render in the old build and rendered it in the new one. The scene in question is shown below.

I chose an AMD Threadripper build because my research suggested it would allow the cards to run faster.  In the old build I had to throttle the cards down to 60% power using Precision XOC.  The hope was that in the new build I would not have to throttle them down.  The video cards are exactly the same in each trial.

 

Old build:

Gigabyte GA-Z97X Gaming 5

RAM 32 GB DDR3 1866

Coolermaster CPU fan

Coolermaster Haf X

i7-4790

EVGA SuperNOVA 1200 P2 220-P2-1200-X1 80+ PLATINUM

 

New build:

ASRock Fatal1ty X399 Professional Gaming sTR4 AMD X399

RAM 64 GB DDR4 3600

Thermaltake Floe AIO Riing

Thermaltake 4-Sided Tempered Glass Vertical GPU Modular E-ATX Gaming Full Tower Case

AMD 2nd Gen Ryzen Threadripper 2950X

EVGA SuperNOVA 1300 G2 120-G2-1300-XR 80+ GOLD 1300W

The first takeaway was that I still needed to throttle the cards down.  It was relatively stable at 90%, and absolutely stable at 80%.  I render using ManFriday's Render Queue. I rendered the scenes using Daz Studio 4.12 Beta (4.12.1.55). The results:

1:26:22 old

1:21:52 new

So there is a small difference, but not much of one. I had hoped to gain more than 5 minutes.  I guess in my case the take-home point is that the overwhelming majority of the rendering time depends on the cards alone.  Although what I read at Tom's Hardware suggested the new build should let the cards run much more efficiently and faster, the real-world result is not dramatic.
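A quick sanity check on those numbers, reading the times as hours:minutes:seconds (which matches the "hour and a half" figure discussed in the comments below). This is just illustrative arithmetic in Python:

    # Back-of-the-envelope comparison of the two builds' render times.
    def to_seconds(h, m, s):
        return h * 3600 + m * 60 + s

    old = to_seconds(1, 26, 22)  # old i7-4790 build
    new = to_seconds(1, 21, 52)  # new Threadripper build

    print(f"time saved: {old - new} s ({(old - new) / 60:.1f} min)")
    print(f"speedup: {100 * (old - new) / old:.1f}%")
    # -> time saved: 270 s (4.5 min); speedup: 5.2%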

[Attached render: s292_spotcam.png, 2560 x 1440]

Comments

  • First, I see no reason that your first system would need to undervolt the cards to get them to run. Were they overheating in the HAF X?

Second, Threadripper adds more PCIe lanes, and that motherboard does provide four x16 slots, so scenes should load faster; between going back to full voltage and that, that's probably the performance difference you're seeing. But otherwise TR does not offer any capability that should increase GPU performance.

However, that scene really shouldn't take an hour and a half on three 1080 Tis. I have one plus a 2070, and a scene with three G8 figures plus an environment finishes in less than an hour pretty routinely. Do you have some setting or SubD cranked up for some reason?

     

  • areg5 Posts: 617

Nothing was overheating.  The HAF X keeps everything really cool. If I don't undervolt them, they typically crash to CPU after a few minutes.  I don't have that issue with one card running, just with 3.  The extra lanes were the reason I wanted the new build.  As for the time, my scenes are pretty complex: lots of textures, lots of objects.  If I'm doing a render with no objects, just figures and an HDRI, it renders in about 10 minutes, if that.  Indoor scenes always take longer in my experience.  Typically they're done in less than 30 minutes. It might have gone faster if I'd increased the lighting.  I chose that frame purely because it was one that took an unusually long time to render, and I wanted to see the difference, if any.

  • areg5 said:

Nothing was overheating.  The HAF X keeps everything really cool. If I don't undervolt them, they typically crash to CPU after a few minutes.  I don't have that issue with one card running, just with 3.  The extra lanes were the reason I wanted the new build.  As for the time, my scenes are pretty complex: lots of textures, lots of objects.  If I'm doing a render with no objects, just figures and an HDRI, it renders in about 10 minutes, if that.  Indoor scenes always take longer in my experience.  Typically they're done in less than 30 minutes. It might have gone faster if I'd increased the lighting.  I chose that frame purely because it was one that took an unusually long time to render, and I wanted to see the difference, if any.

I have no idea what you mean by crash to CPU. Do you mean the drivers crashed? Or do you mean the cards themselves actually shut down (which is almost certainly an overheating issue)? Did you run your monitor off a 1080 Ti or off the CPU's iGPU?

    There is nothing terribly complex about that scene unless you have high subD somewhere or some other setting cranked. I do scenes like that all the time.

  • RayDAnt Posts: 1,144
    edited February 2020
    areg5 said:

How much difference do the actual motherboard and RAM make, or are rendering times entirely dependent on the video cards?

Check out the columns called "Loading Time" in this thread (scroll all the way to the right for each benchmarked rendering-device combination to see them). This is the reported amount of render time taken up by secondary motherboard/RAM/hard-disk processing rather than actual render processing on the GPU(s)/CPU. As confirmed by your own results, motherboard/RAM/etc. choice is pretty much irrelevant to overall render times on modern systems (the longest loading times currently reported in that thread are just 16 seconds).  For most intents and purposes, GPU performance is the only thing that matters when it comes to rendering times.

     

    areg5 said:

The first takeaway was that I still needed to throttle the cards down.  It was relatively stable at 90%, and absolutely stable at 80%.  I render using ManFriday's Render Queue. I rendered the scenes using Daz Studio 4.12 Beta (4.12.1.55). The results:

1:26:22 old

1:21:52 new

Assuming that your three 1080 Tis are air-cooled, it is almost a given that your instability issues come from poor cooling performance, the result of having so many high-performance GPUs packed together in a confined space, rather than motherboard/RAM limitations. The slight performance gain you report most likely has everything to do with your new case providing slightly better airflow. If being able to run your cards at their maximum rendering performance is your goal, then I'd recommend either improving the airflow in your case (potentially going with a different case altogether) or even switching to GPU watercooling (3+ GPU non-server-rack systems are where custom loops truly shine).

    There is nothing terribly complex about that scene unless you have high subD somewhere or some other setting cranked. I do scenes like that all the time.

Check the bottom-right of the supplied render. There is a fully furnished room hidden beyond the camera viewpoint, all parts of which will contribute to the time it takes for a fully raytraced scene to render.

  • RayDAnt said:
    areg5 said:

     

    There is nothing terribly complex about that scene unless you have high subD somewhere or some other setting cranked. I do scenes like that all the time.

Check the bottom-right of the supplied render. There is a fully furnished room hidden beyond the camera viewpoint, all parts of which will contribute to the time it takes for a fully raytraced scene to render.

You think I don't use full rooms? An hour and a half on three 1080 Tis is slower than my 1080 Ti and 2070. I'm pretty confident his cards are throttling badly, but I'm trying to verify, since both cases have very good airflow.

  • areg5 Posts: 617

Well, it could certainly be an airflow issue.  I notice that when I throttle the cards to a max temp of under 80 it's stable; I didn't think 80 degrees was overheating.  Is it?  Anyway, as far as rendering times go, I should also mention that I typically set the iterations to at least 10,000, usually 15,000, and increase the convergence to at least 97%.  Sure, if I kept the default values it would finish much faster.
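For a sense of why the last few percent of convergence are so expensive: Iray's convergence figure is (roughly) the fraction of pixels whose estimated noise has fallen below threshold, and Monte Carlo noise only falls off as 1/sqrt(samples), so the stubborn pixels (reflections, dim indoor bounce light) dominate the tail. Below is a toy illustration that assumes a lognormal spread of per-pixel sample requirements; it is not Iray's actual convergence logic:

    # Toy model: per-pixel sample requirements drawn from a lognormal
    # distribution, a stand-in for hard pixels (reflective surfaces,
    # dim interiors). Shows how the budget grows with the target.
    import numpy as np

    rng = np.random.default_rng(0)
    needed = rng.lognormal(mean=5.0, sigma=1.0, size=100_000)

    for target in (0.90, 0.95, 0.97, 0.99):
        iters = np.quantile(needed, target)
        print(f"{target:.0%} converged: ~{iters:,.0f} samples/pixel")
    # In this toy model the 95% -> 97% step alone costs ~25% more samples.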

  • RayDAnt Posts: 1,144
    RayDAnt said:
    areg5 said:

     

    There is nothing terribly complex about that scene unless you have high subD somewhere or some other setting cranked. I do scenes like that all the time.

Check the bottom-right of the supplied render. There is a fully furnished room hidden beyond the camera viewpoint, all parts of which will contribute to the time it takes for a fully raytraced scene to render.

    You think I don't use full rooms?

    Not at all. Merely pointing out that a full room environment + an unknown number of smaller scene props (the supplied render clearly indicates more is out of view) can mean very different things when it comes to render times.

  • RayDAnt Posts: 1,144
    areg5 said:

Well, it could certainly be an airflow issue.  I notice that when I throttle the cards to a max temp of under 80 it's stable; I didn't think 80 degrees was overheating.  Is it?  Anyway, as far as rendering times go, I should also mention that I typically set the iterations to at least 10,000, usually 15,000, and increase the convergence to at least 97%.  Sure, if I kept the default values it would finish much faster.

Do you also change the value for Render Quality (default = 1)? Every doubling of that value (e.g. going from 1 to 2) will lead to an approximate doubling of overall render time as well.
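Taking that rule of thumb at face value, the cost compounds quickly. Rough, purely illustrative arithmetic against the ~1:26 render from the opening post:

    # Projection under the doubling rule of thumb above (illustrative).
    baseline_min = 86  # ~1 h 26 m at Render Quality = 1
    for quality in (1, 2, 4):
        print(f"Render Quality {quality}: ~{baseline_min * quality} min")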

  • areg5 Posts: 617
    RayDAnt said:
    areg5 said:

Well, it could certainly be an airflow issue.  I notice that when I throttle the cards to a max temp of under 80 it's stable; I didn't think 80 degrees was overheating.  Is it?  Anyway, as far as rendering times go, I should also mention that I typically set the iterations to at least 10,000, usually 15,000, and increase the convergence to at least 97%.  Sure, if I kept the default values it would finish much faster.

Do you also change the value for Render Quality (default = 1)? Every doubling of that value (e.g. going from 1 to 2) will lead to an approximate doubling of overall render time as well.

    Not usually.  I do for a high quality single print.

  • areg5 Posts: 617
    areg5 said:

Nothing was overheating.  The HAF X keeps everything really cool. If I don't undervolt them, they typically crash to CPU after a few minutes.  I don't have that issue with one card running, just with 3.  The extra lanes were the reason I wanted the new build.  As for the time, my scenes are pretty complex: lots of textures, lots of objects.  If I'm doing a render with no objects, just figures and an HDRI, it renders in about 10 minutes, if that.  Indoor scenes always take longer in my experience.  Typically they're done in less than 30 minutes. It might have gone faster if I'd increased the lighting.  I chose that frame purely because it was one that took an unusually long time to render, and I wanted to see the difference, if any.

I have no idea what you mean by crash to CPU. Do you mean the drivers crashed? Or do you mean the cards themselves actually shut down (which is almost certainly an overheating issue)? Did you run your monitor off a 1080 Ti or off the CPU's iGPU?

    There is nothing terribly complex about that scene unless you have high subD somewhere or some other setting cranked. I do scenes like that all the time.

Is 80 degrees overheating?  If I keep it under 80, usually around 76, it runs with no problem.  Yes, by crash to CPU I mean the cards shut down.  I notice that one of the cards always runs a few degrees hotter, and when that one gets to 80 degrees or so, they all shut down.  As for the monitor:  I use a single monitor that I can switch back and forth between both rigs.  The TR rig does not have integrated graphics, so it does run off one of the cards.

  • Install MSI Afterburner and turn the fan profile on all your cards up to 100% before you start your render. If you do this, it should keep you from having to undervolt, and your hardware will last longer. I don't overclock; it just shortens the life of computer components. I also take the side of the case off, and my CPU is watercooled. On the other hand, if you live somewhere warm, I would get a cheap fan and point it at your desktop to cool all that Iray goodness if your renders take hours. Modern case and GPU makers design their products with gaming in mind, not rendering. If you don't believe me, do the test yourself: run a hardcore game that stresses your hardware and check the temps in MSI Afterburner, then do the same with an Iray render; you will see the difference.
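For anyone who would rather log temperatures than watch an overlay, here is a minimal sketch using NVIDIA's NVML bindings (the pynvml module from the nvidia-ml-py package, installable with pip). Run it in a console during a render to see which card heats up first; it assumes the driver exposes these sensors, which the GTX 10-series does:

    # Minimal multi-GPU temperature/power/fan logger via NVML
    # (pip install nvidia-ml-py). Handy for spotting the one card
    # that runs hot in a packed 3-card rig.
    import time
    import pynvml

    pynvml.nvmlInit()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]
    try:
        while True:
            for i, h in enumerate(handles):
                temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
                watts = pynvml.nvmlDeviceGetPowerUsage(h) / 1000  # reported in mW
                fan = pynvml.nvmlDeviceGetFanSpeed(h)             # percent
                print(f"GPU {i}: {temp} C  {watts:.0f} W  fan {fan}%")
            time.sleep(10)
    finally:
        pynvml.nvmlShutdown()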

  • areg5 said:
    areg5 said:

Nothing was overheating.  The HAF X keeps everything really cool. If I don't undervolt them, they typically crash to CPU after a few minutes.  I don't have that issue with one card running, just with 3.  The extra lanes were the reason I wanted the new build.  As for the time, my scenes are pretty complex: lots of textures, lots of objects.  If I'm doing a render with no objects, just figures and an HDRI, it renders in about 10 minutes, if that.  Indoor scenes always take longer in my experience.  Typically they're done in less than 30 minutes. It might have gone faster if I'd increased the lighting.  I chose that frame purely because it was one that took an unusually long time to render, and I wanted to see the difference, if any.

I have no idea what you mean by crash to CPU. Do you mean the drivers crashed? Or do you mean the cards themselves actually shut down (which is almost certainly an overheating issue)? Did you run your monitor off a 1080 Ti or off the CPU's iGPU?

    There is nothing terribly complex about that scene unless you have high subD somewhere or some other setting cranked. I do scenes like that all the time.

Is 80 degrees overheating?  If I keep it under 80, usually around 76, it runs with no problem.  Yes, by crash to CPU I mean the cards shut down.  I notice that one of the cards always runs a few degrees hotter, and when that one gets to 80 degrees or so, they all shut down.  As for the monitor:  I use a single monitor that I can switch back and forth between both rigs.  The TR rig does not have integrated graphics, so it does run off one of the cards.

80 is hot, but it depends on how you get that temp measurement and what it measures. If the card hits 80 and shuts down, that is an overheat.

You are going way beyond any reasonable amount of detail; the difference between 95% and 97% convergence is just not noticeable. Your render times are long because of your settings.

  • areg5 Posts: 617

Install MSI Afterburner and turn the fan profile on all your cards up to 100% before you start your render. If you do this, it should keep you from having to undervolt, and your hardware will last longer. I don't overclock; it just shortens the life of computer components. I also take the side of the case off, and my CPU is watercooled. On the other hand, if you live somewhere warm, I would get a cheap fan and point it at your desktop to cool all that Iray goodness if your renders take hours. Modern case and GPU makers design their products with gaming in mind, not rendering. If you don't believe me, do the test yourself: run a hardcore game that stresses your hardware and check the temps in MSI Afterburner, then do the same with an Iray render; you will see the difference.

I'm presuming EVGA's Precision XOC is the same as Afterburner, and it has fan control.  My CPU is watercooled as well.  I'll try a few renders at 100% fan and 100% power and see what happens.

  • areg5 Posts: 617
    areg5 said:
    areg5 said:

Nothing was overheating.  The HAF X keeps everything really cool. If I don't undervolt them, they typically crash to CPU after a few minutes.  I don't have that issue with one card running, just with 3.  The extra lanes were the reason I wanted the new build.  As for the time, my scenes are pretty complex: lots of textures, lots of objects.  If I'm doing a render with no objects, just figures and an HDRI, it renders in about 10 minutes, if that.  Indoor scenes always take longer in my experience.  Typically they're done in less than 30 minutes. It might have gone faster if I'd increased the lighting.  I chose that frame purely because it was one that took an unusually long time to render, and I wanted to see the difference, if any.

I have no idea what you mean by crash to CPU. Do you mean the drivers crashed? Or do you mean the cards themselves actually shut down (which is almost certainly an overheating issue)? Did you run your monitor off a 1080 Ti or off the CPU's iGPU?

    There is nothing terribly complex about that scene unless you have high subD somewhere or some other setting cranked. I do scenes like that all the time.

Is 80 degrees overheating?  If I keep it under 80, usually around 76, it runs with no problem.  Yes, by crash to CPU I mean the cards shut down.  I notice that one of the cards always runs a few degrees hotter, and when that one gets to 80 degrees or so, they all shut down.  As for the monitor:  I use a single monitor that I can switch back and forth between both rigs.  The TR rig does not have integrated graphics, so it does run off one of the cards.

80 is hot, but it depends on how you get that temp measurement and what it measures. If the card hits 80 and shuts down, that is an overheat.

You are going way beyond any reasonable amount of detail; the difference between 95% and 97% convergence is just not noticeable. Your render times are long because of your settings.

I will try the default 95%.  I'm getting card temp measurements from Precision XOC and GPU-Z.  I figured that's why they were long, but I just wanted some added detail.  With lots of objects, textures, and reflective surfaces, 95% sometimes still looks a bit grainy to me, but I guess I can use a denoiser if it is.

  • areg5 Posts: 617
    areg5 said:

Install MSI Afterburner and turn the fan profile on all your cards up to 100% before you start your render. If you do this, it should keep you from having to undervolt, and your hardware will last longer. I don't overclock; it just shortens the life of computer components. I also take the side of the case off, and my CPU is watercooled. On the other hand, if you live somewhere warm, I would get a cheap fan and point it at your desktop to cool all that Iray goodness if your renders take hours. Modern case and GPU makers design their products with gaming in mind, not rendering. If you don't believe me, do the test yourself: run a hardcore game that stresses your hardware and check the temps in MSI Afterburner, then do the same with an Iray render; you will see the difference.

I'm presuming EVGA's Precision XOC is the same as Afterburner, and it has fan control.  My CPU is watercooled as well.  I'll try a few renders at 100% fan and 100% power and see what happens.

OK.  The render ran for 12 minutes at that setting and then the GPUs cut out.  So I guess what I'm left with is my previous XOC settings.  I basically set the temp limit at 76, and it'll run all day.

  • 76 is way too low for a thermal shutdown.

Try rendering with various pairings of the three cards, and pull the third out entirely. See if one of the three is a problem; I bet it is.

  • areg5 Posts: 617
    edited February 2020

    76 is way too low for a thermal shutdown.

Try rendering with various pairings of the three cards, and pull the third out entirely. See if one of the three is a problem; I bet it is.

Sorry, I guess I didn't make myself clear: it shuts down at 80 and works fine at 76.  But I will try your suggestion.  I don't think one of the cards is the issue, though.  I've been using 3 cards of various sizes for a couple of years now, and it was the same with other combinations.

Hmmm... my initial thought is that your suggestion may be sort of correct.  I'm using the middle card for display and rendering through the ones in slot 4 and slot 1.  I have the fans at 100% on all cards, at 100% power.  After 15 minutes of rendering, the working cards are running at 65 degrees (slot 1) and 45 degrees (slot 4).  What I think is happening is that with all of the cards running, there is too much heat buildup between them, but if the middle card is basically cold, the heat is significantly less.  So I don't think a card is bad.  I think that although there are 4 PCIe slots I can use, they are physically too closely packed to work at full capacity because of the heat each one picks up from the neighboring card.

That being said, I am repeating the render above on this setup.  If it goes all the way through but is much slower, I will likely go back to running all three and capping the temp at 76.  If it is faster, I'll probably swap one of the 1080 Tis for the 980 Ti in the other rig and use the 980 for display only.

  • areg5 said:

    76 is way too low for a thermal shutdown.

Try rendering with various pairings of the three cards, and pull the third out entirely. See if one of the three is a problem; I bet it is.

Sorry, I guess I didn't make myself clear: it shuts down at 80 and works fine at 76.  But I will try your suggestion.  I don't think one of the cards is the issue, though.  I've been using 3 cards of various sizes for a couple of years now, and it was the same with other combinations.

Hmmm... my initial thought is that your suggestion may be sort of correct.  I'm using the middle card for display and rendering through the ones in slot 4 and slot 1.  I have the fans at 100% on all cards, at 100% power.  After 15 minutes of rendering, the working cards are running at 65 degrees (slot 1) and 45 degrees (slot 4).  What I think is happening is that with all of the cards running, there is too much heat buildup between them, but if the middle card is basically cold, the heat is significantly less.  So I don't think a card is bad.  I think that although there are 4 PCIe slots I can use, they are physically too closely packed to work at full capacity because of the heat each one picks up from the neighboring card.

That being said, I am repeating the render above on this setup.  If it goes all the way through but is much slower, I will likely go back to running all three and capping the temp at 76.  If it is faster, I'll probably swap one of the 1080 Tis for the 980 Ti in the other rig and use the 980 for display only.

Try setting the case fans that blow across the cards to a higher RPM. That Thermaltake has very good thermals for a single card, but to keep 3 cool you need a lot of airflow.

  • areg5 Posts: 617
    areg5 said:

    76 is way too low for a thermal shutdown.

Try rendering with various pairings of the three cards, and pull the third out entirely. See if one of the three is a problem; I bet it is.

Sorry, I guess I didn't make myself clear: it shuts down at 80 and works fine at 76.  But I will try your suggestion.  I don't think one of the cards is the issue, though.  I've been using 3 cards of various sizes for a couple of years now, and it was the same with other combinations.

Hmmm... my initial thought is that your suggestion may be sort of correct.  I'm using the middle card for display and rendering through the ones in slot 4 and slot 1.  I have the fans at 100% on all cards, at 100% power.  After 15 minutes of rendering, the working cards are running at 65 degrees (slot 1) and 45 degrees (slot 4).  What I think is happening is that with all of the cards running, there is too much heat buildup between them, but if the middle card is basically cold, the heat is significantly less.  So I don't think a card is bad.  I think that although there are 4 PCIe slots I can use, they are physically too closely packed to work at full capacity because of the heat each one picks up from the neighboring card.

That being said, I am repeating the render above on this setup.  If it goes all the way through but is much slower, I will likely go back to running all three and capping the temp at 76.  If it is faster, I'll probably swap one of the 1080 Tis for the 980 Ti in the other rig and use the 980 for display only.

Try setting the case fans that blow across the cards to a higher RPM. That Thermaltake has very good thermals for a single card, but to keep 3 cool you need a lot of airflow.

    I'm coming to the conclusion that if I want to run 3 cards, they need to be watercooled.

  • areg5 Posts: 617

And now for the update.  I've rendered a bunch of frames using only 2 of the cards, as described above.  It is a bit slower, as expected, but the images look a bit crisper, which was unexpected.  I'm thinking I can move one of the 1080 Tis to the Coolermaster rig and move the 980 Ti from there to the Thermaltake for the monitor.  I figure that, batch-wise, rendering times with both rigs running should be close to using all 3 cards in the Thermaltake, if I do 2/3 of the frames in the Thermaltake and the remaining 1/3 in the Coolermaster.  Then I can either add another 1080 Ti to the Coolermaster, or wait for prices to drop a bit and start getting 2080 Tis.

  • RayDAnt Posts: 1,144
    edited February 2020
    areg5 said:

And now for the update.  I've rendered a bunch of frames using only 2 of the cards, as described above.  It is a bit slower, as expected, but the images look a bit crisper, which was unexpected.  I'm thinking I can move one of the 1080 Tis to the Coolermaster rig and move the 980 Ti from there to the Thermaltake for the monitor.  I figure that, batch-wise, rendering times with both rigs running should be close to using all 3 cards in the Thermaltake, if I do 2/3 of the frames in the Thermaltake and the remaining 1/3 in the Coolermaster.  Then I can either add another 1080 Ti to the Coolermaster, or wait for prices to drop a bit and start getting 2080 Tis.

It occurred to me that you might want to check your power supply to make sure it isn't the true culprit behind your render crashes. 80C, while on the higher end of acceptable temperatures for an Nvidia GPU die, is not so high that you should be seeing outright crashing of your cards (clock-speed limiting, yes; allegedly anything above 60C will do that). Assuming your cards don't report VRAM temps (most Founders Editions don't), it is possible that those are what is technically overheating and causing crashes (Iray is very hard on a GPU's VRAM chips). But even that seems a little hard to believe with what you've described so far.

Voltage instability from the PSU absolutely will lead to crashing GPUs. And three 1080 Tis plus a CPU is a LOT of power to be drawing from a PSU's 12-volt rail(s). IIRC the minimum recommendation for a single-1080 Ti system is at least 500 watts. For a 3-card system, 1200 watts is the least I would consider (remember that PSUs usually run best/most reliably at wattages significantly below their sticker ratings).
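Rough numbers behind that sizing advice, using NVIDIA's 250 W reference board power for the 1080 Ti and the two CPUs' TDPs (84 W for the i7-4790, 180 W for the 2950X). Partner cards and transient spikes can draw more, so treat this as a sanity check rather than a spec:

    # Ballpark PSU headroom check for the two builds in this thread.
    # 'rest' loosely covers motherboard, RAM, drives, and fans.
    def psu_check(label, gpus, gpu_w, cpu_w, rest_w, psu_w):
        load = gpus * gpu_w + cpu_w + rest_w
        budget = 0.80 * psu_w  # keep sustained load near ~80% of rating
        verdict = "OK" if load <= budget else "tight"
        print(f"{label}: ~{load} W load vs {budget:.0f} W budget "
              f"on a {psu_w} W unit -> {verdict}")

    psu_check("old build", gpus=3, gpu_w=250, cpu_w=84, rest_w=75, psu_w=1200)
    psu_check("new build", gpus=3, gpu_w=250, cpu_w=180, rest_w=75, psu_w=1300)
    # Both come out with well under 100 W of margin; at 100% power targets,
    # per-card spikes past 250 W erase it, which fits the crashes described.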

  • areg5 Posts: 617
    RayDAnt said:
    areg5 said:

And now for the update.  I've rendered a bunch of frames using only 2 of the cards, as described above.  It is a bit slower, as expected, but the images look a bit crisper, which was unexpected.  I'm thinking I can move one of the 1080 Tis to the Coolermaster rig and move the 980 Ti from there to the Thermaltake for the monitor.  I figure that, batch-wise, rendering times with both rigs running should be close to using all 3 cards in the Thermaltake, if I do 2/3 of the frames in the Thermaltake and the remaining 1/3 in the Coolermaster.  Then I can either add another 1080 Ti to the Coolermaster, or wait for prices to drop a bit and start getting 2080 Tis.

It occurred to me that you might want to check your power supply to make sure it isn't the true culprit behind your render crashes. 80C, while on the higher end of acceptable temperatures for an Nvidia GPU die, is not so high that you should be seeing outright crashing of your cards (clock-speed limiting, yes; allegedly anything above 60C will do that). Assuming your cards don't report VRAM temps (most Founders Editions don't), it is possible that those are what is technically overheating and causing crashes (Iray is very hard on a GPU's VRAM chips). But even that seems a little hard to believe with what you've described so far.

Voltage instability from the PSU absolutely will lead to crashing GPUs. And three 1080 Tis plus a CPU is a LOT of power to be drawing from a PSU's 12-volt rail(s). IIRC the minimum recommendation for a single-1080 Ti system is at least 500 watts. For a 3-card system, 1200 watts is the least I would consider (remember that running close to a PSU's max rating is bad for both longevity and reliability).

It very well could be the power supply.  The Coolermaster has 1200W, the Thermaltake 1300W.  That could explain why they crash when I don't throttle them down.  But it always seems to happen around the same temps for whichever card is running hotter.  I'm putting it through its paces today.  I did have a crash after 1:30, which is a long render.  I may go back to running 3 cards powered down if it recurs.

  • RayDAnt Posts: 1,144
    areg5 said:
    RayDAnt said:
    areg5 said:

And now for the update.  I've rendered a bunch of frames using only 2 of the cards, as described above.  It is a bit slower, as expected, but the images look a bit crisper, which was unexpected.  I'm thinking I can move one of the 1080 Tis to the Coolermaster rig and move the 980 Ti from there to the Thermaltake for the monitor.  I figure that, batch-wise, rendering times with both rigs running should be close to using all 3 cards in the Thermaltake, if I do 2/3 of the frames in the Thermaltake and the remaining 1/3 in the Coolermaster.  Then I can either add another 1080 Ti to the Coolermaster, or wait for prices to drop a bit and start getting 2080 Tis.

It occurred to me that you might want to check your power supply to make sure it isn't the true culprit behind your render crashes. 80C, while on the higher end of acceptable temperatures for an Nvidia GPU die, is not so high that you should be seeing outright crashing of your cards (clock-speed limiting, yes; allegedly anything above 60C will do that). Assuming your cards don't report VRAM temps (most Founders Editions don't), it is possible that those are what is technically overheating and causing crashes (Iray is very hard on a GPU's VRAM chips). But even that seems a little hard to believe with what you've described so far.

Voltage instability from the PSU absolutely will lead to crashing GPUs. And three 1080 Tis plus a CPU is a LOT of power to be drawing from a PSU's 12-volt rail(s). IIRC the minimum recommendation for a single-1080 Ti system is at least 500 watts. For a 3-card system, 1200 watts is the least I would consider (remember that running close to a PSU's max rating is bad for both longevity and reliability).

It very well could be the power supply.  The Coolermaster has 1200W, the Thermaltake 1300W.  That could explain why they crash when I don't throttle them down.  But it always seems to happen around the same temps for whichever card is running hotter.  I'm putting it through its paces today.  I did have a crash after 1:30, which is a long render.  I may go back to running 3 cards powered down if it recurs.

One thing to add to that: all other things being equal, higher operating temps (as you approach a piece of silicon's max rating) lead to disproportionately higher power draw. So although temps appear to be the root cause, it could be the temps driving higher power draw that actually trigger the problem. Either way, more robust (e.g. water) cooling or a higher-rated/more reliable PSU is the remedy.

  • areg5 Posts: 617

My gut impression is watercooling.  The Coolermaster's PSU is an EVGA 1200W Platinum; the Thermaltake's is an EVGA 1300W Gold.  So they're both good units.  I think I'll stick with running the 3 cards at 80%.  That caps the temp at 76 degrees, and it runs very stably on my rig.

  • kenshaw011267 Posts: 3,805

If all the cards are at or below 80C at the junction, then the cards are not overheating. The VRAM could be, but I strongly doubt it.

Having specced the system out, it does look like 1200W is definitely too low to run three 1080 Tis and the i7, and 1300W with the TR chip is borderline. It could well have just been instability from the PSU. Also, the 980 Ti draws more power than the 1080 Ti, so that would be a bad idea.

Your best solution, if you really want to render on 3 cards, is to get a much higher-output PSU, 1600W or higher.

  • areg5 Posts: 617
    edited March 2020

Well, I just got the 1300-watt unit, so I'm not looking to replace it just yet.  It could be instability from the PSU; I don't contest that.  I have found that the card in slot 1 gets a few degrees hotter than the others, likely because it's closest to the CPU and RAM, and since the CPU is watercooled there's no fan there.  I have found that the system is very stable running 3 cards if I limit just the card in slot 1 to 80%; the other 2 run fine at 100%.  I have all GPU fans at max once the cards hit 50 degrees.
