Should I use my old GTX570 in addition to my RTX2080TI?

Hi friends, a few months ago I upgraded my GPU. Would it be beneficial for faster rendering times to use my GTX570 in addition to my RTX2080TI?

Thanks for your answers.

Comments

  • alexhcowley Posts: 2,392

    I'm not familiar with the GTX570 but it sounds like a very old card so I doubt it would make much difference.

    Assuming you can find a driver that works with both cards, I would suggest you use the GTX570 to drive your monitor and then give the RTX2080TI to Iray to play with. 

    Cheers,

    Alex.

  • dijitul Posts: 146
    edited December 2019

    From what I understand, you should not. The reason is that whichever card has the least amount of RAM will set the limit on how much data you can load to render. In addition, as Alex said, it'll make an insignificant difference compared to the 2080. If you dedicate the 570 to your monitor, it might help, but if your monitor is high-res or high-performance, perhaps that's not a good idea either.

    Post edited by dijitul on
  • Leana Posts: 11,814

    The reason is that whichever card has the least amount of RAM will set the limit on how much data you can load to render.

    That’s not exactly true. If you have 2 cards with different VRAM amounts then Iray will use both for scenes which fit on both. If the scene only fits on one of them then Iray will simply use only the one where the scene fits to render, not ditch both.
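
    Roughly, that behaviour could be sketched like this (a conceptual illustration in Python, not Iray's actual API; the VRAM figures are approximate):

        # Every card that can hold the whole scene joins the render; a card that
        # can't simply drops out without stopping the others.
        cards = {               # approximate VRAM in GB (illustrative values)
            "RTX 2080 Ti": 11.0,
            "GTX 570": 1.25,
        }

        def cards_used_for(scene_size_gb):
            """Return the cards that would actually take part in the render."""
            return [name for name, vram in cards.items() if vram >= scene_size_gb]

        print(cards_used_for(0.9))   # both cards help with a small scene
        print(cards_used_for(4.0))   # the 570 drops out, the 2080 Ti keeps going
        print(cards_used_for(14.0))  # nothing fits; Iray typically falls back to CPU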

  • PDSmith Posts: 712
    edited December 2019

    I would suggest against it for a couple of reasons. First, like mentioned above, the RAM.

    Second, sort of a reason: the CUDA compute capability of that card is dang near at the bottom, if not off the list, of what's acceptable for the current version of Iray, and that can have the effect of the GPU tripping offline to CPU rendering. When the 570 goes offline, it has a 50/50 chance of taking the 2080 offline with it. This happens with me when it comes to my 2 GTX 960s working with my RTX 2070 (small scenes I use all three cards; big complex scenes I designate just the one card for rendering).

    The other reason may be your motherboard. Are both of your graphics card slots PCI Express x16? The reason being: most (not all, but most) motherboards will downgrade the PCI-E x16 slot to PCI-E x8 speed to support the other card, which is more than likely in a PCI-E x8 slot. I.e., you've cut your throughput in half. If you have a third card in a third slot, which might even be PCI-E x4, then all three cards are cut down in speed again. Ugh!
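
    To put rough numbers on that (a back-of-the-envelope sketch in Python, assuming PCIe 3.0 at roughly 0.985 GB/s per lane per direction; real-world throughput is a bit lower, and this mainly affects how fast the scene is loaded onto the card):

        # Approximate per-direction PCIe 3.0 bandwidth for common slot widths.
        GB_PER_LANE = 0.985  # assumed figure for PCIe 3.0; PCIe 4.0 is about double

        for lanes in (16, 8, 4):
            print(f"PCIe 3.0 x{lanes}: ~{lanes * GB_PER_LANE:.1f} GB/s")

        # PCIe 3.0 x16: ~15.8 GB/s
        # PCIe 3.0 x8:  ~7.9 GB/s   <- half the transfer speed for loading the scene
        # PCIe 3.0 x4:  ~3.9 GB/s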

    R/

    PDSmith

     

    Post edited by PDSmith on
  • thilion Posts: 55
    Leana said:

    The reason is that whichever card has the least amount of RAM will set the limit on how much data you can load to render.

    That’s not exactly true. If you have 2 cards with different VRAM amounts then Iray will use both for scenes which fit on both. If the scene only fits on one of them then Iray will simply use only the one where the scene fits to render, not ditch both.

    I have heard that the opposite is the case. When DS renders a scene, it doesn't split the data to be rendered and distribute one part to card 1 and the rest to card 2. A scene is too complex for the data to be split that way: light rays bounce off of everything, and thus there is only one big chunk of data that has to be loaded into both cards' VRAM. The iterations can be split among the graphics cards, but not the data to be rendered.

  • thilion said:
    Leana said:

    The reason is that whichever card has the least amount of RAM will set the limit on how much data you can load to render.

    That’s not exactly true. If you have 2 cards with different VRAM amounts then Iray will use both for scenes which fit on both. If the scene only fits on one of them then Iray will simply use only the one where the scene fits to render, not ditch both.

    I have heard that the opposite is the case. When DS renders a scene, it doesn't split the data to be rendered and distribute one part to card 1 and the rest to card 2. A scene is too complex for the data to be split that way: light rays bounce off of everything, and thus there is only one big chunk of data that has to be loaded into both cards' VRAM. The iterations can be split among the graphics cards, but not the data to be rendered.

    With Iray each card loads the full scene and drops out if it can't handle that, or later if it runs out of memory - one card dropping out doesn't stop another, with more memory, from continuing. I'm not sure if you are stating otherwise or misunderstanding Leana, who is saying the same thing.

  • dijitul Posts: 146
    Leana said:

    The reason is that whichever card has the least amount of RAM will set the limit on how much data you can load to render.

    That’s not exactly true. If you have 2 cards with different VRAM amounts then Iray will use both for scenes which fit on both. If the scene only fits on one of them then Iray will simply use only the one where the scene fits to render, not ditch both.

    The GTX 570 doesn't even have 2 GB of RAM.  The 2080 has 11 GB of RAM.  If you render any scene which exceeds the limit of the 570, the 570 becomes useless.  This will likely be a majority of renders, so for all intents and purposes, the 570 is not useful except for being used as a monitor display card.  To answer OP's question, "Would it be faster to keep the 570?" the answer is more often than not, no, it wouldn't.  

    Also, a scene which happens to fit in the RAM of both cards wouldn't gain much from the second GPU. Check any benchmark website to compare. You probably wouldn't even find a 570 listed anymore. It only has 480 CUDA cores.

    I stick with my previous statement: drop the 570 from renders and use it strictly for display. This way, you can continue doing other things while renders happen, such as watching video or using Photoshop, without any choppiness.
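
    For a rough sense of scale (a back-of-the-envelope estimate in Python; CUDA core counts alone overstate the 570, since its Fermi cores are far slower per core than the 2080 Ti's Turing cores):

        # Upper bound on what the 570 could add, by raw core count only.
        cores_570 = 480       # GTX 570, as noted above
        cores_2080ti = 4352   # published spec for the RTX 2080 Ti

        extra = cores_570 / (cores_570 + cores_2080ti)
        print(f"Best case, the 570 adds about {extra:.0%} of the combined cores")  # ~10%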

  • Leana said:

    The reason is that whichever card has the least amount of RAM will set the limit on how much data you can load to render.

    That’s not exactly true. If you have 2 cards with different VRAM amounts then Iray will use both for scenes which fit on both. If the scene only fits on one of them then Iray will simply use only the one where the scene fits to render, not ditch both.

    The GTX 570 doesn't even have 2 GB of RAM.  The 2080 has 11 GB of RAM.  If you render any scene which exceeds the limit of the 570, the 570 becomes useless.  This will likely be a majority of renders, so for all intents and purposes, the 570 is not useful except for being used as a monitor display card.  To answer OP's question, "Would it be faster to keep the 570?" the answer is more often than not, no, it wouldn't.  

    That is what Leana was saying.

    Also, a scene which happens to fit in the RAM of both cards wouldn't gain much from the second GPU. Check any benchmark website to compare. You probably wouldn't even find a 570 listed anymore. It only has 480 CUDA cores.

    I stick with my previous statement: drop the 570 from renders and use it strictly for display. This way, you can continue doing other things while renders happen, such as watching video or using Photoshop, without any choppiness.

     

  • You could hook the monitor to the 570 and just pick the 2080 for rendering.

     

  • Here are a few reasons not to mix GPUs.

    1. Driver compatibility.

    In this case, the latest driver for the 2080 Ti doesn't support any GPU before the GTX 600 series.

    https://www.nvidia.com/Download/driverResults.aspx/156281/en-us

    And the latest driver for the GTX 570 doesn't support the 20xx series.

    https://www.nvidia.com/Download/driverResults.aspx/132845/en-us

     

    2. Pass through.

    In some cases, even if using compatible drivers, one or the other card might not actually be used, becoming nothing more than a video port for the other card.

     

    3. Non-recognized card.

    Even if using the correct drivers for both cards, it can also happen that one of the cards won't be recognized properly and will just get the Microsoft default display adapter driver, rendering it useless for more than the most basic of tasks.

    And nothing I've ever found will allow the proper driver to be installed. (A quick way to check which cards the driver actually recognizes is sketched at the end of this post.)

     

    These issues can crop up even when using the same generation of cards but different types, e.g. mixing GTX, Quadro and Tesla cards.
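
    If you do want to try mixing cards, a quick sanity check is to ask nvidia-smi which GPUs the installed driver actually sees; a card missing from the list has fallen back to the default display adapter and Iray won't use it. A minimal sketch in Python (the query fields are standard nvidia-smi options, but this assumes the NVIDIA driver is installed and on the PATH):

        import subprocess

        # List every GPU the current NVIDIA driver recognizes, with its VRAM.
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,driver_version,memory.total", "--format=csv"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)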

     
