DAZ IRay Render Speeds


Comments

  • DAZ_Spooky Posts: 3,100
    hphoenix said:
    hphoenix said:
    hphoenix said:
    FirePro9 said:
    mjc1016 said:
    FirePro9 said:

    Actually...if you can wait a few months, it may be better to wait for the next generation (Pascal) cards...as they should have at least similar core counts but more memory, and/or you can catch the price drop on the current generation cards.

     Thanks MJC1016 for the heads up on the Pascal boards, had not heard of them yet.  Couldn't find specs on the number of CUDA cores.  Nothing seems to change faster in the PC area than video cards, Pascal looks like a nice improvement, and I will be interested to see what impact these have on rendering times.  My thoughts were in line with your idea a bit: build a single-card box now, then hope prices drop on the 2nd and 3rd video card a little later.

    The Pascal GPUs have a new type of RAM which will allow them to stack more RAM on the card. Nvidia has already stated that consumer versions could feature as much as 16 GB, while workstation cards could feature up to 32 GB. So yes, it could be well worth waiting to see what options they have. I expect to see 4, 8, 12 and 16 GB options available in time, with 8 GB being the new "normal." Maybe not all at once, though, because there may be a shortage of HBM2 RAM chips for a while. So there will be a high premium on a 16 GB model when it does ship.

    You can be certain that the series will offer at least a few more CUDA cores as well.

    There is a second advantage to Pascal cards: NVLink, which supposedly replaces PCIe 3 for video cards and allows enough bandwidth for the video cards to use motherboard RAM for CUDA tasks. 

    Not sure where you got that NVLink is replacing PCIe-3.0.  NVLink is the replacement for SLI, which does take inter-card communication off the PCIe bus.  But the cards are still PCIe-3.0 cards.

     

    http://www.nvidia.com/object/nvlink.html and http://blogs.nvidia.com/blog/2014/11/14/what-is-nvlink/

    Ah.  Read the linked whitepaper (link is on the first linked page).  While there are plans (for the big exascale supercomputers) to have NVLink connections on the backplanes, consumer plans for having NVLink on motherboards are considerably in the future.  All current designs are based on PCIe-3.0 for CPU to GPU connection.  NVLink is GPU-to-GPU only.  To quote from the whitepaper:

    "In the following sections of this paper, we analyze the performance benefit of NVLink for several algorithms and applications by comparing model systems based on PCIe-interconnected next-gen GPUs to otherwise-identical systems with NVLink-interconnected GPUs. GPUs are connected to the CPU using existing PCIe connections, but the NVLink configurations augment this with interconnections among the GPUs for peer-to-peer communication. The following analyses assume future-generation GPUs with performance higher than that of today’s GPUs, so as to better correspond with the GPUs that will be contemporary with NVLink."

    "In the future" is definitely subjective. LOL. Especially with computer architecture. :) 

    True.  However, based on past history, the bus design for standard x86/x64 motherboards is pretty slow to change.  ISA was put out in 1984, followed by EISA in 1988 (but was still backwards compatible).  VLB (VESA local bus) was in 1992 and worked alongside EISA. PCI came out in 1992 at v1.0 (and went up to v3.0 in 2004.)  AGP came out in 1997, PCI-X came out in 1998.  Both AGP and PCI-X existed alongside standard PCI.  PCI-E started in 2004, with v2.0 being released in 2007 and v3.0 in 2010.

    So I would expect to see NVLink appearing on a few high-end motherboards within a couple of years, alongside existing PCI-E slots.  Significant adoption will be slower, since it is tied to nVidia (and AMD will come up with its own equivalent), whereas PCI-E is an open standard.  And PCI-E 4.0 is expected to finalize in 2017.  (And PCI-E 4.0 is backwards compatible with 3.0, and doubles the bandwidth.)

    NVLink is a great idea, but a more open standard will more likely appeal to the board manufacturers, as it simplifies construction and doesn't alienate half the market.  The current major board manufacturers probably dislike the whole SLI/Crossfire dichotomy that nVidia/ATI have forced into existence, as it requires them to design/build two separate motherboards to accommodate both protocols.  A few have tried to support both on the same board, but that has usually not worked as well as dedicated support.

     

    For the past 5 years you have been able to get a choice of SLI and Crossfire on the same board. The more disturbing thing I have seen is boards that support AMD processors don't support PCIe 3. 

  • hphoenix Posts: 1,335
    For the past 5 years you have been able to get a choice of SLI and Crossfire on the same board. The more disturbing thing I have seen is boards that support AMD processors don't support PCIe 3. 

    I said they had it, but that it usually doesn't work as well as boards with dedicated support.

     

    Wait, AMD boards that don't support PCIe-3.0?????  That's just crazy.

     

  • DAZ_Spooky Posts: 3,100

    I said they had it, but that it usually doesn't work as well as boards with dedicated support.

     

    Wait, AMD boards that don't support PCIe-3.0?????  That's just crazy.

     

    Each of the motherboards that I use in the machines I've built, both work and home, supports both CrossFire and SLI. Intel did the bridge (I forget if it is the North Bridge or South Bridge. LOL) 

    If you can find a board that supports both an AMD processor and PCIe 3.0, I would love to see it. The mobos that support AMD processors don't look like they have advanced much in the past 5 years, though, probably because, unlike the Intel boards, the AMD boards are backwards compatible with older processors, which limits advances. 

     

    Oh well, we are getting way off topic. :) 

  • mjc1016 Posts: 15,001
    Oh well, we are getting way off topic. :) 

    Not really...because the motherboard/CPU combination is very relevant to render speed...

    And here's all I could find with PCIe-3

    http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007625 600474769

  • hphoenix Posts: 1,335
    edited January 2016

    If you can find a board that supports both an AMD processor and PCIe 3.0, I would love to see it. The mobos that support AMD processors don't look like they have advanced much in the past 5 years, though, probably because, unlike the Intel boards, the AMD boards are backwards compatible with older processors, which limits advances. 

     

    Oh well, we are getting way off topic. :) 

    Way off topic.  :) 

    Oh, and the SABERTOOTH 990FX/GEN3 R2.0 has PCI-E 3.0.  But I think it's the only one.

    edit:  Oh, and it looks like the ASRock X79 series has PCI-E 3.0 as well....

    Post edited by hphoenix on
  • outrider42 Posts: 3,679

    There is a second advantage to Pascal cards: NVLink, which supposedly replaces PCIe 3 for video cards and allows enough bandwidth for the video cards to use motherboard RAM for CUDA tasks. 

    Everything I have seen about NVLink seems to target servers and workstations more than consumers. And regardless, I'm not sure how that would work for DAZ when it doesn't even support SLI yet. You'd still get the benefit of the extra CUDA cores, but I don't think NVLink would offer DAZ a significant speed increase.

    I believe HBM2 will be the major game changer. You get more RAM and it is much faster, too. The cards will feature more CUDA cores that are even more efficient. These are the two biggest hardware constraints for DAZ, and I think a single flagship Pascal has the potential to blow people's minds with its DAZ render speeds.

    The reports about Pascal having shortages are a bit premature. Those are purely based on the lack of a working Pascal prototype at Nvidia's conference. It is concerning, but not something to worry about just yet. AMD has announced they are shipping their new GPU line this fall. Nvidia would not want to get beat to market by AMD, especially when these AMD cards are expected to be major upgrades (they are also using HBM2 and 14-16 nanometer fabrications.) So even though AMD has only 20% market share right now, they can bounce back very quickly and pull a large portion of that share back if they can beat Nvidia to the market. AMD also has a new CPU on the way, and it is supposed to be a major upgrade.

    No matter what happens, this much is certain: 2016 will be an exciting year for PC hardware.

  • mjc1016 Posts: 15,001

    It's not Daz (or more specifically Studio) that doesn't support SLI...it is Iray that doesn't, and if there were any support for SLI it would need to come from Nvidia, and it does not seem that they are interested in providing SLI support for Iray. 

     

  • FirePro9 Posts: 456

    In regards to the issue of the CPU having 28 vs 40 PCI lanes and impact on rendering speed in multi-GPU systems, I found this info from Autodesk's Iray FAQ ( http://area.autodesk.com/blogs/shane/the_iray_faq ) :

    Q. How much slower is a GPU placed in Gen1 x8 PCI Express slots versus in a Gen2 x16 PCI Express slot?

    A. For operations that are constantly updated, like viewport interactive graphics, Gen1 is much slower than Gen 2. For compute-intensive operations that keep the GPU very busy before sending its results across the bus there is nearly no difference between the two PCIX types. In the case of iray and GPU ray tracing, you might see a GPU render faster on a Gen2 slot when the scene is very simple, but nearly no difference on a Gen1 slot when the scene is reasonably complex.

    It appears, then, that DAZ Iray viewport updates could be impacted by a slower pipe, and possibly simple render scenes, but for more detailed scenes the flow over the pipe is probably not a big issue.  Still, it would be nice to have the screen redraw fast, even if it is a bit of a luxury to work in the Iray viewport rather than in Texture Shaded. Now with this info I can at least try to justify the additional cost for the upgraded CPU, i.e. improved screen redraw of complex Iray scenes.
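    To put rough numbers on that, here is a minimal sketch comparing one-time scene upload times on different slots. The per-lane throughputs (roughly 250/500/985 MB/s effective for Gen1/Gen2/Gen3) and the 1.5 GB scene payload are assumptions for illustration, not measured Iray figures:

    # Rough scene-upload-time comparison across PCIe generations/widths.
    # Per-lane throughputs are approximate effective rates (after encoding
    # overhead); the scene size is a made-up example.

    PER_LANE_MB_S = {"Gen1": 250, "Gen2": 500, "Gen3": 985}

    def upload_seconds(scene_gb, gen, lanes):
        bandwidth_mb_s = PER_LANE_MB_S[gen] * lanes
        return scene_gb * 1024 / bandwidth_mb_s

    scene_gb = 1.5  # hypothetical Iray scene payload
    for gen, lanes in [("Gen1", 8), ("Gen2", 16), ("Gen3", 8), ("Gen3", 16)]:
        print(f"PCIe {gen} x{lanes}: {upload_seconds(scene_gb, gen, lanes):.2f} s")

    Even the slowest case above is under a second, which matches the FAQ's point: a long, compute-heavy render hides the bus cost, while interactive viewport updates (many small, frequent transfers) do not.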

  • DAZ_Spooky Posts: 3,100
    FirePro9 said:

    In regards to the issue of the CPU having 28 vs 40 PCI lanes and impact on rendering speed in multi-GPU systems, I found this info from Autodesk's Iray FAQ ( http://area.autodesk.com/blogs/shane/the_iray_faq ) :


    The pipes are also motherboard dependent. Make sure your motherboard handles the lanes you are expecting to use. 

  • Okay so I want to make sure I am reading this correctly... Using the CPU and the GPU together is slower than just the GPU alone?

  • prixat Posts: 1,590

    Okay so I want to make sure I am reading this correctly... Using the CPU and the GPU together is slower than just the GPU alone?

    Sometimes! :)

    I tested this on my old system (an old Phenom X6 with an old 550 Ti) with an old test scene:

    CPU - 19 minutes

    GPU - 8.5 minutes

    Together - 5.5 minutes

    I concluded it's when the GPU is much faster than the CPU that the load balancing fails and a 'slow' CPU can hold back the GPU.
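    For comparison, here is a minimal sketch of the ideal case, assuming the CPU and GPU simply add their throughputs together. Real Iray load balancing is coarser than this, which is how a much slower CPU can end up holding the GPU back instead of helping:

    # Ideal combined render time if each device contributes work in
    # proportion to its standalone speed (throughputs simply add).

    def combined_time(*device_times):
        return 1 / sum(1 / t for t in device_times)

    cpu_min, gpu_min = 19.0, 8.5   # the Phenom X6 / 550 Ti test above
    print(f"Ideal combined: {combined_time(cpu_min, gpu_min):.1f} min")  # ~5.9 min

    The measured 5.5 minutes is in that ballpark, so on this pairing the CPU genuinely helped; the trouble starts when the gap between CPU and GPU is much wider.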

  • CMKook-24601 Posts: 200
    edited February 2016
    prixat said:


    I concluded it's when the GPU is much faster than the CPU that the load balancing fails and a 'slow' CPU can hold back the GPU.

    Okay, well, "slow" is a relative term. My CPU is an Intel i7 930 @ 2.80GHz and my GPU is a 2GB GDDR5 ... just trying to figure out if I should be using both CPU and GPU or just GPU (or if I should have Photoreal and Interactive checked differently), and also trying to figure out if I should have OptiX Prime Acceleration checked or not....

    Post edited by CMKook-24601 on
  • prixat Posts: 1,590

    You didn't actually say what GPU you have.

    I've had no reason to turn off Optix so far.

  • DAZ_Spooky Posts: 3,100

    Okay so I want to make sure I am reading this correctly... Using the CPU and the GPU together is slower than just the GPU alone?

    That is highly scene dependent. Time to actually render vs. time to load the scene onto the card. 

     

     

    prixat said:

    You didn't actually say what GPU you have.

    I've had no reason to turn off Optix so far.

    So far the only reason to turn off OptiX is that with some computers with AMD/ATI cards it is reported to slow down the render. (I have never seen it, but it is reported to do it.) In cases where you either have an NVIDIA card, or have an Intel GMA chip driving your graphics, OptiX is a magic "go faster" button. 

     

     


    Okay, well, "slow" is a relative term. My CPU is an Intel i7 930 @ 2.80GHz and my GPU is a 2GB GDDR5 ... just trying to figure out if I should be using both CPU and GPU or just GPU (or if I should have Photoreal and Interactive checked differently), and also trying to figure out if I should have OptiX Prime Acceleration checked or not....

    Most scenes will not fit on the 2GB video card you have, at least not out of the box. 2GB may or may not fit one figure, with clothing and hair and an HDRI for lighting. (It can go either way.) 
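    As a very rough way to see how quickly 2 GB disappears, here is a sketch with made-up but plausible numbers: uncompressed 8-bit RGBA textures and an assumed ~100 bytes per triangle. Iray's actual memory layout, working buffers, and acceleration structures are not modelled:

    # Back-of-the-envelope VRAM estimate for a single clothed figure.
    # All figures here are illustrative assumptions, not Iray internals.

    def texture_mb(width, height, channels=4, bytes_per_channel=1):
        return width * height * channels * bytes_per_channel / 2**20

    def rough_scene_mb(textures, triangles, bytes_per_triangle=100):
        geometry_mb = triangles * bytes_per_triangle / 2**20
        return sum(texture_mb(w, h) for w, h in textures) + geometry_mb

    # e.g. eight 4K maps (skin, normals, etc.), six 2K maps, ~1.5M triangles
    maps = [(4096, 4096)] * 8 + [(2048, 2048)] * 6
    print(f"~{rough_scene_mb(maps, 1_500_000):.0f} MB before render buffers")

    That lands around 750 MB before the HDRI, frame buffer, and Iray's own working memory are counted, which is how a single figure can already push a 2 GB card either way.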

  • Most scenes will not fit on the 2GB video card you have, at least not out of the box. 2GB may or may not fit one figure, with clothing and hair and an HDRI for lighting. (It can go either way.) 

    So does that mean I should have everything checked then?

  • CMKook-24601 Posts: 200
    edited February 2016

    prixat said:

    You didn't actually say what GPU you have.

    I've had no reason to turn off Optix so far.

    Oh sorry, I thought I had. It's an NVIDIA GeForce GTX 750 Ti 2GB GDDR5, but it sounds like I should be rendering with all boxes checked?

    Post edited by CMKook-24601 on
  • DAZ_Spooky Posts: 3,100


    So does that mean I should have everything checked then?

    Keeping in mind it is definitely scene dependent, as a general rule, yes.

  • rock63 Posts: 13
    FirePro9 said:

    It appears to me that IRay rendering speed pretty much boils down to how many CUDA cores you can throw at it.  The parallel processing power of CUDA cores is a perfect match for rendering. 

    Given that I am wanting to find the best bang for the buck (within a $2,000-$3,500 budget), I am thinking that my next computer should be designed initially for a single 980 Ti card but be expandable to support 3-way SLI, with the anticipation of later buying two more 980 Ti cards.  This impacts the initial single-card build a few ways and adds some upfront costs of an extra $300-$500, including:

    • needing a MB supporting 3-way SLI (not very many 4-way boards so I nixed that early on), current thought is Asus X99-Deluxe
    • needing a beefy power supply 1200+ watts
    • and of course a case and cooling to support this much heat and power

    So looking at $/CUDA-Core I believe the 3-way SLI 980Ti solution to be the best bang for your buck.

    Here is brief comparison of Geforce costs per CUDA-Core:

    GTX 980 Ti = 2816 CUDA cores = $600 = $0.21/core

    GTX 980    = 2048 CUDA cores = $480 = $0.23/core

    GTX 970    = 1664 CUDA cores = $300 = $0.18/core

    GTX 960    = 1024 CUDA cores = $180 = $0.18/core

     

    Given that the cost per CUDA core is roughly the same, the option of using one computer to run 3 GTX 980 Ti cards, giving you 8448 CUDA cores, with only one upgraded computer box to buy, seems like my best bet.

    Lastly, using NICSTT’s rendering times posted above, one can estimate the rendering time using a 3-way SLI arrangement of 980Ti cards as follows:

    Card                 970        980Ti      970 & 980Ti   3x 980Ti
    Rend Time (secs.)    575        389        222           ~118
    No. of CUDA cores    1664       2816       4480          8448
    RT x CC              956,800    1,095,424  994,560       ~1,000,000

    (values marked ~ are approximate)

    I may be overlooking something, but it looks to me like the number of CUDA cores is the "core" issue!

    What you are overlooking is the PCIe lanes available from the CPU, which dictate how fast the cards will run. Most mainstream CPUs have 16 lanes, which breaks down to 1 card @ x16 or 2 cards @ x8. After much testing on my 3770K (a 16-lane CPU) with 2 x 980 cards, it was slower rendering with 2 cards running @ x8 than with 1 card @ x16. So after my upgrade to a 5930K CPU (40 PCIe lanes), both my new 980 Ti cards run at x16 each, making renders very fast. With a triple-card setup, forty lanes, and a supported motherboard you would get x16, x16 and x8 on the third card. This essentially means that the third card would not be rendering at its full potential, as it would hit a bottleneck with data bandwidth. Hope this helps. 
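    Putting the two posts above together, here is a minimal sketch of the back-of-the-envelope model implied by the table (render time x CUDA cores roughly constant for one scene). It assumes perfect multi-GPU scaling and ignores the PCIe-lane, clock-speed, and load-balancing effects just described, so treat it as an optimistic bound:

    # Cost-per-core and naive render-time projection from the figures above.
    # Assumes render_time * cores is constant for a given scene and that
    # extra cards scale perfectly, which the PCIe-lane discussion shows
    # is not guaranteed.

    CARDS = {  # CUDA cores, launch-era street price (USD) from the post above
        "GTX 960": (1024, 180),
        "GTX 970": (1664, 300),
        "GTX 980": (2048, 480),
        "GTX 980 Ti": (2816, 600),
    }
    MEASURED_SECONDS = {"GTX 970": 575, "GTX 980 Ti": 389}  # NICSTT's times

    def dollars_per_core(card):
        cores, price = CARDS[card]
        return price / cores

    def projected_seconds(total_cores, reference="GTX 980 Ti"):
        cores, _ = CARDS[reference]
        constant = MEASURED_SECONDS[reference] * cores  # ~1.1 million
        return constant / total_cores

    for card in CARDS:
        print(f"{card}: ${dollars_per_core(card):.2f}/CUDA core")
    print(f"3x 980 Ti (8448 cores): ~{projected_seconds(3 * 2816):.0f} s")

    Using the 980 Ti as the reference point gives roughly 130 seconds for three cards rather than the table's 118, which is the same conclusion within the noise of this kind of estimate.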

     
