DAZ Iray Render Speeds

Comments

  • nicstt Posts: 11,715

The 900-series cards generally need 200-300 watts; it varies by card.

    As an example of rendering:

    I rendered a scene with two figures and a background; I just posted it in the gallery. It took the following times to reach 81% convergence.

    970
    Finished Rendering
    Total Rendering Time: 9 minutes 35.3 seconds

    980ti
    Finished Rendering
    Total Rendering Time: 6 minutes 28.66 seconds

    970 and 980ti
    Finished Rendering
    Total Rendering Time: 3 minutes 41.70 seconds

I also started it on the CPU and let it go for a bit: Total Rendering Time: 11 minutes 33.97 seconds - 19.8% converged.

That gives an idea of the difference. I had a scene the other day that dropped to CPU on the 970; I tried it on the 980 Ti and that didn't have enough memory either, so it took about 150 minutes on the CPU. Ideally I'd like a Titan, but I'm likely to wait until next gen for that, or until I can pick one up cheap.
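    A quick sanity check on how the two cards combine: a minimal sketch, assuming each device's rendering rate simply adds, using the times quoted above.

    ```python
    # Illustrative check of the multi-GPU numbers above: if each device
    # contributes an independent rendering rate, the rates add when the
    # cards work together, so the combined time follows from the
    # individual times. Times in seconds, taken from the post.

    t_970 = 9 * 60 + 35.3      # 575.3 s
    t_980ti = 6 * 60 + 28.66   # 388.66 s

    # combined rate = 1/t_970 + 1/t_980ti  ->  combined time = 1/rate
    t_both = 1 / (1 / t_970 + 1 / t_980ti)

    print(f"predicted 970 + 980 Ti: {t_both:.0f} s")
    # Prints ~232 s; the measured 3 min 41.7 s (221.7 s) is close,
    # so the two cards scale almost perfectly in Iray.
    ```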

Now, with regard to PSUs: if you have a Gold- or Platinum-rated unit, you can be confident that a 600-watt PSU will run a 970 with an i5 or i7 that isn't in the performance range and isn't overclocked; it would probably be OK overclocked too, but that depends on what else is in the system. I'd certainly not rely on guesswork before deciding, though.

  • AndyGrimm Posts: 910
    edited September 2015

@nicstt Thanks for your info. I am surprised that your 970 is just two minutes faster than rendering on your CPU. What are the specifications of your system?

  • Methoz Posts: 19

Hi everyone. I know this may be silly, but I just bought the Acer Aspire M3450-UR30P for a really good price (100). The mobo is an Acer RS880PM-AM. I want to buy a GeForce GTX 750 Ti or GTX 690 (if I can find a 690) to help with the Iray rendering time, because it currently takes 1 to 2.5 hours for an 1100x1400 image with two fully dressed Genesis 2 figures and HDR lighting. I also want to upgrade the RAM to the max first, and I know I'm going to need a new PSU. So please, any help is welcome: I don't have a lot of money and I don't know which GPU to get. If I could reduce the time to 30 minutes that would be nice. A new PC is off the table, and this PC works well for me; I made the jump from a dual core. I've been reading a lot here, there, and everywhere, and there's no really good answer: some say more CUDA cores are better, others VRAM, others the processor. BTW, the GPU I get would be used exclusively for rendering; I'm not planning to attach any screen to it.

These are the full specs:

    Product Description: Acer Aspire M3450-UR30P
    Processor: AMD FX 4100 / 3.6 GHz ( 3.8 GHz ) ( Quad-Core )
    Processor Main Features: AMD Turbo CORE Technology
    Cache Memory: 4 MB L2 cache
    Cache Per Processor: 4 MB
RAM: 12 GB (installed) / 16 GB (max), DDR3 SDRAM, 1333 MHz, PC3-10600
    Hard Drive: 1 TB, standard, Serial ATA-300
    Graphics Controller: ATI Radeon HD 4250
OS Provided: Microsoft Windows 7 Home Premium 64-bit Edition (recently changed to W10)
    Power Provided: 300 Watts
    Motherboard: Acer RS880PM-AM 

PS: I've been researching this since I bought the PC (about two months ago), and I'm ready to throw myself out of my chair! XD
    BTW, I'm not a PC gamer; I have an Xbox and a PS for that.

Thanks in advance for your time.

  • BTLProd Posts: 114
    Methoz said:

Hi everyone. I know this may be silly, but I just bought the Acer Aspire M3450-UR30P for a really good price (100). The mobo is an Acer RS880PM-AM. I want to buy a GeForce GTX 750 Ti or GTX 690 (if I can find a 690) to help with the Iray rendering time [...]

The GTX 690 is two GPUs on one board, so for render purposes only half its video RAM is available. Just a thought.
  • mjc1016 said:

Given how many people (including myself) are going to have to buy new graphics cards to render in DS now, I think Nvidia should be paying Daz ;-).

It is absolutely FALSE that you NEED any sort of video card to render in Studio. Iray will render just fine without being run on an Nvidia card...

I myself had an ancient Nvidia card with hardly any CUDA cores when I wrote that. I went and got something about half as good as what other people were saying Iray needs (all I could afford right now), and I get great results, but I am still struggling with slowness. I am thinking I have to figure out something with the settings now. The card really did make a huge difference, though that might depend on how bad your card was in the first place.

I expressed my original thought with a winking emoticon to clue you in that it's a wry comment, not that I'm saying anybody should have to buy anything. Just to be clear.

  • nicstt Posts: 11,715
    AndyGrimm said:

@nicstt Thanks for your info. I am surprised that your 970 is just two minutes faster than rendering on your CPU. What are the specifications of your system?

Sorry, I missed this. I stopped the CPU render early; see the convergence figure in the quote below.

    "I also started it on the CPU and let it go for a bit: Total Rendering Time: 11 minutes 33.97 seconds - 19.8% converged."

  • fastbike1 Posts: 4,078
    edited October 2015

@Handspan Studios: "... I am still struggling with slowness. I am thinking I have to figure out something with the settings now."

Are you using the default settings, particularly the render settings (see attached)? I have rarely felt the need to change these. HTH. I stop renders well before convergence if they already look largely noise-free.

[Attachment: render set.PNG, 493 x 646]
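    For readers wondering what those settings trade off: Iray renders progressively and stops at whichever limit it reaches first (convergence ratio, max samples, or max time). Below is a minimal sketch of that stopping logic; the function names are illustrative stand-ins, not the actual DAZ Studio or Iray API.

    ```python
    # Sketch of a progressive renderer's stopping rule, as in Iray's
    # Progressive Rendering settings: stop at whichever limit is hit
    # first. render_pass() and converged_fraction() are hypothetical
    # stand-ins, not real DAZ Studio / Iray calls.
    import time

    def progressive_render(render_pass, converged_fraction,
                           max_samples=5000, max_seconds=7200, quality=0.95):
        start = time.time()
        for samples in range(1, max_samples + 1):
            render_pass()                      # one refinement iteration
            if converged_fraction() >= quality:
                return "converged", samples    # e.g. 95% of pixels converged
            if time.time() - start >= max_seconds:
                return "max time", samples
        return "max samples", max_samples

    # Stopping a render early, as described above, just means accepting
    # a lower converged fraction once the image already looks clean.
    ```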
  • Zeddicuss Posts: 167
    nicstt said:

The 900-series cards generally need 200-300 watts; it varies by card.

    [benchmark times and PSU advice quoted above]

This is really interesting. Can I ask what your other PC specs are?

  • nicstt Posts: 11,715
    edited January 2016
risa.kb3 said:

    This is really interesting. Can I ask what your other PC specs are?

Intel i7 4770K (I have an overclocked profile in the BIOS, but I don't usually run overclocked, although I might on occasion). Running overclocked long-term is bad for CPUs if you're constantly pushing them to the max. My bank balance is more important than my ePeen. :)

    16GB RAM (another 16GB would be helpful, but might wait until I do an upgrade)

    4 SSDs

    Corsair AX860i PSU

    Mechanical Keyboard, Good Mouse. 2 x 1440p monitors

A mechanical drive and a NAS for backups. (Can't have too many backups.)

  • Has anyone experimented with the DAZ Studio 4.9 Beta yet (which includes the latest build of Iray)? I am curious to know if there is an improvement in render time.

  • nicstt Posts: 11,715

Has anyone experimented with the DAZ Studio 4.9 Beta yet (which includes the latest build of Iray)? I am curious to know if there is an improvement in render time.

At first I got better times, but in the last two or three builds they were about the same or slightly worse.

    I've stopped using 4.9 and have no immediate intention to upgrade.

  • outrider42 Posts: 3,679

I have an i5-4690K. Would an i7-4790K make any difference in rendering speed? I would be rendering with GPU and CPU. I have a 670 now, which I will be upgrading to Pascal when it releases.

Say, how much faster would a GTX xx70 + 4790K be vs. a GTX xx70 + 4690K for HD Iray renders? Or would it make more sense to save the ~$100 an i7 would cost and go for as beefy a GPU as possible? I plan on getting an xx70 card, so that $100 could maybe go towards an xx80 instead, or perhaps an xx70 with more RAM, as it seems we'll have more RAM choices this time. (I'm predicting there will be as many as four different RAM configurations: 4, 8, 12, and 16 GB; 12 and 16 GB may be exclusive to the xx80 line.) Too many choices!

  • GPU rendering is much faster than CPU rendering, but remember that there are limitations that make GPU rendering unusable in many cases.

    1: The scene, including textures, must fit in GPU memory, otherwise it will not work. Note: two video cards with 4 GB each means you have 4 GB available for rendering, not 8!

    2: If you have only one video card, the UI has to share it with Iray, leaving less memory for rendering (and a slower UI while rendering).

    Each GPU needs its own copy of the scene, so having two video cards does not mean you have double the memory available for rendering.

    Brilliantly stated; I was about to harp on this before folks wasted money on cards.
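    To see how quickly the memory limit bites, here is a rough back-of-the-envelope sketch; the texture sizes and map counts are illustrative assumptions, not measurements.

    ```python
    # Back-of-the-envelope check of the "scene must fit in VRAM" point.
    # An uncompressed texture on the GPU takes roughly
    # width * height * channels * bytes_per_channel, and every card needs
    # its own complete copy of the scene. Map counts here are assumptions.

    def texture_mib(width, height, channels=4, bytes_per_channel=1):
        return width * height * channels * bytes_per_channel / 2**20

    per_map = texture_mib(4096, 4096)    # 64 MiB per uncompressed 4K map
    maps_per_figure = 4                  # e.g. diffuse, bump, specular, normal
    figures = 2

    textures_total = figures * maps_per_figure * per_map
    print(f"textures alone: {textures_total:.0f} MiB")   # ~512 MiB

    # Add geometry, acceleration structures, and the frame buffer, and a
    # 2 GB card fills fast -- at which point Iray drops the scene to CPU.
    ```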

  • Zeddicuss Posts: 167
nicstt said:

    [system specs quoted above]

That's a nice system. I agree about overclocking the CPU: for GPU rendering there is little to no benefit, and you run the risk of causing issues / potentially invalidating the warranty.

Did you use OptiX Prime Acceleration for those benchmark times you posted?

  • nDelphi Posts: 1,869
    nicstt said:

The 900-series cards generally need 200-300 watts; it varies by card.

    I looked into the 900 series wattages.

    My ZOTAC GTX 960/4 GB uses a max of 120 watts.

The 970 uses about 145 watts and the 980 about 165 watts, according to the specs on NVIDIA's site.

    The Tis and the Titans need about 250 watts, and the Titan Z about 375 watts. Yikes!

    You can run a 960 on a 400-watt PSU. I am running mine with a Sentey 750 W, 80 Plus Bronze, modular PSU. A bit overkill, but I didn't investigate this much prior to purchasing my card.
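    As a rough way to size a PSU from these figures, a minimal sketch: the GPU numbers are the spec figures quoted above, while the CPU draw, rest-of-system draw, and 30% headroom factor are illustrative assumptions.

    ```python
    # Rough PSU sizing from the card wattages quoted above. GPU figures
    # are the NVIDIA spec numbers from the post; the CPU draw, the
    # rest-of-system draw, and the 30% headroom factor are assumptions.

    gpu_watts = {"GTX 960": 120, "GTX 970": 145, "GTX 980": 165,
                 "GTX 980 Ti": 250, "Titan Z": 375}

    def suggested_psu(gpu, cpu_w=90, rest_w=75, headroom=1.3):
        peak = gpu_watts[gpu] + cpu_w + rest_w
        return peak, peak * headroom

    peak, psu = suggested_psu("GTX 960")
    print(f"peak ~{peak} W, suggested PSU >= {psu:.0f} W")
    # ~285 W peak, ~370 W suggested -- consistent with running a 960
    # on a quality 400-watt unit, as stated above.
    ```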

  • nicstt Posts: 11,715
    edited January 2016
risa.kb3 said:

    That's a nice system. I agree about overclocking the CPU: for GPU rendering there is little to no benefit, and you run the risk of causing issues / potentially invalidating the warranty.

    Did you use OptiX Prime Acceleration for those benchmark times you posted?

I never use it. I've found in my tests that at best it makes small improvements, but generally little to none, or it's slightly worse.

    nDelphi said:
I looked into the 900 series wattages. [...] You can run a 960 on a 400-watt PSU.

Not sure when I posted that. :) September; the 970 was the most power-friendly card back then, IIRC, and admittedly that's less than 200 watts, but always allow some spare. Plus there was my qualifier of "generally".

  • FirePro9 Posts: 456

It appears to me that Iray rendering speed pretty much boils down to how many CUDA cores you can throw at it. The parallel processing power of CUDA cores is a perfect match for rendering.

Given that I want to find the best bang for the buck (within a $2,000-$3,500 budget), I am thinking that my next computer should be designed initially for a single 980 Ti card but be expandable to 3-way SLI, with the anticipation of later buying two more 980 Ti cards. This impacts the initial single-card build in a few ways, with upfront costs of an extra $300-$500, including:

• needing a motherboard supporting 3-way SLI (there are not many 4-way boards, so I nixed that early on); current thought is the Asus X99-Deluxe
    • needing a beefy power supply, 1200+ watts
    • and of course a case and cooling to support this much heat and power

So looking at $/CUDA core, I believe the 3-way SLI 980 Ti solution is the best bang for the buck.

Here is a brief comparison of GeForce costs per CUDA core:

    GTX 980 Ti = 2816 CUDA cores = $600 = $0.21/core

    GTX 980    = 2048 CUDA cores = $480 = $0.23/core

    GTX 970    = 1664 CUDA cores = $300 = $0.18/core

    GTX 960    = 1024 CUDA cores = $180 = $0.18/core

Given that the cost per CUDA core is roughly the same across cards, the option of using one computer to run three GTX 980 Ti cards, giving 8448 CUDA cores with only one upgraded computer to buy, seems like my best bet.

Lastly, using nicstt's rendering times posted above, one can estimate the rendering time of a 3-way SLI arrangement of 980 Ti cards as follows:

Card           Rend. time (s)   CUDA cores    RT x CC
    970            575              1664           956,800
    980 Ti         389              2816         1,095,424
    970 & 980 Ti   222              4480           994,560
    3x 980 Ti      ~118 (est.)      8448        ~1,000,000

    (~ = estimated)

I may be overlooking something, but it looks to me like the number of CUDA cores is the "core" issue!
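    The same estimate in a few lines of code: an illustrative sketch using the measured times above, which assumes render time scales inversely with core count (this only holds within one GPU generation).

    ```python
    # The table above observes that render_time * cuda_cores is roughly
    # constant for this scene (~1e6 core-seconds), i.e. time scales
    # inversely with core count. The same estimate in code, using the
    # measured times quoted earlier:

    measured = {                 # cores, seconds
        "970":          (1664, 575),
        "980 Ti":       (2816, 389),
        "970 + 980 Ti": (4480, 222),
    }

    # Average core-seconds over the measured configurations
    k = sum(c * s for c, s in measured.values()) / len(measured)

    cores = 3 * 2816             # three 980 Ti cards
    print(f"k ~ {k:,.0f} core-seconds")
    print(f"predicted 3x 980 Ti: {k / cores:.0f} s")   # ~120 s

    # Caveat: this ignores clock speed, architecture, and VRAM limits,
    # so it only holds across cards of the same generation.
    ```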

  • mjc1016 Posts: 15,001
    FirePro9 said:

It appears to me that Iray rendering speed pretty much boils down to how many CUDA cores you can throw at it. [...]

Actually... if you can wait a few months, it may be better to hold out for the next-generation (Pascal) cards, as they should have at least similar core counts but more memory, and/or there should be a price drop on the current-generation cards.

  • FirePro9 Posts: 456
mjc1016 said:

    Actually... if you can wait a few months, it may be better to hold out for the next-generation (Pascal) cards, as they should have at least similar core counts but more memory, and/or there should be a price drop on the current-generation cards.

Thanks, mjc1016, for the heads-up on the Pascal boards; I had not heard of them yet. I couldn't find specs on the number of CUDA cores. Nothing seems to change faster in the PC space than video cards; Pascal looks like a nice improvement, and I will be interested to see what impact these cards have on rendering times. My thoughts were somewhat in line with your idea: build a single-card box now, then hope prices drop on the second and third video cards a little later.

  • Zeddicuss Posts: 167
    FirePro9 said:

    It appears to me that IRay rendering speed pretty much boils down to how many CUDA cores you can throw at it.  The parallel processing power of CUDA cores is a perfect match for rendering. 

    Given that I am wanting to find the best bang for the buck (within a $2000 - $3,500 budget) I am thinking that my next computer should be designed initially for a 980Ti card but be expandable to support 3-way SLI, with the anticipation of later buying two more 980Ti cards.  This impacts the initial single card build a few ways and with some upfront costs of an extra $300-$500, including:

    • needing a MB supporting 3-way SLI (not very many 4-way boards so I nixed that early on), current thought is Asus X99-Deluxe
    • needing a beefy power supply 1200+ watts
    • and of course a case and cooling to support this much heat and power

    So looking at $/CUDA-Core I believe the 3-way SLI 980Ti solution to be the best bang for your buck.

    Here is brief comparison of Geforce costs per CUDA-Core:

    GTX 980Ti = 2816 Cuda Cores = $600 = $0.21/core

    GTX 980   = 2048 Cuda Cores = $480 = $0.23/core

    GTX 970   = 1664 Cuda Cores = $300 = $0.19/core

    GTX 960   = 1024 Cuda Cores = $180 = $0.18/core

     

    Given the cost per CUDA Core is roughly the same, then the option of using one computer to run 3-GTX980Ti cards, giving you 8448 CUDA cores, and only 1-upgraded computer box to buy, seems like my best bet.

    Lastly, using NICSTT’s rendering times posted above, one can estimate the rendering time using a 3-way SLI arrangement of 980Ti cards as follows:

    Card

    970

    980Ti

    970 & 980Ti

    3-980Ti

    Rend Time (secs.)

    575

    389

    222

    118

    No. Of CUDAs

    1664

    2816

    4480

    8448

    RT x CC =

           956,800

           1,095,424

               994,560

            1,000,000

    (bold text = approx..)

    I may be overlooking something but looks to me that number of CUDAs is the “core” issue!

I agree that the 980 Ti is the way to go. Dollar-for-dollar it has better GPU power than the 980, and importantly it has 6 GB of VRAM, meaning it can cope with larger scenes or more characters in the render. Something else to factor in is clock speed; rendering speed is not exclusively about CUDA cores.

    Also, be sure to get a CPU that supports three GPUs. The 5820K has 28 PCIe lanes, which means it will run three GPUs at x8 (24 lanes total) and leave 4 lanes for other PCIe devices. If you get the 5930K, you get 40 PCIe lanes, which means two cards running at x16 and one at x8. After doing a fair bit of research, it seems some people have noticed that running a GPU at x16 is faster than x8. This is not so relevant for gaming, where benchmarks suggest there is little difference, but for rendering people have reported a noticeable difference. If you are going to fork out over $2,500 on GPUs, maybe it's worth getting the 40-lane CPU; see the lane-budget sketch below.
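    A quick lane-budget sketch of the two CPUs mentioned: the lane counts are the published figures, but the greedy allocation is a simplification, since real boards route lanes through fixed slot layouts.

    ```python
    # Lane-budget sketch for the CPUs mentioned above. Lane counts are
    # the published figures; the greedy allocation is a simplification,
    # since real boards route lanes through fixed slot layouts.

    cpu_lanes = {"i7-5820K": 28, "i7-5930K": 40}

    def allocate(cpu, gpus):
        lanes = cpu_lanes[cpu]
        widths = []
        for i in range(gpus):
            remaining = gpus - i - 1
            # Grant x16 if the leftover budget still covers x8 for every
            # remaining card; otherwise fall back to x8.
            width = 16 if lanes - 16 >= 8 * remaining else 8
            widths.append(width)
            lanes -= width
        return widths, lanes

    print(allocate("i7-5820K", 3))   # ([8, 8, 8], 4)   -> 3 cards at x8
    print(allocate("i7-5930K", 3))   # ([16, 16, 8], 0) -> x16 / x16 / x8
    ```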

  • FirePro9 Posts: 456

risa.kb3, you make an interesting point; I would not have looked at the CPU's PCIe lanes. I was in fact looking at the 5820K, but now I have to research the 5930K, which is another $150 in added cost. I did see some note that even PCIe 3.0 vs. 2.0 is not significant (maybe referring to gaming again). Pretty confusing indeed, and I try to stay up on this stuff. More research is needed to figure that one out. Thanks for the info!

  • Zeddicuss Posts: 167

I thought the same as you: the benchmarks clearly show that PCIe 3.0 at x8 (which is equal to PCIe 2.0 at x16) should be more than enough not to bottleneck the GPU. But I read a few posts where people said they saw noticeable render speed increases when upgrading from a decent 16-lane CPU to the 5930K (both rendering GPU-only). It's frustrating that there is not more benchmarking to prove whether this is accurate. I also found a Daz article about picking a GPU that says it is best to make sure your GPUs are running at x16 to get the maximum benefit; whether that's with PCIe 2.0 or 3.0 in mind, I don't know.

  • outrider42 Posts: 3,679
FirePro9 said:

    Thanks, mjc1016, for the heads-up on the Pascal boards; I had not heard of them yet. I couldn't find specs on the number of CUDA cores. [...]

The Pascal GPUs have a new type of RAM which will allow them to stack more memory on the card. Nvidia has already stated that consumer versions could feature as much as 16 GB, while workstation cards could feature up to 32 GB. So yes, it could be well worth waiting to see what options they have. I expect to see 4, 8, 12, and 16 GB options available in time, with 8 GB being the new "normal". Maybe not all at once, though, because there may be a shortage of HBM2 RAM chips for a while, so there will be a high premium on a 16 GB model when it does ship.

    You can be certain that the series will offer at least a few more CUDA cores as well.

  • DAZ_Spooky Posts: 3,100
outrider42 said:

    The Pascal GPUs have a new type of RAM which will allow them to stack more memory on the card. [...] You can be certain that the series will offer at least a few more CUDA cores as well.

There is a second advantage to Pascal cards: NVLink, which supposedly replaces PCIe 3.0 for video cards and allows enough bandwidth for the video cards to use motherboard RAM for CUDA tasks.

  • hphoenix Posts: 1,335
DAZ_Spooky said:

    There is a second advantage to Pascal cards: NVLink, which supposedly replaces PCIe 3.0 for video cards and allows enough bandwidth for the video cards to use motherboard RAM for CUDA tasks.

I'm not sure where you got the idea that NVLink is replacing PCIe 3.0. NVLink is the replacement for SLI, which does take inter-card communication off the PCIe bus, but the cards are still PCIe 3.0 cards.

  • DAZ_Spooky Posts: 3,100
    edited January 2016
hphoenix said:

    I'm not sure where you got the idea that NVLink is replacing PCIe 3.0. NVLink is the replacement for SLI, which does take inter-card communication off the PCIe bus, but the cards are still PCIe 3.0 cards.

    http://www.nvidia.com/object/nvlink.html and http://blogs.nvidia.com/blog/2014/11/14/what-is-nvlink/

  • hphoenix Posts: 1,335
    edited January 2016
DAZ_Spooky said:

    http://www.nvidia.com/object/nvlink.html and http://blogs.nvidia.com/blog/2014/11/14/what-is-nvlink/

Ah. I read the linked whitepaper (the link is on the first page you cited). While there are plans to put NVLink connections on the backplanes of the big exascale supercomputers, consumer plans for having NVLink on motherboards are considerably further in the future. All current designs are based on PCIe 3.0 for the CPU-to-GPU connection; NVLink is GPU-to-GPU only. To quote from the whitepaper:

    "In the following sections of this paper, we analyze the performance benefit of NVLink for several algorithms and applications by comparing model systems based on PCIe-interconnected next-gen GPUs to otherwise-identical systems with NVLink-interconnected GPUs. GPUs are connected to the CPU using existing PCIe connections, but the NVLink configurations augment this with interconnections among the GPUs for peer-to-peer communication. The following analyses assume future-generation GPUs with performance higher than that of today’s GPUs, so as to better correspond with the GPUs that will be contemporary with NVLink."
  • DAZ_Spooky Posts: 3,100
hphoenix said:

    Ah. I read the linked whitepaper (the link is on the first page you cited). While there are plans to put NVLink connections on the backplanes of the big exascale supercomputers, consumer plans for having NVLink on motherboards are considerably further in the future. [...] NVLink is GPU-to-GPU only.

"In the future" is definitely subjective. LOL. Especially with computer architecture. :)

  • hphoenix Posts: 1,335
DAZ_Spooky said:

    "In the future" is definitely subjective. LOL. Especially with computer architecture. :)

True. However, based on past history, the bus design for standard x86/x64 motherboards is pretty slow to change. ISA was put out in 1984, followed by EISA in 1988 (which was still backwards compatible). VLB (VESA Local Bus) arrived in 1992 and worked alongside EISA. PCI came out in 1992 at v1.0 (and went up to v3.0 in 2004). AGP came out in 1997 and PCI-X in 1998; both existed alongside standard PCI. PCI-E started in 2004, with v2.0 released in 2007 and v3.0 in 2010.

    So I would expect to see NVLink appearing on a few high-end motherboards within a couple of years, alongside existing PCI-E slots. Significant adoption will be slower, since it is tied to nVidia (and AMD will come up with its own equivalent), whereas PCI-E is an open standard. And PCI-E 4.0 is expected to be finalized in 2017 (it is backwards compatible with 3.0 and doubles the bandwidth).

    NVLink is a great idea, but a more open standard will likely appeal more to the board manufacturers, as it simplifies construction and doesn't alienate half the market. The current major board manufacturers probably dislike the whole SLI/Crossfire dichotomy that nVidia/ATI have forced into existence, as it requires them to design and build two separate motherboards to accommodate both protocols. A few have tried to support both on the same board, but that has usually not worked as well as dedicated support.
