Iray speedup with dual cards: 2x 3080 or 1x 3090?

So... assuming a situation where I am not memory constrained: what is faster, a single 3090 card, or two 3080 cards? It comes to roughly the same price... Two 3080s have more CUDA cores than a single 3090 (~16000 vs ~10000), but I have no idea how well Iray can use two cards versus one. I wonder if more CUDA cores for computation give me more benefit than more memory?

Does anybody here have any expectations about that, for example based on measured speedup in Iray with two 2080s or 2080 Tis versus one?

Thanks!

    CS

Comments

  • Just found a V-Ray benchmark that compares single- and dual-card configurations... in it, a dual 2080 Ti is pretty much exactly twice as fast as a single 2080 Ti. Has anybody tested this with Iray?

    https://www.pugetsystems.com/labs/articles/V-Ray-NVIDIA-GeForce-RTX-2070-2080-2080-Ti-GPU-Rendering-Performance-1242/

    The speed doubling is also seen for pairs of 2070s and 2080s with V-Ray in the same benchmark.

    On the other hand, the speedup when moving from a single 2080 to a 2080 Ti is much smaller in the benchmarks - only about 30%.

    I'd love to see whether anybody has run a similar test with Iray...

    Also, for rendering (as opposed to gaming), does it make a difference to have dual cards connected with NVLink, or just as separate compute resources?

  • TheKD Posts: 2,691
    edited September 2020
    2080 Super x2 (xionis): 2 minutes 35.43 seconds
    Titan x1 (RayDAnt): 3 minutes 49.13 seconds


    That's probably the best answer you can get until they are released and benchmarked.


  • ??? It's not a choice!

    2 x 3080 is not useful (not for the memory, anyway).

    If you go to the Nvidia site, click "Learn more" at the top for the 3000-series "Ultimate Play" info (the first moving box, for now).

    Scroll down about 60% of the page to the spec section and click "Compare Specs".

    Then scroll down to NVLink and you'll see that it's not supported on the 3080 or the 3070.

    So yes, even if the cards would work and the CUDA cores would compute like mad beasts, they will not speak to each other, leaving your CPU doing all the sync work and each card needing its own memory for everything in the scene, duplicated. I'm not saying it would not be faster than one 3080, but still.

    Even if Nvidia optimizes the compute later through a driver update, you would never benefit from a local NVSwitch.

    So Nvidia gently removed that choice from anybody wondering about it. You want to render? It's a Titan, and for now that's the 3090. The most powerful beast ever created.


  • TheKD Posts: 2,691

    Yeah, but the OP wasn't asking about VRAM, they are only asking about speed. Granted, I don't do scenes with big crowds, usually only a few people, but I rarely get the drop to CPU since I went from a 1070 main and 960 secondary to a 2080 Super main and 1070 secondary. On the rare occasion I do, I just run the optimizer on some background textures, and that's usually enough to eke by and fit in VRAM. I used to have to run the optimizer always.

  • Rauko Posts: 32

    Nvidia have left themselves some wriggle room... so once AMD announce Big Navi, which "could" be faster than the 3080 rasterization-wise, I would expect a 20 GB 3080, or maybe a 12 GB 3090... maybe even a 3080 Ti...


  •  

    Then scroll down to NVLink and you'll see that it's not supported on the 3080 or the 3070.

    To the best of my understanding, you actually don't need NVLink or any form of SLI for rendering - in fact, it seems to be recommended to switch it off for rendering speed. NVLink gives you a larger memory space shared between the CUDA/RT processor cores.

    As far as I know, for raw rendering speed the CUDA cores are just used as coprocessors, each doing its own independent part of the calculations. It would make sense that SLI provides no advantage for that.
    As the GPUs don't have to jointly write to the screen (as in normal graphics mode), the two cards don't need to be synchronized either. This is why you can use two different cards for Iray. Each card has its own copy of the data in VRAM, and as long as the memory is sufficient, there is no need for the interlink (see the sketch at the end of this post). This is borne out by the benchmark link I posted earlier - as far as I can tell, those are all non-SLI configurations. I also found some more benchmarks for Octane that showed the same - non-SLI configurations scaling pretty much linearly. I just can't find a benchmark showing that for Iray :)

    Anyways, I currently have an 11 GB card, so not much larger than the 10 GB of the 3080. And for the work I do, so far I have not had problems with VRAM (fingers crossed).

    So far my conclusion is that for smaller scenes, two 3080s will probably be faster than a single 3090, if Iray scales as well as V-Ray and Octane appear to. For large scenes, the 3090 should take the lead.
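
    A minimal sketch of that idea (a toy CPU stand-in with invented numbers - not actual Iray or CUDA code): each "device" holds its own full duplicate of the scene, renders its own batch of samples with its own random stream, and nothing is combined until a final average on the host. That is why per-card VRAM is duplicated, and why no interlink is needed for raw speed:

    ```python
    import numpy as np

    def render_batch(scene, n_samples, seed):
        # Stand-in for one GPU path-tracing n_samples of the scene.
        # Each "device" works from its own full scene copy and its own
        # RNG stream - no cross-device communication at all.
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, 0.1, size=(n_samples,) + scene.shape)
        return scene + noise.mean(axis=0)  # average of noisy samples

    scene = np.full((4, 4, 3), 0.5)  # the "true" image, duplicated per card

    # Two cards each render half of the total samples independently...
    img_a = render_batch(scene.copy(), 512, seed=0)
    img_b = render_batch(scene.copy(), 512, seed=1)

    # ...and the host just averages the results - no NVLink involved.
    final = (img_a + img_b) / 2.0
    print(abs(final - scene).mean())  # error shrinks as samples grow
    ```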


  • @TheKD, thanks, that is good data to see. Would you also, by any chance, happen to have a result from running just one of the 2080 Supers, to see how Iray scales from one card to two? :)

    Thanks so much!

  • TheKD Posts: 2,691
    2080 Super x1 (TheKD): 4 minutes 56.54 seconds
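  • Putting TheKD's single- and dual-card times together (and assuming both runs used the same benchmark scene - a guess on my part), a quick back-of-the-envelope check suggests Iray scales almost linearly from one 2080 Super to two:

    ```python
    # Scaling check from the times posted above
    # (assumes both runs rendered the same benchmark scene).
    single = 4 * 60 + 56.54  # one 2080 Super, in seconds
    dual = 2 * 60 + 35.43    # two 2080 Supers, in seconds

    speedup = single / dual     # ~1.91x
    efficiency = speedup / 2.0  # ~95% of ideal 2x scaling
    print(f"{speedup:.2f}x speedup, {efficiency:.0%} efficiency")
    ```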
  • This is inductive logic, but it has always held for me in the past:

    Going from 1 x 1080 Ti to 2 x 1080 Ti roughly doubled the speed.

    Going from 1 x 2080 Ti to 2 x 2080 Ti roughly doubled the speed.

    Going from 2 x 2080 Ti to 4 x 2080 Ti roughly doubled the speed again.

    So I suspect that if you don't care about the 24 GB framebuffer of the 3090, 2 x 3080 would be substantially faster than 1 x 3090.

    But as Ampere exceeded everyone's expectations, I'm pretty sure actual benchmark results will flood into RayDAnt's benchmark page as people giddily discover what they can do with it :)


  • @TheMysteryIsThePoint - thanks for the hint re. RayDAnt's benchmark page - I had not seen that before. Awesome!

    https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking#Section 2, in case someone is looking for it.

  • nicstt Posts: 11,715

    TheMysteryIsThePoint said:

    This is inductive logic, but it has always held for me in the past:

    Going from 1 x 1080 Ti to 2 x 1080 Ti roughly doubled the speed.

    Going from 1 x 2080 Ti to 2 x 2080 Ti roughly doubled the speed.

    Going from 2 x 2080 Ti to 4 x 2080 Ti roughly doubled the speed again.

    So I suspect that if you don't care about the 24 GB framebuffer of the 3090, 2 x 3080 would be substantially faster than 1 x 3090.

    But as Ampere exceeded everyone's expectations, I'm pretty sure actual benchmark results will flood into RayDAnt's benchmark page as people giddily discover what they can do with it :)


    When reading these comments, please consider that the specs and claimed results exceeded everyone's expectations. We will have to see how exactly this pans out.

    Having highlighted that, I'm considering upgrading my 980 Ti to a 3090. I'll still render in Blender, but maybe I won't find Iray as much of a resource hog.

  • So Nvidia gently removed that choice from anybody wondering about it. You want to render? It's a Titan, and for now that's the 3090. The most powerful beast ever created.

    I don't think Iray works like that. It's not like gaming: you can throw samples at two independent copies of the scene and then merge them later. That's why I was able to get a speedup with an RTX 2070 and a GTX 970 - the 970 added a few samples. No NVLink or SLI needed (see the sketch below).

    Now, I wouldn't bother with two cards, to be honest. The extra heat in the case drops both cards' and the CPU's clock speeds by a not-insignificant amount. I would go for one 3090 just because it's cleaner. But it depends on the rest of your setup.
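
    A toy illustration of that merge (invented buffers and sample counts - not Iray's internals): each card accumulates whatever samples it manages, and the final image is a sample-count-weighted average, which is why a slow 970 can still contribute alongside a 2070:

    ```python
    import numpy as np

    # Pretend per-card accumulation buffers: (sum of sample values, count).
    # The fast card (say, a 2070) finished 900 samples per pixel; the slow
    # card (a 970) managed only 100 - numbers invented for illustration.
    fast_sum, fast_n = np.random.rand(4, 4, 3) * 900, 900
    slow_sum, slow_n = np.random.rand(4, 4, 3) * 100, 100

    # Sample-count-weighted merge: equivalent to having traced all
    # 1000 samples on one card.
    merged = (fast_sum + slow_sum) / (fast_n + slow_n)
    print(merged.shape)  # (4, 4, 3)
    ```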

  • nicstt Posts: 11,715

    ??? It's not a choice!

    2 x 3080 is not useful (not for the memory, anyway).

    If you go to the Nvidia site, click "Learn more" at the top for the 3000-series "Ultimate Play" info (the first moving box, for now).

    Scroll down about 60% of the page to the spec section and click "Compare Specs".

    Then scroll down to NVLink and you'll see that it's not supported on the 3080 or the 3070.

    So yes, even if the cards would work and the CUDA cores would compute like mad beasts, they will not speak to each other, leaving your CPU doing all the sync work and each card needing its own memory for everything in the scene, duplicated. I'm not saying it would not be faster than one 3080, but still.

    Even if Nvidia optimizes the compute later through a driver update, you would never benefit from a local NVSwitch.

    So Nvidia gently removed that choice from anybody wondering about it. You want to render? It's a Titan, and for now that's the 3090. The most powerful beast ever created.


    Not useful to you; others may have a different perspective.

    I'm expecting to get a 3090, purely because I don't want to use only one card for rendering; my other is used for display (3 monitors). I also want as much VRAM as possible.

    I thought Nvidia stated the 3090 to be the new Titan, or am I misremembering? Other than the name, it seems to share all the characteristics.

  • prixat Posts: 1,588
    edited September 2020
    nicstt said:

    I thought Nvidia stated the 3090 to be the new Titan, or am I misremembering? Other than the name, it seems to share all the characteristics.

    That's how I understand it. The 3090 is the newer, faster Titan with a $1000 price reduction!

  • nicstt Posts: 11,715
    edited September 2020
    prixat said:
    nicstt said:

    I thought Nvidia stated the 3090 to be the new Titan, or am I misremembering? Other than the name, it seems to share all the characteristics.

    That's how I understand it. The 3090 is the newer, faster Titan with a $1000 price reduction!

    I also thought I remembered them saying the 3080 was the new Ti, meant to be the flagship gaming card; of course, releasing a card to replace the Titan but giving it a consumer brand makes that card the flagship consumer card - Titan replacement or not.

    However you look at it, though, it's the only card to actually come down in price, whereas the rest are at the same exorbitant prices as the 2000 series (gamers not buying many of them is a good indication that they agree with my opinion). I really wish folks would stop congratulating Nvidia for returning to sensible pricing - they haven't; they may merely be giving better value.

    I emphasize may - it looks promising - but let's wait and see what the results are.

    I personally have no qualms about buying a 3090, as it's a Titan for less cash - which is what I was going to get anyway.

  • I think you should wait a bit longer before making a purchase; there is a 'theory' that Nvidia is going to release a 3080 Ti (the price jump between the 3080 and 3090 is WAY too much), probably at $999 with the same CUDA core count as the 3090 but less VRAM (just like with the 2080 Ti and Titan RTX).
    This is all of course assuming that you don't need the 24 GB of VRAM that the 3090 provides.

  • nicstt Posts: 11,715

    The only card I'm considering is the 3090; I have the cash available for a Titan, but had decided to wait and see what was happening with the 3000 series. Basically, Nvidia have knocked a grand off the price of the card I was planning on. I'm just wondering what AMD are going to do; as I render in Blender, I'm not tied to Nvidia, which I like a lot.

  • Keiron Posts: 413
    edited September 2020

    Hi

    It's an interesting concept, as 2 x 3080 would give you loads of CUDA cores, but I'm not sure that Daz Studio would use all of them, so I'll wait and see.

    The AMD cards at the moment don't have CUDA, so I am having to use the CPU, which even with 12 cores is still slow.

    Watch this space I think

  • PerttiA Posts: 10,024
    Parsa1999 said:

    I think you should wait a bit longer before making a purchase; there is a 'theory' that Nvidia is going to release a 3080 Ti (the price jump between the 3080 and 3090 is WAY too much), probably at $999 with the same CUDA core count as the 3090 but less VRAM (just like with the 2080 Ti and Titan RTX).
    This is all of course assuming that you don't need the 24 GB of VRAM that the 3090 provides.

    Yes, the first 3080 is clearly meant just for the impatient ones who are willing to pay the premium (and I don't mean just money) for getting the new cards first, but versions with more VRAM are coming for sure, as well as ones with NVLink - it doesn't make sense that the only version with NVLink is the one that needs it the least.

    I can understand not bringing NVLink to the 3070, but there is bound to be a 3080 with it, maybe for Xmas...

  • i53570k Posts: 212

    So few games support SLI nowadays, and most NVLink users are likely people who buy consumer-grade cards to do non-gaming tasks. Nvidia probably saw the sales figures for the NVLink bridge in the 2000 era, decided to drop it altogether for gamers, and is instead looking to move those Super NVLink buyers upmarket to the 3090.

  • PerttiA Posts: 10,024
    i53570k said:

    So few games support SLI nowadays, and most NVLink users are likely people who buy consumer-grade cards to do non-gaming tasks. Nvidia probably saw the sales figures for the NVLink bridge in the 2000 era, decided to drop it altogether for gamers, and is instead looking to move those Super NVLink buyers upmarket to the 3090.

    Typical marketing baboons... In my 30-year professional career, I have met just one person in marketing who was capable of rational thinking, and I suspect he was a pretender...

    NVLink would expand the user's upgrade options, meaning: I'll buy one now and, budget permitting, get another one this time next year along with the NVLink module, instead of having to spend 2-3 times as much to replace the one I just bought for about the same increase in VRAM and performance.

    Limiting the upgrade paths makes users search for other options; it doesn't mean they will bow their heads and just buy the more expensive one.

  • nicstt Posts: 11,715
    edited September 2020
    PerttiA said:
    Parsa1999 said:

    I think you should wait a bit longer before making a purchase; there is a 'theory' that Nvidia is going to release a 3080 Ti (the price jump between the 3080 and 3090 is WAY too much), probably at $999 with the same CUDA core count as the 3090 but less VRAM (just like with the 2080 Ti and Titan RTX).
    This is all of course assuming that you don't need the 24 GB of VRAM that the 3090 provides.

    Yes, the first 3080 is clearly meant just for the impatient ones who are willing to pay the premium (and I don't mean just money) for getting the new cards first, but versions with more VRAM are coming for sure, as well as ones with NVLink - it doesn't make sense that the only version with NVLink is the one that needs it the least.

    I can understand not bringing NVLink to the 3070, but there is bound to be a 3080 with it, maybe for Xmas...

    I don't know any more than you do; I'm not as certain as you seem to be, but my feeling is that there won't be any cards other than the 3090 that will utilise NVLink - unless, I'll add, they release another 80 Ti or something to replace it, whatever they decide to call it.

    I keep thinking back to what was said during the Nvidia presentation: the 3080 is the new Ti version, so what actually appears will depend on what AMD have to offer.

  • rrward Posts: 556
    Keiron said:

    Hi

    It's an interesting concept, as 2 x 3080 would give you loads of CUDA cores, but I'm not sure that Daz Studio would use all of them, so I'll wait and see.

    The AMD cards at the moment don't have CUDA, so I am having to use the CPU, which even with 12 cores is still slow.

    Watch this space I think

    I used to have three 1080 Tis in my system; Iray used them fully. I upgraded to two 2080 Tis and they are running fine. The GPU limit for Studio is far higher than the available PCIe slots in any computer, so the number of cards is not an issue.

    AMD will never have CUDA; CUDA is Nvidia-only.

  • Here is my take: get the 3070 Super with 11 GB VRAM. Either way, you are going to have to wait for Nvidia 3000-series support in Daz Iray; not sure how long - could be months, could be longer. If nothing else, use it with Blender, where support arrives faster.

  • nicsttnicstt Posts: 11,715

    When was this announced?


    Here is my take: get the 3070 Super with 11 GB VRAM. Either way, you are going to have to wait for Nvidia 3000-series support in Daz Iray; not sure how long - could be months, could be longer. If nothing else, use it with Blender, where support arrives faster.


  • Gator Posts: 1,294
    edited September 2020
    We'll have to see how much difference there is when they are released. The 3090 does have higher memory throughput, but I doubt that will be enough to outweigh the CUDA core advantage of two 3080s (a rough core-count comparison follows below).

    Like nicstt, I'm looking forward to the 3090. It really depends on what you're using it for. With many of the scenes from the store, throw in 2-3+ Gen 3 (and above) figures rendering at 4K and I've used up the VRAM of the 1080 Ti, even the 12 GB Titan. I'll happily give up some speed and take the single 3090 so I don't have to worry about VRAM in Daz for a while. :) Also, if you game on the side, SLI/multi-card support seems to be fading; in too many games I have a card pretty much sitting idle. :|
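
    For a rough sense of scale (using Nvidia's published core counts, and optimistically assuming that throughput tracks CUDA cores at the ~95% dual-card efficiency seen earlier in the thread - both simplifications):

    ```python
    # Back-of-the-envelope only; ignores clocks, memory bandwidth,
    # and architectural differences entirely.
    cores_3080 = 8704
    cores_3090 = 10496
    dual_efficiency = 0.95  # from the 2080 Super numbers above

    ratio = (2 * cores_3080 * dual_efficiency) / cores_3090
    print(f"{ratio:.2f}x")  # ~1.58x: dual 3080s look notably faster, if it scales
    ```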
  • Hi,

    The 3090 has 24 GB of memory; this is an advantage for big scenes in comparison with the 3080 or 3070, which have much less memory, even if you combine them.

    The cards' raytracing performance seems to add up pretty well; you can see this in the Octane Render benchmark list:

    https://render.otoy.com/octanebench/results.php?v=2020.1.5&sort_by=avg&scale_by=linear&filter=3080&singleGPU=0&showRTXOff=0

    And for raw rendering, independent of memory requirements, the 3080 Ti is by now pretty close to the 3090.

    If you work with big scenes, you should go for the card with more memory.

    But what's your experience? I have the 3090, a Core i7 quad-core CPU, 128 GB of RAM, and a very fast SSD, but loading a figure into a scene still takes more than a minute, and when using Iray, scenes with more than one character are very slow when trying to e.g. zoom in. Daz 3D is not very good for editing, though it produces great rendering results. I wonder how this can be improved.

  • PerttiA Posts: 10,024

    This thread is two years old, and the practical requirements for the GPU have changed quite a lot during that time.
