Can I successfully use two GPUs that are NON-SLI?
Right now I have a 2080 Ti, 32 GB of RAM, and an i9 9900K. Renders are ridiculously fast. I cannot afford another 2080 Ti but would love to go and grab a 2070.
I've heard people say that two non-SLI cards actually do improve renders, but that SLI can hinder the render since it's not recommended.
Any thoughts?
Comments
Iray treats each GPU as a separate entity; it won't use SLI (there is some thought that it might use NVLink to do memory pooling on the 20-series cards, but I don't remember seeing any true confirmation of this). By default Iray will start with both cards, and if the scene fits in GPU memory they'll both run to the end; if the scene runs out of memory on the 2070, that card will be dropped and the render will just run on the 2080 Ti.
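To picture that fit-or-drop behaviour, here's a minimal Python sketch; it's purely illustrative (the function, card list, and sizes are made up, not the real Iray API):

```python
# Illustrative only: Iray's actual device handling is internal. This just
# models the "every card that fits the whole scene renders" rule above.
def pick_render_devices(scene_size_gb, gpus):
    """Each GPU that can hold the whole scene renders; the rest are dropped."""
    active = [g["name"] for g in gpus if g["vram_gb"] >= scene_size_gb]
    return active or ["CPU fallback"]   # no card fits: render on the CPU

gpus = [{"name": "RTX 2080 Ti", "vram_gb": 11},
        {"name": "RTX 2070", "vram_gb": 8}]

print(pick_render_devices(6, gpus))   # both cards run
print(pick_render_devices(9, gpus))   # the 2070 is dropped
print(pick_render_devices(14, gpus))  # neither fits: CPU fallback
```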
Okay, I apologize if I am not understanding this. With that being said, what's the best way to get faster renders? If it just renders with one card at a time, is there no way to use two cards at once except via SLI/NVLink?
I just tried, as I have a 1070 and a 970. The scene was small enough to render on both, but the render time with the two GPUs was almost identical to the 1070 alone. I did make sure both were being used by running GPU-Z; they were definitely both working. So there was hardly any speed advantage to using both. However, one plus is that you can run the system on the lesser of the two cards, leaving the stronger GPU free and saving some valuable VRAM.
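If you want a GPU-Z-style check without extra software, one option (assuming the NVIDIA driver's bundled nvidia-smi tool is on your PATH) is to poll per-card load and memory from Python while the render runs:

```python
# Poll nvidia-smi (ships with the NVIDIA driver) for per-card load and
# memory use; run this while a render is in progress.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,utilization.gpu,memory.used,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True)

for line in out.stdout.strip().splitlines():
    print(line)   # e.g. "0, GeForce GTX 1070, 98 %, 6123 MiB, 8192 MiB"
```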
I have an RTX 2080 Ti (11 GB) and a GTX 1070 Ti (8 GB).
If the scene fits on the GTX both cards will run.
If the scene is too large for the GTX, but fits on the RTX, then only the RTX renders.
If the scene is too large for the RTX too, then it falls back to the CPU.
Yes, I do see a difference when both cards are rendering, versus only the RTX.
TD
And again, sorry if I am not getting this, because it almost seems you're saying they can only render small scenes together. This makes no sense to me. If the scene were large, why wouldn't BOTH cards be used to render it quicker? How do people configure render farms with multiple GPUs? Through servers only?
Iray loads the entire scene into every card and each card runs independently; the more cards, the faster the render, provided the scene fits in each card's memory. I have a 980 Ti (6 GB) and a 1080 Ti (11 GB); most scenes I put together fit in 6 GB, so both cards run. I have some scenes that run to 8 or 9 GB, and just the 1080 Ti gets used for those. There are tricks that can be used to reduce memory requirements, but I haven't really done much in this area, as my renders tend to finish in 20 minutes or so and that's good enough for me. Server farms are under the same constraint, but most of them are aimed at professionals who can write off the cost of Quadro or Volta cards with more memory.
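A back-of-envelope way to see the scaling: because each card accumulates samples independently, their iteration rates simply add when the scene fits on both. A tiny sketch with made-up rates (illustrative numbers, not benchmarks):

```python
# Throughputs add across cards, so render time is samples / total rate.
# The rates below are invented for illustration, not measured benchmarks.
samples_needed = 3000        # iterations to reach the convergence target
rate_fast = 30.0             # iterations/sec on the stronger card
rate_slow = 12.0             # iterations/sec on the weaker card

t_single = samples_needed / rate_fast
t_both = samples_needed / (rate_fast + rate_slow)
print(f"one card: {t_single:.0f} s, both: {t_both:.0f} s "
      f"({t_single / t_both:.2f}x faster)")
```

That also matches the pattern people report in this thread: a weak second card shaves some time off, but nowhere near half.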
Iray will treat each card independently and then merge the results. I do this with a 2070 and a 970 together (non-SLI). It knocks about 30 seconds off a 2-minute-30-second render for me. With Iray, things like SLI and now NVLink are more about sharing memory, which I believe is a feature of Quadro cards only. For gamers, I'm pretty sure there aren't that many (any?) games that will use both cards independently (Ashes of the Singularity?), even though the new graphics APIs (Vulkan, Metal, DX12) allow this.
People often talk about multi-GPU MEMORY pooling, but what about CUDA CORE pooling? If I have an RTX 2080 Ti (4,352 CUDA cores) and an RTX 2060 (1,920 CUDA cores), do I get the benefit of BOTH sets of CUDA cores (6,272)? Only through SLI/NVLink, or...?
I just ordered a KICKIN' new Alienware desktop and should have it Friday, so I will be testing all of this out, but I was hoping to save myself some trial-and-error bother if I can get answers ahead of time.
(The new machine comes with the RTX 2060 and has a large enough bay and additional slot configuration for the RTX 2080 Ti I already have, and the power supply is adequate for two GPUs, etc., so no worries on any of that.)
Thanks for any insight.
No NVLink required for what you call CUDA core pooling. Because of the way path tracing (and ray tracing in general) works, i.e. by accumulating samples, the scene can be sent to both cards and rendered, then the results combined. The only issue with having a 2060 and a 2080 Ti is that the latter has far more memory than the former, so you may find Iray dropping the 2060 if your scene doesn't fit.
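For intuition, here's a toy numpy sketch of that merge step: each card accumulates its own samples of the same scene, and the buffers are combined as an average weighted by sample count. This is only an illustration of the idea, not Iray's actual code:

```python
# Two cards render the same scene independently; merging is a weighted
# average of their accumulated buffers by sample count. Illustrative only.
import numpy as np

h, w = 4, 4                                  # tiny stand-in framebuffer
rng = np.random.default_rng(0)
fb_a, n_a = rng.random((h, w, 3)), 2200      # e.g. samples from a 2080 Ti
fb_b, n_b = rng.random((h, w, 3)), 800       # e.g. samples from a 2060

merged = (fb_a * n_a + fb_b * n_b) / (n_a + n_b)
print(merged.shape)   # (4, 4, 3): same image, more total samples = less noise
```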
Thanks! Good to know. I am trying to decide whether to keep BOTH RTX cards in the new machine, or put the 2060 in my current machine and still use it to render as well. I guess a few test renders with BOTH RTX cards will decide that. :) If I decide to use both RTX cards in the new machine, I could always put my GTX 980 Ti back in the current machine, as it gave excellent performance.
An alternative is to use either the 980 or the 2060 to drive the monitor(s) exclusively, so the 2080 is free to do nothing but render. I don't know if that's a performance win. It'll be a small memory win on a lesser card, but on a 2080 Ti that's probably not an issue for you.
I was thinking about the 'system card' and 'render card' vs. multi-card-for-everything scenario. So, if I physically hook the Dell 34" Ultrawide curved screen and two Dell 24" widescreens (one portrait, one landscape) to the 2060, would I need to do anything else with the 2080 besides have it properly installed in order for it to be the default render card? I'm initially leaning towards this split-tasks configuration if it will allow the CUDA cores of BOTH cards to pitch in on a render.

I use DAZ Studio (very comfortable with most operations), Blender (absolute NOOB; I still can hardly even move a camera around or re-texture anything, etc., but I'm determined to get there), very heavy Photoshop usage, and around 20-30 browser tabs open at any given time.

The new Alienware is replacing a more-than-7-year-old Alienware that has served me very well over the years. Still, I'm hoping to have my socks blown off! (New machine: i9 9900K 4.7 GHz, Win 10 Pro, RTX 2060 plus my current RTX 2080 Ti, 64 GB RAM, 2 TB SSD + 2 TB 7200 RPM HDD. WOOHOO!)
I can only speak to DAZ Studio, not Photoshop (I think the latter is harder on the CPU and system memory), but you don't need to do anything other than install the card and then make sure it's selected in the Iray Advanced settings tab.
OK, thanks Robinson, I appreciate the feedback and info!