Optimizing RAM and GPU
OVD
Hi all
So I'm pretty tech-savvy when it comes to software and how the computer runs on that side of things, but hardware is where I'm less knowledgeable.
Right now I'm running a relatively heavy render, and when I checked my performance I found that my GPU is only averaging between 1% and 4%, while my memory usage is much higher.
I don't usually have problems with slow renders or anything like that, but I want to know if there's something I can do or change that will speed up my renders a bunch, like adding more system RAM, changing the allotment of dedicated or shared GPU memory, etc.
Thanks!
[Attachment: Screenshot (54).png, Task Manager performance graphs, 1360 x 715]
Comments
Set one of those graphs to CUDA, or Compute_0 if you don't have a CUDA option, and you should see the usage.
How do I do that?
Click the downward triangle to the left of the graph name.
All those down arrows, 'v', are drop-downs. Click one and choose CUDA.
Based on those graphs your GPU is doing the render.
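If you ever want to double-check that outside Task Manager, you can read the same numbers straight from the NVIDIA driver. This is just a rough sketch (it assumes you have Python and the nvidia-ml-py/pynvml package installed, which isn't something DAZ itself needs); run it while the render is going:

```python
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # GPU core load, in percent
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # VRAM figures, in bytes
    print(f"GPU {i}: {util.gpu}% busy, "
          f"{mem.used / 1024**2:.0f} / {mem.total / 1024**2:.0f} MB VRAM in use")
pynvml.nvmlShutdown()
```

If that shows the card sitting at 90-100% busy during a render, it's being used; the 1-4% you saw was just the default 3D graph, which doesn't count CUDA work.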
Got it. So it looks like it's maxing out, which means that increasing VRAM or RAM wouldn't help at all?
Is there anything else I could do to help it along?
Add a second card or replace the 2060 with one higher up the product stack (which would also get you more VRAM).
So if I add a second video card, both will work on a scene only as long as the scene fits in memory on each of them, right? Or is it the VRAM or the CUDA cores that stack?
If I added a second of the same card, would it in theory render twice as fast as one? And if I were to look for a second card, should I be looking more at VRAM or at CUDA cores?
Thanks!
Each card has to fit the scene in its own VRAM to be used. Supposedly some cards can use NVLink and the VRAM gets pooled, but I cannot personally verify this.
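To put rough numbers on the "each card on its own" part, here's a hypothetical back-of-the-envelope check; the scene size and free-VRAM figures below are made up purely for illustration:

```python
# Hypothetical numbers, just to illustrate the rule: every card has to hold
# the entire scene in its own VRAM, the cards don't split it between them.
scene_size_gb = 4.5                  # rough guess at what the render needs
free_vram_gb = {"card 0 (drives monitors)": 4.2, "card 1 (render only)": 5.6}

for card, free in free_vram_gb.items():
    if scene_size_gb <= free:
        print(f"{card}: scene fits ({scene_size_gb} <= {free} GB) -> can join the render")
    else:
        print(f"{card}: scene too big ({scene_size_gb} > {free} GB) -> dropped from the render")
```

So, NVLink pooling aside, two 6 GB cards don't give you a 12 GB scene budget, just two cards each chewing on the same up-to-6 GB scene at once.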
I guess one thing I'm still confused about is whether there's any correlation between VRAM and CUDA cores. It doesn't seem like there is, so I wouldn't necessarily need more VRAM where I am, just more CUDA cores, but the scene still has to fit into the VRAM available on each card. I think I'm starting to understand how it all comes together now.
In theory though, I could get a lesser card and put all my shared Windows GPU functions on it, then dedicate my RTX 2060 entirely to DAZ rendering, right? That wouldn't speed up renders since there are no extra CUDA cores, but it would make it so I can render larger scenes. Is that all correct?
Getting a lesser card to use for display etc. might save a bit of VRAM for rendering, but Windows, especially Win10, will be using a good chunk of VRAM either way. It reserves VRAM in case people want to hot-plug monitors, with no way to turn that off. You have the general idea right, though: more cores = more speed, more VRAM = more you can fit in a render.
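If you're curious how big that Windows chunk actually is on your 2060, the same kind of script works with nothing rendering: whatever is already "used" on an idle desktop is roughly what Windows and your open apps have claimed. Again just a sketch, assuming nvidia-ml-py is installed and the 2060 is device 0:

```python
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assuming the 2060 is device 0
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

print(f"Total VRAM:                  {mem.total / 1024**3:.1f} GB")
print(f"Already used (idle desktop): {mem.used / 1024**3:.1f} GB")
print(f"Headroom for a render:       {mem.free / 1024**3:.1f} GB")
pynvml.nvmlShutdown()
```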
Mkay I think I have a better understanding of how it all works now. Thanks!
Within the same series (RTX 20xx, for instance), cards with more VRAM also have more CUDA cores. And the higher the model number, a 2080 versus a 2070, the more CUDA cores, even when the amount of VRAM is the same.