GPU Rendering
So, I will be coming into some cash soon, and I'd like to pick your guys' brains.
As I understand it, in order to render on the GPU, the first/primary video card must have enough RAM to hold the render; otherwise it falls back to CPU rendering. Do I have this correct?
Going on the assumption that I have that correct, I was thinking about buying a Titan RTX and a 2080 Ti. I don't have quite enough for two Titans, and there are only about 250 CUDA cores separating a 2080 Ti and a Titan RTX, so paying the extra ~$1000 for a second Titan isn't that cost effective. I do like to render large scenes, and I want to get into 4K animation; that's why I am looking at the Titan RTX to begin with. Lots of active/visible textures and content in the scenes.
Currently I render using a 1070 Ti and a 970, but because of the limited amount of memory on the 1070 Ti, it often falls back to CPU-only renders, and this will only get worse as my scenes become more complex.
The other idea I had was getting a Threadripper 2990WX and an X399 motherboard; it would save me money, but would it save me time? I already have 64GB of RAM in this machine, and picking up another 64GB (putting me at 128GB total) would still keep me under budget, but would that be as effective as what would amount to ~9000 CUDA cores and 24GB of VRAM?
Any input/feedback/experience would be appreciated.
Comments
In a nutshell, yes - although whether a card is "primary" (assuming you mean in terms of PCI-E slot location or being used to drive monitors) or not makes no difference. In a standard consumer multi-GPU setup, Iray works by loading your scene in full on each and every render device selected, then summing alternating render passes from each device into a master buffer for final output. If any one card in the system has insufficient memory (regardless of its primary/secondary role), Iray will simply drop that card from the process and keep going on all the others (or revert to CPU if it runs out of alternatives).
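To make that concrete, here's a rough Python sketch of the selection logic as I understand it - the function name and scene sizes are made up for illustration, not anything from Iray's actual API:

```python
# Illustrative model of the device-fallback behavior described above.
# Hypothetical function and numbers - not Iray's actual API.

def pick_render_devices(scene_size_gb, devices):
    """Keep every GPU that can hold the full scene; fall back to CPU if none can."""
    capable = [name for name, vram_gb in devices if vram_gb >= scene_size_gb]
    return capable if capable else ["CPU"]

# Example with the cards from the original post (8GB 1070 Ti, 4GB 970):
devices = [("GTX 1070 Ti", 8.0), ("GTX 970", 4.0)]
print(pick_render_devices(6.5, devices))  # ['GTX 1070 Ti'] - the 970 gets dropped
print(pick_render_devices(8.5, devices))  # ['CPU'] - nothing fits, CPU fallback
```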
I have a Titan RTX (got it for similar reasons as you - plus its usefulness in working with 4K/8K video), and the one big drawback in potentially combining it with a 2080 Ti (something I have also considered) is the lack of support for TCC (Tesla Compute Cluster) mode on the 2080 Ti. This is important because TCC is the mechanism by which professional graphics software currently implements shared video memory across multiple GPUs. Without that functionality, any scene over the 11GB threshold will only render on the Titan RTX - making the 2080 Ti a useless expense (assuming 11GB+ renders are your focus, of course). If the goal is maximum CUDA cores and Titan RTX-level memory at a sub two-Titan price, the only really worthwhile solution is a Titan RTX + Quadro RTX 5000, since that is the only combo that will guarantee you a full feature set with your projected larger scenes.
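To show why that matters, here's a quick sketch of the memory math - treating TCC-style sharing as a simple sum of VRAM, which is a simplification for illustration rather than how allocation actually works:

```python
# Simplified memory math: without pooling, each card must fit the whole scene
# in its own VRAM; with TCC-style pooling, capable cards can share.
# A simplification for illustration - not how allocation really behaves.

def cards_usable(scene_gb, cards, pooled=False):
    if pooled:
        total = sum(vram for _, vram in cards)
        return [name for name, _ in cards] if total >= scene_gb else []
    return [name for name, vram in cards if vram >= scene_gb]

cards = [("Titan RTX", 24), ("RTX 2080 Ti", 11)]
print(cards_usable(15, cards))               # ['Titan RTX'] - 2080 Ti sits idle
print(cards_usable(15, cards, pooled=True))  # both cards, if pooling were possible
```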
There's a post I did a while ago somewhere (found it!) where I computed the theoretical performance difference between a Titan RTX and the 2990WX, and the outcome was something like a 2.5-4 times better price-to-performance ratio in favor of the Titan RTX - especially when you consider the platform cost (motherboard, etc.) required for an HEDT system. As a rule, even moderately performing current GPUs destroy high-end CPUs when it comes to 3D rendering and cost/performance - to the point where people are oftentimes better off, from a system power usage/cooling perspective, leaving their CPUs out of the rendering process entirely whenever possible.
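If you want to redo that math yourself, the arithmetic is just render throughput per dollar on each side. The launch prices below are real ($2,499 Titan RTX, $1,799 2990WX), but the relative throughputs and the $600 platform cost are placeholders I made up for illustration - substitute your own benchmark figures:

```python
# Back-of-envelope price-to-performance comparison. Launch prices are real;
# the relative throughputs and $600 platform cost are assumed placeholders.

titan = {"price": 2499, "throughput": 4.0}               # assumed relative speed
threadripper = {"price": 1799 + 600, "throughput": 1.2}  # CPU + X399 board etc.

ratio = (titan["throughput"] / titan["price"]) / (
    threadripper["throughput"] / threadripper["price"]
)
print(f"Titan RTX price/performance advantage: ~{ratio:.1f}x")  # ~3.2x with these numbers
```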
ETA: See this post in the benchmarking thread from JackTomalin (who has a 2990WX).
RayDAnt, thanks for all of that info. I wasn't aware that you could share VRAM between two Titan RTXs. That is certainly something to consider going forward, so it may indeed be something I try to make happen - though I would likely buy one Titan first, then wait and buy a second. Also, I figured upgrading the CPU was a futile task, so thank you for confirming that for me. Isn't the Quadro RTX 5000 similarly priced to the Titan, though? I thought they were only about $200-$300 apart as far as price goes. And will the Titan and the Quadro be able to share VRAM? I had never considered going with a Quadro as the "secondary" card.
If you can afford such hardware, better to learn Maya or 3ds Max. Seriously, I'm not being sarcastic or making a joke.
What does one have to do with the other? Rendering, period, takes raw processing power; it doesn't matter which render engine or software you use. Sure, some are slightly faster than others, but all in all, no matter what you're using, you need top-of-the-line hardware to get the most speed out of your scenes and renders. So telling me "if you can afford such hardware, you should learn Maya or 3ds Max" (implying that I don't already know how to use them, but really, I think, implying that I should buy them and use them) makes no sense. They are both $1500/year, and they do absolutely nothing to solve the VRAM problem or let me render meaningfully faster than DAZ with Iray can. I would still need to upgrade the hardware to make the most of the software ;-)
FWIW, I already know how to use 3ds Max, Maya, and Blender to model, and I have limited experience with ZBrush; I have nearly no experience using any of them to animate... I use DAZ because of its ease of use, its library of content, and its licensing terms for its models, and because I am way too lazy to model, texture, and rig every character and object by myself. It's much easier to use DAZ; it saves me headaches and works for my needs, all without having to spend $1500/year on a license... I could just use Blender and achieve those same results for free ;-)
If I sound grouchy, I'm not; I have a massive toothache, I'm running on 3 hours of sleep because of school, and I appreciate your reply. I just don't understand what one has to do with the other. If you wish to enlighten me further, you're more than welcome to.