Which is better for Daz, Titan RTX or Quadro RTX 8000?
Hello everyone, how's everyone doing? My current build:

- i7 6700K @ 4.0 GHz
- Asus Maximus VIII Hero motherboard
- Gigabyte GTX 1070 G1 Gaming
- Corsair Vengeance 3200MHz DDR4 (4 x 8GB = 32GB)
- 2 x Samsung 850 EVO 512GB
- 16TB of HDDs
- (not mentioning chassis, fans, etc.)

With my current build most of my renders take up to 13 to 16 hours at 4k x 4k or UHD resolution, which I am not happy with, so I am thinking of getting a workstation for Daz, as I also do a lot of paid work. What I want to know is which GPU gives the most performance so renders take as little time as possible, less than an hour if possible. The best high-end cards right now are the Titan RTX (24GB, consumer class) and the Quadro RTX 8000 (48GB, workstation/pro level). Which card is better for Daz Studio (my plan is to get two of them and use them with NVLink), and which CPU will be better for them? My goal here is to render in as little time as possible; I don't care if scene loading is slow. I will be getting M.2 SSDs and 128GB of Corsair Dominator DDR4. I will really appreciate the help from you guys, thanks.
Comments
Depends on whether you feel the extra 24GB of VRAM is going to get utilised enough to warrant the extra cost of the 8000.
I'm not aware of anyone using one to render in Studio, which is what I presume you mean. Daz is the company, and they produce more than one product.
Moved to Daz Studio Discussion since it's a question, not a Product Suggestion.
RTX Titan x 2 with NVLink. That is what I would get if I win the lottery.
Not a couple of 48GB Quadros, again with NVLink? Or even two pairs - Scan had a deep learning system with four, at a mere £50,000 all-in.
Chicken feed.
A single RTX 8000 is going to have 4,608 CUDA cores; a pair of Titan RTXs linked would have 9,216.
That I can't tell you, as I've not rendered anything with those cards. Now, if you want to send me one of those cards, then I'll be able to tell you.
Seriously though, you should be able to fit 90% of scenes in the card's memory, especially if using NVLink. As @StratDragon said, a helluva lot more cores which will mean faster rendering.
Titan RTX owner here. The rendering speed difference between a single GTX 1070 and a single Titan RTX/RTX 6000/RTX 8000 (all three of these cards share the same GPU die) is about 300%, not counting RTX acceleration benefits. So a 16-hour render on your current system should take around 5.3 hours. With two of them (connected via NVLink or not) you should get very close to double the performance, so under 3 hours total for what currently takes your system 16.
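A rough back-of-the-envelope sketch of that math, assuming the ~3x single-card speedup quoted above and near-linear multi-GPU scaling in Iray (the scaling factor is a guess, not a benchmark):

```python
# Back-of-the-envelope render-time estimate. The ~3x speedup and the
# multi-GPU scaling factor are assumptions based on the figures above,
# not measured benchmarks.

def estimated_render_time(current_hours, speedup_per_card, num_cards, scaling=0.95):
    """Estimate render time after swapping in num_cards faster GPUs.

    scaling = assumed efficiency of each additional GPU (Iray scales
    close to linearly, so something in the 0.9-1.0 range is a fair guess).
    """
    effective_speedup = speedup_per_card * (1 + (num_cards - 1) * scaling)
    return current_hours / effective_speedup

print(estimated_render_time(16, 3.0, 1))  # ~5.3 hours on one Titan RTX
print(estimated_render_time(16, 3.0, 2))  # ~2.7 hours on two of them
```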
As for which model to go with... 16GB is about all the video memory you will ever need for a single Iray render (at least for the foreseeable future.) Any capacity beyond that is only going to be useful for future-proofing or multitasking (I cannot stress how useful it is to be able to have multiple GPU bound applications running at the same time without needing to worry about memory limits.) So even a Titan RTX should be more than sufficient for your Iray needs.
The reason for possibly going with a Quadro RTX 6000/8000 over a Titan RTX would be what sorts of other workloads besides Iray you might be running. Quadros have error-correcting memory, which makes them well suited for performing mission critical data calculations (eg. simulating protein chains or plotting spacecraft trajectories.) Iterative workloads like Iray or most other graphics rendering engines don't really benefit from ECC because they are inherently self-correcting processes already.
Pretty much ANY modern 4+ thread processor (including bargain bin cheap ones like the Core i3 or Ryzen 3 series) is more than powerful enough to drive Iray to the max so long as you've got one or more high VRAM capacity graphics cards at your disposal. So avoid the temptation of trying to spend your way to where CPU fallback rendering is a viable option. I know there's a lot of hype right now around the massive multi-threaded (ie. rendering friendly) performance gains with the latest high end AMD CPUs. But the reality is that for Cuda-enabled applications like Iray, the cost-to-performance ratio between CPU and GPU rendering is literally orders of magnitude different (see the Overall Value column in this chart - the numbers there represent how many iterations each dollar spent gets you in rendering performance.) And since Iray was built from the ground up with multi-GPU setups in mind, there simply is no scenario where shelling out money for an expensive HEDT CPU/MB combo doesn't lose out massively in value to just adding another Cuda GPU to your system.
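To make the "iterations per dollar" idea concrete, here's a minimal sketch of how that value metric works; the hardware entries and numbers below are hypothetical placeholders, not figures from the chart:

```python
# Minimal sketch of the iterations-per-dollar value metric. The entries
# below are hypothetical placeholders - substitute real benchmark numbers
# and street prices to reproduce a chart like the one linked above.

def iterations_per_dollar(iterations_per_second, price_usd):
    """How much Iray rendering throughput each dollar spent buys you."""
    return iterations_per_second / price_usd

hardware = [
    # (name, Iray iterations per second, price in USD) - made-up values
    ("Hypothetical Cuda GPU", 10.0, 1200.0),
    ("Hypothetical HEDT CPU", 1.0, 1800.0),
]

for name, ips, price in hardware:
    print(f"{name}: {iterations_per_dollar(ips, price):.4f} iterations/s per dollar")
```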
The reason for getting into the HEDT CPU market would be other 3D apps you use, most of which either can't utilize Cuda GPUs at all, or can't use more than one of them at a time without a lot more R&D. It's been years since I last touched 3DS Max, so I can't speak too much to that (although I know it does happen to be one of the handful of other commercial apps where Iray is a fully supported option.) So you might want to check over there for CPU recommendations to get a broader view. If I were in the position to be upgrading right now (which I'm not, since my i7-8700K is still more than sufficient for even my wildest Iray needs) I'd probably be looking at the Ryzen 9 3950X, since it seems like the most well-rounded option.
Something dumb about NVIDIA cards I discovered today at work: the consumer grade cards, which I assume includes the Titan (it's for baller consumers), are all artificially limited to 2 concurrent NVENC streams. Yes, you cannot hardware encode more than 2 movies simultaneously. It's not a hardware limitation, it's a driver-level software one. All but one Quadro card get rid of this limitation. I hate it when companies segment their caps like that (though mine does it and I'm still getting paid, so I suppose it's all good).