Best GPU for Daz iRay - NVIDIA GEFORCE RTX 2080Ti

Is anyone using the realtime raytracing stuff? Is this the best GPU for DAZ?
Does Daz support SLI?
Comments
Well, the Titan has 24GB of RAM and I think there are Quadros up to 48GB (and they can be paired to share memory, at a cost in performance), so it depends on what you mean by best.
Yes, lots of people are using the RTX features. It's enabled in the 4.12 beta.
The 2080Ti is one of the best GPUs for DS; the RTX Titan and a couple of the RTX Quadros have the same or better GPU and more VRAM, but the 2080Ti is cheaper than those other cards.
At the moment, there is no "fast real-time ray tracing" for iRay; iRay is already real-time and ray-traced. What the marketing means is game-specific, low-quality, fast rendering with a ray-traced overlay: it simply ADDS a layer of selected "real-time ray tracing components" on top of the fake/fast rendering done with DirectX/OpenGL tricks.
Though some RTX components are being used in renders, there is nothing significant enough to show any real major speed gains at the moment (when compared to a non-RTX card that has roughly the same number of CUDA cores, fast memory, lots of memory, high clocks, and Tensor cores).
Daz already does real-time ray tracing if you use the iRay preview; every card, and the CPU, does that. RTX isn't needed. A final render isn't "real-time"... it is a single frame of an image, rendered slowly and accurately. Games render in "real-time", as in "many frames, fast", and RTX is nothing more than a sprinkle of added realism on top of those many fast renders that make up the "video" you see.
But, like I said, the components that make RTX work are being used to assist with rendering, at a computational level, not a "visual level". (You can't add real-time ray tracing to an already real-time ray-traced image for added realism, faster. It is already as real, and as fast, as it can get, drawing that single image!)
There seems to be a speed gain when rendering "strand-based hair", due to the core iRay uses for it, but that applies to both RTX cards and Intel CPUs. Apparently they have not updated the non-RTX cores with similar code yet. (My CPU almost beats my Titan V at rendering strand-based hair, so something is bogging down GPUs in the older dll cores/drivers used to render that specific code. I see it on all my Titan cards - X, Xp and Volta - which all use the same older driver, while Intel CPUs and RTX cards use a newer one within iRay when rendering.)
I have not seen any power-use values posted, but if the results are similar to my other cards, I suspect the RTX cards draw a lot more power doing nearly the same work as my Titan V cards. For the price, the Titan RTX is the best deal, even if it is not any faster, because it has much more memory than the Titan V or any 20xx card.
Remember, the 20xx and 10xx are gaming cards that ALSO happen to be good for rendering. The Titan cards are rendering cards that are ALSO decent for gaming. Gaming cards consume a LOT more power to do the same work as a rendering card - nearly 2x-4x more, which means an electric bill nearly 2x higher for the same level and quality of output. That is also part of the reason rendering cards are not as good for games: they have tighter power restrictions and the focus is on the long-term life of the card, not on "burn it out before the next best thing comes along" with waterfalls of poorly regulated power dumped into the card. :)
E.g., my Titan Xp consumes 200 watts and renders half as fast as my Titan V, which consumes only 100 watts when rendering. Thus an image that takes the V one hour costs about 100 watt-hours, while the same image takes the Xp two hours at 200 watts, or 400 watt-hours - four times the energy per image.
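Just to spell that arithmetic out, here is a tiny Python sketch of the energy-per-image comparison. The 100 W / 200 W draws and the 1-hour / 2-hour render times are only my rough figures from above, not measured constants for anyone else's hardware:

# Rough energy-per-image comparison using the approximate numbers from the post
# above: Titan V ~100 W for a 1-hour render, Titan Xp ~200 W at half the speed,
# so ~2 hours for the same image. Ballpark figures, not measurements.
def energy_per_image_wh(power_watts, render_hours):
    """Energy used for one render, in watt-hours."""
    return power_watts * render_hours

titan_v_wh = energy_per_image_wh(100, 1.0)   # ~100 Wh per image
titan_xp_wh = energy_per_image_wh(200, 2.0)  # ~400 Wh per image

print(f"Titan V : {titan_v_wh:.0f} Wh per image")
print(f"Titan Xp: {titan_xp_wh:.0f} Wh per image ({titan_xp_wh / titan_v_wh:.0f}x the energy)")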
Again, no one has reported the power consumption of the Titan RTX or other RTX cards along with any of the benchmarks.
P.S. You can't compare TDP or gaming power consumption, because gaming uses a LOT more of the chip's components; rendering only uses a fraction of the GPU to do the math. My Titan Xp cards pull about 450 watts each when playing games, and my Titan V cards pull about 290 watts each. (Hence my 1600-watt PSU, which is overkill for rendering - the whole system only pulls about 780 watts when rendering.)
FWIW, any program (e.g. Daz Studio) using Iray doesn't need or support SLI, because Iray effectively has a task-specific version of SLI already built into it.
You should take a look at the RTX benchmark thread. The 4.12 beta is using the RTX cores and the speed increase is substantial - in some renders my 2070 renders the scene faster than my 1080Ti.
My Titan RTX draws about 200 watts (as reported by the SMI tool and verified by a wall meter) when rendering under the current DS 4.12 beta. Don't know what it draws for gaming - not much of a gamer, unfortunately.
And for what it's worth - unlike previous simultaneous generations of Titan, Quadro and GeForce cards, all RTX-branded GPUs share the same die architecture and communication interconnects regardless of Titan/Quadro/GeForce branding. That means they all have virtually identical power-draw requirements - e.g. the Titan RTX, Quadro RTX 8000/6000, and GeForce RTX 2080Ti all draw essentially the same amount of power under the same workload.
Nvidia has consistently reduced power consumption with each generation of GPU. The Titan RTX and the RTX Quadros will draw a little more power than the 2080Ti, since they have significantly more VRAM and VRAM does draw a small amount of power.
Yeah, VRAM is the main contributing factor to the inherent differences. On that note, my 200-watt figure is based primarily on observations from running the various benchmarking scenes floating about, none of which consume much more than 2GB of VRAM. It stands to reason that the watts go up the more VRAM a scene needs (the Titan RTX has 12 individual 2GB GDDR6 chips in it).
The default max power limit as reported by the SMI tool (not to be confused with TDP) on the Titan RTX is 280 watts, and it can be raised all the way to 320 watts. I haven't played around much with those limits (overclocking isn't my usual ballgame and I'm already getting consistent clocks of 1995MHz just from watercooling), but so far the SMI numbers do seem to be consistent with real-world numbers from my power meter, and running e.g. FurMark does indeed pull a full 280 watts from the wall.
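If anyone wants to check their own card, here is a minimal Python sketch of the kind of query the SMI tool exposes. It just shells out to nvidia-smi (which must be installed with the NVIDIA driver and on your PATH); the fields are standard --query-gpu properties, and the numbers you get back will obviously depend on your own card and workload:

# Query the current power draw and the configured power limits via nvidia-smi.
import subprocess

fields = "name,power.draw,power.limit,power.max_limit"
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    name, draw, limit, max_limit = [s.strip() for s in line.split(",")]
    print(f"{name}: drawing {draw}, limit {limit} (max configurable {max_limit})")

Raising the limit itself is done with nvidia-smi -pl <watts> (needs admin/root), which is the knob behind the 280-to-320-watt range mentioned above.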
Not really. RAM chips have to have current flowing to be active. It doesn't matter if the VRAM stores nothing but zeroes; it is still drawing power.
I'm talking about the increased current draw from read/write operations between the individual GDDR6 chips and the GPU's memory controller - not the minimum steady-state current needed to retain memory contents (which obviously doesn't change).
Again, it depends what you think is "the best"...
Speed, power consumption, or value.
The "best", for me, is an army of "Titan-V" cards... Dozens of them. They cost more, initially, but when they are all running, which requires less cooling, less hardware and less power each render... They are "the best". The next below them, in that setup, would be the "Titan-RTX", then the 2080ti. (Due to the higher power to produce the same/similar output.)
Unless the Titan RTX renders 2x faster than the Titan V, it is just consuming 2x more power to do the same work. E.g., I would need 2x more power supplies, which are 2x to 4x more expensive. You can cram up to 16 Titan V cards into a system drawing only 1800W total; to get the same output from 16 Titan RTX cards you would need two separate systems on individual circuit breakers (or 220V operation), each with its own 1800W power supply - more CPUs and motherboards, more RAM, and 2x-4x higher electric bills for rendering and cooling. Not ideal when you render hundreds of hours and hundreds of thousands of images (typical for animations or long-term operations).
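A quick back-of-the-envelope sketch of that PSU math, in Python. The per-card wattages are just my rough render-time figures from earlier (Titan V ~100 W, Titan RTX ~200 W), and the 200 W system overhead and 1800 W per supply are assumptions for illustration, not specs:

# How many 1800 W supplies/systems would 16 cards of each type need while rendering?
import math

def psus_needed(card_watts, card_count=16, system_overhead_watts=200, psu_watts=1800):
    total = card_watts * card_count + system_overhead_watts
    return total, math.ceil(total / psu_watts)

for name, watts in [("Titan V", 100), ("Titan RTX", 200)]:
    total, psus = psus_needed(watts)
    print(f"16x {name}: ~{total} W total -> {psus} x 1800 W supply/system(s)")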
NOTE: A standard motherboard can only fit up to 8 cards unless you use PCIe extenders and risers, so this would NOT be a typical setup at all. It would be closer to what a professional would use for a render farm built for speed and low operating costs. Not a big issue for someone who only renders a few dozen images a month, is just learning, or treats this as a hobby.
However, as a hobby, the BEST bet for a single card is absolutely the Titan RTX, then the 2080Ti (the 2080Ti if you plan to play games more than you render things).
You spent $3k per card for dozens of Titan Vs? What possible use case is there for a $100k render box with a 12GB scene limit?
Like @JD_Mortal just said: Animation.
If you've got over $100k for a render box for animations there are much better options. Why in the world would anyone with that kind of cash be using DS and iRay for animations?
Yes indeed, with that money decent motion capture and vertex morphing programs are probably a bigger priority
stuff such as Faceshift, MotionBuilder etc. Beautifully Iray-rendered DAZ mannequins doing stiff aniblocks with Mimic lipsync are not going to cut it
Granted, you'd still be looking at getting the same sort of overkill computing hardware...
No, you wouldn't. The sort of rig the poster described is pointless or counterproductive for that sort of workload. A 12GB scene limit would be laughed at. Even the pro version of the Titan V had 32GB.
this is what a $100k render box looks like:
https://lambdalabs.com/products/hyperplane
For anything other than animation. Personally, if I were in the position of putting that much money into something, I'd go for something a lot more flexible (e.g. RTX 8000s). But to each his/her own.
Actually, that is a very good point. Titan Vs - still to my own great surprise - do not support NVLink nor any other prior interlink technology, only PCIe 3.0 for out-of-card communication. Meaning that even a group of lowly RTX 2070 Supers stands to outclass a bunch of Titan Vs when it comes to combined memory operations.
If I had $100k to invest just for rendering I'd actually go with 3 rigs. A workstation to do the layout work, a server box with whatever hardware made the most sense for the workload to do the actual rendering and a NAS for storing assets, WIP and completed renders.
There is no way I'd build some sort of mining rig with PCIe riser cables all over the place. Too many added points of failure (I shudder to imagine trying to daisy-chain enough PSUs to get 40+ PCIe connectors).
How do I know, or how can I tell, if I'm using my NVIDIA GPU to render an image and not the CPU? Is there a tab/button I need to click or check? Thanks!