Comments
I don't have that specific environment to test, but across the range of environments I've used, it's extremely rare to have to optimise at fewer than six figures, unless there's an obvious resource hog in the scene (full body fur, non-instanced high-poly trees, etc).
The 3060 might be low down in the Ampere series, but it's a real heavy lifter.
I think the point is also related to knowing how to spot those resource hogs. Many users do not have a clue.
Wish it worked like that for the consumer cards... it doesn't.
Even without a monitor plugged in, Windows will reserve VRAM for the desktop just in case you plug a monitor in. Only with Nvidia's professional cards can you disable it. Even the Titans weren't good enough in Nvidia's eyes; I had a 1060 for that reason but eventually removed it.
I've got GPU monitoring running, and (unless it is outright lying to me) it's clear that with the 1650 as the active card, any programs/videos/etc I load are loaded into just the 1650's VRAM, not both.
Yes, Windows still has a small amount of VRAM reserved for WDDM, but that's not the only thing that could potentially be taking VRAM from the render - at least if you're hoping that the computer can be used for other things without disturbing the render.
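For anyone who wants to sanity-check their own setup, here's a minimal sketch (Python, using the pynvml bindings from the nvidia-ml-py package; the package choice and output format are my own assumptions, nothing Daz-specific) that just lists how much VRAM each GPU is currently holding, so you can see for yourself what the idle card is doing:

```python
# Minimal sketch: print per-GPU VRAM usage via NVML.
# Assumes "pip install nvidia-ml-py" and a working NVIDIA driver.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):   # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i} ({name}): {mem.used / 1024**2:.0f} MB used "
              f"of {mem.total / 1024**2:.0f} MB")
finally:
    pynvml.nvmlShutdown()
```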
The programs aren't loaded into the idle card's VRAM; it just reserves it from use. It depends on your resolution; I forget exactly how much, but it's a pretty substantial amount.
Dang, I should just put my old MB in a box and send it to you. It's an LGA2011-3 X99-E MB with a 12-core Xeon.
On W10 it's about a gigabyte, on W7 just 300MB (with 3 monitors connected).
As I recall, a (relatively) recent update of Windows 10 has reduced the impact.
As far as I was able to track it down, it was the update that brought it down to around a gigabyte; previously it was a fixed percentage of the VRAM, which on cards with more VRAM reserved even more than a gigabyte.
I've been using a 3090 for over a year, and have 64GB of system RAM.
In all the time I've had this build, I've never seen Studio using more than 54GB; even when the 3090 has been shown as using over 23GB, it hasn't gone over that 54GB maximum. When it was using said 23GB of VRAM, it was actually only using about 30GB of system RAM at its peak.
The 54GB usage was actually from before I got the 3090.
I'm running dual 24" displays in W7 and the VRAM load is averaging around 210 MB (with Chrome open).
The larger WDDM footprint of W10 was one more reason I held out.
True, the Titan series can be run in TCC mode; however, I read somewhere here (tried to track the thread down with no success) that non-RTX cards (Pascal and earlier) take a hit to performance and VRAM in 4.12 or later, as some standard cores and VRAM are used to make up for the lack of RT and Tensor cores when handling Iray ray tracing.
Shoot...
And EVGA has their RTX 3060 12GB card for $389 now...
I checked, and they don't ship to my address. Otherwise, I'd be doing the happy dance right now.
I wish that were the case where I live. Here the cheapest I find the EVGA GeForce RTX 3060 12GB GDDR6 XC is around €500.
I have wanted to give rendering a go for a few years now, but the GPU pricing held me back. Now I debate between getting an RTX 3060 12GB or waiting for the 4000 cards. Recently I was talking with a friend about these 2 possibilities and he mentioned that the RTX series is not meant to be used for renders and that the life expectancy of the cards drops when used in that way. Obviously, I don't plan to have the card doing renders 24/7, so mostly a casual/beginner approach, but I've been wondering if this is true and what should my expectations be.
Hmmm... "RTX series is not meant to be used for renders and that the life expectancy of the cards drops when used in that way."
That type of statement needs more than just the word of a friend, and seems highly unlikely, or at least only minimally true and not important in the scheme of things. There is so much untruth in the world today that we need to start investigating facts, and considering the agenda of the source.
Can anyone report similar comments but from reliable sources?
If we assume he means "GeForce" rather than "RTX" (there are definitely workstation cards marketed as RTX), that's maybe technically true, but still definitely wrong.
The GeForce series is indeed not designed with rendering use as a primary concern but instead for gaming. So one can technically say that it's "not meant to be used for renders", but that's different to saying it's "meant not to be used for renders". Nvidia are fully aware the cards get used in this way, and provide driver suites that make it possible to use GeForce cards for Iray. Heck, their "Studio" driver suites, which definitely support GeForce, are actually tested with Daz Studio - amongst other things - which certainly doesn't count as them telling us not to do it.
Also, any load shortens the life expectancy of the cards (so, again, technically true), but it's not uncommon for Iray renders to use less power and create less heat than running a demanding video game would. The difference will often be that a render runs for longer than you might game, but still, the likely lifetime of a GPU (unless you're running it 24/7 trying to mine crypto on it) is generally going to be quite a long way past the time it's functionally obsolete.
The card doesn't know whether it's rendering, gaming or mining, and there are plenty of protections built in. If you ran it close to the thermal limit 24/7 there might be some degradation, but I'm guessing it would be old enough to be obsolete before it became an issue. Modern PC parts are pretty robust.
LTT did a video where he tested ex-mining cards and didn't find any performance or stability issues, and mining stresses the VRAM more than gaming or rendering does.
I use my GPUs for gaming and rendering. They draw more power and generate more heat while gaming, especially if ray tracing and upscaling are in use.
Serried ranks of graphics and video workstations with RTX cards
https://www.scan.co.uk/shop/computer-hardware/workstations-pro-graphics/nvidia-rtx-studio-pcs
https://www.scan.co.uk/shop/camera-and-pro-video/systems/nvidia-rtx-studio-pcs
If " not suitiable", then why and by how much? Facts.
I'm not going to look it up because I don't care. My RTX 3060s work just fine for me, and yes, I'm 73 and I believe I will die before they do. But admittedly, I don't run them for day-long renders. And if it turns out that the factory-shipped "OC" (overclocked) cards run hotter than is healthy, one can easily undervolt the card just a tiny bit and achieve a significant reduction in power draw and temperature while losing little speed. There are lots of YT videos showing how.
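For what it's worth, you don't even need Afterburner to take the edge off. A quick way to get a similar result from the command line is to cap the card's power limit with nvidia-smi (not true undervolting, but it cuts heat and power draw in much the same way). A rough sketch, assuming GPU index 0 and a 140 W target that you'd replace with something inside your card's reported range; it needs to be run with admin rights:

```python
# Rough sketch: cap a GPU's power limit via nvidia-smi (run as admin).
# Not true undervolting, but similar effect on heat and power draw.
# GPU index 0 and the 140 W target are example values only.
import subprocess

# Show the default / min / max power limits for GPU 0
subprocess.run([
    "nvidia-smi", "-i", "0",
    "--query-gpu=power.default_limit,power.min_limit,power.max_limit",
    "--format=csv",
], check=True)

# Set the limit to 140 W (must be within the range printed above)
subprocess.run(["nvidia-smi", "-i", "0", "-pl", "140"], check=True)
```

The setting resets on reboot unless you have it run again at startup.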
Methinks it is much ado about nothing.
A modern high-end graphics card is like a Ferrari, wonderful machine as long as it's maintained by competent people. The biggest threat to graphics cards is improper cooling of the device by negligent owners.
We always fall into this trap of "more & faster" is better, and "newest is the best".
Current generation has the kinks ironed out. Next generation is still a cloud of virtual worms. Let somebody with more money manifest them.
Totally agree with that, PerttiA. Given the peculiarities of DS memory handling, optimising scenes and re-starting DS (shut down and kill the process in Task Manager after each render) may well give you the benefit of a higher-tier card at no cost.
The "scene does not fit to GPU memory" thing is misleading when renders that previously drop to CPU can be "fixed" by simply shutting down and re-starting DS. There's obviously much more to it than just GPU memory thresholds.
How is that misleading? It may well be that restarting frees up resources and so enables a scene that was borderline to work - and of course once memory has run out a restart is needed to free it up; Iray doesn't seem able to clear the partial upload while running.
Misleading because when rendering drops to CPU, some people think "OMG, I'd better run out and get a GPU with more VRAM", when, as you say, all it is is a DS memory handling issue that can often be fixed with a restart of the program. Or, as PerttiA says, reducing resource hogs (usually extreme texture sizes) would also mitigate the need for throwing more money at a better GPU. Personally I don't see any real need for anything above a 2xxx RTX for rendering... if you know what you're doing... that's the clincher.
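On the texture-size point, if you want a quick way of trying that out, the sketch below (Python with Pillow; the folder names are placeholders, and you'd work on copies, never your original content library) downscales any maps bigger than 2048 px on the long side:

```python
# Rough sketch: downscale oversized texture maps to at most 2048 px on the
# long side, writing results to a separate folder so originals are untouched.
# Folder names are placeholders; assumes "pip install Pillow".
from pathlib import Path
from PIL import Image

SOURCE = Path("textures_copy")   # placeholder - a COPY of the texture folder
TARGET = Path("textures_2k")     # placeholder - where the smaller maps go
MAX_SIDE = 2048

TARGET.mkdir(exist_ok=True)

for path in SOURCE.glob("*.jpg"):
    with Image.open(path) as img:
        if max(img.size) > MAX_SIDE:
            img.thumbnail((MAX_SIDE, MAX_SIDE))  # keeps aspect ratio
        img.save(TARGET / path.name, quality=92)
```

Whether that's worth the hassle versus just buying more VRAM is the usual trade-off.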
I have a Ryzen system with a 5900X CPU and had an RTX 3060 12GB GPU, until I upgraded to an RTX 3070 Ti 8GB. Both are great for rendering, but the 3070 Ti renders a fairly high-resolution scene (HD textures) with 5 or 6 figures (G3, G8 and G9) in nearly half the time that my 3060 did.
Don