Iray CPU+GPU or only GPU?
XoechZ
Posts: 1,102
in The Commons
Hello!
Finally I have got a brand new MSI GTX 970 (4GB)!
Now I have set up a small test scene in DAZ Studio. In the advanced render settings, both the CPU and the GPU are enabled (ticked). When I render the scene my GTX 970 is working (according to the sensors in GPU-Z), but I also have a CPU load of 99%. Is this the way it should be? What is the CPU doing when the GPU renders? I thought Iray rendered with the GPU only? Should I disable the CPU in the render settings?
It is a small scene with one character, so it fits into VRAM (2GB of 4GB are used). But it seems the CPU is also rendering. I am a bit confused about that.
Comments
Yes, it will use both if you set it to :)
I have a GTX 970, and I invariably render using GPU only. It makes almost no difference to the render time using CPU+GPU, however my PC is much more usable for other tasks during rendering when using GPU alone.
I have found my machine to be unstable when rendering using both... I tend to crash, a lot. I think it's probably exposing some bad RAM, but I'm not sure, as the error message is generic. I tend to avoid it whenever possible.
Ok, thank you! I understand.
But from what I have read about Iray, it renders either CPU only or GPU only. So, are there any advantages to rendering CPU+GPU?
Havos said there is no real difference in rendering speed, and evilded777 said that this option is unstable and tends to crash. So what is it good for?
CPU+GPU rendering is good for people with weak GPUs, i.e. those with just a few CUDA cores. The GTX 970 is not as good as a top-of-the-range card like the 980 Ti or the Titan, but it is still far better than CPU only unless you have a very fast CPU. I have a 4-core hyperthreaded i7 processor, but that still renders 10 times slower than the GTX 970.
Every machine and setup is different. It is a mistake to think someone else's rendering experience will be EXACTLY what you will experience. Other people's experience can give you a general idea but that's as far as you should take it.
GPU + CPU can probably help render faster on machines that have an older NVIDIA card with very few CUDA cores.
I have a GTX 780 and my experience is similar to Havos'. GPU + CPU does not render faster for me, but it is not unstable.
GPU + CPU can make your PC crash if you don't have enough power; for example, my overclocked i7 at full load can draw up to 230W, plus another 250W for the 970, so that's nearly 500 watts right there.
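To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The CPU and GPU figures are the ones quoted above; the 75 W allowance for the rest of the system and the 25% headroom factor are assumptions, not measurements.

```python
# Rough PSU sizing sketch. CPU and GPU figures come from the post above;
# "other_watts" is an assumed allowance for drives, fans, RAM, and the
# motherboard, and 25% headroom is a common rule of thumb, not a spec.
cpu_watts = 230        # overclocked i7 at full load (poster's figure)
gpu_watts = 250        # GTX 970 under render load (poster's figure)
other_watts = 75       # assumption: rest of the system

total = cpu_watts + gpu_watts + other_watts
recommended_psu = total * 1.25   # ~25% headroom

print(f"Estimated draw: {total} W, recommended PSU: {recommended_psu:.0f} W")
```

With these numbers the estimate lands around 555 W of draw, which is why a PSU sized only for the GPU or only for the CPU can fall over when both render at once.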
DS 4.8 even had a bug that caused a crash with some materials if GPU+CPU was used. The advice I was given was to uncheck CPU. That bug is fixed in 4.9, but I still leave CPU unchecked. I have a GTX 980 ti.
I said it was unstable for ME, and that I think it is a hardware issue... which further implies that it is unique to my situation. Just want to be clear. I am not making a general statement that it is unstable.
It is one of those options that for most is a good thing, but does tend to be system/hardware influenced, at least. And for some, it is prone to cause more headaches than it prevents.
I don't think it works at all. I get the same render times, only my computer becomes very choppy. I don't use both at the same time. I like to browse the internet while I'm cooking a render up. Matter of fact, that's what I'm doing right this very second. If I checked the CPU, I'd get the same render times, only I couldn't use my computer.
It is working, just not the way you may think it works. Iray likes to "time share" between available render hardware. It will try to balance the CPU and GPU, so depending on the speed of your hardware (some CPUs are much faster than others; some cards have more CUDA cores than others, etc.) you may not get much difference in render times, or the difference may not be worth it. All you can do is try it on your hardware. If the difference is lackluster, turn the CPU off and regain the use of your machine during rendering. Most CPUs aren't made to be pegged at 90-100% utilization anyway.
I have 28 cores and 56 threads in my CPUs. My GPU is a 1080 Ti. When I render with CPU+GPU, it is about 20 to 30% faster than GPU alone. I rarely use DAZ Iray, but when I do, you can guess which option I choose. As the previous poster said, YMMV. Just try it out and see if it works for you.
"Most CPUs aren't made to be pegged at 90-100% utilization anyway."
There is a misconception in this forum that won't seem to die. Somehow, some people think that GPUs are stronger than CPUs. They think CPUs are delicate snowflakes that will break if they are used hard. Most of these people are probably very young and weren't around when CPUs did all the work. To be clear, if CPUs aren't made to be worked hard, neither are GPUs. There is nothing special about a GPU except that it is a specialist and can do some tasks very well. They are made out of the same kind of materials that CPUs are made from. They have the same tolerances, and both will fail if they overheat (there is no such thing as overworking a CPU or GPU). CPUs have been doing the bulk of the computing in the software we use, and will continue to for the foreseeable future. Don't worry: as long as they are cooled adequately, they can take it.
Yeah, usually if you tick both they will both render simultaneously. Personally, I never use CPU rendering because my GPU renders immensely faster than even my 8 core, 16 thread CPU. Like 3 minutes versus 20 minutes. So adding the CPU is worthless.
If your GPU is really slow then maybe you'll want to enable the CPU, depending on your CPU and how powerful it is. The downside is your CPU will be locked up during rendering, which for me is a big pain. I prefer GPU rendering, and leave my CPU for all the other stuff.
But in your case you say you have a GTX 970, which should give a decent render time. The benchmark scene shows 4.5 minutes versus 3 minutes for a GTX 1070. I'd be surprised if your CPU can really make a dent in that.
Huh. My GPU can regularly reach 82C on long renders (at which point it starts throttling itself) unless I keep the environment cool or point extra fans at it.
Then you are a good candidate for extra cooling. Most microchips have a fail-safe throttle like yours to protect the chip, so you probably don't need to buy an extra fan or cooler to stay safe, but if the throttling slows down your work, you may already be paying for it in wasted time.
Here are some actual facts from the NVIDIA website regarding GPUs:
"NVIDIA GPUs are designed to operate reliably up to their maximum specified operating temperature. This maximum temperature varies by GPU, but is generally in the 105C range (refer to the nvidia.com product page for individual GPU specifications). If a GPU hits the maximum temperature, the driver will throttle down performance to attempt to bring temperature back underneath the maximum specification. If the GPU temperature continues to increase despite the performance throttling, the GPU will shutdown the system to prevent damage to the graphics card. Performance utilities such as EVGA Precision or GPU-Z can be used to monitor temperature of NVIDIA GPUs. If a GPU is hitting the maximum temperature, improved system cooling via an added system fan in the PC can help to reduce temperatures."
Yeah, I'm going to have to agree that the maximum operating temperature is what the manufacturer thinks is safe for the GPU. It's a conservative number. However, cooling is a very effective way to make sure you never ramp up to that point and get throttled.
I agree, but for some reason I've always heard the opposite here - people are warning about GPUs / Nvidia cards not being built for rendering, but for gaming which isn't stressing them as much, so the cards won't last as long when using them for rendering. Other things being equal that may be true, but I'd rather warn about using CPUs for rendering for they may run maybe 20-30 times longer at max speed and temperature than a GPU, when rendering the same scene.
Still, CPUs (and presumably GPUs also) are rather tough if cooled sufficiently. I have several PCs with first-generation quad-core CPUs (Q6600), some of the hottest CPUs ever made, and some of them have been running 16+ hours almost daily for many years now, often under heavy load for long periods. Never had a problem with any of them. But they are also cooled well, so even when running stress tests the temperature stays below 55°C. My GTX 1070 usually stays around 60°C during long renders, but I have to set the case top fans (2 x 14") at full speed (3x normal) to keep it there. The card fans then run at around 50-55% speed, so I assume the card is far from overloaded.
"I agree, but for some reason I've always heard the opposite here - people are warning about GPUs / Nvidia cards not being built for rendering, but for gaming which isn't stressing them as much, so the cards won't last as long when using them for rendering. Other things being equal that may be true, but I'd rather warn about using CPUs for rendering for they may run maybe 20-30 times longer at max speed and temperature than a GPU, when rendering the same scene."
Quadro graphics cards are underclocked for the same reason that Xeons are: the users of these brands will usually be doing heavy-duty work, and the extra margin is good insurance. But in practice, most experts won't recommend an expensive Quadro for reliability alone; consumer cards are almost identical except for clock speed. So there is no reason to use a CPU for rendering if there is a faster GPU option, for the reason you stated, unless CPU+GPU gives you an even faster render. In that case, using both exposes each of them to less heat than rendering on either one alone, and most importantly for me, I finish my work faster.
My vague understanding is that the more heat a chip (any chip) is exposed to, the faster it degrades. I expect I'll want to replace (or supplement) this 1080 in a couple of years as a result. And yeah, I wish the in-system cooling was better. It definitely starts throttling at 82C, according to GPU-Z; I see those little red throttling spikes and go flip on the room AC...
I think that if there's a heat problem with GPU cards it may be with the VRAM and other hot components, I've seen such problems reported with several cards in tests. One company made a cooling component that users could upgrade their overheating cards with to fix such a problem.
I think the devil is in the details. I think most CPUs/GPUs are designed to last many, many years (i.e., decades) if used within their design specifications. So even if you cut a year or two off their lifespan, does it really matter? The rest of your computer will probably fail before the CPU, or it will become obsolete WAY before it fails.
And I also think that we're back to the basic issue that if you run it within its continuous ratings there's nothing to worry about. That's why the manufacturer HAS continuous ratings.
And keep in mind there are systems in place to make sure the CPU or GPU protects itself by cranking up the fans and even cranking down the speed/voltage of the CPU/GPU. Or in the worst case shutting down altogether, though that's pretty unlikely if you're operating as designed.
So if you're throttling at 82C and NVIDIA says to keep it below 105C, then I'm not sure why the concern. As I recall, my GTX 1070 runs around 80C at 100% utilization too.
Too much heat is bad, but that doesn't mean that colder is better. If you're within design specs then colder might be irrelevant.
And by the way, here's a recommendation by Puget Systems, which some here believe to be somewhat of an authority:
"For the average system, our rule of thumb at Puget Systems is that the CPU should run around 80-85 °C when put under full load for an extended period of time. We have found that this gives the CPU plenty of thermal headroom, does not greatly impact the CPU's lifespan, and keeps the system rock stable without overdoing it on cooling. Lower temperatures are, of course, better (within reason) but if you want a target to aim for, 80-85 °C is what we generally recommend."
I'm not a chip designer so I can't give you a definitive answer, but I think it's reasonable to want to see as few of those red spikes as possible. If nothing else, they are preventing you from getting the most out of your expensive graphics card.
The cause is not always easy to ascertain. Maybe you have a cramped computer case with insufficient airflow. Maybe you don't have enough fans for a high-powered card. Maybe there is a problem with the thermostat that triggers the fans, or a flaw in the chip or memory modules. Maybe your computer is sitting in a corner and blocking the airflow. There are many factors that can cause overheating, and if you want to avoid the red spikes, you will have to figure out the best fix for your setup.
Adding a fan is cheap, but if you don't know what you are doing, it is possible to make the problem worse. A hybrid watercooler is a catch-all solution, but it costs more than a fan and you will probably need someone skilled to install it. Or you can just buy an electric fan and aim it at the computer; just make sure you leave room on the other side for the hot air to escape. If you plan to add another card in the future, you should probably handle your overheating problem now, since more cards in the box will just add more heat. My next two cards will be watercooled, but a well-designed cooling and ventilation system might be good enough for you. Taoz's way of cooling (forcing the fans to high) is effective if you can deal with the extra noise.
A note about Iray hijacking your machine in CPU mode: with multiple cores, you can set one or two aside so other processes have somewhere to run without trying to squeeze a sliver of CPU time out of the dispatcher when the render sucks it all in. You can set the processor affinity for DAZ Studio to as many, or as few, cores as you want: Task Manager -> Details -> (right-click) DAZStudio.exe -> Set Affinity from the popup menu. Uncheck one or two, and the next time you run an Iray render on CPU, it won't lock you out. I had this problem, especially with early versions of Iray, where it would take me 30 minutes to an hour to get back control (short of the one-finger reset). Sure, the render takes longer, but you can do other stuff while waiting for it to finish. (Hoping to get a new system with an appropriate NVIDIA card and scads of RAM to get this pastime off the ground, soon...)
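The same reservation can be expressed as a bitmask: the affinity value behind Task Manager's Set Affinity dialog (and cmd's `start /affinity <hexmask>`) simply has one bit per logical processor. A small Python sketch of the arithmetic; the 8-thread CPU and the two reserved threads are just example numbers:

```python
def affinity_mask(total_threads: int, reserve: int) -> int:
    """Bitmask with all logical processors set except the lowest
    `reserve` ones, which stay free for the desktop and other apps."""
    all_bits = (1 << total_threads) - 1   # e.g. 0b11111111 for 8 threads
    reserved = (1 << reserve) - 1         # e.g. 0b00000011 for 2 threads
    return all_bits & ~reserved

# An 8-thread CPU, keeping 2 threads free while Iray renders on the rest:
print(hex(affinity_mask(8, 2)))  # -> 0xfc (CPUs 2..7 render; 0 and 1 stay free)
```

So ticking off the first two boxes in the Set Affinity dialog on an 8-thread machine corresponds to the mask 0xfc.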
Best tip ever! Thank you!
That was my main problem with my 6-year-old quad-core i5 "T", which I bought to save on energy with its TDP of 35 watts, as my PC runs 24/7. It does the trick, including gaming; best buy ever so far.
Anything more? Like, can I disable the smaller of two GPUs in a pinch? I've got a 4GB GTX 970 which is doing fine, but I am thinking about getting a second GPU, better and newer, obviously, with more RAM. In normal cases putting them both to work simultaneously is obviously the way to go, but in case I do a render that needs something like 8GB, can I tell DS to just use the larger one?