Iray Renders: Does the CPU really matter?
With the new RTX 3000s on the horizon I've been debating whether it's time to upgrade my 7-year-old rig. I'm comfortable spending around $3,000.
But lately, I've been wondering if I'd be better off just buying the rumored RTX 3090, or whatever the top of the line turns out to be (RTX 3080 Ti/Titan/etc.), and keeping my current setup, or buying a new processor/motherboard and opting for a lower-tier RTX 3080 Founders/OC version. My specs are below. Just wondering which would give me the better overall render times.
● Windows 7 64bit SP1
● Intel Core i7-3770K @ 3.5GHz
● Asus Z77 Motherboard
● Nvidia GeForce 1060
● 32GB RAM
Comments
My current PC is a similar configuration to yours (I have Win 10 and a 1070) and I have been wondering the same. I have a more limited budget so I would be looking at an RTX 3070, but I'm concerned that my old CPU will not be up to the task of modern applications. I was thinking of an AMD CPU to go with the 3070, but I have no idea about compatibility and whether I'll need to upgrade my 32GB RAM and possibly my motherboard too (I have an i7 6700 and an Asus Z170-E LGA1151 motherboard).
If you are upgrading for the sole purpose of doing renders, you'll almost always benefit more from a higher-end GPU than a CPU. However, it depends on a few things. For example, Iray renders will take advantage of CUDA cores and NVIDIA technology whereas 3Delight will not. Also, memory may be more limited on a GPU, and GPU VRAM and mainboard RAM aren't shared. Also, multiple CPUs versus a single GPU might change things. IMHO, if you're building a render box or high-end gaming rig, go with multiple GPUs. I have dual RTX 2080ti cards and am very happy with my render times and the response speed in Iray preview mode.
Agree with all of that (although 2x2080ti or 3080ti is just wishful thinking in my circumstances) but I wasn't thinking of either/or, I was thinking of GPU upgrade only or GPU + CPU upgrade. Either way the GPU would be upgraded.
To put it another way, using the current RTX 2080s as an example, Marble and I are considering:
Keeping a 7-year-old 3rd-gen CPU with 4 cores and 8 threads alongside 4352 CUDA cores (the 2080 Ti), or getting a new-gen CPU with 8 cores and 16 threads alongside a lower-tier GPU with 2944 CUDA cores (the entry-level 2080).
How dramatic a difference would we expect, giving up ~1408 CUDA cores for more cores/threads in the CPU? That is the question.
Well quite. Only so much in the budget, and would it be wiser to blow it all on the best GPU in that price range or split the budget between the GPU and CPU? But, as I mentioned earlier, I'm not clear on the ramifications: does a new CPU require new RAM and a new motherboard? I need to do some compatibility research, but I won't have the budget until the end of the year anyhow (Covid-19 has handed me the budget I had earmarked for a holiday, so I can use it on a PC upgrade).
more RAM means you can just load more stuff into the scene
whether you can render it on the GPU depends on VRAM
I upgraded my RAM on my sad Ryzen to 32GB and now I can at least fit several clothed HD figures in a scene
even if I use OpenGL and do postwork that's a big plus for me (8GB was pathetic)
my 980ti is now my only limitation but I can also fit more on it as DAZ wastes less VRAM on textures
likewise Octane out of core
Why yes, the CPU matters, unless you want to forget about every render you attempt that is too big for your GPU's VRAM. That's just one example of the difference a good, faster, better-multitasking CPU makes.
The render speed on my present i7 6700 CPU is so glacial that I don't even consider rendering scenes that I can't fit in 8GB VRAM. I render stories that have iterations of about 100 scenes. I hate to think how long a project would take rendering on my CPU. Modern "Threadripper" CPUs come in at a frightening cost so they are out too. I gather that DAZ Studio can't take advantage of all those cores anyway.
So render speed is paramount for me but I don't want my PC to be so outdated as to be CPU-hampered when it comes to all other applications outside DAZ Studio.
I think context matters here. If one is doing stills, a fallback to CPU might not be so bad a thing, since a recent Threadripper would go a long way to ease one's pain. But if one is animating, a fallback to the CPU without out-of-core options means one's project has failed.
At this point, I'm less interested in the raw speed of the new offerings from NVidia and AMD than I am their VRAM and VRAM pooling capabilities.
I don't have an nvidia GPU at all in my DS-working machine (a 2019 27-inch iMac), and I am doing 1920-x-1080 Iray renders for animations in anywhere from three to 12 minutes per frame, depending on the type and amount of lighting and nodes in the scene. The main processor is a 3GHz 6-core Intel Core i5 with 72 GB of DDR4 RAM. The GPU is a Radeon Pro 670x with 4 GB RAM.
Colour me astonished. I struggle to get a 1200 x 960 render down to 3 minutes on my NVidia GTX1070 GPU.
@TheMysteryIsThePoint - wasn't there talk of NVidia introducing out-of-core with the next generation cards?
@marble They already do. It's the renderer, i.e. IRay that would have to support it.
Like @WendyLuvsCatz says, Octane's claim to fame is supporting this well, and Cycles on CUDA already supports it to some degree, but not at all in Cycles on Optix.
@mavante So, a 10 second shot at 30fps would take between 15 and 60 hours to render, and only that if you are lucky enough to not spot anything you have to re-render. You've proven my point better than I did. :)
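For anyone wanting to sanity-check that math, the estimate is just frame count multiplied by per-frame render time. A quick sketch (the 3 and 12 minutes-per-frame figures come from the post above; nothing else is assumed):

```python
def total_render_hours(frames: int, minutes_per_frame: float) -> float:
    """Total wall-clock render time in hours for an animation."""
    return frames * minutes_per_frame / 60

# A 10-second shot at 30 fps is 300 frames
frames = 10 * 30
print(total_render_hours(frames, 3))   # best case:  15.0 hours
print(total_render_hours(frames, 12))  # worst case: 60.0 hours
```

And that is before any re-renders for mistakes spotted after the fact, which is why per-frame speed dominates everything else for animation work.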
I didn't prove any point, and don't even know what point you made. I just laid out some facts of my experience without an nvidia card to do Iray. You do multiplication well, though.
Dazzle me with some real-world comparative nvidia-card results to make me feel jealous. How fast should a 1920-x-1080 frame render? Two minutes? One minute? Thirty seconds? Ten seconds? Two seconds?
I'm glad you got all the DS animation problems worked out so you can render them, because I couldn't get it to work well enough that I was willing to spend potentially over a hundred man-hours keyframing an animation just to have my work vanish in DAZ Studio. No, I'm not a professional animator, but I don't want to throw away 100 hours of work nevertheless.
I have a Ryzen 7 2700 (8 core/16 thread) with 16GB RAM, and a FHD render that takes 3 hours on it still takes 2 hours on an nVidia GeForce 1650 Super with 4GB GDDR6, so nothing to write home about at all. I'd spend all your money on a new Ampere 3080 Ti or above if you are planning on Iray rendering.
That Ryzen 7 2700 only cost me $149, plus $69 for the Gigabyte B450 DS3H WIFI motherboard. It uses DDR4 system RAM.
Eventually, if I get adept at animating (and that's a big if, as I want to keyframe-edit too, not only use Kinect capture, which won't be sufficient), I'll have to simplify and composite scenes or use Eevee or the Unity / UE4 / Crytek equivalent realtime render engines. Possibly I could be doing full-on raytracing at 4K, but it remains to be seen how fast the Ampere 30XX or Navi 2 GPUs will be at 4K raytracing. I don't think that's in the cards for another generation or two of GPUs.
I have a similarly aged rig and have been doing some research on budget allocation for my next build. I will upgrade the CPU regardless, since I use a lot of compressed files and multitask a lot, and 4 cores are showing their age. Daz does need the CPU to set up the scene before sending it to the GPU; however, I believe Daz is single-threaded there, so the only benefit is going from 3.5GHz to the 4-5GHz of a next-gen CPU. You will also miss a couple of things by keeping the current CPU/MB.
RTX 3090 or whatever the next Titan is could be the first GPU to push PCIe 3.0 to the limit and will need PCIe 4.0.
NVMe SSD that uses PCIe 4.0 will be much faster launching Daz and loading scenes than HDD or even first gen SSD used in those Z77 platforms.
My animations are so short and simple that I couldn't claim to have solved any timeline problems. I have not moved beyond the most basic of functions, don't understand the IK features and don't attempt anything so complicated as a walk (which I gather is almost impossible to hand-keyframe in DAZ Studio). So my questions here are more about the best use of available funds.
My three options are, as I see it:
1. A powerful Ampere GPU (possibly 3080ti) and no CPU upgrade.
2. Two less powerful (RTX 3070) GPUs.
3. One 3070 and a CPU/RAM upgrade.
I have an Oculus Rift S sitting here and I am told that, although it uses my current 1070, some of the physics in the VR application I tested are CPU bound and that my CPU is just not up to the task. So IRay is not the only consideration.
The way your post was worded, the way posts lack other communication cues, and given the topic of render speed, I did interpret your post as suggesting that a GPU, and the fastest one that one's budget allows, was not absolutely necessary for animation. I'm sorry if I offended you, that was not my intent, but rather to meet facts with facts so that it would be useful for others who have important build decisions to make.
I'll assume the rest of your post was sarcasm/rhetoric, and you are not really interested in hardware solutions or scene optimization for IRay, the topic at hand.
The answer is YES. Why? You're often doing other things whilst you're rendering. It's kind of a background task. I recently upgraded to a 3900X from a 4790K (like... yesterday) and everything is now so much more responsive than it was before. Single-core is really not that much better than my old Haswell, but the multicore obviously blows it away. Definitely wait for NVIDIA's next iteration before committing to a new card, though. I hear crazy things about its performance (not all from NVIDIA's own marketing). Pity Daz doesn't have a free AMD renderer, because I'm hearing crazy things about its upcoming cards too.
Okay. Well, just for clarity, I don't "suggest" or give "communication cues": I say exactly what I mean, and I mean exactly what I say. Maybe for productive discussion, you could just address what I said, and not read "interpretations" into it. It would be a tremendous help.
Pffff. You didn't offend me even slightly, not even a whiff. And FACTS are exactly what I asked you for—so far, not forthcoming. I asked specifically, clearly, for "some real-world comparative nvidia-card results." I asked you specifically: "How fast should a 1920-x-1080 frame render? Two minutes? One minute? Thirty seconds? Ten seconds? Two seconds?"
I'd like your facts to be able to compare them to my facts from my own rendering experience, which I've posted in this thread. I'll keep checking back into this thread and hope for some relevant facts in response.
This is less of a concern for me, since I typically do renders in my sleep. Literally. When I start up my next major project (likely a Daz webcomic), I'm planning to get Render Queue so I can render at night and when I'm at work, then do my page composition after work. Ideally, I want to be able to pump out 6+ panels at or above 1500px in a 12-hour span, and right now I can't even get one out in half that time.
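The throughput target above implies a simple per-panel time budget. A quick sketch (the 6-panel and 12-hour figures come from the post; everything else is plain arithmetic):

```python
def minutes_per_panel(panels: int, window_hours: float) -> float:
    """Maximum average render time per panel to hit the target."""
    return window_hours * 60 / panels

# Target from the post: 6 panels in a 12-hour overnight window
print(minutes_per_panel(6, 12))  # 120.0 minutes per panel, at most
```

So each panel would need to finish in about two hours on average, which is where GPU render speed (and fitting the scene in VRAM) makes or breaks the plan.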
This I hadn't considered. PCIe 4.0 could easily be the determining factor. I am now running Daz on an SSD (via SATA 6Gb/s), though my assets are on a large 3TB drive. Guess I'll have to wait and see what specs are required/recommended.
Instead of interacting with you any further, I think I'll just refer you to RayDAnt's IRay benchmark page which should have enough data to satisfy anyone.
It's not just rendering, it's modelling/posing/interacting with Daz too. Things like building projection maps and fit-to info (though this is more dependent upon processor clock speed than multicore). For example, I'm modelling and listening to music or a YouTube video, I've got Whatsapp open, maybe Discord, there are lots of things that are pinging your CPU for cycles whilst you're using Daz. Anyway just personal anecdote, since switching to 3900X from 4790K it's a lot more responsive.
Try running a pc without one.
Yes it matters, but only limited if you use Iray only.
I use Blender and it matters a lot.
... And if your scene won't fit on your GPU, the GPU is an expensive and useless paperweight.
You could always upgrade the GPU first and upgrade the CPU later when AMD launches its next-generation CPUs towards the end of this year; either the new CPU will be more powerful or you can get the old gen cheaper.
Another alternative I have been contemplating is to pick up a 3080 and later add another 3080 with NVLink for shared memory. Will have to wait and see what sort of memory the new cards come with. Supposedly only the 3080 and above will get NVLink.
I think you need to plan ahead if you want to install two or more graphics cards (from personal experience). Type of case and motherboard is important here, especially having one of the cards blowing hot air onto the other results in one or both downclocking, perhaps also affecting your CPU boost (overall case temperature). I think this is where blower cards are superior and also a very roomy case with good airflow. This is notwithstanding being a baller who can afford custom loops.