Is the RTX 2060 really that weak for rendering? What's the average Dazer's spec and scene size?

My Spec
CPU: i7-5820
RAM: 32 GB DDR4
Storage: 1 TB SATA3 SSD
GPU: RTX 2060
So far I find myself very limited in what I can do with this spec. My average DAZ render resolution is usually 3840x2400 or 1600x2100 (not that lower-resolution renders seem to reduce the stress at all). My ideal scene is usually 3-5 morphed characters with minimal dForce outfits and a poly-modeled environment (optimized, of course) plus a custom HDRI. But I've found my spec has no chance of surviving rendering at that scale. Whenever there are 3 characters on screen at once, my system starts struggling; if I add poly-modeled props plus a few walls, there's a 50% chance it won't render and just returns a black screen. If I close Chrome, the chance improves to 30%, but if I add a high-res HDRI as the global lighting source, 95% of the time it won't work; I keep running out of VRAM here and there. Strangely, if I set the CPU as the rendering device, everything works, slowly but surely.
So I'm not sure if it's because my GPU is too weak for scenes at this scale? I know the RTX 2060 is nothing high-end, a mid-range card at best (I'm only using it for the time being because my 1080 Ti died doing what it loved), but I can do amazing ray-traced scenes in UE4 at a much faster rate, so why is DAZ's Iray engine so slow and inefficient? How's your experience with your current spec? What's your average scene scale (how many characters, with what kind of lighting and background)? Those with Titans and Tis, what's your largest scene size and how does your system hold up?
Comments
iRay is not a game engine. Game engines do all sorts of cheats that avoid actually rendering most of a scene. iRay does render the whole scene.
But a 6 GB card has next to no chance of rendering a pretty heavy scene at 4K. Try rendering at 1080p and cutting down to 2 or 3 characters.
The VRAM of your graphics card is a decisive factor. As soon as a scene is too big for your VRAM, DAZ will render the scene on the CPU, which is much slower.
Having five Genesis 8 figures with highly detailed textures plus a room with lots of further details can fill the 6 GB of your Geforce RTX2060 rather quickly. I have run into this issue with my Geforce RTX2080 occasionally, too.
There are 2 options:
- more stringent resource management to make a scene fit into your VRAM (products like this one can help: https://www.daz3d.com/resource-saver-shaders-collection-for-iray )
- upgrading to a graphics card with more VRAM (the upcoming GeForce RTX 3090 with 24 GB of GDDR6X is sooo tempting for stuff like rendering.)
Your 32 GB of system RAM is totally sufficient, I think, and as long as a modern graphics card handles the rendering, the CPU does not have such a huge impact, as far as I know.
It is not that the GPU is weak; as you have seen for yourself, it can be faster than a 1080 Ti. Rather, the issue is VRAM. Your 1080 Ti had 11 GB, but the 2060 only has 6 GB. That is a huge drop, and that is your problem.
This perfectly demonstrates how important VRAM is. You could have the fastest GPU on the planet, but if you run out of VRAM in Daz, it will be as useful as a fancy paperweight. VRAM should be your top priority when choosing a GPU for Iray.
Iray is not like Unreal. Iray is fully path traced, and every surface carries a lot of additional PBR material data (compared to gaming models). Unreal, on the other hand, is a hybrid engine. It does some ray tracing, but it is not 100%. It is, however, capable of getting very close. Quake RTX is pretty much all path tracing, which is why a 20-year-old game has suddenly become a big, demanding title. But Quake RTX works as well as it does because the models are super low-poly by modern standards. If you used Daz models, it would not run as well (though it would be way faster than Iray, LOL).
So yes, if Unreal is working for you, then for now that is probably the best way to go, given how much VRAM you have. You will need a much larger VRAM capacity if you wish to make larger scenes in Iray. With any luck, maybe you could find a 2080 Ti on eBay for cheap, which would get you back to the same capacity your 1080 Ti had. Also, there are rampant rumors that Nvidia may release larger capacities for the 3070 and 3080, but these have not been confirmed, so this may not happen. If they do, however, you can bet they would come at a premium compared to the current 3080 and 3070. As it stands, the 3080 has 10 GB and the 3070 has 8 GB. I have a feeling that 8 GB would still be too restrictive for you. It would be better, no doubt, but if you can get more, well, obviously that would be better.
It's not much of a stretch to say a good gaming PC is only an entry-level rendering PC.
As others have said, games cheat, use pre-calculated data and take shortcuts that Iray can't because of the type of Render Engine it is. This isn't particular to Iray, but shared amongst any engine of the same type.
For Iray, Nvidia cards are the only way to go; the 2060 is (IMO) barely useful. I have a 6 GB 980 Ti and also consider that barely useful. Folks can do amazing things with a 6 GB card, but that can involve work to get the scene to fit on the card, and may also involve rendering the scene in parts: foreground, midground and background, or perhaps splitting the foreground into parts and rendering the background separately.
Textures are the main resource hogs. There are some pretty simple techniques to massively lower resource use.
Even at 3840x2400, if you have 3-5 characters, there is a very low likelihood that all of them are close enough to the camera to need 4K textures. There are free and paid tools that can help you lower your texture resolution. Or, if your character is wearing pants, don't use leg textures; all they are doing is consuming resources. If you want to be extra frugal, you can have your characters share bump, specular, etc. textures and only give them different diffuse textures.
You mentioned using HDRIs for lighting. If you are using them just for lighting and are not relying on them for a visible background, there is zero reason to have them super high-res; the lighting from a 2K version will be nigh identical.
A 6 GB GPU doesn't mean you can't fit stuff; it just means you have to work harder at optimizing.
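If you'd rather shrink an existing HDRI yourself than hunt for a lower-res version, a rough sketch like the one below does the job. It assumes OpenCV is installed, and the file names and the 2K target are only placeholders, so treat it as a starting point rather than a recipe.

import cv2  # pip install opencv-python; OpenCV reads/writes Radiance .hdr files as float32

src = cv2.imread("studio_16k.hdr", cv2.IMREAD_UNCHANGED)   # hypothetical 16K lighting HDRI
h, w = src.shape[:2]
new_w = 2048                                                # 2K is plenty when the HDRI is lighting-only
new_h = round(h * new_w / w)                                # keep the original aspect ratio
small = cv2.resize(src, (new_w, new_h), interpolation=cv2.INTER_AREA)
cv2.imwrite("studio_2k.hdr", small)                         # point the environment map at this copy instead

The downsized copy takes a small fraction of the VRAM of the original, and for lighting-only use the difference in the render is hard to spot, as noted above.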
I personally have a laptop with a 1060. Things I have rendered on it include: 6 characters and a simple background (I had room for a more complex background, but it was out of focus anyway, so I didn't bother), and 2 characters, each with strand-based hair, plus a Stonemason set for scenery (for this one I didn't even have to optimize, other than tweaking the SBH settings). On my previous laptop, with only a 2 GB GPU, I could consistently fit 2 characters with strand hair (pre-dForce, so these were converted to geometry manually) and simple backgrounds.
The average DAZ user's specs moved well beyond what I can afford long ago.
As the scenes, clothes and figures get more and more resource-intensive, I buy and render less and less, at least in DAZ Studio. I can still squeeze some sets into other software, even figures sometimes.
3Delight is actually becoming more of an option for me too.
Studio drivers prioritize stability and have been tested more. Game Ready drivers prioritize launch day support for games and patches, sacrificing overall stability and testing for quicker access (which isn't to say they're bad but as a creator, stability and reliability are often far more important than being able to use your horse armor or whatever on day one).
Studio drivers are just an older Game Ready driver that Nvidia designates as a Studio driver. Hopefully that is because it is more stable and better tested, but there is no concrete evidence of that. Further, Daz releases new versions of DS that only work with game drivers. Do not get stuck on the Studio versus Game Ready distinction, as there is barely one.
I've always had better luck with Studio drivers. Then again, I also don't use the most recent version of DS or play hot-and-fresh, microtransaction-laden AAA titles at launch, so go with whatever works best for your use case. If it's stable for you, it's stable for you. I'm just going by the information available from NVIDIA.
New versions of DS on their own are buggy enough, though. The prospect of using them with a fresh out of the oven driver is unpleasant.
Thanks for the insight, folks.
Fully aware it's a rendering engine, just a little disappointed that it's so much more demanding than other tools. Yes, 1080p is my current render size.
Well, if a 2080 with 8 GB of VRAM is struggling with 5 Gen 8 characters, then my situation seems to be normal. Yeah, I'm also getting an RTX 3090 once it's released. It really sucks when you've spent hours setting up a scene and the next thing you know, your machine refuses to render because you are out of VRAM.
My 1080 Ti died right before I started using Daz (2 months ago), so unfortunately I didn't get a chance to see the difference. I only got the RTX 2060 as a temp replacement because I knew the Ampere 3xxx series was right around the corner and would make all the 2xxx models look like a scam. Will get an RTX 3090 this time for sure and report back on the difference. I believe UE4's final gather method is close to full path tracing, but astonishingly fast; even without ray tracing, most props with PBR materials still look incredible in raster mode. Too bad I'm not very good at UE4 and can never get a morphed figure to look as good in UE4 (or in any other 3D tool) as it does in Daz (to be fair, UE4 isn't built for that purpose either), so DAZ renders are ideal for my stuff.
Yes, as it turns out, my 2060 really is the bare minimum if that 2080 user is also struggling with the same scene scale. I just wish the render debugging tools were more useful and told you your limit once you put too much stuff in a scene; instead, you only find out when you start rendering, after all the hard work.
Textures are the main resource hogs. There are some pretty simple techniques to massively lower resource use.
Thanks, very smart optimization advice, but it's too labor-intensive. I can't imagine having to remove all my figures' leg textures or hide this or that part of their bodies every time I do a scene; if I were making a game, the effort might be worth it, but it's just a render. BTW, on HDRI resolution, I just experimented with 4K (the lowest I have), and lol, you were right, the difference is hard to spot versus 16K/24K, and it comes much cheaper.
lol! This is why I started this thread. I wonder how Daz is able to attract Mac users (with potato GPUs) and mainstream users if the tax on rendering is so high, because most artists I know have a far more dated GPU than an RTX 2060. And from the replies I've gathered, my GPU really is just the bare minimum.
This kind of query pops up all the time, with VRAM being such a limiting factor with iRay. It's one of the reasons I don't render inside of Daz; I export out so I have access to better engines and other tools.
Pretty much every other GPU renderer has out-of-core functionality, which allows the engine to use system RAM if there is not enough VRAM, at a cost of render speed, albeit not nearly as much of a speed hit as the render dropping to CPU. It's something iRay is sorely lacking. I remember reading that Nvidia was developing that functionality for iRay quite a few years ago now, and to the best of my knowledge it's still not implemented, or close to being so.
Originally I thought this was because iRay can drop to CPU when there isn't enough VRAM, so implementing this was not as much of a priority for them as it was for other GPU renderers (the early days of other GPU renderers were basically a case of not enough VRAM = no render), but now I wonder if the reason this functionality still isn't in iRay is more about trying to sell GPUs that have more VRAM on them.
This certainly isn't a hobby for the faint of heart. It is quite niche; outside these forums you won't find much discussion of it. I am fairly certain that I am the only person among my family and friends IRL who uses this program or even knows what it is.
Besides, the hardware is only part of it. While Daz is free, they make their money selling content, and if you get into that, you can easily drop a lot more on 3D assets than you ever spend on hardware, 3090 included.
The Oct 15th (or is it Oct 30th?) RTX 3060 Ti is supposed to have 8 GB, compute as fast as or faster than the 2080 Tis already out, and cost $399 MSRP.
3840x2400 and 1600x2100 are not 1080p.
1080p is 1920x1080; 4K is 3840x2160.
The stuff that I do frequently I make presets for, and then stick them in my favorites.
So I have a "get rid of leg textures" preset I can just click.
There's definitely some stuff that's pretty manual, but I've found that the more scenes I optimize, the faster I get at it. I'm too cheap to splurge on a fancier machine (and I'm never moving away from a laptop), so it's worth it. Five minutes of optimizing for renders that are hours shorter, and no money spent on hardware: that's a good deal for me.
To answer one of the other initial questions, I don't think it's possible to know what percentage of users are shelling out for the latest and greatest tech. I think it's very possible the forum skews a bit more tech-oriented, and a thread with RTX 2060 in the title is additionally more likely to attract the tech-interested. It's worth noting, for instance, that the overwhelming majority of images in the Daz galleries feature 1-2 characters, which will rarely exceed a 6 GB GPU in my experience, even without optimizations. So looking through the gallery, I would say most people don't need much beyond what a mid-range computer is capable of.
Yep, that was my thought as well: Nvidia made it that way hoping to sell more of their top-of-the-price-list GPUs. I've been slowly making the switch to a Blender pipeline lately; at least Cycles gives me the option of switching from the OptiX renderer to the CUDA hybrid renderer, which has out-of-core capabilities. Not as fast as OptiX, but faster than a pure-CPU Iray render by a long shot, lol. Every week the two major importers for Blender, Sagan and Diffeomorphic, get better and better.
As already mentioned above, I would also like to stress the importance of texture size management when optimizing VRAM usage. One 4K texture (square, 8-bit RGBA), which is a pretty standard size for a Gen 8 character, has 4096 x 4096 = 16,777,216 pixels. Each pixel takes 8 bits per channel, and since a normal RGB image with an alpha (A) channel has 4 channels, a single pixel uses 8 x 4 = 32 bits = 4 bytes. Therefore the total amount of memory reserved by one 4K texture is 4 x 16,777,216 = 67,108,864 bytes, or 64 megabytes (48 without an alpha channel).
A character's body will have separate textures for the face, torso, arms, legs, etc., and each body part will have more than one type of map: diffuse, specular, bump, normal and so on. (Note that most PAs use multi-channel textures for specular and bump maps where a single-channel greyscale image would be enough. This increases VRAM use needlessly, unless Iray converts these automatically under the hood.) All this adds up, and a single Gen 8 character can easily consume more than a gigabyte of VRAM for textures alone.
If you lower the texture size to 2K, each texture will only use 16 megabytes, or 12 without an alpha channel.
Game engines compress their textures into formats optimized for real-time use; that's one of the reasons why Unreal has an easier time rendering the same scene. A DXT1-compressed 4K texture without alpha uses about 8 megabytes (DXT1 stores 4 bits per pixel).
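For anyone who wants to sanity-check that arithmetic, here is a tiny back-of-envelope calculator in plain Python; the "seven surfaces times four maps" character at the end is a made-up illustration, not a measurement of any real product.

# Rough texture memory maths: uncompressed (roughly how Iray keeps textures on the GPU)
# versus DXT1/BC1 (how game engines often store them). Mipmaps are ignored.

def uncompressed_mb(side_px, channels=4, bits_per_channel=8):
    """One square texture in mebibytes."""
    return side_px * side_px * channels * bits_per_channel / 8 / (1024 ** 2)

def dxt1_mb(side_px):
    """DXT1/BC1 stores 4 bits per pixel (8 bytes per 4x4 block)."""
    return side_px * side_px * 0.5 / (1024 ** 2)

print(uncompressed_mb(4096))              # 64.0 -> 4K RGBA
print(uncompressed_mb(4096, channels=3))  # 48.0 -> 4K RGB, no alpha
print(uncompressed_mb(2048))              # 16.0 -> 2K RGBA
print(dxt1_mb(4096))                      # 8.0  -> 4K DXT1, no alpha

# Hypothetical Gen 8 character: 7 surface zones x 4 maps each, all 4K RGB
print(7 * 4 * uncompressed_mb(4096, channels=3))  # 1344.0 MB for textures alone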
Well, if you prepare downsized textures in advance (it takes maybe a minute per texture in most photo editing software), rename the downsized textures in a smart manner, and save them in the same folder, then you can quickly find your downsized textures and easily bring VRAM use down by half (considering you might want to keep a few textures on a foreground character in high resolution). Obviously, if you ever decide to use the same characters again, you already have those low-res textures at hand! So spending 10 minutes converting those textures to low res can earn back the invested time several times over in future renders.
Some designers include character textures in a range of 1K to 4K themselves, which is great, since they generally also include easily accessible buttons in Smart Content to use them. On really distant characters, you could even ditch all the textures and just give them a suitable color.
Removing unused textures (either covered by clothes or because they're out of view) can be done in about ten minutes, but there are tools that can accomplish the same thing even faster.
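If you want to batch the downsizing rather than doing it one texture at a time in a photo editor, a small Pillow sketch like this one works; the folder names and the 2048-pixel target are assumptions, so adjust them to your own library. It only writes renamed copies, leaving the originals untouched.

import os
from PIL import Image  # pip install Pillow

SRC = "textures_4k"    # hypothetical folder holding the original maps
DST = "textures_2k"    # downsized copies are written here; originals stay untouched
TARGET = 2048          # assumed target edge length in pixels

os.makedirs(DST, exist_ok=True)
for name in os.listdir(SRC):
    if not name.lower().endswith((".jpg", ".jpeg", ".png", ".tif", ".tiff")):
        continue
    img = Image.open(os.path.join(SRC, name))
    if max(img.size) <= TARGET:
        continue  # already small enough
    scale = TARGET / max(img.size)
    new_size = (round(img.width * scale), round(img.height * scale))
    img.resize(new_size, Image.LANCZOS).save(os.path.join(DST, "2k_" + name))

After that it is just a matter of pointing the surface's image slots at the 2k_ copies, which pairs nicely with the rename-them-smartly advice above.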
The result will be a *much* faster render as well. Not only because the scene stays on your GPU instead of needing to run on the CPU, but even if it did still run on the CPU, all the calculations would be simpler.
Another advantage is that it becomes more manageable to switch to Iray in preview mode. It's faster and more responsive, so you can use the Iray preview mode more easily to spot errors like floating and clipping.
Eventually, you might want to make a render of a big crowd, all custom figures. Well, even a 3090 will struggle when there are 20 G3/G8 figures with 4K textures on screen, unless, again, you manage your resources. It's always safer to get used to doing so now rather than later, and it's a time investment that earns itself back.
Rough rule of thumb: 2 GB of VRAM per character with default textures. Not always valid, but it's a good start. Some hairs are a lot more demanding than others and may tip the number of possible figures down by 1.
My 6 GB GTX 1060 can do a 3-character render on the GPU with a negligible-complexity background and default textures, or 2 characters with more detail in the background.
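If you want to turn that rule of thumb into a quick pre-render sanity check, something like the sketch below is enough; every number in it is a ballpark guess pulled from this thread, not a measurement of any particular scene.

# Back-of-envelope VRAM budget using the ~2 GB-per-character rule of thumb.
# All figures are rough guesses; real usage depends heavily on textures and hair.

def vram_budget_gb(characters, per_character_gb=2.0, environment_gb=0.5,
                   hdri_gb=0.25, headroom_gb=0.5):
    """Guess total VRAM needed; headroom covers render buffers and OS/driver use."""
    return characters * per_character_gb + environment_gb + hdri_gb + headroom_gb

card_gb = 6.0                              # e.g. an RTX 2060 or GTX 1060
for n in (1, 2, 3):
    needed = vram_budget_gb(n)
    print(n, round(needed, 2), "fits" if needed <= card_gb else "probably drops to CPU")
# 1 -> 3.25 fits, 2 -> 5.25 fits, 3 -> 7.25 probably drops to CPU

Which matches the rule of thumb being conservative: the 3-figure render above only squeaks by with a negligible background and default textures.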
Does the store provide hardware specifications for what users need to buy to be able to use/run DAZ Studio?
Yes, there are, even though it took a bit of digging... DS may start with these, but...
System Requirements
Windows 32-Bit
Intel Dual Core equivalent or greater
1.6 GHz (2 GHZ dual core or faster recommended)
1 GB RAM min (2GB+ recommended)
1 GB of hard drive space for installation
OpenGL 1.6 compatible graphics card with at least 128 MB RAM (hardware accelerated OpenGL 2.2, or higher, compatible recommended with 512 MB RAM)
DirectX 9 (used for audio processing only)
Options for Iray rendering on a 6 GB GPU:
- Never use more than an 8K HDRI. Shrink those 8K maps with a scene optimizer.
- Look for 4K textures, and if there are too many, replace them with other, resource-friendly shaders. There are also some shaders that are invisible in the viewport but use fewer resources.
- Instancing is extremely helpful (render settings may need to be changed).
- Check the render resolution setting and lower it if it's too high.
A discrete GPU doesn't use the same bus lanes as an APU (GPU+CPU) does, and the lanes it does have are too slow. That is one of the limitations they have been working on changing. Maybe GPU manufacturers would love to supply GPUs with no RAM at all and leave that expense as something the PC builder must attend to when buying system RAM. Or maybe there is not much motive to do that at all, as it would increase the years of usage PC owners would get out of their GPUs, and the GPU manufacturers would lose the gigantic price markups they get for including RAM directly on the GPU card.
https://www.daz3d.com/get_studio
And there is an expansion tab about 2/3rds down the page.
Currently I'm doing CPU renders. I have been considering upgrading my Windows desktop PC with a decent graphics card and a suitable power supply so that I could do GPU renders. Reading threads like this, I'm starting to question that upgrade. I'm undecided whether the faster renders are worth the significant chunk of money I would have to spend.
TheRetiredSailor: until mid-last year I did CPU-only renders because my existing card couldn't support iRay. I spent £115 on a second-hand GTX 1060, and render times dropped from ~55 minutes to 4 minutes for a single character. If you can find a 1660 for that price, you'll feel like you have a new machine. OK, you'll still be limited to around 3 figures, but the CPU renders will mostly disappear.
Regards,
Richard.
The speed difference between GDDR6 and PCIe, even the upcoming Gen 5, is just too great. On-board VRAM is not going away any time in the foreseeable future.