Comments
I have yet to see an explanation as to why this happens with an image series. I can render the same scene (and larger scenes with more content) as single images, but when I try to render an image series starting with that same image, it drops to CPU on frame 2. Then I have to jump through hoops to try to get the VRAM down to a level which will actually go beyond frame 2. Saying it is OptiX Prime doesn't help me understand that.
How on earth you manage to get all that content so easily within the 6GB of your 1060 I am at a loss to understand. It wouldn't be an issue for me either if it didn't keep falling back to CPU on frame 2.
Recently my GTX 1070 got burned out. Fortunately that happened a little before the end of the warranty. I sent it to the manufacturer and got back a brand new RTX 2070. After installing it I never got the drop-to-CPU issue that some people are experiencing. I was wondering why not, and remembered that just after installing the new card I tried launching Substance Painter and got a warning about GPU drivers crashing with long computations, with a link to step-by-step instructions on how to fix the issue. So to those getting the drop-to-CPU issue, maybe it would be worth a try.
GPU drivers crash with long computations
Interesting, but I am not keen on messing around in the registry.
As I reported, it's really just resized textures (with XnView), nothing else. You can get the same and better with the Scene Optimizer addon, which it seems to me you already use. Also consider that in that scene there are four dressed characters but only two low-poly hairs, which could make some difference compared with other scenes. Then the pergola environment itself is rather simple too. And there are no HD characters.
You must be reducing the texture sizes a lot more than I am. I generally start with half size and then, if necessary, half that again but have not tried at even lower resolutions. I do leave HD active though - what's the point of having that detail if you can't render it? I use HD musculature and some Zev0 HD morphs a lot in my characters. All in all, after 15 years of working with DAZ Studio, I don't think I've ever been this disillusioned and on the verge of just abandoning it and finding another hobby. Perhaps I'm forgetting all the past frustrations but 4.12 just seems to stand out as a disaster from my point of view.
Do you tend to have other things (eg. multiple open Chrome windows) going on on your machine while rendering? Because that's something that could very easily be the cause for why some people seem to have so much worse luck with rendering on lower capacity cards.
No, I did think of that too. I have made sure that DAZ Studio is the only thing running that is likely to demand video memory. Of course, Windows itself grabs a portion but that is usually around 400MB. If I look right now with a few Firefox tabs open, Task Manager says that my GPU memory usage is 0.3 GB, so a pretty low portion of the total available.
Besides, I don't consider my 8GB card to be "low capacity". I started using IRay with a GTX 970 and only 4GB. It was only when I wanted to have more than two characters in a scene (plus clothing & props) that I decided to upgrade to a 1070 with 8GB. If we have moved into a situation where a 2080ti is considered the minimum then I'm definitely out of here.
Well, if you need multiple HD characters and 4K textures in a complex environment then I fear you also need a high-end card to manage it. For my average shots I'm fine with 1K or 2K textures and no HD, but that usually takes 3 to 4 GB, so there's room if I need more. I also dedicate the 1060 to rendering so Windows doesn't take any VRAM.
I agree and, as I said (several times, I think), I DO use Scene Optimizer so the textures are already reduced to 2K or even lower. I could try 1K but I have noticed that some textures start showing seams at 2048, so those figures would need to be in the background. Mostly, mine are not. I can't dedicate the 1070 to rendering because other programs I have require the 1070 to be primary (i.e. the same card as the display).
We are still skirting the issue of the image series. I just tried a scene with three characters, all with textures reduced to 2048 by Scene Optimizer, as were the props and architecture (the room and furniture). It did render on the GPU although it was pushing the upper 7GB ceiling. If I were to try to render that as an animation image series it would undoubtedly fall back to CPU. I know this because scenes with much less content are falling back to CPU, as I have described in various posts above. I have to revert to 4.11 and switch off OptiX Prime to get those scenes to render using the GPU.
This may happen in the viewport if you set low-quality textures in the user preferences, but it shouldn't happen in Iray rendering unless the texture is really badly designed, as in no padding at all around the seams. Personally I have never met this issue, even with 1K resized textures. I don't know what filter Scene Optimizer uses to resize textures and whether that may make a difference. You could try XnView or GIMP on the offending texture just to see if a Mitchell filter works better. You could also try setting the high compression threshold to 4096 in the Iray panel to get it out of your way, since I've noticed that high compression may give artifacts with some images while low compression seems to be fine.
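If it helps to experiment outside XnView or GIMP, here is a minimal Python sketch for downscaling a texture with Pillow. Pillow doesn't ship a Mitchell filter, so Lanczos is used as a stand-in, and the file names are placeholders for your own textures:

```python
# Minimal texture-downscale sketch using Pillow (pip install Pillow).
# Pillow has no Mitchell filter, so LANCZOS is used as a stand-in;
# the paths below are placeholders, not files from this thread.
from PIL import Image

src = "offending_texture_4096.png"   # hypothetical input
dst = "offending_texture_2048.png"   # hypothetical output

img = Image.open(src)
half = (img.width // 2, img.height // 2)            # e.g. 4096 -> 2048
img.resize(half, resample=Image.LANCZOS).save(dst)  # high-quality resample
```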
As for using the 1070 as primary, this may make a difference, since it will also work as the OpenGL device for the viewport, plus hold the buffers for Windows and other apps. You might consider unplugging the display from the 1070 when you have to render scenes that are close to your card's limits. Or you could keep the viewport in wireframe or bounding-box mode while rendering so it uses less VRAM for OpenGL; this may be especially useful for large scenes.
Marble, I don't have time right now to read the past comments so my advice might not be useful to you.
You can use Windows Task Manager to find out how much VRAM is being used by each process. When you use your render card as your display card, there can be programs using your VRAM that you didn't know existed or never expected to use VRAM.
The option to see VRAM usage isn't on by default, and there are different tabs in Task Manager that show you different things. If you don't know how to enable it, let me know.
Thanks but I do know about the task manager and also GPU-Z. I have even got MSI Afterburner GPU monitor running in the System Tray which also shows the VRAM. If you had checked back through the thread you might have seen my posts with GPU-Z screenshots. But I know that people often don't have time to read the post history so thanks for the suggestion anyway.
Something to keep in mind: going from, e.g., 1K to 2K or 2K to 4K textures isn't a doubling of the basic memory requirement, it's a QUADRUPLING (1K is 1024 x 1024 ≈ 1 million pixels, 2K is 2048 x 2048 ≈ 4 million pixels, 4K is 4096 x 4096 ≈ 16 million pixels, and so on). So if you're talking about 2+ fully outfitted figures plus hi-res environments plus HD geometry expansions... I'm sorry to say that 8GB really isn't all that much any more. Even for modern gaming, 8GB is pretty much the minimum required for the fullest experience in all titles. And it stands to reason that what's best for content creation is always going to be greater than what's best for that same content's consumption.
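To put rough numbers on that quadrupling, here is a small sketch that estimates uncompressed 8-bit RGBA texture sizes. It ignores mipmaps and any GPU-side compression, so treat it as a ballpark, not what Iray actually allocates:

```python
# Rough estimate of uncompressed texture memory (8-bit RGBA, no mipmaps,
# no GPU compression) to show the quadrupling per resolution step.
def texture_mib(side_px, channels=4, bytes_per_channel=1):
    return side_px * side_px * channels * bytes_per_channel / (1024 ** 2)

for side in (1024, 2048, 4096):
    print(f"{side}x{side}: ~{texture_mib(side):.0f} MiB")
# 1024x1024: ~4 MiB, 2048x2048: ~16 MiB, 4096x4096: ~64 MiB
```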
Personally I consider 8GB of VRAM the current bare MINIMUM for doing serious 3D content creation (which is what DS/Iray usage technically is, hobby or not). As demonstrated by others, with some concessions and texture reductions you can pretty much always get 8GB to work (current render-sequence bugs notwithstanding). And I currently wouldn't recommend anything less than 16GB of VRAM for someone doing it as more than a hobby.
The point being that for me it is, always has been and always will be a hobby. I can't afford to throw money at it with 16GB VRAM cards. Even the 2080 Ti (out of my price range) has only 11GB. Until 4.12 I was rendering 3 characters in a scene (admittedly with 2048 texture sizes). With 4.12 I am pushing the 8GB limit with two characters, and forget about rendering an image series. Unless I can find some other way of rendering, either with Blender/Cycles or the recently available free-tier Octane plugin, this hobby has reached the stepping-off point for me.
That seems odd, though, that 2.5 GB of free VRAM is not enough for OptiX Prime. On my system it takes about 1 GB extra VRAM compared to 4.11 without Prime. If it's true that OptiX Prime may require more than 1 GB of VRAM, causing the fallback to CPU, then I can understand why it's considered a bug rather than a feature.
Or maybe some VRAM is allocated by Windows and other apps, so it's not really 2.5 GB free.
Maybe it's a dumb thought, but since some people with 6 gigs can render big scenes without an issue while people with 8 gigs cannot render a simple dressed char in an empty environment, couldn't it be due to some options in the GPU settings?
Something like some new feature that can be safely disabled to save some "space" for renders. On my cheap GPU, thanks to the latest drivers, I got options like VR Frames, DSR Factor or Image Sharpening.
I think comparing the various settings could at least lead to a mitigation while the DAZ and NVidia people fix the main issue.
@Imago Don't forget that some people run into driver issues, so that's another story. Also some people use different DAZ Studio versions, hence different Iray versions. If the drivers work fine and the Iray version is the same, then VRAM usage is comparable. Then there are many "tricks" to save VRAM, from disabling the denoiser to using a wireframe viewport to dedicating the card to rendering, but these are also a personal choice.
I made some tests with 4.12 of how much VRAM is used. It is interesting that the compression settings have a dramatic effect.
A simple G8F with hair and no clothing, no scene props etc. can take from about 370 MB to 1.4 GB depending on the settings.
Maybe that is one of the reasons why there are different results with similar VRAM space on the cards of various users.
Thread is here (compression tests come at the end):
https://www.daz3d.com/forums/discussion/367596/some-iray-ressources-consumption-tests#latest
@Anim Unfortunately the high compression produces artifacts, so it's unusable in most cases. The low compression seems to work fine.
As you say, you haven't read all the thread. If I were the only one having problems, you might have a point but I'm not (nor am I the only one discussing issues about the Octane plugin). I think it is fairly well established that there is a real problem with IRay and rendering an image series. It is certainly more of a grey area when it comes to what people can fit into VRAM and how. Also a grey area is the right DAZ Studio version / NVidia driver combination. Another problem seems to be affecting RTX card users (which doesn't include me). Just because I post details of my findings doesn't mean that I am the only one having problems - indeed I was criticised earlier for not posting enough detail.
Marble, I'm assuming you are using Windows 10, and your 1070 is probably losing a lot of video RAM to Windows 10 being both a VRAM and regular RAM hog. I have Win10 Pro on a gaming machine and installed a 1070 for gaming, and it uses more VRAM than my Win7 Pro render box does in DAZ Studio. I should add that I'm running DAZ Studio 4.10 on the Windows 7 Pro box with a pair of old Titans, while the gaming machine runs Windows 10 Pro with a single 1070 and DAZ Studio 4.12.
I agree DAZ Studio 4.12 is a disaster, but not because of DAZ Studio; because of Windows 10. I had to use a non-MS-sanctioned script called Windows 10 Decrapifier just to get decent frame rates out of my Windows 10 gaming box, and it has an old 8-core AMD Bulldozer CPU at 4GHz and 32GB of RAM. I don't think the problem lies with Studio; I think it has to do with Windows 10 and how it uses RAM and VRAM resources.
If you have the resources, I would get an old socket 2011 8-core Xeon, a cheap socket 2011 motherboard from AliExpress and some ECC RAM, and build an offline render system with Windows 7 Pro, then use your nice 1070 in that system. Just because MS is no longer going to support Windows 7 does not mean it is no longer useful; it just means it is vulnerable to Internet attacks, so keep that system disconnected and only connect it when downloading or manually installing stuff.
Thanks @SilverDolphin, and I really do appreciate the help offered by several contributors here. I can't see myself spending more on another PC, even if I had room to put it. I'm squeezed into a corner in a two-room apartment and under my desk there's just about enough space to fit my knees. As it is, I can see how much Windows is taking and it tends to be about 0.5 GB before I start DAZ Studio. That doesn't increase by much even after DS is started, until the render starts.
A scene with a single G8 figure and nothing else jumps to about 3.5 GB while rendering, and all the optimizing that has been talked about here seems to make little difference to that basic VRAM figure. The biggest effect I saw was from increasing the max compression threshold in the Iray render settings to 2048 (from 1024). That saved a few hundred MB.
However, having said all that, it has become clear that VRAM allocation is not linear. I have a scene with a room, furniture, two G8 characters and clothing, and GPU-Z (and Task Manager) told me I was getting close to the limit, i.e. already above 7GB. Then, for the hell of it, I added another fully clothed G8 figure and it still rendered! GPU-Z reported 7.78 GB. You might suspect the room and furniture were the real VRAM hogs, but no. I deleted them and restarted. Still well over 7GB with just the three G8 figures. So Iray seems to front-load and attempts to use up or squeeze in as much as possible of what is available. I don't know how, but that's the way it seems to me.
What resolutions are you typically rendering to?
I usually like 5:4 aspect ratio and 1600x1280 image size. I've tried at 1200x960 and then used an app (Topaz Gigapixel) to double the size but the app was on a 30 day trial which has now expired. I haven't found a free alternative that gives results as clear as Gigapixel yet.
If I'm rendering an image series (animation) I drop the size down to 1200 or smaller - anything larger would just take far too long. For viewport test renders while I'm setting up a scene I keep them smaller too. I avoid the IRay preview because I can't work with that posing lag (I don't know how anyone can).
Darn. No seeming red flags there. Something that not many people seem to be aware of is that the memory premium required for OptiX Prime raytracing isn't based on scene content alone (like eg. the amount of memory needed for geometry/texture storage), but on a quasi-multiplication of scene content * render dimensions. I've come across multiple people with seemingly anomalous memory issues with OptiX Prime who it turned out were in the habit of doing all their renders at 3-4k or higher resolutions. Which can easily result in otherwise seemingly random out-of-memory situations on all but the highest capacity GPUs.
1600x1280 doesn't sound nearly enough to be making a multi-GB difference (although which specific figures/wearables/etc you tend to favor could change that.) With that said, it might be worth your time to try playing around with different render dimensions to see what level of effect that has on your CPU-fallback situations.
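To get a feel for the resolution side of this, here is a rough back-of-the-envelope sketch of how per-pixel render buffers grow with output dimensions. The buffer count and channel depths below are assumptions for illustration, not Iray's or OptiX Prime's actual internals:

```python
# Rough back-of-envelope: per-pixel render buffers grow linearly with
# pixel count. Buffer count and precision are illustrative assumptions,
# not Iray's actual allocation scheme.
def buffer_mib(width, height, buffers=4, channels=4, bytes_per_channel=4):
    # e.g. beauty + albedo + normal + denoiser scratch, 32-bit float RGBA
    return width * height * buffers * channels * bytes_per_channel / (1024 ** 2)

for w, h in [(1200, 960), (1600, 1280), (3840, 2160)]:
    print(f"{w}x{h}: ~{buffer_mib(w, h):.0f} MiB of per-pixel buffers")
# 1200x960:  ~70 MiB
# 1600x1280: ~125 MiB
# 3840x2160: ~506 MiB
```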
And thanks for the quick reply! My aim is not to annoy with endless questions - just provide possible solutions as potential explanations come (back) to me.
On a side note, that's why Cycles is tile-based: the required VRAM is for a single tile and is independent of the final resolution.
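As a toy illustration of that point (not Cycles' actual code), a tile-based renderer only ever holds one tile's worth of pixel buffers on the device at a time, so peak buffer memory tracks the tile size rather than the final image size:

```python
# Toy illustration of tile-based rendering: only one tile's buffer needs
# to live in VRAM at a time, so peak device buffer memory is bounded by
# TILE*TILE, not WIDTH*HEIGHT. Not Cycles' actual implementation.
WIDTH, HEIGHT, TILE = 3840, 2160, 256

def render_tile(x0, y0, w, h):
    # Stand-in for the real shading work: produce a w*h block of pixels.
    return [[0.0] * w for _ in range(h)]

# The assembled image stays in ordinary host RAM.
image = [[0.0] * WIDTH for _ in range(HEIGHT)]
for y in range(0, HEIGHT, TILE):
    for x in range(0, WIDTH, TILE):
        w, h = min(TILE, WIDTH - x), min(TILE, HEIGHT - y)
        tile = render_tile(x, y, w, h)      # per-tile buffer, bounded size
        for dy in range(h):                 # copy the tile into the image
            image[y + dy][x:x + w] = tile[dy]
```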
I've suspected that what you are saying about large resolution images is true for a while but I also suspected that the distance from the camera was a factor in VRAM allocation and it seems I was wrong about that.
I am also experiencing the fallback to CPU rendering from frame 2 onwards of an animation render series. Whilst it is possible to avoid this by massively simplifying scenes to fall within the graphics card's VRAM, this is not an acceptable solution. The animation series that I could render on the GPU in 4.11, I can no longer render on the GPU in 4.12. Hence it is clearly a bug in 4.12 that urgently requires addressing. Also, the fact that the first frame renders fine on the GPU and only falls to CPU on the second frame clearly identifies a flaw in the 4.12 code. Note that it doesn't matter on which frame of the animation I start: the first renders on GPU and all others fall to CPU.
If this error in 4.12 cannot be rectified in a timely manner, I would hope that the option to install Daz 4.11 in parallel is made available until the problem is fixed. I have submitted a tech support ticket and hope that the Daz technical staff will address the matter promptly.