Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
I tested. That is exactly how it behaves for me, too.
Thanks for testing and confirming. I wonder if that is how it is supposed to work.
I thought about this some more and have rationalized it like this:
Anything parented to the top node is treated as part of that node. When you create an instance of that top node, it copies everything including parented items. The instance is an exact copy of that whole structure. Individual instances can be moved and rotated, but the instance is the whole structure as a single unit. If the structure of the original unit is changed, like by moving a parented item, the instances must change to become exact copies of the original again. That is why all the instance chairs move when the original chair is moved. With this rationalization, I can make sense of what is going on, and it seems correct to me now.
It's the normal behaviour of instances. The main advantage of them is saving VRAM by reusing the same texture set on each instance. If you want to move each object of a specific parent-child asset group, you have to use the "Duplicate" command. It will copy-paste the selected objects as new props, and each part can be moved independently of the source.
You can't move the chair of the Instance because you instanced the chair and desk together making it one instance. To get them to move independently as instances they would have to be instanced as separate objects.
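For what it's worth, the "an instance references the whole parented structure" mental model described above can be sketched in a few lines of code. This is a toy illustration only (the class names and the Duplicate-as-deep-copy analogy are my own, not DS internals): an instance just points at its source node, so moving a child of the original moves it in every instance, while a duplicate is an independent deep copy:

```python
import copy

class Node:
    """A scene node with a local offset and parented children."""
    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name, self.offset, self.children = name, list(offset), []
    def parent(self, child):
        self.children.append(child)

class Instance:
    """References its source node; shares the source's whole structure."""
    def __init__(self, source, position):
        self.source, self.position = source, position
    def world_positions(self):
        # Walk the SOURCE hierarchy, offset by this instance's position.
        out = {}
        def walk(node, base):
            pos = tuple(b + o for b, o in zip(base, node.offset))
            out[node.name] = pos
            for c in node.children:
                walk(c, pos)
        walk(self.source, self.position)
        return out

desk = Node("desk")
chair = Node("chair", offset=(0.0, 0.0, 0.6))
desk.parent(chair)                           # chair is part of the desk's structure

inst = Instance(desk, position=(5.0, 0.0, 0.0))
before = inst.world_positions()["chair"]     # (5.0, 0.0, 0.6)

chair.offset[2] = 1.2                        # move the ORIGINAL chair
after = inst.world_positions()["chair"]      # (5.0, 0.0, 1.2): the instance follows

dup = copy.deepcopy(desk)                    # "Duplicate": a fully independent copy
chair.offset[2] = 0.6                        # moving the original no longer affects dup
```

Because the instance stores no geometry of its own, every instance always mirrors the source structure exactly, which is exactly the behavior observed with the chairs.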
So will my 1080ti be useless when it comes to Daz going forward or can we expect some kind of fix?
It certainly won't become useless.
I'm having trouble rendering 2 fully clothed G8 + environment at the moment, something my old GTX 980 had no issues with. It either gets stuck in an endless loop when trying to Iray preview the scene, or it crashes to desktop. If that's the limit, then I'd say it's pretty much useless. Hoping it gets fixed though, either with Nvidia drivers or a fix by Daz.
I did wonder about that but didn't get around to trying it, thanks. When I get back to that scene I will unparent the chair and instance them separately.
I have the same card and rendering 3 clothed figures and environment. That said, I do try and keep e.g. normal maps down.
Recently, the window doesn't appear when the Public Beta is started. The display has crashed and can't be operated at all.
I tried a clean install, but it still didn't work.
Kill the Daz Studio task in Task Manager and start DS again.
That is the correct behavior. (ETA: At least, the correct behavior for previous versions of DS.)
There is an option for the instances to use the parent and children or just the parent. If you want to be able to move the chairs independently, select all of the instances in the scene, in Parameters click on All in the left column and then find the Instances option in the right column. Set that to use just the parent. (Render computer is off, so this is from memory.) Then select the original chair and create an equal number of instances. You can parent an instance of a chair to an instance of a desk. Then moving the desk will also move the chair. After all the chairs are parented to their respective desks, you could use the All feature in Parameters to set the X, Y, and Z translates to 0, which should set the chair to the middle of the desk. Then adjust each chair separately from there. Again, doing this from memory, and I can't test it at the moment.
I read that twice and I think I've got it now - thanks. :)
Excuse me, but if that method had solved it, I wouldn't be asking this question here in the first place.
How many times do you think I tried it?!
Here is my evidence:
https://www.youtube.com/watch?v=hoIGPQuntvM
I downloaded it for my RTX 2060 and don't see any recognisable difference from the 483? I was using. The last driver just went straight to CPU but this one only does it when I have a really big scene i.e over 5GB as there is only 6GB on the card :)
Memory consumption due to BVH Acceleration Structure construction/storage is entirely a matter of scene geometry, not shading complexity (the seeming correlation you mention between a scene's light/dark balance and overall rendering time is an example of shading complexity at work). To know why, you need a practical understanding of just what exactly a BVH Acceleration Structure is (beyond an obscure computer science term).
Imo one good way to understand what a BVH Acceleration Structure really is/does is to look at something like a church hymnal. Hymnals typically feature a large collection of finite items (hymns/songs) which practicality dictates get arbitrarily numbered and printed in a specific order (usually to economize on paper space) despite the fact that real-world use of those hymns/songs almost exclusively dictates unique sequences of retrieval (both in terms of selection and sequential ordering.) To help remedy this inherent inefficiency, individual events planned with the hymnal's use in mind usually employ the creation of a single-use multi-level table of contents (structured temporally around event order) wherein only numbers for the specific hymns/songs relevant to each sub portion of the event are included for maximum speed of retrieval - commonly known as a service program.
In constructing 3D scenes for raytraced rendering, you typically have a large collection of finite items (scene content geometry) arbitrarily addressed and stored in a specific order (usually to economize on storage space), despite the fact that real-world use of those object models almost exclusively dictates unique sequences of retrieval (both in terms of selection and spatial placement.) To help remedy this inherent inefficiency, individual render jobs usually employ the creation of a single-use multi-level table of contents (structured spatially around scene layout) wherein only the portions of scene content geometry relevant to each sub-portion of the scene are included for maximum speed of retrieval - uncommonly known as a BVH Acceleration Structure.
And the reason why significant memory consumption enters the picture is because the initial step of planning out that BVH Acceleration Structure necessitates at least one brief moment of simultaneously looking at each and every individual component (including memory intensive things like multi-instanced items considered multiple times, HD geometry enhanced models considered at their true maximum level of complexity, etc.) marked for inclusion and comparing/sorting them all against each other without any sort of meaningful abbreviation/compression active. Once this is done the BVH Acceleration Structure can usually be losslessly compressed (hence why Iray memory consumption usually goes down a little bit after initially peaking during the init stage of the rendering process.)
tl;dr: How much geometry you have going on in your scene is pretty much the determining factor for how much additional memory consumption fully accelerated raytracing can cost you. So I’m sorry to say that the only real answer to your question is use less stuff.
Believe it or not, this is almost totally unrelated. The reason why moodily lit scenes (my wording) often take so much longer to reach an acceptable level of convergence in Iray is because Iray employs a mechanism called Light Importance Sampling as a key component of how it simulates light. At the cost of grossly oversimplifying things (Iray's LIS implementation is covered extensively on pages 7-12), Iray is trained to prioritize casting rays to/from light sources and the specific areas they affect, since most object visibility in a scene comes about from near interactions with light sources. And since darker areas of a scene are obviously less directly influenced by those same sources of light, it takes the renderer longer to accumulate enough pixel samples with meaningful data in them to fill in those dark patches, since the renderer has no easy hints on where to shoot its rays.
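To illustrate the underlying idea, here is a toy 1D sketch of importance sampling in general (emphatically not Iray's actual LIS implementation; the function, constants, and sampling density are all my own made-up example). The "scene" is the integral of a narrow bright peak, standing in for a small light source. Blind uniform sampling wastes most samples on dark regions; biasing samples toward the light, and dividing by the sampling density to stay unbiased, gives much less noise for the same sample count:

```python
import math
import random
import statistics

def f(x):
    # A narrow bright peak near x = 0.2: our stand-in for a small light.
    return math.exp(-100.0 * (x - 0.2) ** 2)

def uniform_estimate(rng, n):
    # "Shoot rays blindly": sample x uniformly on [0, 1].
    return sum(f(rng.random()) for _ in range(n)) / n

def importance_estimate(rng, n):
    # Spend half the samples near the light's neighborhood [0.1, 0.3],
    # and divide each sample by the mixture pdf to stay unbiased.
    total = 0.0
    for _ in range(n):
        if rng.random() < 0.5:
            x = rng.random()                  # broad exploration of [0, 1]
        else:
            x = 0.1 + 0.2 * rng.random()      # concentrated near the light
        pdf = 0.5 * 1.0 + (0.5 * 5.0 if 0.1 <= x <= 0.3 else 0.0)
        total += f(x) / pdf
    return total / n

rng = random.Random(0)
uni = [uniform_estimate(rng, 500) for _ in range(200)]
imp = [importance_estimate(rng, 500) for _ in range(200)]
# Both estimators converge to the same value (~0.177), but the
# importance-sampled one has noticeably lower spread (less "noise").
print(statistics.stdev(uni), statistics.stdev(imp))
```

The same logic in reverse explains the slow dark patches: where no sampling density points the renderer at anything bright, it has to fall back on the "blind" half of the strategy, which converges far more slowly.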
Interesting. I have been assuming that the only difference between same-version Game Ready and Studio drivers from NVIDIA was the release channel and schedule. But this is evidently not the case. The release notes for each are very slightly different. In fact, the Game Ready driver release notes do not contain the line about OptiX. All other changed items are identical. I don't know if this implies the drivers are actually different, but the release notes certainly are.
Also, I had no idea what a "GAS handler" is, so I looked it up here.
A "GAS" is a Geometry Accelerated Structure, generated and used by OptiX internally.
So, essentially, the note says "OptiX generated an invalid GAS for simple scenes, which caused OptiX to do something which the CUDA subsystem choked on. OptiX now generates valid GASes for all scenes."
yeah my solution was cut my scenes down drastically
gone are the days I can render more than one HD Genesis 3 or 8 character in the latest highpoly clothing
unless I use Octane
the free tier of that came along the right time
Based on some snooping around in the uncompressed driver installation files, I'm pretty sure this is just a typo. With the exception of some human-readable files containing lists of things like supported devices, both packages appear to contain the exact same versions of all shared files. There are an additional dozen or so files in the Game Ready release (totalling around 3MB) not present in the Studio one (kinda surprised it isn't the other way 'round.) But given this is a bug fix, it makes no sense for the version whose files are a smaller, identical subset of the larger one's to contain a bugfix that the larger one lacks.
Based on some very quick testing with my Titan RTX (granted, not a card prone to running out of vram in the first place...) vram usage does appear to have gone down significantly. I was consistently getting around 4GB total consumption for my benchmarking scene with the previous driver in the latest DS beta. That has now dropped to around 3.5GB (a decrease of roughly 12%) after installing 441.66.
Am also still seeing CPU fallback as a result of tweaking an existing scene's content after having already initiated a render of it at least once - although now it only seems to be happening when Iray liveview is active. Before this, I was getting fallback with consecutive individually triggered renders too. That seems to no longer be the case.
Had to read that a few times to let it sink in. Understand now better in a general sense what the BVH structure is designed to do for Raycasting. Thank you.
Did not really see anything definitive re: "individual-scene-object-placement dependent" other than that the more objects there are, the more spatial relationships need to be defined. And not so much anything about subroutines for pockets of scene geometry density or anything like that.
As for your tl;dr ....LOL good one. Don't say that too loud in proximity to Daz sales though. And my runtime is now loaded with stuff, so that isn't exactly what I wanted to hear. But I knew that limitation already from a more basic observation POV during so many different scene test-renders, so can just carry on best as one can.
Hope Nvidia recognizes this memory overhead, and starts doing something about memory sizes that come with premium cards. A RTX2080Ti is 11GB and an RTX Titan is 24GB. Have to pay double the price of what I paid for 2080Ti to get a Titan. Seems a bit steep that next step up when all I want is the memory size bump.
Right now, feels like my RTX 2080Ti memory in IRAY raycasting environment can handle about the same amount as my GTX1070 in an IRAY "PRE-raycasting" software environment. So that extra 3GB I was excited about in my GPU card upgrade seems to be lost to Raycasting memory overhead costs. Again, maybe a bit hasty to conclude that. But that one test case of 13xG8Fs stands out.
And I don't think 13 G8Fs with most at SubD of 1, 13x hair, and 13 outfits should be a lot of geometry either. But maybe not. Now I have to start looking at hair, buttons, jewellery and even each clothing item for densities to make sure. First I buy stuff, now I have to sort it, and some has to be adapted. The joys of 3D.
Won't spend any more time trying to understand if there is a way to optimize for RTX Iray. Essentially now (1) waiting for Nvidia's next gen of cards to get 24GB minimum, as it doesn't make sense to buy a Titan now, and hopefully (2) Nvidia gets this raycasting implementation of Iray more optimized, and (3) DS does what it can to help us users get the most out of our hardware.
Ah thanks. Knew my observation was reliably repeatable & true. But I didn't know what mechanic was causing that.
Nice to have this IRAY component breakdown. Thanks again for sharing your technical insight. Was appreciated!
Imagine for a second that you wanted to render an incredibly simplistic scene with only 4 separate triangles in it where each triangle sits in its own front corner of the virtual scene space. This is what the BVH Acceleration Structure for that scene configuration would most likely look like:
That's a hierarchical structure with a total of 10 items in it that need to be stored in a file somewhere for constant analysis by your path tracers.
Now imagine rendering the same scene except this time with one triangle in the front top right corner and the rest in the front bottom left. This is how the BVH Acceleration Structure would now read:
That's a hierarchical structure with a total of only 6 items in it (a 40% size reduction from before) despite the fact that the scene's content remains exactly the same. Hence why I say individual scene object placement also plays a role in how much memory is needed to accommodate the construction and storage of these acceleration structures.
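The placement effect described above can be demonstrated with a toy top-down BVH builder. To be clear, this is my own sketch under stated assumptions (a simple recursive midpoint split over x-coordinates with an arbitrary leaf threshold, nothing like Iray's actual builder, and the node counts differ from the 10/6 figures above): the same four primitives produce a bigger tree when spread out than when three of them cluster tightly enough to share a single leaf box:

```python
def build(points, min_extent=1.0):
    """Recursively split a point set at the midpoint of its x-extent
    until a node's spread along x falls below min_extent (toy rule)."""
    xs = [p[0] for p in points]
    if max(xs) - min(xs) < min_extent or len(points) == 1:
        return {"leaf": points}                 # tight cluster: one leaf box
    mid = (max(xs) + min(xs)) / 2
    left = [p for p in points if p[0] <= mid]
    right = [p for p in points if p[0] > mid]
    if not left or not right:
        return {"leaf": points}
    return {"children": [build(left, min_extent), build(right, min_extent)]}

def count(node):
    # Total nodes in the hierarchy = this node plus all descendants.
    return 1 + sum(count(c) for c in node.get("children", []))

# Same four primitives, two placements:
spread    = [(0.0, 0), (3.0, 0), (6.0, 0), (9.0, 0)]   # one per corner
clustered = [(0.0, 0), (0.1, 0), (0.2, 0), (9.0, 0)]   # three bunched together

print(count(build(spread)), count(build(clustered)))    # → 7 3
```

Identical content, different placement, different tree size - which is the whole point: the clustered layout lets three primitives share one bounding volume, so fewer nodes need to be built and stored.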
So what I'm seeing is that, like myself, a lot of people have installed this update and gone from being able to render scenes comfortably within our GPUs' memory capacity on our not-inexpensive but non-RTX GPUs, to now being almost entirely forced onto CPU fallback for every render, because reasons...
Bravo chaps.
I have an RTX 2060 6GB card in this laptop and, since loading the newest driver from nVidia version 441.66, the renders are now using anything up to 5.9GB where before it was only using 4.9GB and then would drop to CPU. I have noticed too that it clears the memory before starting the render where before sometimes it didn't. I am using 4.12.0.86 and haven't tried the Beta since changing drivers.
This version of the beta doesn't have OptiX Prime. Why?
It actually does. It ONLY uses OptiX Prime unless you have an RTX card, in which case it uses the RTX card's built-in hardware instead, since that is orders of magnitude faster than OptiX Prime can ever be.