Is DAZ ever gonna be better with handling dense geometry?
mwasielewski1990
Posts: 343
So... as the topic says. Just thought about it today when comparing dense geometry in DAZ and Blender. Load times for .obj files with very dense geometry (even a couple hundred MB) are similar, but Blender handles them flawlessly, while DAZ sometimes struggles with 4x subdivision on multiple characters or very dense hair. Is this ever going to be resolved? What contributes to this problem?
I have an i7 8700. Will buying a better CPU resolve the problem? Does DAZ even use more than 1-2 CPU cores for handling geometry (in the OpenGL viewport)?
Let me know what you think :)
Post edited by mwasielewski1990 on
Comments
How exactly are you comparing the two? It's kind of hard to state that one is better than the other without setting up a one-to-one test.
A better video card will improve viewport handling. Regardless of what SubD level you need for rendering, always leave the SubD level for the figure or item at 1 and change the Render SubD level to what you want. That way the figure will be lower res while you work on it, letting you move around the viewport much more easily, but it will still render at the SubD level you need without you having to manually switch things back and forth. The only drawback is that Daz Studio will need to subdivide before rendering, so you will see a slight delay while it does so, before the render begins.
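To see why keeping the viewport SubD at 1 matters so much: Catmull-Clark subdivision roughly quadruples the face count with each level. A quick sketch (the base quad count here is an assumption for illustration, not an official Genesis figure):

```python
# Rough face counts per subdivision level. Each Catmull-Clark level
# multiplies the quad count by ~4, so SubD 4 is ~256x the base mesh.
BASE_QUADS = 16_500  # assumed base-mesh quad count, not an official number

counts = {level: BASE_QUADS * 4**level for level in range(5)}
for level, quads in counts.items():
    print(f"SubD {level}: ~{quads:,} quads")
```

With several characters at SubD 4, the viewport is pushing tens of millions of polygons, which is why deferring subdivision to render time helps so much.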
I used some Kitbash3D assets recently, which go for about 200MB to 600MB. Daz handled them better than I thought it would. While Daz isn't nearly as well-optimized as Blender, I do think the excessive lag in Daz is due to things like smoothing modifiers and bone chains rather than just polygon count.
I notice the same thing. For comparison, Manttymanx, I load the item with the geometry into the program, then attempt to rotate the camera. At higher geometry levels, Blender responds smoothly; Studio is jerky. Yes, there are ways to reduce the geometry the viewport has to draw, but if you ask both programs to draw the same amount of geometry, Blender just does it better.
A plain object and a rigged mesh (which is what a subdivided figure and hair both are) are not all that comparable.
As someone who has used Genesis figures in Blender, you basically *have* to turn off subd completely to be able to move them at all. I can move subdivided figures unquestionably better in DS. It makes sense if you think about it, as that is what DS is designed to do.
On the other hand, Blender does handle large amounts of geometry much better in other cases, like... instances... for instance :) I can have hundreds of thousands of instances and zip around, generate a new seed, and move all of their locations nigh instantly in Blender; DS is much less fun there. Blender is also speedier on super dense unrigged meshes: if I find some million-poly mesh (like photo scans), I'm bringing it into Blender to decimate it before I try it in DS.
That's kind of my point.... Blender is open source. Can't the DAZ devs just copy/paste Blender's viewport rendering solutions to DAZ so we can have both?
I can't understand what the problem is here. The graphics APIs used? Some other limitation?
I'm starting to create a lot of nature outdoor scenes now and honestly DAZ is starting to be completely useless. With something like 10,000 instances of low-poly grass the viewport is un-navigable, after trying to navigate it you have to wait 15 seconds for DAZ to respond. Load the same stuff in Blender and you can roam around freely like a bird.
No, Blender is not public domain code that can be used in any way by anyone - open source merely means that the code is visible and can be modified subject to restrictions, some of which would preclude its use in DS.
Okay, fair point.
However... I still can't understand what the problem is here. OpenGL games from 10+ years ago could handle an environment with dense foliage made up of thousands of instances of grass, trees, rocks, etc. How is that different from apps like DAZ? I'm just curious what the technological challenge is here. As an RTX 3090 user, I just find it funny that a bunch of grass instances made of 50 tris can choke up my viewport.
Game engines usually lower the detail the further away items are; that's not the case with DS.
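The level-of-detail trick described above is simple in principle: per instance, the engine swaps in a cheaper mesh based on camera distance. A minimal sketch (the thresholds and mesh names are made up for illustration):

```python
# Toy distance-based LOD selection, as a game engine might apply it
# per grass instance each frame. Thresholds/mesh names are invented.
def pick_lod(distance: float) -> str:
    """Return which level-of-detail mesh to draw for an instance."""
    if distance < 10.0:
        return "grass_high"       # full mesh near the camera
    elif distance < 40.0:
        return "grass_low"        # simplified mesh at mid range
    else:
        return "grass_billboard"  # flat card (or cull entirely) far away

print(pick_lod(5.0))    # grass_high
print(pick_lod(25.0))   # grass_low
print(pick_lod(100.0))  # grass_billboard
```

Without something like this, every one of the 10,000 grass instances is drawn at full detail no matter how far away it is.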
I was talking about grass props made of a couple of polygons. You can't really go much lower than that.
What do you use for "Render Settings / Optimisation / Instance Optimization", Speed, Memory or Auto?
I'm no expert, but programs like ZBrush seem to be usable with millions of polys in the viewport.
If I recall correctly, I read a thread once where the user was sculpting somewhere in the 20-million-poly range.
This is one of the arguments in "what makes ZBrush better than Blender":
according to ZBrush users, Blender can't achieve the same viewport/workspace poly counts that ZBrush can.
Unless the Daz developers somehow achieved perfect optimization the first time around and there is no longer any possible way to improve, there must be something that can be done to help with viewport performance.
Two or three characters sitting in what would be considered a normal room with a sofa, TV, and other common furnishings will drag the viewport to a crawl for me, and I've got 128GB of system RAM and all the drives in my system are NVMe.
I find DAZ Studio handles very dense geometry very well, better than Blender.
It's the textures that are an issue.
I can import massive fractals that ZBrush struggles with, un-UV-mapped and untextured, though of course ZBrush imports those same fractals with vertex colours.
If it's the textures that need to be optimized then that is what should be done.
I've got no personal preference to where the optimizations are done, just want to have a better experience while working around large complex scenes with 2 or more fully clothed characters with hair.
The opposite is true in my experience...
I load an insanely high-poly obj and instance it several times, and the Daz viewport is as fast as ever. But if I load the same thing in Blender (not even instanced, just by itself), it's incredibly slow.
So popular consensus is that it's the textures in "texture" (openGL) viewport mode that is causing the viewport slowdown and not the geometry?
I'll admit, I never use filament, but is it any faster?
I like to be able to see the textures in the scene when I'm building.
Filament is the fastest viewport draw style imo. but that might be GPU dependent
Neither option yields an improvement. BTW, I believe this option affects Iray only?
Same here... Previously I had an i7 8700 + RTX 3090, 64GB ram, all disks SSD. Now I'm working on an Alienware x17 R1 (i7 11800H, RTX3080, 16GB version, 64GB RAM + the RTX3090 in an eGpu enclosure), two PCIE NVME disks. Same story. In my case, 5 characters (SUBD=1 in the viewport) + some architecture + ~10k instances of grass and foliage and the viewport update rate is like once every 15 seconds if I try to move the camera, or any object.
Nope, not really... With a simpler scene I can load 8-16K textures all over the place, and this has nothing to do with OpenGL viewport performance. For me it's the geometry that kills it.
Just checked it, and no. Same story.
This is interesting. May I ask what CPU brand you have?
Just a personal opinion but most of the time I prefer the OpenGL viewport to Filament although there are times I switch to Filament. One of those occasions is when I need to see whether the character's feet are actually grounded on the surface of the floor: Filament is much better at showing those different surfaces whereas they get lost in the dark shadows of OpenGL. Otherwise, I don't like the look of Filament and long for the day when the IRay preview is quick enough to use that instead.
Daz's issue is simply that it uses a "human-readable" format for data and, as such, also has to "decode" it with a form of "textual processing". Additionally, because it is using such a bloated format, they also use "file compression" to combat the textually bloated files.
As opposed to many 3D programs that use an actual "custom format" like a "transportable database", which does not require compression, because it's already reduced down to raw-data, and already formatted, and requires no real processing to read, other than injecting it into defined arrays.
EG...
Daz has stuff like this... Just a simple TXT file (wrapped up in a zip/rar/arj/cz/gzip etc...) {First, unzip it to a temp location or RAM = wasted time}
name = "some/long/file/name/and/path/to/some/file.dx"
object = 1.000000032, 423.45456666, 234.34234343... (insert millions of long numbers here. 3x for every individual triangle in the object, even though you only need 1/3 of them, unless the triangles are THE entire object itself with no shared sides)
attributes = {some wasted empty values and some used} (Insert hundreds of object-component traits that could be reduced to NULL, or they are repeating and redundant)
Now it has to read those values, line by line, and turn all those "text numbers" into "floating-point-values" {actual numbers, not pictures of numbers drawn as "words/text".}
#############
While other files use something more like this... {No unzipping}
[Data-chunk ID - not a file-path-name] {1-4 bytes}
[Bbi7d7] {This is the same 3 number set above, as 2 byte floating-point values as actual numbers, as bit-data not text-strings, also, often these THREE points are shared between triangles, not saved EACH for every triangle}
[sfh7d56f] {This is ALL component attributes, actually relevant and used, only, no redundancy or NULL values used}
Without needing to be "processed", these fit 1:1 into a data-form or "cast" or "user-type"... [0][##,##,##] = point1, point2, point3 of triangle 0
Usually, the format tends to be something that the "rendering engine" can also digest easily, if it isn't EXACTLY that same format being saved, for even faster loading.
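The difference the post above describes can be sketched in a few lines: parsing vertex data stored as text versus reading the same values as raw binary. This is only an illustration of the two loading styles, not Daz's actual file format:

```python
# Toy comparison: the same vertex data loaded as text (parse every
# number with float()) vs. as packed binary (one struct.unpack call).
# Illustrative only -- not Daz's real format.
import struct
import time

verts = [(float(i), float(i) * 0.5, float(i) * 0.25) for i in range(100_000)]

# "Human-readable" style: numbers stored as text strings.
text_data = "\n".join(f"{x} {y} {z}" for x, y, z in verts)
t0 = time.perf_counter()
parsed = [tuple(map(float, line.split())) for line in text_data.splitlines()]
text_time = time.perf_counter() - t0

# "Binary" style: the same floats packed as raw 8-byte doubles.
bin_data = struct.pack(f"{len(verts) * 3}d", *(c for v in verts for c in v))
t0 = time.perf_counter()
flat = struct.unpack(f"{len(verts) * 3}d", bin_data)
bin_time = time.perf_counter() - t0

print(f"text parse: {text_time:.4f}s, binary read: {bin_time:.4f}s")
```

The binary read is essentially a single memory copy, while the text path has to split strings and convert every "picture of a number" into an actual float.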
It is the same issue that programming languages have. Python, is an "interpreted" language. {Something reads the text and then that real program does things, based on that text.} As opposed to "C++", which literally compiles directly into something like "machine" or "assembly", and what you code is just used to guide the compiler into creating this "raw instruction set", which requires no external programs to run it. That is why C++ is 300x to 10,000x faster than any "interpreted" programming language. (The only things making those languages "seem faster", is the fact that they ALL use "C++" or other languages "pre-compiled API" files, to do the actual work that the interpreter just can't do fast enough.)
This issue is also extended to the fact that Daz3D seems to "load all transforms" onto every model, even if you are NOT using them, because "you may use them". The more transforms you add (morphs), the slower everything loads. Most people who are not buying hundreds of "styled people", have no problem loading things fast. Daz3D only has to load the base-model and a FEW custom people and morphs. However, if you have 300+ people, they are ALL being loaded onto the model, with a base-pose of ZERO, but, like soup, if it's an ingredient you MAY taste... "It's in there".
That's why Daz loads real fast when there is nothing in the actual library, besides default models and morphs. That's also why things break if ONE person makes a model with a morph that is saved as NON-ZERO... it applies itself to EVERY morph, because it's pre-loaded onto the original model, messing up the original model, but not actually messing it up. (It just seems that way, because it's always present, unless you find it and zero it out yourself and save it that way, fixing it for them.)
Also, even while they have a "cachefile" for the morphs, the cachefile is still in human readable format = Very little help (if any) with the time it takes to load figures.
Interpreted languages have to be turned into instructions step by step; a JSON file is read and turned into data once, by a dedicated parser, on loading, and is then the same as data serialised in a pure binary format. The comparison is not valid.
Daz needs to read all the properties in so that it can link them correctly and display the sliders. If it did not, it would have no way to know whether setting one property should trigger changes in others. Building these links is the principal cause of slow loading, but it is the price to be paid for a flexible, extensible system. Uninstalling, or disabling (for example through Turbo Loader), unneeded properties (characters and morph sets, principally) is the way to manage this. Daz is aware of the issue, but it is also aware of the need to meet the desire for flexibility in adding morphs and characters; some degree of tradeoff is almost certainly going to be necessary.
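The linking step described above can be pictured as building a small dependency graph: every loaded property may drive others, so all of them have to be read and wired up before the figure is usable. A minimal sketch with invented property names:

```python
# Toy sketch of property linking: map each property to the properties
# it should trigger when changed. All names here are invented examples,
# not real Daz morph identifiers.
from collections import defaultdict

def build_links(properties: dict) -> dict:
    """Return a mapping: property -> list of properties it drives."""
    links = defaultdict(list)
    for prop, drives in properties.items():
        for target in drives:
            links[prop].append(target)
    return dict(links)

installed = {
    "BodyHeight": ["NeckLength", "LegLength"],  # one slider drives two others
    "CharacterA": ["BodyHeight"],               # a preset drives a morph
    "NeckLength": [],
    "LegLength": [],
}
links = build_links(installed)
print(links["CharacterA"])  # ['BodyHeight']
```

Since every installed character or morph set adds entries to this graph whether or not it is used in the scene, load time grows with the size of the whole library, which matches what the post describes.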
Interesting discussion.
I just tried the following:
4 x G9 Female at sub-d 1 with clothing and hair.
100 instances of ultrascenery grass patch.
Disabled turboboost on the CPU (this way it is limited to its base frequency which is 2.6 GHz in my case)
Notebook with 10th gen i7 and RTX 2070 8GB
Result: (see edit below):
Viewport feels like about 5 frames per second. G9s and grass can be moved with a slight delay (maybe 3 seconds).
When I allow turboboost on the CPU the delay is about 1 second, maybe less. Viewport feels like 10 frames per second.
What would kill the performance, of course, is setting the eyes of a G9 to "point at". But that limits the speed even with one figure and no grass, because it is constantly updating the morph evaluation.
EDIT:
The above test is wrong:
I just noticed that the Victoria 9 HD character was corrupted somehow. Every manipulation on her (bend arm) was stuttering. I loaded the figure as a scene subset, maybe that broke it somehow.
I deleted her and loaded a new V9. Now there is no delay anymore in scene manipulation. Moving one of the 4 figures or the grass and navigating the viewport is reasonably fast. Frames per second feel like 25.
So maybe it is not the amount of geometry that plagues the OP, but a broken figure or the amount of instances. He wrote 10K instances of grass. That means 10K objects to handle. In my test I only have 100 instances.
EDIT 2:
Yes, it is the amount of instances. I made another 2000 and now navigating the viewport and moving objects starts to slow down.
Also, another idea/question:
I read somewhere that Nvidia's Quadro cards have some sort of driver optimization for handling instancing better than GeForce series. Could anyone with an actual Quadro card confirm this is true? Does DAZ behave differently on a Quadro card as compared to a GeForce card with the same chip symbol?
For example: GA102-300-A1 is the chip found both in the consumer-oriented RTX3090 and the "professional" Quadro RTX A6000. I would be interested to see if the viewport filled with a lot of instances is smoother on the latter.