Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2024 Daz Productions Inc. All Rights Reserved.
Comments
From what I can tell, Houdini doesn't get a free tier. Just Blender, DS, Unity, and Unreal. For Houdini you need to rent Octane X, whatever that is. Now I remember why I don't use Octane for anything: "Use of the software is available only while online, connected via the internet to the OctaneRender licensing server". I don't mess with stuff that demands internet access lol.
I see.
Going to solicit more opinions so I know what to tell Alex.
Thanks for your input.
Just watched this Blender 2.92 overview and something called Geometry Nodes is mentioned as a new feature. Is this similar to what you guys were talking about with Houdini "procedural" stuff?
Yes. The whole "Everything Nodes" revolution currently underway in the Blender community could also be called "Everything like Houdini" :)
But keep in mind that everything that runs on a digital computer is procedural; things just differ in degree. Assigning a texture image to a material has a very low degree of proceduralism because the algorithm is very simple: map the UV coordinates of each vertex to a pixel in the texture map, and the input itself (the texture map) provides all the details. A procedural texture, on the other hand, is just the opposite: the algorithm is much more complex, and the input itself is simple, maybe just one or two parameters that control how the generated texture should look. In that sense, many other things are procedural as well: for example, what is an armature other than a procedural deformation of a mesh?
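That spectrum is easy to show in code. Here is a toy sketch in plain Python (my own illustration, not anything from Blender's actual implementation): the image-texture function is a trivial lookup whose complexity lives entirely in its input, while the checker function generates all of its detail from a single parameter.

```python
# Toy illustration of the image-vs-procedural spectrum (not real shader code).

def image_texture(u, v, image):
    """Simple algorithm, complex input: all the detail lives in `image`."""
    h = len(image)
    w = len(image[0])
    # Map UV in [0, 1) to a pixel and return its value.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y][x]

def checker_texture(u, v, scale=4):
    """More complex algorithm, trivial input: one parameter drives everything."""
    return (int(u * scale) + int(v * scale)) % 2

# A tiny 2x2 "image" provides the detail for the first function;
# the single `scale` parameter alone drives the second.
img = [[0.0, 0.5],
       [0.5, 1.0]]
print(image_texture(0.9, 0.9, img))  # bottom-right pixel: 1.0
print(checker_texture(0.1, 0.1))     # checker cell parity: 0
```

The same contrast scales up: an armature, a particle system, or a geometry-node tree is just a more elaborate "checker function" whose few inputs generate a lot of output.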
Houdini recognizes this and takes it to its logical conclusion.
I only half get what you are saying but that's a start. I use procedural materials without having a clue what the term means. I have some idea that procedural textures are (like) Iray shaders as opposed to textures which need UV maps but, beyond that, I don't understand the technology.
One thing I notice about the Blender Geometry Nodes is that the word "geometry" is used, but from Wolf's comment (above) I got the impression that the procedures create something that has no geometry yet appears to have it when rendered. Again, I am probably just stumbling around in the dark here. I read the Wiki page on Procedural Generation but can't say that it made me much wiser. For example, I can imagine an algorithm producing geometry from random noise, but I am very unclear how the same procedure could produce specific shapes such as an army of soldiers or a forest of leafy trees.
We are in agreement there... the limiting factor is the imagination.
Procedural Tardigrade, WTF
I was ecstatic when I found out that Blender 2.93 finally moved to a version of Python that supports shared memory (the multiprocessing.shared_memory module added in Python 3.8). This means that when DS and Blender are running on the same computer, there will be no need to write out any files anymore; Blender and DS can be made to instantaneously "look" into each other's memory and get the data they need directly. The only thing I suck at more than Win32 programming is Python programming, but I already have a proof of concept where I can get an object from DS into Blender with the push of a button.
I would like to create One Plugin To Rule Them All that acts as a server where other apps that no longer need to even run inside DS, can request whatever scene data they want, and access the information directly and instantaneously. I'd love a file format that is like Alembic, but is editable, so I'm going to make one. I'm finding Blender's internals a huge and challenging undertaking, but I think the results would be a game changer.
That would be badass
o_o
That would be interesting if Studio running under WINE could talk to Blender under Linux.
That's the most interesting case to me, but if I'm not mistaken, I think I remember doing some research, and unfortunately it so happens that the WINE project's implementation of Win32 shared memory makes all the Win32 processes' shared memory shareable amongst themselves, but not with another POSIX process on the same box. I would really love to have misinterpreted it, but I don't think this combination is going to work as magically as the others.
But my original thought, before shared memory, was to make the same information available through a socket. There'd be copying involved, but allowing one to ditch Windows altogether is a rather intriguing thought.
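The socket variant is barely more code. A sketch of the idea (the framing scheme is mine, purely illustrative): length-prefix each message so the receiver knows how many bytes to read, and a socketpair stands in for a real DS-to-Blender connection.

```python
# Sketch of the socket alternative: data gets copied, but it works across
# any OS boundary. A 4-byte length prefix frames each message.
import socket
import struct

def send_frame(sock, payload: bytes):
    # Prefix with a little-endian length so the reader knows how much to expect.
    sock.sendall(struct.pack("<I", len(payload)) + payload)

def recv_frame(sock) -> bytes:
    raw = b""
    while len(raw) < 4:
        raw += sock.recv(4 - len(raw))
    (length,) = struct.unpack("<I", raw)
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return data

# socketpair() keeps the demo in one process; a real server would listen
# on a TCP port that the Linux-side Blender connects to.
a, b = socket.socketpair()
send_frame(a, b"OBJ v 0.0 0.0 0.0")
msg = recv_frame(b)
print(msg)
a.close()
b.close()
```

Swap the socketpair for a listening TCP socket and the WINE process and the native Linux process can exchange the same frames.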
Hello, thanks for your work.
I use Sagan to import the Alembic files into Maya and it works very well.
But I have a problem with the hair; it doesn't recognize the UDIMs and the UVs are overlapped. Can I do anything?
Thanks!
Hi @francescogenovesi78 I'm not sure the forum works correctly; I am watching this topic but I keep missing messages, so sorry for responding late.
I'm not sure about the hair UVs... Sagan exports them as-is with no transformations whatsoever, and so if they are overlapped, my first reflex is to say that the UVs are simply that: overlapped. But on second thought, I am finding that the SDK's C++ API seems to be lagging behind, and is often less reliable than the information in the .duf files. Can you give more information on the troublesome product? Perhaps I have it and can take a look. I think maybe I should be reading the info out of the duf file instead of asking the API for it, just like I have to do for a few other things already.
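For the record, reading the .duf directly is easy to prototype: DSON files are JSON, usually gzip-compressed. A minimal loader sketch (which keys to walk afterwards, e.g. the UV set data, varies by asset, so this only gets you the parsed tree):

```python
# Minimal DSON (.duf/.dsf) loader sketch: the files are JSON, frequently
# gzip-compressed, so sniff the gzip magic bytes and decompress if needed.
import gzip
import json
import os
import tempfile

def load_dson(path):
    """Parse a DSON file, transparently handling gzip compression."""
    with open(path, "rb") as f:
        magic = f.read(2)
    opener = gzip.open if magic == b"\x1f\x8b" else open
    with opener(path, "rt", encoding="utf-8") as f:
        return json.load(f)

# Self-check against a synthetic gzipped document (stand-in for a real .duf).
doc = {"asset_info": {"id": "/demo.duf"}}
path = os.path.join(tempfile.mkdtemp(), "demo.duf")
with gzip.open(path, "wt", encoding="utf-8") as f:
    json.dump(doc, f)
print(load_dson(path)["asset_info"]["id"])
```

From the parsed tree you can pull whatever the SDK is misreporting, since the .duf is the ground truth the API is supposed to reflect.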
Let me know and we'll make it work.
@Mustakettu85
Something like USD is exactly what I do not want. The problems begin with the U :)
The problems we face as DS users with all of these formats is that they try to solve interchange problems in a general sense, and bad things happen when either the source app or destination app violates one of the format's assumptions, because they are agnostic with respect to it. I am not trying to create something that will work for any future and arbitrary permutation of any two source/destination apps, but rather something specifically for just DS/Blender, that knows about all the idiosyncrasies of both. So it'll Just Work (tm).
And as far as I understand it, USD is based on Alembic, which is a read-only cache that can't be edited in Blender. It is very important to be able to tweak geometry so that sims will work or to fix poke throughs. It is also a very powerful format that is really overkill for just getting geometry out of DS. It's fast compared to Wavefront, but is quite slow in an absolute sense. Something with just the good parts of MDD would be ideal: a simple format for a simple problem and that is highly transparent, i.e. very hackable.
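To show how hackable that kind of cache is, here's a sketch of an MDD-style writer/reader. The layout follows MDD as I remember it (big-endian: frame count, point count, one float time per frame, then XYZ floats per point per frame), so treat the details as approximate:

```python
# Sketch of why MDD-style caches are so transparent: the whole format is a
# tiny header plus raw floats. Big-endian layout, per my recollection of MDD.
import os
import struct
import tempfile

def write_mdd(path, times, frames):
    """frames: list of frames, each a list of (x, y, z) tuples."""
    npoints = len(frames[0])
    with open(path, "wb") as f:
        f.write(struct.pack(">ii", len(frames), npoints))
        f.write(struct.pack(f">{len(times)}f", *times))
        for frame in frames:
            f.write(struct.pack(f">{npoints * 3}f",
                                *[c for p in frame for c in p]))

def read_mdd(path):
    with open(path, "rb") as f:
        nframes, npoints = struct.unpack(">ii", f.read(8))
        times = struct.unpack(f">{nframes}f", f.read(4 * nframes))
        frames = []
        for _ in range(nframes):
            flat = struct.unpack(f">{npoints * 3}f", f.read(12 * npoints))
            frames.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return times, frames

# Round-trip a two-frame, one-point "animation".
path = os.path.join(tempfile.mkdtemp(), "cache.mdd")
write_mdd(path, (0.0, 1.0), [[(0.0, 0.0, 0.0)], [(0.0, 1.0, 0.0)]])
rt_times, rt_frames = read_mdd(path)
print(rt_frames[1][0])  # (0.0, 1.0, 0.0)
```

Forty lines and it's round-trippable in any language with a struct library; that's the "good parts of MDD" transparency.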
P.S. - I checked out your 3DLite stuff, and wow! I had no idea it could look so good. Congrats.
But... wasn't imported USD geometry supposed to be ultimately editable and isn't Blender development going in the direction of making USD the "heart" of its geometry pipeline? That was the impression I got from reading Blender-related discussions a couple of years ago.
And technically USD isn't really "based" on Alembic, more like a conceptual extension.
Thank you. It's spelled 3Delight, though =P And you've surely seen lots of movies using 3Delight for VFX, such as District 9 and Chappie (the latter using basically the same 3Delight version we have in DS to this day, hehe). It's just that, like any programmable renderer, it needs TDs who know what they're doing.

Incidentally, the current 3Delight version (the one leaving "Renderman compatibility" far far behind, hence not compatible with DS either) is making waves among Houdini users. If you search YouTube for "3Delight Houdini", there are quite a few interesting things.
I had not read that the cache will be editable. I know that the implementation is kind of a hack using another modifier, but it's been that way for a long time now...
And by "based on" I meant that USD's and Alembic's API look suspiciously similar and I assumed that they probably just leveraged Alembic. I, of course, could be totally wrong.
Of course they (Pixar) wanted to leverage existing standards (Alembic) as much as possible, but internally the formats are different... usdcat can convert geometry back and forth to Alembic. I'm not sure though if Alembic did get as far as supporting lights, materials etc as was initially planned, or if that development stopped when USD (which explicitly adds these) became a thing.
Used Sagan to export an OBJ series to use in Carrara.
Thanks for sharing that! I was thinking that no one used Sagan but me...
I haven't used it much because I don't use Blender much, but now that I know it is useful for Carrara (my favourite program) thanks to the object sequences, I will use it sometimes,
mainly for things using dForce and other Studio-only functions.
I must try hair too, but I imagine that with line tessellation and PBR visibility those objects will be rather large.
I understand.
Soon(-ish) I'll be finishing a new version that can support certain hair assets directly, as well as options flexible enough to work with most if not all ribbon- and strand-based hair.
To be honest, Diffeo is so good these days that there is decreasing need for Sagan, and I think I'm going to turn my attention to making a bona fide, first-class Blender Modifier that imitates Daz Studio's smoothing modifier, which is the only thing from DS that I think is lacking.
I must get into Blender more too but I am rather fond of Carrara, I wish DAZ would start developing it again since they actually own it.
The only other thing I use a lot, and would love more options for, is Unreal Engine. I do use Blender to do grooms for that, and can use their hair plugin to convert DAZ opacity-leafed hairs to grooms; maybe there could be a way to import DAZ strand-based hairs as grooms using Alembic.
I haven't actually used Diffeo at all and should.
The smoothing modifier can be replaced well enough with shrinkwrap. Overall it works even "better", and it was implemented in Diffeo some time ago.
https://bitbucket.org/Diffeomorphic/import_daz/issues/226/importing-the-daz-smoothing-modifier
Thanks for the tip, @padone. I had looked at the shrinkwrap modifier before I even knew that Diffeo supported it, but I found the need for something more flexible, e.g. allowing a vertex group per collision object, and multiple vertex groups per object. The shrinkwrap modifier suits my needs well 90% of the time, but I always seem to find something in the other 10% that bugs me enough to actually get off my butt and do something about it.
@TheMysteryIsThePoint Well Donald if you can make something even better than shrinkwrap that's great of course. I'm curious if you can make an example where shrinkwrap didn't work while the daz smoothing did. That would be useful to me to understand the limitations you're talking of.
Hi @Padone,
It's not a particular shortcoming of Blender's shrinkwrap modifier vs DS's smoothing modifier, but rather the shrinkwrap modifier by itself.
Using the Outside snap mode, I can get almost exactly the same effect as in DS, but I don't like having to define vertex groups (especially when in edit mode, the armature snaps back to the rest position and you can't immediately see where to make the vertex group), and I'd like to be able to control the fall-off better.
But the biggest shortcoming in my opinion is that it is per-object and doesn't work well with layered clothing of, say, three objects that mutually intersect in a messy way. I want to be able to say there is leather armor on the skin, there is chainmail on top of that, and platemail on top of the chainmail. It seems like Blender applies the modifiers to objects in arbitrary order, once, instead of realizing that later modifiers may have changed the mesh in a way that would change the output of earlier modifiers, and that the earlier modifiers should be re-evaluated until everything converges (or it should detect that it never will).
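The convergence idea in miniature (a toy of my own, nothing to do with Blender's actual dependency graph): treat each layer as a radius that must clear the layer beneath it, and keep sweeping until a pass changes nothing. That final "did anything move?" check is exactly the re-evaluation step a single fixed-order modifier pass skips.

```python
# Toy illustration of iterate-until-convergence for layered collisions.
# Each "layer" is a radius that must clear the one below it by `gap`.

def relax_layers(radii, gap=0.1, max_iters=100):
    radii = list(radii)
    for _ in range(max_iters):
        moved = False
        for i in range(1, len(radii)):
            floor = radii[i - 1] + gap
            if radii[i] < floor:
                radii[i] = floor
                moved = True
        if not moved:
            return radii  # converged: every layer clears the one below
    raise RuntimeError("did not converge")

# skin, leather, chainmail, platemail: the inner layers push the outer ones out,
# and only the final no-op sweep proves the stack is actually consistent.
result = relax_layers([1.0, 0.95, 1.01, 1.0])
print(result)
```

The real mesh problem is of course 3D and may genuinely fail to converge, which is why the iteration cap (and detecting oscillation) matters.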
Also, it would be nice if it could resolve the intersections via rigid transformations to simulate something like a bracelet or a pauldron without distorting the object. It should be possible to calculate the minimum orientation change that satisfies these conditions via something like Levenberg-Marquardt optimization, the same algorithm used in many armature retargeting systems.
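For the rigid-fit subproblem itself (the best rotation plus translation mapping one point set onto another, with no distortion) there is actually a closed-form answer, the Kabsch/Procrustes solution; iterative solvers like Levenberg-Marquardt are only needed for fancier objectives. A pure-Python sketch of the 2D case (in 3D the same idea is solved with an SVD):

```python
# Closed-form 2D rigid fit (planar Kabsch/Procrustes): find the rotation and
# translation that best map `src` onto `dst` without distorting the shape.
import math

def rigid_fit_2d(src, dst):
    """Return (theta, tx, ty) minimizing sum ||R(theta) @ s + t - d||^2."""
    n = len(src)
    scx = sum(x for x, _ in src) / n
    scy = sum(y for _, y in src) / n
    dcx = sum(x for x, _ in dst) / n
    dcy = sum(y for _, y in dst) / n
    # The optimal angle comes from the centered cross terms in closed form.
    c = s = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy, dx, dy = sx - scx, sy - scy, dx - dcx, dy - dcy
        c += sx * dy - sy * dx
        s += sx * dx + sy * dy
    theta = math.atan2(c, s)
    tx = dcx - (scx * math.cos(theta) - scy * math.sin(theta))
    ty = dcy - (scx * math.sin(theta) + scy * math.cos(theta))
    return theta, tx, ty

# Recover a known 90-degree rotation plus a shift of (2, 0).
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(-y + 2.0, x) for x, y in src]  # rotate 90 deg CCW, then translate
theta, tx, ty = rigid_fit_2d(src, dst)
print(math.degrees(theta), tx, ty)
```

A bracelet resolver could run this fit on the intersecting region's target positions and apply the resulting rigid transform to the whole prop, keeping it undeformed.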
I understand your multi-layer example, thank you. As a suggestion in this complex case, apart from using vertex groups, you can also merge multiple layers into a single object and then shrinkwrap over the figure. Or shrinkwrap just two layers, apply, then go on shrinkwrapping the next two layers.
Thank you, that's an interesting approach that I did not think of.