Amazing results: Daz + Stable Diffusion

I would like to know if anyone else is combining Daz and Stable Diffusion, so we can exchange tips and information.

I've been experimenting with Stable Diffusion ( https://huggingface.co/spaces/stabilityai/stable-diffusion ) for a few days now and, in the last two days, I started doing tests involving Daz and Stable Diffusion. The results are amazing, even for a beginner in Stable Diffusion like me (four days' practice in my spare time).

One test I did was to create renders in Daz and then replace the faces with others that Stable Diffusion knows (celebrities). Note that it's possible to train other faces at home (I already did) and, if done correctly, once trained they can be used in the same way. Here are the results of this test, with four well-known celebrities replacing the face I had originally rendered:

https://www.daz3d.com/gallery/user/6604867495788544#image=1259725

After this first successful test, I became bold enough to try something way more difficult: bringing a Daz character to life. I chose Ensley (from Bluejaunte), the one I like the most among all that I have. For this second test, I rendered six images of Ensley (the post below explains in which poses and why) and then set Stable Diffusion to learn her facial features. After about an hour (on my own computer with a humble RTX 3060) I was happy enough with the results (judging by the test renders the program saves from time to time), stopped the training and got to work. After half an hour of experimenting, and using a suitable photo from Pexels (free to use), I got the amazing result in the post below:

https://www.daz3d.com/gallery/user/6604867495788544#image=1260114
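In case it helps anyone reproduce the idea, here's a rough sketch (Python, using the Hugging Face diffusers library) of how a trained embedding of a character can be used afterwards. These are not my exact settings or file names; the embedding path and the <ensley> token are placeholders for whatever your own training run produces:

    # Hypothetical sketch: load a textual-inversion embedding trained on a few
    # Daz renders of a character, then use its token in an ordinary prompt.
    from diffusers import StableDiffusionPipeline
    import torch

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Placeholder path/token: whatever your own training run saved.
    pipe.load_textual_inversion("./ensley-embedding", token="<ensley>")

    image = pipe(
        "photo of <ensley> woman, natural light, 85mm portrait",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("ensley_test.png")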

I'm really excited thinking about the possibilities! So, if anyone else is using the software, please let me know.

 


Comments

  • MimicMolly Posts: 2,194

    I haven't used Stable Diffusion yet, but I was using WOMBO and crAIyon. Basically, I found it easier to ask the AI to generate an image of a random character first. Then, I tried to recreate the character in DAZ and attempted to match the lighting, skin tone, poses, etc. After that, it was just trying to combine the DAZ render with the AI output, in GIMP, with lots of painting over it. I haven't done much though because I keep getting distracted with my Switch.

    My case is different from most people's: I can't feel it's "my art" if I'm not doing any actual work behind it whatsoever. I've always had this problem with my DAZ renders too, no matter how much I've tried mixing shapes or using different skins.

  • alaltacc Posts: 151

    I'm not going too much into the concept of art, mainly because I don't consider myself an artist (I'm an IT enthusiast who happens to use Daz, because I think it's an interesting tool). Even so, I tend to think that the CONCEPT itself (the idea, the visualization of what is being created, etc.) is what makes an artist. So, if you have a conceptual idea and use a tool (any tool) to make it real, I tend to believe you're the creator. On the other hand, if the AI creates something and you only make a few changes, it's a different story. But this is just my opinion. :-)

    About the character face change in Stable Diffusion, I haven't done extensive tests (using different poses, expressions, etc.) yet, especially because my trained model of Ensley was created only as a test, using six renders, in no more than 45 to 60 minutes of CPU/GPU time. To have a more reliable model I would need to create several more poses and expressions (inside Daz), like smiling, crying, sideways, looking up, down, etc., and then use all of them to train the model. Maybe I'll do that if I have the time. :-)

  • MimicMolly Posts: 2,194
    I see AI as another tool too. After all, modern problems sometimes require old-fashioned solutions. ;) To me, it's always more fun to actively participate in the creation process than to let someone else do it. Besides, I'm not a programmer, so I can't teach any AIs anything. That's why I see AIs more as collaborators, and we help each other out by compensating for each other's weaknesses. (In this case, the AIs create a base image, and I try to "correct" it with my edits. Whatever I do myself doesn't use too much GPU or CPU. Depending on what the AIs give me, I don't need to waste time figuring out colors, composition, poses, etc.)
  • kenmo Posts: 908
    edited December 2022

    I've been experimenting with Stable Diffusion using some of my own photographs (cars, landscapes) and 3D renders I've created using Blender, DAZ Studio, e-onsoftware's Vue and 3D Coat. I'm using img2img via Stable Diffusion ver 2 with Automatic1111. Sometimes, after several tries, I get something I dislike; other times I say "wow" and have something I'm really impressed with.
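    For anyone who wants to script that img2img step instead of using the webui, something like the following (Python, with the diffusers library) captures the idea; the model id, file names and strength value are only illustrative, not my exact settings:

        # Rough img2img sketch: start from an existing photo or 3D render and let
        # SD reinterpret it, keeping as much or as little of the original as you like.
        from diffusers import StableDiffusionImg2ImgPipeline
        from PIL import Image
        import torch

        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
        ).to("cuda")

        init = Image.open("daz_render.png").convert("RGB").resize((768, 768))

        result = pipe(
            prompt="moody landscape photo, golden hour, film grain",
            image=init,
            strength=0.45,          # lower = stays closer to the original image
            guidance_scale=7.0,
            num_inference_steps=30,
        ).images[0]
        result.save("img2img_result.png")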

    As for AI art not being art: isn't that what was said about photography when cameras were first being used to create "art"?

    As someone who has been employed in the IT sector for 35 years, first as a database programmer (dBase, Clipper, FoxPro, Visual Basic/SQL) and later as a systems administrator for Novell NetWare & Citrix WinFrame servers, and then Windows and SUSE servers (file & print services), I believe AI is just a tool, like a camera or paint brush. You use a brush, camera or AI to augment your skill, not to replace it.

  • I have been experimenting with Stable Diffusion as well and it's really great stuff. 

    One of the consistent problems I've encountered with it so far is that it's quite difficult to get consistent renders of the same character when you're trying to update things like expressions and especially poses. With enough tweaking you can make it happen, but it takes a lot of effort. That's part of the point of SD, I suppose. I see a lot of people who work with it use the same general workflow, which goes something like "I tell SD to generate X images and tweak parameters A, B, C to create X*A*B*C images, then pick the subset that fits what I was looking for."
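    As a sketch of what that loop looks like in practice, here's roughly how it could be scripted against the AUTOMATIC1111 webui API (started with --api); the prompts and parameter values are made up for the example, and field names may vary between webui versions:

        # Generate every combination of a few prompts and settings, then cherry-pick.
        import base64, itertools, requests

        URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"
        prompts = ["portrait of a red-haired woman",
                   "portrait of a red-haired woman, smiling"]
        cfg_scales = [6, 8]
        samplers = ["Euler a", "DPM++ 2M Karras"]

        n = 0
        for prompt, cfg, sampler in itertools.product(prompts, cfg_scales, samplers):
            payload = {"prompt": prompt, "steps": 25, "cfg_scale": cfg,
                       "sampler_name": sampler, "batch_size": 2}
            resp = requests.post(URL, json=payload).json()
            for img_b64 in resp["images"]:
                with open(f"batch_{n:03d}.png", "wb") as f:
                    f.write(base64.b64decode(img_b64))
                n += 1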

    And I'll admit, the images it produces can be positively breathtaking. One of the frustrating points of working with Daz is that, while it might take me hours of fiddling to get a particular look and render completed to where it looks "okay", I can get the same thing in SD in minutes. Where I have the issue with SD is that next step. Once I get the model down and everything defined well in Daz, if I want to update a pose or an expression or clothes, I can do so relatively easily and, most importantly, consistently. If I change the clothing, for example, the character is recognizably the same, just with different clothes on or a different expression or whatever. With SD, I might get the same character if I change the prompt to say "wearing red pants" instead of "wearing blue dress", and I might not. Or doing so might change the facial features and make it noticeably, even if subtly, not the same character.

    I say all this because one of the things I am looking to do is develop consistent characters for all sorts of uses, but most particularly for things like illustrations for stories, visual novels and the like. In those, you expect the character to be recognizable from one illustration to the next, regardless of changes in pose, expression, clothing, etc. Daz can do this currently; SD cannot, at least not out of the box.

    However, I think there is a workflow which might make it possible to use Daz characters with SD to make consistent SD characters. alaltacc's workflow above is a great start and I think it points in the right direction: creating character-specific training sets in Daz (i.e. a set of renders that define what character X is) that can then be used to train models, which can then be added to SD to include those consistent characters in SD-produced artwork.
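    As a very rough sketch of the training half: the diffusers repo ships an example DreamBooth trainer (examples/dreambooth/train_dreambooth.py), which could be driven like this. The flag names are from memory and may differ between versions, and the render folder and token are placeholders:

        # Wrap the diffusers example DreamBooth trainer from Python; this assumes
        # the script has been downloaded from the diffusers repo and that
        # ./daz_character_renders holds the renders exported from Daz.
        import subprocess

        subprocess.run([
            "accelerate", "launch", "train_dreambooth.py",
            "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
            "--instance_data_dir", "./daz_character_renders",
            "--instance_prompt", "a photo of sks woman",   # unique token for the character
            "--output_dir", "./dreambooth-character",
            "--resolution", "512",
            "--train_batch_size", "1",
            "--learning_rate", "5e-6",
            "--max_train_steps", "800",
        ], check=True)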

    If anyone is interested in an example of this workflow, these tutorials are a good resource:

    Right now, for me at least, the limiting factor is being able to generate the initial set of images with enough of a variety of poses to produce a useful set for training. It's a bit of an art rather than a science as to how many images you need, of what variety, environmental factors, etc. It would be very cool, for example, to be able to script Daz to say "I've set up my base character that is completely render ready and now I need you to render this set of say 15 images with this set of poses and expressions". The closest I have been able to come to this so far is to set up an animation timeline where each keyframe is a different pose/expression or other variant, and then have Daz render the individual frames, which I can then use. 

    I'd love to hear other people's experiences with this. It's an exciting new technology and an exciting new era for art. :-)


  • acharyapolina Posts: 726
    edited February 2023

    In my opinion, AI has some major limitations. One of them is that, if you want a set of images with a single character in them, getting the AI to recreate that specific character again and again in different poses is very difficult: each character it creates might look roughly the same as the last one, but there will always be slight differences. I think that is one of the strengths of Daz3D, especially when creating comics or animations.

    I'm playing around with Stable Diffusion currently, but the only use I see for it right now is the ability to create amazing character and environment art to base my 3D models on, for future creations.

  • ControlNet can help somewhat there

    you can use it to repose your character with ActivePose and a very low noise value

    also inpainting to retain features you want or change ones you don't 

    DAZ poses on a render can be used too
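    a rough sketch of the same idea in code (Python with the diffusers library and an OpenPose ControlNet; file names and the strength value are just placeholders, the webui does the equivalent through its ControlNet extension):

        # Keep the character (img2img at low strength) while taking the pose from a
        # ControlNet pose image, e.g. an OpenPose skeleton or a posed Daz render.
        from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
        from PIL import Image
        import torch

        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
        )
        pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
            torch_dtype=torch.float16,
        ).to("cuda")

        character = Image.open("character.png").convert("RGB")   # the render to preserve
        pose = Image.open("pose.png").convert("RGB")              # the new pose guide

        out = pipe(
            prompt="photo of the same woman, studio lighting",
            image=character,
            control_image=pose,
            strength=0.3,   # "very low noise value": keep most of the original
            controlnet_conditioning_scale=1.0,
        ).images[0]
        out.save("reposed.png")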

  • alaltacc Posts: 151
    edited February 2023

    acharyapolina said:

    In my opinion, AI has some major limitations. One of them is that, if you want a set of images with a single character in them, getting the AI to recreate that specific character again and again in different poses is very difficult: each character it creates might look roughly the same as the last one, but there will always be slight differences. I think that is one of the strengths of Daz3D, especially when creating comics or animations.

    I'm playing around with Stable Diffusion currently, but the only use I see for it right now is the ability to create amazing character and environment art to base my 3D models on, for future creations.

    To keep faces and bodies consistent across all renders, you can create a model of the "person" using DreamBooth. It's even possible to generate a few renders of a Daz character and then, using these renders as input, create a DreamBooth model of this character (I've done it and it works well). So, you can have your Daz character's face (and body type) inside Stable Diffusion (or even your own face) and generate renders using it. For the poses, ControlNet gives almost absolute control over the final pose: I can decide EXACTLY the pose I want for the character.

    On the other hand, there are points where Daz can do things that SD can't:

    - In SD, clothing consistency is harder, almost impossible. You can have the same clothing type and color using a good prompt, but the small details of the clothing can and probably will vary from render to render

    - SD rarely generates renders with more than one person in them. When you do get two or three people, all of them have the same face (the one it picks from your prompt, even if you use more than one name)

    - The lighting can be influenced by the prompt, but not with the absolute control Daz gives. It's more of a general instruction, like "from behind", etc

    - Even with ControlNet, having control of the output of complex scenes, with objects, furniture, etc, is almost impossible. Again, you can give general directions, but not choose the exact result. So, if you're telling a story (a game, or a comic, etc) where you have places you need to show in more than one scene, it's not feasible - they will not look exactly the same

       Right now, Daz and SD are two different beasts, each with its own objectives, strengths and weaknesses. They are also two different tools that can work very well together.

       I've said it before and I'm saying it again: Daz should look into integrating some type of AI technology into Daz3D, allowing for very fast renders (in one minute you can have a photorealistic render in SD) and absolutely perfect photorealistic skins. I'm aware it's not a simple job, but if they can do that, we would have the best of two worlds inside Daz.

  • Art2Eager Posts: 23
    edited April 2023

    I mean, they explicitly say in the TOS that you can't use Daz Studio output to "train" AIs, so wouldn't that rule out making your own LoRAs, textual inversions, or anything else that could even come close to making characters consistent enough to be suitable for animation? Or even halfway serious sequential art?

    Even if that were allowed, I've dug into it, and from what I can tell on my meager lappy, Stable Diffusion will never be useful for anything other than what people are already using it for. (Let's call it "surface-centric abstract hyperbolic one-and-done pop art.") Maybe some other AI that integrates 3D from the ground up could someday become a suitable companion to or replacement for Daz Studio, sure. But not Stable Diffusion.

    It's a toy for making pretty, moderately abstract, sometimes slightly surrealistic single images devoid of any kind of context or continuity. You can staple training wheels and laser sights and stabilizer fins onto the toy, but you can't turn the toy into a workhorse. The fundamental approach behind it is simply wrongheaded for any kind of serious, usable design work.

    Turns out that fixing the hands was the easy part. What they will never fix is the subject's backstory.

    You're literally better off typing a prompt, squinting at the pretty pictures that come out, and then manually trying to recreate it in Daz Studio. You will fail, especially with the lighting, but you'll learn a lot.

    And at least the belts and collars won't exist in a quantum superposition transcending the multiverse every frame.

  • alaltacc Posts: 151
    edited April 2023

    Art2Eager said:

    I mean, they explicitly say in the TOS that you can't use Daz Studio output to "train" AIs, so wouldn't that rule out making your own LoRAs, textual inversions, or anything else that could even come close to making characters consistent enough to be suitable for animation? Or even halfway serious sequential art?

    NOW they do, but they DIDN'T when I started this thread and made my tests, months ago. Anyway, I'm long done with my tests with Daz characters inside SD (when they changed the TOS I was ALREADY long done).

     

    It's a toy for making pretty, moderately abstract, sometimes slightly surrealistic single images devoid of any kind of context or continuity. You can staple training wheels and laser sights and stabilizer fins onto the toy, but you can't turn the toy into a workhorse. The fundamental approach behind it is simply wrongheaded for any kind of serious, usable design work.

     

    First, someone could say the exact same thing about Daz (not that I agree, but they could). Secondly, I believe you haven't really followed SD's progress these last months. I'll not post any links here because I don't want to violate any forum rules, but if you take a look at the Stable Diffusion subreddit on Reddit, you'll see you're completely off the mark. SD can generate fantastic renders, from near-perfect photographs to surreal images, from fantasy scenes to paintings. I really like realism, and I've generated LOTS of images that my friends can't tell are renders, because they simply look real (as in REAL real, not "plastic skin" real). There are lots of excellent models specialized in realism or fantasy, in objects or architecture, and now with ControlNet (which makes it possible to tell SD the exact pose or even composition of the image you want to generate) and Regional Prompting (which makes it possible to define what you want in each part of the image, including multiple characters), the control you have over the output is tremendous.

    As for "art", as I said before, someone can also say that using Daz is far from art (again, I don't agree, but they can). "Daz users simply pose pre-made characters and assets and click render" is the equivalent to "SD's users simply write a prompt and click generate". Both statements are wrong, because yes, someone CAN do that and get mediocre results, but there is a great deal of experience and work to get GOOD results, both from Daz and from SD. 

    Finally, not liking something (for whatever reason) will not make that something go away. AI is here to stay, not only for image generation but also in textual tools like ChatGPT, or even specialized ones like the models that can "look" at medical images such as tomographies and detect anomalies (like cancer) WELL BEFORE humans (even specialists) can. Yes, there will be lots of problems in society, like unemployment or fake images like the one of the Pope wearing a coat, but this is a societal problem (mostly greed), not AI's problem. Trying to stop AI is like trying to stop the industrial revolution: counter-productive and impossible.

  • The best way to use the two together is to use Daz Face Transfer with Stable Diffusion. Start with a base G8 model and 3 directional lights: one light at the default parameters, one at -75 rotation on the Y, and another at 75 rotation on the Y (the goal is to get even lighting with as little shadow as possible). Render a straight-on headshot. Use that headshot with ControlNet in Stable Diffusion, with the Lineart and Tile models, a CFG of 0.9 and a denoiser strength of 0.7. Put in some prompts, and pick out an image with the best symmetry. Use that for a Daz Face Transfer. If you are serious about using Face Transfer, you should have Face Transfer Shapes too.

    Now, put the new character, with the new textures and morphs, in a scene and do some renders. The renders don't have to be perfect; you can even use Filament renders. Bring a few different renders of the same character back into Stable Diffusion. Use the same ControlNet models, but change the denoiser strength to somewhere between 0.2 and 0.5. Use the same prompts you used for the image you used for the Face Transfer. If your Face Transfer looked good, you're basically just using Stable Diffusion to make the renders look better now. For animation, I use 0.2 to avoid flicker. A denoise strength of 0.5 creates richer images.

    A couple of other notes. 1. It is best to do separate renders for the background and the character if doing animations; the flicker ends up being much worse if the background is part of the Daz renders. 2. My best results are when I do Canvases of the character's clothing and hair and add those back to the character after I use Stable Diffusion. So I basically do nude and bald renders in Daz. When I do that, Stable Diffusion usually gives me the best results, especially when I render the hair separately. If you do this, you can use a denoiser strength of up to 0.65 and get some really amazing images with the same character. You may have to go in and do some post work to clean up inconsistencies though (Stable likes to add things when shadows and angles confuse it, like nose rings, belly button rings, moles and freckles).
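    For reference, the second pass could be scripted roughly like this with the diffusers library (I do it in the webui, so the model ids, file names and numbers here are only illustrative, not my exact settings):

        # Low-denoise cleanup pass over a Daz render, guided by lineart + tile
        # ControlNets so the character and composition survive the pass.
        from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
        from PIL import Image
        import torch

        controlnets = [
            ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_lineart",
                                            torch_dtype=torch.float16),
            ControlNetModel.from_pretrained("lllyasviel/control_v11f1e_sd15_tile",
                                            torch_dtype=torch.float16),
        ]
        pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", controlnet=controlnets,
            torch_dtype=torch.float16,
        ).to("cuda")

        render = Image.open("daz_render.png").convert("RGB")
        lineart = Image.open("daz_render_lineart.png").convert("RGB")  # preprocessed lineart copy

        out = pipe(
            prompt="photo of a young woman, soft studio lighting",
            image=render,
            control_image=[lineart, render],   # lineart guide + tile guide
            strength=0.3,                      # the 0.2-0.5 range mentioned above
        ).images[0]
        out.save("cleaned_render.png")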

  • I made this.

  • WendyLuvsCatz Posts: 38,220

    you dug up this thread to type that?

  • alaltacc Posts: 151

    He did this

    laugh

  • WendyLuvsCatz Posts: 38,220

    hey I know the Facebook meme with AI created sculptures and knitted cars etc with a kid, granny whatever

    but at least the people here are starting with their DAZ renders

    though I know they didn't sculpt and rig the models
