Anyone looking into the inclusion of AI tools in Daz3D?

Photoshop essentially just went "full AI", judging by the flood of ads carrying that honestly vague title. Not to mention the world of AI art in general blowing up over the release of all these FREE tools built around "Stable Diffusion" and "DreamBooth".

So, is Daz3D looking into any of these potential AI prospects? Especially since Daz already embraces CUDA cores and is now getting good AMD and CPU support.

Currently, I am using AI to make seamless, tileable textures. I am also using it to make crazy detailed backgrounds. Others are using it to generate faces and even swap clothing on existing images. And those are hardly the only uses: it is being used for animation, video, etc...
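Since "textures that repeat" is the whole point here, a quick way to sanity-check whether a generated texture actually tiles cleanly is to compare its opposite edges. A rough numpy sketch (the function names are mine, purely for illustration):

```python
import numpy as np

def seam_error(texture):
    """Mean absolute difference between opposite edges of a texture.

    A truly seamless tile scores near zero; a high score means the
    repeat seam will be visible when the texture is tiled."""
    t = texture.astype(np.float32)
    horiz = np.abs(t[:, 0] - t[:, -1]).mean()   # left vs. right edge
    vert = np.abs(t[0, :] - t[-1, :]).mean()    # top vs. bottom edge
    return (horiz + vert) / 2.0

def tile(texture, nx, ny):
    """Repeat a texture nx times across and ny times down."""
    reps = (ny, nx) + (1,) * (texture.ndim - 2)
    return np.tile(texture, reps)
```

An AI-generated tile that passes the seam check can be repeated across a large surface without the obvious grid pattern.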

In my extended case, I am trying to get some rendered models trained, so I can make creations without having to open Daz3D for more than creating poses, then use those renders as a foundation for applying my art styles. Also for adding some more correct realism to the models' faces and hair.

I know there are all sorts of mini AI scripts for textures, lighting, physics, and morphing. Things that could possibly be included in the Daz3D pipeline, natively or by scripting. Most use standard API communication when they can't be directly integrated via code. (Most are also CC-licensed or free on GitHub, which helps with ever-expanding updates and integration.)

Just for fun, I am including some of my AI-assisted creations. (If left ONLY to Daz3D, it would have taken me days or weeks to create these; they all took minutes. Which is why I want to "train my own styles", based off my Daz3D renders, to get endless variations without spending additional time in Daz3D fighting render settings and turning my room into an inferno as the cards max out on renders.)

2022-11-16-19-10-41-03-by_Albrecht_Anker_and_Anton_Semenov-1639311505-scale10.00-k_euler_a-v1-5-pruned-emaonly.jpg
512 x 512 - 102K
2022-11-16-19-13-07-06-by_Albrecht_Anker_and_Anton_Semenov-1019818268-scale10.00-k_euler_a-v1-5-pruned-emaonly.jpg
512 x 512 - 97K
2022-11-16-19-25-40-07-by_Albrecht_Anker_and_Anton_Semenov-683940751-scale10.00-k_euler_a-v1-5-pruned-emaonly.jpg
1024 x 1024 - 372K
2022-11-16-19-36-28-02-by_Albrecht_Anker_and_Anton_Semenov-1979772977-scale10.00-k_euler_a-v1-5-pruned-emaonly.jpg
1280 x 1280 - 729K
2022-11-16-19-49-56-06-by_Albrecht_Anker_and_Anton_Semenov-1979772981-scale10.00-k_euler_a-v1-5-pruned-emaonly.jpg
1280 x 1280 - 819K
2022-11-15-20-57-38-1-by_Antonio_J._Manzanedo-1590898882-scale15.00-k_euler_a-v1-5-pruned-emaonly.fix.jpg
4800 x 4800 - 3M
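(Side note: the filenames above encode the generation settings, which is what makes a render reproducible later: timestamp, prompt, seed, CFG scale, sampler, and checkpoint. Here is a small parser for that layout; the field order is inferred from these filenames, and other tools name files differently.)

```python
import re

# Pulls the generation settings that NMKD-style Stable Diffusion GUIs
# embed in output filenames. The layout is inferred from the filenames
# posted above; treat this as illustrative, not a spec.
FILENAME_RE = re.compile(
    r"-(?P<seed>\d+)"              # random seed
    r"-scale(?P<scale>\d+\.?\d*)"  # CFG guidance scale
    r"-(?P<sampler>[a-z0-9_]+)"    # sampler, e.g. k_euler_a
    r"-(?P<model>.+?)"             # checkpoint name
    r"(?:\.fix)?\.jpg$"            # optional .fix marker, extension
)

def parse_sd_filename(name: str) -> dict:
    """Extract seed, CFG scale, sampler, and model from a filename."""
    m = FILENAME_RE.search(name)
    if not m:
        raise ValueError(f"unrecognised filename layout: {name}")
    d = m.groupdict()
    d["seed"] = int(d["seed"])
    d["scale"] = float(d["scale"])
    return d
```

Feeding the same seed, scale, sampler, and checkpoint back into the GUI regenerates the same image.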

Comments

  • WendyLuvsCatz Posts: 38,204
    edited November 2022

    Nvidia are doing AI tools, that is, ones that don't need DAZ. wink

    Post edited by WendyLuvsCatz on
  • JD_Mortal Posts: 760
    edited November 2022

    WendyLuvsCatz said:

    Nvidia are doing AI tools, that is, ones that don't need DAZ.

    That is essentially how I made these... Just drawing shapes and telling the AI to make my shapes look like they were creations by two individual artists. Like a fusion.

    Yeah, NVIDIA is a bit behind in the AI world, oddly enough. They are just now catching up to "Stable Diffusion", and will surely pass it. But it will most likely not be free to use, or will be SOOOO demanding that it requires their "best cards to run".

    "Stable Diffusion" can be run online, using googles computers, for free... If your computer isn't fast enough. It'll never run NVIDIAs programs, because that'll surely be "Proprietary" and "behind closed doors". Even though they are using the same "free code" to make the AI, and building off that entire SD library.

    My question was about the OTHER components of AI... Like having the ability to generate a "new texture" from an existing one, or a face, or just "creating a background". AI interpolation, for animation and frame-blending to video, is another thing they could adopt. (It requires fewer rendered frames, since it can correctly draw the tween frames, giving us higher frame rates and faster renders for animations. Sure, this can be done post-render, but why not make it a native component?)
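    On the frame-blending point: AI interpolators such as RIFE or DAIN estimate motion between two rendered frames and synthesize the in-betweens. The naive baseline they improve on is a plain crossfade, easy to sketch in numpy (the AI's advantage is motion-aware warping rather than blending):

```python
import numpy as np

def crossfade_tweens(frame_a, frame_b, n_tweens):
    """Generate n_tweens intermediate frames between two rendered frames.

    This is only the naive linear crossfade; AI interpolators estimate
    motion and warp pixels instead of blending, which is what makes the
    'render fewer frames, interpolate the rest' idea actually viable."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    frames = []
    for i in range(1, n_tweens + 1):
        t = i / (n_tweens + 1)  # evenly spaced blend factors in (0, 1)
        frames.append(((1.0 - t) * a + t * b).astype(frame_a.dtype))
    return frames
```

    Rendering every 4th frame and interpolating 3 tweens per gap would cut render time roughly fourfold, at the cost of whatever artifacts the interpolator introduces.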

    I'd settle for AI just being used to "assist in making realistic hair"... A perfect place to start! The most demanded item by everyone here: real hair, and fast hair... No more ribbons and skull-caps! No high-demand fake strands to simulate real fake hair... :P (Those are still needed, to a point, as the foundation for AI to build off of. So it doesn't harm anyone's "sales", it only improves them.)

    Post edited by JD_Mortal on
  • JD_Mortal Posts: 760
    edited November 2022

    Another, more direct example of how I use it. (Since I can't find good "shaders" that do this, or skins.)

    Depending on how similar I want the creation to be, I can push it all the way down to "cell shaded". However, here, I left it more true to the model. It created a new creation based off the form of the old one: new hair, eyes, mouth, etc. More actual "anime" looking than the original.

    NOTE: Not the world's best example, just one I made in the last minute. I just wanted to show how it uses the "rendered model" as the foundation of the new creation. I could have made her into a pencil drawing, ink, acrylic, an oil painting, or a "real looking person". (Because, well, the original hardly looks real. It would have looked like a photograph of an actual person who looks nearly identical to the rendered model, including real hair. :P)

    hzBFP1c.jpg
    1024 x 512 - 116K
    Post edited by JD_Mortal on
  • Another potential "concept" here...

    Models would not have to be "ultra HD", nor would clothing or scenery... Only suggestive of "form". Daz3D and artists could make "custom trained models" (a fancy name for image data sets; nothing to do with 3D models), which become complements to specific items and models. These "complements" are trained AI image data, which the AI uses to add realism to the final rendered image. The training would be done on the specific items, which may be photoshopped, or just rendered in ultra-HD with similar real hair. The AI knows what to do with the info once it learns what the item is and how to draw it.

    Instead of "cell shaded shaders", needing to be perfect. They can also now be "suggestive". Again, because once AI learns how to draw a model as a cartoon, by showing it 20+ images in training, it can turn almost anything into that style. (Well, anything trained, like people, buildings, vehicles. Doesn't have to be specific for AI.)

    Scenery: instead of having to store and load uber-HD images just to get a singular, static, "seen it before" image... The AI KNOWS what a forest looks like, a mountain-scape, the sky, grass. It can simply create it as needed, either unique and random or repeating a pre-fab seed and settings. That saves us TB of hard-drive space and the "what does that one look like again?" wasted time. Not to mention the obvious "I've used this too many times, it's getting old", or it's just not HD enough anymore. :P
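    The "pre-fab seed and settings" part is the key mechanism: a generator seeded with the same value reproduces the same output exactly, so only the seed and settings need storing rather than the image itself. A toy numpy stand-in (just seeded noise, not a real diffusion model, but the reproducibility property is the same):

```python
import numpy as np

def generate_scenery(seed, size=(4, 4)):
    """Stand-in for a generative model: seeded noise as an RGB image.

    Demonstrates that a stored (seed, settings) pair regenerates the
    same output bit for bit, so scenery can be recreated on demand
    instead of archived as uber-HD image files."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=size + (3,), dtype=np.uint8)
```

    The same holds for Stable Diffusion: fix the seed, prompt, sampler, and scale, and the output is repeatable; change the seed, and you get a fresh variation for free.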

    Instantly "style" freckles onto any model, without the need to make more new skins or having to add more layers. Optionally trying variations of freckles, and not being limited to one singular "skin of freckles".

    Textures, for me, are the bigger thing. If masked, AI can easily kill all these horrible "repeating patterns" in textures. Roads would look like a singular, non-repeating pathway. Likewise sidewalks, brick walls, cobblestone paths, dirt paths, grass, beach sand, bushes, trees. Crowds of people made as "billboards" could be replaced with lighting-appropriate "realistic looking people", or models. Again, this lets us use lower-quality components and get high-quality results, without the overhead of actual high-quality models and images.
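    The masking idea reduces to simple compositing: render a mask for the low-detail element, then paste AI-generated detail only where the mask is set. A minimal numpy sketch (the function name is hypothetical):

```python
import numpy as np

def composite_masked(render, ai_detail, mask):
    """Paste AI-generated detail into a render only where mask is True.

    render    : H x W x 3 image from the 3D renderer
    ai_detail : H x W x 3 AI-generated replacement, same size
    mask      : H x W boolean array marking the low-detail placeholder"""
    out = render.copy()          # leave the original render untouched
    out[mask] = ai_detail[mask]  # swap pixels only inside the mask
    return out
```

    Since Daz3D can already output per-item masks (e.g. via render canvases), this kind of pass could in principle slot in after the render without touching the scene itself.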

    For clothing, this also applies. As long as the "form" is there, the AI can draw all the high detail where the low detail components exist. Daz3D already has the ability to mask any specific item in a render. It can replace those thick solid bands on a dress, over the shoulders, with actual cloth bands that the AI was trained with, for that model. (Or, honestly use any other trained set for the same straps or outfit, if it's similar enough.)

    I could go on, but I am resisting... I'll talk myself to death, if given the freedom to do so.

  • wolf359 Posts: 3,828
    edited November 2022

    @JD_Mortal
    Good stuff... very creative use of AI resources.
    JD_Mortal said:

    Instead of "cell shaded shaders", needing to be perfect. They can also now be "suggestive". Again, because once AI learns how to draw a model as a cartoon...

    I asked for some "anime styled" 3D portraits of Natalie Portman; here are some results.


    NATA P 14.jpg
    1536 x 1536 - 1M
    NATA P 23.jpg
    1536 x 1536 - 883K
    NATA P 24.jpg
    1536 x 1536 - 1M
    Post edited by wolf359 on
  • I am behind on keeping up with all of the latest AI engines available. I knew about most of the text-prompt ones, but which ones are you using to enhance or build off of photos you provide?

  • That is really good for giving the render a style, better than the PS actions I use, if it works from any angle and stays similar.

    Test of 3D render action.jpg
    1280 x 605 - 111K
  • I am using a program called "NMKD Stable Diffusion GUI". The other popular one is called "Automatic1111 Web-UI". (The latter may require some manual installs and fighting with; the former is more of an all-inclusive setup.)

    The other element of the process, for training your own images as styles and such, is called "DreamBooth", and it mostly has a high GPU demand at the moment. Most versions require a GPU with 24GB of VRAM for full functionality. However, there are also a few Google Colab notebooks that give you access to Google's GPUs and free processing time to do the same thing. (With a paid tier for priority and faster GPU time.)

  • JD_Mortal Posts: 760
    edited November 2022

    hejjj12 said:

    That is really good for giving the render a style, better than the PS actions I use, if it works from any angle and stays similar.

    I like how it corrected the "hair through the arm", added the straps to the suit, and made the hair look "more appropriate". (Results may vary.) I might be mistaken, but I think her breasts grew a little too! (Also a possible prompt command that works if added. :P)

    If you added "at the beach", it may have changed the background to water and beach sand.

    Post edited by JD_Mortal on
  • hejjj12 Posts: 51
    edited November 2022

    JD_Mortal said:

    hejjj12 said:

    That is really good for giving the render a style, better than the PS actions I use, if it works from any angle and stays similar.

    I like how it corrected the "hair through the arm", added the straps to the suit, and made the hair look "more appropriate". (Results may vary.) I might be mistaken, but I think her breasts grew a little too! (Also a possible prompt command that works if added. :P)

    If you added "at the beach", it may have changed the background to water and beach sand.


    Ah, my bad, I wasn't being clear. I meant that AI might work better than Photoshop to fix up render images. The image I posted was just made with some simple brushes and cartoon render actions in Photoshop. But if AI can be consistent in how it adds a style to an image, I think it can be way more efficient and also give me better results. :D (I didn't actually resize the boobs; I think the slight zoom-in on the image and the light makes them appear bigger.) :D

    Post edited by hejjj12 on