Comments
There is a plethora of captioned public domain media out there that could be used to train AI:
- Google Arts, Wikipedia Media, and various curated collections from universities and museums that are available online under a CC0 licence
- Mugshots of criminals for faces (profile and front)
It's just that, so far, they have not been forced to retrain on it, and if they do, the result won't be as commercially viable or open source.
Artists who use 3D can still make money from clients who want a consistent character with specific poses. It's still difficult to get AI to create EXACTLY what you want and to repeat it. It creates some amazing one-offs but still has issues with hands, feet, and random artifacts. I saw an invite for a NYE party today that was obviously AI: stunning at first glance, but zoom in and yikes!
Being able to go from rough doodle to finished image through a series of refinements, or to switch back and forth from 2D to pseudo-3D, is also crazy, and again, that's close to six months ago now.
THIS is why I do not believe that AI will ever replace artists: we have persisted through every innovation and new tool, and AI is simply another tool, not a replacement for us...
What I'm toying with is using DAZ to create a character/scene using DAZ content assets, rendering it, then using the render as a reference for ControlNet (two ControlNets, one with Canny, one with Depth). In my earlier renders I didn't use the depth one, just prompted Stable Diffusion to create a background (usually, though, it ended up being super basic). The only one in this vignette that used the depth is the last one; in that case I set up an entire scene with the cabinets, floor, and wall, and all props, and the depth is needed to separate the character from the other stuff. I attach the original render from DAZ, pretty basic quality; it's all you need to send over to get the info needed.
What you can see, though, is that the DAZ assets (shirt, skirt, shoes, G8F, hair, even the poses) transfer over to Stable Diffusion, and I need to prompt for the color (it works well for the skirt and shirt but not so well for the shoes). For all intents and purposes I'm just using Stable Diffusion as a render engine. But with the SDXL Turbo models it will crank out a render in like 8 seconds. The next experiment will be to see how it does with multiple characters.
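For anyone who wants to try this, here's a rough sketch of the same Daz-render-to-dual-ControlNet setup using the Hugging Face diffusers library. The model IDs, Canny thresholds, prompt, and conditioning scales here are illustrative assumptions, not the exact settings behind these renders:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from transformers import pipeline
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load the basic-quality Daz render; it carries all the pose/geometry info.
daz_render = Image.open("daz_render.png").convert("RGB")

# Canny map: preserves the outlines of the clothing, hair, and pose.
edges = cv2.Canny(np.array(daz_render), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Depth map: separates the character from cabinets, floor, wall, and props.
depth_image = pipeline("depth-estimation")(daz_render)["depth"].convert("RGB")

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

# Colors don't transfer through edge/depth maps, so prompt for them explicitly.
result = pipe(
    "photo of a woman in a white shirt and blue skirt standing in a kitchen",
    image=[canny_image, depth_image],
    controlnet_conditioning_scale=[1.0, 0.7],
    num_inference_steps=30,
).images[0]
result.save("sd_render.png")
```

The depth map is estimated from the render here; exporting a depth pass directly from DAZ should separate the character from the props even more cleanly.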
Now to the main topic: if I wanted to use this to make a comic book, I think I could, assuming I sort out the multi-character process. I don't see this use of the AI as at all different from sending it to, say, Octane or Blender. As others say, it's another tool.
Unless you had 'sarcasm' mode silently enabled in your post, Stable Diffusion can't consistently generate hands in complex poses (especially connected or interlaced fingers) without a significant amount of correction via inpainting, and even then, results will vary wildly.
In my experience, SD can still botch them. For example, the finger poses might be accurately generated, but the fingers themselves look like flesh-colored letter openers. And good luck rendering hands gripping throats.
Part of the problem is that SD is heavily biased toward portraits, so getting it to produce images that feature hands doing anything complex is often an exercise in frustration that might never yield truly satisfactory results.
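For reference, the inpainting correction pass mentioned above looks roughly like this with the diffusers library; the model ID, file names, and prompt are illustrative assumptions, and as noted, expect to rerun it several times per hand:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# The flawed generation and a mask painted over the bad hand (white = redo).
image = Image.open("generation_with_bad_hands.png").convert("RGB")
mask = Image.open("hand_mask.png").convert("L")

fixed = pipe(
    prompt="detailed human hand, five fingers, natural pose",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
fixed.save("fixed_hands.png")
```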
I've found that the Canny is best for large detail at a distance or fine detail up close. So it seems to do fingers well up close; it's hit or miss at a distance. For example, this is a quick and dirty render from Daz and the Stable Diffusion + Canny + Depth version, demonstrating how well it does up close.
Not that well.
- It generated an extra pinky finger on the AI figure's left hand where the light is shining on the 3D model's right hand.
- I can also see the sliver of what looks like an extra index finger peeking out from behind the visible index finger on the right hand.
- The base knuckle on the middle finger of the right hand looks like it's completely disjointed from the rest of the hand.
- It also couldn't replicate the crease in the 3D model's right wrist well enough to maintain the transition to the hard shadow on her right hand. Consequently, the AI's hand looks like it's wearing part of a thin, translucent glove.
Yes, I was just posting that, including the Canny map that shows the sunlight it misinterpreted. Postwork is still a thing.
That said, just because it sucks at hands doesn't mean it can't have a great career as an artist. Rob Liefeld couldn't draw feet, but he did okay for himself.
I was using DS (or originally Poser) to create 2D looking characters from the very beginning with Photoshop postwork. I was never really into realism. But now people are seeing my older stuff and thinking it's AI. Sometimes I show people on my phone both newer AI stuff and older heavily postworked 3D and they can't really tell the difference. I was always creating more comic book/2D gamelike looking characters and images. I love combining my work now with AI but really find it disheartening when people assume everything I've ever done is AI. And the AI I use on renders is not random, it's literally like spinning dials in DS or adding makeup or lashes or brows to your character but manually using my finger or Apple Pencil on my iPad to change the shape of the face or body or use a slider to make the character taller or thinner in a SELFIE app. These selfie apps are f-ing amazing! Some selfie apps give you more manual control than DS even.
All AI isn't just a Midjourney descriptive prompt. And now Photoshop has AI included with generative fill and even some selfie-app elements. So it's all becoming one big mish-mash of "digital art" which now includes AI in its various forms. I have to admit I've made some incredible-looking images with pure AI, just words, but now you can tweak areas you don't like. It somehow becomes a fun game to see what you can come up with. But to me it's kind of like being an art director rather than an artist. You are still doing something creative: picking the best images, adding prompts you think might improve it, correcting areas that are screwed up, changing colors, expressions, details, all through words, as if you were an art director. That's still a creative job, so I don't feel like I'm "cheating" at all. But it is devaluing art, because now everyone can be an art director with their AI artist. So it's going to be about who is the best art director, because a lot of images come out crappy, and it's still going to be hard for non-creatives to even know how to get the AI to make the best image. Hope this makes sense. Just got back from a New Year's Eve Eve dinner party and I'm a bit drunk lol. BTW, happy New Year's everyone!
Hands generated by AI are neither the best nor realistic, in my experience so far. Below are some examples.
Happy New Year 2024.
It took me 10 seconds to recreate Marilyn with AI.
I've been using AI for about a year (Midjourney, Stable Diffusion with multiple models, LoRAs, etc.) and I can tell you AI needs to take a basic human anatomy course. I still go back to DAZ Studio and ZBrush to make my morphs, and I get the same character in any pose I want; I pretty much have way more control over the characters. I see AI as a tool to make things easier and faster. If you want cool one-offs, AI opens doors, but until it can do all DAZ can do, it's not going to replace DAZ or artists anytime soon. To give you as much control over what you are doing and the expected outcome, AI would need to fill some pretty big shoes. Face Transfer 2 is a simple example of what is possible using AI as a tool in your workflow. I have been using it, but it has a way to go (cleaning up eyelashes, side views, etc.). I do like it a lot, though, and it's fun to use. I have tried img2img in AI and I get some great results. As stated above, it's almost like a rendering tool. The hair looked much more realistic, but my V9 eyes from Iray looked better. AI at this point can make vague one-offs pretty well, especially for portraits (it has improved over the last year). Again, I see it as a tool making things quicker and easier for the artist, but not really a replacement.
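To make the "AI as a rendering tool" idea concrete, here's a minimal img2img sketch with the diffusers library. The model ID, strength, and prompt are illustrative assumptions rather than the settings used in the post above:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A plain render out of DAZ Studio, used as the starting point.
daz_render = Image.open("daz_portrait.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="photorealistic portrait, detailed skin and natural hair",
    image=daz_render,
    strength=0.45,  # how much of the original render the model may repaint
    num_inference_steps=30,
).images[0]
result.save("ai_enhanced.png")
```

Lower strength values keep more of the original Iray render, while higher values let the model redo more of the hair and skin detail.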
The core result is that training with step-by-step explanations, whether human- or AI-generated, might be a good way to go. In these forums I think I had already stated the assumption that the step-by-step explanations on the internet allow solving a lot of the problems out there, even with different parameters, though likely not all. Training with only step-by-step explanations could be interesting here, but I am not sure we would reach totally new depths with this; then again, I am not a specialist, nor do I know it all.
Concerning the new bot, it outperformed an open source $90 chatbot (I might be mixing up some currencies and numbers here) in some benchmarks by 100%, which is kind of touching, but what's quite interesting is that it ended up level with ChatGPT in some tests. So maybe that's a transformational thing for some kinds of solving capabilities, allowing for more focused training of skills. One should be aware that it does not magically outperform ChatGPT, if I am right. It's still interesting how little degradation there seems to be on some benchmarks. Thinking of humans providing those step-by-step explanations, it would be like writing all the training data by hand, which would be a huge amount of work. Maybe social media or learning websites may one day be abused, as with reCAPTCHA, in order to create step-by-step explanations: "Are you an astrophysicist? Good. Answer the following questions to send your aunt instant money: ..."
The lawsuit stuff - of course. If they can, they will.
There just remain some entropy questions, or, if you will, questions about the basic laws of the nature of the thing, like "can you make a better bot out of a bot without external training data?" It's a bit like asking for another variation of a perpetual motion machine. In the end, taking random internet input and then boiling it down into step-by-step information might be a good way of extracting things while allowing for some cross-checking in the process, all automated. So that could really be interesting, but right now they are obviously not ending up cheaper, nor better, as they are leaning on ChatGPT in that research. Maybe there is some more future potential for more efficient ways of correction by interaction with those step-by-step explanations, but we are not realtime yet, and there is always the Byzantine question with the masses of users, and the random astrophysicist's time is limited too, in a way.
For Daz, AI could be a game changer if executed properly. A few ideas:
I learned about a free solution called Upscayl.org for that last suggestion, thanks to a YT video posted yesterday. They offer a downloadable app for Mac and Windows. I used it on one of my old HD pieces and it upscaled to 4K remarkably well. I'm considering upscaling the rest of my work.
Note: After upscaling, I converted the 4K version of my Evlin render to a JPG with 8x compression via PaintShop Pro to reduce the file size.
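For those without PaintShop Pro, the same post-upscale conversion can be done with a couple of lines of Pillow; the file names and quality value below are assumptions, since an "8x compression" setting only roughly maps onto a JPEG quality number:

```python
from PIL import Image

# Hypothetical file names: the 4K PNG from Upscayl in, a smaller JPG out.
img = Image.open("evlin_4k_upscaled.png").convert("RGB")
img.save("evlin_4k.jpg", format="JPEG", quality=80, optimize=True)
```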
I saw that upscale. Here is the img2img link below, for a video.
See the attached below: an old V8.1 and an AI img2img in Seaart.ai. The AI image improves the hair, etc. It's a bit rough and needs improvement, but it shows AI as a renderer making skin, hair, etc. more realistic (for those of us who strive for photorealism).
Joequick, did you make those AI examples (the little purple monster, woman in coveralls, etc)?
Unfortunately, Nytefall is right about the accuracy. Even with ControlNet, proper fingers, feet, and more expressive faces are still tough to get. When you see perfect hands in an AI image, they either got *really* lucky or it's postwork.
I don't think that's a problem for Joequick. He has already proven what he is capable of achieving by creating a lot of amazing characters by himself.
Yes, they were Stable Diffusion experiments from over the summer.
Thanks, they look good. I was interested because I'm doing a ton of similar experiments. I'm not particularly interested in making 3D art for the foreseeable future, at least not unless we get some real technological advancements, so I'm trying to find ways to get 2D results from combinations of sketches, OpenPose, and using AI generations as references for manual drawing. Results always seem to fluctuate between "eureka!" and "it's impossible" though, haha. :)
are you all happy now?
The video does make me happy. I am using AI at the moment merely to learn the basic skills. When I can (a) use an AI database that I am confident was acquired fully legally, and (b) customize that database with the addition of my own creations, then I plan to use it as just another tool. Both (a) and (b) are very important to make me happy as far as AI goes. Both are on the way.
...now this is where AI can be a useful tool
Unfortunately after I downloaded and installed Upscayl, when I attempted to open it the following error message popped up on the screen:
Just checked at istockphoto.com: they provide access to AI-generated images from prompts, in packs of 100, for not-so-bad prices, if one wishes to use such images commercially, and each image comes with insurance for up to $10,000.
I feel a sudden urge to share two quotes from this article by Ai Weiwei (The Guardian):
1 "Today’s harsh reality witnesses technology reducing age-old modes of poetic expression and the warmth of art to a somewhat barbaric artifice."
2 "The rapid development of technology, including the rise of AI, fails to bring genuine wellbeing to humanity; instead, it fosters anxiety and panic. AI, despite all the information it obtains from human experience, lacks the imagination and, most importantly, the human will, with its potential for beauty, creativity, and the possibility of making mistakes."