AI is going to be our biggest game changer
Comments
I will admit I stare in fascination at the pattern of a pine cone, which no human had a hand in.
My bar is pretty low.
You can now feed your thoughts directly to Stable Diffusion to create the art. All you need is an MRI... ;)
https://sites.google.com/view/stablediffusion-with-brain/
I feel like that. Except it's not AI that makes me feel that way.
this looks cool
I use both techniques, rendering images in Daz Studio and Unity, but also exploring AI techniques,
maybe for reference, but also for fun.
If only I could find a way to create similar images directly in Daz Studio...
Train...
Some interior.
Any image has its own prompt included; just use PNG Info in Stable Diffusion Automatic1111.
My works so far... I had only two weeks to get the results shown; YouTube is my tutor.
PC used: Ryzen 5600, RTX 3050 8 GB, 16 GB DDR4-3200
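If you want to pull those prompts out programmatically rather than through the web UI, a minimal sketch in Python, assuming the file was saved by the AUTOMATIC1111 web UI (which writes its generation settings into a PNG text chunk named "parameters"); the filename below is only an example:

```python
# Minimal sketch: read the prompt/settings the AUTOMATIC1111 web UI embeds
# in its PNG output as a text chunk named "parameters".
# Requires Pillow: pip install pillow
from PIL import Image

def read_a1111_parameters(path: str):
    """Return the embedded generation string, or None if the chunk is absent."""
    with Image.open(path) as img:
        # Pillow exposes PNG tEXt/iTXt chunks through the .info mapping.
        return img.info.get("parameters")

if __name__ == "__main__":
    # Hypothetical filename, for illustration only.
    print(read_a1111_parameters("00001-1234567890.png") or "No embedded parameters found.")
```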
Most of these apps don't create anything; they use existing images in combination with some deformations and effects on top (huge copyright issues ahead...). There is very little in terms of actually "creating" anything, the same as with those fancy stamp tools, which do not create anything either. And no, an "AI" cannot "write a novel"; it can produce some okay material along the lines of short children's books, but once again, very little of it is actual "creation". As far as I know, there is nothing that is even worth the name "AI".
People go bonkers over all kinds of silly things, saying they made an AI fall in love with them or that it said nasty things to them. The software analyzes the input and uses huge amounts of data to produce something that represents a plausible answer to that input (the algorithms used for analyzing the input data are pretty cool in some cases, though). It does not "know" what it actually outputs; it is just a data set, as if you input 1+1 and get 2 back, only a little fancier. What makes it look impressive is the sheer amount of input data behind it.
I would hope more people would start complaining about those "AIs" that "create" images; they are using other people's images. How about copyright? The companies vacuum the internet for data without giving a [thought] to who owns it.
It's a little more complex than that: it isn't so much combining elements as it is abstracting patterns and then combining those. I don't disagree in principle, where they have not obtained permission to use the training samples, but the legal status of what they have done remains undecided.
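For anyone curious what "abstracting patterns" looks like in practice: diffusion models of the Stable Diffusion family are trained to predict the noise that was added to a training image, not to store or recombine the images themselves. A toy sketch of that training step (the tiny network, the shapes, and the fixed noise level are illustrative assumptions, not Stable Diffusion's actual code):

```python
# Toy sketch of the core denoising-diffusion training step.
# Requires PyTorch: pip install torch
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))  # stand-in model
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def training_step(x0: torch.Tensor, alpha_bar: float = 0.5) -> float:
    """Corrupt x0 with random noise, then train the model to predict that noise."""
    noise = torch.randn_like(x0)
    xt = (alpha_bar ** 0.5) * x0 + ((1.0 - alpha_bar) ** 0.5) * noise  # noised sample
    loss = ((denoiser(xt) - noise) ** 2).mean()  # the target is the noise, not the image
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Random vectors standing in for image latents; the model only ever sees
# statistics across the whole set, never a lookup table of originals.
for _ in range(3):
    print(training_step(torch.randn(8, 64)))
```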
The rabbit hole gets deeper. I found out DeviantArt is part of a lawsuit related to Stable Diffusion. DA, if you didn't know, has tons of artists who use and have used Daz 3D. DA is part of the lawsuit because, in part, it is alleged they provided images that helped train a Stable Diffusion model...
" Stable Diffusion, for example, was trained using LAION-5B, a database of 5.85 billion text-image pairs with sources including Flickr, DeviantArt, Wikimedia, and overwhelmingly, Pinterest. Using an AI art tool like Stable Diffusion is as easy as typing in a thread of words. You can even create an image in the style of an artist whose work was scraped."
From: "Artists sue AI art generators over copyright infringement: Stability AI, DeviantArt, and Midjourney named in class-action suit" by Nicole Clark, Jan 17, 2023, 2:56pm EST.
Interestingly, Stability AI and Midjourney are not mentioned. I wonder why? Are they not popular enough? Well, that might be related to DeviantArt's ties with Daz 3D content.
Now, things get really interesting when you consider Twitter, Instagram (i.e. Meta), Pixiv, TikTok, Google Photos, etc., where by default users granted each service a royalty-free license for any image they posted, and each service in turn could potentially use all those user images to generate AI images for whatever purpose.
Focusing on the range of Daz 3D users, it seems unfair that major corporations could create images from Daz 3D content, and may already have (i.e. DeviantArt), while Daz 3D users are barred from using new tech that could make their lives and production workflows better.
My only other thought is that this is actually meant to help protect Daz 3D from potential lawsuits from other companies or from Daz 3D content creators. For third-party companies I'm not sure what Daz 3D is concerned about, but Daz 3D featured artists may not be too happy if other people can create art based on their work without compensation. So Daz 3D can at least say they tried to stop such uses, even though the cat is out of the bag because of other websites that allowed Daz 3D content to help train the AI models that make generative art.
But note: I'm not a lawyer.
@generalgameplaying and anyone interested, find this vid:
"Lawyer Explains Stable Diffusion Lawsuit (Major Implications!)" on Corridor Crew's YouTube.
If AI is meant to be using work by other artists, then why doesn't Google image search pick up on those elements?
I've just created a new character using AI. And I can't find anything about her on an image search.
Stable Diffusion img2img batch render used in a Nicki Minaj video
That description is completely, totally uninformed about how these types of generative models actually work.
Wow. People today still fervently denying the dizzying rate of progress are going to be so embarrassed by their own comments in a year or two. Literally all it's going to take now is some grad student living off Top Ramen to figure out how to guarantee coherency across frames.
Directed and animated: Tillavision. Prompt engineer: Matt Penttila. MoCap studio: Cinetica Studio. MoCap director: Jonny Mehraban. Dancers: Valeria Cordero, Ximena Gutierrez.
this is now a job
A Vintage one...
A Worker
Until I get around to submitting a query to DAZ about the use of 2D renders in AI, I am using my photographs and video footage converted to images,
hence posting here rather than in the mixing-my-art-with-AI thread. *blush*
I was too depressed today, as it's been a year since Lynx died.
Hugs ♥
Hugs, too.
A robot...
When they crack coherency across frames,
we animators will be able to "restyle" our rendered work
to achieve any look, from UE5 to Pixar to classic anime.
I hope it happens sooner rather than later.
Technically, incorrect.
The AI actually learns. It isn't merely splitting existing images and making retouched collages out of them.
But the AI learns in a way that is akin to someone who doesn't actually have in-depth knowledge of hard rules like anatomy, color theory, or composition, and instead learns by tracing from their favorite artists. Some people actually learn how to draw this way, mostly fanart artists. The result is an ability to create polished knock-offs, but without real knowledge of how it all works, those artists can't step outside the copycat comfort zone. If they do, they start making technical mistakes. It still is not an outright copy or collage.
The AI is more like Greg Land in this respect.
Very much like Greg Land, it needs images to scavenge from in order to learn how to draw.
I'm very curious what would happen if people building art AIs went with a senior art teacher approach instead and tried putting those rather hard rules of artistic basics in before moving further. It'd probably have fewer problems generating hands...
Anyway, there's already technology for making images impossible for the AI to "understand", and thus impossible to learn from. So far it has been tested only as a tool to prevent AI from using protected images in deepfakes, but the basis is fairly simple: it uses another AI to fill the image with an invisible noise "layer" that makes the picture impossible to reuse.
I'm willing to bet it's going to develop further as long as there are people willing to pay to secure their images, whether photography or hand-drawn. There's demand for this, and where there's demand, the supply follows.
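For the curious, the basic idea behind that protective noise (tools like Glaze and Fawkes take it much further) can be sketched in a few lines: nudge each pixel in the direction that most confuses a vision model, while keeping the change too small to see. The model, epsilon value, and random input below are placeholders for illustration, not any shipping product's actual method:

```python
# Toy sketch of image "cloaking": an FGSM-style perturbation that is nearly
# invisible to people but disrupts what a vision model reads from the image.
# Requires PyTorch and torchvision: pip install torch torchvision
import torch
import torch.nn.functional as F
import torchvision.models as models

extractor = models.resnet18()  # untrained stand-in for a real feature model
extractor.eval()

def cloak(image: torch.Tensor, epsilon: float = 4 / 255) -> torch.Tensor:
    """Return the image plus a small perturbation that raises the model's loss."""
    x = image.clone().unsqueeze(0).requires_grad_(True)
    logits = extractor(x)
    target = logits.argmax(dim=1)          # the model's current "reading" of the image
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # One signed-gradient step per pixel, capped at epsilon and clamped to a valid range.
    cloaked = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return cloaked.squeeze(0).detach()

img = torch.rand(3, 224, 224)          # stand-in for a real photo
protected = cloak(img)
print((protected - img).abs().max())   # per-pixel change stays within epsilon
```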
Granted, they're better than they used to be, but I still find them unwatchable for more than a few seconds.
Richard Haseltine,
In some respects, it seems like the past is present again. I'm reminded of something that Richard Williams, the director of animation on Who Framed Roger Rabbit, said about animated films he first saw in the 1960s.
"... the Beatles' feature cartoon The Yellow Submarine. Though I liked the ... styling, the 'start-stop, stop-start' jerky quality of most of the animation meant that after half an hour much of the audience went to the lobby. No matter how stylish or inventive -- jerky or bumpy animation seems only to be able to hold the audience for about twenty-five minutes. While The Yellow Submarine had an authentic cult following from advertising agencies and the university crowd, the general public avoided the film. It killed the non-Disney feature market for years."
Then he goes on to talk about watching The Jungle Book when it hit the theaters, rhapsodizing about the animation.
"... I remember the boy Mowgli riding a black panther moving and acting in a cliched way -- until he got off. And suddenly everything changed. The drawing changed. The proportions changed. The action and acting changed. The panther helped the boy up a tree and everything moved to a superb lever of entertainment. The action, the drawing, the performance, even the colours were exquisite. Then the snake appeared and tried to hypnotize the boy and the audience was entranced. I was astonished.
"... Film executives at that time always said of animation, 'If it doesn't have the Disney name on it, no one will go see it.' But the real point is, it wasn't jus the Disney name -- it was the Disney expertise that capitvated the audience and held them for eighty minutes."
While we witness the astonishing developments in AI generated images sequences, it's worth comparing The Yellow Submarine versus the original Jungle Book, simply as a point of reference.
Cheers!
Oops! Double Post.
I was thinking the other day that some animations in the past were jerky, so it is odd that the issues with neural-net generated sequences are so bad. I used to like Roobarb and Custard, where the scratchiness was part of the style, and while the episodes were short, they were longer than I could watch these generated animations. Anime also tended to have the same effect, and, as you say, the Beatles animations, so it is odd.
I love the Monty Python style myself