AI is going to be our biggest game changer
Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
Derivative vs Transformative is a legal issue... Stable Diffusion has been served legal papers in which the claim is that their AI-generated images are derivatives of the original copyrighted images, and so breach of copyright has taken place. If the court agrees, this will effectively shut down the databases that AI art software use.
Stable Diffusion's makers have not yet responded to this claim, but it is expected they will say that the AI images are transformative and not derivative, i.e., the AI images are sufficiently different from the originals that they do not impact the original artists. If the court agrees with this argument, AI software can continue using their image databases.
Oh right, apologies. It so happens that "transformer" is also a term in the design of ChatGPT, for instance, so I got confused here, despite having read and watched content about exactly those terms in the legal context of this matter.
"If the court agrees, this will effectively shut down the databases that AI art software use."
That may go for the models resulting from training. However, the underlying image collections probably needn't be affected, as they could be used for science in general, as well as for totally different kinds of applications, which needn't pose any issues. (Just nitpicking.)
been learning about dreambooth and lora this weekend...
So you trained a 3D character as a LoRA? Can you run it through an anime model like Anything or AbyssOrangeMix and see what the results are? I'm really interested to know if you can get 2D results with a specific character whose training images were 3D renders.
I haven't got anything trained (Dreambooth-wise) on an anime model, which would give the best results, but this is what a LoRA looks like attached to one...
you can double up on them... i.e., Dreambooth-train a character on a model, then insert a LoRA file that has also been trained on the same character.
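Mechanically, that stacking works because a LoRA is just a low-rank additive update to the base model's weight matrices, so it layers cleanly on top of a Dreambooth-trained checkpoint. A minimal numpy sketch of the idea (the shapes and the `apply_lora` helper are illustrative, not actual Stable Diffusion internals):

```python
import numpy as np

def apply_lora(W, A, B, alpha=1.0):
    """Effective weight after attaching a LoRA to one layer.

    W:     frozen base weight (out_dim, in_dim), e.g. from a Dreambooth checkpoint
    A, B:  the LoRA's low-rank factors, (rank, in_dim) and (out_dim, rank)
    alpha: strength, like the weight in a <lora:name:weight> prompt tag
    """
    return W + alpha * (B @ A)

rng = np.random.default_rng(0)
out_dim, in_dim, rank = 8, 16, 4           # rank << min(out_dim, in_dim)
W = rng.normal(size=(out_dim, in_dim))     # base model layer
A = rng.normal(size=(rank, in_dim))        # trained LoRA factors
B = rng.normal(size=(out_dim, rank))

W_off = apply_lora(W, A, B, alpha=0.0)     # strength 0: base model untouched
W_on = apply_lora(W, A, B, alpha=0.8)      # blend in the trained character
```

At strength 0 the base model is unchanged; raising the strength blends in more of the trained character, and stacking two LoRAs just adds two such updates.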
I have started with something simple...
Or something more sophisticated...
A stylized one...
For this image I used Herschel Hoffmeyer's Tyrannosaurus Rex 3 and made the background in MidJourney ( /imagine prompt: Steamy prehistoric jungle, lush, dense vegetation --ar 7:4 --v 4 ), then used Photoshop's Neural A.I. filter "Landscape" to add a touch more realism than MJ gave me. It had the bonus effect of blending the T-Rex in a little more (though I had to edit that bit a lot!). After enjoying the image a couple of days, I decided it needed a Stegosaurus in it, because that was a staple of my childhood: dinosaurs that never existed together in history battling for their lives! I rendered a Stegosaurus from Dinoraul and did some Photoshop-Foo on it. Overall I am pleased with the image and how each element/tool I used added something to the whole that none of the tools alone were giving me. I love having all these tools!
![](https://www.daz3d.com/forums/uploads/FileUpload/f5/316dbeec3ebffa8003987374064d19.jpg)
What more can I say: once you pop you can't stop
These tools are amazing...
Quite true!!
The animation stuff really has me intrigued
![](http://img.youtube.com/vi/LkpgN570B-M/0.jpg)
More colors and the cat
Can one resist?
took about 16 hours, could have gone longer as had more footage
(was a tiny old scratchy silent movie from the Library of Congress of a woman dancing in B&W)
I used the Stable Diffusion ControlNet OpenPose, but it doesn't know that when she spins she faces away from the camera; to be fair, on a 320x240 video it's hard to tell.
My processed AI one is of course HD; it used the prompt "Beautiful Eastern European woman dancing" and only really copied the poses.
BTW, I put it here and not in the Mixing Your Art with AI thread as no DAZ renders were used.
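For anyone curious about the mechanics: a video run like this is just the image pipeline applied frame by frame, with OpenPose extracting a skeleton from each source frame and the prompt driving everything else. A rough sketch of that loop, where `extract_pose` and `generate_frame` are hypothetical stand-ins for the ControlNet preprocessor and the Stable Diffusion call (the real thing uses the actual models and, as noted above, can take many hours):

```python
def extract_pose(frame):
    # Hypothetical stand-in for the ControlNet OpenPose preprocessor.
    # The real one returns a 2D skeleton map and, as noted above, cannot
    # tell a front-facing pose from a back-facing one.
    return {"skeleton_for": frame["index"]}

def generate_frame(pose, prompt, seed):
    # Hypothetical stand-in for a ControlNet-conditioned generation call:
    # the pose map constrains the figure, the prompt sets the look, and
    # reusing one seed helps keep the character consistent across frames.
    return {"pose": pose, "prompt": prompt, "seed": seed}

def stylize_video(frames, prompt, seed=42):
    """Apply the image pipeline frame by frame, as in a ControlNet video run."""
    return [generate_frame(extract_pose(f), prompt, seed) for f in frames]

# Toy stand-ins for the 320x240 source frames.
frames = [{"index": i} for i in range(3)]
result = stylize_video(frames, "Beautiful Eastern European woman dancing")
```

The per-frame independence is also why flicker is common: nothing ties frame N's output to frame N-1 except the shared seed and prompt.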
That sounds interesting. I wish they had a website or something that said specifically what it is and how it works.
oh they do, it's called Google Search
I had to check something in settings
Thanks for the examples. I guess you do need to train the 3D renders on an anime model to really get 2D results. I'll look forward to seeing someone try that sometime (if I don't try it myself). :)
I haven't done LoRA training, only textual inversion. I did train a 3D model on the vanilla 1.5 (just the face), then out of curiosity switched to another model to see if it would change the style, like turning a realistic human into an anime. It did work.
There were a couple of issues with it. In some cases it was not really my character but rather the model's interpretation of what my character would look like. The style was also influenced by the embedding: the anime version reduced the saturation of the colors, and another model was even more extreme.
Did the vanilla 1.5 get the face of your 3D figure right? Also, if the anime results are desaturated, use the Anything VAE with it.
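On the desaturation point, a quick sanity check is to compare the mean saturation of outputs decoded with each VAE; if swapping in the Anything VAE raises it, the VAE was the culprit. A stdlib-only sketch, with small pixel lists standing in for decoded images (the `mean_saturation` helper is illustrative, not part of any SD toolchain):

```python
import colorsys

def mean_saturation(pixels):
    """Average HSV saturation of (r, g, b) tuples with 0-255 channels."""
    if not pixels:
        return 0.0
    sats = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1] for r, g, b in pixels]
    return sum(sats) / len(sats)

# Grayish pixels: the washed-out look a mismatched VAE can produce.
washed_out = [(120, 110, 105), (130, 125, 118)]
# More like what a matching VAE should decode for the same latents.
vivid = [(200, 40, 40), (30, 180, 60)]
```

Here `mean_saturation(vivid)` comes out well above `mean_saturation(washed_out)`; in practice you would sample pixels from the two decoded images rather than hand-pick them.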
From the Corridor Crew
Lawyer Explains Stable Diffusion Lawsuit (Major Implications!)
(if someone already added this then obviously carry on)
Yes, it did, but it's not very consistent in and of itself. Depending on the seed used, it will change the eye color or the general shape of the face. I had more success linking it back to a 3D model via img2img. It takes tweaking. Didn't think about the VAE, thanks!
TBH, if you use text like I might in an animation, it's barely legible. Otherwise I'd say the text sucks / has no grandparents (and no scribe): "And dangers, and nothing relevant is contained, and I watched generations pass because I'm terribly old... YAWN/BURP/works drunk perhaps".
The animations look good!
I am still amazed ....
Just one more - maybe an inspiration...
Some futuristic prediction ...
When I'm not using MidJourney A.I. for generating backgrounds to use in renders I use DAZ to replace, say, in this instance, the eyes of an A.I. generated person. All of the tools, freely mixing together for World Peace, and Love and Harmony. ;)
![](https://www.daz3d.com/forums/uploads/FileUpload/33/c3375d304a56a77357fd011405bc6c.jpg)
Interesting, how did you replace only the eyes in the image?
Another thought: how many parts of the AI-generated image need to be replaced to be counted as your own work, or at least fair use?
That isn't how it works; there is no rule that making X% new makes it OK.
Thanks, Richard. It makes sense. One just needs to estimate this X% of new.