AI is going to be our biggest game changer


Comments

  • Jabba Posts: 1,460

    Derivative vs Transformative is a legal issue... Stable Diffusion has been served legal papers in which the claim is that their AI-generated images are derivatives of the original copyrighted images, and so a breach of copyright has taken place. If the court agrees, this will effectively shut down the databases that AI art software uses.

    Stable Diffusion has not yet responded to this claim, but it is expected they will say that the AI images are transformative and not derivative, i.e. the AI images are sufficiently different from the originals that they do not impact the original artists. If the court agrees with this argument, AI software can continue using their image databases.

  • generalgameplaying Posts: 517
    edited February 2023

    Jabba said:

    Derivative vs Transformative is a legal issue...

    Oh right, apologies. It happens that "transformer" is also a term within the design of ChatGPT, for instance, so I got confused here, despite having read and watched content about exactly those terms in the legal context for this matter.

     

    "If the court agrees, this will effectively shut down the databases that AI art software uses."

    That may go for the models resulting from training. However, the underlying image collections probably needn't be affected, as they could be used for science in general, as well as for totally different kinds of applications, which needn't pose any issues. (Just nitpicking.)

    Post edited by generalgameplaying on
  • SolitarySandpiper Posts: 566
    edited March 2023

    been learning about dreambooth and lora this weekend...

     

     

    Post edited by SolitarySandpiper on
  • SnowSultan Posts: 3,633

    So you trained a 3D character as a LoRA? Can you run it through an anime model like Anything or AbyssOrangeMix and see what the results are? I'm really interested to know if you can get 2D results with a specific character whose training images were 3D renders.

  • SolitarySandpiper Posts: 566
    edited March 2023

    SnowSultan said:

    So you trained a 3D character as a LoRA? Can you run it through an anime model like Anything or AbyssOrangeMix and see what the results are? I'm really interested to know if you can get 2D results with a specific character whose training images were 3D renders.

    I haven't got anything trained (Dreambooth-wise) on an anime model, which would give the best results, but this is what a LoRA looks like attached to one...

     

     

     

     

    You can double up on them... i.e., Dreambooth-train a character on a model, then insert a LoRA file that has also been trained on the same character.
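    For anyone curious what "inserting a LoRA file" actually does: it stores a small low-rank update for each adapted layer, which gets scaled and added onto the frozen base weights; the "weight" slider in most UIs scales that update. A toy NumPy sketch of the idea (illustrative only, not the actual Stable Diffusion internals; all sizes and values here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one layer (toy sizes; real layers are much larger).
d_out, d_in, rank = 8, 8, 2
W = rng.standard_normal((d_out, d_in))

# A LoRA file stores only two small low-rank factors per adapted layer.
A = rng.standard_normal((rank, d_in))         # "down" projection
B = rng.standard_normal((d_out, rank)) * 0.1  # "up" projection (trained; starts at zero)

alpha = 0.8  # the LoRA strength slider

# Attaching the LoRA: effective weight = base + alpha * (B @ A).
W_merged = W + alpha * (B @ A)
```

    Because only `A` and `B` are stored, LoRA files stay tiny compared with a full Dreambooth checkpoint, which is why you can stack one on top of the other as described above.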

    Post edited by SolitarySandpiper on
  • Artini Posts: 9,666
    edited February 2023

    I have started with something simple...


    Mountains01.png
    512 x 512 - 432K
    Post edited by Artini on
  • Artini Posts: 9,666
    edited February 2023

    Or something more sophisticated...


    Beautiful01.png
    512 x 512 - 495K
    Post edited by Artini on
  • Artini Posts: 9,666
    edited February 2023

    A stylized one...


    Manga01.png
    512 x 512 - 483K
    Post edited by Artini on
  • Gogger Posts: 2,416
    edited February 2023

    For this image I used Herschel Hoffmeyer's Tyrannosaurus Rex 3 and made the background in MidJourney ( /imagine prompt: Steamy prehistoric jungle, lush, dense vegetation --ar 7:4 --v 4 ). I then used Photoshop's Neural A.I. filter "Landscape" to add a touch more realism than MJ gave me, which had the bonus effect of blending the T-Rex in a little more (though I still had to edit that bit a lot!). After enjoying the image for a couple of days, I decided it needed a Stegosaurus, because that was a staple of my childhood - dinosaurs that never existed together in history battling for their lives! So I rendered a Stegosaurus from Dinoraul and did some Photoshop-Foo on it. Overall I am pleased with the image and with how each element/tool I used added something to the whole that none of the tools alone were giving me. I love having all these tools!

    DS_Tyrannosaurus_Rex_Pursuing_Phantoms_3D_Erik_Pedersen.jpg
    3440 x 1440 - 766K
    Post edited by Gogger on
  • Artini Posts: 9,666
    edited February 2023

    What more can I say: once you pop you can't stop

    These tools are amazing...


    Manga02.png
    512 x 512 - 391K
    Post edited by Artini on
  • wolf359 Posts: 3,834
    edited February 2023

    What more can I say: once you pop you can't stop

    These tools are amazing...

     

     

    Quite true!!

    The animation stuff really has me intrigued

    Post edited by wolf359 on
  • Artini Posts: 9,666
    edited February 2023

    More colors and the cat


    cat03.png
    512 x 512 - 446K
    Post edited by Artini on
  • Artini Posts: 9,666
    edited February 2023

    Can one resist?


    cat07.png
    512 x 512 - 414K
    Post edited by Artini on
  • WendyLuvsCatz Posts: 38,493
    edited February 2023

    Took about 16 hours; it could have gone longer, as I had more footage.

    (It was a tiny old scratchy silent movie from the Library of Congress, of a woman dancing in B&W.)

    I used the Stable Diffusion ControlNet OpenPose, but it doesn't know that when she spins she faces away from the camera - to be fair, on a 320x240 video it's hard to tell.

    My processed AI one is of course HD. It used the prompt "Beautiful Eastern European woman dancing" and only really copied the poses.

    BTW, I put it here and not in the Mixing your art with AI thread as no DAZ renders were used.

    Post edited by WendyLuvsCatz on
  • NylonGirl Posts: 1,916

    WendyLuvsCatz said:

    I quite like the ControlNet OpenPose

    shame batch image won't work for me with it for making videos

    That sounds interesting. I wish they had a website or something that said specifically what it is and how it works. 

  • WendyLuvsCatz Posts: 38,493
    edited February 2023

    NylonGirl said:

    WendyLuvsCatz said:

    I quite like the ControlNet OpenPose

    shame batch image won't work for me with it for making videos

    That sounds interesting. I wish they had a website or something that said specifically what it is and how it works. 

    Oh, they do - it's called Google Search ;)

    I had to check something in settings 

    Capture.JPG
    1920 x 1040 - 175K
    Post edited by WendyLuvsCatz on
  • SnowSultan Posts: 3,633

    SolitarySandpiper said:

    SnowSultan said:

    So you trained a 3D character as a LoRA? Can you run it through an anime model like Anything or AbyssOrangeMix and see what the results are? I'm really interested to know if you can get 2D results with a specific character whose training images were 3D renders.

    I haven't got anything trained (Dreambooth-wise) on an anime model, which would give the best results, but this is what a LoRA looks like attached to one...

     

    Thanks for the examples. I guess you do need to train the 3D renders on an anime model to really get 2D results. I'll look forward to seeing someone try that sometime (if I don't try it myself).  :)

  • Itera Posts: 5

    I haven't done LoRA training, only textual inversion. I did train a 3D model on the vanilla 1.5 (just the face), then out of curiosity switched to another model to see if it would change the style, like turning a realistic human into an anime character. It did work.
    There were a couple of issues with it. In some cases it was not really my character, but the model's interpretation of what my character would look like.
    The style was also influenced by the embedding.
    In the anime version, it reduced the saturation of the colors. Another model was even more extreme.

    different_versions.jpg
    2400 x 800 - 1M
  • SnowSultan Posts: 3,633

    Did the vanilla 1.5 get the face of your 3D figure right? Also, if anime results are desaturated, use the Anything VAE with it.
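    On the desaturation point: swapping in the Anything VAE is the proper fix, but saturation can also be nudged back up in post-processing. A minimal sketch of such a boost (plain NumPy; `boost_saturation` is a hypothetical helper for illustration, not part of any tool mentioned here):

```python
import numpy as np

def boost_saturation(img, factor=1.3):
    """Scale each pixel's distance from its own gray value.

    img: float array of shape (H, W, 3) with values in [0, 1].
    factor > 1 increases saturation; 1.0 is a no-op.
    """
    # Per-pixel luminance (Rec. 709 weights).
    gray = img @ np.array([0.2126, 0.7152, 0.0722])
    gray = gray[..., None]
    out = gray + factor * (img - gray)
    return np.clip(out, 0.0, 1.0)
```

    A factor around 1.2-1.4 is usually enough; much higher and individual channels start clipping.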

  • inception8 Posts: 280
    edited February 2023

    From the Corridor Crew

    Lawyer Explains Stable Diffusion Lawsuit (Major Implications!)

    (if someone already added this then obviously carry on)

    Post edited by inception8 on
  • Itera Posts: 5
    edited February 2023

    SnowSultan said:

    Did the vanilla 1.5 get the face of your 3D figure right? Also, if anime results are desaturated, use the Anything VAE with it.

    Yes, it did, but it's not very consistent in and of itself. Depending on the seed used, it will change the eye color or the general shape of the face. I had more success linking it back to a 3D model via img2img. It takes tweaking. Didn't think about the VAE, thanks!
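    For context on that img2img step: its strength/denoise setting roughly controls how far the init image is pushed toward pure noise before the model denoises it back, which is why low values stay close to the 3D render. A toy sketch of just that forward-noising step (illustrative NumPy with a simple linear schedule; not the actual implementation, and `noise_init_image` is a made-up helper name):

```python
import numpy as np

def noise_init_image(x0, strength, num_steps=1000, rng=None):
    """Diffuse an init image to the timestep implied by `strength`.

    x0: float array (the encoded init image); strength in [0, 1].
    strength=0 keeps the image; strength=1 starts from (almost) pure noise.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    t = int(strength * (num_steps - 1))
    betas = np.linspace(1e-4, 0.02, num_steps)   # simple linear schedule
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    # q(x_t | x_0): scale the image down, mix noise in.
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
```

    Denoising then runs only from timestep t back to 0, so the fewer steps of noise you add, the more of the original image survives.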

    Post edited by Itera on
  • wolf359 said:

    What more can I say: once you pop you can't stop

    These tools are amazing...

     

     

    Quite true!!

    The animation stuff really has me intrigued

    TBH, if you look at any text in the generated frames, it's barely legible at best - mostly it comes out as rambling nonsense.

    The animations look good! 

  • Artini Posts: 9,666
    edited February 2023

    I am still amazed ....


    mask01.png
    512 x 512 - 506K
    mask02.png
    512 x 512 - 518K
    Post edited by Artini on
  • Artini Posts: 9,666
    edited February 2023

    Just one more - maybe an inspiration...


    mask06.png
    512 x 512 - 542K
    Post edited by Artini on
  • Artini Posts: 9,666

    Some futuristic prediction ...

     

  • Gogger Posts: 2,416
    edited February 2023

    When I'm not using MidJourney A.I. for generating backgrounds to use in renders I use DAZ to replace, say, in this instance, the eyes of an A.I. generated person.  All of the tools, freely mixing together for World Peace, and Love and Harmony.  ;)

    __VT_Shae_Fae_2_MJ_Erik_Pedersen.jpg
    1200 x 1920 - 308K
    Post edited by Gogger on
  • Artini Posts: 9,666

    Interesting, how did you replace only the eyes in the image?

     

  • Artini Posts: 9,666

    Another thought: how many parts of the AI-generated image need to be replaced

    for it to be counted as your own work, or at least fair use?

     

  • Artini said:

    Another thought: how many parts of the AI-generated image need to be replaced

    for it to be counted as your own work, or at least fair use?

    That isn't how it works - there is no rule that making X% of it new makes it OK.

  • Artini Posts: 9,666

    Thanks, Richard. It makes sense. One just needs to estimate this X% of new.

     

This discussion has been closed.