AI is going to be our biggest game changer

Comments

  • The AI system behind this is open source; you can use it with a Miniconda command prompt, Git CMD, and the source files for Stable Diffusion. It is a large download, but you can run the AI software on your own system if it can handle it. Of course the learning curve, disk space, and whether your system can handle it come directly into play. I played around with a Discord prompt last night, since reading about it here. It is impressive.

    (Attachments: five PNG renders, roughly 476-513 px square, generated from prompts such as "dark background with Morbid Angel style melting face", "Demon in the abyss holding a 12 headed serpent with...", "Frozen mountain with a long haired troll wearing lea...", and "Ogre mage from Dungeons and Dragons that looks like...".)
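    For anyone following the local-install route described above, here is a hedged sketch of what driving Stable Diffusion from Python can look like, using the Hugging Face diffusers library (an assumption: the poster used the CompVis repository scripts directly instead; the model id is illustrative, and the filename helper below is hypothetical, merely mimicking the attachment naming visible in this thread):

    ```python
    import re

    def prompt_to_filename(user: str, prompt: str, maxlen: int = 60) -> str:
        """Build an attachment-style name like the ones above:
        username + underscored, truncated prompt."""
        slug = re.sub(r"[^A-Za-z0-9]+", "_", prompt).strip("_")[:maxlen]
        return f"{user}_{slug}.png"

    def generate(prompt: str, user: str = "Chaosophia"):
        """Generate one 512x512 image locally. Heavy imports are kept
        inside the function so the helper above works without a GPU.
        Requires: pip install diffusers transformers torch
        (several GB of model weights download on first run)."""
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")
        image = pipe(prompt).images[0]
        image.save(prompt_to_filename(user, prompt))
    ```

    This matches the poster's experience: the first run is a very large download, and everything after that happens on your own hardware.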
  • generalgameplaying Posts: 517
    edited October 2022

    Chaosophia said:

    The AI system behind this is open source; you can use it with a Miniconda command prompt, Git CMD, and the source files for Stable Diffusion. It is a large download, but you can run the AI software on your own system if it can handle it. Of course the learning curve, disk space, and whether your system can handle it come directly into play. I played around with a Discord prompt last night, since reading about it here. It is impressive.

    "On your system": the training data isn't, the training data is baked in to the machine-learning component of the software, it's already been trained with huge amounts of data, using powerful hardware. So the consistency of the innerts... it may be Open Source, but no one understands much from looking at a neural network, likely for this case.  Or if i were to train it, i would have terribly few data to feed it with :).

    Post edited by generalgameplaying on
  • This was what was done on the Discord server. I haven't even begun to try Stable Diffusion itself yet. :)

     

  • As for the data, it downloads it as it compiles; from the test run I did, one package was over a gig, so it is resource-heavy. With a slow net connection, I just stopped the prompt and am going to wait till they install the higher-speed net in a few days.

    generalgameplaying Posts: 517
    edited October 2022

    Chaosophia said:

    As for the data, it downloads it as it compiles; from the test run I did, one package was over a gig, so it is resource-heavy. With a slow net connection, I just stopped the prompt and am going to wait till they install the higher-speed net in a few days.

    This is a bit tricky. It comes with data, i.e. if I judge right, low-resolution images, which then get upscaled and somehow combined for the results. But the neural networks of the deep-learning system are already trained, and training them might be beyond some Tesla cards and demand more input data than what they distribute with the software.

    https://en.wikipedia.org/wiki/Stable_Diffusion

    Citation: "A third-party analysis of the model's training data identified that out of a smaller subset of 12 million images taken from the original wider dataset used, approximately 47% of the sample size of images came from 100 different domains, with Pinterest taking up 8.5% of the subset, followed by websites such as WordPressBlogspotFlickrDeviantArt and Wikimedia Commons.[23][14]

    The model was trained using 256 Nvidia A100 GPUs on Amazon Web Services for a total of 150,000 GPU-hours, at a cost of $600,000.[24][25][26]"

     

    So a lot of GPUs for training the weights in the deep-learning network. And I was still wrong: they already used images from DeviantArt ;).
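    The cited training figures can be sanity-checked with a bit of arithmetic:

    ```python
    # Figures quoted from the Wikipedia article above.
    gpus = 256           # Nvidia A100s
    gpu_hours = 150_000
    cost_usd = 600_000

    wall_clock_days = gpu_hours / gpus / 24    # how long the cluster ran
    cost_per_gpu_hour = cost_usd / gpu_hours   # implied cloud rate

    print(round(wall_clock_days, 1))  # ~24.4 days
    print(cost_per_gpu_hour)          # 4.0 dollars per GPU-hour
    ```

    So roughly a month of wall-clock time on the whole cluster, at an implied rate of about $4 per GPU-hour; far beyond what a hobbyist could retrain at home, which is the point being made here.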

     

    (Edit, to comment on the infringing part myself: as long as the toolmakers don't host the stuff themselves, the users of course have full responsibility for posting such results. Maybe they have some sort of liability-exclusion formula in their TOS, to be on the safe side. So at this stage, the upload filters are with the platforms where users publish the results, though the toolmakers might still prevent nudity and the like to some extent, just by the training. Anyway, I'm curious what a start-up has for a business case, spending $600,000 and certainly more for starters. So far this is a showcase, and it's remarkable. Maybe they get bought by someone bigger, as the interview linked at Wikipedia suggests: "Startup Behind AI Image Generator Stable Diffusion Is In Talks To Raise At A Valuation Up To $1 Billion". There is a lot of potential; the question remains: in what direction?)

    Post edited by generalgameplaying on
  • Oh I agree on the where-it-is-coming-from aspect. I am wondering if it seeks a CC0-type or open-source licence when pulling source images, but I wouldn't think it would; that depends on the dev team and how savvy they wish to be on the infringement issues. I figured it was a pool of royalty-free images collected with tag words and then filtered through the program, hence the initial download package for the model base. When I ran the script prompt and the downloads started, I cut it off, because of (1) bandwidth issues on slow net with others using it at the time, and (2) the size of the download. While I have the space on my drives, if that is the process it will fill up quick, and be a headache to get rid of the aftermath.

    It is still a pretty cool thing to play with if you're not trying to exploit it for commercial use; with it pulling images that are copyrighted, that can definitely be a BIG issue. The thing I am seeing from some of the AIs, where you have to get credits to use the prompts, is that they say you can use the images however you see fit. Meaning commercially. Trust me, it was a cool feeling to just say what you want and poof, there it is, now tweak it, and the speed it went at was OMG; now if only Daz rendered that fast. LOL. But honestly not worth the headache of "is this legal to use".

    Another point is all the user agreements: how long before you have to "allow AI usage" of whatever you post? You know someone will sneak that into their TOS at some point, if it hasn't been done already.

    Now don't get me wrong, this is awesome; just playing with it last night was crazy. And the code being open source is a double-edged sword, but it can be fun to use for practical leisure. It would have to pull from an open-source library, and that seems like it will be too hard to police, and a headache for artists to keep up with DMCAs. And the demand for it will almost certainly bring forth a slew of "individuals or groups of individuals" that will surely exploit it. Just like everything else "online".

     

  • Now what would be really awesome is if you could have Daz open, give it text, and Daz pulls from your library of content and tries to do the same as the AI art creator, but with 3D content. I mean, sure, it takes away the fun element of tweaking the 3D models etc., but it would be a fun thing to play with while bored or something. Plus, you could actually use the result.

    generalgameplaying Posts: 517
    edited October 2022

    If I am right that they include low-resolution images from the internet in their software, it is very likely, especially after the Cambridge Analytica case, that they are aware of distributing other people's works, and likely they have made sure to at least only use appropriately tagged or generally legal-to-use images. Likely not hand-picked, though. They do need textual descriptions of the images, tags or captions, so the system can be trained at all, so it has to be as reliably tagged as possible. On the other hand, a little deviation also makes the results more interesting at times.

     

    It's rather interesting where they will be going with the upscaling and the deep learning. Is there something fundamental and general to it, so that there are totally new applications, or is it just a very promising art thing, though likely with links into more general realms like 'perception'? I wouldn't be surprised if, at some point, we discover more potential constructive parts of our own brains :), with some trick like "thinking of upscaling". But that's not really even an educated guess; I just meant to illustrate potential that may lead further than content creation. Content creation is a big thing, though.

     

    ***

     

    Edit: I have to correct this potato post. I was initially using general (and correct) information about the topic, and then skimmed through some article to focus on one aspect, but obviously here it went a little bit into the wall.

     

    UPSCALING: Nope. This is a potential use for the underlying method. It could also be interesting for intelligence and surveillance, as well as for all sorts of other applications. De-noising data is part of the system, but that's a slightly different thing.
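    To illustrate what "de-noising is part of the system" means, here is a toy NumPy sketch (illustrative only, not the actual latent-diffusion math): the forward process mixes a clean signal with noise, and the trained network's whole job is to predict that noise so it can be subtracted back out.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A "clean" 1-D signal standing in for an image.
    x0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))

    # Forward diffusion step: keep a fraction of the signal, add noise.
    alpha = 0.3
    noise = rng.standard_normal(64)
    xt = np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * noise

    # A denoiser that predicted the noise perfectly would recover x0
    # exactly; the neural network is trained to approximate that
    # prediction, and generation runs many such steps starting from
    # pure noise.
    x0_recovered = (xt - np.sqrt(1.0 - alpha) * noise) / np.sqrt(alpha)

    print(np.allclose(x0_recovered, x0))  # True
    ```

    The weights of the real network encode what it learned about images in general, which is why the download is gigabytes of weights rather than a pile of source images.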

     

    So let's assume they're not explicitly distributing low-res images; in fact, even internally they excluded low-res images in a certain training phase, which just adds to the confusion, so who knows where I got that from.

    Let's focus on original works, and assume I just tried to stay fuzzy, using the didactic trick of hinting at the question of whether original images might be reproduced to an extent that is like using them directly (more or less), which could boil down to a very similar thing. Training data may be extractable from neural networks, to some extent. The main point, however, is the legal and societal question: 1) Do you want to see your images/works/data processed and used for AI development? 2) Do you want to post your works on websites that make you sign off on them using or selling your work for AI development? That's of course relevant to artists. Apart from that, everything public will probably be (ab)used by some entity; however, do note that using everything commercially within a legal system is a different kind of thing than being an intelligence service...

     

    ***

    Potato 2: Due to the ability of the system to just put parts of original images into the result (seemingly?), this remains somewhat fuzzy, and I have still not found a concise explanation of what exactly the distributed software contains. It doesn't make a huge difference, perhaps, but still.

    For instance here: https://ommer-lab.com/research/latent-diffusion-models/

    We read "Additionally, their formulation allows to apply them to image modification tasks such as inpainting directly without retraining.".

    This doesn't say the software comes with images, but that such a system CAN, more or less believably, "inpaint" something from a given image into another one, in theory.

     

    In the end you'll go to some source like: https://github.com/CompVis/stable-diffusion

    "a Safety Checker Module, to reduce the probability of explicit outputs," - Oh no this is the nude question, maltranslated.

    If you look at some of the pre-trained models here: https://github.com/openai/guided-diffusion#download-pre-trained-models

    ... it looks like the models can easily have a size of gigabytes.

     

    This discussion is rather about the future question, i.e. "Where do they take the training and reference data from?"

    The other point being that someone could dump money into running this kind of thing with higher-resolution images taken from anywhere (tagging still necessary); possibly it could be applied to video directly. It could play a part in suspect recognition, for instance: render specific people with specific clothing, and folds of clothing, into a low-res surveillance camera recording, to let humans or another system distinguish whether potential matches actually are realistic matches. More plainly: "Was that subject B, and did they carry a gun?" Just with an interactive and pretty fast-to-implement process. There is a ton of applications in various fields, so perhaps the question of "why someone should buy it" is not so relevant, until someone does :)...

    Post edited by generalgameplaying on
  • generalgameplaying Posts: 517
    edited October 2022

    Chaosophia said:

    Now what would be really awesome is if you could have Daz open, give it text, and Daz pulls from your library of content and tries to do the same as the AI art creator, but with 3D content. I mean, sure, it takes away the fun element of tweaking the 3D models etc., but it would be a fun thing to play with while bored or something. Plus, you could actually use the result.

    That's what I thought too, for a tool-like application of the underlying method. I don't know if the number of assets and the textual tags are both sufficient and spot-on enough to do this. After all, it's also quite a thing to render all assets from different angles (and how... meshes, Iray, outlines...), and then there remains the question of what the results will look like if you feed in individual assets instead of "ok-to-best-rated images from the internet". Maybe both in the end, but there are a lot of questions as to what to train with and so on. Maybe the textual part can be trained more independently of the image part in the end (if it can be separated that way at all, maybe some day), which could enable people to train the system with fewer, or specifically tagged, images at some point in the future.

    Apart from that, the ability to tell which assets were used, which such a system could have, could be interesting, also if it could combine a set of images you provide yourself and start using/referencing/altering them. That's a slightly different application, though. Edit: Clarification: that's pretty much exactly one of the capabilities of the underlying system. Further interesting: how would it look, using .... sorts of shaders... just for the nose...

    Another future application for machine learning in general could be something similar, like adding assets YOU own to a 3D scene (Stable Diffusion is 2D, so maybe a 2D post-work tool is more realistic, though it could still be applied to 3D using another machine-learning system for asset placement, given that the first system can tell what it did and where it placed things in the 2D image...), or filling in generic assets in post, just to let you see whether it could work, all based on a textual description.

    So the example I have in mind is like this: "Create a spiraling flock of excited and aggressive crows in the background behind the arch." The system will maybe first slam it onto a rendered image (post-work, or just what Stable Diffusion does), and once that is deemed ok, after an interactive and iterative process, the other system actually places your crows in your 3D scene. Again, there is some magic contained, like knowing how to pose them; say yet another machine-learning system does the posing, or you have to select the assets and poses or elements of manipulation manually, for starters. The potential of a system like Stable Diffusion in post-work would of course include things like actually changing the appearance of the crows in ways the pure assets don't provide, like making them look more "aggressive".

    This is a tool-like application that enables artists to work differently, even if rather classic tools for scattering and the like already exist. In fact this and others, and DAZ Studio already as it is, enable people to tell stories in images even if they can't draw, or are not so good at describing what they want, for instance. Especially in the latter case, such AI tools could become very interesting. But it's a longer way there, even with something like Stable Diffusion popping up. The dystopia would be everyone using the cloud service of the next buyer of the technique, and no one resorting to actual artwork anymore. All streets lead to Rome ;)...

    Post edited by generalgameplaying on
  • I wish they'd cite the original source material used to produce the output.  I'd like to see what the humans had created.

     

  • charles Posts: 849

    RangerRick said:

    I wish they'd cite the original source material used to produce the output.  I'd like to see what the humans had created.

     

    One would think that this should be a legal requirement. However, that would probably also place not only the piece but the engine in general in legal jeopardy, if any sampling were shown to come from a copyrighted piece that hadn't opted in. And when you can have a computer running the AI from any country, how would one enforce copyright infringement coming out of less regulated copyright areas of the world?

  • WendyLuvsCatz Posts: 38,493

    RangerRick said:

    I wish they'd cite the original source material used to produce the output.  I'd like to see what the humans had created.

     

    https://en.wikipedia.org/wiki/LAION 

  • superlativecg Posts: 140
    edited October 2022

    I used a couple of AIs to voice and lip-sync this Genesis 8 character that I imported into Unreal Engine 5. The voice was generated using Microsoft's Azure text-to-speech, available on their demo page. The lip-syncing was done using a free tool on GitHub called Wav2Lip.
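    For reference, Wav2Lip is driven from the command line via its inference.py script; a small Python wrapper might look like this (flag names follow the project's README, while the paths and helper names here are illustrative):

    ```python
    import subprocess

    def wav2lip_cmd(face, audio, outfile,
                    checkpoint="checkpoints/wav2lip_gan.pth"):
        """Build the Wav2Lip invocation: `face` is the video (or image)
        of the character, `audio` the speech track, e.g. a WAV exported
        from Azure text-to-speech."""
        return [
            "python", "inference.py",
            "--checkpoint_path", checkpoint,
            "--face", face,
            "--audio", audio,
            "--outfile", outfile,
        ]

    def run_wav2lip(face, audio, outfile):
        # Run from inside a checkout of the Wav2Lip repository,
        # with its pretrained checkpoint downloaded separately.
        subprocess.run(wav2lip_cmd(face, audio, outfile), check=True)
    ```

    The output video can then be imported back into Unreal (or any editor) as the lip-synced take.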

    Post edited by superlativecg on
  • WendyLuvsCatz said:

    RangerRick said:

    I wish they'd cite the original source material used to produce the output.  I'd like to see what the humans had created.

     

    https://en.wikipedia.org/wiki/LAION 

    Thanks!  LAION's FAQ was interesting.

     

  • ghastlycomic said:

    More than better, faster render engines I think AI will be the biggest game changer as far as 3D computer animation is concerned. I have a feeling this will mean instead of being modellers and animators we'll all become directors. Can't wait.

    https://www.youtube.com/watch?v=ZXFmZsv0Ddw

    Imagine when we get to the point where we're able to describe to our computers what we want and then work on refining how the computer interprets our wishes.

     

    I prefer the red pill.

  • generalgameplaying Posts: 517
    edited October 2022

    superlativecg said:

    I used a couple of AIs to voice and lip-sync this Genesis 8 character that I imported into Unreal Engine 5. The voice was generated using Microsoft's Azure text-to-speech, available on their demo page. The lip-syncing was done using a free tool on GitHub called Wav2Lip.

    I would assume specialized tools like that to be the bigger thing for artists. It looks like the results of Stable Diffusion are "interesting", but still resource-intensive, and soon repetitive or "familiar-ish". The underlying technique and theory have more potential than that, some of it yet to be discovered.

     

    charles said:

    One would think that this should be a legal requirement. However, that would probably also place not only the piece but the engine in general in legal jeopardy, if any sampling were shown to come from a copyrighted piece that hadn't opted in. And when you can have a computer running the AI from any country, how would one enforce copyright infringement coming out of less regulated copyright areas of the world?

    I am not sure where the law is at right now, and in what country, concerning that. It's being discussed, and I wouldn't exclude the possibility that either "social networks" change their TOS to allow it, or add features to distinguish what content you want to be used for what; nor would I exclude the faint possibility that some player just does it, similar to Cambridge Analytica or PimEyes, knowing that it's hard to prove. With the capability of the system to just put explicit bits of original pictures into the output, though, I am not sure such would really work out. In the end the user will be responsible for posting the results, as is the case now, so one shouldn't use such services blindly, should they emerge from uncertain data origins. You probably have to be nearly as wary about using such an image as you should be about using bits and pieces of video or music in your work which other people might have made (always know the origins).

     

    (To clarify on the source of images: by now it looks like, for the "small system" that is distributed as Stable Diffusion, they used public sources, but not at random. Likely they used legal-to-use material, or even asked. Or it's still not even covered by law what data you may train an AI with, so they only asked the social networks which host the data. This may be researchable; apologies for my laziness. A university is involved, so it's likely not fully random what they're doing; perhaps it's even seen as a "science project" with a publicly available outcome, which may make the legal side easier to deal with. However, how to do something similar for real now, if you need masses more of training data...)

    Post edited by generalgameplaying on
  • Paintbox Posts: 1,633

    superlativecg said:

    I used a couple of AIs to voice and lip-sync this Genesis 8 character that I imported into Unreal Engine 5. The voice was generated using Microsoft's Azure text-to-speech, available on their demo page. The lip-syncing was done using a free tool on GitHub called Wav2Lip.

    I enjoyed the joke :)

  • metaside Posts: 178
    edited October 2022

    Maybe this question has been answered already, but I have read different takes on what one can do with the images generated by Stable Diffusion. As I understood the license, I expect it to be ok to use all generated images for commercial purposes; other comments I read said that the situation is still very much unclear, or even that everything made based on these images becomes Creative Commons. So I basically have no idea whether it is already settled and, if so, what the legal situation is.

    I'm specifically thinking about using some of the pictures in game projects that will probably become commercial:

    - textures for models used in the game (SD is really great for this imho! no need for hands or whatever)

    - in-game "artistic" pictures in frames

    - loading screens or menu backgrounds or other UI elements

    - in-game backgrounds or 2D objects in some form

    Is it already clear whether all these uses are allowed with content generated in SD? Does it affect the final game and how it can be distributed in any way? And if it is ok to use the pictures in commercial games, is there a specific way in which I have to "cite" SD in the finished game?

    Thanks for all info and thoughts on this in advance!

    Post edited by metaside on
  • generalgameplaying Posts: 517
    edited October 2022

    metaside said:

    I'm specifically thinking about using some of the pictures in game projects that will probably become commercial:

    My scepticism concerns possible conflicts with the rights holders of images used as training data, which don't always seem to be the social networks. In addition, I can't foretell, e.g., whether the EU will come forth with another round of "implicit consent doesn't count", this time for AI being trained on your art... the law may get updated within the next few years. Importantly, in this case you have an "AI" that is capable of filling images used in the training data pretty much 1:1 into a resulting image, in theory, +- if you're lucky/unlucky. They scraped DeviantArt, for instance... ok for science, but beyond that, really? And for how long...

     

    The license of the code project: https://github.com/CompVis/stable-diffusion/blob/main/LICENSE

     "6. The Output You Generate. Except as set forth herein, Licensor claims no rights in the Output You generate using the Model. You are accountable for the Output you generate and its subsequent uses. No use of the output can contravene any provision as stated in the License."

    For me this reads as if I have to take care of other people's copyright and so on. If I were to use output images, I would consult a (specialized) lawyer, or wait for one to comment specifically on this topic. I assume you could use outputs in commercial works - but you don't have copyright (not a lawyer, just reading), and it's still at your own risk, e.g. if it recreates a protected image/thing by accident or by its nature. Unlikely maybe, and particularly improbable if you check the outputs for potential copyright edges; foot in jail... maybe too ;).

     

    Quick stumbling across the internet (no proof):

    https://www.theverge.com/2022/9/15/23340673/ai-image-generation-stable-diffusion-explained-ethics-copyright-data

    "Unlike DALL-E, it’s easy to use the model to generate imagery that is violent or sexual; that depicts public figures and celebrities; or that mimics copyrighted imagery, from the work of small artists to the mascots of huge corporations."

    Looks like the "up to people"-part a little further down means, that people might have no idea about violating other people's rights, in the worst case.

    Illustration by me (not from the article): "The AI just placed twelve pairs of a red and a green hydrant each, crossed like a cross. Alas, it was a modern artist's work on AI, and we got sued."

    At the end of the article, a few of my maybes turn into definites with "We know for certain that LAION-5B contains a lot of copyrighted content."

    So this means you do have to carefully craft whatever you're doing, which includes using images generated or altered by this kind of deep-learning system!

     

    Others:

    https://petapixel.com/2022/08/19/ai-image-generators-could-be-the-next-frontier-of-photo-copyright-theft/

    https://www.siliconrepublic.com/machines/ai-generated-images-legal-risks-copyright

    https://analyticsdrift.com/getty-images-bans-ai-generated-content-over-litigation-concerns/

     

    (And of course feel free to ignore my texts, as well as to consult other people anyway.)

    Post edited by generalgameplaying on
  • evacyn Posts: 975

    I'm starting to see more people posting AI-created images on Twitter, Reddit, Instagram, DA, etc. without identifying them as such, without crediting the source image/artist or, even worse, taking credit for their "full" creation.

    I love the prospect of AI-assisted art and finding ways to incorporate it into artwork, but I think fully AI-created images need to be kept off of dedicated art platforms (like DA, ArtStation, etc.) because it has moved from "I created this cool image based on influences from X artists" to flat-out forgeries. I get that you can type in some words to mimic Frazetta's style, but your involvement in its creation is akin to your barista skills when ordering a latte through the Starbucks drive-thru window.

  • WendyLuvsCatz Posts: 38,493
    edited October 2022

    Facebook groups are rather entertaining

    there are those posting AI-generated waifus with huge boobies and those loudly complaining about them

    I don't care either way, DAZ artists are much the same as far as that divide but nowhere near as vocal

    no nudity etc, in fact lots look like renders in clothes I wish DAZ sold (the pretty anime dresses) just overly well endowed

    I honestly don't get the outrage

    the ones objecting would probably freak out seeing the clothing sold here because the FB outfits are modest by comparison 

    just the wearers rather over blessed by silicone

    I tried one myself but it went really badly blush

    didn't dare share surprise

    (no, the ones on FB actually look nice, NOT like my monstrosity)

    Note Mods, you can remove the image if it goes against the TOS, I never expected such a horror myself

    (image removed)

    Post edited by Cris Palomino on
  • metaside Posts: 178
    edited October 2022

    generalgameplaying said:

    Illustration by me (not from the article): "The AI just placed twelve pairs of a red and a green hydrant each, crossed like a cross. Alas, it's been a modern artist's work on AI, and we got sued."

    At the end of the article, a few of my maybes turn into definites with "We know for certain that LAION-5B contains a lot of copyrighted content." So this means you do have to carefully craft whatever you're doing, which includes using images generated or altered by this kind of deep-learning system!

     

    Thank you very much for your thoughts on this subject and all the links! I absolutely see your points. A few additional thoughts on this. As I understand it, the pure text-to-image prompts are the most problematic area, since these may generate anything and are produced with little user input. Theoretically, I would assume that pure text-to-image prompts have a much higher chance of producing something copyrighted compared to image-to-image using your own art or simple scribbles as the image prompt, especially when the image prompt has a high influence? Also, I would assume that using it for textures with some postwork and some image-to-image iterations should be less problematic, but I'm gonna wait for some decisions on this subject before publishing a game with anything ML-generated... I guess there's also the possibility that someone makes a new model based only on CC works...

    Post edited by metaside on
  • Byrdie Posts: 1,783

    Some people are into training their own models. You might want to look into that; there may also be ones around that are purely public domain/CC based. 

  • generalgameplaying Posts: 517
    edited October 2022

    @metaside Yes, with your own art as a base, using the software to alter the images will be much less prone to copyright issues. (This would still be text+image to image; the text part constitutes the "magic", in a way.) There may be a million specific applications, each with a very specific set of training data.
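    In diffusers terms, that text+image-to-image mode is the img2img pipeline, where a strength parameter controls how far the model may wander from your own source image (a hedged sketch: the model id is illustrative and the helper functions are hypothetical):

    ```python
    def check_strength(strength: float) -> float:
        """img2img strength: 0.0 keeps the input image untouched,
        1.0 effectively ignores it."""
        if not 0.0 <= strength <= 1.0:
            raise ValueError("strength must be between 0.0 and 1.0")
        return strength

    def restyle(init_image_path: str, prompt: str, strength: float = 0.35):
        """Alter your own artwork under a text prompt. Heavy imports
        live inside the function; requires diffusers, transformers,
        torch, and Pillow, plus a CUDA GPU."""
        check_strength(strength)
        import torch
        from PIL import Image
        from diffusers import StableDiffusionImg2ImgPipeline

        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")
        init = Image.open(init_image_path).convert("RGB").resize((512, 512))
        return pipe(prompt=prompt, image=init, strength=strength).images[0]
    ```

    A low strength keeps the composition of your own image and only restyles it, which matches the lower-risk workflow discussed here.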

     

    Byrdie said:

    Some people are into training their own models. You might want to look into that; there may also be ones around that are purely public domain/CC based. 

    Of course there will be different fields of application and different sets of training data, which will lead to largely risk-free applications. In theory it could always go wrong somehow, but depending on the training data and the application, the probability may be near that of a "spontaneous materialization of a rabbit".

    Some of the magic of the current approach is that you can "talk about" all sorts of contemporary as well as historic topics, rendering memes, but also stories relating to real life. This will remain difficult if you want zero risk concerning copyright. The largest data sets come with unclear or uncertain copyright status, and some even demand consent from corporations just for scraping them, so that kind of application remains half a minefield, already for the application developers. Apart from that, I am absolutely sure that there will be mostly-harmless-to-use applications, both more and less specialized ones.

    As mentioned before, DAZ could render stuff from their store from certain angles and incorporate that into a training data set, though that'll probably be ridiculously expensive just for the hardware and time. So I guess simpler, similar approaches may happen, like "for all textures" or post-work effects, with the consent of artists, maybe for promo renders, who knows. It needn't mean that artists sign off their rights; the output could have a specific license for the DAZ3D Gallery maybe, or even just for personal reference as a tool, or it'll be a postwork utility just adding effects to existing images (though there may be more efficient deep-learning systems possible to build just for that). This and similar fields will stay fluid, for sure...

    Post edited by generalgameplaying on
  • Dandene Posts: 162

    WendyLuvsCatz said:

    Facebook groups are rather entertaining

    there are those posting AI-generated waifus with huge boobies and those loudly complaining about them

    I don't care either way, DAZ artists are much the same as far as that divide but nowhere near as vocal

    no nudity etc, in fact lots look like renders in clothes I wish DAZ sold (the pretty anime dresses) just overly well endowed

    I honestly don't get the outrage

    the ones objecting would probably freak out seeing the clothing sold here because the FB outfits are modest by comparison 

    just the wearers rather over blessed by silicone

    I tried one myself but it went really badly blush

    didn't dare share surprise

    (no the ones on FB actually look nice, NOT like my monstrosity) 

    Note Mods, you can remove the image if it goes against the TOS, I never expected such a horror myself

    I feel like I saw that in an Anime once.  cheeky I knew it would be a matter of time before people started using AI to generate waifus hehehe. 

    I will say that for the most part, most of the artists I follow on socials are still sticking to Daz for making pinups, waifus, and such.  I'm sure that'll change soon.  Many of them are almost strictly posting other AI art at this point, so I can see them venturing into that territory.  I know some sites/apps have strict rules about what you can and cannot generate.

  • charles Posts: 849

    WendyLuvsCatz said:

     

    I don't care either way, DAZ artists are much the same as far as that divide but nowhere near as vocal

    no nudity etc, in fact lots look like renders in clothes I wish DAZ sold (the pretty anime dresses) just overly well endowed

     

    I don't know if I read this wrong or you just don't visit the same Daz sites as me laugh 

  • WendyLuvsCatz said:

    Facebook groups are rather entertaining

    there are those posting AI-generated waifus with huge boobies and those loudly complaining about them

    I don't care either way, DAZ artists are much the same as far as that divide but nowhere near as vocal

    no nudity etc, in fact lots look like renders in clothes I wish DAZ sold (the pretty anime dresses) just overly well endowed

    I honestly don't get the outrage

    the ones objecting would probably freak out seeing the clothing sold here because the FB outfits are modest by comparison 

    just the wearers rather over blessed by silicone

    I tried one myself but it went really badly blush

    didn't dare share surprise

    (no the ones on FB actually look nice, NOT like my monstrosity) 

    Note Mods, you can remove the image if it goes against the TOS, I never expected such a horror myself

    Reminds me of the Diana of Ephesus statue from the classical era, or the lady from Total Recall.

  • WendyLuvsCatz Posts: 38,493
    edited October 2022

    charles said:

    WendyLuvsCatz said:

     

    I don't care either way, DAZ artists are much the same as far as that divide but nowhere near as vocal

    no nudity etc, in fact lots look like renders in clothes I wish DAZ sold (the pretty anime dresses) just overly well endowed

     

    I don't know if I read this wrong or you just don't visit the same Daz sites as me laugh 

    I actually only follow Renderosity and a general 3D group for DAZ, Poser, Carrara, Bryce etc., as many of the Facebook groups annoyed me with people spamming the same renders 10X a day cheeky

    so admittedly I don't see all that's out there

    but still see lots of BDSM and big tiddy renders that put the AI ones upsetting the Stable Diffusion groups to shame

    the ones that upset them are basically just Final Fantasy and Genshin Impact style Characters with huge boobs

    they call it porn

    I would not go so far as to say that

    many think the faces are children but I just see Aiko 3 myself

    Post edited by WendyLuvsCatz on
  • I posted this in another thread here, but thought it might be of interest:

     

    InTheFlesh said:

    Been away from the Daz scene for a while, and now I'm eagerly awaiting the release of Genesis 9 to see if the changes bring any significant improvements to my Genesis 8/8.1 character creation workflow. But here's a WIP of Henry Cavill I've been working on utilizing a bunch of custom tools I built:

     

     

    I've been delving into AI, specifically super-resolution GANs, and training my own models from scratch, tuned to very specific purposes. Some that I've used on this project so far include a skin-texture-specific AI upscaling model, a skin detail enhancing model, and a skin detail displacement generating model:

     

     

    Pretty cool, but super complicated and time-consuming to research and develop.

    But let's see if Genesis 9 has something interesting to bring my focus back to Daz 
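    The upscaling step at the heart of super-resolution networks like the ones described above is usually sub-pixel convolution ("PixelShuffle"): the network produces extra channels, which are then rearranged into spatial resolution. A minimal NumPy sketch of just that rearrangement, following the semantics of PyTorch's `nn.PixelShuffle` (the convolutional layers and adversarial training are omitted; the function name is mine):

    ```python
    import numpy as np

    def pixel_shuffle(x, r):
        """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

        Each group of r*r channels contributes one r-by-r spatial block,
        which is how a super-resolution net turns channel depth into
        output resolution.
        """
        c, h, w = x.shape
        assert c % (r * r) == 0, "channel count must be divisible by r*r"
        out_c = c // (r * r)
        x = x.reshape(out_c, r, r, h, w)
        x = x.transpose(0, 3, 1, 4, 2)  # -> (out_c, h, r, w, r)
        return x.reshape(out_c, h * r, w * r)
    ```

    For a 2x upscaler the final convolution would emit 4x the target channels, and this rearrangement doubles height and width; it is cheaper and less artifact-prone than transposed convolution, which is why the SRGAN family uses it.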

  • Dandene Posts: 162

    This article popped up in my feed earlier today. I'm not sure how this guy thought he'd get away with accusing the artist; the timing made it fairly obvious something was up. And of course there were the usual goofy AI mistakes. They defended themselves till the end, fought and died on that hill hehe.

    https://kotaku.com/genshin-impact-fanart-ai-generated-stolen-twitch-1849655704

    A new fear unlocked for many artists, I imagine.  As mentioned in the article, it will probably make folks more wary of posting WIP online.

This discussion has been closed.