Comments
This is not true. I checked with https://haveibeentrained.com/ and found at least one of my images in that collection, as well as several others I recognize from other 3D creators.
Also, as someone who's been using dA for years, their course of action was a blow to the nethers: initially everybody was automatically opted in with all their images (1.1k images to manually opt out, yeah, thanks a lot). They changed this only after a veritable shitstorm came down on them on social media. Plus, this option only concerns training for external AIs. If you want to opt out of training their in-house AI, you need to fill out an extra form, and whether they comply with your plea to be left out is entirely at their own whim.
As a result I'm currently reducing my gallery over there to a bare minimum as well, and I've sat silently through all their UI changes before, hoping it would get better again at some point.
How does that site work? I just ran a couple of my gallery pieces through their search function and apparently each one has around 80-85% similarity with the top result -- whatever that means (there's a sketch of how such scores are usually computed after this post). I noticed many of the images in the results include Daz, Renderosity and other 3D store promo shots, a lot of which were for products I own and use in making my 3D art. So if that's the case, I probably wouldn't know if my work's been used for AI training. Not that I think it has been, since I'm not extremely prolific or a big, or even moderate, name artist who would attract imitators.
BUT ... if my original work is already that close a match for what AI has learned then I don't know whether to be impressed or dismayed. On one hand that means it should be a very useful tool for me to make pretty pictures with. On the other :sigh: I guess I'm not that original after all. What a blow to the old ego!
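For what it's worth, similarity scores like the 80-85% above usually come from comparing image embeddings rather than pixels: the search engine encodes your upload and every indexed image with a vision model and reports how close the vectors are. Below is a minimal Python sketch of that kind of comparison, assuming a CLIP-style model from the Hugging Face transformers library; the file names are hypothetical, and this is not necessarily how that particular site computes its numbers.

```python
# Rough sketch of embedding-based image similarity (cosine similarity of
# CLIP image features). Illustrative only; not the actual haveibeentrained code.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    """Return a unit-length CLIP embedding for an image file."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)
    return features / features.norm(dim=-1, keepdim=True)

# Hypothetical files: your own render vs. the top search hit.
mine = embed("my_render.png")
hit = embed("search_top_result.png")
similarity = (mine @ hit.T).item()   # cosine similarity, roughly -1..1
print(f"similarity: {similarity:.0%}")
```

An 80-85% score against promo renders made with the same products isn't too surprising, since embeddings like these mostly capture subject matter and overall style rather than exact pixels.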
They use the Stable Diffusion engine but, according to their journal, train it themselves.
Regarding the dA controversy: the initial rollout had everyone opted in by default, and if you wanted to opt out of third-party data scraping you had to do it for each piece individually. If you've been with dA for a significant amount of time, that is a bit of a slog. Opting out of their in-house AI generator required a separate Google Docs form that was limited to five tags that would not be searched, if I understood it correctly. The form explicitly stated that there was no guarantee your work would be removed from the data set, which as I read it suggests a request could be refused. All this went over about as well as you can imagine. The switch to opt-out by default came only after a great deal of pushback from both hobbyists and professional artists who make a living from their work. Some webcomic artists whose work I follow dumped the site before the opt-out changes and probably won't be back. Keeping the Stable Diffusion based generator on the site is just the cherry on top. @linwelly above and others probably know more about it than I do.
I was curious and tested it out. Simple fox portraits. What a joke: five of them still had signatures from the paintings it learned from.
And that's how you can tell that the images used for this are not CC (Creative Commons, free to use as designated by the creators) but the creative property of creators who didn't agree to that use.
Exactly. DA is now full of AI-created art for sale. I think this is wrong. I personally will never support this.
I have noticed Stable Diffusion will on occasion throw out an obvious training image that is in some way marked or stamped. Those I discard because (a) broken and (b) not kosher. However, it also seems to think that if there's a painting, there must be a signature or else it's not correct. When that happens you'll get things that bear a resemblance to signatures or the like but actually aren't, because whatever it's trying to write doesn't exist. And no, you do not need to have used any particular artist's style as part of your prompt for this to happen; sometimes if a prompt word has similarity to a name, SD tries to produce a signature or logo, etc. These artifacts can show up in the strangest places, and there may also be more than one of them on an image. Recently I've seen it put whatever its idea of a signature should be twice on the same render, once on each bottom corner. No, they were not identical, which I think is a pretty fair indication it was the AI being silly, and no, there were no artists' names mentioned in the prompt, just the subject and "acrylic painting style". Probably the word "painting" is what made that happen; once I took it out I got no such markings.
That said, it can indeed be difficult at times to tell when a mark or signature-like squiggle on an AI produced image is genuine and when it isn't. Which is why one must be very careful, especially if any commercial use is going to be involved.
Having watermarks and/or signatures end up in your output should tell you that the dataset used to train the AI contained copyrighted materials. Otherwise the AI wouldn't know to put one there. (It wouldn't put a legible signature or watermark...because it's using an amalgamation of copyrighted works to calculate what should be there.) That's one of the big bad things about all this. Most AI training datasets literally just scraped the interwebz of images regardless of copyright and/or permission. That's akin to those folks who scrape tumblr or google images and put the images up for sale on cafepress or mint them to the blockchain. That's obviously illegal. How is this not?
It's certainly going to keep the lawyers and legislators busy trying to figure it all out, that's for sure. The training data sets may well have mixed source material in them, but works in the public domain may also have signatures, so there's no way to be 100% certain which set the AI learned such things from. I can say with certainty that it's possible to get Stable Diffusion -- dunno about MidJourney, etc. because I haven't tried doing it there -- to "sign" works that really shouldn't be. To test this, I ran a batch of images with the simple prompt "dog with butterfly, acrylic painting" and, for style, added "art by" somebody who does not exist. Of the results that came back appearing to be signed or stamped/watermarked, it was clear enough that the AI was attempting to "sign" them with that name. How do I know it was an artist who doesn't exist? Because I invented him, and no, I did not use any combination of real artists' names. So if it went looking in a database for works by that guy to amalgamate/imitate/whatever, there wouldn't be anything for it to find. All of the renders it made should have been unmarked; instead it randomly added the fake logo of a fake artist. And if I can make it do a trick like that ... (a sketch of this kind of test follows after this post).
Like it or not, use it or not, one thing's certain about AI: it's not going to go away.
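For anyone who wants to repeat the fake-signature test described a couple of posts up, here is a minimal sketch using the open-source Stable Diffusion weights through the Hugging Face diffusers library. The model ID, the prompt wording, and the invented artist name ("Harold Quimby") are hypothetical stand-ins, not the exact setup used in that post.

```python
# Minimal sketch of the "invented artist" signature test with Stable Diffusion.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any SD 1.x checkpoint should behave similarly
    torch_dtype=torch.float16,
).to("cuda")

# "Art by" a painter who does not exist; any signature-like squiggles in the
# output come from the model having seen many signed paintings, not from a
# real artist's catalogue.
prompt = "dog with butterfly, acrylic painting, art by Harold Quimby"

images = pipe(prompt, num_images_per_prompt=4, num_inference_steps=30).images
for i, img in enumerate(images):
    img.save(f"dog_butterfly_{i}.png")
```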
In the process of using my free Stable Diffusion credits, I got results back with obvious stock photo watermarks. Nowhere in my prompts did I indicate "watermark" or "stock photo"... so you can be sure that whatever dataset it was referencing included copyrighted stock photo previews scraped from the likes of Shutterstock et al. The AI added the watermark because enough of the images it was trained on included watermarks, so the algorithm thought they were supposed to be there as part of the output. (For argument's sake, my prompts were looking for varying styles of cottontails sitting in grass eating flowers... so I would assume most of what the algorithm used as reference for cottontails was... watermarked stock photos.)
If you aren't providing an artist name, I have to wonder if the AI is intelligent enough to pretend to add a signature...I would assume that it doesn't even know what a signature is and is simply putting it in there because its algorithm sampled enough images with signatures that it just thinks it's part of the image.
In my test, I did provide a name. A fake one. Not sure how intelligent it actually is, because that's not my field of study, but it certainly seems to have "seen" enough signed and otherwise marked images to be able to produce watermarks, logos, etc. whether or not one asks for stock photos. Maybe it's all algorithm, maybe not. In any case it added what appears to be the first name where a signature should be on one of the pictures, and the last name on another. Any guesses as to what it says?
It makes sense to me that the signatures it's adding appear as gibberish... because the algorithm doesn't actually know what it is that it's adding... the sample source data it pulled based on your prompt simply consists of lots of images with different signatures, so it just knows something is there, not specifically what it is.
Agree completely!!
This thing is here to stay, and the only thing art illustrators can do at this point is figure out how to adapt or even change careers.
As I stated in my recent video.
It is indeed very bad at reproducing words that should be on things. Such as the wording on traffic signs and the like. The results can often be quite hilarious, almost as if it has a language of its own. But more like some sort of sci-fi alien language. Or Simlish.
Could be an interesting time to go back to oil paints, watercolors, pencils and other more physical mediums.
Traditional art could really stand out more than ever in this sea of AI eye vomit.
Perhaps earn enough to eat.
*Brightside?*
you only need to be good at drawing hands
Hands yes. Also properly recognizable anatomical elements. ;-)
Thanks for this thread ^ ^ ^
I am obviously a couple of years behind (been sick etc.) but seeing as how they (Deviantart) trotted out negative prompts **I THINK** a couple of days ago it seemed like a good time to check on the pulse of things.
Heh heh... but with all the hand poses in the DAZ store, there's no longer a need to coerce the wife into the studio (or kitchen) and ask her to hold still while you get the shot(s) you need.
You mean like references to sex, anatomical elements, fascism etc.? Indeed the list appears to be long and you get a subtle threat to cease and desist.
Of possible interest - an item in Art News from last summer. And I'm pretty sure there was some media in January of this year (2024) about a class action where Midjourney Corp. [which is or was worth half a billion dollars?] was being challenged in court by a group of professional users.
I think that's a good point. Unfortunately Deviantart Corp. (of Santa Monica, CA) are engaging in semantics for their short-term benefit.
It's possible too that A.I. appliers are "crafters" of a sort. In my case I am a comedy writer, so I'll use A.I. to suggest a scene or elements, and then I'll look up possible components in the DAZ store for precise high resolution rendering. Here's a quick example - my little dog has returned from the grave to haunt the neighborhood. I laughed at this and then proceeded to the DAZ store to get the little B.E.T.T.Y. coffin for the "undead" doggy to sleep in. Not sure what is up with what looks like "crown jewels" where the anatomical elements should be. But you get the idea.
Personally I am confused by all the terms! When I first read the above I thought it was a reference to Nightcafe. Urp... I see now that they are on the "art" bandwagon as well.
Which in a way destroys the training data needed to build our little "killer whales". For a while we'll be nitpicking about 3D and 2D, and maybe sculptors, and perhaps whether people contribute without noticing, and maybe someone can do something, until we notice that it hasn't been the infinity thing and we're a tad bit short of the Thanos thing. For now, of course, you may well be right, if too little brainpower is applied to this matter any further (politically).
Maybe, though it's pretty niche. It's less of a "producing artist" thing, if I understand PA right. And you do have the AI-based "make it look like an oil painting" button when it comes down to commercial competition. Art remains elusive, and you could do stuff with tools, and there are and will be tools based on some kind of AI too, so the productive edge will likely remain in existence in some way. The hard parts are how the transitions are shaped (e.g. not at all), how many people you throw into the fire deliberately, and whether technology allows for concentration of power in the clouds or rather for democratization in whatever way. It could become a fast sequence of non-possibilities happening, and if we're unlucky, someone attaches the next thing to their nukes. Well, on the bright side I see three scenarios: 1. We are the first somewhat intelligent species to discover other ones, invent time travel, possibly destroy the universe twelve times, but eventually help ourselves and others beyond such and similar. 2. We are being watched and we will be helped. 3. We are somehow lucky anyway.
That touches the topic of "not just AI". It's so much more, e.g. clickworkers in you-know-the-country for training, or filters applied at some stage, such as checking a database of copyrighted works (or rather of image hashes) afterwards, or intermediate stages within a multi-stage pipeline of different components, some of which might be "AI". Typically, in order not to resemble copyrighted works too closely, there are image hashing methods that might work (somehow, up to a point); a toy example of such a hash is sketched below this post. For still images that might roughly work, but it's by no means a guarantee; we are already wielding statistical numbers here, and if legislation settles for "statistically good enough", you'll always have some people getting destroyed. Then moving images... it might become horrible. Think about the ridiculous EU copyright reform with the upload-filter discussion, in terms of "prevent it from ever happening again": pure magic. We're not past that magic, simply not.
(This is the post I wanted to answer; the others are random :p. And do note that "AI-made" stuff might be detectable for a while, or maybe in general somehow, so there might still be other levers to pull, potentially. Adoption of checking methods also has to do with the interests of commercial players, and whether legislation keeps up with what happens. It just looks like the holy grail is moving away from us, rather, if it exists. It appears to be shifty terrain for now, and very likely it's not getting simpler in the short term.)
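To make the image-hashing idea mentioned above concrete, here is a toy "average hash" in Python, one of the simplest perceptual-hashing schemes. Real filtering pipelines use more robust methods (pHash, PDQ, learned embeddings), but the principle is the same: near-duplicate images produce hashes that differ in only a few bits. The file names and the distance threshold are hypothetical.

```python
# Toy perceptual hash (aHash): shrink to 8x8 grayscale, set one bit per pixel
# that is brighter than the mean. Near-duplicates give small Hamming distances.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit average hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: flag a generated image that is too close to a registered work.
distance = hamming(average_hash("generated.png"), average_hash("registered_work.png"))
print("possible near-duplicate" if distance <= 5 else "probably different")
```

As the post says, this kind of check only catches near-duplicates; a model that merely imitates a style will sail right past any hash comparison.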
I would love for these to be the answer, and I am not really convinced that's the case.
I've downloaded the ~10 GB of stuff required to make Glaze and Nightshade go, including the CUDA Toolkit. These are AI-powered products being used to fight off other AI, and their effectiveness is unclear. At this point, it seems you must choose which effect you want. Glaze attempts to make it so that an AI can't duplicate the style of your image. Nightshade attempts to poison the AI by making it associate your image with the wrong prompt keyword, e.g. making the AI think dogs are cats. At least that's how I understand it, based on what you actually do in the Nightshade interface. Once you input the desired options, the programs re-render your digital image at the resolution of the original, using your graphics card as the renderer. It does affect the appearance of the image, and it generally takes a few to several minutes to get the result, depending on the image, your graphics card, etc. The user guides advise against announcing that these treatments have been applied to the images. (A toy sketch of the general idea behind these perturbations appears at the end of this post.)
As long as I'm being all gloom in this thread, let's also mention that the courts might not find that AI-generated works violate copyright. That was never guaranteed. But the article I linked here also states that AI may well replicate part or all of the works used to train it. https://www.theatlantic.com/technology/archive/2024/02/generative-ai-lawsuits-copyright-fair-use/677595/
And as a final note, evidence is building that all the hardware used to make AI go is not exactly eco-friendly. https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/
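Going back to Glaze and Nightshade for a moment: the "treatment" those tools apply is an adversarial perturbation of the pixels. The sketch below shows only the general idea, a single FGSM-style step that nudges an image so a CLIP encoder's embedding drifts away from its original value. It is emphatically not the Glaze or Nightshade algorithm (both are far more sophisticated and target different models and objectives), and the input file name is hypothetical.

```python
# Heavily simplified illustration of adversarially perturbing an image so a
# feature extractor "sees" it differently. Not the actual Glaze/Nightshade method.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("artwork.png").convert("RGB")          # hypothetical input
inputs = processor(images=image, return_tensors="pt")

# Embedding of the untouched image (the thing we want to move away from).
with torch.no_grad():
    original = model.get_image_features(pixel_values=inputs["pixel_values"])
    original = original / original.norm(dim=-1, keepdim=True)

# One FGSM-style step: lower the cosine similarity to the original embedding.
pixels = inputs["pixel_values"].clone().requires_grad_(True)
features = model.get_image_features(pixel_values=pixels)
features = features / features.norm(dim=-1, keepdim=True)
similarity = (features * original).sum()
similarity.backward()

epsilon = 0.03                                            # perturbation budget
perturbed = (pixels - epsilon * pixels.grad.sign()).detach()

with torch.no_grad():
    moved = model.get_image_features(pixel_values=perturbed)
    moved = moved / moved.norm(dim=-1, keepdim=True)
    print("similarity after perturbation:", (moved * original).sum().item())

# A real tool would optimize over many steps and map the perturbation back onto
# the full-resolution image; this sketch stops at the model's 224x224 input space.
```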
I can't b*tch about AI. Chat GPT has helped me with my work, and getting back into art. And when I was looking after my Mum (and after losing her), I was using AI sites to generate images of my favourite gaming character, and my oc - For comfort, or some sort of distraction, I guess (I didn't want to use Daz, draw, or do anything creative). But I'm now getting "back on the horse".
I've just had a look at "HaveIBeenTrained", and three of my works were used to train the DeviantArt AI. Except... THEY'RE ALL WATERMARKED IMAGES. Dumba**es XD