AI is going to be our biggest game changer


Comments

  • Ghosty12 Posts: 2,065
    edited September 2022

    Here is another one, done by The Corridor Crew. It came out pretty good and shows a new AI tool.

    And this one.

  • AI does not work without existing source material, i.e. artists' paintings or photographers' example images. AI reads material from the Internet (or it is fed into the program separately), from which it processes a written or graphically outlined result. AI is a wonderful tool, but it is flawed. Human anatomy is strange to it; often a person has three hands or a strange face, so the picture always needs corrections. I look forward to Daz adding AI to its program. AI isn't going away, and it really is going to show up in music and literature too. The possibilities are endless, but it needs human ideas and innovative insights. AI is also created by humans; the machine itself does not create anything.
    P.S. An art competition has already been won by a work done with AI:

    https://eu.chieftain.com/story/news/2022/08/31/ai-painting-wins-at-colorado-state-fair-pueblo-artist-explains-jason-allen/65466872007/

    Note! I wrote this in Finnish and ran it through Google Translate, because I'm faster writing that way, so there may be mistakes in my text. But thanks to the internet and artificial intelligence, etc.

  • I just enjoy zooming through the alien guts and stuff it creates on my PC

  • Kaleb242 Posts: 344
    edited September 2022

    The anatomical mishaps are the most entertaining part of these diffusion-based AI image generation models... it's not uncommon to get 20 fingers on one hand, 3-4 legs attached to the hips, a head attached to the waist without arms, backwards legs (a.k.a. "front butt"), glutes attached to the chest, 3-6 breasts on one torso, 4 nipples (2 on each breast), areolas on glutes, feet where hands should be, hands where feet should be, 2 heads on one torso, or another body growing out of the head.

    At the same time, it's so much fun to explore so many different compositions, concepts, styles, mediums, and color variations in minutes... something that would normally have taken hours, days, or weeks to achieve traditionally. But it's also a bit like having infinite possibilities to sift through, and it takes a lot of experimentation to craft the perfect text prompt and diffusion settings...

    I've spent the past month diving deep into Softology's Visions of Chaos machine learning models, including Stable Diffusion's Text-to-Image and Image-to-Image, as well as Deforum Diffusion for animation. There's so much potential in this technology... it's even possible to train it to learn new concepts, bound to a custom keyword, from just a few 512x512 images.
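    For the curious, here's roughly what using such a learned concept looks like in code. This is a minimal sketch assuming the Hugging Face diffusers library, not Visions of Chaos itself; the model ID, embedding file, and placeholder token are illustrative stand-ins, not anything from the post.

    ```python
    # Minimal sketch: generate with a custom concept learned via textual
    # inversion from a handful of 512x512 images. Assumes `diffusers` is
    # installed and a CUDA GPU is available; the embedding file and token
    # below are hypothetical.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Embedding produced by a textual-inversion training run; it binds the
    # learned concept to the placeholder token "<my-character>".
    pipe.load_textual_inversion("./learned_embeds.bin", token="<my-character>")

    image = pipe("a watercolor portrait of <my-character>").images[0]
    image.save("concept_test.png")
    ```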

    I put one of my unfinished traditional character designs, created on Canson paper with Prismacolor pencils, into Stable Diffusion's Image-to-Image mode as an init image, then used a text prompt to describe everything in the image, and then explored variations of that character design in different media like oil paint, watercolor, Octane render, or 8K photography. I haven't had this much fun generating art in years... the barrier to entry keeps getting lower.
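    Stripped to its essentials, that img2img workflow boils down to something like this. Again, a minimal sketch assuming Hugging Face diffusers rather than Visions of Chaos; the file names, prompt, and settings are placeholders.

    ```python
    # Minimal img2img sketch: a scanned pencil drawing as the init image,
    # steered toward a new medium by a text prompt. Assumes `diffusers`,
    # `Pillow`, and a CUDA GPU; file names are hypothetical.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("character_sketch.png").convert("RGB").resize((512, 512))

    # strength controls how far the result may drift from the init image:
    # low values stay close to the sketch, high values favor the prompt.
    result = pipe(
        prompt="character design, oil painting, dramatic lighting",
        image=init,
        strength=0.6,
        guidance_scale=7.5,
    ).images[0]
    result.save("character_oil.png")
    ```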

    To learn how to craft better text prompts, check out Lexica: https://lexica.art/
  • Kaleb242 Posts: 344
    edited September 2022

    DALL-E 2 is finally open to everyone... no more waitlist.
    https://openai.com/dall-e-2/

    Stable Diffusion is much more robust though...
    https://stability.ai/blog/stable-diffusion-public-release

    The newest version of Stable Diffusion, v1.5, can be explored online in the DreamStudio beta...
    https://beta.dreamstudio.ai/

    There's a Photoshop plugin by Christian Cantrell for layer-based Stable Diffusion img2img in-painting...
    https://www.youtube.com/watch?v=t_4Y6SUs1cI

    If you have a powerful computer, you'll get much more utility out of Visions of Chaos and its implementations of Stable Diffusion on your local machine, especially if you have an NVIDIA GPU with at least 8 GB of VRAM... but 24 GB is highly recommended...
    https://softology.pro/voc.htm

    The machine learning modes require some setup, but Jason's instructions page was very easy to follow...
    https://softology.pro/tutorials/tensorflow/tensorflow.htm

    If your computer isn't powerful enough, using a Google Colab notebook is another option...

    Here's how to run Stable Diffusion from a Google Colab notebook:
    https://medium.com/geekculture/2022-how-to-run-stable-diffusion-on-google-colab-5dc10804a2d7

    https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
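    If you'd rather skip the web UI, the bare-bones version of running Stable Diffusion in a Colab cell looks roughly like this. A minimal sketch assuming a GPU runtime and that `pip install diffusers transformers accelerate` has already been run; the prompt and settings are just examples.

    ```python
    # Minimal text-to-image sketch for a Colab GPU runtime.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        "a castle on a cliff at sunset, matte painting, highly detailed",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("castle.png")
    ```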

  • Byrdie Posts: 1,783

    It does indeed create amusing mutations. Here's one of the most work-safe, Daz-TOS-friendly results I got last night while working on a grown-up Harry Potter to go with my Snape. I was going to toss it in the bin, but I might use it as a relative of Hagrid's.

    (attached image: 2022-09-28-23-40-34-2-914635302-scale8.00-ddim-2.png, 512 x 512)
  • Not sure if it was already posted, but this one seems interesting and kinda relevant here:

    https://nv-tlabs.github.io/GET3D/

  • The only AI that I'm looking forward to is 3D mocap from 2D video! Imagine just how advantageous that would be for our hobby!

  • takezo_3001 said:

    The only AI that I'm looking forward to is 3D mocap from 2D video! Imagine just how advantageous that would be for our hobby!

    https://plask.ai/

  • WendyLuvsCatz said:

    takezo_3001 said:

    The only AI that I'm looking forward to is 3D mocap from 2D video! Imagine just how advantageous that would be for our hobby!

    https://plask.ai/

    Cool, I was hoping you'd have some great animation advice. I'm also interested in "found" third-party footage, as with videos such as this... It would be great if Daz specifically took advantage of that tech, as it would be huge for their animation community!

    That means no more need for cameras; you could just go to YouTube or the like for content!

  • the biggest issue is DAZ BVH import

    you noticed I used Aiko 3

    that was for a very good reason

    I had a Genesis 1 there but avoided 3 & 8 entirely 

  • Pendraia Posts: 3,598

    SnowSultan said:

    And then sometimes you make something like this in seven seconds, like I did this morning, and wonder why you keep messing with 3D at all.

    We barely even have 3D clothing as detailed as this, and this style is almost impossible to replicate purely in 3D. I want to keep using my 3D figures and models, but it's incredibly frustrating that no matter how much I spend and how many programs I own, they are not going to give me results like this in less time than it takes Iray to even warm up - or, for that matter, if I spent a year trying to do it myself.

    It comes down to whether you're a process-oriented person or an end-oriented person. For me, the reason I do 3D is that I enjoy messing around with meshes and surfaces. I have no interest in typing text into a box. If I don't have the things I need, I try to create them. It's about improving on what I can do. So really, if all you're after is the end images, then yes, AI is for you. If you're about the journey, then maybe not.

  • takezo_3001 Posts: 1,997
    edited September 2022

    WendyLuvsCatz said:

    the biggest issue is DAZ BVH import

    you noticed I used Aiko 3

    that was for a very good reason

    I had a Genesis 1 there but avoided 3 & 8 entirely 

    Agreed, Daz does need to implement better BVH support, as that's the whole backbone of animation conversion from mocap data!

    I'm hopeful that once DS5 rolls out, we may have a better animation solution and support for BVH.

  • outrider42 Posts: 3,679

    I think gaming has already given a strong hint of how this can go.

    A number of "remastered" games have been released in the past few years, and several of them use old assets that have been upscaled by AI. There are a few ways this affects development.

    Using AI to generate upscaled textures is an obvious cost saving, and it shouldn't be a surprise how: they save by not spending as much on labor to manually create the assets. It saves time, which saves money, and they don't need as many people in the first place.

    It is not always a bad thing. The reduced costs make possible things that likely never would have happened. The new Suikoden remasters, I think, would never have happened without AI. Those games are just too niche to warrant a remaster otherwise.

    But on the flip side, you get the Grand Theft Auto remasters, which had all kinds of problems. Gamers spotted signs that made no sense, because the AI has trouble making out words. Much of the trouble with the game came from cutting too many costs. The whole project was outsourced. This is not an opinion statement; it is actual fact.

    You can certainly use AI upscaling for good. The Mass Effect Trilogy remasters used a combination of AI upscaling and hand-made texturing to recreate the original games. They also tried to rebalance the games and make quality-of-life improvements to various outdated menus and controls.

    So with this you can try to argue that AI has already taken people's jobs. But that might not be true here. Like I said earlier, some of these remasters would not have happened at all without AI. As for the GTA remaster, that game was outsourced because the primary team was working on the next game in the series, so they did not necessarily lose their jobs; they are still doing the job at hand. The GTA remaster also shows that you still need people to put the game together and, at the very least, QA-check what the AI generates. You can't just AI-generate everything.

    I saw an article showing characters from Fallout 2 reimagined with Stable Diffusion. Those old games were made with low-resolution 2D sprites. They took the old sprites and used AI to generate new character portraits that are more realistic. It is pretty impressive.

    https://gameworldobserver.com/2022/09/28/fallout-2-stable-diffusion-reimagined-characters-remake

     

  • SnowSultan Posts: 3,633

    Pendraia said:

    SnowSultan said:

    And then sometimes you make something like this in seven seconds, like I did this morning, and wonder why you keep messing with 3D at all.

    We barely even have 3D clothing as detailed as this, and this style is almost impossible to replicate purely in 3D. I want to keep using my 3D figures and models, but it's incredibly frustrating that no matter how much I spend and how many programs I own, they are not going to give me results like this in less time than it takes Iray to even warm up - or, for that matter, if I spent a year trying to do it myself.

    It comes down to whether you're a process-oriented person or an end-oriented person. For me, the reason I do 3D is that I enjoy messing around with meshes and surfaces. I have no interest in typing text into a box. If I don't have the things I need, I try to create them. It's about improving on what I can do. So really, if all you're after is the end images, then yes, AI is for you. If you're about the journey, then maybe not.

    Good point. I personally don't like the process; it's frustrating and stressful, and I still don't get results that I'm completely happy with. If the time ever comes when we can sketch a stick figure and some sun rays and have AI make a figure in the pose we want, with the lighting we want, without the need for too much fixing and postwork, I'd be happy to kiss the world of normal maps, denoisers, and UVs goodbye.

  • outrider42 said:

    I think gaming has already given a strong hint of how this can go.

    A number of "remastered" games have been released in the past few years, and several of them use old assets that have been upscaled by AI. There are a few ways this affects development.

    Using AI to generate upscaled textures is an obvious cost saving, and it shouldn't be a surprise how: they save by not spending as much on labor to manually create the assets. It saves time, which saves money, and they don't need as many people in the first place.

    It is not always a bad thing. The reduced costs make possible things that likely never would have happened. The new Suikoden remasters, I think, would never have happened without AI. Those games are just too niche to warrant a remaster otherwise.

    But on the flip side, you get the Grand Theft Auto remasters, which had all kinds of problems. Gamers spotted signs that made no sense, because the AI has trouble making out words. Much of the trouble with the game came from cutting too many costs. The whole project was outsourced. This is not an opinion statement; it is actual fact.

    You can certainly use AI upscaling for good. The Mass Effect Trilogy remasters used a combination of AI upscaling and hand-made texturing to recreate the original games. They also tried to rebalance the games and make quality-of-life improvements to various outdated menus and controls.

    So with this you can try to argue that AI has already taken people's jobs. But that might not be true here. Like I said earlier, some of these remasters would not have happened at all without AI. As for the GTA remaster, that game was outsourced because the primary team was working on the next game in the series, so they did not necessarily lose their jobs; they are still doing the job at hand. The GTA remaster also shows that you still need people to put the game together and, at the very least, QA-check what the AI generates. You can't just AI-generate everything.

    I saw an article showing characters from Fallout 2 reimagined with Stable Diffusion. Those old games were made with low-resolution 2D sprites. They took the old sprites and used AI to generate new character portraits that are more realistic. It is pretty impressive.

    https://gameworldobserver.com/2022/09/28/fallout-2-stable-diffusion-reimagined-characters-remake

    Looks to me as if it has turned the heads around to look at the camera in several cases, though - which might be the AI "making an assumption" based on its input.

  • csaa Posts: 824
    edited September 2022

    Pendraia,

    Yes, that's a very good perspective. It calls to mind the back-and-forth arguments years back over digital versus analog photography. For certain, some prioritize the outcome -- preferably the sooner the better, hassle-free; meanwhile, others luxuriate in the process, no matter how inconvenient.

    I've tried both digital and film photography. When a friend heard that it takes a week before I can get the negatives back from the lab, he wailed about the wait. Why bother, he asked, when digital gets the result in an instant? All true. There's certainly value in the speed and quantifiability of digital images; on the other hand, there's a pride and alchemic wonder in film that digital can never match.

    Just as there's a resurgent and sustained interest in film photography, I suspect that AI art will likewise reach a high-water mark but never fully overwhelm bespoke art. Faced with the overwhelming presence of what's currently "in", people will yearn for what's "out"... that's just the Yin and the Yang of things.

    Cheers!

    SnowSultan said:

    Pendraia said:

    SnowSultan said:

    And then sometimes you make something like this in seven seconds, like I did this morning, and wonder why you keep messing with 3D at all.

    We barely even have 3D clothing as detailed as this, and this style is almost impossible to replicate purely in 3D. I want to keep using my 3D figures and models, but it's incredibly frustrating that no matter how much I spend and how many programs I own, they are not going to give me results like this in less time than it takes Iray to even warm up - or, for that matter, if I spent a year trying to do it myself.

    It comes down to whether you're a process-oriented person or an end-oriented person. For me, the reason I do 3D is that I enjoy messing around with meshes and surfaces. I have no interest in typing text into a box. If I don't have the things I need, I try to create them. It's about improving on what I can do. So really, if all you're after is the end images, then yes, AI is for you. If you're about the journey, then maybe not.

    Good point. I personally don't like the process; it's frustrating and stressful, and I still don't get results that I'm completely happy with. If the time ever comes when we can sketch a stick figure and some sun rays and have AI make a figure in the pose we want, with the lighting we want, without the need for too much fixing and postwork, I'd be happy to kiss the world of normal maps, denoisers, and UVs goodbye.

  • WendyLuvsCatz Posts: 38,493
    edited October 2022

    this is the scary stuff I am uploading to my AI art channel

    those use a video of me

    I have also done a Poserecorder animation of the same video for my DAZ dolls, which will pop up eventually on my other channel

    a video of totally AI-generated medieval tortures

  • WendyLuvsCatz Posts: 38,493

    my original viral Carrara-rendered video was removed by YouTube because posting the lyrics broke an unrevealed rule

    I reuploaded it to its own dedicated channel

    the AI remake using it, I just uploaded

  • WendyLuvsCatz Posts: 38,493
    edited October 2022

    many CGI groups are now insisting on a wireframe render to prove it is 3D

     Might be something DAZ could choose to do for future competitions if 3D remains their main focus

    I got another NFT email from them today so they honestly may not care

  • digitell Posts: 577

    I disagree... there is a similar thread going on at another site. I think this is a moot point. People love creating. That is all there is to it, and just using AI to create takes all the fun out of it.

    I really feel that there shouldn't be a worry... folks will continue to create as they wish, and AI will be left to the folks who need an easy image to suit their needs.

  • Payat Parin Posts: 1,024

    This is so DISHEARTENING! for existing and future artists. Learning about art and its ways is going to vanish soon, replaced by AI-generated images. No more brush strokes, canvases, 3D, etc. in the near future, because they require hours upon hours of work and a messy workspace. Not to mention the frustration of drawing or sketching a figure or character with any degree of finesse and perfection. Art itself is going to be diminished. It will demotivate art enthusiasts and frustrate them.

    Midjourney, DALL-E, NightCafe, Deep Dream Generator, Artbreeder, and DeepAI are the prominent AI art generators at this point in time. There will be more in the foreseeable future. I am sure people are finding ways to make a profit from their generated AI images. I read somewhere that subscriptions start at $10 a month. That would be $120 a year for thousands of quality generated images.

    So the question is whether to continue buying Daz assets and spending money, time, and effort, knowing that AI-generated images are far better than and superior to some of our renders. Lighting, posing, color, characters, and content can be dictated in these AI programs to produce a quality image. All the money spent on buying here may have been wasted, especially if it's only a hobby. Learning 3D need not be this expensive. The use of Maya, Cinema4D, Autodesk, etc. will be greatly affected. Even using Photoshop may be in question. Behold the booming of AI!

  • WendyLuvsCatz Posts: 38,493

    Payat Parin said:

    This is so DISHEARTENING! for existing and future artists. Learning about art and its ways is going to vanish soon, replaced by AI-generated images. No more brush strokes, canvases, 3D, etc. in the near future, because they require hours upon hours of work and a messy workspace. Not to mention the frustration of drawing or sketching a figure or character with any degree of finesse and perfection. Art itself is going to be diminished. It will demotivate art enthusiasts and frustrate them.

    Midjourney, DALL-E, NightCafe, Deep Dream Generator, Artbreeder, and DeepAI are the prominent AI art generators at this point in time. There will be more in the foreseeable future. I am sure people are finding ways to make a profit from their generated AI images. I read somewhere that subscriptions start at $10 a month. That would be $120 a year for thousands of quality generated images.

    So the question is whether to continue buying Daz assets and spending money, time, and effort, knowing that AI-generated images are far better than and superior to some of our renders. Lighting, posing, color, characters, and content can be dictated in these AI programs to produce a quality image. All the money spent on buying here may have been wasted, especially if it's only a hobby. Learning 3D need not be this expensive. The use of Maya, Cinema4D, Autodesk, etc. will be greatly affected. Even using Photoshop may be in question. Behold the booming of AI!

    a Facebook AI group post (hideously long URL, so I used the link function)

    people making money on Adobe and Shutterstock selling stock images made in Midjourney

  • kyoto kid Posts: 41,198
    edited October 2022

    LeatherGryphon said:

    What's AI?

    Artificial Intelligence, and honestly I can't see much benefit for it in DS

    Yea, I don't either, but then no one's ever accused me of being much of a visionary ;).

    Laurie

    In the '70s, after I turned down a job offer from a friend in college to help him make video games, I confidently predicted "there's no future in computer games."

    ...many years ago I was at a sci-fi con in Seattle, and one day I saw a note on the wall near the gaming area promoting the play-test of a new fantasy card game. Curious, I signed up to attend, as back then I was into trying out new gaming concepts (Advanced Dungeons and Dragons and Traveller were "mainstream" back then). After the session we discussed the pros and cons of the game, and one of my comments was that it was "quaint" but, like most card-based games, seemed too limited and finite; I didn't see it catching on very much given how character-based RPGs were more popular. One of the rewards for participating in the test was the deck we each played.

    As I was more into the P&P role-playing games at the time and didn't see much of a future for the concept, I gave the box with the deck to a kid afterwards.

    That "quaint" card game turned out to be Magic: The Gathering, and I could probably buy a shredding system with a 3090 for what that deck of cards is worth today.

    Yeah, a total...

    ...moment in retrospect.

  • Well, when someone goes "oh, you ain't an artist because you use pixel barbies to make your art," you can kindly point their face at the AI generators and say, "you were saying?"

  • kyoto kid Posts: 41,198
    edited October 2022

    ...someone on page 1 of the thread (I skipped through to the last page, as I saw it began two years ago) mentioned a "neural interface" (actually referred to as a "direct neural interface", or DNI). With DNI, you theoretically connect yourself to a computer system via a data-interface implant, like a micro jack or an array of EEG induction pads, which transfers thought signals to the AI; the AI interprets and translates them into digital signals so you can create fully rendered images with your mind. The AI would also serve as a feedback buffer to shield the individual from any instability that may arise on the hardware end.

    I have no doubt it will eventually come to that sometime in the near future, but unlike AI-created art, it would put the individual artist back in full control, simply removing the physical interfaces we depend on today. Of course, such a process would involve a fairly steep learning curve and would require a form of mental discipline to keep random thoughts and distractions from interfering.

    Yeah, I've read too much William Gibson and seen too many cyber-future films, but then, we now have "personal communicators" that we can pull out of our pocket and use to talk to anyone in the world, which were once a trope of a certain old sci-fi series over half a century ago.

  • wolf359 Posts: 3,834

    I hope to see the day when AI does ALL of the heavy lifting, leaving me to just describe the sci-fi space epic I want produced.

  • takezo_3001 Posts: 1,997

    digitell said:

    I disagree... there is a similar thread going on at another site. I think this is a moot point. People love creating. That is all there is to it, and just using AI to create takes all the fun out of it.

    I really feel that there shouldn't be a worry... folks will continue to create as they wish, and AI will be left to the folks who need an easy image to suit their needs.

    Indeed, artists are always going to be artists, whether it's pen & paper, 3D, or throwing paint at a canvas. The same thing was said about digital vs. traditional art, and about ZBrush sculptors/modellers vs. Poser/Daz artists...

    As artists, we should care less about the medium; it's all about creating something out of nothing and expressing ourselves through our art!

  • generalgameplaying Posts: 517
    edited October 2022

    For now, the technique is machine learning-based, as opposed to a "strong AI", the latter of which would be like human intelligence. Using the term "AI" muddies the water somewhat and might lead to people expecting too much, and to ads promising too much. Don't expect miracles; it'll rather be cold-blooded killers like AlphaGo or AlphaZero.

    The current state of AI-assisted content creation isn't that great; it's just "interesting", maybe on the verge of becoming something annoying. However, as more specialized, cheaper, better, and more parallelized hardware evolves, tools that substantially employ machine learning (not just for the buzzwords) will become better, or even just feasible to build at all. It's still a lot of work and research to build useful software that actually does something with machine learning.

    That said, many more machine-learning-based applications pop out of the ground as we speak: especially in audio processing, in the headlines with the top-notch stuff at the movies, and somewhere in between with image/photo applications. For video and rendering, I'd assert the complexity is a level or two higher than with typical audio and photo applications, so the big advances there currently sit with data centers or cloud applications, at least up to now. It's just not so easy to build something lasting that stays useful, like "helping with animation", so not many players will have succeeded at the edge of the doable.

    Denoising is probably one of the simpler tasks in comparison, but it's complex enough that, until a moment ago, it wasn't available at randomly low prices on every corner. In fact, "AI denoising" may increasingly be distributed as part of the photo software that camera manufacturers offer their buyers free of charge. So maybe we are already there, at that specific junction for photos, though there is still a lot of higher ground to cover beyond denoising.

    I see a problem with AI-assisted art-generating software taking the jobs too quickly: copyright.

    It's premade content, but very limited, even if it's been fed "so huge an amount of content". That's just the content; then it's further limited by policies like "no religion, no politics, no nudes". Blood and gore and violence will likely be OK on the menu, but probably won't really remain manageable. (Paint him without legs and with tomato juice and upside down: OK. Paint him walking upside down, with his leg taking a swim in the pond: we'll get there...) Let's not digress: the current thing is not enough; it needs more abilities and more content.

    So where would you train such a system from? There are thinkable sources, of course, but it doesn't look like it could easily work.

    1. Copyright against the results. The result might be a copyright infringement no matter how sophisticated the system is, and no matter whether they own all the rights to the assets in use; the result can still infringe on other people's rights. That's probably not even special, BUT it's half a dealbreaker for the "free of charge" cloud model that feeds on user-generated content.

    2. Deals with other platforms. Maybe platforms like DeviantArt ;), hopefully not though. Those who post their stuff there will, in future, be removing their own jobs by training the AI; or, independently: many people train the AI (with what, though...). Variations of this. The problem with user-generated content remains copyright, which may at first seem undetectable within the innards of the "AI", because you don't see it in the machine learning system. But it's not gone: infringing material in the training set will, in general, have a chance of generating infringing material as a result.

    3. This brings us to the old lie: "the user is safe". They'll likely have to exclude all liability for generated images, or they do already. Under current EU copyright law there is no way to heal this, AFAIK, there being no fair use, and with the risk of high fines for repeated infringement.

    4. Contracts with the biggest organizations that hold rights to videos and images, like "the content industry" :), or in Germany the VG Bild-Kunst (compare to what GEMA is for music), and whoever else. I don't see them making enough money to pay sums on the order of x*1000000000$ to parts of the content industry and the publishing industry in Europe in the short to medium term.

    5. There could be peculiar alliances (content industry + big tech) that may have some idea of how to pull something off, using something like "the whole internet" for it.

    6. All this will lead to upload filters and "other kinds of restrictions" of all sorts, including on showing skin at all, plus all the false positives we know from such systems, while users will have to live with the uncertainty of infringing other people's rights all the time. Can anyone solve this?

    In essence, copyright looks like half a show-stopper for any fast development, though that might not hold true in the medium term. The other very big point is that, for the artist, it probably makes more sense if the AI system trains and learns on YOUR very own asset library, because you can control the training data that way, and the tool will have to be more of a tool than a guessing thing. Unfortunately, that is a different type of beast "AI-wise". It would have to learn on DAZ's side, for instance: on the one hand training on a lot of assets, but also retaining assignability to the products used, in terms of license management. Possibly DAZ would free their users from some of the edges, allowing lots of stuff or derived stuff in; and/or from there it would have to learn detached, just interacting with the user and their differing assets, which may also include assets in different formats or from different vendors, which is where it gets complicated again :), but that's on the user then. The guessing thing may be fun and may be terribly good some day, but it'll likely be like smartphone cameras: good enough for a lot, but not everything.

    Then again, if they added a section for adults, nobody knows what'll happen. In theory, it could become instant consumption, destroying other markets :). So I am curious whether, for instance, producers of adult movies would license content to such a platform, knowing that it might knock them off their feet the next day. Complexity-wise, generating genuinely new videos is another level, not even scratchable with less than a supercomputer.

    So in the end... what do I think DAZ Studio could profit from, concerning machine learning?

    - I think it can :). It usually means a lot of extra work if it's something substantial, so I don't know how they'll monetize it, whether they've already priced it in, or if they will at all...

    - Of course, some of the already-mentioned algorithms like denoising may come, and maybe help with animation from poses, with muscles from the timeline and the posing, with dForce cloth, hair, what not... all the stuff that's still itchy to do with classic algorithms but could in theory be mended at a glance. And you might still turn it off if it can't handle something.

    - It might create a learning-capable smart content tool :). Simple things like selecting or suggesting the best-matching categories, e.g. during character setup, after deselecting, reducing clicks. A smarter sorting order with multiple panes: e.g., when I'm looking for shaders for a specific type of item, there might be better choices than showing shaders or morphs from products for which those are not the primary thing. Selecting replacement shaders for some contexts. Machine learning might actually help in such instances (a toy sketch of that kind of suggestion follows below). Better import and export of formats, maybe AI-assisted (assigning textures, grouping...). Maybe internal tools to export better to other formats for gaming contexts, e.g. using AI to transform the non-compatible parts, like some shaders or rigging, into whatever the target is. However, typically, if it has to run on "some laptop", it has to be baked into a trained model, and the end-user device just runs that trained system. That means, for instance, that DAZ would have to think up and train such a system all on their own, or some bigger or specialized player would have to develop it and DAZ would pay license fees to use it in Studio. Meaning "AI" doesn't initially help DAZ develop machine learning features; they would have to do that on their own, at their own risk. Maybe that leads to DAZ waiting forever, until it's clear that a specific application can actually work, i.e. others have already implemented such a case. I can't judge that...
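    To make the smart-content idea above concrete, here's a toy sketch of similarity-based suggestion: rank library items by cosine similarity between simple tag vectors. Everything here (the tag vocabulary, asset names, scoring) is invented purely for illustration and has nothing to do with any actual DAZ Studio API; a real system would presumably use learned embeddings rather than hand-made tags.

    ```python
    # Toy sketch: suggest library items whose tag vectors are most similar
    # to the current selection. All names and tags are made up.
    import numpy as np

    VOCAB = ["shader", "morph", "hair", "cloth", "metal", "skin"]

    def tag_vector(tags):
        """Normalized one-hot style vector over the tiny tag vocabulary."""
        v = np.array([1.0 if t in tags else 0.0 for t in VOCAB])
        n = np.linalg.norm(v)
        return v / n if n else v

    LIBRARY = {
        "Brushed Metal Shader": tag_vector({"shader", "metal"}),
        "Skin Detail Morph": tag_vector({"morph", "skin"}),
        "Silk Cloth Shader": tag_vector({"shader", "cloth"}),
    }

    def suggest(selection_tags, top_k=2):
        """Return the top_k library items ranked by cosine similarity."""
        q = tag_vector(selection_tags)
        ranked = sorted(LIBRARY.items(), key=lambda kv: -float(q @ kv[1]))
        return [name for name, _ in ranked[:top_k]]

    # Selecting a metal prop should surface metal-capable shaders first.
    print(suggest({"shader", "metal"}))
    ```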

     

This discussion has been closed.