Is there text to 3D Pose AI ?

I imagine we can just type what we want, and an AI will give us the pose, which we can then apply to our DAZ 3D character.

Comments

  • wolf359 Posts: 3,827
    edited September 2023

    James said:

    I imagine we can just type what we want, and an AI will give us the pose, which we can then apply to our DAZ 3D character
     

    @James
    Perhaps the new AI character-creation product from Tafi will eventually implement such a feature when it comes out.

    Post edited by wolf359 on
  • mbug90 Posts: 53
    edited December 2023

    I need this... right now.

    Not already having something like this that I can use, in DAZ Studio or not, pisses me off.

    Post edited by mbug90 on
  • crosswind Posts: 6,914

    Haha ~~ I bought a lot, used a lot and tweaked a lot. I'm an AI user; however, even if there were a Text to Pose feature in a 3D app, I'm afraid you'd still have to tweak the poses after the AI "pushes" some to you ~~

  • WendyLuvsCatz

    and the answer is

    a BIG

    YES

    for Poser 12


  • crosswind Posts: 6,914

    That still seems to be a case of feeding poses to SD rather than generating poses in DS or another 3D application with AI ~~  BTW, I'd even prefer a realtime "Chat to Poses" ;)

  • well there is also https://plask.ai/

    It works with iClone 3DX6 and Carrara for me. I have used it on M4, V4 and Aiko3 in DAZ Studio too, but with Genesis+ it's not so great.

    I do, however, now have Bone Minion to convert M4 and V4 to Genesis 9, so I really should revisit it.

  • crosswind Posts: 6,914

    Yeah, I know that site. Similarly, I'm learning and experimenting with AI-assisted physics and MoCap in Cascadeur ~~

  • takezo_3001 Posts: 1,971

    James said:

    I imagine we can just type what we want, and an AI will give us the pose, which we can then apply to our DAZ 3D character.

    All I want from AI is to animate a character using 3D data created from 2D video, ANY 2D video, not just self-shot video, and without relying on an internet connection to process!

    So hopefully, Daz will get in on this tech once it's made available!

  • Seven193 Posts: 1,079

    WendyLuvsCatz said:

    and the answer is

    a BIG

    YES

    for Poser 12


    I think that's the opposite of what OP's asking. He wants to go from AI to Daz Studio, not Daz Studio to AI.

    Since AI only generates 2D images, you'll need to find a way to convert 2D poses to 3D.  Something like this:
    https://github.com/HW140701/VideoTo3dPoseAndBvh

    Daz Studio can import BVH files, but you still need to remap and retarget the pose to fit the base character's skeleton.
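A minimal sketch of that remapping step: rename the joints in a BVH HIERARCHY block to the target figure's bone names before importing. The JOINT_MAP entries below are purely illustrative, not the real Genesis naming; a working converter would need the exact names used by the BVH exporter and the target skeleton.

```python
# Rename ROOT/JOINT entries in a BVH HIERARCHY block so the imported
# skeleton lines up with the target figure's bone names.
# NOTE: this mapping is a made-up example, not the real Genesis naming.
JOINT_MAP = {
    "Hips": "hip",
    "LeftUpLeg": "lThighBend",
    "RightUpLeg": "rThighBend",
    "Spine": "abdomenLower",
    "Head": "head",
}

def remap_bvh(text: str) -> str:
    """Rewrite ROOT/JOINT lines, leaving unknown joints untouched."""
    out = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("ROOT ", "JOINT ")):
            keyword, name = stripped.split(None, 1)
            name = JOINT_MAP.get(name, name)  # unknown joints pass through
            indent = line[: len(line) - len(line.lstrip())]
            line = f"{indent}{keyword} {name}"
        out.append(line)
    return "\n".join(out)

print(remap_bvh("ROOT Hips\n{\n  JOINT LeftUpLeg\n}"))
```

Retargeting proper (adjusting bone offsets, rotation orders, and rest poses) is a separate and harder step; this only covers the name mapping.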

  • cain-x Posts: 186

    DAZ would get lots of attention if they could get text to pose/animation to work. Imagine if you could tell your DAZ character to "walk to this waypoint, stop, then go idle"; that would be something.

  • Eustace Scrubb

    cain-x said:

    DAZ would get lots of attention if they could get text to pose/animation to work. Imagine if you could tell your DAZ character to "walk to this waypoint, stop, then go idle"; that would be something.

    That should actually be fairly simple to script: the logic is, at any rate.  The user names his figure "Bobby," for example.  Then he calls up the AnimAI script, and tells it that "Bobby walks to the bus stop, sits on the bench, and looks left and right."  This instructs the script to do the following sequence:

    1. Each comma separates an explicit action, as with MidJourney, so the first is as follows:
      1. Select figure "Bobby".
      2. Select an aniBlock for action "walk."
      3. Trace a path to node "bus stop" from Bobby's present coordinates that intersects as few other meshes/instances as possible.
      4. Apply "walk" aniBlock to Bobby along generated path and update his location.
    2. After the first comma, the script proceeds:
      1. Select "bench" in the vicinity (+/- 2m virtual radius, adjustable in script settings before entering the AI prompt) of the bus stop.
      2. Select a "sit" aniBlock, "Walk" Bobby to the bench (using previously selected "walk" aniBlock) and orient him to "sit" on its upper surface.
      3. "Sit" Bobby on the bench.
    3. Following the second comma, the grammatical "and" tells the script (if it tells it anything at all) to end execution after this sequence:
      1. Select Bobby's head, neck, and both eye nodes.
      2. Execute a "look both ways" aniBlock, or rotate the selected nodes (per the "look" verb/command) approximately 30° counterclockwise* each (for additive effect on child nodes), then 60° clockwise, then 30° counterclockwise to the 0 position.

    The paths for motion should follow mesh contours whenever applicable, of course, and "sit" should use some sort of collision-detection system to sit on the bench rather than in or above it.  The script will also have to distinguish translation commands (walk, run, drive, gallop, trot, crawl) from prop-interaction blocks (open, sit on, lift, pick up, lie on), and different sorts of prop interactions from each other.  But that should really be all you need for it: it's not so much "AI," really, as an algorithm to translate English instructions into multi-stage scene manipulation sequences.

    *If the third clause is "right and left," the sequence in step 3.2 will be clockwise/counterclockwise/clockwise.
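The comma-splitting logic described above can be sketched in a few lines. Everything here is hypothetical (the verb lists, the tuple format, the "AnimAI" idea itself); a real script would map each parsed action onto an aniBlock and a target node.

```python
# Split a prompt like "Bobby walks to the bus stop, sits on the bench,
# and looks left and right" into (kind, verb, target) actions.
# The verb list is an illustrative stand-in for the script's real vocabulary.
TRANSLATION_VERBS = {"walks", "runs", "drives", "gallops", "trots", "crawls"}

def parse_prompt(prompt: str):
    figure, rest = prompt.split(" ", 1)   # first word names the figure
    actions = []
    for clause in rest.split(","):
        clause = clause.strip()
        if clause.startswith("and "):     # grammatical "and" marks the last action
            clause = clause[4:]
        verb, _, target = clause.partition(" ")
        kind = "translate" if verb in TRANSLATION_VERBS else "interact"
        actions.append((kind, verb, target.strip()))
    return figure, actions

print(parse_prompt(
    "Bobby walks to the bus stop, sits on the bench, and looks left and right"))
```

Each "translate" action would then get a path and a walk-cycle aniBlock, and each "interact" action a prop lookup plus collision-aware placement, as outlined in the numbered steps.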

  • Dartanbeck Posts: 21,549

    crosswind said:

    Yeah I know that site. Similarly, I'm learning and experimenting with AI-assist physics and MoCap in Cascadeur ~~

    How is that going by the way, having fun?


    I'm still getting a Rush out of doing it My Way! :)

  • Dartanbeck Posts: 21,549

    I love the 'hands-on' approach. So fun!

  • RobotHeadArt

    Text-to-pose and text-to-human-motion are areas of active research.  An example project is https://github.com/naver/posescript with its text-conditioned pose generation task/script.  Someone would need to modify the script to convert and map the pose data to a Genesis figure pose.
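As a rough sketch of that conversion step: PoseScript works with SMPL-style poses, where each joint's rotation is an axis-angle vector in radians, while a Daz pose wants per-bone Euler angles in degrees. The joint mapping below is made up, and real Genesis bones each have their own rotation order, so treat this as the shape of the problem rather than a working converter.

```python
import math

# Hypothetical SMPL-joint -> Genesis-bone name mapping (illustrative only).
SMPL_TO_GENESIS = {"left_hip": "lThighBend", "right_hip": "rThighBend"}

def axis_angle_to_euler(ax, ay, az):
    """Axis-angle (radians) -> one common Euler convention, in degrees."""
    angle = math.sqrt(ax * ax + ay * ay + az * az)
    if angle < 1e-8:
        return (0.0, 0.0, 0.0)
    # normalize the axis, build a quaternion, then extract Euler angles
    x, y, z = ax / angle, ay / angle, az / angle
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    qw, qx, qy, qz = c, x * s, y * s, z * s
    ex = math.atan2(2 * (qw * qx + qy * qz), 1 - 2 * (qx * qx + qy * qy))
    ey = math.asin(max(-1.0, min(1.0, 2 * (qw * qy - qz * qx))))
    ez = math.atan2(2 * (qw * qz + qx * qy), 1 - 2 * (qy * qy + qz * qz))
    return tuple(math.degrees(v) for v in (ex, ey, ez))

smpl_pose = {"left_hip": (math.pi / 2, 0.0, 0.0)}  # toy data, radians
genesis_pose = {
    SMPL_TO_GENESIS[j]: axis_angle_to_euler(*aa)
    for j, aa in smpl_pose.items() if j in SMPL_TO_GENESIS
}
print(genesis_pose)
```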

  • Dartanbeck Posts: 21,549

    RobotHeadArt said:

    Text-to-pose and text-to-human-motion are areas of active research.  An example project is https://github.com/naver/posescript with its text-conditioned pose generation task/script.  Someone would need to modify the script to convert and map the pose data to a Genesis figure pose.

    Even still - that's doable. Very cool stuff.

    While I love getting my hands dirty and animating via aniMate 2, the timeline and other tools, I think this is really cool - as Cascadeur sounds fun to me as well.


    I've been collecting aniBlocks since my first Daz 3D purchase and I also grabbed up all of Reisormocap's (then Posermocap) animated PZ2 files and others as often as I could.


    Now my aniBlock collection is really cool. Instead of Artificial Intelligence, I use my own imagination and knowledge of my motion files to construct anything I need to put my characters through. It's really slick, incredibly fun and addicting, and it usually doesn't take much time. So I've got my ways and I really enjoy them.


    Still, I think that this whole idea is very cool!

  • burntumber Posts: 16
    edited March 7

    Eustace Scrubb said:

    cain-x said:

    DAZ would get lots of attention if they could get text to pose/animation to work. Imagine if you could tell your DAZ character to "walk to this waypoint, stop, then go idle"; that would be something.

    That should actually be fairly simple to script: the logic is, at any rate.  The user names his figure "Bobby," for example.  Then he calls up the AnimAI script, and tells it that "Bobby walks to the bus stop, sits on the bench, and looks left and right."  This instructs the script to do the following sequence:

    1. Each comma separates an explicit action, as with MidJourney, so the first is as follows:
      1. Select figure "Bobby".
      2. Select an aniBlock for action "walk."
      3. Trace a path to node "bus stop" from Bobby's present coordinates that intersects as few other meshes/instances as possible.
      4. Apply "walk" aniBlock to Bobby along generated path and update his location.
    2. After the first comma, the script proceeds:
      1. Select "bench" in the vicinity (+/- 2m virtual radius, adjustable in script settings before entering the AI prompt) of the bus stop.
      2. Select a "sit" aniBlock, "Walk" Bobby to the bench (using previously selected "walk" aniBlock) and orient him to "sit" on its upper surface.
      3. "Sit" Bobby on the bench.
    3. Following the second comma, the grammatical "and" tells the script (if it tells it anything at all) to end execution after this sequence:
      1. Select Bobby's head, neck, and both eye nodes.
      2. Execute a "look both ways" aniBlock, or rotate the selected nodes (per the "look" verb/command) approximately 30° counterclockwise* each (for additive effect on child nodes), then 60° clockwise, then 30° counterclockwise to the 0 position.

    The paths for motion should follow mesh contours whenever applicable, of course, and "sit" should use some sort of collision-detection system to sit on the bench rather than in or above it.  The script will also have to distinguish translation commands (walk, run, drive, gallop, trot, crawl) from prop-interaction blocks (open, sit on, lift, pick up, lie on), and different sorts of prop interactions from each other.  But that should really be all you need for it: it's not so much "AI," really, as an algorithm to translate English instructions into multi-stage scene manipulation sequences.

    *If the third clause is "right and left," the sequence in step 3.2 will be clockwise/counterclockwise/clockwise.

    The ideal AI method would be to teach the AI to animate.  In business, building your own AI assistant without code is getting very popular. If someone could train an AI assistant to animate in DAZ, I think that app would sell. I certainly would buy it.

    Post edited by Richard Haseltine on
  • Roman_K2 Posts: 1,239

    Seven193 said:

    Since AI only generates 2D images, you'll need to find a way to convert 2D poses to 3D.

    I think about a week ago (this is in mid-March 2024) OpenAI announced the... existence of their video engine.

    This Youtube clip is supposedly dated a month ago. One of many I'm sure.

