Pose capture from Mocap hardware/software

rannar22rannar22 Posts: 6
edited July 2020 in The Commons

I have been looking for a good software/hardware way to capture a real-life pose and transfer it to a Genesis figure.
I have heard there are some mocap options, but I am not too familiar with them, plus I don't need animation.
Can anyone suggest a suitable solution, preferably full-body and with facial expressions?

Post edited by rannar22 on

Comments

  • wolf359 Posts: 3,837
    You won't get a decent mocap hardware suit for less than $1,000 USD.

    The only way to get human mocap data into Daz Studio is as BVH, after parsing it through third-party software such as Autodesk MotionBuilder or Reallusion iClone Pro (Pipeline edition).

    It seems it would be less expensive to buy poses from the Daz and Renderosity markets and learn to adjust them yourself as needed.
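For context, BVH (the format mentioned above) is plain text: a HIERARCHY block describing the skeleton, followed by a MOTION block with one line of channel values per frame. A minimal, made-up example (joint names and offsets are illustrative only):

```
HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 10.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 10.0 0.0
        }
    }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 90.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 90.0 0.0 0.0 5.0 0.0 0.0 12.0 0.0
```

Each data line holds the channels in hierarchy order (here 6 for Hips, then 3 for Spine), which is why a single line is effectively a complete pose.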
  • rannar22 said:

    I have been looking for a good software/hardware way to capture a real-life pose and transfer it to a Genesis figure.
    I have heard there are some mocap options, but I am not too familiar with them, plus I don't need animation.
    Can anyone suggest a suitable solution, preferably full-body and with facial expressions?

    If you are not animating, I cannot think of any mocap solution that would give you better results than just doing it manually from multi-angle reference images. Even more so for facial expressions. You vastly overestimate how good mocap systems costing less than $30,000 are.

  • Gordig Posts: 10,192

    Not to mention that even the best mocap hardware and software don't circumvent the need to do manual adjustments.

  • The OP may be interested in 3D rotoscoping...

  • rannar22 Posts: 6

    Wow, I am really disappointed.
    I was sure there would be something half decent by this point.
    I saw a Kinect demo and got excited.
    Currently, I am using default purchased poses and changing them manually.
    But I was hoping to create something more natural on my own.

  • WendyLuvsCatz Posts: 38,621
    edited July 2020

    Perception Neuron with iClone 7 or other compatible software is probably the only affordable (but still very expensive) hobbyist solution out there; beyond that you are likely looking at tens of thousands of dollars.

    and just for a pose?

    I would understand mocap, but even then it would need to be pretty good mocap of something not already on the market: special sports, gymnastics, fight moves, choreography, etc., as there are only so many idle, talking, walking, eating, sitting, and standing motions people need.

    ... and probably the adult erotic industry, but they understandably prefer using real people to 3D 😊

    IMO a good photographer is a better choice: capture your pose from all angles to make matching it with a 3D rigged figure easier.

    Post edited by WendyLuvsCatz on
  • WendyLuvsCatz Posts: 38,621

    a link is probably not allowed, so google neuronmocap

    start making a list ...


  • wiz Posts: 1,100
    rannar22 said:

    I have been looking for a good software/hardware way to capture a real-life pose and transfer it to a Genesis figure.
    I have heard there are some mocap options, but I am not too familiar with them, plus I don't need animation.


    Actually, you do need animation. The motion helps the algorithms converge; it's like dForce clothing, in reverse. So let the system track you through a few moves, ending on the pose you need.

    I've used a setup with multiple Kinect 2 units. I can't remember the name; K2 has been dead for a while, and I haven't gotten a K3 yet for my own work. I really need to finish the new space.


  • fred9803 Posts: 1,564

    I have to admire the animation people. Personally, I don't have the patience for animation and very rarely even try it. I gave it a bash today and had to look up how to do it because it's been so long.

    A 120-frame, 350x400-pixel one took nearly an hour to finish. And that was with a good RTX GPU in a very simple scene. I can't imagine how long it would take at a decent size with multiple scene elements.

    I might just stick to still renders.

  • mindsong Posts: 1,715

    Brekel mocap with a Kinect might be able to get you some decent, usable motion sequences that you can use as a baseline to extract still-pose files for earlier DAZ/Poser figures (and up-convert to G3/G8), depending on your standards/needs. The soon-to-be-released version looks like the result of lessons well learned over a number of years working in that domain. Probably $250 to play for body motions (vs. face/hands).

    (I notice that the current iClone Motion LIVE suite doesn't even offer a Kinect plugin solution. Their older mocap was Kinect-based, but the new stuff is devoid of that capture hardware option.)

    As others have indicated, doing the full grab, clean, apply, refine process with mocap is best done with a full tool suite that supports the entire workflow (keyframe and graph editors, etc.) and something that gets you decent data from the start. I personally wouldn't use DS for anything over a minute or so, but I would recommend it with 'canned' animations for moving figures into dynamic still shots as mentioned by the OP. Others will likely argue. Do your own due diligence (starting with the current DS release and beta threads; look for animation comments...).

    One other angle: look into Mixamo and some of the other pre-created mocap animations available in the DAZ store and elsewhere (Mixamo = Adobe). If you import those, you can freeze any frame to taste and adjust/save it as a standard pose if you like. aniBlocks can be used this way as well.

    best,

    --ms
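Since a BVH clip is just text, the "freeze any frame" idea above can be sketched independently of any particular app. The helper below is a hypothetical illustration, not a DAZ or Brekel API: it copies a BVH file but keeps only one line of MOTION data, turning a motion clip into a single-frame pose.

```python
def freeze_bvh_frame(bvh_text: str, frame_index: int) -> str:
    """Return a copy of a BVH file reduced to a single frame,
    effectively turning a motion clip into a static pose."""
    lines = bvh_text.splitlines()
    # Frame data starts two lines after "Frames: N"
    # (the "Frame Time:" line sits between them).
    motion_at = next(i for i, l in enumerate(lines) if l.strip() == "MOTION")
    frames_at = motion_at + 1           # the "Frames: N" line
    first_frame = frames_at + 2         # first line of channel values
    frame = lines[first_frame + frame_index]
    header = lines[:frames_at]          # hierarchy plus "MOTION"
    return "\n".join(header + ["Frames: 1", lines[frames_at + 1], frame]) + "\n"
```

The resulting one-frame file can then be imported and saved as a pose preset in whatever tool you use for the conversion.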

  • Sevrin Posts: 6,310

    If I were going to do this, I'd probably use canned poses and try to vary them with Puppeteer.

  • wiz Posts: 1,100
    edited July 2020
    mindsong said:

    Brekel mocap with a Kinect might be able to get you some decent, usable motion sequences that you can use as a baseline to extract still-pose files for earlier DAZ/Poser figures (and up-convert to G3/G8), depending on your standards/needs. The soon-to-be-released version looks like the result of lessons well learned over a number of years working in that domain. Probably $250 to play for body motions (vs. face/hands).

    (I notice that the current iClone Motion LIVE suite doesn't even offer a Kinect plugin solution. Their older mocap was Kinect-based, but the new stuff is devoid of that capture hardware option.)

    A lot of developers stopped working with Kinect when the incredible Kinect 2 came out. K2 was faster than K1, and literally about 10x higher resolution and more accurate. Unfortunately, Microsoft decided to use it as a vehicle to promote Windows 8, back in the day when serious work was still running on 7. So we went from K1, which:

    • Ran on Windows 7, the most popular version of Windows, and on 8 (justifiably hated, with under 20% market penetration at the time, and no embedded version).
    • Had an open API for the IR depth camera and the normal camera, so you could at least do point-cloud math on Macs and Linux boxes. There were open-source skeletal tracking projects catapulting off this, and commercial apps like ...

    To K2, which:

    • Was artificially tied to Windows 8, with an installer that refused to work if run on a W7 machine. (It was possible to run something like Tripwire on Windows 8, install K2, and use it to pull the newly installed files and move them to a W7 machine. It was also possible to hang a USB 3 analyzer between the K2 and the PC and measure the traffic, then run 3x that traffic through a USB 3 machine-vision camera and have it work just fine, putting the lie to Microsoft's claim that Win 7 USB 3 support was inadequate for K2.)
    • Deliberately complicated multiple-sensor use, forcing people to use an awkward server model instead of just plugging a few K2s into one machine.
    • Had encrypted feeds from the TOF depth camera, so goodbye iClone and friends.
    • Got broken by a mandatory Windows update and wasn't fixed for six months.
    • Was involved in not one but two privacy-related scandals (an always-on camera and mic in your family room, and an always-on near-infrared "clothing-penetrating" camera pointed at your kids).

    To K3, which:

    • Is marketed as "Kinect for Azure" despite Azure being totally unnecessary for 95% of use cases.
    • Continues to lock out Mac and Linux.
    • Has brand-new APIs, so you can't port K1/K2 apps to K3 or vice versa. Writing dual-system support apps is a pain.

    And that's what happened to the 3D camera based motion capture industry.

    Post edited by wiz on
  • wiz said:
    • Continuing to lock out Mac and Linux.

    Are you sure about that? This link has instructions for Ubuntu 18.04. I've had two for a good long while, but I haven't tried them with Linux yet, believe it or not.

    https://docs.microsoft.com/en-us/azure/kinect-dk/sensor-sdk-download


  • mindsong Posts: 1,715
    wiz said:
    mindsong said:

    Brekel mocap with a Kinect might be able to get you some decent, usable motion sequences that you can use as a baseline to extract still-pose files for earlier DAZ/Poser figures (and up-convert to G3/G8), depending on your standards/needs. The soon-to-be-released version looks like the result of lessons well learned over a number of years working in that domain. Probably $250 to play for body motions (vs. face/hands).

    (I notice that the current iClone Motion LIVE suite doesn't even offer a Kinect plugin solution. Their older mocap was Kinect-based, but the new stuff is devoid of that capture hardware option.)

    A lot of developers stopped working with Kinect when the incredible Kinect 2 came out. K2 was faster than K1, and literally about 10x higher resolution and more accurate. Unfortunately, Microsoft decided to use it as a vehicle to promote Windows 8, back in the day when serious work was still running on 7. So we went from K1, which:

    • Ran on Windows 7, the most popular version of Windows, and on 8 (justifiably hated, with under 20% market penetration at the time, and no embedded version).
    • Had an open API for the IR depth camera and the normal camera, so you could at least do point-cloud math on Macs and Linux boxes. There were open-source skeletal tracking projects catapulting off this, and commercial apps like ...

    To K2, which:

    • Was artificially tied to Windows 8, with an installer that refused to work if run on a W7 machine. (It was possible to run something like Tripwire on Windows 8, install K2, and use it to pull the newly installed files and move them to a W7 machine. It was also possible to hang a USB 3 analyzer between the K2 and the PC and measure the traffic, then run 3x that traffic through a USB 3 machine-vision camera and have it work just fine, putting the lie to Microsoft's claim that Win 7 USB 3 support was inadequate for K2.)
    • Deliberately complicated multiple-sensor use, forcing people to use an awkward server model instead of just plugging a few K2s into one machine.
    • Had encrypted feeds from the TOF depth camera, so goodbye iClone and friends.
    • Got broken by a mandatory Windows update and wasn't fixed for six months.
    • Was involved in not one but two privacy-related scandals (an always-on camera and mic in your family room, and an always-on near-infrared "clothing-penetrating" camera pointed at your kids).

    To K3, which:

    • Is marketed as "Kinect for Azure" despite Azure being totally unnecessary for 95% of use cases.
    • Continues to lock out Mac and Linux.
    • Has brand-new APIs, so you can't port K1/K2 apps to K3 or vice versa. Writing dual-system support apps is a pain.

    And that's what happened to the 3D camera based motion capture industry.

    Very interesting history - one to save in the "tidbits" file.

    I knew the USB 3.x drivers were a requirement for keeping up with the K2 data rates, and some of the USB 3 hardware controllers didn't work well. That thinned the crowd and Kinect momentum too.

    I've also seen older versions of drivers work on W10 when newer versions don't, etc., so it seems somewhat intentional sometimes.

    It's amazing how the protectionist silos that vendors throw out there predictably fail each time, and yet history rhymes again.

    @wiz, knowing this, what direction have you taken for your mocap efforts (body, hands, facial) in light of the mentioned constraints?

    To the OP: there are some basic bone-capturing apps for the original Xbox 360 Kinect, and if you can capture a popular BVH skeleton and convert/map it to Poser or DAZ figure bones, you may be able to 'play' for almost free in a Windows environment.

    YMMV,

    --ms
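The convert/map step mentioned above amounts to renaming joints in the BVH hierarchy so they line up with the target figure's bones. A sketch, with a hypothetical name map (check your figure's actual bone list before relying on these names):

```python
# Hypothetical map from a common BVH skeleton to Genesis-style bone names.
BONE_MAP = {
    "Hips": "hip",
    "Spine": "abdomenLower",
    "LeftUpLeg": "lThighBend",
    "RightUpLeg": "rThighBend",
}

def retarget_joint_names(bvh_text: str, bone_map: dict) -> str:
    """Rename ROOT/JOINT identifiers in a BVH hierarchy so a generic
    skeleton lines up with the target figure's bone names."""
    out = []
    for line in bvh_text.splitlines():
        stripped = line.strip()
        for keyword in ("ROOT ", "JOINT "):
            if stripped.startswith(keyword):
                old = stripped[len(keyword):].strip()
                if old in bone_map:
                    line = line.replace(old, bone_map[old])
        out.append(line)
    return "\n".join(out)
```

Unmapped joints pass through unchanged, so you can build the map up incrementally as you test.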
