New Face MoCap Coming


Comments

  • WendyLuvsCatz Posts: 38,597
    edited December 2020

    drzap said:

    MoonCraft3D said:

    I'm just over here hoping for full-body motion capture / tracing from a movie.

    https://getrad.co/?utm_source=facebook&utm_medium=post&utm_campaign=anime&utm_content=unityviewer

    There is also DeepMotion.

    Oh, edit: I see it is the same thing.

    Post edited by WendyLuvsCatz on
  • drzap Posts: 795
    edited December 2020

    Faux2D said:

    The Genesis 3 & 8 Face Controls product already has everything you need for facial motion capture. The Dynamixyz software itself is very expensive, but there is a free trial version on their site. The main advantage is that you only need a video recording of an actor's face to extract the motion data. Nothing fancy like the depth camera the iPhone has; even 720p works just fine. The main disadvantage is the learning curve: don't expect a plug-and-play style solution. I attached a file going into more detail on the Dynamixyz-to-Daz pipeline.

    If you are interested in motion capture, I do recommend trying out the Dynamixyz-to-Daz pipeline, or at least reading the documentation plus the video tutorial and going through one test session with Dynamixyz's Performer. This will give you a better understanding of the challenges (AAA) studios face when dealing with facial motion capture.

    I also saw someone here mention Masquerade 2.0. Judging by the quality of animation from big productions, Masquerade 2.0 is the clear winner, with Dynamixyz coming in second. As an example, check out Avengers: Endgame. Thanos' facial animation was created with Masquerade; Smart Hulk's facial animation was created with Dynamixyz. Taking a more critical look at the scenes with the two of them: when the Hulk talks, notice how reluctant the camera is to show his lip-sync. Thanos' lip-sync, however, is top notch; you can even lip-read the animation.

    Depth cameras are great, but that's not even half the battle of getting good mocap data. Dynamixyz, for instance, doesn't need them, and neither does Masquerade. Both solve the depth issue by using two cameras at once.
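    The two-camera approach mentioned above is classical stereo triangulation: with two rectified cameras a known baseline apart, depth falls out of a feature's disparity between the two images. A minimal sketch (the focal length and baseline numbers below are made-up examples, not values from Dynamixyz or Masquerade):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by two rectified cameras.

    disparity_px is the horizontal shift of the same feature between
    the left and right images; depth = focal * baseline / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# A facial marker shifted 40 px between cameras 10 cm apart, 800 px focal length:
print(stereo_depth(800.0, 0.10, 40.0))  # depth of about 2 metres
```

    The closer a feature is to the rig, the larger its disparity, which is why a short camera baseline is enough for faces.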

     

    Dynamixyz is probably at the top of the chain when it comes to tracking and solving to a face rig with one or more cameras (I have used 3), but it takes a lot of manual work to get professional results, not to mention a steep price ($8k per year for multi-camera). But this new generation of machine-learning tracking and solving is a great leap forward. Nvidia's lip-sync Omniverse plugin is a whole new level, and it shouldn't require much, if any, manual work after initial setup: https://www.nvidia.com/en-us/design-visualization/omniverse/audio2face/; and Ziva's new product is almost as accurate as photogrammetry. If I'm going to spend thousands of dollars on software, I'd look into these as well. Even Dynamixyz has a new product brewing that does hands-free machine-learning tracking. These are good times for facial animators. By the way, I have a suspicion that Daz Studio will have entry into Omniverse. It would be a huge omission on Nvidia's part not to offer a path from Iray to its flagship creation product.

    Post edited by drzap on
  • Kevin Sanderson Posts: 1,643
    edited December 2020

    That Audio2Face demo is great! I wish it were ready. I hope DS will have entry into Omniverse.

    https://www.nvidia.com/en-us/design-visualization/omniverse/audio2face

    Post edited by Kevin Sanderson on
  • Faux2D Posts: 452

    drzap said:

    https://www.nvidia.com/en-us/design-visualization/omniverse/audio2face/

    That's insane. The results are actually better than Dynamixyz when dealing with lip-sync.

    Project Jali should come out soon as well.

    It basically does the same thing as Audio2Face: it generates a lip-sync animation from an audio file alone, and you can later modify its intensity, like how expressive each phoneme is, or how much lip movement versus jaw movement. But Nvidia's tech still looks better to me. All you really need is the lip-sync; the facial intensity of each sound can be adjusted manually through a tool like the Genesis Face Controls.
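    The per-phoneme controls described above can be sketched as a viseme table whose weights get rescaled after the fact. Everything here (phoneme names, channel names, weights) is illustrative, not Jali's or Nvidia's actual parameters; it just shows the idea of adjusting expressiveness and lip-versus-jaw balance after the lip-sync is generated:

```python
# Illustrative viseme table: per-phoneme blendshape weights.
VISEMES = {
    "AA": {"jaw_open": 0.8, "lip_open": 0.5},    # open vowel
    "OW": {"jaw_open": 0.4, "lip_pucker": 0.9},  # rounded vowel
    "MM": {"jaw_open": 0.0, "lip_press": 1.0},   # bilabial closure
}

def shape_for(phoneme, expressiveness=1.0, lip_over_jaw=1.0):
    """Return scaled blendshape weights: a global intensity knob,
    plus lip movement scaled relative to jaw movement."""
    weights = {}
    for channel, base in VISEMES[phoneme].items():
        scale = expressiveness * (lip_over_jaw if channel.startswith("lip") else 1.0)
        weights[channel] = min(1.0, base * scale)
    return weights

# Tone the whole performance down, but keep the lips relatively active:
print(shape_for("OW", expressiveness=0.5, lip_over_jaw=1.5))
```

    A real solver would also blend between consecutive visemes over time; the point is only that intensity can be a post-process on top of the raw lip-sync.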

  • nonesuch00 Posts: 18,320
    edited December 2020

    MoonCraft3D said:

    I'm just over here hoping for full-body motion capture / tracing from a movie.

    That's being made and it's in beta, but I'm almost certain it's based on one of the Omniverse SDKs. However, when I asked if that was the case, I got no answer, which to my way of thinking means it probably is based on one of the nVidia Omniverse SDKs/apps, and they don't want to divulge that for fear of hurting sales of their own product built on that SDK/app.

    AI DRIVEN Motion Capture - No Freaking Way!???????? - YouTube

    Post edited by nonesuch00 on
  • drzap Posts: 795

    Faux2D said:

    drzap said:

    https://www.nvidia.com/en-us/design-visualization/omniverse/audio2face/

    That's insane. The results are actually better than Dynamixyz when dealing with lip-sync.

    Project Jali should come out soon as well.

    It basically does the same thing as Audio2Face: it generates a lip-sync animation from an audio file alone, and you can later modify its intensity, like how expressive each phoneme is, or how much lip movement versus jaw movement. But Nvidia's tech still looks better to me. All you really need is the lip-sync; the facial intensity of each sound can be adjusted manually through a tool like the Genesis Face Controls.

    Jali has been offering facial animation as a service for quite a while. Their prices start at $25K per project. They are a small outfit of only about 6 employees, and they are only now forming a prosumer product to offer publicly. They put me on their waiting list and will let me know when they get their prices sorted out. And while we're on that topic, Jali's competitor, Speech Graphics, has a product that seems to be just as impressive: https://www.speech-graphics.com/production-software-sgx/
    It's a Maya plugin that works similarly to Jali, and they actually have something on offer now. Licenses are negotiated individually, but be prepared to spend a few thousand dollars.

  • drzap Posts: 795
    edited December 2020

    nonesuch00 said:

    MoonCraft3D said:

    I'm just over here hoping for full-body motion capture / tracing from a movie.

    That's being made and it's in beta, but I'm almost certain it's based on one of the Omniverse SDKs. However, when I asked if that was the case, I got no answer, which to my way of thinking means it probably is based on one of the nVidia Omniverse SDKs/apps, and they don't want to divulge that for fear of hurting sales of their own product built on that SDK/app.

    AI DRIVEN Motion Capture - No Freaking Way!???????? - YouTube

    That video is a promo for DeepMotion; nothing to do with Omniverse. It's an AI cloud-based service that turns your videos into mocap. They've been in beta for a long time and only started offering the service to the public this year. Out of all the mocap products, I think this one is the best suited for general Daz Studio animators. The mocap isn't very accurate (as you can see), but it is easy and the price is manageable. https://deepmotion.com/animate-3d

    Post edited by drzap on
  • wolf359 Posts: 3,837

    It pleases me to see more audio-based solutions (like Audio2Face) still being developed.


    Because we one-man animation studios depend greatly on PRE-RECORDED dialogue, from often disparate sources, and those depth-camera-based solutions that require a live realtime performance capture are not as easy to manage.

  • nonesuch00 Posts: 18,320
    edited December 2020

    wolf359 said:

    It pleases me to see more audio-based solutions (like Audio2Face) still being developed.


    Because we one-man animation studios depend greatly on PRE-RECORDED dialogue, from often disparate sources, and those depth-camera-based solutions that require a live realtime performance capture are not as easy to manage.

    That app from nVidia does have AI-assisted posing, not just Audio2Face, but they state it's for 'supported games'. I can't imagine that nVidia wouldn't eventually extend it for general use with Blender, Unity, and UE4 (and even DAZ Studio, if DAZ 3D staff develop it), but until they do, the best course of action seems to be to use DeepMotion and edit those results, or to use one of the nVidia-supported games for AI pose assist and convert those results to the general case. Still a big time savings, but a less than optimal work pipeline.

    What would be nice is if they let you use AI posing with your own game(s); then you could feasibly use gameplay to direct your own movie and write your own book based on your game in parallel with writing your game (while improving and extending the animations and dialogue with machinima/AI posing). Then you can do your own advertising in your own 'properties' to cross-promote them on shoestring budgets. Developing your story across such different modes of consumer consumption will make you pay more attention to fine details. If the story isn't 'getting you' in your text novel, it's probably not getting anyone as a movie or game either. Lots of extra work, but it should be fun.

    Post edited by nonesuch00 on