Comments
There is also DeepMotion.
Oh, edit: I see it is the same thing.
Dynamixyz is probably at the top of the chain when it comes to tracking and solving to a face rig with one or more cameras (I have used three), but it takes a lot of manual work to get professional results, not to mention a steep price ($8k per year for multi-camera). But this new generation of machine-learning tracking and solving is a great leap forward. Nvidia's Audio2Face lip-sync plugin for Omniverse is a whole new level, and it shouldn't require much, if any, manual work at all after initial setup: https://www.nvidia.com/en-us/design-visualization/omniverse/audio2face/ And Ziva's new product is almost as accurate as photogrammetry. If I were going to spend thousands of dollars on software, I'd look into these as well. Even Dynamixyz has a new product brewing that does hands-free machine-learning tracking. These are good times for facial animators.

By the way, I have a suspicion that Daz Studio will get entry into Omniverse. It would be a huge omission on Nvidia's part not to offer a path from Iray to its flagship creation product.
That Audio2Face demo is great! Wish it was ready. I hope DS will have entry into Omniverse.
https://www.nvidia.com/en-us/design-visualization/omniverse/audio2face
That's insane. The results are actually better than Dynamixyz's when it comes to lip-sync.
Project Jali should come out soon as well:
It basically does the same thing as Audio2Face: it generates a lip-sync animation from just an audio file, and you can later modify its intensity, such as how expressive each phoneme is and how much lip movement there is relative to jaw movement. But Nvidia's tech still looks better to me. All you really need is the lip-sync; the facial intensity of each sound can be adjusted manually through a tool like the Genesis Face Controls:
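To picture what those intensity controls amount to, here's a minimal Python sketch of per-phoneme intensity scaling with a lip-versus-jaw balance. The viseme names, curve data, and function are hypothetical illustrations, not the actual API of Jali, Audio2Face, or the Genesis Face Controls:

```python
# Hypothetical sketch: scaling per-phoneme viseme curves by intensity.
# Names and data are illustrative only, not any product's actual API.

# A lip-sync solve is roughly a set of keyframed viseme weights over time.
viseme_curves = {
    "AA": [(0.00, 0.0), (0.10, 0.8), (0.22, 0.0)],  # (time_sec, weight)
    "MM": [(0.22, 0.0), (0.30, 0.9), (0.40, 0.0)],
}

# Per-phoneme expressiveness, plus a global lip-vs-jaw balance.
phoneme_intensity = {"AA": 1.2, "MM": 0.7}
lip_vs_jaw = 0.6  # 1.0 = all lips, 0.0 = all jaw

def adjust(curves, intensity, lip_jaw):
    """Scale each viseme curve and split the result into lip/jaw channels."""
    out = {}
    for phoneme, keys in curves.items():
        scale = intensity.get(phoneme, 1.0)
        lip_keys = [(t, min(w * scale * lip_jaw, 1.0)) for t, w in keys]
        jaw_keys = [(t, min(w * scale * (1.0 - lip_jaw), 1.0)) for t, w in keys]
        out[phoneme] = {"lips": lip_keys, "jaw": jaw_keys}
    return out

adjusted = adjust(viseme_curves, phoneme_intensity, lip_vs_jaw)
for phoneme, channels in adjusted.items():
    print(phoneme, channels["lips"], channels["jaw"])
```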
That's being made and it's in beta, but I'm almost certain it's based on one of the Omniverse SDKs. However, when I asked if that was the case, I got no answer, which to my way of thinking means it probably is built on one of the Nvidia Omniverse SDKs/apps, and they don't want to divulge that for fear of hurting sales of a product based on an Nvidia Omniverse SDK/app.
AI DRIVEN Motion Capture - No Freaking Way!???????? - YouTube
Jali has been offering facial animation as a service for quite a while. Their prices start at $25K per project. They are a small outfit, only about six employees, and they are just now putting together a prosumer product to offer publicly. They put me on their waiting list and will let me know when they get the pricing sorted out. And while we're on that topic, Jali's competitor Speech Graphics has a product that seems just as impressive: https://www.speech-graphics.com/production-software-sgx/
It's a Maya plugin that works similarly to Jali, and they actually have something on offer now. Licenses are negotiated individually, but be prepared to spend a few thousand dollars.
That video is a promo for DeepMotion. Nothing to do with Omniverse. It's an A.I. cloud-based service that turns your videos into mocap. They've been in beta for a long time and just started offering service to the public this year. Out of all the mocap products, I think this one is the best suited for general Daz Studio animators. The mocap isn't very accurate (as you can see) but it is easy and the price is manageable. https://deepmotion.com/animate-3d
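For anyone curious what "cloud-based mocap" means in practice, here's a rough sketch of the upload-and-poll pattern such services typically use. The base URL, endpoints, and JSON fields below are hypothetical placeholders, not DeepMotion's actual Animate 3D API:

```python
# Hypothetical sketch of a video-to-mocap cloud workflow (upload, poll, download).
# Endpoints and JSON fields are illustrative, NOT DeepMotion's real API.
import time
import requests

API = "https://api.example-mocap.com/v1"   # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# 1. Upload the source video of the performance.
with open("performance.mp4", "rb") as f:
    resp = requests.post(f"{API}/jobs", headers=HEADERS, files={"video": f})
job_id = resp.json()["job_id"]

# 2. Poll until the cloud solver finishes.
while True:
    status = requests.get(f"{API}/jobs/{job_id}", headers=HEADERS).json()
    if status["state"] in ("done", "failed"):
        break
    time.sleep(10)

# 3. Download the mocap result (e.g. FBX/BVH) for cleanup in your DCC app.
if status["state"] == "done":
    data = requests.get(status["result_url"], headers=HEADERS)
    with open("mocap_result.fbx", "wb") as out:
        out.write(data.content)
```

The appeal for Daz Studio animators is exactly what the comment above describes: no suits or depth cameras, just ordinary video in and an animation file out, with manual cleanup afterwards.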
It pleases me to see more audio-based solutions (like Audio2Face) still being developed.
Those of us running one-man animation studios depend greatly on PRE-RECORDED dialogue, often from disparate sources, and the depth-camera solutions that require a live, real-time performance capture are not as easy to manage.
That app from Nvidia does have AI-assisted posing, not just Audio2Face, but they state it's for "supported games". I can't imagine that Nvidia won't eventually extend it for general use with Blender, Unity, and UE4 (and even Daz Studio, if Daz 3D staff develop for it), but until they do, the best course of action seems to be to use DeepMotion and edit those results, or to use one of the Nvidia-supported games for AI pose assist and convert those results to the general case. Still a big time savings, but a less than optimal pipeline.

What would be nice is if they let you use AI pose with your own games. Then you could feasibly use gameplay to direct your own movie and write your own book based on your game in parallel with writing the game (while improving and extending the animations and dialogue with Machinima/AI pose). You could then advertise in your own "properties" to cross-promote them on a shoestring budget. Developing your story in such different modes of consumer consumption forces you to pay more attention to fine details: if the story isn't "getting you" as a text novel, it's probably not getting anyone as a movie or game either. Lots of extra work, but it should be fun.
Updated notes on Audio2Face from a couple of days ago:
https://docs.omniverse.nvidia.com/app_audio2face/app_audio2face.html