Stable Diffusion in Blender
TheMysteryIsThePoint
Comments
damn, getting good at 3D is going to be rendered obsolete soon.
And that is a very, very good thing, isn't it?
It would be cool to have a "stable 3diffusion" to generate 3d models instead of images.
I am sure that that's just a matter of time.
That "stable 3diffusion" needs at least few decades to be practical, currently the 3d photogrammetry techniques are not even able to generate working materials from real images with correct UV. The 3D models in the market are still using faked textures.
Edit: It's the free 3d photogrammetry software I've tried that can't generate correct materials and UV. The Megascans and some 3D body scan software like MetaHuman are exceptional, their techniques are kept secret, users can only download the scanned assets but not the software used for photogrammetry.
depth maps and parallax can be pretty convincing though
not Blender but doable in Blender
(DAZ studio iray renders of Stonemason's Streets of Steampunk inbetween)
used https://convert.leiapix.com/#/local/1 on the Midjourney and Stable Diffusion images
it also outputs depth maps you can use to create a mesh in Blender
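Something like the following would be one way to turn such a depth map into geometry. This is only a minimal sketch: the file path, grid resolution, and displacement strength are assumptions, not part of the converter's output.

```python
# Minimal sketch: displace a dense grid with a depth map image in Blender.
import bpy

# Load the depth map image (path is an assumption for illustration).
depth_img = bpy.data.images.load("/path/to/depth_map.png")

# Create an image texture the Displace modifier can read.
depth_tex = bpy.data.textures.new("DepthMap", type='IMAGE')
depth_tex.image = depth_img

# Add a densely subdivided grid to carry the displacement detail.
bpy.ops.mesh.primitive_grid_add(x_subdivisions=256, y_subdivisions=256, size=2)
plane = bpy.context.active_object

# Displace the grid along its normals using the depth map via UV coordinates.
disp = plane.modifiers.new("DepthDisplace", type='DISPLACE')
disp.texture = depth_tex
disp.texture_coords = 'UV'
disp.strength = 0.3  # tune to taste; depends on how the depth map is scaled
```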
@TheMysteryIsThePoint
Hi Donald,
I have found a few online services that might be viable, at least for "virtual host" narration for my video tutorials.
https://voicemaker.in/
I get it… companies have to make money.
But if they are going to offer a "free tier" with major limitations, then they should be clearer about the (previously unmentioned) limitations that kick in after a certain number of generated voice tracks.
Here is a real-world example that happened two-thirds of the way into generating voice tracks for my recent Blender tutorial.
I opted for the free tier, which limits me to 250 characters per generated track and two voices (male or female) of any national dialect, using their "Neural" voice engine.
It may be subtle to some people, but at 6 min 42 sec of my tutorial video, notice how his voice suddenly becomes more "robotic".
This is because, even in the free tier, you apparently only get a limited number of tracks before a popup informs you that you're being kicked down from the high-quality Neural engine to the standard voice engine... unless you start paying.
I know beggars can't be choosers, but it would be nice to know ALL of the true limitations of the "free" version before committing to a project that uses it.
Great info and news. Something I like from AI is the img2img feature. It's possible to generate variations of your rendered scene into multiple iterations of an idea with the same kind of style. Yeah, it's still hit and miss, but after generating hundreds of images with that feature you'll get a decent collection of coherent images before rebuilding those scenes in Blender.
There is also a tool to train the AI to recognize your own style if you're concerned about the ethics side; you can get info in this video.
Here's my img2img quick study (corrected in Photoshop to make it more coherent).
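For anyone wanting to script the render-to-variation step rather than use a UI, a minimal img2img sketch with the Hugging Face diffusers library might look like this. The model ID, prompt, file names, and strength value are assumptions for illustration, not what was used for the images above.

```python
# Minimal img2img sketch with diffusers: feed a Blender render in, get styled variations out.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The source render; resolution should be a multiple of 64 for SD 1.5.
init_image = Image.open("blender_render.png").convert("RGB").resize((768, 512))

# Lower strength keeps more of the original render; higher strength diverges from it.
result = pipe(
    prompt="steampunk street, moody lighting, detailed concept art",
    image=init_image,
    strength=0.55,
    guidance_scale=7.5,
).images[0]
result.save("variation_01.png")
```

Running it in a loop with different seeds and prompts is how you end up with the "hundreds of images" collection mentioned above.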
Watched the YouTube videos about AccuRig, and it's not actually free: it needs to be used with expensive Reallusion software for custom motion data, otherwise it's just another custom rig without animation support. It also needs an additional addon in their paid software to generate the face rig. That's a big money pit behind the freebie lure.
The Quick Rig extension for Blender's Auto-Rig Pro has all the features of AccuRig, plus support for face rigs and mapping presets including DAZ, Mixamo, and the Unreal Mannequin.
I love animating ARP rigs in Blender.
@acatmaster
Not entirely true.
Now, yes, the FBX files that AccuRig exports to Blender are rigged with the ActorCore/CC4 skeleton.
However, you are NOT restricted to using ActorCore motions once in Blender.
If you use the FREE Blender pipeline tool from RL to convert the AccuRig rig to Blender Rigify (one mouse click),
you can use the FREE Expy Kit addon, which can retarget nearly any mocap to a Rigify control rig.
Here is Poser 11's "La Femme" rigged with AccuRig, imported, and converted to Rigify.
Her control rig is now being driven by a random mocap from Mixamo.
Of course she has no face rig and her joints look horrible!! But frankly, so do the iClone figures in Blender (with face rigs intact, though).
This is why I switched back to Diffeo/G8 for character work in Blender.
Chiming in to +1 the mention of Quick Rig. Between that, ARP, Onion Skin Tools, and Animation Layers, that's everything one needs.
The plugin creator has a new teaser on his YouTube channel, 2D to 3D mesh O.O
Well that's not very impressive to me. Reminds me of biscuit tin lids when I was a child which were pressed sheet metal to make the image stand out a little. I would not describe this process as creating a 3D object from a 2D image. I guess you need photogrammetry for that.
yeah true but that's just the start I think. Unless he can't figure out how to improve it any.
Seamless Textures
It's beginning.
Can I ask what the name of this free pipeline tool from RL is? And what is this FREE Expy Kit addon?
AccuRig or Rigify can be used to rig characters. Auto-Rig Pro can rig, skin-bind, and retarget animations to any custom-rigged character.
---------------------------------------------------------------------------------------
Expy Kit: Character Conversion for mixamo, rigify, unreal
https://github.com/pKrime/Expy-Kit
---------------------------------------------------------------------------------------
Auto-Rig Pro does not retarget root motion; the root motion tracks need to be copied to the root bone manually. The UE5 retargeter also doesn't retarget root motion, so you have to make the retargeted animations outside UE5, using either UE4 or Blender.
Both Mixamo and Daz rigs lack the root bone needed for extracting root motion in UE5.
There is a Blender plugin, Mixamo Converter, that adds a root bone to the skeleton and converts Mixamo animations to be root-motion compatible (it removes the armature translation track of the hip bone and copies it into the root bone track). A rough sketch of that idea is below.
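This is only an illustrative sketch of the hip-to-root transfer, not the plugin's actual code. The armature name "Armature", the Mixamo hip bone name "mixamorig:Hips", and the new "root" bone name are assumptions, and bone-space axis conversion (which the real converter handles) is glossed over.

```python
# Rough sketch: add a root bone and re-point the hip translation F-curves at it.
import bpy

arm = bpy.data.objects["Armature"]  # assumed armature object name
bpy.context.view_layer.objects.active = arm

# 1. Add a root bone at the armature origin and parent the hips to it.
bpy.ops.object.mode_set(mode='EDIT')
root = arm.data.edit_bones.new("root")
root.head = (0.0, 0.0, 0.0)
root.tail = (0.0, 0.3, 0.0)
arm.data.edit_bones["mixamorig:Hips"].parent = root  # assumed hip bone name
bpy.ops.object.mode_set(mode='OBJECT')

# 2. Move the hip translation curves onto the root bone so the root carries
#    the world-space movement (assumes an action is already assigned).
action = arm.animation_data.action
hip_path = 'pose.bones["mixamorig:Hips"].location'
root_path = 'pose.bones["root"].location'
for fc in action.fcurves:
    if fc.data_path == hip_path:
        fc.data_path = root_path
```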
----------------------------------------------------------------------------------------
The UE5 retargeter actually can retarget root motion; you need to create a retarget chain for the root bones of both skeletons, with the translation mode set to global scaled.
I'd still prefer retargeting in Blender and exporting new animations into UE5, because of the ability to modify the source animations and clean up the mocap results.
???? Ummh..... yes it does.
Animations retargeted directly from Mixamo FBX files to free character meshes from Sketchfab.
I mean the plugin doesn't extract the armature translation of the hip bone relative to the armature origin and convert that translation to the root bone. The root bone used in game engines isn't the hip bone used in Blender for armature movement.
This is an Unreal-related problem; if the animations are used in Blender, ARP works well for the purpose, and the retargeted armature indeed moves with the armature translation.
In Unreal there is a mechanism called root motion extraction https://docs.unrealengine.com/5.0/en-US/root-motion-in-unreal-engine/
This mechanism basically extracts the armature root translation and applies it to the character capsule to move the character in the game world. The mechanism requires separate root and hip bones: the root bone stores the armature movement relative to the FBX origin, and the hip bone stores the hip motion relative to the root bone.
Examples of root motion in games are dashing movements, side-dodging movements, vertical climbing movements, etc.
The ARP retargeting list doesn't include such a root bone, and neither the G8 nor the Mixamo skeleton has a separate root bone; the hip bones store both the armature's world-space movement and the hip motion. The mixed motion of the hip bone needs to be separated and assigned to a newly created root bone in order to extract root motion for game engines.
---------------------------------------------------------------------------------------------------
Animations used in games need to be applied via character blueprints that are placed in the game maps, so there are differences between the asset's local space and the game map's space, and in some cases root motion needs to be extracted from the local space to move the characters in the game maps, although basic locomotion is not based on root motion but driven by character movement logic.
Understood, I don't use game engines,
so I am not affected, but good to know, thanks.
Thanks very much for the github link to this plugin. Really appreciate it.
Cheers
Kenmo in Nova Scotia...
If AI could do a face scan from videos, as this seems to suggest, I'd be very happy.
Here is a video on using EXPY kit with Diffeo imported G8 characters
FYI: Mixamo recently altered the default rotation of their FBX files and cocked up a lot of people's retargeting pipelines.
I suggest using the 2475 Mixamo FBX files from this archive instead
https://drive.google.com/file/d/1G86S76VvvJMHTyAsQU8mzLt8QwtnH9st/view?usp=share_link
The UE5 retargeter actually can retarget root motion; you need to create a retarget chain for the root bones of both skeletons, with the translation mode set to global scaled.
Would prefer retargeting in Blender and exporting new animations into UE5, because of the ability to modify the source animations and clean up the mocap results.
------------------------------------------------------------------------------------------
Very nice tutorial on how to use Expy Kit for retargeting.
Auto Rig Pro: Retarget from source skeleton to target control rig
EXPY kit: Bind target control rig to source skeleton -> bake keyframes to target control rig
It's quite amazing how much Expy Kit can do considering it's free.
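The binding step in that workflow is done through the Expy Kit UI; the final "bake keyframes to the control rig" step can also be run with Blender's built-in bake operator, roughly like this. The frame range and the assumption that the Rigify control rig is active with its constrained control bones selected are illustrative, not from the tutorial.

```python
# Sketch of the final bake step after binding the control rig to the source skeleton.
import bpy

scene = bpy.context.scene
# With the Rigify control rig active in Pose Mode and its bound control bones selected:
bpy.ops.nla.bake(
    frame_start=scene.frame_start,
    frame_end=scene.frame_end,
    only_selected=True,      # bake just the selected control bones
    visual_keying=True,      # record the constraint-driven motion as keyframes
    clear_constraints=True,  # drop the temporary binding constraints afterwards
    bake_types={'POSE'},
)
```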