Strategies and methods for audio

Speculativism Posts: 98
edited December 1969 in The Commons

I'm posting this for anyone who enjoys thinking outside the box and speculating about different possible ways of achieving interesting results in animation.

Audio in 3D animation is a real challenge, I think, and I keep mulling over strategies for integrating it into the workflow. There is already one way of linking audio to movement, but only for the lips, via Mimic. To me, the existence of Mimic is a proof of concept: it shows that it would be possible to link sound to movement in a broader way, for instance by triggering aniBlocks in time with the soundtrack and thus creating dance. I can already achieve this in a roundabout way, by rendering short video clips of a character performing a single dance move in each clip and then triggering the clips in a VJ programme such as Visual Jockey Gold. However, a script which links Mimic-style audio analysis to aniBlocks instead of visemes would speed up the workflow and leave more room for play.

Another method which occurred to me would be to use the Rudolf Laban system. Laban was a dance and choreography researcher who devised a system of notation for movement. The system, known as Labanotation, makes it possible to write down a sequence of movement in much the same way as writing down music. Just as we write symbols for notes, chords, and other musical information, and then convert the sheet music into a MIDI file which plays the tune, so in Labanotation one can write down a dance or other movement sequence and then recreate the exact movements, either with live actors or with digital 3D figures. What we don't have at the moment is a file format like MIDI, but for the movements of a rigged figure. As I'm sure you know, a MIDI file is a different kind of thing from a WAV: the WAV contains the audio itself, while the MIDI file contains only the coded instructions for producing it, and different MIDI instruments can be used for playback. A MIDI-like file for 3D movement could, in the same way, translate sounds into movements of body parts - linking the sound of footsteps to the actual movement of the foot, say, or linking qualities of sound to free, expressive dance moves. As I said before, the existence of Mimic is proof that audio can drive part of a 3D figure; it's just a matter of linking the sound to the arms, legs, and so on rather than just the lips.
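To make the "MIDI for movement" idea a bit more concrete, here is a rough Python sketch. Everything in it is invented for illustration - MoveEvent, score_to_keyframes, and the joint names are not part of any real Mimic, aniBlock, or Labanotation software - but it shows the basic shape: timed "movement notes" for named body parts, expanded into per-frame keyframes the way a MIDI player expands note events into sound.

```python
from dataclasses import dataclass

@dataclass
class MoveEvent:
    """Hypothetical 'movement note' -- the body-motion analogue of a MIDI note event."""
    time: float       # seconds from the start of the piece, like a MIDI tick
    body_part: str    # rig joint name, e.g. "lFoot" (illustrative only)
    rotation: tuple   # target Euler angles (x, y, z) in degrees
    duration: float   # seconds taken to reach the target pose

def score_to_keyframes(events, fps=30):
    """Expand a movement 'score' into (start_frame, end_frame, joint, rotation)
    keyframes, the way a MIDI player expands note events into audio."""
    keyframes = []
    for ev in sorted(events, key=lambda e: e.time):
        start = round(ev.time * fps)
        end = round((ev.time + ev.duration) * fps)
        keyframes.append((start, end, ev.body_part, ev.rotation))
    return keyframes

# A two-step "dance phrase": left foot on the downbeat, right foot answering.
score = [
    MoveEvent(0.0, "lFoot", (20.0, 0.0, 0.0), 0.5),
    MoveEvent(0.5, "rFoot", (20.0, 0.0, 0.0), 0.5),
]
keyframes = score_to_keyframes(score)
print(keyframes)
```

The point of the sketch is only that the file would store instructions, not baked animation - just as a MIDI file stores instructions rather than audio - so the same "score" could drive different figures, the way one MIDI file can drive different instruments.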

Then again, I started thinking about the origins of the AVI file format, created back in the early 90s. AVI stands for "audio-video interleave" and does what it says, interleaving video frames with sound samples. So then I thought, "What if there could be a file which interleaves movement of the 3D figure with sound?" - a sort of "audio-movement interleave", or "AMI", file. It would be a simple interleaving of BVH motion data with WAV (or other) audio data. But that's a different idea from the one in the paragraph above.
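As a toy version of that interleaving idea, here is a Python sketch that alternates one frame of motion data (a tuple of BVH-style joint channel values) with the block of audio bytes covering that frame, using AVI-like tagged chunks. The "AMI" layout and the chunk tags ("MOVE", "SND ") are made up on the spot - no such format actually exists - but it shows how simple the interleaving itself would be:

```python
import io
import struct

def write_ami(stream, motion_frames, audio_blocks):
    """Write alternating MOVE/SND chunks, AVI-style: 4-byte tag,
    4-byte little-endian length, then the payload."""
    for frame, block in zip(motion_frames, audio_blocks):
        payload = struct.pack(f"<{len(frame)}f", *frame)  # joint channel values
        stream.write(b"MOVE" + struct.pack("<I", len(payload)) + payload)
        stream.write(b"SND " + struct.pack("<I", len(block)) + block)

def read_ami(stream):
    """Read the chunks back as (tag, payload) pairs."""
    chunks = []
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        tag, size = header[:4], struct.unpack("<I", header[4:])[0]
        chunks.append((tag, stream.read(size)))
    return chunks

# One motion frame (three joint channels) interleaved with four audio bytes.
buf = io.BytesIO()
write_ami(buf, [(0.0, 15.0, -4.2)], [b"\x00\x01\x02\x03"])
buf.seek(0)
print([tag for tag, _ in read_ami(buf)])  # -> [b'MOVE', b'SND ']
```

A player would then step through the file chunk pair by chunk pair, posing the figure from each MOVE chunk while feeding each SND chunk to the audio output, just as an AVI player alternates picture and sound.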
