
puppeteering video production / lip sync & pose sync

One crucial feature of working with synthetic characters is the possibility to steer them with real voice and pose acting done IRL. With fully synthetic characters and purely synthetic steering, we have a hard time avoiding uncanny visuals, so it is an obvious move to use real acting and proper human input to produce convincing videos and animations. This combines the advantages of both worlds in one workflow.

The workflow starts with proper keyframe production. The keyframe needs to be a proper shot of the main character in a plausible setting. Choose a relevant camera angle that suits a portrait or dance scene interpolation.

keyframe of character in plausible setting
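
For the keyframe itself, any capable text-to-image model works. A minimal sketch of this step, assuming the open SDXL checkpoint via Hugging Face diffusers ( model name, prompt wording and parameters are illustrative assumptions, not a fixed recommendation ):

# Minimal sketch: generate the keyframe with an open text-to-image model.
# The SDXL checkpoint and the prompt are assumptions for illustration only.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "keyframe of character in plausible setting, waist-up portrait, "
    "eccentric bio outfit, soft diffuse lighting, neutral background"
)

image = pipe(prompt=prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("keyframe.png")  # starting frame for the video diffusion step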

The keyframe is the starting point for the next step, the actual animation with camera and character movement, using a video diffusion model. This works best when the situation and everything visible are mentioned in the prompt, plus instructions for what should happen during the shot. Besides that, it is helpful to describe the camera and lighting behaviour too.



person in eccentric bio outfit. handheld camera. person tilts the head. camera orbits slowly
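
For the animation step, an image-conditioned video diffusion model that also takes a text prompt is needed. A minimal sketch, assuming the open I2VGen-XL pipeline from diffusers ( checkpoint, prompt wording and parameters are again just illustrative assumptions ):

# Minimal sketch: animate the keyframe with an image + text conditioned
# video diffusion model. I2VGen-XL is one open option; checkpoint, prompt
# and parameters are assumptions, not a specific recommendation.
import torch
from diffusers import I2VGenXLPipeline
from diffusers.utils import export_to_video, load_image

pipe = I2VGenXLPipeline.from_pretrained(
    "ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = load_image("keyframe.png")
prompt = (
    "person in eccentric bio outfit. handheld camera. "
    "person tilts the head. camera orbits slowly"
)

frames = pipe(
    prompt=prompt,
    image=image,
    negative_prompt="distorted face, blurry, low quality",
    num_inference_steps=50,
    guidance_scale=9.0,
    generator=torch.Generator("cuda").manual_seed(0),
).frames[0]

export_to_video(frames, "animation.mp4", fps=16)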

The steering video guides the puppeteering modifications in the already generated video. It should be shot in plain diffuse lighting, with the person posing waist up in front of a neutral plain background. The face should be visible at all times during the video. ( proprietary service: Runway Act-Two )

This pose and lip sync workflow is evolving rapidly, so we will see improvements and proper open source solutions, even on consumer PCs, in the near future.
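
As long as Act-Two stays proprietary, there is no local API to show here, but a sanity check of the steering video can be automated before the upload. The sketch below assumes OpenCV with its bundled Haar cascade face detector and a placeholder filename; it simply flags frames where no face is detected.

# Minimal sketch: verify the steering video meets the stated requirements
# ( face visible in every frame ) before handing it to a pose / lip sync service.
# Uses OpenCV's bundled Haar cascade; the filename is a placeholder.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("steering_video.mp4")
missing = []   # frame indices where no face was detected
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        missing.append(frame_idx)
    frame_idx += 1

cap.release()

if missing:
    print(f"face not detected in {len(missing)} of {frame_idx} frames, e.g. {missing[:10]}")
else:
    print(f"face visible in all {frame_idx} frames, steering video looks usable")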