Figures created with generative AI don't always move the way you want them to. For more control over animated characters you can turn to motion capture, and Runway has offered an AI tool for this for some time. It has now been updated and, under the name "Act-Two", transfers the motion from a reference video onto a character even more faithfully. Unlike traditional animation, the character does not have to exist as a 3D model; a single (possibly AI-generated) 2D image is enough.
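In practice the workflow boils down to two inputs: a driving video of a real performance and a single still image of the target character. Purely as an illustration, the Python sketch below shows what submitting such a job to Runway's developer API could look like; the endpoint path, version header and JSON field names are assumptions based on the description above, not details taken from the article.

```python
import os
import requests

# Hypothetical sketch of an Act-Two style character-performance request.
# Endpoint path, header values and field names are assumptions made for
# illustration; consult Runway's API documentation for the real interface.
API_BASE = "https://api.dev.runwayml.com/v1"


def submit_performance(character_image_url: str, driving_video_url: str) -> str:
    """Send one driving video plus one 2D character image; return the task id."""
    response = requests.post(
        f"{API_BASE}/character_performance",
        headers={
            "Authorization": f"Bearer {os.environ['RUNWAY_API_KEY']}",
            "X-Runway-Version": "2024-11-06",  # assumed version string
        },
        json={
            "model": "act_two",
            # Target character: a single (possibly AI-generated) 2D image,
            # no 3D model or rig required.
            "character": {"type": "image", "uri": character_image_url},
            # Performance to transfer: an ordinary video of a person.
            "reference": {"type": "video", "uri": driving_video_url},
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["id"]
```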
"Tracking" of head, face, body and hands is supported - the following video impressively shows what is possible with just a few clicks:
Such MoCap animations were in fact quite complex to produce before the advent of generative AI: to transfer a person's facial expressions onto an artificial model, markers on the face had to be tracked and mapped onto the 3D model. With Runway's motion-capture model, the AI is apparently able to apply a person's filmed facial expressions to very differently shaped characters without any such aids.
Beyond a single slider that sets how closely the AI should stick to the reference performance, there are no further ways to intervene. According to Runway, the best results are achieved when the person in the reference video is framed similarly to the target character, i.e. seen from the same angle and with a similar shot size.
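In API terms that single control would amount to one extra field on the request sketched above. The field name and value range below are assumptions used only to illustrate the idea, not documented parameters.

```python
# Continuation of the sketch above: the only exposed creative control is a
# single dial for how strictly the AI follows the driving performance.
payload = {
    "model": "act_two",
    "character": {"type": "image", "uri": "https://example.com/character.png"},
    "reference": {"type": "video", "uri": "https://example.com/performance.mp4"},
    # Assumed name and range: e.g. 1 = loose interpretation, 5 = very faithful.
    "expressionIntensity": 3,
    # Per Runway's advice, the reference video should frame the performer at a
    # similar angle and shot size as the target character image.
}
```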
Runway Act-Two is said to be available now to all enterprise customers and "creative partners" (so apparently not yet to regular subscribers).