Alternative titles:
- Quick and easy: Special effects by AI
- Quick and easy: VFX by AI
- AI instead of motion capture
We recently presented a short clip of a virtual dynamic fashion show, which demonstrated how easily, yet effectively, different AI tools can be combined to create interesting effects in videos.
Filmmaker and musician Scott Lighthiser has combined three other free AI apps to produce simple but quite spectacular special effects in the clips below: the AI image generator Stable Diffusion, the style transfer tool EbSynth, and the AI voice transformer Koe Recast.
Each clip was based on a short video of himself scaled to 1,024 x 1,024 pixels. Using Stable Diffusion via Google Colab and its img2img function, which makes it possible to modify an existing image in a targeted way with AI, Lighthiser created completely different but precisely matching variations of the speaker: a female Greek marble statue, an old man, and last but not least a bloody zombie. These three images were then upscaled by AI to 1,024 x 1,024 pixels and used as style keyframes in EbSynth, which applied them to the entire video. Finally, to change his voice to match, Lighthiser used the AI-powered app Koe Recast, which lets you transform its sound in real time as desired.
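The img2img step described above can be sketched in Python with the Hugging Face diffusers library. This is an assumption: Lighthiser used a Google Colab notebook whose exact code is not shown, and the model name, strength, and guidance values here are illustrative, not his settings.

```python
# Minimal sketch of a Stable Diffusion img2img keyframe step, assuming the
# Hugging Face "diffusers" library. Model name and parameter values below
# are illustrative; they are not Lighthiser's actual Colab settings.

def snap_to_multiple(size, multiple=64):
    """Stable Diffusion expects width and height divisible by 64;
    round each dimension down to the nearest valid value."""
    return tuple((dim // multiple) * multiple for dim in size)

if __name__ == "__main__":
    # Heavy imports and the model download only run when executed directly;
    # this requires a GPU and the diffusers/torch/Pillow packages.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    frame = Image.open("frame.png").convert("RGB")
    frame = frame.resize(snap_to_multiple(frame.size))

    # strength controls how far the output may drift from the input frame:
    # lower values preserve the pose, higher values restyle more aggressively.
    result = pipe(
        prompt="female Greek marble statue, film still",
        image=frame,
        strength=0.6,
        guidance_scale=7.5,
    ).images[0]
    result.save("statue_keyframe.png")
```

The resulting image would then serve as one of the style keyframes fed into EbSynth.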
Even more striking is his following horror animation, which transforms not just his face but his entire body into that of an undead creature (the prompt, by the way, was "The dry creature, film still from the movie directed by Denis Villeneuve and David Cronenberg with art direction by H. R. Giger and Bernie Wrightson"):
The horror motif is particularly well suited to these demos, because the dynamic artifacts produced by an imperfect style transfer in EbSynth, which would otherwise be distracting, tend to enhance the shock effect here.
The classic workflow would have been far more complicated: first the speaker's face would have to be captured as a 3D model via face capturing, then a 3D model of the new face would have to be created in a 3D modeling program, mapped onto the original face, and rendered. All of this requires a lot of specialized knowledge in addition to the right tools.
In contrast, the simple AI workflow with its relatively easy-to-use tools produced very different, heavily stylized versions of the original video in a short time - a small foretaste of the extensive possibilities of the currently rapidly developing AI tools for images and, increasingly, video.
The whole thing is just a small demonstration of the effects that can be created by combining multiple AI tools - even before there are AIs specialized entirely in video that can freely create or edit whole videos on demand from a text description alone, much as Stable Diffusion or DALL-E 2 can already do with images. If such tools arrive in the future and resolution and image quality improve further, then after the revolution in filmmaking, which has so far been driven primarily by hardware, the next stage will be ignited by AI tools that make professional yet easy-to-create special effects (and animations) accessible to a broad audience. More info at twitter.com