Nvidia has been working with Finland's Aalto University for some time on neural networks that generate strikingly realistic faces. Over time the underlying AI, more precisely a GAN (Generative Adversarial Network), was refined further and further, and eventually allowed individual aspects of a face to be changed selectively via "style transfer", for example gender, skin color, hairstyle, age, or facial expression. A nice demonstration of the technique is the website This Person Does Not Exist, which generates a new face each time it is loaded.
The next logical step for the StyleGAN2 algorithm was to animate these newly created faces, or to morph between faces by slightly varying the parameters. This looked quite good, but the results suffered from a particular problem: the face seemed to change "underneath" certain features such as a beard or hair, which appeared stuck in place.
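Such morphing is typically done by interpolating between two latent vectors and feeding each intermediate vector to the generator, which renders one frame per step. A minimal sketch of the idea (the 512-dimensional latents and the interpolation are illustrative assumptions; the actual generator is not shown here):

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors, t in [0, 1]."""
    return (1.0 - t) * z0 + t * z1

# Two random latent vectors, sampled the way a typical GAN would.
rng = np.random.default_rng(0)
z0 = rng.standard_normal(512)
z1 = rng.standard_normal(512)

# Ten intermediate latents; feeding each one to the generator
# would yield one frame of the morph between the two faces.
frames = [lerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 10)]
```

The morph looks smooth only if the generator maps nearby latents to nearby images, which is exactly where the "stuck" beard and hair artifacts of StyleGAN2 became visible.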
The latest version, Alias-Free GAN, takes this a decisive step further by fixing this bug with a small improvement to the neural network architecture. Changes in facial expression now look much better, as does the smooth morphing of parameters, allowing the generation not just of photos but of videos of realistic-looking, artificially created faces.
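The core of that architectural improvement is careful signal processing: pointwise nonlinearities like ReLU create sharp kinks, i.e. high frequencies that alias onto the coarse feature-map grid and make details "stick" to pixel positions. The fix is to apply the nonlinearity on a finer grid, surrounded by low-pass filters. A rough one-dimensional sketch of the idea, assuming a simple box blur as the low-pass filter (the real network uses properly designed windowed filters):

```python
import numpy as np

def box_lowpass(x, width=3):
    """Crude low-pass filter: a moving average (box blur)."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def naive_nonlinearity(x):
    # Applying ReLU directly on the coarse grid introduces
    # frequencies the grid cannot represent -> aliasing.
    return np.maximum(x, 0.0)

def alias_aware_nonlinearity(x):
    # Idea behind Alias-Free GAN: upsample, apply the nonlinearity
    # on the finer grid, low-pass filter, then downsample again.
    up = np.repeat(x, 2)      # 2x upsample (nearest neighbor)
    up = box_lowpass(up)      # smooth before the nonlinearity
    up = np.maximum(up, 0.0)  # pointwise ReLU on the fine grid
    up = box_lowpass(up)      # suppress frequencies above Nyquist
    return up[::2]            # downsample back to the original grid
```

This is only a conceptual illustration; the published network applies the same upsample-filter-nonlinearity-filter-downsample pattern to 2D feature maps at every layer.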
Like its predecessor, the AI can generate not only faces but also other objects, such as dogs, cats, or landscapes, depending on the training material it is fed. These videos, too, now look better: the neural network can even generate tracking shots of landscapes. The code of Alias-Free GAN is due to be published on GitHub in September; then anyone can experiment with it themselves, provided they have enough computing power. The project was developed on an NVIDIA DGX-1 system with 8 Tesla V100 GPUs, using PyTorch 1.7.1, CUDA 11.0, and cuDNN 8.0.5.
As always, the new Alias-Free GAN technique is nicely explained by Károly Zsolnai-Fehér in the latest Two Minute Papers video: