Style Transfer was one of the first deep learning applications to signal to the world that artificial intelligence would change a great deal in the coming years. Style Transfer lets you alter an image or video by showing a second image to an AI, whose style is then used to redraw the first.
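To make the basic idea concrete, here is a minimal sketch of classic optimization-based style transfer (in the spirit of the original Gatys et al. approach) in PyTorch. The file names, layer indices and loss weighting are illustrative assumptions, not taken from any specific paper mentioned here:

```python
# Minimal sketch of classic (Gatys-style) neural style transfer in PyTorch.
# Layer choices and the style weight are common conventions, picked here
# for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("content.jpg")  # the image to be redrawn
style = load_image("style.jpg")      # the image providing the style

# A pretrained VGG serves as a fixed feature extractor.
# (ImageNet input normalization is omitted here for brevity.)
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv layers used for style statistics
CONTENT_LAYER = 21                 # conv layer used for content comparison

def features(x):
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
        if i >= max(STYLE_LAYERS):
            break
    return style_feats, content_feat

def gram(f):
    # The Gram matrix captures correlations between feature channels,
    # which is what "style" means in this formulation.
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

style_grams = [gram(f) for f in features(style)[0]]
content_feat = features(content)[1]

# Optimize the pixels of the output image directly.
output = content.clone().requires_grad_(True)
opt = torch.optim.Adam([output], lr=0.02)

for step in range(300):
    s_feats, c_feat = features(output)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_grams))
    content_loss = F.mse_loss(c_feat, content_feat)
    loss = 1e6 * style_loss + content_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the output pixels are optimized from scratch for every single image, this classic formulation is far too slow for video, let alone interactive use.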
The first generation of these algorithms created still images this way, in the styles of famous artists such as van Gogh. Maintaining a style consistently over a longer period of time, however, is an even greater challenge.
But this has become a marginal problem: typical scene lengths of 10-20 seconds at 24 fps are (depending on the difficulty of the material, of course) rarely an issue anymore.
So it is time to ignite the next stage, and some researchers have now turned to interactivity and real-time operation. Real time is a problem because typical deep learning training phases are still very time-consuming, as they require a great deal of computation. Training a corresponding model for video used to take at least several hours on a midrange GPU, and style transfer could usually only begin once the model was fully trained.
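Expressed as code, that conventional train-then-apply pipeline looks roughly like this. The network architecture and checkpoint name are hypothetical stand-ins, not the actual model from the research discussed here:

```python
# Conventional workflow: a feed-forward stylization network is trained
# offline (hours on a GPU), then applied frame by frame. "StylizeNet" and
# "stylize.pt" are hypothetical placeholders for such a trained model.
import torch
import torch.nn as nn

class StylizeNet(nn.Module):
    # Stand-in architecture; real stylization networks typically use
    # residual blocks, instance normalization, etc.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 9, padding=4),
        )

    def forward(self, x):
        return self.body(x)

model = StylizeNet().eval()
# The hours of offline training happened before this line.
model.load_state_dict(torch.load("stylize.pt"))

@torch.no_grad()
def stylize_clip(frames):
    # frames: tensor of shape (N, 3, H, W), stylized one frame at a time
    return torch.stack([model(f.unsqueeze(0)).squeeze(0) for f in frames])
```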
A new paper shows that "Learning while Painting" is possible in principle: an artist can paint a style template and, while doing so, receives an interactive, real-time preview on running video. This should open up completely new workflows for talented artists, the limits of which have yet to be explored.
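How such a loop might be structured, purely as our illustrative guess at the workflow rather than the paper's actual method: training steps toward the artist's evolving canvas are interleaved with stylizing and displaying the current video frame.

```python
# Illustrative sketch of an interleaved "learn while painting" loop.
# get_canvas(), next_frame() and show() are hypothetical hooks into a
# painting UI and video player; the paper's actual implementation differs.
import torch
import torch.nn.functional as F

def training_step(model, opt, canvas, frame):
    # One cheap optimization step nudging the model toward the artist's
    # current canvas. Real systems use perceptual/style losses here;
    # plain MSE is just a placeholder.
    opt.zero_grad()
    loss = F.mse_loss(model(frame), canvas)
    loss.backward()
    opt.step()

def interactive_loop(model, opt, get_canvas, next_frame, show):
    while True:
        canvas = get_canvas()    # artist may have painted new strokes
        frame = next_frame()     # current frame of the running video
        training_step(model, opt, canvas, frame)
        with torch.no_grad():
            show(model(frame))   # real-time preview with the updated model
```

The key design idea is that the model never has to be "finished": every pass through the loop both improves the model a little and renders a preview, so the artist sees the effect of each new brushstroke almost immediately.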
And that is not even the end of the story, because it is to be expected that in the future, shapes will be retouchable interactively in this way: you paint your desired changes into a single frame, and they are transferred to the entire clip in real time.
The only thing that surprises us about these applications is that such innovations still come primarily from independent research. We would actually expect companies like Adobe or Blackmagic to adopt such killer features to enhance their software suites and set them apart from the competition. Hopefully it won't be too long before we can finally use similar features in Premiere or Resolve...