This is not the first time that AI has been used to compute intermediate images from a sequence of photographs. However, the quality now achieved by ADOP (Approximate Differentiable One-Pixel Point Rendering) is so good that the paper made it onto the popular YouTube channel Two Minute Papers.
In fact, the visual impression of the AI interpolation with ADOP is hardly distinguishable from reality, unless the program is confronted with particularly hard cases such as reflective surfaces or objects that move between input images. A slight flickering of surfaces is also occasionally noticeable, but hardly anyone watching the playground video would suspect that it was generated from a sequence of still images.
It is also worth noting that the work was done entirely at Friedrich-Alexander-Universität Erlangen-Nürnberg, specifically at its Chair of Computer Graphics in Erlangen. Skimming the research output of the three authors (Darius Rückert, Linus Franke and Marc Stamminger) there, ADOP appears to have been just one project among many.
If you have some compiler and AI experience, you can get the project running yourself: it is available on its GitHub page as a C++/CUDA/libTorch project. A fast Nvidia GPU should be sufficient for acceptable training times.
On the GitHub page you will also find an experimental VR viewer for the interpolated scene, as well as a tool that generates HDR scenes if the input image sequences were captured at different exposures.
As you can see, AI opens up a wide field of applications here beyond frame interpolation, and we will surely encounter it in various commercial products in the coming years. "Computational cinematography" is certainly here to stay...