We just published our recent article about generative AI videos, and Google is already coming up with the next interesting AI project.
Neural Dynamic Image-Based Rendering (short and cryptic: DynIBaR) is the name of a technique that makes it possible to generate a tracking shot around an object from a single camera perspective, somewhat like a constrained bullet-time effect. Because the AI model already computes many predictions about the image content, slow motion, image stabilization and synthetic depth of field/bokeh emerge as welcome by-products.
The following video shows the current quality of DynIBaR:
What's interesting here is that this one AI model effectively handles many AI post-production steps at once, steps which we described in our basics article "How best to film for computational postproduction?".
This also opens up far wider possibilities through the directorial interventions it enables, which we had already touched on in our article on generative AI videos. For example, one could now add even more motion dynamics to typical Wes Anderson AI clips with camera movements, quite uncharacteristic of Wes Anderson, of course. For subtle dolly zooms or focus shifts, DynIBaR already seems to work quite well.

DynIBaR, Neural Dynamic Image-Based Rendering
We actually believe that within a few years, filmmaking can and will work via such "modifiers": you generate still images for a film with generative AI as a kind of storyboard, and then animate them via virtual camera movements and control object movements like puppets.
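To make the "modifier" idea a little more tangible, here is a minimal toy sketch in Python (using Pillow) that has nothing to do with DynIBaR itself: it simply fakes a slow virtual dolly-in on a single generated still by cropping an ever-smaller window and scaling it back up. It is a flat 2D approximation with no parallax or newly synthesized views, and the file names are placeholders.

# Toy sketch, not DynIBaR: fake a virtual dolly-in on one still image
# by cropping an ever-smaller centered window and scaling it back up.
# Pure 2D, so no parallax and no newly synthesized views.
from PIL import Image

def fake_dolly_in(src_path, out_path, num_frames=90, max_zoom=1.25):
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    frames = []
    for i in range(num_frames):
        # Zoom factor eases linearly from 1.0 (full frame) to max_zoom.
        z = 1.0 + (max_zoom - 1.0) * i / (num_frames - 1)
        cw, ch = int(w / z), int(h / z)            # crop window size
        left, top = (w - cw) // 2, (h - ch) // 2   # keep the crop centered
        crop = img.crop((left, top, left + cw, top + ch))
        frames.append(crop.resize((w, h), Image.LANCZOS))
    # Save an animated GIF as a quick preview of the virtual camera move.
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=40, loop=0)

# Placeholder file names: any generated storyboard still would do.
fake_dolly_in("storyboard_still.png", "dolly_preview.gif")

A real "modifier" pipeline would of course synthesize genuinely new viewpoints and object motion rather than just re-framing pixels, but the workflow idea is the same: a static frame goes in, a directed camera move comes out.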
Still sound like the distant future? Obviously, it has already begun...