With the launch of Firefly and its subsequent integration into the first official Photoshop beta, Adobe has opened the door to generative AI for all Creative Cloud users.
The new Generative Fill feature is based on a generative AI model that can plausibly fill masked areas in an image. As with Stable Diffusion, objects can not only be deleted or changed, but an image can also be extended beyond its frame - the Firefly AI visually extrapolates the existing image motif.
With moving images, however, such an extension usually does not work, because different content is invented for every frame of a video, which destroys the temporal consistency of the footage.
What does work, however, is a fake frame extension as long as the camera position is static - that is, the shot was made on a tripod. As long as no objects such as birds or shadows enter or leave the original frame, a static area around the video can be "invented". Sounds a bit cryptic? Then perhaps this video will illustrate it:
So if you follow a few rules when shooting, you can use this trick to expand your "set" very impressively and inexpensively. New videos are currently popping up on the web, for example on Twitter here or here, which show off the possibilities.
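Technically, the final compositing step amounts to centering the original clip on top of the AI-extended still. Here is a minimal sketch of that step using ffmpeg from Python, assuming you have exported one frame of the tripod shot, outpainted it in Photoshop with Generative Fill, and saved the enlarged canvas; all file names are hypothetical:

```python
import subprocess

# Hypothetical file names. Assumed workflow: grab one frame from the
# tripod shot (e.g. ffmpeg -i original.mp4 -frames:v 1 frame.png),
# outpaint it in Photoshop with Generative Fill, and save the
# enlarged canvas as extended_bg.png.
BACKGROUND = "extended_bg.png"  # AI-extended still, new canvas size
ORIGINAL = "original.mp4"       # static tripod shot
OUTPUT = "extended.mp4"

subprocess.run([
    "ffmpeg", "-y",
    "-loop", "1", "-i", BACKGROUND,  # loop the still as a video source
    "-i", ORIGINAL,
    # Center the original clip on the outpainted background;
    # shortest=1 ends the looped image when the clip ends.
    "-filter_complex",
    "[0:v][1:v]overlay=(W-w)/2:(H-h)/2:shortest=1[v]",
    "-map", "[v]", "-map", "1:a?",   # keep original audio, if present
    "-c:a", "copy",
    OUTPUT,
], check=True)
```

The same overlay can of course be built in any NLE; the point is simply that the border pixels around the original frame never change.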
Further tricks are conceivable for refinement, such as added noise, virtual zooms, or other camera movements. The latter can increase the perceived authenticity despite the missing parallax shift, as long as the effect is applied very slowly and subtly. It also helps that Photoshop itself already has a timeline for video, in which you can do some rudimentary editing.
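As a sketch of such a refinement pass (again via ffmpeg, with purely illustrative parameter values), the following applies a very slow push-in with the zoompan filter and a touch of temporal grain with the noise filter, so the generated border blends better with the camera footage:

```python
import subprocess

SRC = "extended.mp4"          # result of the compositing step above
DST = "extended_refined.mp4"

# Slow push-in of ~6% over the clip plus subtle temporal grain.
# The zoom step (0.00012 per frame) and cap (1.06) are starting
# values to tweak; s= must match the output resolution (1920x1080
# assumed here) and zoompan's fps option the source frame rate.
vf = (
    "zoompan=z='min(1.0+0.00012*in,1.06)'"
    ":x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'"
    ":d=1:fps=25:s=1920x1080,"
    "noise=alls=6:allf=t"
)

subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-vf", vf, "-c:a", "copy", DST],
    check=True,
)
```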