Adobe MotionStream - real-time control over AI video generation coming?
[18:17 Fri, 17 April 2026 by blip]
Adobe’s research division has once again come up with a rather interesting, though still experimental, AI technology. Called MotionStream, it allows users to interact with AI-generated videos while they are being created — essentially giving live directing instructions — and to control the movement of objects via mouse movements or sliders, as well as change camera angles in real time.
Thanks to its relatively low latency, users get immediate visual feedback: a video can be manipulated while it is still being generated. Generation starts from a text prompt; objects can then be steered by clicking and dragging, and the camera position can be adjusted the same way. Users can also choose which elements should move and which should remain static.
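Conceptually, each drag gesture has to be translated into a per-frame motion signal that the generator can condition on. The following minimal Python sketch assumes a simple linear interpolation of the gesture; the function name and the scheme are illustrative, not taken from Adobe's implementation:

```python
import numpy as np

# Hypothetical sketch: turn a mouse drag into a per-frame trajectory of
# (x, y) target points for the grabbed object. The interpolation scheme
# is an assumption for illustration, not Adobe's actual interface.

def drag_to_trajectory(start_xy, end_xy, n_frames):
    """Linearly interpolate a drag gesture into per-frame target positions."""
    start = np.asarray(start_xy, dtype=np.float32)
    end = np.asarray(end_xy, dtype=np.float32)
    t = np.linspace(0.0, 1.0, n_frames)[:, None]   # shape (n_frames, 1)
    return start + t * (end - start)               # shape (n_frames, 2)

# Dragging an object from (120, 80) to (200, 60) over the next 8 frames:
trajectory = drag_to_trajectory((120, 80), (200, 60), 8)
print(trajectory)  # one (x, y) target per upcoming frame
```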
The model behind MotionStream is designed to capture physical laws and natural motion; in essence, the underlying video generator simulates the world in real time.
The new approach behind MotionStream shows where work with generative video could — or should — be heading: away from delayed rendering and toward real-time interaction with greater speed, responsiveness, and control. A demo screen capture shows, for example, how parts of the image can be “dragged” in a certain direction with the mouse, whereupon the corresponding image object (head, eyebrow, lip, or similar) moves in that direction.
Virtual camera movement can also be influenced in this way. One example shows how part of the image is marked as something that should remain still, causing the framing — the “camera” — to orbit around that point.
MotionStream builds on a new approach to real-time still-image generation. To further accelerate AI video generation, the team split the video creation process into successive segments. Earlier generation models created an entire video before making it available to users: each frame took every other frame into account, so the future depended on the past, but the past also depended on the future. This helped improve generation quality, but, as Zhang, one of the developers, explains, “the universe doesn’t work in a way where you know both the past and the future. We removed that constraint.”
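The difference can be illustrated with attention masks. In the short sketch below (an assumed formulation, not taken from the paper), a bidirectional model lets every frame attend to every other frame, while a causal model hides the future:

```python
import numpy as np

# Illustrative contrast between the two temporal attention patterns.
n = 6  # frames

bidirectional_mask = np.ones((n, n), dtype=bool)    # past AND future visible
causal_mask = np.tril(np.ones((n, n), dtype=bool))  # only the past visible

print(causal_mask.astype(int))
# Row i marks which frames frame i may look at: frames 0..i only.
```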
The result is a method that makes it possible to generate a video in segments, with future frames depending only on what has already been created — known as an “autoregressive” backbone. The first part of the video is already playing while the tool generates the next part in the background; the user is thus shown a generated video in real time as a stream, which is also where the name comes from.
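A streaming loop of this kind could look roughly like the following sketch. The DummyModel is a stand-in, since Adobe has not published an API; the point is the control flow, where each chunk is conditioned only on frames that already exist, and frames are handed to the player as soon as their chunk is ready:

```python
import numpy as np
from typing import Iterator

CHUNK_FRAMES = 8      # frames generated per step (assumed value)
CONTEXT_FRAMES = 16   # how much past the model may look at (assumed value)

class DummyModel:
    """Placeholder for a causal video generator."""
    def generate_chunk(self, prompt: str, context: np.ndarray) -> np.ndarray:
        # A real model would denoise/decode the next frames here,
        # attending only to `context` (the past), never to future frames.
        return np.zeros((CHUNK_FRAMES, 64, 64, 3), dtype=np.uint8)

def stream_video(model: DummyModel, prompt: str,
                 total_frames: int) -> Iterator[np.ndarray]:
    """Yield frames as soon as their chunk is ready, so playback can
    start while later chunks are still being generated."""
    history = np.empty((0, 64, 64, 3), dtype=np.uint8)
    while history.shape[0] < total_frames:
        context = history[-CONTEXT_FRAMES:]   # past frames only
        chunk = model.generate_chunk(prompt, context)
        history = np.concatenate([history, chunk])
        for frame in chunk:                   # hand frames to the player
            yield frame

for i, frame in enumerate(stream_video(DummyModel(), "a dog running", 24)):
    print(f"frame {i} ready for display")
```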
Although Adobe Research has only just begun to draw attention to the MotionStream project, the researchers themselves published their paper on it several months ago. The paper also mentions plans to release MotionStream as open source, but nothing further has happened since then. So it remains unclear whether, and in what form, these new AI video capabilities will become accessible.