[15:09 Tue, 30 July 2024 by Thomas Richter]
Runway publicly launched its latest video AI about a month ago, representing a significant quality leap over the previous Generation 2. Until now, however, videos could only be created from text prompts, as images could not be used as input. This changed with today's update: Runway Gen 3 Alpha now also supports Image2Video.
The image AI Midjourney is popular among many creators for generating a starting image like the ones seen in the examples below. Once the ideal starting image is found, it can be animated in Runway Gen 3 with a prompt describing what should happen in the video. This allows specific objects to be moved ("the face should turn forward and smile"), a camera movement to be defined ("the camera should circle around the person"), and other changes during the video to be described ("the factory in the background should explode"). However, multiple attempts are often still necessary to produce the desired movement, as Gen 3 sometimes struggles to implement prompts correctly (though Gen 3 is still officially in the Alpha stage).
This example shows how remarkably well Runway Gen 3 handles the very complex simulation of fluids.
It is only a matter of time until features still missing from Gen 2 also arrive in Gen 3 (perhaps once it leaves the Alpha stage): the ability to generate videos longer than 10 seconds using "Extend Video" and the ability to precisely control camera movement. Runway Gen 3 is one of the current top video AIs, and several comparisons can be found in our news. Here is a direct comparison of the Image-to-Video capabilities of Runway Gen 3, Kling, and Luma AI: