[19:12 Thu, 6. February 2025 by Thomas Richter]
Generative video AI has made significant progress, but it still often struggles to depict the movements of humans and objects in a physically realistic way, i.e., correctly and causally. Meta has now introduced a new model, VideoJAM, that addresses these issues: according to Meta's researchers, unlike competing video AIs, it does not prioritize rendering quality over motion during training. VideoJAM's superiority over currently competing models is demonstrated through a qualitative comparison with leading AI models (the proprietary Sora, Kling, and Runway Gen3) and the base model from which VideoJAM was fine-tuned (DiT-30B), using representative prompts.



