[12:09 Sat, 10 January 2026 by Rudi Schmidts]
LTX-2 from Lightricks aims to be the first open-source AI model for generative video creation that combines all modern state-of-the-art core features in one model: synchronized audio and video output, high fidelity, multiple rendering modes, production-ready outputs, API access, and open access to weights and training code.

![Lightricks LTX-2 is now freely available as an open-source AI video model]()

Numerous model variants are already available online that further compress the model through weight reduction/quantization, though with compromises in capabilities and quality. This is interesting because it allows very cost-effective video prototyping at home, followed by generating the successful clips in high quality in the cloud.

The developers also prioritized day-one support for ComfyUI, which means experienced users can get started right away without major hurdles. Despite being open source, there is already considerable infrastructure around the model, such as documented API calls and a prompting guide. In ComfyUI LTX-2 pipelines, the "enhance_prompt" parameter automatically improves the prompt before generation.

Anyone who wants to fine-tune the model for their own projects can find documentation and tips for LoRA training at github.com/Lightricks/LTX-2/blob/main/packages/ltx-trainer/README.md.

The demos for the open-source release already look quite convincing in parts: as a video foundation model, LTX-2 sets the bar significantly higher for open-source AI at the start of the year. And when you consider what can now be conjured from an (admittedly quite expensive) gaming graphics card for video, one can be quite amazed. The development of new possibilities in AI videography, even if much of it is just fluff, remains fascinating to watch.
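As a minimal sketch of how such a generation call might be parameterized, the snippet below assembles a request payload that includes the "enhance_prompt" switch mentioned above. Note that the function name, endpoint schema, field names, and default values here are illustrative assumptions, not the official LTX-2 API; only the enhance_prompt parameter itself comes from the article, so consult the official documentation for the real interface.

```python
# Hypothetical sketch: building a text-to-video request payload for an
# LTX-2 pipeline. All field names and defaults below are assumptions for
# illustration -- only "enhance_prompt" is taken from the article.
import json


def build_ltx2_request(prompt: str,
                       resolution: str = "1216x704",
                       num_frames: int = 121,
                       enhance_prompt: bool = True) -> str:
    """Return a JSON payload for a (hypothetical) LTX-2 generation call.

    enhance_prompt=True asks the pipeline to rewrite the prompt
    automatically before generation, as described in the article.
    """
    payload = {
        "prompt": prompt,
        "resolution": resolution,
        "num_frames": num_frames,
        "enhance_prompt": enhance_prompt,
    }
    return json.dumps(payload)


# Example: a short prototyping request with automatic prompt enhancement
req = build_ltx2_request("a drone shot over a foggy coastline at dawn")
```

The idea of the two-stage workflow described above would then be to send cheap, low-resolution variants of such requests to a local quantized model first, and only re-render the keepers at full quality in the cloud.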


