
AI synchronizes lip movements with audio in real time

[14:44 Wed, 23 September 2020   by Thomas Richter]

The new deep learning algorithm "Wav2Lip", developed by an Indian research team, can match a speaker's lip movements to the words of any audio recording. It neatly demonstrates the steady progress of machine learning, as the new method delivers significantly better results than older projects. Not only does it work in real time, it is also more universal (and this is the real advance), because it can handle any face, any language and any voice.

The usefulness of such an algorithm for video work is obvious. As the demo video shows, it can adapt the lip movements of a speaker in a video to a dubbed version created in another language, eliminating the mismatch between mouth movements and words that many viewers find distracting. This is practical for dubbed film versions as well as for lip-syncing lectures, press conferences or animated characters into other languages.



And last but not least, this technology could in principle also make it easier to use voices overdubbed in post-production instead of the original sound in scripted productions. Even minor speech errors (which would otherwise render a take unusable) could be corrected by briefly "re-tracking" the lips automatically.

Using deep learning algorithms, it would also be conceivable to offer different language versions of any clip automatically, for example on YouTube. YouTube already provides automatic transcription, and the remaining steps are already possible with existing algorithms: translating the transcribed text into another language, synthesizing speech in the original speaker's voice, and then lip-syncing the video to the new audio.
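The multi-stage pipeline described above could be sketched roughly as follows. This is only an illustrative outline: every function name here is a hypothetical placeholder (each stage would in reality be backed by its own model, with Wav2Lip handling only the final lip-sync step), not a real API.

```python
# Hypothetical sketch of the automatic dubbing pipeline described above.
# None of these functions belong to a real library; each stage stands in
# for a separate model: speech recognition, machine translation,
# voice-cloning TTS, and finally lip-syncing (the Wav2Lip step).

def transcribe(video_path: str) -> str:
    """Stage 1: speech-to-text (what YouTube's auto-captions already do)."""
    return "hello world"  # placeholder transcript

def translate(text: str, target_lang: str) -> str:
    """Stage 2: machine translation of the transcript."""
    return f"[{target_lang}] {text}"  # placeholder translation

def synthesize_voice(text: str, reference_video: str) -> bytes:
    """Stage 3: TTS that mimics the original speaker's voice."""
    return text.encode()  # placeholder audio data

def lip_sync(video_path: str, audio: bytes) -> str:
    """Stage 4: re-render the lip movements to match the new audio."""
    return video_path.replace(".mp4", "_dubbed.mp4")  # placeholder output path

def auto_dub(video_path: str, target_lang: str) -> str:
    """Chain the four stages into one automatic dubbing pass."""
    transcript = transcribe(video_path)
    translated = translate(transcript, target_lang)
    new_audio = synthesize_voice(translated, reference_video=video_path)
    return lip_sync(video_path, new_audio)

print(auto_dub("talk.mp4", "de"))
```

The point of the sketch is only that each stage consumes the previous stage's output, so once all four models exist, the whole chain can run without manual intervention.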

Of course, the technology can also be misused to generate clips in which people appear to say things they never said; the new audio can likewise be generated by a neural network that mimics the real voice.



Anyone can try out for themselves how good the Wav2Lip algorithm is on the project's demo website: upload a short video clip (maximum 20 seconds) of a person speaking plus a speech audio clip, and receive the newly lip-synced clip as output. Those who want to experiment further will find the program code on GitHub. (Thanks to our forum member Ruessel for the news.)
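For reference, the GitHub repository documents a command-line inference script along the following lines. The checkpoint file name, download location and default output path are taken from the repo's README as of writing and should be verified there, since they may change:

```shell
# Clone the Wav2Lip repository and install its Python dependencies.
git clone https://github.com/Rudrabha/Wav2Lip.git
cd Wav2Lip
pip install -r requirements.txt

# A pretrained checkpoint (e.g. wav2lip_gan.pth) must be downloaded
# separately into checkpoints/ -- see the repository README for the link.

# Lip-sync a speaker video to a new audio track.
python inference.py \
    --checkpoint_path checkpoints/wav2lip_gan.pth \
    --face speaker_video.mp4 \
    --audio new_speech.wav
# By default the result is written into the results/ directory.
```

The `--face` argument takes the video (or image) of the speaker, while `--audio` supplies the speech the lips should be matched to.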


Image for this news item:
Wav2Lip schematic

Link: more info at bhaasha.iiit.ac.in

German version of this page: KI synchronisiert Lippenbewegungen mit Audio in Echtzeit

  
























































last update: 9 May 2025 - 08:02 - slashCAM is a project by channelunit GmbH - mail: slashcam@--antispam:7465--slashcam.de