[12:02 Sun, 18 July 2021 by blip]
The question of where authenticity ends and pure storytelling begins in documentary film has long been a topic of debate, and not just since images and sound could be manipulated digitally. But as the tools of manipulation keep improving, the question of which of these means are harmless in documentary storytelling, and when a line is crossed, is becoming ever more pressing.
However, it turns out that not all of the voice-overs in the film were actually spoken by Bourdain himself: among them are three "audio deepfakes", created specifically for the film by an AI that was trained on over 10 hours of audio material until it could imitate Bourdain's voice deceptively well. This was noticed, among others, by a reporter from the New Yorker, who was surprised to hear Bourdain read out an e-mail he had written himself; the director also speaks openly about it.

Neville emphasizes that he did not put any words into Bourdain's mouth that Bourdain had not written himself; for dramaturgical reasons, he simply wanted to be able to use some quotations as "original sound". He also says he received the green light from Bourdain's widow and his executor. So in this case, one might think, no great harm was done.

And yet one gets the feeling that documentary film has -- once again? -- lost its innocence. After all, original recordings were constructed that never existed as such; moreover, a voice automatically adds emotion to a statement, and one cannot know whether that emotion was present in the same way in the written passage. Indeed, Neville says it was not easy to decide on a pitch, because Bourdain's voice also changed over the years. Last but not least, every rule violation stretches the realm of the possible: if it is acceptable to have an AI read out a protagonist's written sentences, it no longer seems far-fetched to tweak the sentence structure a little, or to combine two sentences into one, just as the timing requires. Soon it may even become possible to pair the fake voice with an artificially generated image, simulated facial expressions included.

The truthfulness of a documentary has always depended to a large extent on the integrity of the person who makes it. But the technical temptations are undoubtedly growing.
German version of this page: Deep ja, Fake nein? Dokumentarfilmer läßt O-Töne von KI einsprechen