YouTube has just announced that it will soon introduce labels for AI-generated content in order to better protect users from AI-driven manipulation. However, this appears to be less about automated AI detection and more about legal protection for YouTube itself: content producers are expected to declare themselves whether an upload contains AI-generated material. Anyone affected by a deepfake attack, or whose face, voice etc. has been used or manipulated without consent, will be able to request removal of the content via the long-established "Privacy Request" channel.
How effective this AI label, which relies primarily on voluntary transparency, will ultimately be remains to be seen. In principle, we welcome the awareness of AI-generated content that it raises. A technical solution that automatically detects AI-generated content would be more effective. But this is precisely where a fundamental AI problem lies: AI can be trained against AI until generated content can no longer be distinguished from real content. Corresponding GAN setups (Generative Adversarial Networks) usually produce near-perfect fakes given sufficient training time.
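This is not just an empirical observation but is built into the GAN formulation itself: training is a minimax game between a generator G and a discriminator D, where D plays exactly the role an automated detector would:

    min_G max_D V(D, G) = E_{x ~ p_data}[ log D(x) ] + E_{z ~ p_z}[ log(1 - D(G(z))) ]

At the optimum of this game, D(x) = 1/2 everywhere, i.e. the best possible discriminator is reduced to coin-flipping. Any detector a platform deployed could, in principle, be slotted in as D and trained against until it loses its edge.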
It also remains to be seen whether the opposite approach is more promising - namely, issuing a technical certificate of authenticity at the moment content is created.
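The core idea behind such a certificate can be sketched in a few lines. This is a deliberately simplified, hypothetical scheme: it binds a tag to the content bytes with a keyed hash, so any later manipulation invalidates the tag. Real provenance systems (e.g. the C2PA standard) use asymmetric signatures so that anyone can verify without holding a secret; the shared `SECRET_KEY` here is an assumption that keeps the sketch stdlib-only.

```python
import hashlib
import hmac

# Assumed secret held by the capture device or issuing authority
# (a real scheme would use a private signing key instead).
SECRET_KEY = b"camera-vendor-secret"

def issue_certificate(content: bytes) -> str:
    """Return a hex tag binding the content to the issuer's key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_certificate(content: bytes, tag: str) -> bool:
    """Recompute the tag; any change to the content breaks the match."""
    expected = issue_certificate(content)
    return hmac.compare_digest(expected, tag)

photo = b"raw image bytes straight from the sensor"
cert = issue_certificate(photo)

print(verify_certificate(photo, cert))            # untouched content -> True
print(verify_certificate(photo + b"edit", cert))  # manipulated content -> False
```

The crucial property is that the certificate travels with authentic content from the point of capture onward, so the burden of proof flips: anything without a valid certificate is treated as unverified by default.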
So perhaps, in the end, it all comes down to a fundamental, "healthy" mistrust of any digitally generated image or sound - and thus to a new, general media literacy. The coming months should hold exciting developments in store here ...