[18:19 Wed, 13 August 2025 by blip]
More and more people are falling for AI-generated video clips and images that go viral on the internet. Most recently, trampoline-jumping rabbits, seemingly filmed by a surveillance camera, were viewed and shared millions of times on TikTok. AI generators keep getting better, can sometimes generate videos complete with sound, and are only a click away for many users, since they are integrated directly into social platforms. The flood of fake images: we are already wading knee-deep in it.

![]() 234 million views for fake bunnies

But what does "fake" even mean? This question was recently put to Sam Altman, CEO of OpenAI, the company behind ChatGPT and Sora, among others. His answer is both outrageous and, unfortunately, realistic. Asked how we will be able to tell the difference between what is real and what is not in five years, he says: "my sense is what's going to happen is, it's just going to like gradually converge" - in other words, he expects the two to simply blur into one. And further:
> "The threshold for how real does it have to be to be considered to be real will just keep moving."

In the future, not only beauty but also truth will be in the eye of the beholder - rosy prospects indeed. Altman doesn't seem too worried about it; his argument is that even now, photos from iPhones, for example, are only mostly, but no longer completely, "real", because AI algorithms already process the image data to make the results look better (he speaks here of computational imaging):

![]()

> "There's like a lot of processing power between the photons captured by that camera sensor and the image you eventually see. And you've decided it's real enough or most people decided it's real enough. But we've accepted some gradual move from when it was like photons hitting the film in a camera."

So only a gradual difference between brightening, sharpening and a completely AI-generated image? Sure, Mr. Altman. At this point, a real interview would have included a critical follow-up question. However, the conversation took place as part of Cleo Abram's "Huge If True" podcast series - explicitly "optimistic" episodes about how science and technology can make our future better. Possible negative consequences are largely ignored, which leads to almost absurd contortions elsewhere in the conversation, such as when it is briefly mentioned at the end that some AI researchers warn the technology could destroy us.

What would be the practical consequence of Altman's theory of relativity for our multimedia future? Should fake images only count as "fake" if they feel fake, but as effectively "real" if they could have been real? As far as pure entertainment snippets on TikTok & Co. are concerned, fine, forget it. But if real and fake are no longer categories by which pictorial representations can be measured, there are no limits to manipulation and propaganda. But hey, it'll be alright, won't it? Just don't be negative.
Below is the full conversation; the relevant section starts at 18:35 ("It's 2030. How do we know what's real?").

![]()