fake but realistic content with little more than the click of a mouse. This can be fun: a TikTok account on which—among other things—an artificial Tom Cruise wearing a purple robe sings “Tiny Dancer” to (the real) Paris Hilton holding a toy dog has attracted 5.1m followers. It is also a profound change in societies that have long regarded images, video and audio as close to ironclad proof that something is real.
Phone scammers now need just ten seconds of audio to mimic the voices of loved ones in distress; rogue AI-generated Tom Hankses and Taylor Swifts endorse dodgy products online; and fake videos of politicians are proliferating. The fundamental problem is an old one. From the printing press to the internet, new technologies have often made it easier to spread untruths or impersonate the trustworthy.
Typically, humans have used shortcuts to sniff out foul play: one too many spelling mistakes suggests an email might be a phishing attack, for example. Most recently, AI-generated images of people have often been betrayed by their strangely rendered hands; fake video and audio can sometimes be out of sync. Implausible content now immediately raises suspicion among those who know what AI is capable of doing.
The trouble is that the fakes are rapidly getting harder to spot. AI is improving all the time, as computing power and training data become more abundant. Could AI-powered fake-detection software, built into web browsers, identify computer-generated content? Sadly not.