deepfake incident — reading the takedown rulebook to social media platforms and reminding them of the legal provisions against digital impersonation — must be backed up by vigorous prosecution of the perpetrators. Fakery, deep or shallow, is detectable, and policing it is a matter for enforcers of the law. Deepfakes require considerable computing capacity and a large digital footprint of their targets, which makes it easier to narrow down the search for their origin.
Social media platforms do not have common guard rails against synthetic content created by AI, so protection against it will have to be driven by legislation, which India does not yet have in place.
Lawmakers will bear down on digital platforms to improve content curation and detection technology. The risk from deepfakes is that they could undermine trust in authentic content. So, both the law and its enforcement must be up to speed.
Such laws could also jeopardise the development of AI if licensing requirements become restrictive, the baby-and-bathwater problem. Deepfakes have beneficial uses in communications for individuals, companies and governments. The technology remains a legitimate course of tech advancement.
However, some degree of proliferation control is needed at this stage to protect personal and national security. Again, technology to detect deepfakes is improving and may merit bigger investments.
The alarm over deepfakes is not so much about them becoming an epidemic as about their being used to coerce voter and consumer choice. The defence against these attacks on democracy and capitalism is to be found in governance and markets.