NEW DELHI: With AI tools becoming more accessible, deepfakes are a rising threat in audio, video and photo formats. But catching the actual perpetrators is next to impossible, because cyber tools let them obfuscate all traces of a fake's origin. Mint explains why.

A deepfake is more sophisticated than basic morphed content.
As a result, deepfakes require more data, typically of facial and physical expressions, as well as powerful hardware and software tools. While this makes them harder to create, generative AI tools are steadily lowering the barrier. That said, truly hard-to-detect deepfakes, such as the recent video targeting actor Rashmika Mandanna, still require a targeted effort, since accurately morphing facial expressions, movements and other video artifacts demands very sophisticated hardware and specialized skills.
Deepfake content is typically made to target a specific individual or a specific cause. Motives include spreading political misinformation, targeting public figures with sexual content, or blackmailing people with large social media followings using morphed content. Given how realistic they look, deepfakes can pass as genuine until they undergo forensic scrutiny.
Most deepfakes also replicate voice and physical movements very accurately, making them even harder to detect. This, coupled with the exponential reach of content on popular social media platforms, makes them difficult to contain once published.
While generative AI has not yet given us tools to make accurate morphed videos and audio clips within seconds, we are getting there. Prisma's photo-editing app Lensa AI used the open-source Stable Diffusion model to morph selfies. Microsoft's Vall-E platform needs only three seconds of a person's voice recording to simulate their speech.
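To show how low the barrier has fallen, here is a minimal, illustrative sketch of Stable Diffusion's image-to-image mode via the open-source Hugging Face diffusers library. It is not drawn from Mint's reporting, and the checkpoint name, file names and parameter values are assumptions for demonstration.

```python
# Illustrative sketch only: restyling a photo with Stable Diffusion's
# image-to-image pipeline from the Hugging Face "diffusers" library.
# The checkpoint, file names and parameter values are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a publicly hosted Stable Diffusion checkpoint (assumed name).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Open and resize the input photo (hypothetical file).
selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

# 'strength' controls how far the output drifts from the input image;
# 'guidance_scale' controls how closely it follows the text prompt.
result = pipe(
    prompt="stylized studio portrait, dramatic lighting",
    image=selfie,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

result.save("morphed_selfie.png")
```

The point is not this particular model but the brevity: a consumer GPU and roughly a dozen lines of freely available code are enough to convincingly alter a photograph.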