Do you know if the person talking to you on Zoom or a dating app is a real human? Or is it an AI agent pretending to be one? This is becoming an increasing concern as artificial intelligence (AI) advances in its ability to mimic human behaviour. Financial fraud in India is rising rapidly; now, fraudsters can potentially deploy AI agents to defraud thousands simultaneously through automated AI-generated calls and convincing human-like conversations.
The key question in the age of AI is: How do you prove you are human online? The traditional methods of distinguishing between humans and AI bots online, like CAPTCHA tests that require users to identify blurry letters, are becoming ineffective; modern AI systems can already solve them. Meanwhile, websites that demand extensive personal information for verification compromise user privacy.
We need a solution that balances security with privacy protection, ensuring that we interact with real people without asking them to disclose personal data. I co-authored a paper with a brilliant group of AI researchers from OpenAI, Harvard, MIT, Microsoft and other institutions titled “Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who is Real Online” (https://bit.ly/4k3TKVc). In it, we explore the concept of ‘Personhood Credentials’ (PHCs): digital credentials designed to verify, in a non-repudiable manner, that an online user is a real person rather than an AI bot at the point of a transaction or an interaction.
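To make the idea concrete, here is a deliberately simplified sketch of the issue-then-verify split behind a personhood credential. Everything in it is a hypothetical illustration, not the paper's actual protocol: real PHC proposals rely on cryptography such as blind signatures or zero-knowledge proofs, whereas this toy uses a shared-secret HMAC held by an imagined trusted issuer. The point it shows is that a service can check "this user was verified as a real person" via a per-service pseudonym, without ever learning the user's identity.

```python
import hmac
import hashlib
import secrets

# Toy model (assumption, not the paper's design): a trusted issuer verifies a
# person once, then mints a per-service pseudonym plus a tag over it. The
# service learns only "verified person", never a global identity.
ISSUER_KEY = secrets.token_bytes(32)  # issuer's private signing key

def issue_credential(service_id: str) -> tuple[str, str]:
    """Issuer side: mint an unlinkable pseudonym for one service and
    authenticate it with a tag bound to that service."""
    pseudonym = secrets.token_hex(16)  # different per service, so unlinkable
    msg = f"{service_id}:{pseudonym}".encode()
    tag = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return pseudonym, tag

def verify_credential(service_id: str, pseudonym: str, tag: str) -> bool:
    """Issuer-side check: was this pseudonym issued to a verified person
    for this particular service?"""
    msg = f"{service_id}:{pseudonym}".encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# A dating app confirms personhood without learning who the user is;
# the same credential is useless at a different service.
pseudonym, tag = issue_credential("dating-app.example")
print(verify_credential("dating-app.example", pseudonym, tag))  # True
print(verify_credential("video-call.example", pseudonym, tag))  # False
```

The per-service pseudonym is the privacy-preserving piece: even if two services compare notes, the credentials they hold cannot be linked back to one person.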
Read more on livemint.com