Deepfakes have long raised concerns in social media, elections and the public sector. But now, with technological advances making artificial intelligence-enabled voice and images more lifelike than ever, bad actors armed with deepfakes are coming for the enterprise. "There were always fraudulent calls coming in.
But the ability of these [AI] models now to imitate the actual voice patterns of an individual giving instructions to somebody over the phone to do something—these sorts of risks are brand new," said Bill Cassidy, chief information officer at New York Life. Banks and financial services providers are among the first companies to be targeted. "This space is just moving very fast," said Kyle Kappel, U.S.
Leader for Cyber at KPMG. How fast was demonstrated earlier this month, when OpenAI showcased technology that can recreate a human voice from a 15-second clip. OpenAI said it would not release the technology publicly until it knows more about the potential risks of misuse.
One concern is that bad actors could use AI-generated audio to game the voice-authentication software that financial services companies use to verify customers and grant them access to their accounts. Chase Bank was recently fooled by an AI-generated voice during an experiment. The bank said that to complete transactions and other financial requests, customers must provide additional information.
Deepfake incidents in the fintech sector increased 700% in 2023 from the previous year, according to a recent report by identity-verification platform Sumsub. Companies say they are working to put more guardrails in place to prepare for an incoming wave of generative AI-fueled attackers. For example, Cassidy said he is working with New York Life's

Read more on livemint.com