A new crop of nefarious chatbots with names like “BadGPT” and “FraudGPT” is springing up in the darkest corners of the web, as cybercriminals look to tap the same artificial intelligence behind OpenAI’s ChatGPT. Just as some office workers use ChatGPT to write better emails, hackers are using manipulated versions of AI chatbots to turbocharge their phishing emails.
They can use chatbots—some also freely available on the open internet—to create fake websites, write malware and tailor messages to better impersonate executives and other trusted entities. Earlier this year, an employee at a Hong Kong multinational company handed over $25.5 million to an attacker who posed as the company’s chief financial officer on an AI-generated deepfake conference call, the South China Morning Post reported, citing Hong Kong police.
Chief information officers and cybersecurity leaders, already accustomed to a growing spate of cyberattacks, say they are on high alert for an uptick in more sophisticated phishing emails and deepfakes. Vish Narendra, CIO of Graphic Packaging International, said the Atlanta-based paper packaging company has seen an increase in what are likely AI-generated email attacks called spear-phishing, where cyberattackers use information about a person to make an email seem more legitimate.
Public companies in the spotlight are even more susceptible to contextualized spear-phishing, he said. Researchers at Indiana University recently combed through more than 200 large-language-model hacking services sold and advertised on the dark web.
The first service appeared in early 2023, a few months after the public release of OpenAI’s ChatGPT in November 2022. Most dark-web hacking tools use versions of open-source AI models from companies like Meta.