A recent investigation by researchers at Indiana University Bloomington has uncovered a botnet powered by ChatGPT, the AI language model developed by OpenAI, being used to promote cryptocurrency scams on X (formerly known as Twitter).
The botnet – dubbed Fox8 after its connection to crypto-related websites – comprised 1,140 accounts that used ChatGPT to generate and post content and to engage with other posts. The auto-generated content aimed to entice unsuspecting users into clicking links that led to crypto-hyping websites.
The researchers detected the botnet's activity by identifying a specific phrase, "As an AI language model…," which ChatGPT occasionally uses in response to certain prompts.
This led them to manually scrutinize accounts they suspected were operated by bots. Despite the relatively unsophisticated methods employed by the Fox8 botnet, it managed to publish seemingly convincing messages endorsing cryptocurrency sites, illustrating the ease with which AI can be harnessed for scams.
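The detection heuristic the researchers describe boils down to flagging posts that contain self-revealing phrases large language models sometimes emit. A minimal sketch of that idea is below; the phrase list, function name, and sample tweets are illustrative assumptions, not taken from the study itself.

```python
# Minimal sketch of phrase-based detection of likely LLM-generated posts.
# The phrase list and sample data are illustrative, not from the study.

SELF_REVEALING_PHRASES = [
    "as an ai language model",          # the phrase the researchers searched for
    "i'm sorry, but i cannot",          # hypothetical additional refusal phrase
]

def looks_auto_generated(text: str) -> bool:
    """Return True if the text contains a phrase that LLMs often emit."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SELF_REVEALING_PHRASES)

# Candidate posts to screen; accounts that match would then be
# manually reviewed, as the researchers did.
tweets = [
    "As an AI language model, I cannot express opinions on crypto prices.",
    "Check out this great new token!",
]
flagged = [t for t in tweets if looks_auto_generated(t)]
```

As the article notes, this only catches careless operators: a botnet that strips or avoids such phrases would pass this filter entirely, which is why the matched accounts still required manual review.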
Micah Musser, an expert in AI-driven disinformation, believes that this discovery might only scratch the surface of a larger issue, given the popularity of large language models and chatbots.
“This is the low-hanging fruit,” Musser said in an interview with WIRED. “It is very, very likely that for every one campaign you find, there are many others doing more sophisticated things.”
OpenAI's usage policy explicitly prohibits the use of its AI models for scams and disinformation. The researchers stress that a well-configured botnet of this kind would be far harder to identify: it could evade detection and manipulate recommendation algorithms to spread disinformation more effectively.
Filippo Menczer, a professor at Indiana University Bloomington, co-authored the research.
Read more on cryptonews.com