State-backed hackers from Russia, China, and Iran have been using tools from Microsoft-backed OpenAI to hone their skills and trick their targets, according to a report published on Wednesday.
Microsoft said in its report it had tracked hacking groups affiliated with Russian military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments as they tried to perfect their hacking campaigns using large language models.
Those computer programmes, often called artificial intelligence, draw on massive amounts of text to generate human-sounding responses.
The company announced the findings as it rolled out a blanket ban on state-backed hacking groups using its AI products.
"Independent of whether there's any violation of the law or any violation of terms of service, we just don't want those actors that we've identified — that we track and know are threat actors of various kinds — we don't want them to have access to this technology," Microsoft vice president for Customer Security Tom Burt told Reuters in an interview ahead of the report's release.
Russian, North Korean and Iranian diplomatic officials didn't immediately return messages seeking comment on the allegations.
China's US embassy spokesperson Liu Pengyu said it opposed "groundless smears