Hackers and propagandists are wielding artificial intelligence (AI) to create malicious software, draft convincing phishing emails and spread disinformation online, Canada's top cybersecurity official told Reuters, early evidence that the technological revolution sweeping Silicon Valley has also been adopted by cybercriminals. In an interview this week, Canadian Centre for Cyber Security Head Sami Khoury said that his agency had seen AI being used "in phishing emails, or crafting emails in a more focused way, in malicious code (and) in misinformation and disinformation." Khoury did not provide details or evidence, but his assertion that cybercriminals were already using AI adds an urgent note to the chorus of concern over the use of the emerging technology by rogue actors.
In recent months, several cyber watchdog groups have published reports warning about the hypothetical risks of AI, especially the fast-advancing language processing programs known as large language models (LLMs), which draw on huge volumes of text to craft convincing-sounding dialogue, documents and more. In March, the European police organization Europol published a report saying that models such as OpenAI's ChatGPT had made it possible "to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language." The same month, Britain's National Cyber Security Centre said in a blog post that there was a risk that criminals "might use LLMs to help with cyber attacks beyond their current capabilities." Cybersecurity researchers have demonstrated a variety of potentially malicious use cases, and some now say they are beginning to see suspected AI-generated content in the wild.