Chatbots are taking away a key line of defence against fraudulent phishing emails by removing glaring grammatical and spelling errors, according to experts.
The warning comes as policing organisation Europol issues an international advisory about the potential criminal use of ChatGPT and other “large language models”.
Phishing emails are a well-known weapon of cybercriminals: they fool recipients into clicking on a link that downloads malicious software, or trick them into handing over personal details such as passwords or PINs.
Half of all adults in England and Wales reported receiving a phishing email last year, according to the Office for National Statistics, while UK businesses have identified phishing attempts as the most common form of cyber-threat.
However, a basic flaw in some phishing attempts – poor spelling and grammar – is being rectified by artificial intelligence (AI) chatbots, which can correct the errors that trip spam filters or alert human readers.
“Every hacker can now use AI that deals with all misspellings and poor grammar,” says Corey Thomas, chief executive of the US cybersecurity firm Rapid7. “The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works.”
Data suggests that ChatGPT, the market leader that became a sensation after its launch last year, is being used for cybercrime, with the rise of large language models (LLMs) finding one of their first substantial commercial applications in the crafting of malicious communications.
Data from cybersecurity experts at the UK firm Darktrace suggests that phishing emails