Artificial Intelligence (AI) is increasingly being exploited by cybercriminals to create highly personalized phishing emails that are difficult for victims to recognize as fraudulent, the New York Post reported.
According to the New York Post, AI tools analyze social media activity to gather information about potential targets, allowing scammers to craft messages that appear to come from trusted sources such as family or friends. This trend has raised serious concerns among cybersecurity experts, who note that traditional email defenses are inadequate against these sophisticated attacks.
Kirsty Kelly, CISO of Beazley, emphasized the increasingly personal nature of these scams, while McAfee warned that the frequency and sophistication of AI-driven phishing attacks are expected to rise sharply, the New York Post reported. Phishing remains a primary method for initiating cyber breaches, with over 90% of successful attacks starting with such messages.
Experts recommend that users remain vigilant by avoiding clicking on links in unsolicited emails and by enhancing account security through two-factor authentication, the New York Post noted. The rise of generative AI in crafting these messages makes such precautions increasingly important.