These attacks are not widespread. The method described requires the attacker to be within a few inches of the victim, or connected over a video call, as the password is typed. Even so, the researchers were able to record a user's keystrokes on a laptop (in this case, an Apple MacBook) and guess what had been typed with up to 95% accuracy.
Deep learning algorithms, a subset of artificial intelligence, make this possible by learning patterns in input data with high efficacy. Security experts say that as AI algorithms become more accessible, such attacks could become a significant threat. At the core of these attacks is the ability of deep learning algorithms to “learn” acoustic patterns from recordings of typing, and then infer what is being typed from new recordings.
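To make the mechanism concrete, here is a minimal, self-contained sketch of the idea in Python with NumPy. It substitutes synthetic tones for real keystroke recordings and a simple nearest-centroid classifier for the deep neural network the researchers trained on keystroke spectrograms; the key set, per-key frequencies, and clip length are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 16_000          # sample rate in Hz (assumed; real attacks used phone or video-call audio)
KEYS = list("abc")   # toy three-key alphabet; the real attack classified far more keys

def synth_keystroke(key, rng):
    """Toy stand-in for a recorded keystroke: each key gets a distinct
    dominant frequency plus noise. Real keystrokes differ acoustically
    because of key position and the laptop chassis."""
    t = np.arange(0, 0.05, 1 / SR)            # 50 ms clip
    freq = 400 + 300 * KEYS.index(key)        # hypothetical per-key resonance
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

def features(clip):
    """Log-magnitude spectrum as a crude acoustic fingerprint. The paper
    used richer spectrogram features and a deep model; this only
    illustrates 'learning acoustic patterns from recordings'."""
    return np.log1p(np.abs(np.fft.rfft(clip)))

# "Training": average fingerprints from labelled recordings of each key.
centroids = {
    k: np.mean([features(synth_keystroke(k, rng)) for _ in range(20)], axis=0)
    for k in KEYS
}

def guess_key(clip):
    """Classify a new recording by its nearest trained fingerprint."""
    f = features(clip)
    return min(KEYS, key=lambda k: np.linalg.norm(f - centroids[k]))

# Evaluate on fresh synthetic recordings.
correct = sum(guess_key(synth_keystroke(k, rng)) == k
              for k in KEYS for _ in range(10))
print(f"accuracy: {correct}/30")
```

The design mirrors the attack's two phases: a training phase that associates labelled keystroke sounds with keys, and an inference phase that guesses keys from new audio alone.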
This is unlike the threat posed by large language models and generative AI. It is closer to “keylogger” attacks, in which cyber criminals install malware on users’ devices and use it to “log” what users are typing. However, according to experts, generative AI, by virtue of the larger data sets it is trained on, could make such deep learning algorithms even more accurate.
The authors of the paper said that mixing upper- and lower-case letters in a word, or combining multiple incomplete words with numbers and symbols, could make it harder for algorithms to predict passwords. Password rules that require a mix of character types could help. Other safeguards include typing on a touchscreen in silent mode, or playing a sound while typing.
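The character-mixing advice above can be sketched in a few lines of Python using the standard library's `secrets` module. This is a generic illustration of the mix-of-character-classes rule, not a tool from the paper; the length and character classes chosen are assumptions.

```python
import secrets
import string

def strong_password(length=16):
    """Draw characters uniformly from letters, digits, and symbols so no
    dictionary word or predictable case pattern survives, then enforce the
    mix-of-character-classes rule described above."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(strong_password())
```

Because every character is drawn independently at random, an attacker who recovers some keystrokes acoustically cannot use language statistics to fill in the rest, which is the weakness the paper's advice targets.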
Generative AI poses a significantly greater threat: attackers already use LLMs and generative AI to create more convincing spear phishing campaigns.

Read more on livemint.com