Gemini Advanced with Google Workspace or Gemini API, the chatbot could inadvertently divulge personal data, including passwords. The flaw was exploited by providing the chatbot with a prompt to conceal a passphrase. While the chatbot remained silent when directly queried about the passphrase, it readily disclosed the information when presented with an indirect prompt, such as requesting foundational instructions in a markdown code block.
Furthermore, the Gemini chatbot is susceptible to generating misinformation or malicious content, as highlighted in the findings. This poses a significant risk to users who rely on the chatbot for accurate information and assistance. Acknowledging these concerns, Google stated that it is actively working to address the issues with the chatbot's functionality.
According to a report by The Hacker News, Google emphasized its commitment to safeguarding users from vulnerabilities by conducting rigorous testing exercises and training its models to defend against adversarial behaviors like prompt injection and jailbreaking. Additionally, the company is dedicated to mitigating the spread of misleading information generated by the Gemini chatbot. The emergence of these security flaws adds to existing concerns over the credibility of AI-powered tools developed by Google.
Previously, the company faced controversy surrounding its image generation tool, leading to the suspension of its services. Google is speculated to be working on an improved version of the tool to address these concerns. As users increasingly rely on AI tools for various tasks, ensuring their security and reliability remains paramount.
Read more on livemint.com