Be careful what you tell a chatbot
LONDON — Be careful what you tell a chatbot. Your conversation might be used to improve the artificial intelligence system that it's built on.
If you ask ChatGPT for advice about an embarrassing medical condition, be aware that anything you disclose could be used to refine the algorithms underpinning OpenAI's models. The same goes if, for example, you upload a sensitive company report to Google’s Gemini to summarize for a meeting.
It's no secret that the AI models underpinning popular chatbots have been trained on enormous troves of information scraped from the internet, like blog posts, news articles and social media comments, so they can predict the next word when coming up with a response to your question.
This training was often done without consent, raising copyright concerns. And, experts say, given the opaque nature of AI models, it's probably too late to remove any of your data that might have been used.
But what you can do going forward is stop your chatbot interactions from being used for AI training. It's not always possible, but some companies give users the option:
Google keeps your conversations with its Gemini chatbot to train its machine learning systems. For users 18 or older, chats are kept by default for 18 months, though that can be adjusted in settings. Human reviewers can also access the conversations to improve the quality of the generative AI models that power Gemini. Google warns users not to tell Gemini any confidential information or give it any data they don't want a human reviewer to see.
To opt out of this, go to the Gemini website and click the Activity tab. Click the Turn Off button and, from the drop-down menu, you can choose to stop saving your conversations.