Spend time with ChatGPT and other artificial intelligence (AI) chatbots and it doesn't take long for them to spout falsehoods. Described as hallucination, confabulation or just plain making things up, it's now a problem for every business, organisation and high school student trying to get a generative AI system to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs.
"I don't think there's any model today that doesn't suffer from some hallucination," said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2. "They're really just sort of designed to predict the next word," Amodei said. "And so there will be some rate at which the model does that inaccurately." Anthropic, ChatGPT-maker OpenAI and other major developers of AI systems known as large language models say they're working to make them more truthful.
How long that will take — and whether they will ever be good enough to, say, safely dole out medical advice — remains to be seen. "This isn't fixable," said Emily Bender, a linguistics professor and director of the University of Washington's Computational Linguistics Laboratory. "It's inherent in the mismatch between the technology and the proposed use cases." A lot is riding on the reliability of generative AI technology.
The McKinsey Global Institute projects it will add the equivalent of $2.6 trillion to $4.4 trillion to the global economy. Chatbots are only one part of that frenzy, which also includes technology that can generate new images, video, music and computer code. Nearly all of the tools include some language component.