A query to OpenAI’s ChatGPT is promptly met by a response that is confident, coherent and just plain wrong. In an AI model, such tendencies are usually described as hallucinations. A more informal word exists, however: these are the qualities of a great bullshitter.
There are kinder ways to put it. In its instructions to users, OpenAI warns that ChatGPT “can make mistakes”. Anthropic, an American AI company, says that its LLM Claude “may display incorrect or harmful information”; Google’s Gemini warns users to “double-check its responses”.
The throughline is this: no matter how fluent and confident AI-generated text sounds, it still cannot be trusted. Hallucinations make it hard to rely on AI systems in the real world. Mistakes in news-generating algorithms can spread misinformation.
Image generators can produce art that infringes on copyright, even when told not to. Customer-service chatbots can promise refunds they shouldn’t. (In 2022 Air Canada’s chatbot concocted a bereavement policy, and this February a Canadian court confirmed that the airline must foot the bill.) And hallucinations in AI systems that are used for diagnosis or prescription can kill.
The trouble is that the same abilities that allow models to hallucinate are also what make them so useful. For one, LLMs are a form of “generative” AI, which, taken literally, means they make things up to solve new problems. They do this by producing probability distributions for chunks of characters, or tokens, laying out how likely it is for each possible token in the model’s vocabulary to come next.
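The mechanism can be sketched in a few lines. The toy vocabulary, the logit scores and the prompt below are invented for illustration; real models score tens of thousands of tokens with a neural network rather than a hand-written list. The sketch shows the two steps the passage describes: turning raw scores into a probability distribution (softmax), then sampling the next token from it, which is where an unlikely but fluent-sounding continuation can be drawn.

```python
import math
import random

def softmax(scores):
    # Convert raw scores (logits) into probabilities that sum to 1.
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for the prompt "The sky is ...".
vocab = ["blue", "green", "falling", "a"]
logits = [4.0, 1.0, 0.5, 2.0]
probs = softmax(logits)

def sample_next_token(vocab, probs, seed=None):
    # Draw one token at random, weighted by its probability.
    # Even low-probability tokens are occasionally chosen, which is
    # one way a fluent but wrong continuation gets generated.
    rng = random.Random(seed)
    return rng.choices(vocab, weights=probs, k=1)[0]

token = sample_next_token(vocab, probs, seed=0)
```

Here "blue" gets the bulk of the probability mass, but "falling" retains a small, non-zero chance; sampling rather than always taking the top token is what makes the output varied, and occasionally wrong.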