Enterprises face the growing challenge of ‘AI hallucinations’, which could result in loss of business, threats to life and increased costs, amidst increasing adoption of artificial intelligence, experts said.
AI hallucination occurs when an artificial intelligence system produces content or decisions that are inaccurate, biased or inappropriate, generating information and behaviours that veer significantly from reality.
“Any business that uses generative AI in the decision-making process faces the challenge of hallucination. So far, we have seen it used in areas like telecom (networks), financial analysis, HR and ad-tech,” said Pareekh Jain, chief executive of EIIRTrend, a tech-focussed information platform.
AI hallucinations let bias creep into the model, making the decision-making process flawed, he explained. A biased model could, for example, base a financial analysis on the wrong metrics, or skew the applicant-selection process in hiring.
“Hallucinations are a primary risk with generative AI, diminishing trust in its output and usefulness,” Arun Chandrasekaran, distinguished VP analyst at Gartner, added.
Generative AI products such as ChatGPT use large language models that predict the next word in a sequence of text, based on statistical likelihoods derived from training on vast swathes of text from the open internet.
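A minimal Python sketch illustrates the mechanism, using hand-made probabilities rather than anything a real model has learned; the model picks a statistically likely next word, with no notion of whether the result is true:

```python
import random

# Toy next-word model: for each context word, a made-up probability
# distribution over possible next words (illustrative values only).
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "moon": 0.25},
    "cat": {"sat": 0.6, "flew": 0.4},
}

def sample_next(word):
    # Choose the next word at random, weighted by its probability.
    # The model ranks likelihood, not truth.
    candidates = next_word_probs[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Repeated runs on the same prompt can yield different continuations,
# including implausible ones, which is one root of hallucination.
for _ in range(3):
    print("the", sample_next("the"))
```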
“Because these statistical models are probabilistic, there are no guarantees that the answers