We humans are a sceptical bunch. When new technologies emerge, we view them with apprehension, worrying about their potential negative impact on our lives and future. However, as they reveal their wonders, we often embrace them without question, placing our trust in their ‘capabilities’ without fully considering the consequences.
Perceptions of artificial intelligence (AI) vary greatly. Some view AI as a threat to the future of humanity, while others see it as a transformative force with the capacity to resolve pressing human problems. While there is no single notion of what AI is, it is useful to think of it as a set of computer algorithms that can perform tasks otherwise done by humans.
AI earns human trust by performing well at tasks suited to its strengths, particularly those governed by clear-cut rules and fed by abundant data. However, this trust can lead to its deployment, often for cost-saving reasons, in critical functions ill-suited to those strengths, a problem compounded by inadequate, outdated or irrelevant data. This mismatch is one reason AI ‘hallucinations’ are so worrying: AI bots are programmed to offer guidance even when the accuracy of their answers warrants little confidence.
They may even fabricate facts or present arguments that sound plausible but are judged flawed or incorrect by experts. The danger lies in AI tools making false or harmful recommendations. The widespread adoption of AI has thus sparked concerns over transparency, accountability and ethics.