New generative-AI tools like OpenAI’s ChatGPT, the fastest-growing consumer internet application of all time, have taken the world by storm.
They have uses in everything from education to medicine and are astonishingly fun to play with. But although current AI systems are capable of spectacular feats, they also carry risks. Europol has warned that they might greatly increase cybercrime.
Many AI experts are deeply worried about their potential to create a tsunami of misinformation that poses an imminent threat to the 2024 American presidential election and, by fostering an atmosphere of total distrust, ultimately to democracy itself. Scientists have warned that these new tools could be used to design novel, deadly toxins. Others speculate that in the long term there could be a genuine risk to humanity itself.
One of the key issues with current AI systems is that they are largely black boxes: often unreliable, hard to interpret, and at risk of getting out of control. The core technology underlying systems like ChatGPT, large language models (LLMs), is known to “hallucinate”, making up false statements. ChatGPT, for example, falsely accused a law professor of involvement in sexual harassment, apparently misled by statistical but irrelevant connections between bits of text that did not actually belong together.
Read more on livemint.com