When it comes to artificial intelligence (AI), one of the most commonly debated issues in the technology community is safety—so much so that it helped lead to the ouster of OpenAI co-founder Sam Altman, according to Bloomberg News. And those concerns boil down to one truly unfathomable question: Will AI eventually kill us all? Allow me to set your mind at ease: AI is no more dangerous than the many other existential risks facing humanity, from super-volcanoes to stray asteroids to nuclear war. I am sorry if you do not find that very reassuring.
But it is far more optimistic than the view of someone like the AI researcher Eliezer Yudkowsky, who believes humanity has entered its last hour. In his view, AI will be smarter than us and will not share our goals, and soon enough we humans will go the way of the Neanderthals. Others, meanwhile, have called for a six-month pause in AI progress so that we human beings can get a better grasp of what’s going on.
AI is just the latest instantiation of the many technological challenges humankind has faced throughout history. The printing press and electricity brought both benefits and misuses, but it would have been a mistake to press the ‘stop’ or even the ‘slow down’ button on either. AI worriers like to start with the question: “What is your ‘p’ [probability] that AI poses a truly existential risk?” Since ‘zero’ is obviously not the right answer, the discussion continues: given a non-zero risk of total extinction, should we not be truly cautious? You can then weigh that potential risk against the forthcoming productivity improvements from AI, as one Stanford economist does in a recent study.