In our joint piece in May 2023 (‘How should humans respond to advancing artificial intelligence?’), we explored the power and reach of AI. But we expressed concern about its impact on human creativity, about decision-making unregulated by human conscience, and about whether the human time saved by AI adoption would be put to productive use. We also said that human beings were survivors and would probably outlive the predicted AI-doomsday scenarios.
But the release of the movie Oppenheimer brought back thoughts of an ‘extinction risk’. Its director Christopher Nolan said in a Financial Times interview that AI experts felt they were facing their own Oppenheimer moment. Just as the physicist swung between a desire to advance theoretical physics by building a practical prototype and concern about the bomb’s potential to ‘destroy the world’, AI proponents are excited by its prospects but shudder at its latent ability to wipe out human civilization.
Risk management principles hold that even an exceptionally low probability of a cataclysmic event must be taken seriously, since the expected value of the loss is significant. Extinction risk therefore demands serious consideration even if optimists accord it a low probability of occurrence. But while such improbable yet possible risks to our existence loom in the distance, the proximate risks are here for us to address.
For India, one of these has to do with employment. The country’s booming service exports depend extensively on cheap labour for low-tech jobs such as software code testing, image and content creation, and the interpretation of lab reports. AI has begun taking over these jobs.