In just-published experiments led by Hirokazu Shirado at Carnegie Mellon University, we found that even very simple forms of AI assistance for drivers, such as auto-steering or auto-braking, eroded social norms of reciprocity on the road. Allowing humans to delegate whether to swerve away from an oncoming car in repeated games of chicken made people subsequently less likely to take turns in giving way, thereby increasing the frequency of crashes when they drove without AI assistance. These effects suggest that AI could have a big impact on the social norms that have evolved over millennia, shaping how we treat each other in all manner of everyday interactions.
Governments cannot afford to ignore the risks. At a minimum, they should evaluate more closely whether AI systems are aligned with human social interests and they should provide for more safety testing. As the Bletchley Declaration signed at the recent AI-safety summit in Britain made clear, innovation must go hand in hand with attention to mitigating risks.
After all, we cannot ask AI to regulate itself, even politely.

Nicholas A. Christakis is the director of the Human Nature Lab at Yale University.

© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com