Paul Graham of Y Combinator recently shared an anecdote that perfectly encapsulates the challenge of regulating artificial intelligence (AI). When he asked someone helping the British government with AI regulation what they would regulate, the response was a refreshingly honest, “I’m not sure yet.” As Graham noted, this might be the most intelligent thing anyone has said about AI regulation thus far. AI creates a “pacing problem,” first explained in Larry Downes’ book The Laws of Disruption.
Downes argues that technology changes exponentially, but the corresponding social, economic and legal systems change only incrementally. Regulators are trying to govern Hogwarts with rules written for a Muggle school. Good luck stopping magical mischief with detention and a dress code.
And they are also faced with the Collingridge Dilemma, the regulatory equivalent of being stuck between a rock and a hard place: when a technology is new, its effects are too uncertain to regulate well; by the time those effects become clear, the technology is too entrenched to control. A 2023 paper in Science, Technology & Human Values analysed 50 cases of emerging technologies and found that in 76% of cases, early regulation stifled innovation, while in 82% of cases, late regulation failed to address societal impact adequately. Regulate too early, and you might accidentally outlaw the cure for cancer.
Regulate too late, and you might find yourself in a Black Mirror episode. Governments are aware of the need for regulation, but it is a tough job. A 2022 report by the Belfer Center at Harvard University found that only 18% of US federal agencies have employees with both technical and policy expertise in AI.
A similar study by the AI Now Institute found that only 23% of government agencies across OECD countries have this expertise. This skills shortage is likely just as acute around the world. The EU’s AI Act and the US’s proposed AI