The most interesting TV I’ve watched recently did not come from a conventional television channel, nor even from Netflix, but from TV coverage of parliament. It was a recording of a meeting of the AI in weapons systems select committee of the House of Lords, which was set up to inquire into “how should autonomous weapons be developed, used and regulated”. The particular session I was interested in was the one held on 20 April, during which the committee heard from four expert witnesses – Kenneth Payne, who is professor of strategy at King’s College London; Keith Dear, director of artificial intelligence innovation at the computer company Fujitsu; James Black from the defence and security research group of Rand Europe; and Courtney Bowman, global director of privacy and civil liberties engineering at Palantir UK. An interesting mix, I thought – and so it turned out to be.
Autonomous weapons systems are those that can select and attack a target without human intervention. It is believed (and not just by their boosters) that such systems could revolutionise warfare, being faster, more accurate and more resilient than existing weapons systems. And that they could, conceivably, even limit the casualties of war (though I’ll believe that when I see it).
The most striking thing about the session (for this columnist, anyway) was that, although it was ostensibly about the military uses of artificial intelligence in warfare, many of the issues and questions that arose in the two hours of discussion could equally have arisen in discussions about civilian deployment of the technology. Questions about safety and reliability, for example, or governance and control. And, of course, about regulation.