President Joe Biden is directing the US government to take a sweeping approach to artificial intelligence regulation, his most significant action yet to rein in an emerging technology that has sparked both concern and acclaim.
The lengthy executive order, released on Monday, sets new standards on security and privacy protections for AI, with far-reaching impacts on companies. Developers such as Microsoft Corp., Amazon.com Inc. and Alphabet Inc.’s Google will be directed to put powerful AI models through safety tests and submit the results to the government before their public release.
The rule, which leverages the US government’s position as a top customer for big tech companies, is designed to vet technology that poses potential risks to national security, economic security, or public health and safety.
It will likely only apply to future systems, not those already on the market, a senior administration official said.
The initiative also creates infrastructure for watermarking standards for AI-generated content, such as audio or images, often referred to as “deepfakes.” The Commerce Department is being asked to help with the development of measures to counter public confusion about authentic content. Bloomberg Government earlier reported on a draft of the order.
The administration’s action builds on voluntary commitments to securely deploy AI adopted by more than a dozen companies over the summer at the White House’s request, as well as its blueprint for an “AI Bill of Rights,” a guide for safe development and use.