By Jeff Mason and Alexandra Alper
(Reuters) - U.S. President Joe Biden is seeking to reduce the risks that artificial intelligence (AI) poses to consumers, workers, minority groups and national security with a new executive order signed on Monday.
The order, which he is expected to sign at the White House, requires developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government, in line with the Defense Production Act, before they are released to the public.
It also directs agencies to set standards for that testing and address related chemical, biological, radiological, nuclear, and cybersecurity risks.
The move is the latest step by the administration to set parameters around AI as it makes rapid gains in capability and popularity in an environment of, so far, limited regulation. The order prompted a mixed response from industry and trade groups.
Bradley Tusk, CEO at Tusk Ventures, a venture capital firm with investments in tech and AI, welcomed the move. But he said tech companies would likely shy away from sharing proprietary data with the government over fears it could be provided to rivals.
"Without a real enforcement mechanism, which the executive order does not seem to have, the concept is great but adherence may be very limited," Tusk said.
NetChoice, a national trade association that includes major tech platforms, described the order as an "AI Red Tape Wishlist" that will end up "stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation."
The new order goes beyond voluntary commitments made earlier this year by AI companies.