Last week, the administration of United States President Joe Biden issued a lengthy executive order intended to protect citizens, government agencies and companies by ensuring AI safety standards.
The order established six new standards for AI safety and security, along with intentions for ethical AI usage within government agencies. Biden said the order aligns with the government’s own principles of “safety, security, trust, openness.”
“My Executive Order on AI is a testament to what we stand for: safety, security, and trust,” Biden posted on X.
It includes sweeping mandates, such as requiring companies developing “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” to share the results of safety tests with officials, and “accelerating the development and use of privacy-preserving techniques.”
However, the lack of detail accompanying such statements has left many in the industry wondering whether the order could stifle companies from developing top-tier models.
Adam Struck, a founding partner at Struck Capital and an AI investor, told Cointelegraph that the order displays a level of “seriousness around the potential of AI to reshape every industry.”
He also pointed out that it is tricky for developers to anticipate future risks, as the legislation requires, based on assumptions about products that aren’t fully developed yet.
However, he said the administration’s intention to manage the guidelines through chief AI officers and AI governance boards in specific regulatory agencies means that companies building models overseen by those agencies should have a “tight understanding of regulatory frameworks” from each agency.
The government has already released over