The White House said Friday that OpenAI and other companies in the artificial intelligence race have committed to making their technology safer, with features such as watermarks on fabricated images. "These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI," the White House said in a release.
Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI were to join US President Joe Biden later Friday to announce the commitments, which include developing "robust technical mechanisms" such as watermarking systems to ensure that users know when content is AI-generated, according to a White House official. Worry that imagery or audio created by artificial intelligence will be used for fraud and misinformation has ramped up as the technology improves and the 2024 US presidential election gets closer.
Ways to tell when audio or imagery has been generated artificially are being sought to prevent people from being duped by fakes that look or sound real. "They're committing to setting up a broader regime towards making it easier for consumers to know whether content is AI-generated or not," the White House official said.
"There is technical work to be done, but the point here is that it applies to audio and visual content, and it will be part of a broader system." The goal is for it to be easy for people to tell when online content is created by AI, the official added. The companies' commitments also include independent testing of AI systems for risks related to biosecurity, cybersecurity, and "societal effects," according to the White House official.