Officials in the European Union have discussed additional measures that would make artificial intelligence (AI) tools, such as OpenAI’s ChatGPT, more transparent to the public.
On June 5, Vera Jourova, the European Commission’s vice president for values and transparency, told the media that companies deploying generative AI tools with the “potential to generate disinformation” should label such content in an effort to combat “fake news.”
Jourova also said companies that integrate generative AI into their services — such as Microsoft’s Bing Chat and Google’s Bard — need to build “safeguards” to prevent malicious actors from exploiting them for disinformation purposes.
In 2018, the EU created its “Code of Practice on Disinformation,” a self-regulatory framework through which players in the tech industry agree on standards to combat disinformation; the code was strengthened in 2022.
Major tech companies, including Google, Microsoft and Meta Platforms, have already signed onto the EU’s strengthened 2022 Code of Practice on Disinformation. Jourova said those companies and others should report on new safeguards for AI this July.
She also highlighted Twitter’s withdrawal from the code of practice, saying the company should anticipate more scrutiny from regulators.
These statements from the Commission vice president come as the EU prepares its forthcoming AI Act, a comprehensive set of rules governing the public use of AI and the companies deploying it.
With the law not expected to take effect for another two to three years, European officials have urged the creation of a voluntary code of conduct for generative AI developers in the meantime.