A group of OpenAI's current and former workers is calling on the ChatGPT-maker and other artificial intelligence companies to protect whistleblowing employees who flag safety risks about AI technology.
An open letter published Tuesday asks tech companies to establish stronger whistleblower protections so researchers can raise concerns about the development of high-performing AI systems internally and with the public without fear of retaliation.
Former OpenAI employee Daniel Kokotajlo, who left the company earlier this year, said in a written statement that tech companies are “disregarding the risks and impact of AI” as they race to develop better-than-human AI systems known as artificial general intelligence.
“I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” he wrote. “They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood.”
OpenAI said in a statement responding to the letter that it already has measures for employees to express concerns, including an anonymous integrity hotline.
“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” the company's statement said. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities.”