OpenAI has effectively dissolved a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems, following the departure of the group’s two leaders, including OpenAI co-founder and chief scientist, Ilya Sutskever.
Rather than maintain the so-called superalignment team as a standalone entity, OpenAI is now integrating the group more deeply across its research efforts to help the company achieve its safety goals, the company told Bloomberg News. The team was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran.
The decision to rethink the team comes as a string of recent departures from OpenAI revives questions about the company’s approach to balancing speed versus safety in developing its AI products. Sutskever, a widely respected researcher, announced Tuesday that he was leaving OpenAI after having previously clashed with Chief Executive Officer Sam Altman over how rapidly to develop artificial intelligence.
Leike revealed his departure shortly after with a terse post on social media. “I resigned,” he said. For Leike, Sutskever’s exit was the last straw following disagreements with the company, according to a person familiar with the situation who asked not to be identified.