Ilya Sutskever, who left OpenAI in May, revealed on Wednesday via social media that he has established Safe Superintelligence Inc. alongside co-founders Daniel Gross and Daniel Levy.
The new venture will focus exclusively on the safe creation of "superintelligence": AI systems that surpass human intelligence.

In a public statement, Sutskever and his co-founders emphasized their commitment to avoiding the distractions of management overhead and product cycles. They noted that their business model is designed to insulate safety and security work from short-term commercial pressures.
Safe Superintelligence is based in Palo Alto, California, and Tel Aviv, where the founders plan to draw on strong local networks to recruit top-tier technical talent.

The announcement follows Sutskever's role in last year's failed attempt to oust OpenAI CEO Sam Altman, an episode that exposed internal conflict at OpenAI over the balance between business pursuits and AI safety priorities. Sutskever has since expressed regret over the boardroom upheaval. At OpenAI, he co-led a team working on the safe advancement of artificial general intelligence (AGI), AI systems with capabilities beyond human intelligence.
Upon his departure, Sutskever hinted at a "very personally meaningful" project whose details remained undisclosed until now. Jan Leike, his team co-leader at OpenAI, resigned shortly after Sutskever's exit, criticizing the company for prioritizing product development over safety.
In response, OpenAI formed a safety and security committee, though it is composed predominantly of internal members. Safe Superintelligence Inc. represents Sutskever's attempt to address these safety concerns by making the safe development of superintelligence his company's sole focus.