OpenAI has begun training an upgraded artificial intelligence (AI) model that, once deployed, will be more advanced than its GPT-4 system.
OpenAI also announced the formation of a new safety and security committee, which will advise the full board on critical safety and security decisions for its projects and operations.
According to the AI startup, the committee's first task is to evaluate and further develop OpenAI's existing processes and safeguards. It will present its recommendations to the board after a 90-day review period.
The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and criticized OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that the two had jointly led.
AI models are prediction systems trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.
The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, who’s the CEO of Quora, and Nicole Seligman, a former Sony general counsel.