OpenAI, known for its advanced AI research and the creation of models like ChatGPT, unveiled a new initiative on October 25, 2023, aimed at addressing the range of risks associated with AI technologies. The initiative establishes a specialized team named "Preparedness", devoted to monitoring, evaluating, anticipating, and mitigating catastrophic risks stemming from AI advancements. This proactive step comes amid growing global concern over the potential hazards of rapidly expanding AI capabilities.
Unveiling the Preparedness Initiative
Under the leadership of Aleksander Madry, the Preparedness team will focus on a broad spectrum of risks posed by frontier AI models, those surpassing the capabilities of current leading models. The core mission revolves around developing robust frameworks for monitoring, evaluating, predicting, and protecting against the potentially dangerous capabilities of these frontier AI systems. The initiative underscores the need to understand and build the infrastructure required to ensure the safety of highly capable AI systems.
Specific areas of focus include threats from individualized persuasion, cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats, along with autonomous replication and adaptation (ARA). Moreover, the initiative aims to tackle critical questions concerning the misuse of frontier AI systems and the potential exploitation of stolen AI model weights by malicious entities.
Risk-Informed Development Policy
Integral to the Preparedness initiative is the crafting of a Risk-Informed Development Policy (RDP). The RDP will outline rigorous evaluations, monitoring procedures, and a range of protective measures.