OpenAI Launches Preparedness Team to Address Catastrophic AI Risks

Luisa Crawford  Oct 29, 2023 20:30 UTC

2 Min Read

OpenAI, known for its advanced AI research and the creation of models like ChatGPT, unveiled a new initiative on October 25, 2023, aimed at addressing the wide range of risks associated with AI technologies. The initiative establishes a specialized team named "Preparedness", devoted to monitoring, evaluating, anticipating, and mitigating catastrophic risks stemming from AI advancements. This proactive step comes amid growing global concern over the potential hazards of increasingly capable AI systems.

Unveiling the Preparedness Initiative

Under the leadership of Aleksander Madry, the Preparedness team will focus on the broad spectrum of risks posed by frontier AI models, those that surpass the capabilities of today's leading models. Its core mission is to develop robust frameworks for monitoring, evaluating, predicting, and protecting against the potentially dangerous capabilities of these frontier systems. The initiative underscores the need to understand highly capable AI systems and to build the infrastructure required to ensure their safety.

Specific areas of focus include individualized persuasion, cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats, and autonomous replication and adaptation (ARA). The team also aims to tackle critical questions about how frontier AI systems could be misused and how malicious actors might exploit stolen AI model weights.

Risk-Informed Development Policy

Integral to the Preparedness initiative is the creation of a Risk-Informed Development Policy (RDP). The RDP will outline rigorous capability evaluations, monitoring procedures, and a range of protective measures for frontier models, and it will establish a governance structure for accountability and oversight throughout the development process. The policy will complement OpenAI's existing risk mitigation work, helping ensure the safety and alignment of new, highly capable AI systems both before and after deployment.

Engaging the Global Community

To surface less obvious concerns and identify talent, OpenAI has also launched an AI Preparedness Challenge focused on preventing catastrophic misuse of AI technology. The challenge offers $25,000 in API credits to up to 10 top submissions. It is part of a broader recruitment drive for the Preparedness team, which is seeking exceptional talent from diverse technical domains to contribute to the safety of frontier AI models.

The initiative also follows a voluntary commitment made in July by OpenAI, alongside other AI labs, to foster safety, security, and trust in AI, and it echoes the focal points of the UK AI Safety Summit.

Growing Concerns and Previous Initiatives

The inception of the Preparedness team is not an isolated move. It builds on OpenAI's earlier commitments to form dedicated teams to tackle challenges posed by AI. This acknowledgment of potential risks also fits into a broader narrative, including an open letter published in May 2023 by the Center for AI Safety urging the community to prioritize mitigating extinction-level risks from AI alongside other global existential threats.



