OpenAI Launches Safety Fellowship to Tackle AI Alignment Research
OpenAI is opening its doors to outside researchers with a new Safety Fellowship program aimed at advancing independent work on AI alignment and safety challenges. Applications are now open, with a May 3 deadline.
The five-month program runs from September 14, 2026 through February 5, 2027, targeting researchers, engineers, and practitioners who want to tackle safety questions affecting both current and future AI systems. OpenAI has partnered with Constellation to provide workspace in Berkeley, though remote participation is an option.
What OpenAI Wants
The company outlined priority research areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. They're specifically seeking work that's "empirically grounded, technically strong, and relevant to the broader research community."
Fellows won't get internal system access—a notable limitation—but will receive API credits, compute support, a monthly stipend, and mentorship from OpenAI staff. The expectation is clear: produce something tangible by program's end, whether that's a research paper, benchmark, or dataset.
Who Should Apply
OpenAI is casting a wide net on backgrounds. Computer science is obvious, but they're also welcoming applicants from social science, cybersecurity, privacy, and human-computer interaction fields. The company explicitly stated they "prioritize research ability, technical judgment, and execution over specific credentials."
Letters of reference are required. Successful applicants will be notified by July 25.
The Bigger Picture
This fellowship arrives as AI safety concerns have moved from academic debate to mainstream regulatory discussion. OpenAI has faced criticism over the years for allegedly deprioritizing safety research in favor of capability development—a tension that led to high-profile departures from its safety team.
The program represents an attempt to cultivate external safety research talent while potentially deflecting some of that criticism. Whether it signals a genuine shift in priorities or serves primarily as an optics play remains to be seen.
For researchers who want to pursue AI safety work with access to OpenAI resources and mentorship, applications are open until May 3 via the program's official page. Questions can be directed to openaifellows@constellation.org.