OpenAI is on the hunt for someone to fill a crucial safety position that’s been empty for months, and the timing couldn’t be more critical.
The company behind ChatGPT announced last Saturday that it’s hiring a new “head of preparedness” to oversee its safety strategy. This person will be responsible for identifying potential dangers in OpenAI’s AI models and figuring out how they might be misused. CEO Sam Altman shared the job posting on X, signaling just how important this role has become.
The position comes with a hefty salary of $555,000 per year, plus company equity. According to OpenAI, the new hire will “lead the technical strategy and execution of OpenAI’s Preparedness framework,” essentially the company’s roadmap for spotting and preparing for advanced AI capabilities that could cause serious harm.
This search for a safety chief arrives at a particularly challenging moment for OpenAI. The company is currently facing multiple wrongful death lawsuits and growing questions about ChatGPT’s effects on users’ mental health. In fact, OpenAI’s own research found something alarming: more than a million ChatGPT users, about 0.07% of weekly active users, showed signs of mental health crises, including mania, psychosis, or suicidal thoughts.
Altman didn’t shy away from acknowledging the issue. “The potential impact of models on mental health was something we saw a preview of in 2025,” he said, calling the head of preparedness “a critical role at an important time.”
In his post, Altman painted a picture of the complex challenges the role involves. “If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he wrote.
He was also upfront about what the job entails: “This will be a stressful job, and you’ll jump into the deep end pretty much immediately.”
The vacancy isn’t exactly surprising given OpenAI’s rocky track record with safety leadership. The company’s safety teams have seen a revolving door of departures in recent years.
Back in July 2024, OpenAI moved Aleksander Madry, who was then serving as head of preparedness, to a different role. The company announced that two AI safety researchers, Joaquin Quinonero Candela and Lilian Weng, would share the responsibilities instead.
But that arrangement didn’t last long. Weng left OpenAI just a few months later. Then, earlier this year, Candela announced he was stepping away from the preparedness team to focus on recruiting at OpenAI instead.
The departures didn’t stop there. In November 2025, Andrea Vallone, who led a safety research team called Model Policy, announced she’d be leaving OpenAI by year’s end. Vallone had reportedly played a key role in shaping how ChatGPT responds to users experiencing mental health emergencies—work that now seems more important than ever.
With lawsuits piling up and internal safety leadership constantly shifting, OpenAI’s search for a new head of preparedness represents more than just filling an empty seat. It’s about finding someone willing to tackle one of the toughest jobs in tech at a moment when the stakes have never been higher.