OpenAI·xAI Safety Talent Exodus: 3 Key Takeaways
- xAI: 6 out of 12 co-founders have resigned
- OpenAI: Mission Alignment team disbanded, policy executive fired
- AI industry trust shaken as safety personnel depart
Half of xAI’s Founding Team Vanished
Half of Elon Musk’s xAI co-founding team has left: six of the original 12 members have resigned, and at least 11 engineers announced their departures last week alone.[TechCrunch]
Most recently, Jimmy Ba, who reported directly to Musk, and Tony Wu, who led multimodal work, left the company. Musk described the changes as a “reorganization to increase execution speed,” but many of the departures were reportedly voluntary.[Fortune]
OpenAI Disbands Safety Team, Fires Dissenters
OpenAI disbanded its Mission Alignment team, a group of six to seven people responsible for communicating the company’s mission internally and externally. The team lead’s title was changed to ‘Chief Futurist,’ while the remaining members were reassigned to other departments.[TechCrunch]
More controversially, Ryan Beiermeister, VP of Policy, was fired, reportedly over a discrimination claim, after opposing ChatGPT’s ‘adult mode.’ Beiermeister has vehemently denied the allegations.[TechCrunch]
AI Industry Prioritizes Speed Over Safety
At Anthropic, Mrinank Sharma, head of a safety research team, also resigned, warning that “the world is in danger.” It is unusual for safety personnel to leave all three leading labs at nearly the same time.[CNN]
The pattern is consistent: as release schedules accelerate, safety checks are sidelined, and those who object leave. The loss of safety personnel poses a significant long-term risk to the industry.
Frequently Asked Questions (FAQ)
Q: Why did xAI’s co-founders leave?
A: Officially, it was a reorganization. In practice, product-development pressure, regulatory issues, and the Grok deepfake controversy all appear to have contributed. The exits are a mix of voluntary resignations and restructuring, which Musk says is meant to improve execution speed.
Q: What is OpenAI’s Adult Mode?
A: It is a planned feature that would allow adult content on ChatGPT. Fidji Simo, OpenAI’s CEO of Applications, announced a planned release in Q1 2026. The dismissal of the policy VP who opposed it has sparked controversy, with critics arguing that revenue is being prioritized over safety.
Q: Why is the departure of AI safety talent a problem?
A: Safety researchers and policy officers act as internal checks that AI systems operate ethically. Their departure weakens those checks and balances, and as the industry moves toward AGI, the absence of safety personnel can lead to unpredictable risks.
If you found this helpful, please subscribe to AI Digester.
References
- Okay, now exactly half of xAI’s founding team has left the company – TechCrunch (2026-02-10)
- X-odus: Half of xAI’s founding team has left – Fortune (2026-02-11)
- OpenAI disbands mission alignment team – TechCrunch (2026-02-11)
- OpenAI policy exec fired on discrimination claim – TechCrunch (2026-02-10)
- AI researchers are sounding the alarm on their way out – CNN (2026-02-11)