Expert Warns of OpenAI’s ‘Dangerous’ Shift in AI Safety Priorities

A prominent voice in AI research has sounded the alarm over what he calls a "very dangerous mentality" emerging within OpenAI, the influential organization at the forefront of artificial intelligence development. Miles Brundage, a former policy researcher at OpenAI who departed in late 2024, leveled this critique in early March 2025.


Brundage’s concerns hinge on a stark contrast between OpenAI’s past and present approaches to deploying AI systems. He pointed to the 2019 release of GPT-2, a language model unveiled with notable restraint over fears it could be exploited to generate misinformation or other harmful content.

Back then, OpenAI adopted a phased rollout, allowing time to assess the technology’s societal impact before fully unleashing it. Today, Brundage argues, that cautious ethos has been supplanted by a more permissive stance, one that sidelines safety considerations unless "overwhelming evidence of imminent dangers" emerges. This shift, he contends, marks a troubling departure from the rigorous standards needed to responsibly manage increasingly powerful AI technologies.

Founded in 2015 as a nonprofit dedicated to advancing AI research, OpenAI has since morphed into a commercial titan, propelled by high-profile successes like ChatGPT and substantial investments from partners such as Microsoft. This transformation has catapulted the organization to the vanguard of the AI industry, but it has also drawn scrutiny.

Brundage’s critique builds on a chorus of unease from other former OpenAI insiders, including Ilya Sutskever and Jan Leike, who exited in 2024 amid public expressions of concern. Leike, in particular, decried a corporate culture where "safety processes have taken a backseat to shiny products," suggesting that the pursuit of market dominance may be overshadowing foundational safeguards.

OpenAI’s recently published safety and alignment framework, intended to clarify its risk mitigation strategies, has only deepened the divide. While the document details efforts to curb misuse and align AI behavior with human values, Brundage and other critics view it as emblematic of a reactive mindset that prioritizes deployment over preemptive safety research.

This approach stands in stark contrast to the proactive measures some experts advocate, especially as AI capabilities edge closer to artificial general intelligence (AGI), a milestone that could amplify both the technology’s promise and its perils. With competitors like Anthropic, Google, and xAI hot on its heels, OpenAI’s safety posture could set a critical precedent for the broader field.

The warning resonates beyond OpenAI’s walls, tapping into a growing unease among AI researchers and ethicists. Geoffrey Hinton, a deep learning luminary and vocal critic of unchecked AI development, has long argued that profit-driven priorities risk outstripping efforts to address catastrophic outcomes.
