OpenAI power struggle exposed in Ilya Sutskever deposition threatens company's future

In a bombshell 10-hour deposition that has now become public, Ilya Sutskever, co-founder of OpenAI and one of the key minds behind ChatGPT, gave sworn testimony exposing a dramatic internal power struggle that threatened to unravel the very company he helped create. The deposition, taken as part of the ongoing Musk v. Altman lawsuit, offers a rare, unfiltered glimpse into the turmoil that nearly led to OpenAI's implosion.

The 365-page transcript reveals more than conflicting personalities and tech-world politics: it paints a detailed picture of a company caught between an ambitious mission and weak internal governance. Sutskever, who once championed OpenAI's vision of building artificial general intelligence (AGI) for the benefit of humanity, found himself at the center of a boardroom coup that almost dismantled the organization.

According to his testimony, the OpenAI board's fateful decision to remove CEO Sam Altman in November 2023 rested on a combination of internal concerns, ideological disagreements, and what Sutskever now admits were unsubstantiated claims. What followed was a weekend of panic and confusion, as key employees threatened to resign, investors scrambled for answers, and the company teetered on the brink of collapse.

One of the most striking revelations from the deposition was the existence of a confidential 52-page internal report that has yet to be made public. According to Sutskever, the document formed a core part of the rationale for Altman's dismissal. Yet he admitted that its claims were never independently verified, raising serious questions about decision-making at the highest levels of the company.

Another major factor in the crisis was what Sutskever described as a miscalculation rooted in OpenAI's governance structure. The board, composed of individuals with limited experience running high-growth technology companies, underestimated the organizational fallout of ousting Altman without a clear succession plan. The deposition also revealed that the board failed to communicate properly with OpenAI's broader leadership team and key stakeholders, leading to a loss of trust and a perception of instability.

Sutskever’s testimony also highlighted a deeper tension within the company: a philosophical divide between those who saw OpenAI’s mission as requiring strict ethical safeguards and those who prioritized rapid technological advancement. This ideological rift contributed to an environment where suspicion brewed and internal alliances formed, ultimately undermining collective decision-making.

In a moment of stark reflection, Sutskever even acknowledged that “destroying OpenAI could be consistent with the mission,” referencing the organization’s founding principle that AGI development must not be driven by profit or power. This statement underscores the paradox at OpenAI’s core: a desire to build world-changing technology while avoiding the very pitfalls that such power typically brings.

The fallout from this internal clash was swift and nearly catastrophic. More than 700 of OpenAI's roughly 770 employees signed a letter threatening to resign if Altman was not reinstated. Investors, most notably Microsoft, applied pressure to restore stability. Within days, Altman returned as CEO, but not before the company's credibility and cohesion had been severely shaken.

Since then, OpenAI has undertaken structural reforms, including changes to its board composition and internal oversight mechanisms. But the damage to internal trust lingers, as do public doubts about the company's ability to responsibly manage its own power.

The deposition also raises broader questions about the governance of AI firms, particularly those operating at the frontier of AGI research. As these companies amass unprecedented influence over the future of technology, who holds them accountable? And how can they ensure that their internal decision-making processes are robust enough to withstand ideological and personal conflicts?

Experts in corporate governance have pointed to OpenAI's experience as a cautionary tale. The company's hybrid structure, a nonprofit parent controlling a capped-profit subsidiary, may have contributed to the confusion. While this model was designed to balance mission with market viability, it also created conflicting incentives that proved unmanageable in a crisis.

Furthermore, the deposition cast light on the cult-like loyalty many employees had toward Altman, a dynamic that complicated efforts to install alternative leadership. In organizations working on world-changing technologies, such loyalty can be both a strength and a liability, particularly when it overrides institutional checks and balances.

Since Altman’s return, OpenAI has sought to rebuild trust through increased transparency and a renewed focus on its founding principles. The company has also recommitted to its partnership with Microsoft and expanded its safety and ethics teams. Still, the events of late 2023 serve as a stark reminder that even the most visionary organizations are vulnerable to internal collapse if governance is not taken seriously.

The long-term impact of this episode remains to be seen. While OpenAI continues to lead in AI development, its internal struggles have fueled skepticism among regulators, academics, and the public. As governments around the world consider how to regulate AI, the OpenAI crisis may become a case study in the risks of under-regulated innovation.

Ilya Sutskever's deposition offers a rare and unsettling look behind the curtain of one of the most influential tech companies of our time. It is a reminder that the road to AGI is paved not only with code and algorithms, but also with complex human dynamics: ego, ideology, and the ever-present risk of hubris.