Leadership fallout and counter-fallout at OpenAI reveal disunity between safety and profitability in the AI industry

Silicon Valley superpower OpenAI is experiencing a spike of attention unlike even the recognition it earned throughout 2023 for the groundbreaking release of its generative artificial intelligence chatbot, ChatGPT. Speculation now centers on the divisiveness within its leadership that led to the board’s abrupt ousting of Chief Executive Officer (CEO) Sam Altman on Nov. 17.

  Before ChatGPT’s one-year anniversary, Altman was ousted unanimously by the board in a motion led by OpenAI co-founder and chief scientist Ilya Sutskever. Complaints about Altman’s alleged behavior, which had not previously been reported, were a major factor in the board’s abrupt decision.

  According to the Washington Post, the board’s decision was driven by “Altman’s push toward commercializing the company’s rapidly advancing technology,” which Sutskever feared was coming at the expense of OpenAI’s commitments to safety.

  Altman himself helped pioneer OpenAI’s unique board structure. The group has had as many as nine members and is supposed to contain a majority with no financial stake in OpenAI. At the time of Altman’s firing, it was down to six: three employees (president and co-founder Greg Brockman, Altman, and Sutskever) and three independent directors (AI policy researcher Helen Toner, tech entrepreneur Tasha McCauley, and Quora CEO Adam D’Angelo).

  Altman was known at his startups for absenteeism. At times when he should have been present to nurture their growth, he neglected his duties, reflecting the familiar startup-to-billionaire pathway of dipping into and investing in multiple companies at once to reap the benefits without taking responsibility.

  Microsoft then announced it would hire Altman along with Brockman, who had quit in solidarity after Altman’s firing.

  Many of the company’s top researchers, whose skill sets command salaries in the tens of millions of dollars, threatened to leave OpenAI for Microsoft if Altman was not reinstated. Without them, OpenAI would have struggled to keep up with rival research labs run by Google, Meta, and Anthropic. The threat applied the necessary pressure on the board, but it also carried a sense of sabotage and blackmail, given that the board had prioritized safety over Altman’s ambitions.

  In the weekend after Altman’s firing, up to 95 percent of employees revolted against the board’s decision, threatening mass resignation and a move to Microsoft unless Altman was reinstated and the board replaced.

But it is difficult to tell whether the near-unanimous willingness of OpenAI’s staff came from a collective devotion to “the mission and the organization…that we have all worked so hard on and made such progress to” and “the shared loyalty we all feel and the sense of duty to completing the mission,” as Altman told Trevor Noah on his podcast, What Now? With Trevor Noah.

The mass resignation letter was signed between 2 and 2:30 a.m. Monday. For longtime employees, there was an added incentive to sign: Altman’s departure jeopardized an investment deal that would value the company at almost $90 billion, more than triple its $28 billion valuation in April.

While Altman’s staff may be wholeheartedly committed to expanding the young, highly profitable AI chatbot, which reached one million users in five days, two possible explanations arise: either the staff is entirely aligned with his desire for commercialization and profitability, even at the expense of the product’s safety, or sleep deprivation and peer pressure contributed to this mob-like consensus.

The board concluded that, in his desire to expand so rapidly, the CEO “was not paying enough attention to that risk,” according to the New York Times. Sutskever, among others on the board, feared that further development of OpenAI’s technology would become dangerous. Tensions over sustainable and safe development rose within the past year of the company’s explosive success, no matter how successful the company was becoming.

  In an early blog post in December 2015, Greg Brockman and Ilya Sutskever wrote, “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial returns. Since our research is free from financial obligations, we can better focus on a positive human impact.”

  But if the board’s initial concern was that development had become so rapid that it prioritized financial returns, then the current state of the eight-year-old company stands in stark contrast with its foundational values.

Although the board never fully explained its reasoning behind Altman’s firing, Vox’s Today, Explained shared, “There are also critics on an intellectual level on his approach to AI which in recent months and years has become seemingly less focused on keeping AI safe and mitigate the risks,” leaning more towards commercialization and expanding profit margins.

The non-profit board remains in control of the bigger picture, including hiring and firing, despite OpenAI’s creation of a for-profit arm. Upholding the company’s original mission, the board is intended not to protect investors and employees but to safeguard the best interests of humanity, doing what it thinks is best to keep AI safe. Although its reasons were not made clear initially, ousting Altman indicates his leadership did not align with the values upon which his co-founding colleagues built the startup.

Altman and Brockman have agreed, for at least the time being, not to hold board seats. Former OpenAI board member Helen Toner, who was castigated for co-writing an academic paper critical of OpenAI’s approach to AI safety, is now off the board. She and McCauley have been replaced by Bret Taylor, formerly co-CEO of Salesforce and a Twitter board member, and Larry Summers, former U.S. treasury secretary. These are names in much deeper alignment with Altman’s desire to expand OpenAI rapidly, with fewer checks on his power and less insistence on prioritizing AI safety and technological sustainability over rapid development. Despite taking the risk of holding a higher authority accountable, prioritizing accountability in a young company with so much power over the entire scope of artificial intelligence development, Toner was scrutinized. The only two women on the board were replaced by men in this entire debacle.

Now officially reinstated, Altman has returned with more power than ever. With a board far friendlier to his approach of expanding economically, with fewer bounds on the actual containment of artificial intelligence, the people who tried to hold him back have been replaced by people whose top concern is not necessarily AI safety. With the staff having mutinied in his favor, he has more power, more respect, and more nationwide attention than ever.

On the podcast, Altman shared, “Obviously the board needs to grow and diversify…and that’ll be something that I think happens quickly…I’m excited to have a second chance at getting all these things right, and we clearly got them all wrong.” 

Only time will tell the true intentions and direction of Altman’s leadership. The struggle between those who appear to uphold OpenAI’s initial values and the exhilarating thrill of the company’s profitability reveals a central weakness in a tech industry that seemingly wields more influence than our own government. In an era when corporations command trillions of dollars in value and fewer and fewer people place trust in our national currency, the power “big tech” companies hold is monumental. If they cannot even agree on shared values, situations like these will arise over and over again, forming an elite system of billionaires who seek expansion by whatever means possible, disregarding the safety of billions.

