What the OpenAI drama tells us about the tech industry
The ousting of Sam Altman as CEO of OpenAI, worthy of a soap opera, has rocked Silicon Valley and the tech industry as a whole.
Let’s start at the very beginning: OpenAI began as a nonprofit research lab whose mission was to safely develop artificial intelligence on par with or beyond human level, termed artificial general intelligence (AGI).
The company discovered a promising path in LLMs that generate text, but developing and deploying those models required large amounts of computing infrastructure and cash. This led OpenAI to create a commercial entity to attract outside investors, and it landed a huge partner: Microsoft.
Is OpenAI really a nonprofit?
Unlike other tech giants, the company behind ChatGPT was set up as a nonprofit by founders who hoped it would not be beholden to commercial interests.
But things got complicated.
While OpenAI later transitioned to a for-profit model, its controlling shareholder remains the nonprofit OpenAI Inc. and its board of directors. This unique structure made it possible for four OpenAI board members — the company’s chief scientist, two outside tech entrepreneurs and an academic — to oust CEO Sam Altman last month.
On 11 March 2019, the company announced it had created a “capped-profit” company, with returns for its first round of investors capped at 100x their investment. OpenAI has raised $12.3B from investors, per Dealroom, which puts the overall returns cap at around $1.2T, rendering the cap essentially meaningless.
To be absolutely clear, there is nothing wrong with a for-profit approach; we’ve seen many of tech’s biggest innovations come from for-profit companies. The promise of profits is what motivates investors. It is also a major reason why employees join OpenAI: if the company never generated profits, the equity portion of their compensation would be worth zero.
OpenAI might also have set a different multiplier for later-stage investors, as the company wrote at the time: “we expect [the investor return] multiple to be lower [than 100x] for future rounds as we make further progress.”
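To put numbers on why that cap is toothless, here is a minimal back-of-the-envelope sketch in Python. Only the 100x first-round cap is public; the per-round split and the later-round multiplier below are illustrative assumptions, not disclosed terms.

    # Rough sketch of OpenAI's "capped-profit" arithmetic.
    # Only the 100x first-round cap is public; the amounts and the
    # later-round multiplier here are assumptions for illustration.
    rounds = [
        # (label, capital raised in $B, assumed return multiple)
        ("first round", 1.0, 100),    # 100x cap, as stated by OpenAI
        ("later rounds", 11.3, 100),  # assumed; OpenAI said it may be lower
    ]

    total_raised = sum(amount for _, amount, _ in rounds)
    total_cap = sum(amount * multiple for _, amount, multiple in rounds)

    print(f"raised: ${total_raised:.1f}B, implied cap: ~${total_cap / 1000:.2f}T")
    # ~$12.3B raised at ~100x implies a returns cap of ~$1.23T.

Even if later rounds were capped at a fraction of 100x, the ceiling would still sit in the hundreds of billions of dollars, far beyond any plausible payout.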
OpenAI clearly wanted to “have its cake and eat it.” The changes made in 2019 attempted to inject a for-profit incentive into the nonprofit, which created an obvious conflict: stay a nonprofit that prioritizes humanitarian goals, or run a for-profit model that allows the company to attract world-class talent.
The contradictions also apply to safety and speed. To benefit humanity, moving slower and safer is the sensible approach. However, to generate profits, OpenAI needs to be first to market and first to monetize. And OpenAI has indeed moved fastest, with the ChatGPT product taking the world by storm. The company claims to prioritize safety equally with speed, but internally, it seems that not everyone sees it this way.
Safety vs. scaling
While few details about the firing have emerged, many speculate that the real reason was disagreement over how to develop AGI safely. It’s well documented that Altman was the main proponent of commercializing the technology, as opposed to others in senior management who were more concerned about the potential dangers of AI.
Ironically, the biggest proponents of safety in the space, who fired Altman “in the name of safety,” may have accomplished the opposite: weakening the company that spearheads AI research set off an earthquake in Silicon Valley.
Some prominent figures in the tech industry have called out this movement, known as “effective altruism,” which started out as an effort to direct charitable giving based on data and evidence, and to be “conscious” (whatever that means) about the risks associated with AI. What’s interesting here is that two of the four board members involved in the firing of Sam Altman are closely tied to the effective altruism movement.
ChatGPT’s meteoric success is something we have never seen before. A year after launching in November 2022, the product has surpassed 100M weekly active users, as announced by CEO Sam Altman on 6 November 2023. However, this rise in popularity seems to have caught parts of OpenAI off guard, as The Atlantic reported:
“From the outside, ChatGPT looked like one of the most successful product launches of all time. It grew faster than any other consumer app in history, and it seemed to single-handedly redefine how millions of people understood the threat—and promise—of automation. But it sent OpenAI in polar-opposite directions, widening and worsening the already present ideological rifts. ChatGPT supercharged the race to create products for profit as it simultaneously heaped unprecedented pressure on the company’s infrastructure and on the employees focused on assessing and mitigating the technology’s risks.”
Becoming the industry leader is not easy, but staying there is even more challenging. That’s why, after hearing that one of its biggest competitors was looking to launch a chatbot some months after the release of ChatGPT, OpenAI decided to speed up its own work to get ahead of the competition.
The result? More internal tension, according to that same report by The Atlantic:
“Safety teams within the company pushed to slow things down. These teams worked to refine ChatGPT to refuse certain types of abusive requests and to respond to other queries with more appropriate answers. But they struggled to build features such as an automated function that would ban users who repeatedly abused ChatGPT. In contrast, the company’s product side wanted to build on the momentum and double down on commercialization.”
And tensions grew, according to The Atlantic: chief scientist and co-founder Ilya Sutskever became increasingly concerned that OpenAI was putting commercialization ahead of the governing nonprofit’s mission to create beneficial AGI.
Many argue that this created a fundamental, unresolvable conflict, and that the board voted to remove what Sam Altman and Greg Brockman stood for: the further commercialization of OpenAI’s technology. Ejecting Altman and Brockman was a desperate attempt to refocus OpenAI on its nonprofit charter, ahead of profitability, growth, and moving fast.
Microsoft’s unstated power
The unfolding of this crisis proved something that was not obvious until now: Microsoft wants an independent, for-profit OpenAI team it can rely on. Satya Nadella’s bid to hire Altman along with other top talent proved a masterstroke that forced OpenAI’s board to rethink its decision.
If we think about it from Microsoft’s point of view, this move makes total sense. A corporation the size of Microsoft has to move slowly, aware that its reputation is on the line with every move it makes. A startup the size of OpenAI, on the other hand, can afford to make more mistakes and pursue a less polished business strategy, knowing that its customers will be much more forgiving than those of a giant like Microsoft.
However, this move showed who is actually behind any direction OpenAI takes. You guessed it: Microsoft. With a simple job offer, Satya Nadella restored order at OpenAI without drawing heavy scrutiny to his own company, all while holding no board seats.
In the end, things turned out great for Satya Nadella: the board was replaced, Altman was reinstated, and Microsoft continues to have exclusive access to OpenAI’s models, without much reputational risk. OpenAI might be an independent company on paper, but Microsoft sure got its way in the end.
If the chaos at OpenAI has shown anything, it’s that clinging to ideals that aren’t shared by many of the people involved will, sadly, lead nowhere good. Perhaps this was a good opportunity for everyone to understand what OpenAI actually stands for: is it really that concerned about humanity? Or is it just another for-profit tech company that simply has a great marketing team?