AI Doomers are worse than wrong - they're incompetent
Even judged on their own terms, AI Doomers are terrible and ineffective
Last week one of the most important tech companies in the world nearly self-destructed. And the entire thing was caused by the wild incompetence of a small slice of ‘effective altruists’.
Other sites have reported the exact series of events in greater detail, so I’m going to just run through the basics. OpenAI is an oddly structured AI company/non-profit1 that’s famous for its large language models like GPT-4 and ChatGPT as well as image creation tools like DALL-E. Thanks mostly to the sensational debut of ChatGPT, it’s now valued at around $80 billion and many observers think it could break into the Microsoft/Google/Apple/Amazon/Meta2 tier of tech giants. But last week, with essentially no warnings of any kind, OpenAI’s board of directors fired founder and CEO Sam Altman. The board said Altman was not “consistently candid in his communications” with the board, without elaborating or providing more detail.
The backlash to the board’s decision was nearly immediate. Altman is extraordinarily popular at OpenAI and in Silicon Valley writ large, and that popularity proved durable against the board’s vague accusations. President and chairman Greg Brockman resigned in protest. Giant institutional investors in OpenAI (including Microsoft, Sequoia Capital, and Thrive Capital) began to press behind the scenes for the decision to be reversed. Less than 24 hours after his firing, Altman was in negotiations with the board to return to the company. More than 90% of the company’s workforce3 threatened to resign if Altman wasn’t reinstated. Microsoft basically threatened to hire Altman, steal all of OpenAI’s employees and just recreate the entire company themselves.
There were several embarrassing twists and turns. Altman was back but then he wasn’t, then the board tried a desperation merger with rival Anthropic which was turned down immediately, and the entire time the OpenAI office was leaking rumors like a sieve. Finally on November 21st, four days after Altman was fired, he was reinstated as CEO and the board members who voted to oust him were replaced. In trying to fire Altman, the board ended up firing themselves.
There are dozens of angles you can take to talk about this story, but the most interesting one for me is how this epitomizes the buffoonery and tactical incompetence of the AI doom movement.
It’s unclear exactly why the OpenAI board decided to fire Altman. They’ve specifically denied it was due to any ‘malfeasance’ and at no point has anyone on the board provided any detail about the supposed lack of ‘candid communications’. Some speculate it’s because of a staff letter warning about a ‘powerful discovery that could threaten humanity’. Some think it stemmed from a dispute Altman had with Helen Toner, one of the board members who voted to oust him. Some think that it’s a disagreement about moving too fast in ways that endanger safety.
Whatever the precise nature of the disagreement, one thing is clear. There were two camps within OpenAI - one group of AI doomers laser-focused on AI safety and one group more focused on commercializing OpenAI’s products. The conflict was between these two camps, with the board members who voted Altman out in the AI doom camp and Altman in the more commercial camp. And you can’t understand what happened at OpenAI without understanding the group that believes AI will destroy humanity as we know it.
I am not an AI doomer.4 I think the idea that AI is going to kill us all is deeply silly, thoroughly non-rigorous and the product of far too much navel-gazing and sci-fi storytelling. But there are plenty of people who do believe that AI either will or might kill all of humanity, and they take this idea very seriously. They don’t just think “AI could take our jobs” or “AI could accidentally cause a big disaster” or “AI will be bad for the environment/capitalism/copyright/etc”. They think that AI is advancing so fast that pretty soon we’re going to create a godlike artificial intelligence which will really, truly kill every single human on the planet in service of some inscrutable AI goal. These folks exist. Oftentimes they’re actually very smart, nice and well-meaning people. They have a significant amount of institutional power in the non-profit and effective altruism worlds. They have sucked up hundreds of millions of dollars of funding for their many institutes and centers studying the problem. They would likely call themselves something like ‘AI Safety Advocates’. A less flattering and more accurate name would be ‘AI Doomers’. Everybody wants AI to be safe, but only one group thinks we’re literally all going to die.
I disagree with the ‘AI Doom’ hypothesis. But what’s remarkable is that even if you grant their premise, for all their influence and institutes and piles of money and effort, they have essentially no accomplishments. If anything, the AI doom movement has made things worse by their own standards. It’s one of the least effective, most tactically inept social movements I’ve ever seen.
How do you measure something like that? By looking at the evidence in front of your face. OpenAI’s strange institutional setup (a non-profit controlling an $80B for-profit corporation) is a direct result of AI doom fears. Just in case OpenAI-the-business made an AI that was too advanced, just in case they were tempted by profit to push safety to the side… the non-profit’s board would be able to step in and stop it. By all appearances, that’s exactly what happened with Sam Altman’s firing. The board members who agreed to fire him all have extensive ties to the effective altruism and AI doom camps. The board was likely uncomfortable with the runaway success of OpenAI’s LLMs and wanted to slow down the pace of development, while Altman was publicly pushing to go faster and dream bigger.
The problem with the board’s approach is that they failed. They failed catastrophically. I cannot emphasize in strong enough terms how much of a public humiliation this is for the AI doom camp. One week ago, true-believer AI safety/AI doom advocates had formal control of the most important, advanced and influential AI company in the world. Now they’re all gone. They completely neutered all their institutional power with an idiotic strategic blunder.
The board fired Altman seemingly without a single thought about what would happen after they fired him. I’m curious what they actually thought was going to happen - they would fire Altman and all the investors in the for-profit corporation would just say “Oh, I guess we should just not develop this revolutionary technology we paid billions for. You’re right, money doesn’t matter! This is a thing that we venture capitalists often say, haha!”
It seems pretty damn clear that they had no game plan. They didn’t do even basic due diligence. If they had, they’d have realized that every institutional investor, more than 90% of their own employees and virtually the entire tech industry would back Altman. They’d have realized that firing Altman would cause the company to self-destruct.
But maybe things were so bad and the AI was so dangerous that destroying the company was actually good! This is the view expressed by board member Helen Toner, who said that destroying the company could be consistent with the board’s mission. The problem with Helen Toner’s strategy is that while Helen Toner might have total control over OpenAI, she does not have total control over the rest of the tech industry. When the board fired Altman, he was scooped up by Microsoft within 48 hours. Within 72 hours, there was a standing offer of employment for any OpenAI employee to jump ship to Microsoft at equal pay. And the vast majority of their employees were on board with this. The end result of the board’s actions would be that OpenAI still existed, only it’d be called ‘MicrosoftAI’ instead. And there would be even fewer safeguards against dangerous AI - Microsoft is a company that laid off its entire AI ethics and safety team earlier this year. Not a single post-firing scenario here was actually good for the AI doomer camp. It’s hard to overstate what a parade of dumb-fuckery this was. Wile E. Coyote has had more success against the Road Runner than OpenAI’s board has had in slowing dangerous AI developments.
This buffoonish incompetence is sadly typical for AI doomers. For all the worry, for all the effort that people put into thinking about AI doom there is a startling lack of any real achievements that make AI concretely safer. I’ve asked this question before - What value have you actually produced? - and usually I get pointed to some very sad stuff like ‘Here is a white paper we wrote called Functional Decision Theory: A New Theory of Instrumental Rationality’. And hey, papers like these don’t do anything, but what they lack in impact they make up for in volume! Or I’ll hear “We convinced this company to test their AI for dangerous scenarios before release”. If your greatest accomplishment is encouraging companies to test their own products in basic ways, you may want to consider whether you’ve actually done anything at all.
There’s a sense in which I’m being very unfair to AI doom advocates. They do actually have a huge string of accomplishments - the only problem is that they’re accomplishments in the exact opposite direction from their stated goals. If anything, they’ve made super-advanced AI happen faster. OpenAI was explicitly founded in the name of AI safety! Now OpenAI is leading the charge to develop cutting-edge AIs faster than anyone else, and they’re apparently so dangerous that the CEO needed to be fired. AI enthusiasts will take this as a win, but it sure is curious that the world’s most advanced AI models are coming from an organization founded by people who think AI might kill everyone.
Or consider Anthropic. Anthropic was founded by ex-OpenAI employees who worried the company was not focused enough on safety. They decamped and founded their own rival firm that would truly, actually care about safety. They were true AI doom believers. And what impact did founding Anthropic have? OpenAI, late in 2022, became afraid that Anthropic was going to beat them to the punch with a chatbot. They quickly released a modified version of GPT-3.5 to the public under the name ‘ChatGPT’. Yes, Anthropic’s existence was the reason ChatGPT was published to the world. And Anthropic, paragons of safety and advocates of The Right Way To Develop AI, ended up partnering with Amazon, making them just as beholden to shareholders and corporate profits as any other tech startup. You will notice the pattern - every time AI doom advocates take major action, they seem to push AI further and faster.
This isn’t just my idle theorizing. Ask Sam Altman himself.
Eliezer Yudkowsky is both the world’s worst Harry Potter fanfiction writer5 and the most important figure in the AI doom movement, having sounded the alarm on dangerous AI for more than a decade. And Altman himself thinks Big Yud’s net impact has been to accelerate AGI (artificial general intelligence, aka smarter-than-human AI).
Even Yudkowsky himself, who founded the Machine Intelligence Research Institute to study how to develop AI safely, basically thinks all his efforts have been worthless. In an editorial for TIME, he said ‘We are not prepared’ and ‘There is no plan’. He advocated for a total worldwide shutdown of every single instance of AI development and AI research. He said that we should airstrike countries that develop AI, and would rather risk nuclear war than have AI developed anywhere on earth. Leaving aside the lunacy of that suggestion, it’s a frank admission that AI doomers haven’t accomplished anything despite more than a decade of effort.
The upshot of all this is that the net impact of the AI safety/AI doom movement has been to make AI happen faster, not slower. They have no real achievements of any significance to their name. They write white papers, they found institutes, they take in money, but by their own standards they have accomplished worse than nothing. There are various cope justifications for these failures - maybe it would be even worse counterfactually! Maybe firing Altman and then hiring him back was actually logical by some crazy mental jiu-jitsu! Stop it. It’s embarrassing. The crowd that’s perfectly willing to speculate about the nature of godlike future AIs is congenitally unable to see the obvious thing directly in front of them.
There’s a real irony that AI doom is tightly interwoven with the ‘effective altruist’ world. To editorialize a bit: I consider myself somewhat of an effective altruist, but I got into the movement as someone who thinks stopping malaria deaths in Africa is a good idea because it’s so cost-effective. It pisses me off that AI doomers have ruined the label of effective altruist6. Nothing AI doomers do has had the slightest amount of impact. As far as I can tell they haven’t benefited humanity in any real way, even by their own standards. They are the opposite of ‘effective’. At best they are a money and talent drain that directs funding and bright, well-meaning young people into pointless work. At worst they are active grifters.
C'est pire qu'un crime, c'est une faute
- Charles Maurice de Talleyrand-Périgord
I really wish the AI safety/doom camp would stop and take stock of exactly what it is they think they’re accomplishing. They won’t, but I wish they would. I’d love to see them just separated from the EA movement entirely. I’d love for EA funders to stop throwing money at them. I’d love to see them admit that not only do they not accomplish anything with their hundreds of millions, they don’t even have a proper framework from which to measure their non-accomplishments. Their whole ecosystem is full of sound and fury, but not much else.
When Napoleon executed the Duke of Enghien in 1804, Talleyrand famously commented “It is worse than a crime, it is a mistake”. The AI doom movement is worse than wrong, it’s utterly incompetent. The firing of Sam Altman was only the latest example from a movement steeped in incompetence, labelled as ‘effective altruism’ but without the slightest evidence of effectiveness to back it up.
1. There is an OpenAI non-profit with the goal of creating safe AI, which controls a for-profit corporation also called OpenAI. The corporation is valued at $80 billion, but final control of that corporation ultimately rests with the non-profit’s board.
2. ‘FAANG’ has been dead for years, MAGMA is the new hotness
3. The count I’ve seen is 745 out of 770 employees, which is an *astonishing* amount of loyalty
4. To stake my ground - I’m not going to debate this in the piece, in the comment section, on twitter, etc. I’ve had far too many 10,000 word back-and-forths with AI doom advocates and I have no desire to re-hash those arguments for a tenth time. I’ve heard your talking points and I don’t think they make sense.
5. Just kidding, his fanfiction is terrible, but My Immortal still exists and nothing is touching how bad that one is.
6. SBF was also an AI doomer, unsurprisingly
Excuse me if I'm just covering known information here, but my 15 minutes of googling tell me that this Yudkowsky guy:
1. Did not go to high school or college.
2. Founded a non-profit in his early 20s focused on the dangers of science-fiction becoming true.
3. Has sustained his life on this for 20 years now, and is seen as a leader in this field.
That's an absolutely incredible hustle. Brilliant stuff. Like, should I have quit high school and started an NGO devoted to making sure that when we create real life pokemon, we only make the weak ones?
What does GPT-4 think of this article?
To critically analyze the given article about the AI debate, we can identify several potential flaws:
Lack of Objective Evidence: The article relies heavily on the author's opinions and interpretations without providing sufficient objective evidence. Statements like "AI Doomers are terrible and ineffective" and "the product of far too much navel gazing and sci-fi storytelling" are subjective and lack empirical support.
Overgeneralization: The article tends to generalize the entire group of AI safety advocates as "AI Doomers," implying that they all hold extreme views about AI leading to the end of humanity. This is an overgeneralization and does not account for the nuanced positions within the AI safety community.
Ad Hominem Attacks: The article frequently resorts to ad hominem attacks, labeling those concerned about AI risks as incompetent or as buffoons. This approach detracts from a rational and evidence-based discussion of the issues.
Lack of Balance: The article does not adequately present the views of the AI safety community, focusing instead on critiquing them. A more balanced approach would include their perspectives and concerns, providing a more comprehensive view of the debate.
Simplification of Complex Issues: The issues surrounding AI development and its potential risks are complex and multifaceted. The article simplifies these issues into a binary conflict between "AI Doomers" and those focused on commercialization, which might not accurately represent the full spectrum of views and concerns in the AI community.
Speculative Conclusions: Many conclusions in the article are speculative, particularly regarding the motivations behind the firing of Sam Altman and the internal dynamics of OpenAI. The lack of concrete evidence to support these speculations weakens the article's arguments.
Failure to Address Counterarguments: The article does not address potential counterarguments or acknowledge the validity of any concerns about AI safety, which is essential for a well-rounded discussion.
Unverified Claims: The article makes several claims about the events at OpenAI and the motivations of various parties involved, but these claims are not substantiated with verifiable sources or evidence.
In summary, while the article presents a strong opinion against what the author terms "AI Doomers," it falls short in providing a balanced, evidence-based critique of the AI safety debate. It relies on subjective assessments, overgeneralizations, and speculative conclusions without adequately considering the complexities of the issue or the perspectives of those concerned about AI safety.