16 Comments

Excuse me if I'm just covering known information here, but my 15 minutes of googling tell me that this Yudkowsky guy:

1. Did not go to high school or college.

2. Founded a non-profit in his early 20s focused on the dangers of science-fiction becoming true.

3. Has sustained his life on this for 20 years now, and is seen as a leader in this field.

That's an absolutely incredible hustle. Brilliant stuff. Like, should I have quit high school and started an NGO devoted to making sure that when we create real-life Pokémon, we only make the weak ones?


He also started a sex cult based on telling people that doing math in your head the right way would make them magically always right about everything.


What does GPT-4 think of this article?

To critically analyze the given article about the AI debate, we can identify several potential flaws:

Lack of Objective Evidence: The article relies heavily on the author's opinions and interpretations without providing sufficient objective evidence. Statements like "AI Doomers are terrible and ineffective" and "the product of far too much navel gazing and sci-fi storytelling" are subjective and lack empirical support.

Overgeneralization: The article tends to generalize the entire group of AI safety advocates as "AI Doomers," implying that they all hold extreme views about AI leading to the end of humanity. This is an overgeneralization and does not account for the nuanced positions within the AI safety community.

Ad Hominem Attacks: The article frequently resorts to ad hominem attacks, labeling those concerned about AI risks as incompetent or as buffoons. This approach detracts from a rational and evidence-based discussion of the issues.

Lack of Balance: The article does not adequately present the views of the AI safety community, focusing instead on critiquing them. A more balanced approach would include their perspectives and concerns, providing a more comprehensive view of the debate.

Simplification of Complex Issues: The issues surrounding AI development and its potential risks are complex and multifaceted. The article simplifies these issues into a binary conflict between "AI Doomers" and those focused on commercialization, which might not accurately represent the full spectrum of views and concerns in the AI community.

Speculative Conclusions: Many conclusions in the article are speculative, particularly regarding the motivations behind the firing of Sam Altman and the internal dynamics of OpenAI. The lack of concrete evidence to support these speculations weakens the article's arguments.

Failure to Address Counterarguments: The article does not address potential counterarguments or acknowledge the validity of any concerns about AI safety, which is essential for a well-rounded discussion.

Unverified Claims: The article makes several claims about the events at OpenAI and the motivations of various parties involved, but these claims are not substantiated with verifiable sources or evidence.

In summary, while the article presents a strong opinion against what the author terms "AI Doomers," it falls short in providing a balanced, evidence-based critique of the AI safety debate. It relies on subjective assessments, overgeneralizations, and speculative conclusions without adequately considering the complexities of the issue or the perspectives of those concerned about AI safety.


Uh...this is pretty amazing. I had not done the argument analysis prompt before. Could you link to a screenshot?


A 2022 survey showed that half of AI researchers give at least a 10% chance of AI causing human extinction. (1)

There was a more recent survey of 800 AI engineers. (2)

50% of AI engineers think there is a 25% or greater chance of an AI global catastrophe.

30% of AI engineers think there is a 50% or greater chance of an AI global catastrophe.

13% of AI engineers think there is a 75% or greater chance of an AI global catastrophe.

Geoffrey Hinton, who won the Turing Award for his contributions to the deep learning revolution that helped create today's most powerful AIs like GPT-4, and who is widely regarded as the godfather of AI, puts the probability of an AI global catastrophe in the next 30 years at 10%. (3)

He also quit Google in May to freely speak about the dangers of AI. (4)

And he signed an AI risk statement that states:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." (5)

Paul Christiano, a former senior OpenAI researcher who helped make GPT-3, thinks there is a 10-20% chance of an AI takeover in which most humans die.

And he thinks there is a 50% chance of doom shortly after AI becomes as smart as humans. (6)

Dario Amodei, CEO of Anthropic, one of the three major AI labs, with $7 billion in funding, thinks there is a 10-25% chance of an AI global catastrophe. (7)

One of the world’s best forecasting groups, Samotsvety, estimated the chance of an AI catastrophe at 32%, defining catastrophe as the death of 95% of humanity. (8)

The Metaculus prediction market has a question on when Artificial General Intelligence, with intelligence on par with human intelligence, will be developed.

In the eight months since February 2023, the average prediction date for the arrival of that type of AI shifted from 2041 to 2031, i.e., eight years from now. (9)

Here are sources: https://www.reddit.com/r/neoliberal/comments/17mgfvr/serious_question_how_seriously_should_we_take_the/

Now, draw your own conclusions.


What does ChatGPT think of this comment?

The comment provided a summary of various expert opinions and surveys regarding the potential risks associated with AI. However, here are a few aspects that might be considered:

1. **Selection bias:** The sources primarily focus on opinions and surveys from individuals closely involved in AI development or research. This might not represent a diverse range of perspectives, potentially skewing the overall perception of risk.

2. **Varying definitions:** The definition of "catastrophe" or "global catastrophe" may differ among different experts or surveys, leading to varied interpretations and estimates.

3. **Unclear timeframes:** While some estimates mention specific timeframes, others don't specify when these catastrophic events might occur. This lack of consensus on timing could affect the severity or immediacy of the potential risks.

4. **Lack of counter-opinions:** The comment didn't include perspectives from experts who might hold contrasting views, potentially presenting a one-sided portrayal of the AI risk landscape.

5. **Source credibility:** Reddit, while a platform for discussion, might not always provide verifiable or peer-reviewed sources, potentially affecting the reliability of the information shared.

These points don't necessarily invalidate the concerns raised by experts but highlight potential limitations in the representation and framing of the AI risk landscape.

Dec 5, 2023·edited Dec 5, 2023

I take this to heart.

I agree with all these people (https://www.safe.ai/statement-on-ai-risk) that there is a risk that AI could threaten the survival of humanity (for me: 10-30% chance). I'm a doomer...with three grown daughters. Future generations are not theoretical to me.

You are *right* that the people who care are bungling the job of understanding the risks and making AI safe.

Two questions for the author:

1) You provided the "silly" label, but **please attack the central argument that AI might kill humanity**. In a nutshell, the argument I'm asking you to attack is this: "More intelligent things do not worry about the desires (or survival) of less intelligent things as the more intelligent things pursue their own goals. Example: Humans inadvertently kill animals while building strip malls and apartment buildings."

Let me help you attack the central argument and justify "silly."

Your refutation might be: a) "Since we are the ones building the AI, it will always care about us and protect us no matter how smart it gets. We get this affection automatically."

Or your refutation might be: b) "We'll never get there. AI will never be smarter than humans."

Or your refutation might be: c) "We've got a lot of smart people working on this. They will figure out how to instill permanent affection in AI before we build AI."

Or your refutation might be: d) "AI killing people has been written about in sci-fi novels. An event described in a sci-fi novel can never happen in the real world. (If you want to protect humanity forever from a specific risk like an asteroid hitting the earth, write it into a sci-fi novel and publish. Bam.)"

You can tell these are arguments I do not find compelling. IMHO, these refutations all have flaws.

What is your better argument? Please, there are a lot of people like me who would rather not worry about this. Put our minds at ease. Share the logic for your certainty.

___

2) What would competence by AI doomers look like? If they were not a bunch of idiots, what would they be doing?

Dec 5, 2023·edited Dec 10, 2023

And you are right about malaria. Prompted by reading your post, I just gave $40 to the Against Malaria Foundation for 20 nets, enough to protect 36 people.

https://www.dropbox.com/scl/fi/stid8ms6vy6ovbu0jxg0s/2023-12-05_13-00-45.jpg?rlkey=0xbiq39xhl8jhugf8ojbau80b&dl=0


This does not address whether the "AI doomers" are right about AI being an existential risk to humanity. It does illustrate that there are institutional reasons that said risk will be systematically underestimated, which strengthens their case.


Altman is also a doomer!


> Has sustained his life on this for 20 years now, and is seen as a leader in this field.

Some people drop out of the standard system because it's too slow for them or because they don't fit in the boxes.

Eliezer has written some pretty technical mathy papers about AI.

Also, the dangers of scifi coming true.

Half the inventions we have in the modern day were somewhat predicted by some scifi author if you look around.

Whether we are talking about H. G. Wells predicting journeys to the moon or Star Trek communicators being pretty close to smartphones.

Some scifi authors try to predict what future tech is possible. And they are often not bad at it.

Predicting the future is harder than identifying literary genre.


Yutzkowsky is a total clown to be sure, but isn't it coming out that the true story was actually more boring, disappointing, and conventional than any of the speculation put forth? Altman running his various Milo Minderbinder schemes to enrich himself, and pitting members of the board against each other with good old fashioned corporate backstabbing that Henry Ford or Andrew Carnegie might have recognized? Nothing new under the sun innit?


This take is extraordinarily wrong.

The possibility of "AI doom" is basically the consensus position among the top experts, the tech companies, the governments of the world, and the general public. Certainly in the OpenAI circles, everyone is in agreement. Sam Altman was strenuously advocating taking AI doom seriously well before he co-founded OpenAI, and continued to do so as CEO. Everyone important on the staff joined Altman in signing the "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" letter, including CTO Mira Murati, Chief Scientist Ilya Sutskever, Head of policy research Miles Brundage, and Governance Lead Jade Leung, not to mention all the relevant board members _except_ Helen Toner and Tasha McCauley, who presumably are in agreement with it anyway.

There are no AI risk deniers in this story, anywhere.

As for what the AI safety camp has accomplished: A large amount of alignment research, even if it didn't end up succeeding, plus enough political work such that AGI creation will likely end up internationally outlawed in time, thus succeeding in the fundamental goal of human survival.

Nov 27, 2023·edited Nov 27, 2023

Have you written anything longform on AI Doomerism (and why it's wrong rather than ineffective)?

I'm moderately doomery (5-10% chance of some kind of existential AI catastrophe in the next century or so) but haven't read much by informed critics of Yudkowsky.


Yudkowsky has always struck me as an incredibly insightful guy who has an equally incredible inability to apply any of his own insights to himself. "No Evolution for Corporations" is peak comedy.


They are incompetent by their own lights if you think that they're all honest and it's not a kind of marketing. If they wanted humanity to have a voice in AGI development, as they claim, they would've made their firms co-ops. Instead they start conventional firms and use the conventional venture-capital courting strategy of talking a big game about how it will change everything in a couple of years or decades. That just seems like Marketing 101 to me.

"Eliezer Yudkowsky is both the world’s worst Harry Potter fanfiction writer..."

You're forgetting about 'The Cursed Child' (which, tbf, I also try to do).
