A 2022 survey showed that half of AI researchers give at least a 10% chance of AI causing human extinction. (1)
A more recent survey of 800 AI engineers found the following (a quick sanity-check calculation follows the list): (2)
- 50% of the engineers think there is a 25% or greater chance of an AI global catastrophe.
- 30% think there is a 50% or greater chance.
- 13% think there is a 75% or greater chance.
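One way to sanity-check those bracket figures is a rough lower bound on the average probability they imply: assume every respondent sits at the very bottom of their bracket, and (a deliberately conservative assumption the survey does not state) count the remaining half of respondents as 0%. A minimal sketch in Python:

```python
# Rough lower bound on the mean catastrophe probability implied by the
# survey brackets above. Assumes every respondent holds the *lowest*
# probability consistent with their answer, so the true mean is higher.

share_25_to_50 = 0.50 - 0.30  # answered ">= 25%" but not ">= 50%"
share_50_to_75 = 0.30 - 0.13  # answered ">= 50%" but not ">= 75%"
share_75_plus = 0.13          # answered ">= 75%"
# The remaining 50% of respondents are conservatively counted as 0%.

lower_bound = (share_25_to_50 * 0.25
               + share_50_to_75 * 0.50
               + share_75_plus * 0.75)
print(f"Lower bound on the mean estimate: {lower_bound:.0%}")  # -> 23%
```

Even under this conservative reading, the average estimate among the surveyed engineers works out to at least roughly 23%.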
Geoffrey Hinton, winner of the Turing Award for his contributions to the deep learning revolution behind today's most powerful AIs such as GPT-4, and widely regarded as the "godfather of AI", puts the probability of an AI global catastrophe in the next 30 years at 10%. (3)
He also quit Google in May 2023 so he could speak freely about the dangers of AI. (4)
And he signed an AI risk statement that reads:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." (5)
Paul Christiano, a former senior OpenAI researcher who helped create GPT-3, thinks there is a 10-20% chance of an AI takeover in which most humans die.
And he puts the chance of doom at 50% shortly after AI becomes as smart as humans. (6)
Dario Amodei, CEO of Anthropic, one of the three major AI labs, with 7 billion dollars in funding, thinks there is a 10-25% chance of an AI global catastrophe. (7)
One of the world’s best forecasting groups, Samotsvety, estimated the chance of AI catastrophe at 32%. They defined AI catastrophe as the death of 95% of humanity. (8)
The Metaculus forecasting platform hosts a question on when Artificial General Intelligence, with intelligence on par with humans, will be developed.
In the 8 months since February 2023, the community prediction for its arrival shifted from 2041 to 2031, i.e., 8 years from now. (9)
Here are the sources: https://www.reddit.com/r/neoliberal/comments/17mgfvr/serious_question_how_seriously_should_we_take_the/
Now, draw your own conclusions.
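As one illustrative starting point (and nothing more), here is a naive average of the individual point estimates quoted above. Collapsing the interval estimates to their midpoints is my assumption, not something the sources state, and the underlying timeframes and definitions differ, so treat the output purely as illustration:

```python
# A deliberately naive aggregation of the point estimates quoted above.
# Midpoints of interval estimates are an assumption; the numbers differ
# in timeframe and in what counts as "catastrophe".

estimates = {
    "Hinton (10%, next 30 years)": 0.10,
    "Christiano (10-20%, midpoint)": 0.15,
    "Amodei (10-25%, midpoint)": 0.175,
    "Samotsvety (32%, 95% of humanity dead)": 0.32,
}

mean = sum(estimates.values()) / len(estimates)
print(f"Naive average of the quoted estimates: {mean:.1%}")  # -> 18.6%
```

Other aggregation rules (for example, a geometric mean of odds) would give somewhat different numbers.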
What does ChatGPT think of this comment?
The comment provided a summary of various expert opinions and surveys regarding the potential risks associated with AI. However, here are a few aspects that might be considered:
1. **Selection bias:** The sources primarily focus on opinions and surveys from individuals closely involved in AI development or research. This might not represent a diverse range of perspectives, potentially skewing the overall perception of risk.
2. **Varying definitions:** The definition of "catastrophe" or "global catastrophe" may differ among different experts or surveys, leading to varied interpretations and estimates.
3. **Unclear timeframes:** While some estimates mention specific timeframes, others don't specify when these catastrophic events might occur. This lack of consensus on timing makes it harder to judge the immediacy of the potential risks.
4. **Lack of counter-opinions:** The comment didn't include perspectives from experts who might hold contrasting views, potentially presenting a one-sided portrayal of the AI risk landscape.
5. **Source credibility:** Reddit, while a platform for discussion, might not always provide verifiable or peer-reviewed sources, potentially affecting the reliability of the information shared.
These points don't necessarily invalidate the concerns raised by experts but highlight potential limitations in the representation and framing of the AI risk landscape.