
Thursday Jun 12, 2025
1. The 3 most cited AI researchers of all time (Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever) are vocally concerned about the risk of human extinction from AI. One of them believes the risk of extinction is higher than 50%.
2. The CEOs of the 4 leading AI companies have all acknowledged this risk as real.

“Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”
-Sam Altman, CEO of OpenAI

“I think there's some chance that it will end humanity. I probably agree with Geoff Hinton that it's about 10% or 20% or something like that.”
-Elon Musk, CEO of xAI

“I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.”
-Dario Amodei, CEO of Anthropic

“We need to be working now, yesterday, on those problems, no matter what the probability is, because it’s definitely non-zero.”
-Demis Hassabis, CEO of Google DeepMind

3. Half of surveyed AI researchers believe there are double-digit odds of extinction: https://x.com/HumanHarlan/status/1925015874840543653