The Generalist

Existential Risk and the Future of Humanity: Lessons from AI, Pandemics, and Nuclear Threats | Toby Ord (Author of "The Precipice")

Toby Ord on the 1-in-6 odds of existential catastrophe this century, four key types of AI risk, and the policies we need now to safeguard future generations.

A quick note before today's podcast: Last Thursday, we launched Part 2 of our four-part series on Founders Fund. If you haven’t read it yet, you can catch up on Part 1 here and Part 2 here. For everyone following along, Part 3 drops this Thursday, June 26th.

In the meantime, we hope you enjoy today's podcast episode below.

YouTube

Spotify

Apple

This episode is brought to you by Brex: The banking solution for startups.


How close are we to the end of humanity? Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice, argues that the odds of an existential catastrophe this century are roughly one in six. In this wide-ranging conversation, we unpack the risks that could end humanity’s story and explore why protecting future generations may be our greatest moral duty.

We explore:

  • Why existential risk matters and what we owe the 10,000-plus generations who came before us

  • Why Toby believes we face a one-in-six chance of existential catastrophe this century

  • The four key types of AI risk: alignment failures, gradual disempowerment, AI-fueled coups, and AI-enabled weapons of mass destruction

  • Why racing dynamics between companies and nations amplify those risks, and how an AI treaty might help

  • How short-term incentives in democracies blind us to century-scale dangers, along with policy ideas to counter that short-termism

  • The lessons COVID should have taught us (but didn’t)

  • The hidden ways the nuclear threat has intensified as treaties lapse and geopolitical tensions rise

  • Concrete steps each of us can take today to steer humanity away from the brink


Explore the episode

Timestamps

(00:00) Intro

(02:20) An explanation of existential risk, and the study of it

(06:20) How Toby’s interest in global poverty sparked his founding of Giving What We Can

(11:18) Why Toby chose to study under Derek Parfit at Oxford

(14:40) Population ethics, and how Parfit’s philosophy looked ahead to future generations

(19:05) An introduction to existential risk

(22:40) Why we should care about the continued existence of humans

(28:53) How fatherhood sparked Toby’s gratitude to his parents and previous generations

(31:57) An explanation of how LLMs and agents work

(40:10) The four types of AI risk

(46:58) How humans justify bad choices: lessons from the Manhattan Project

(51:29) A breakdown of the “unilateralist’s curse” and a case for an AI treaty

(1:02:15) COVID’s impact on our understanding of pandemic risk

(1:08:51) The shortcomings of our democracies and ways to combat our short-term focus

(1:14:50) Final meditations


Follow Toby Ord

Website: https://www.tobyord.com/

LinkedIn: https://www.linkedin.com/in/tobyord

X: https://x.com/tobyordoxford?lang=en

Giving What We Can: https://www.givingwhatwecan.org/


Subscribe to the show

I’d love it if you’d subscribe and share the show. Your support makes all the difference as we try to bring more curious minds into the conversation.

YouTube

Spotify

Apple


Production and marketing by penname.co. For inquiries about sponsoring the podcast, email [email protected].
