203 episodes

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.

The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions.

FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 4.8 • 100 Ratings

    Annie Jacobsen on Nuclear War - a Second by Second Timeline

    Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

    Timestamps:
    00:00 A scenario of nuclear war
    06:56 Who would launch an attack?
    13:50 Detecting nuclear attacks
    19:37 The first critical seconds
    29:42 Decisions under time pressure
    34:27 Lessons from insiders
    44:18 Submarines
    51:06 How did we end up like this?
    59:40 Interceptor missiles
    1:11:25 Nuclear weapons and cyberattacks
    1:17:35 Concentration of power

    • 1 hr 26 min
    Katja Grace on the Largest Survey of AI Researchers

    Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

    Timestamps:
    0:20 AI Impacts surveys
    18:11 What AI will look like in 20 years
    22:43 Experts’ extinction risk predictions
    29:35 Opinions on slowing down AI development
    31:25 AI “arms races”
    34:00 AI risk areas with the most agreement
    40:41 Do “high hopes and dire concerns” go hand-in-hand?
    42:00 Intelligence explosions
    45:37 Discontinuous progress
    49:43 Impacts of AI crossing the human-level intelligence threshold
    59:39 What does AI learn from human culture?
    1:02:59 AI scaling
    1:05:04 What should we do?

    • 1 hr 8 min
    Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

    Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info

    Timestamps:
    00:00 Pausing AI
    10:23 Risks during an AI pause
    19:41 Hardware overhang
    29:04 Technological progress
    37:00 Safety research during a pause
    54:42 Social dynamics of AI risk
    1:10:00 What prevents cooperation?
    1:18:21 What about China?
    1:28:24 Protesting AGI corporations

    • 1 hr 36 min
    Sneha Revanur on the Social Effects of AI

    Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org

    Timestamps:
    00:00 Encode Justice
    06:11 AI ethics and AI safety
    15:49 Humans in the loop
    23:59 AI in social media
    30:42 Deteriorating social skills?
    36:00 AIs identifying as AIs
    43:36 AI influence in elections
    50:32 AIs interacting with human systems

    • 57 min
    Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable

    Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/

    Timestamps:
    00:00 Is AI like a Shoggoth?
    09:50 Scaling laws
    16:41 Are humans more general than AIs?
    21:54 Are AI models explainable?
    27:49 Using AI to explain AI
    32:36 Evidence for AI being uncontrollable
    40:29 AI verifiability
    46:08 Will AI be aligned by default?
    54:29 Creating human-like AI
    1:03:41 Robotics and safety
    1:09:01 Obstacles to AI in the economy
    1:18:00 AI innovation with current models
    1:23:55 AI accidents in the past and future

    • 1 hr 31 min
    Special: Flo Crivello on AI as a New Form of Life

    On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI could lead to regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.

    Timestamps:
    00:00 Technological progress
    07:59 Regulatory capture and AI
    11:53 AI as a new form of life
    15:44 Can AI development be paused?
    20:12 Biden's executive order on AI
    22:54 How would a GPU kill switch work?
    27:00 Regulating models or applications?
    32:13 AGI in 2-8 years
    42:00 China and US collaboration on AI

    • 47 min

Customer Reviews

4.8 out of 5
100 Ratings

457/26777633

Science-smart interviewer asks very good questions!

Great, in depth interviews.

VV7425795

Fantastic contribution to mankind! Thanks!

🤗👍🏻

malfoxley

Great show!

Lucas, host of the Future of Life podcast, highlights all aspects of tech and more in this can't-miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone who listens!

Top Podcasts In Technology

Lex Fridman Podcast
Lex Fridman
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC
In Her Ellement
Boston Consulting Group BCG
Acquired
Ben Gilbert and David Rosenthal
Deep Questions with Cal Newport
Cal Newport
Hard Fork
The New York Times

You Might Also Like

Dwarkesh Podcast
Dwarkesh Patel
Clearer Thinking with Spencer Greenberg
Spencer Greenberg
Machine Learning Street Talk (MLST)
Machine Learning Street Talk (MLST)
Conversations with Tyler
Mercatus Center at George Mason University
Your Undivided Attention
Tristan Harris and Aza Raskin, The Center for Humane Technology
"Upstream" with Erik Torenberg
Erik Torenberg