Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

  1. 266

    Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)

    Anthony Aguirre is the CEO of the Future of Life Institute. He joins the podcast to discuss A Better Path for AI, his essay series on steering AI away from races to replace people. The conversation covers races for attention, attachment, automation, and superintelligence, and how these can concentrate power and undermine human agency. Anthony argues for purpose-built AI tools under meaningful human control, with liability, access limits, external guardrails, and international cooperation. LINKS: A Better Path for AI | What You Can Do CHAPTERS: (00:00) Episode Preview (01:03) Attention, attachment, automation (13:58) Superintelligence power race (26:39) Escaping replacement dynamics (40:15) Pro-human tool AI (53:30) Guardrails and verification (01:03:24) Defining pro-human AI (01:10:37) Agents and accountability (01:17:28) International AI cooperation (01:25:28) Rethinking AI alignment (01:32:43) Optimism and action PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  2. 265

    How to Govern AI When You Can't Predict the Future (with Charlie Bullock)

    Charlie Bullock is a Senior Research Fellow at the Institute for Law and AI. He joins the podcast to discuss radical optionality: how governments can prepare for very advanced AI without locking in premature rules. The conversation covers why law often trails technology, and how transparency, reporting, evaluations, cybersecurity standards, and expanded technical hiring could help. We also discuss private oversight, state versus federal rules, and the risk of concentrating power in companies or government. LINKS: Radical Optionality website | Charlie Bullock CHAPTERS: (00:00) Episode Preview (01:04) The pacing problem (06:18) Defining radical optionality (11:03) Assumptions under uncertainty (16:00) Industry convenience concerns (20:41) Political will realities (26:48) Private governance limits (30:28) Government misuse risks (36:29) Balancing institutional power (42:25) Transparency and reporting (49:35) Evaluations, security, talent (58:26) State law preemption (01:04:20) Historical nuclear analogies PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  3. 264

    Why AI Is Not a Normal Technology (with Peter Wildeford)

    Peter Wildeford is Head of Policy at the AI Policy Network, and a top AI forecaster. He joins the podcast to discuss how to forecast AI progress and what current trends imply for the economy and national security. Peter argues AI is neither a bubble nor a normal technology, and we examine benchmark trends, adoption lags, unemployment and productivity effects, and the rise of cyber capabilities. We also cover robotics, export controls, prediction markets, and when AI may surpass human forecasters. LINKS: Peter Wildeford Blog CHAPTERS: (00:00) Episode Preview (01:12) AI bubble debate (06:25) Normal technology question (15:31) Mythos security implications (30:47) Robotics and labor (40:27) Social economic response (48:57) Forecasting methodology (59:49) AGI policy timelines (01:11:13) Forecasting with AI PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  4. 263

    Why AI Evaluation Science Can't Keep Up (with Carina Prunkl)

    Carina Prunkl is a researcher at Inria. She joins the podcast to discuss how to assess the capabilities and risks of general-purpose AI. We examine why systems can solve hard coding and math problems yet still fail at simple tasks, why pre-deployment tests often miss real-world behavior, and how faster capability gains can increase misuse risks. The conversation also covers de-skilling, red teaming, layered safeguards, and warning signs that AIs might undermine oversight. LINKS: Carina Prunkl personal website CHAPTERS: (00:00) Episode Preview (01:04) Introducing the report (02:10) Jagged frontier capabilities (05:29) Formal reasoning progress (12:36) Risks and evaluation science (19:00) Funding evaluation capacity (24:03) Autonomy and de-skilling (31:32) Authenticity and AI companions (41:00) Defense in depth methods (48:34) Loss of control risks (53:16) Where to read report PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  5. 262

    Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)

    Li-Lian Ang is a team member at Blue Dot Impact. She joins the podcast to discuss how society can build a workforce to protect humanity from AI risks. The conversation covers engineered pandemics, AI-enabled cyber attacks, job loss and disempowerment, and power concentration in firms or AI systems. We also examine Blue Dot's defense-in-depth framework and how individuals can navigate rapid, uncertain AI progress. LINKS: Li-Lian Ang personal site | Blue Dot Impact organization site CHAPTERS: (00:00) Episode Preview (00:48) Blue dot beginnings (03:04) Evolving AI risk concerns (06:20) AI agents in cyber (15:52) Gradual disempowerment and jobs (23:26) Aligning AI with humans (29:08) Power concentration and misuse (34:52) Influencing frontier AI labs (43:05) Uncertain timelines and strategy (50:18) Writing, AI, and action PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  6. 261

    What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)

    Emilia Javorsky is a physician-scientist and Director of the Futures Program at the Future of Life Institute. She joins the podcast to discuss her newly published essay on AI and cancer. She challenges tech claims that superintelligence will cure cancer, explaining why biology's complexity, poor data, and misaligned incentives are bigger bottlenecks than raw intelligence. The conversation covers realistic roles for AI in drug discovery, clinical trials, and cutting unnecessary medical bureaucracy. You can read the full essay at: curecancer.ai CHAPTERS: (00:00) Episode Preview (01:10) Introduction and essay motivation (06:30) Intelligence vs data bottlenecks (19:03) Cancer's complexity and heterogeneity (29:05) Measurement, health, and homeostasis (41:41) AI in drug development (50:13) Regulation, FDA, and innovation (01:02:58) Practical paths toward cures PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  7. 260

    AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky)

    Tech executives have promised that AI will cure cancer. The reality is more complicated, and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next. You can read the full essay at: curecancer.ai CHAPTERS: (00:00) Essay Preview (00:54) How AI Can, and Can't, Cure Cancer (17:05) Reckoning with Past Failures (35:23) Misguiding Myths and Errors (59:15) AI Solutions Derive from First Principles or Data (01:31:31) Systemic Bottlenecks & Misalignments (02:08:46) Conclusion (02:14:35) The Roadmap Forward PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  8. 259

    How AI Hacks Your Brain's Attachment System (with Zak Stein)

    Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity. LINKS: AI Psychological Harms Research Coalition | Zak Stein official website CHAPTERS: (00:00) Episode Preview (00:56) Education to existential risk (03:03) Lessons from social media (08:41) Attachment systems and AI (18:42) AI companions and attachment (27:23) Anthropomorphism and user disempowerment (36:06) Cognitive atrophy and tools (45:54) Children, toys, and attachment (57:38) AI psychosis and selfhood (01:10:31) Cognitive security and parenting (01:26:15) Education, collapse, and speciation (01:36:40) Preserving humanity and values PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  9. 258

    The Case for a Global Ban on Superintelligence (with Andrea Miotti)

    Andrea Miotti is the founder and CEO of Control AI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take. LINKS: Control AI | Control AI global action page | ControlAI's lawmaker contact tools | Open roles at ControlAI | ControlAI's theory of change CHAPTERS: (00:00) Episode Preview (00:52) Extinction risk and lobbying (08:59) Progress toward superintelligence (16:26) Building political awareness (24:27) Global regulation strategy (33:06) Race dynamics and public (42:36) Vision and key safeguards (51:18) Recursive self-improvement controls (58:13) Power concentration and action PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  10. 257

    Can AI Do Our Alignment Homework? (with Ryan Kidd)

    Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: https://matsprogram.org CHAPTERS: (00:00) Episode Preview (00:20) Introductions and AGI timelines (10:13) Deception, values, and control (23:20) Dual use and alignment (32:22) Frontier labs and governance (44:12) MATS tracks and mentors (58:14) Talent archetypes and demand (01:12:30) Applicant profiles and selection (01:20:04) Applications, breadth, and growth (01:29:44) Careers, resources, and ideas (01:45:49) Final thanks and wrap PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  11. 256

    How to Rebuild the Social Contract After AGI (with Deric Cheng)

    Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work. LINKS: Deric Cheng personal website | AGI Social Contract project site | Guiding society through the AI economic transition CHAPTERS: (00:00) Episode Preview (01:01) Introducing Deric and AGI (04:09) Automation, power, and inequality (08:55) Inequality, unrest, and time (13:46) Bridging futurists and economists (20:35) Future of work scenarios (27:22) Jobs resisting AI automation (36:57) Luxury, land, and inequality (43:32) Designing and testing solutions (51:23) Taxation in an AI economy (59:10) Envisioning a post-AGI society PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  12. 255

    How AI Can Help Humanity Reason Better (with Oly Sourbut)

    Oly Sourbut is a researcher at the Future of Life Foundation. He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how we can keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking. LINKS: FLF organization site | Oly Sourbut personal site CHAPTERS: (00:00) Episode Preview (01:03) FLF and human reasoning (08:21) Agents and epistemic virtues (22:16) Human use and atrophy (35:41) Abstraction and legible AI (47:03) Demand, trust and Wikipedia (57:21) Map of human reasoning (01:04:30) Negotiation, institutions and vision (01:15:42) How to get involved PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  13. 254

    How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)

    Nora Ammann is a technical specialist at the Advanced Research and Invention Agency (ARIA) in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees, and secure code could support AI-enabled R&D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability. LINKS: Nora Ammann site | ARIA safeguarded AI program page | AI Resilience official site | Gradual Disempowerment website CHAPTERS: (00:00) Episode Preview (01:00) Slow takeoff expectations (08:13) Domination versus chaos (17:18) Human-AI coalitions vision (28:14) Scaling oversight and agents (38:45) Formal specs and guarantees (51:10) Resilience in AI era (01:02:21) Defense-favored cyber systems (01:10:37) AI-enabled bargaining and trade PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  14. 253

    How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)

    David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require. LINKS: David Duvenaud academic homepage | Gradual Disempowerment | The Post-AGI Workshop | Post-AGI Studies Discord CHAPTERS: (00:00) Episode Preview (01:05) Introducing gradual disempowerment (06:06) Obsolete labor and UBI (14:29) Property, power, and control (23:38) Culture shifts toward AIs (34:34) States misalign without people (44:15) Competition and preservation tradeoffs (53:03) Building post-AGI studies (01:02:29) Forecasting and coordination tools (01:10:26) Human values and futures PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  15. 252

    Why the AI Race Undermines Safety (with Steven Adler)

    Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models. LINKS: Steven Adler's Substack: https://stevenadler.substack.com CHAPTERS: (00:00) Episode Preview (01:00) Race Dynamics And Safety (18:03) Chatbots And Mental Health (30:42) Models Outsmart Safety Tests (41:01) AI Swarms And Work (54:21) Human Bottlenecks And Oversight (01:06:23) Animals And Superintelligence (01:19:24) Safety Capabilities And Governance PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  16. 251

    Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)

    Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies. LINKS: The Midas Project Website | Tyler Johnston's LinkedIn Profile CHAPTERS: (00:00) Episode Preview (01:06) Introducing the Midas Project (05:01) Shining a Light on AI (08:36) Industry Lockdown and Transparency (13:45) The OpenAI Files (20:55) Subpoenaed by OpenAI (29:10) Responding to the Subpoena (37:41) The Case for Transparency (44:30) Pricing Risk and Regulation (52:15) Measuring Transparency and Auditing (57:50) Hope for the Future PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  17. 250

    We're Not Ready for AGI (with Will MacAskill)

    William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning. LINKS: - Better Futures Research Series: https://www.forethought.org/research/better-futures - William MacAskill Forethought Profile: https://www.forethought.org/people/william-macaskill CHAPTERS: (00:00) Episode Preview (01:03) Improving The Future's Quality (09:58) Moral Errors and AI Rights (18:24) AI's Impact on Thinking (27:17) Utopias and Population Ethics (36:41) The Danger of Moral Lock-in (44:38) Deals with Misaligned AI (57:25) AI and Moral Trade (01:08:21) Improving AI Ethical Reasoning (01:16:05) The Risk of Path Dependence (01:27:41) Avoiding Future Lock-in (01:36:22) The Urgency of Space Governance (01:46:19) A Future Research Agenda (01:57:36) Is Intelligence a Good Bet? PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  18. 249

    What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)

    Karl Koch is the founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and challenges of maintaining transparency as AI development accelerates. LINKS: About the AI Whistleblower Initiative | Karl Koch PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (00:55) Starting the Whistleblower Initiative (05:43) Current State of Protections (13:04) Path to Optimal Policies (23:28) A Whistleblower's First Steps (32:29) Life After Whistleblowing (39:24) Evaluating Company Policies (48:19) Alternatives to Whistleblowing (55:24) High-Stakes Future Scenarios (01:02:27) AI and National Security SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP DISCLAIMERS: - AIWI does not request, encourage or counsel potential whistleblowers or listeners of this podcast to act unlawfully. - This is not legal advice and if you, the listener, find yourself needing legal counsel, please visit https://aiwi.org/contact-hub/ for detailed profiles of the world's leading whistleblower support organizations.

  19. 248

    Can Machines Be Truly Creative? (with Maya Ackerman)

    Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities. LINKS: - Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman - Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/ PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (01:00) Defining Human Creativity (02:58) Machine and AI Creativity (06:25) Measuring Subjective Creativity (10:07) Creativity in Animals (13:43) Alignment Damages Creativity (19:09) Creativity is Hallucination (26:13) Humble Creative Machines (30:50) Incentives and Replacement (40:36) Analogies for the Future (43:57) Collaborating with AI (52:20) Reinforcement Learning & Slop (55:59) AI in Education SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  20. 247

    From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)

    Parmy Olson is a technology columnist at Bloomberg and the author of Supremacy, which won the 2024 Financial Times Business Book of the Year. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and power consolidation in the industry. LINKS: - Parmy Olson on X (Twitter): https://x.com/parmy - Parmy Olson's Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson - Supremacy (book): https://www.panmacmillan.com/authors/parmy-olson/supremacy/9781035038244 PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (01:18) Introducing Parmy Olson (02:37) Personalities Driving AI (06:45) From Research to Products (12:45) Has the Mission Changed? (19:43) The Role of Regulators (21:44) Skepticism of AI Utopia (28:00) The Human Cost (33:48) Embracing Controversy (40:51) The Role of Journalism (41:40) Big Tech's Influence SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  21. 246

    Can Defense in Depth Work for AI? (with Adam Gleave)

    Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he joins to discuss post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI's vertically integrated approach spanning technical research, policy advocacy, and field-building. LINKS: Adam Gleave - https://www.gleave.me | FAR.AI - https://www.far.ai | The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) A Positive Post-AGI Vision (10:07) Surviving Gradual Disempowerment (16:34) Defining Powerful AIs (27:02) Solving Continual Learning (35:49) The Just-in-Time Safety Problem (42:14) Can Defense-in-Depth Work? (49:18) Fixing Alignment Problems (58:03) Safer Training Formulas (01:02:24) The Role of Interpretability (01:09:25) FAR.AI's Vertically Integrated Approach (01:14:14) Hiring at FAR.AI (01:16:02) The Future of Governance SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  22. 245

    How We Keep Humans in Control of AI (with Beatrice Erkers)

    Beatrice Erkers works at the Foresight Institute, running their Existential Hope program. She joins the podcast to discuss the AI Pathways project, which explores two alternative scenarios to the default race toward AGI. We examine tool AI, which prioritizes human oversight and democratic control, and d/acc, which emphasizes decentralized, defensive development. The conversation covers trade-offs between safety and speed, how these pathways could be combined, and what different stakeholders can do to steer toward more positive AI futures. LINKS: AI Pathways - https://ai-pathways.existentialhope.com | Beatrice Erkers - https://www.existentialhope.com/team/beatrice-erkers CHAPTERS: (00:00) Episode Preview (01:10) Introduction and Background (05:40) AI Pathways Project (11:10) Defining Tool AI (17:40) Tool AI Benefits (23:10) D/acc Pathway Explained (29:10) Decentralization Trade-offs (35:10) Combining Both Pathways (40:10) Uncertainties and Concerns (45:10) Future Evolution (01:01:21) Funding Pilots PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  23. 244

    Why Building Superintelligence Means Human Extinction (with Nate Soares)

    Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence. LINKS: If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com | Machine Intelligence Research Institute - https://intelligence.org | Nate Soares - https://intelligence.org/team/nate-soares/ PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (01:05) Introduction and Book Discussion (03:34) Psychology of AI Alarmism (07:52) Intelligence Threshold Effects (11:38) Growing vs Crafting AI (18:23) Illusion of AI Control (26:45) Why Iteration Won't Work (34:35) The No Retries Problem (38:22) Computer Security Lessons (49:13) The Cursed Problem (59:32) Multiple Curses and Complications (01:09:44) AI's Infrastructure Advantage (01:16:26) Grading Humanity's Response (01:22:55) Time Needed for Solutions (01:32:07) International Ban Necessity SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  24. 243

    Breaking the Intelligence Curse (with Luke Drago)

    Luke Drago is the co-founder of Workshop Labs and co-author of the essay series "The Intelligence Curse". The essay series explores what happens if AI becomes the dominant factor of production, thereby reducing incentives to invest in people. We explore pyramid replacement in firms, economic warning signs to monitor, automation barriers like tacit knowledge, privacy risks in AI training, and tensions between centralized AI safety and democratization. Luke discusses Workshop Labs' privacy-preserving approach and advises taking career risks during this technological transition. "The Intelligence Curse" essay series by Luke Drago & Rudolf Laine: https://intelligence-curse.ai/ Luke's Substack: https://lukedrago.substack.com/ Workshop Labs: https://workshoplabs.ai/ CHAPTERS: (00:00) Episode Preview (00:55) Intelligence Curse Introduction (02:55) AI vs Historical Technology (07:22) Economic Metrics and Indicators (11:23) Pyramid Replacement Theory (17:28) Human Judgment and Taste (22:25) Data Privacy and Control (28:55) Dystopian Economic Scenario (35:04) Resource Curse Lessons (39:57) Culture vs Economic Forces (47:15) Open Source AI Debate (54:37) Corporate Mission Evolution (59:07) AI Alignment and Loyalty (01:05:56) Moonshots and Career Advice

  25. 242

    What Markets Tell Us About AI Timelines (with Basil Halperin)

    Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress. Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf Read more about Basil's work here: https://basilhalperin.com/ CHAPTERS: (00:00) Episode Preview (00:49) Introduction and Background (05:19) Efficient Market Hypothesis Explained (10:34) Markets and Low Probability Events (16:09) Information Diffusion on Wall Street (24:34) Stock Prices vs Interest Rates (28:47) New Goods Counter-Argument (40:41) Why Focus on Interest Rates (45:00) AI Secrecy and Market Efficiency (50:52) Short Timeline Disagreements (55:13) Wealth Concentration Effects (01:01:55) Alternative Economic Indicators (01:12:47) Benchmarks vs Economic Impact (01:25:17) Open Research Questions SOCIAL LINKS: Website: https://future-of-life-institute-podcast.aipodcast.ing Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple Podcasts: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP PRODUCED BY: https://aipodcast.ing

  26. 241

    AGI Security: How We Defend the Future (with Esben Kran)

    Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments.   Learn more about Esben's work at: https://blog.kran.ai  00:00 – Intro and preview 01:13 – AGI security vs traditional cybersecurity 02:36 – Rebuilding societal infrastructure for embedded security 03:33 – Sentware: adaptive, self-improving malware 04:59 – New attack surfaces 05:38 – Social media as misaligned AI 06:46 – Personal vs societal defenses 09:13 – Why private companies underinvest in security 13:01 – Security as the foundation for any AI deployment 14:15 – Oversight without a surveillance state 17:19 – Protocols for safe agent communication 20:25 – The expensive internet hypothesis 23:30 – Distributed safety for companies and governments 28:20 – Cloudflare’s “agent labyrinth” example 31:08 – Positive vision for distributed security 33:49 – Human value when labor is automated 41:19 – Encoding law for machines: contracts and enforcement 44:36 – DarkBench: detecting manipulative LLM behavior 55:22 – The AGI endgame: default path vs designed future 57:37 – Powerful tool AI 01:09:55 – Fast takeoff risk 01:16:09 – Realistic optimism

  27. 240

    Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)

    Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene.  Follow Benjamin's work at: https://benjamintodd.substack.com  Timestamps: 00:00 What are reasoning models?  04:04 Reinforcement learning supercharges reasoning 05:06 Reasoning models vs. agents 10:04 Economic impact of automated math/code 12:14 Compute as a bottleneck 15:20 Shift from giant pre-training to post-training/agents 17:02 Three feedback loops: algorithms, chips, robots 20:33 How fast could an algorithmic loop run? 22:03 Chip design and production acceleration 23:42 Industrial/robotics loop and growth dynamics 29:52 Society’s slow reaction; “warning shots” 33:03 Robotics: software and hardware bottlenecks 35:05 Scaling robot production 38:12 Robots at ~$0.20/hour?  43:13 Regulation and humans-in-the-loop 49:06 Personal prep: why it still matters 52:04 Build an information network 55:01 Save more money 58:58 Land, real estate, and scarcity in an AI world 01:02:15 Valuable skills: get close to AI, or far from it 01:06:49 Fame, relationships, citizenship 01:10:01 Redistribution, welfare, and politics under AI 01:12:04 Try to become more resilient  01:14:36 Information hygiene 01:22:16 Seven-year horizon and scaling limits by ~2030

  28. 239

    From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)

    On this episode, Calum Chace joins me to discuss the transformative impact of AI on employment, comparing the current wave of cognitive automation to historical technological revolutions. We talk about "universal generous income", fully-automated luxury capitalism, and redefining education with AI tutors. We end by examining verification of artificial agents and the ethics of attributing consciousness to machines.  Learn more about Calum's work here: https://calumchace.com  Timestamps:  00:00:00  Preview and intro 00:03:02  Past tech revolutions and AI-driven unemployment 00:05:43  Cognitive automation: from secretaries to every job 00:08:02  The “peak horse” analogy and avoiding human obsolescence 00:10:55  Infinite demand and lump of labor 00:18:30  Fully-automated luxury capitalism 00:23:31  Abundance economy and a potential employment cliff 00:29:37  Education reimagined with personalized AI tutors 00:36:22  Real-world uses of LLMs: memory, drafting, emotional insight 00:42:56  Meaning beyond jobs: aristocrats, retirees, and kids 00:49:51  Four futures of superintelligence 00:57:20  Conscious AI and empathy as a safety strategy 01:10:55  Verifying AI agents 01:25:20  Over-attributing vs under-attributing machine consciousness

  29. 238

    How AI Could Help Overthrow Governments (with Tom Davidson)

    On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats.  Learn more about Tom's work here: https://www.forethought.org  Timestamps:  00:00:00  Preview: why preventing AI-enabled coups matters 00:01:24  What do we mean by an “AI-enabled coup”? 00:01:59  Capabilities AIs would need (persuasion, strategy, productivity) 00:02:36  Cyber-offense and the road to robotized militaries 00:05:32  Step-by-step example of an AI-enabled military coup 00:08:35  How AI-enabled coups would differ from historical coups 00:09:24  Democratic backsliding (Venezuela, Hungary, U.S. parallels) 00:12:38  Singular loyalties, secret loyalties, exclusive access 00:14:01  Secret-loyalty scenario: CEO with hidden control 00:18:10  From sleeper agents to sophisticated covert AIs 00:22:22  Exclusive-access threat: one project races ahead 00:29:03  Could one country outgrow the rest of the world? 00:40:00  Could a single company dominate global GDP? 00:47:01  Autocracies vs democracies 00:54:43  Mitigations for singular and secret loyalties 01:06:25  Guardrails, monitoring, and controlled-use APIs 01:12:38  Using AI itself to preserve checks-and-balances 01:24:53  Risk indicators to watch for AI-enabled coups 01:33:05  Tom’s risk estimates for the next 5 and 30 years 01:46:50  How you can help – research, policy, and careers

  30. 237

    What Happens After Superintelligence? (with Anders Sandberg)

    Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization. We conclude with Sandberg explaining the difficulties of designing reliable AI systems amidst rapid change and coordination risks.  Learn more about Anders's work here: https://mimircenter.org/anders-sandberg  Timestamps:  00:00:00 Preview and intro 00:04:20 2030 superintelligence scenario 00:11:55 Status, post-scarcity, and reshaping human psychology 00:16:00 Physical limits: energy, datacenter, and waste-heat bottlenecks 00:23:48 Technosphere vs biosphere 00:28:42 Culture and physics as long-run drivers of civilization 00:40:38 How superintelligence could upend markets and governments 00:50:01 State inertia: why governments lag behind companies 00:59:06 Value lock-in, censorship, and model alignment 01:08:32 Emergent AI ecosystems and coordination-failure risks 01:19:34 Predictability vs reliability: designing safe systems 01:30:32 Crossing the reliability threshold 01:38:25 Personal reflections on accelerating change

  31. 236

    Why the AI Race Ends in Disaster (with Daniel Kokotajlo)

    On this episode, Daniel Kokotajlo joins me to discuss why artificial intelligence may surpass the transformative power of the Industrial Revolution, and just how much AI could accelerate AI research. We explore the implications of automated coding, the critical need for transparency in AI development, the prospect of AI-to-AI communication, and whether AI is an inherently risky technology. We end by discussing iterative forecasting and its role in anticipating AI's future trajectory.  You can learn more about Daniel's work at: https://ai-2027.com and https://ai-futures.org  Timestamps:  00:00:00 Preview and intro 00:00:50 Why AI will eclipse the Industrial Revolution  00:09:48 How much can AI speed up AI research?  00:16:13 Automated coding and diffusion 00:27:37 Transparency in AI development  00:34:52 Deploying AI internally  00:40:24 Communication between AIs  00:49:23 Is AI inherently risky? 00:59:54 Iterative forecasting

  32. 235

    Preparing for an AI Economy (with Daniel Susskind)

    On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI’s economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education.  You can learn more about Daniel's work here: https://www.danielsusskind.com  Timestamps:  00:00:00 Preview and intro  00:03:19 AI researchers versus economists  00:10:39 Measuring AI's economic effects  00:16:19 Can AI be steered in positive directions?  00:22:10 Human values and economic outcomes 00:28:21 What will remain for people to do?  00:44:58 Commercial incentives in AI 00:50:38 Will education move towards general skills? 00:58:46 Lessons for parents

  33. 234

    Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)

    Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed’s decision to resign from Stability AI, the industry’s attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world.  Learn more about Ed's work here: https://ed.newtonrex.com  Timestamps:  00:00:00 Preview and intro  00:04:18 AI-generated music  00:12:15 Resigning from Stability AI  00:16:20 AI industry attitudes towards rights 00:26:22 Fairly Trained  00:37:16 Special kinds of training data  00:50:42 The longer-term future of AI  00:56:09 Will AI improve living standards?  01:03:10 AI versions of artists  01:13:28 Authenticity and art  01:18:45 Competitive pressures in AI 01:24:06 Priorities going forward

  34. 233

    AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)

    On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI's development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies' vague AGI plans. We also discuss the human psychology of AI, including the feelings of living in the "fast world" versus the "slow world", and navigating long-term projects given short timelines.  Timestamps:  00:00:00 Preview and intro 00:00:46 What do benchmarks measure?  00:08:08 Will AI develop like other tech?  00:14:13 Which tasks can AIs do? 00:23:00 Capability profiles of AIs  00:34:04 Timelines and social effects 00:42:01 Alignment by default?  00:50:36 Can vague AGI plans be useful? 00:54:36 The fast world and the slow world 01:08:02 Long-term projects and short timelines

  35. 232

    Could Powerful AI Break Our Fragile World? (with Michael Nielsen)

    On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents and tools, how power is latent in the world, implications of widespread powerful hardware, and finally touch upon the philosophical perspectives of deep atheism and optimistic cosmism.Timestamps:  00:00:00 Preview and intro 00:01:05 Understanding is dual-use  00:05:17 Can we handle AI like other tech?  00:12:08 Can institutions adapt to AI?  00:16:50 Recognizing signs of dangerous AI 00:22:45 Agents versus tools 00:25:43 Power is latent in the world 00:35:45 Widespread powerful hardware 00:42:09 Governance mechanisms for AI 00:53:55 Deep atheism and optimistic cosmism

  36. 231

    Facing Superintelligence (with Ben Goertzel)

    On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.   Timestamps:  00:00:00 Preview and intro  00:01:59 Thinking about AGI in the 1970s  00:07:28 What's different about this AI boom?  00:16:10 Former taboos about AGI 00:19:53 AI research worth revisiting  00:35:53 Will the first AGI be simple?  00:48:49 Is alignment achievable?  01:02:40 Benchmarks and economic impact  01:15:23 Bottlenecks to superintelligence 01:23:09 What should we do?

  37. 230

    Will Future AIs Be Conscious? (with Jeff Sebo)

    On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively.  You can follow Jeff’s work here: https://jeffsebo.net/  Timestamps:  00:00:00 Preview and intro 00:02:56 Imagining artificial consciousness  00:07:51 Substrate-independence? 00:11:26 Are we making progress?  00:18:03 Intuitions about explanations  00:24:43 AI risk and AI consciousness  00:40:01 Consciousness and cognitive complexity  00:51:20 Intuition versus intellect 00:58:48 AIs as companions  01:05:24 AI rights  01:13:00 Acting under time pressure 01:20:16 Measuring consciousness  01:32:11 How can you help?

  38. 229

    Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)

    On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity’s uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI’s growing influence in financial trading.  You can follow Zvi's excellent blog here: https://thezvi.substack.com  Timestamps:  00:00:00 Preview and introduction  00:02:01 Sycophantic AIs  00:07:28 Bottlenecks for AI agents  00:21:26 Are benchmarks useful?  00:32:39 AI agent time horizons  00:44:18 Impact of automating research 00:53:00 Limits to scaling inference compute  01:02:51 Will the future go well for humanity?  01:12:22 A good plan for safe AI  01:26:03 What makes AI different?  01:31:29 AI in trading

  39. 228

    Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)

    On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI’s global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps.  You can learn more about Jeffrey’s work at: https://jeffreyjding.github.io  Timestamps:  00:00:00 Preview and introduction  00:01:36 A US-China AI arms race?  00:10:58 Attitudes to AI safety in China  00:17:53 Diffusion of AI  00:25:13 Innovation without diffusion  00:34:29 AI development concentration  00:41:40 Learning from the history of technology  00:47:48 Translating Chinese AI writings  00:55:36 Automating translation of AI writings

  40. 227

    How Will We Cooperate with AIs? (with Allison Duettmann)

    On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children. You can learn more about Allison's work at: https://foresight.org  Timestamps:  00:00:00 Preview 00:01:07 Centralized AI versus decentralized AI  00:13:02 Risks from decentralized AI  00:25:39 International AI governance  00:39:52 Cooperation with future AIs  00:53:51 AI for decision-making  01:05:58 Capital intensity of AI 01:09:11 Lessons from history  01:15:50 Future space law and property rights  01:27:28 Is technology invented or discovered?  01:32:34 Children in the age of AI

  41. 226

    Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)

    On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.  You can learn more about Steven's work at: https://sjbyrnes.com/agi.html  Timestamps:  00:00 Preview  00:54 Brain-like AGI Safety 13:16 Controlled AGI versus Social-instinct AGI  19:12 Learning from the brain  28:36 Why is brain-like AI the most likely path to AGI?  39:23 Honesty in AI models  44:02 How to help with brain-like AGI safety  53:36 AI traits with both positive and negative effects  01:02:44 Different AI safety strategies

  42. 225

    How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)

    On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.  You can learn more about Ege's work at https://epoch.ai  Timestamps:  00:00:00 – Preview and introduction 00:02:59 – Compute scaling and automation - GATE model 00:13:12 – Evolution, Brain Efficiency, and AGI Compute Requirements 00:29:49 – Broad Automation vs. R&D-Focused AI Deployment 00:47:19 – AI, Wages, and Labor Market Transitions 00:59:54 – Training Agentic Models and Long-Term Planning Capabilities 01:06:56 – Moravec’s Paradox and Automation of Human Skills 01:13:59 – Which Jobs Are Most Vulnerable to AI? 01:33:00 – Timeline Extremes: What Could Change AI Forecasts?

  43. 224

    Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

    In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.  00:00 Nicholas Carlini's contributions to cybersecurity 08:19 Understanding attack strategies 29:39 High-dimensional spaces and attack intuitions 51:00 Challenges in open-source model safety 01:00:11 Unlearning and fact editing in models 01:10:55 Adversarial examples and human robustness 01:37:03 Cryptography and AI robustness 01:55:51 Scaling AI security research

  44. 223

    Keep the Future Human (with Anthony Aguirre)

    On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai   AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI...  Timestamps:  00:00 What situation is humanity in? 05:00 Why AI progress is fast  09:56 Tool AI instead of AGI 15:56 The incentives of AI companies  19:13 Governments can coordinate a slowdown 25:20 The need for international coordination  31:59 Monitoring training runs  39:10 Do reasoning models undermine compute governance?  49:09 Why isn't alignment enough?  59:42 How do we decide if we want AGI?  01:02:18 Disagreement about AI  01:11:12 The early days of AI risk

  45. 222

    We Created AI. Why Don't We Understand It? (with Samir Varma)

    On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness.  You can find out more about Samir's work here: https://samirvarma.com   Timestamps:  00:00 AIs with free will? 08:00 Can we predict AI behavior?  11:38 AI psychology 16:24 Which concepts will AIs use?  20:19 Will we collaborate with AIs?  26:16 Will we trade with AIs?  31:40 Training data for robots  34:00 AI in finance  39:55 How much of trading is automated?  49:00 AI in biology and complex systems 59:31 Will our skills atrophy?  01:02:55 Levels of scientific explanation  01:06:12 AIs with emotions and consciousness?  01:12:12 Why can't we predict recessions?

  46. 221

    Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)

    On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us.   We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here:   https://palisaderesearch.org/blog/specification-gaming  Timestamps:  00:00 The pace of AI progress  04:15 How we might lose control  07:23 Why are AIs sometimes dumb?  12:52 Benchmarks vs real world  19:11 Loss of control scenarios 26:36 Why would AI turn against us?  30:35 AIs hacking chess  36:25 Why didn't more advanced AIs hack?  41:39 Creating honest AIs  49:44 AI attackers vs AI defenders  58:27 How good is security at AI companies?  01:03:37 A sense of urgency 01:10:11 What should we do?  01:15:54 Skepticism about AI progress

  47. 220

Ann Pace on Using Biobanking and Genomic Sequencing to Conserve Biodiversity

    Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.   You can learn more about Ann's work here:   https://www.wiseancestors.org   Timestamps:  00:00 What is Wise Ancestors?  04:27 Recovering after catastrophes 11:40 Decentralized science  18:28 Upfront benefit-sharing  26:30 Local communities  32:44 Recreating optimal environments  38:57 Cross-cultural collaboration

  48. 219

    Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective

Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives surrounding transhumanism, the value of cultural diversity in attitudes toward technology, and how Christian communities engage with advanced AI.   You can learn more about Michael's work here:   https://catholic.tech/academics/faculty/michael-baggot  Timestamps:  00:00 Meta-narratives and transhumanism  15:28 Advanced AI and religious communities  27:22 Superintelligence  38:31 Countercultures and technology  52:38 Christian perspectives and tradition 01:05:20 God-like artificial intelligence  01:13:15 A positive vision for AI

  49. 218

    David Dalrymple on Safeguarded, Transformative AI

    David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.  You can learn more about David's work at ARIA here:   https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/   Timestamps:  00:00 What is Safeguarded AI?  16:28 Implementing Safeguarded AI 22:58 Can we trust Safeguarded AIs?  31:00 Formalizing more of the world  37:34 The performance cost of verified AI  47:58 Changing attitudes towards AI  52:39 Flexible‬‭ Hardware-Enabled‬‭ Guarantees 01:24:15 Mind uploading  01:36:14 Lessons from David's early life

  50. 217

    Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters

Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com  Timestamps: 00:00 What is GiveDirectly? 15:04 AI for targeting cash transfers 29:39 AI for predicting natural disasters 46:04 How scalable is GiveDirectly's AI approach? 58:10 Decentralized vs. centralized data collection 01:04:30 Dream scenario for GiveDirectly



HOSTED BY

Future of Life Institute

Produced by Gus Docker
