EPISODE · May 7, 2026 · 1H 7M
How to Govern AI When You Can't Predict the Future (with Charlie Bullock)
from Future of Life Institute Podcast · host Future of Life Institute
Charlie Bullock is a Senior Research Fellow at the Institute for Law and AI. He joins the podcast to discuss radical optionality: how governments can prepare for very advanced AI without locking in premature rules. The conversation covers why law often trails technology, and how transparency, reporting, evaluations, cybersecurity standards, and expanded technical hiring could help. We also discuss private oversight, state versus federal rules, and the risk of concentrating power in companies or government.

LINKS:
Radical Optionality website
Charlie Bullock

CHAPTERS:
(00:00) Episode Preview
(01:04) The pacing problem
(06:18) Defining radical optionality
(11:03) Assumptions under uncertainty
(16:00) Industry convenience concerns
(20:41) Political will realities
(26:48) Private governance limits
(30:28) Government misuse risks
(36:29) Balancing institutional power
(42:25) Transparency and reporting
(49:35) Evaluations, security, talent
(58:26) State law preemption
(01:04:20) Historical nuclear analogies

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP