EPISODE · Mar 4, 2024 · 54 MIN
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)
from Into AI Safety · host Jacob Haimes
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Together with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.

Timestamps:
00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood webinar
22:02 - Regulations.gov comments
23:48 - Open letters
26:15 - EU AI Act
35:07 - Effective accelerationism
40:50 - Divide and conquer dynamics
45:40 - AI "art"
53:09 - Outro

Links to all articles/papers mentioned throughout the episode can be found below, in order of their appearance:
- StakeOut.AI
- AI Governance Scorecard (go to Pg. 3)
- Pause AI
- Regulations.gov
- USCO StakeOut.AI Comment
- OMB StakeOut.AI Comment
- AI Treaty open letter
- TAISC
- Alpaca: A Strong, Replicable Instruction-Following Model
- References on EU AI Act and Cedric O:
  - Tweet from Cedric O
  - EU policymakers enter the last mile for Artificial Intelligence rulebook
  - AI Act: EU Parliament's legal office gives damning opinion on high-risk classification 'filters'
  - EU's AI Act negotiations hit the brakes over foundation models
  - The EU AI Act needs Foundation Model Regulation
  - BigTech's Efforts to Derail the AI Act
- Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
- Divide-and-Conquer Dynamics in AI-Driven Disempowerment