EPISODE · Jul 10, 2024 · 1H 25M
Episode #36 “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast
From For Humanity: An AI Risk Podcast · hosted by The AI Risk Network · AI Safety
In Episode #36, host John Sherman talks with Jeremie and Edouard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any form. These are two very important people doing incredibly important work. The full interview lasts more than two hours and has been broken into two shows; this is the second of the two.

Gladstone AI Action Plan
https://www.gladstone.ai/action-plan

TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!!
https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. It is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as two years. This podcast is solely about the threat of human extinction from AGI.
We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:
**The whistleblowers' concerns (00:00:00)**
**Introduction to the podcast (00:01:09)**
**The urgency of addressing AI risk (00:02:18)**
**The potential consequences of falling behind in AI (00:04:36)**
**Transitioning to working on AI risk (00:06:33)**
**Engagement with the State Department (00:08:07)**
**Project assessment and public visibility (00:10:10)**
**Motivation for taking on the detective work (00:13:16)**
**Alignment with the government's safety culture (00:17:03)**
**Potential government oversight of AI labs (00:20:50)**
**The whistleblowers' concerns (00:21:52)**
**Shifting control to the government (00:22:47)**
**Elite group within the government (00:24:12)**
**Government competence and allocation of resources (00:25:34)**
**Political level and tech expertise (00:27:58)**
**Challenges in government engagement (00:29:41)**
**State Department's engagement and assessment (00:31:33)**
**Recognition of government competence (00:34:36)**
**Engagement with frontier labs (00:35:04)**
**Whistleblower insights and concerns (00:37:33)**
**Whistleblower motivations (00:41:58)**
**Engagements with AI labs (00:42:54)**
**Emotional impact of the work (00:43:49)**
**Workshop with government officials (00:44:46)**
**Challenges in policy implementation (00:45:46)**
**Expertise and insights (00:49:11)**
**Future engagement with US government (00:50:51)**
**Flexibility of private sector entity (00:52:57)**
**Impact on whistleblowing culture (00:55:23)**
**Key recommendations (00:57:03)**
**Security and governance of AI technology (01:00:11)**
**Obstacles and timing in hardware development (01:04:26)**
**The AI lab security measures (01:04:50)**
**Nvidia's stance on regulations (01:05:44)**
**Export controls and governance failures (01:07:26)**
**Concerns about AGI and alignment (01:13:16)**
**Implications for future generations (01:16:33)**
**Personal transformation and mental health (01:19:23)**
**Starting a nonprofit for AI risk awareness (01:21:51)**

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe