EPISODE · Nov 8, 2023 · 33 MIN
For Humanity, An AI Safety Podcast Episode #2: The Alignment Problem
from For Humanity: An AI Risk Podcast · The AI Risk Network · AI Safety
Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values, and ethics. You know, stuff like: don't kill humans. This is the AI safety podcast for all people, no tech background required. We focus only on the threat of human extinction from AI. In Episode #2, The Alignment Problem, host John Sherman explores how alarmingly far AI safety researchers are from finding any way to control AI systems, much less their superintelligent children, who will arrive soon enough. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe