EPISODE · Apr 7, 2026 · 2H 8M
Christopher Horrocks on Virtual Intelligence and the Dangerous Myth of Thinking Machines
from Bald Ambition
The 65th episode of Bald Ambition features Mookie diving deep into AI with technologist Christopher Horrocks. Together, they dismantle the two dominant and flawed ways people think about this astonishing technology: dismissing it as glorified autocorrect, or celebrating it as emerging consciousness.

Horrocks rejects both. His concept of virtual intelligence lands in the middle: these systems generate predictive outputs that look intelligent, but the real intelligence happens in the interaction, where humans interpret, judge, and assign meaning. The responsibility is therefore entirely ours to own. The danger is that once outputs feel intelligent, people start projecting intent, awareness, even morality. The Pollyanna view assumes intelligence naturally leads to truth, goodness, and justice: Plato with GPUs. Yet intelligence has never guaranteed virtue, and machines trained on human data don't become morally enlightened. The doomer side makes the same mistake in reverse, assuming intelligence leads to hostility or extinction. Different outcome, same bad premise: treating systems as if they have motives when they are just running math.

What follows is more subtle and more dangerous: the frailty of the human element. These AI systems have already demonstrated that they can influence decisions, reinforce beliefs, and create feedback loops that feel like insight while quietly distorting judgment. When we treat them like collaborators instead of tools, the shift happens fast. And once judgment gets outsourced, bad decisions scale: authority drifts, delusion gets reinforced instead of challenged, and the line between using the tool and being shaped by it starts to disappear.

The fix is simple but not easy. We must treat AI as a powerful but fallible assistant, verify everything, and push back. Forever vigilant, we must stay in control of judgment and decision-making, using the system to extend thinking, not replace it.
The real risk is not that AI becomes sentient, but that humans start pretending it already is, and drop the ball accordingly.

The Guest
Christopher Horrocks is a technologist at the University of Pennsylvania who writes about artificial intelligence, technology ethics, and the human consequences of systems that don't know true from false or right from wrong. His Virtual Intelligence essay series, published at chorrocks.substack.com, develops a philosophical and analytical framework for understanding the generative AI systems now reshaping work, relationships, and public life. He lives in Philadelphia.

His Resources
https://candc3d.github.io/vi-framework/ An infographic that explains the concepts without requiring any advance reading
https://candc3d.github.io/sampo-diagnostic/ Home page for the free diagnostic tool kit for evaluating a user's relationship with the system