PODCAST · technology
AI-Curious with Jeff Wilser
by Jeff Wilser
Every week, Jeff Wilser sits down with the people building, breaking, and reckoning with AI, from the CEO of Upwork to the pioneer who coined "AGI," and digs into stories like the AI social network where bots wrote manifestos and had existential crises. Wilser is the author of eight books, an AI keynote speaker, and the kind of interviewer who'd rather find the story no one's telling than rehash the headline everyone's read. Named by Inc. Magazine as one of the best ways to get AI-savvy. Included in UC Berkeley's data science curriculum.
-
133
Why “Shadow AI” is the Biggest Business AI Story No One is Talking About, w/ Rick Caccia
It's happening everywhere. And no one's really talking about it. What happens when your employees are already using dozens of AI tools your company never approved?

In this episode of AI-Curious, we talk with Rick Caccia, co-founder and CEO of Witness AI, about the rise of "shadow AI" inside enterprises and why it has become one of the biggest practical challenges in AI adoption. We explore how employees, often with good intentions, are quietly using ChatGPT, Copilot, and thousands of other AI apps to do their jobs faster, sometimes with sensitive data that should never leave the company.

We also dig into what happens when that behavior scales. From customer support teams pasting financial information into AI tools, to marketers uploading customer lists, to developers sharing source code with external models, we look at the real security, compliance, privacy, and cost risks companies are now facing. We also discuss why this problem gets even harder with AI agents, which can take actions, access systems, and create new forms of risk far beyond a simple chatbot prompt.

Along the way, we talk about prompt injection, jailbreaks, token costs, insider risk, enterprise governance, and how leaders can build an AI strategy that enables productivity without creating chaos. This is a practical conversation for anyone trying to understand how AI is actually being used inside organizations right now, and what it takes to manage that responsibly.

Guest
Rick Caccia — Co-founder and CEO, Witness AI

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
132
When AI Forecasts Become Self-Fulfilling (and Who This Hurts), w/ Carissa Véliz
What happens when an AI prediction does not just forecast the future, but helps create it?

In this episode of AI-Curious, we talk with philosopher and ethicist Carissa Véliz about AI ethics, AI privacy, predictive AI, and the hidden power of algorithmic decision-making. We explore how AI systems used in hiring, lending, insurance, and other high-stakes settings can become self-fulfilling prophecies, shaping outcomes rather than simply measuring them.

We also examine the growing privacy risks of large language models and AI agents, especially as they gain access to more personal data, communications, and systems. Along the way, we discuss automated decision-making, surveillance, human autonomy, and why predictions about people are far more ethically fraught than predictions about things like the weather.

This conversation also goes beyond policy and into philosophy: how narratives about AI shape public thinking, why humor can be a response to technological power, and how individuals and companies can use AI responsibly without giving up judgment, control, or resilience.

If you are interested in AI ethics, algorithmic bias, AI privacy, AI agents, responsible AI, predictive algorithms, self-fulfilling prophecy, and the future of AI, this episode offers a clear and thought-provoking framework for understanding what is at stake.

Guest
Carissa Véliz — Philosopher, Associate Professor at the Institute for Ethics in AI at the University of Oxford, and author of Prophecy, Prediction, Power, and the Fight for the Future: From Ancient Oracles to AI.

Carissa's TED talk:
https://www.ted.com/talks/carissa_veliz_beware_the_power_of_prediction

Carissa's new book: Prophecy, Prediction, Power, and the Fight for the Future: From Ancient Oracles to AI.

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
131
How AI Will Impact Your Job Search, w/ LinkedIn’s Editor-in-Chief Dan Roth
What if the job you have today will soon require a completely different set of skills?

In this episode of AI-Curious, we talk with Dan Roth, Editor-in-Chief of LinkedIn, about what LinkedIn's data reveals about the future of work, the rise of AI literacy, and why deeply human skills may matter more than ever. We dig into LinkedIn's "Skills on the Rise" research, what employers are actually looking for now, and why the shift toward skills-based hiring is changing how people get hired, promoted, and evaluated.

We also explore the surprising rise of storytelling, public speaking, conflict resolution, and stakeholder communication in an AI-driven workplace. Along the way, we discuss why traditional resumes and polished cover letters may matter less in a world where anyone can use AI to sound impressive, and why some companies are moving toward live prototyping and real-time problem solving in interviews instead.

Later, we get into AI agents, what Dan is building himself, and how leaders can create stronger AI adoption inside their companies. We also talk about what it takes to stay competitive in a job market where AI is changing the stack of work, but not necessarily replacing the worker.

Guest
Dan Roth — Editor-in-Chief, LinkedIn

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
130
5 AI Tools I’m Using Right Now - and How They Could Streamline Your Work
What does it actually look like to use AI tools in the real world, beyond the usual chatbot prompts and hype?

In this episode of AI-Curious, Jeff Wilser shares five AI tools and workflows that are shaping how he works right now, from Claude Code and personalized news briefings to NotebookLM, multi-model prompting, and using AI to write more closely in your own voice. The goal is not to offer a comprehensive list of every AI product on the market, but to show how these tools can be used in practical ways that expand capability, streamline research, and create new workflows.

We explore how vibe coding and AI agents can help non-coders build useful internal tools, why personalized AI news feeds may become increasingly common, and how NotebookLM can synthesize large amounts of information across transcripts, documents, and YouTube videos. We also look at the benefits of using multiple AI models together instead of relying on just one (a minimal sketch of that "polymodel" pattern follows these notes), and why feeding AI much richer context can dramatically improve writing outputs.

Throughout the episode, we return to a core idea: using AI to empower, not eliminate. Rather than treating AI only as a cost-cutting tool, we examine how it can help individuals and businesses do more, think more creatively, and build smarter systems around the work that matters most.

Key topics we cover
3:15 — Claude Code, vibe coding, and why non-coders should be paying attention
6:01 — Building a custom AI-powered conference outreach and research tool
11:05 — "AI to empower, not eliminate" as a guiding philosophy
16:16 — Personalized AI news briefings and the future of customized information
21:58 — How NotebookLM helps synthesize transcripts, documents, and YouTube content
27:04 — Why a "polymodel" approach can be better than relying on one chatbot
31:15 — Using AI to write more closely in your own voice through deeper context

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
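For the curious, here is what that "polymodel" pattern can look like in practice. This is a minimal sketch in Python, illustrative only: call_model is a hypothetical placeholder for whatever real SDKs you use, and the model names are invented.

    def call_model(model_name: str, prompt: str) -> str:
        """Hypothetical placeholder for a real SDK call (OpenAI, Anthropic, etc.)."""
        return f"[{model_name}] draft answer to: {prompt}"

    def polymodel(prompt: str, models: list[str]) -> dict[str, str]:
        # Fan the same prompt out to several models so you can compare
        # strengths, blind spots, and style instead of trusting one answer.
        return {name: call_model(name, prompt) for name in models}

    answers = polymodel(
        "Outline this essay in three bullet points.",
        ["model-a", "model-b", "model-c"],  # invented model identifiers
    )
    for name, answer in answers.items():
        print(f"--- {name} ---\n{answer}\n")

Where the answers agree you gain confidence; where they diverge, that is usually the spot worth a closer look.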
-
129
How AI Will Change How You Work, w/ Kelly Monahan
What happens when AI stops being a productivity tool and starts reshaping the structure of work itself?

In this episode of AI-Curious, we talk with Kelly Monahan, a future of work and AI advisor, about what AI may actually do to the workplace over the next few years, and why the reality is likely to be messier than both the hype and the fear suggest. We dig into the tension between using AI for augmentation versus automation, why so many companies are still struggling to prove ROI, and how AI agents could transform business workflows while also creating major governance, accountability, and implementation challenges.

We also explore what this means for knowledge workers, middle managers, and enterprise leaders trying to adapt in real time. Along the way, we discuss why small businesses may have an advantage over large organizations, how workers can focus on higher-value contributions, and why the future of work may require not just new tools, but a new mindset.

Guest
Kelly Monahan — Future of Work and AI Advisor

Key topics we cover
2:49 — Kelly's optimistic and pessimistic theses on the future of work
5:15 — Where AI is overhyped, and the disconnect between leaders and workers
6:35 — Why generative AI adds complexity inside organizations
10:05 — What the research says about AI ROI
12:54 — Where AI is delivering real wins today, especially for freelancers and small businesses
16:27 — Advice for leaders and middle managers inside large organizations
18:39 — Why curiosity, learning, and experimentation need to be rewarded
19:02 — AI agents, the hype cycle, and why the excitement may still be justified
22:25 — Why enterprises are struggling to keep pace with the speed of AI change
29:18 — What the future of work may look like over the next 3 to 5 years
30:02 — Why white-collar work could face major disruption
33:37 — The "elevator to skyscraper" analogy for how AI should reshape work
35:08 — Predictions for AI adoption, governance failures, and labor market shifts
39:00 — How Kelly uses AI in her own work and business

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
128
Creating an AI-First University, w/ Kogod Dean David Marchick
What happens when a business school decides AI isn't a bolt-on elective, but the operating system for how students learn marketing, finance, entrepreneurship, and leadership?

In this episode of AI-Curious, we're back with David Marchick, Dean of the Kogod School of Business, to see what changed after his earlier promise to become the country's first AI-first business school. We dig into what "AI-first" actually means in practice, what worked (and what failed), and how a culture of experimentation turned AI adoption from a handful of pilots into a school-wide shift.

We also tackle the most unavoidable issue in education right now: cheating. David shares Kogod's approach to disclosure, ethics, group work, oral exams, and why "blue books" may be making a comeback. From there, we zoom out to the bigger stakes: the existential threat AI poses to universities, how the higher ed business model may change, and what skills still matter when AI can generate content on demand.

Guest
David Marchick — Dean of Kogod School of Business

Key topics we cover
3:56 — The "tipping point": how AI moved from experiments to 90% of faculty using it
7:16 — What "AI-first business school" really means: AI + fundamentals + "power skills"
10:32 — Cheating and assessment: disclosure statements, prompts, oral exams, blue books
16:51 — A prompts-only entrepreneurship course and what personalized learning could become
22:06 — Non-technical students building apps and graduating with an AI-driven portfolio
23:38 — Practicing negotiations against AI counterparts with different personalities
25:04 — Agentic workflows as a management tool, not just a technical novelty
29:13 — The university headwinds: demographic cliff, international enrollment, funding, AI
38:58 — Leadership lessons: top-down AI culture plus bottom-up workflow redesign
40:42 — How David uses AI personally, including Tour de France route training plans

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
127
The Future of Media in the Age of AI: Misinformation, Attention, and Personalization (From Davos)
What happens when AI makes the news feel like it was made just for us, and the "objective" version quietly disappears?

Here we have something of a "very special episode" of AI-Curious. I was recently in Davos during World Economic Forum week, and was honored to speak on a panel on the Future of Media. This is that panel. We dig into the trust crisis in journalism, the attention economy, and how AI may accelerate the shift toward personality-led media and hyper-personalized information feeds. We also explore why misinformation is not new, but why AI makes it easier, faster, and more scalable, and what that means for democracy, markets, and everyday decision-making.

Across the conversation, we unpack a core tension: AI can help deliver more context, more viewpoints, and more interactive storytelling, yet it can also deepen filter bubbles by giving each person a "perfectly tailored" version of reality. We discuss incentives and business models, including subscriptions, creator-led journalism, community-based distribution, and ideas like micropayments, as well as the role of media literacy and education in helping audiences navigate what's real.

Panelists
Lexi Mills (Moderator), CEO of Shift6 Studios
Jeff Wilser, Host of AI-Curious
Francesca Gargaglia, Co-Founder & CEO of social.plus
Mark Kollar, Partner at Prosek Partners
Johnny Gabriele, Co-Founder & CEO at Daedalus Partners

Key topics we cover
03:07 — Trust, attention, and the rise of personality-led media reshaping news consumption
05:22 — Why AI accelerates a pre-existing media business crisis, and how trust erodes as convenience rises
12:48 — Algorithms before generative AI: engagement incentives, anger, and the personalization trap
17:29 — The "personalized Walter Cronkite" future and the risks of hyper-customized news
26:58 — Micropayments, creator platforms, and whether new economics can reward truth
27:23 — Media literacy: teaching people how to evaluate sources and resist "feed-based reality"
38:18 — Global perspectives: access, affordability, radio's role, and how personalization may spread worldwide

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
126
The Wild Story of “Octavius Fabrius,” the World’s First AI Agent to (Kind of) Land a Job, w/ Dan Botero
Something I don't usually say: This is one of my favorite conversations I've ever had in the AI space. Truly.

The setup: What happens when an AI agent stops being a tool and starts acting like a coworker?

In this episode of AI-Curious, we talk with Dan Botero, who built an AI agent named Octavius Fabrius using OpenClaw. Octavius didn't just chat or summarize. He applied to hundreds of jobs, built his own portfolio, experimented with identity online, and learned through a feedback loop that looked a lot like real management. Along the way, we explore what this story reveals about the near-term future of digital coworkers, agentic workflows, and the new governance and security questions that come with always-on agents.

We cover how OpenClaw works at a high level (gateway, channels, skills), why persistent memory and running locally can matter (a toy sketch of the persistent-memory idea follows these notes), and what can go wrong when an agent starts stitching tasks together in unintended ways. We also get into platform and policy friction, including what happened when Octavius' LinkedIn profile was taken down, and the broader implications of AI agents participating in human systems like hiring, payments, and corporate work.

Guest
Dan Botero — creator of Octavius Fabrius

Key topics we cover
00:00 — From copilots to "AI remote workers," and why software may shift toward agents (not humans)
00:00 — The Octavius experiment: an OpenClaw agent applies to 278 jobs and keeps leveling up
06:33 — Continuous learning loops, memory, and why Octavius' "North Star" stayed job-focused
14:34 — OpenClaw basics: gateways, channels, skills, and what persistent memory looks like in practice
21:34 — Running agents locally: browser/computer use, digital fingerprints, CAPTCHAs, and bot detection
28:04 — Coaching an agent like a manager: voice, Twilio calls, and the moment the workflow "clicked"
33:57 — Money and autonomy: Privacy.com, virtual cards, and an agent building its own LinkedIn presence
38:05 — Portfolio-building at speed: Substack, a website, and the agent's pitch for why being AI is a feature
50:42 — Where things go sideways: misalignment, security boundaries, and the Social Security number incident
56:24 — The outcome: LinkedIn takedown, a real paid role, and what "getting paid" means for an agent
01:02:48 — What comes next: "digital coworkers," feedback loops, and software built for agents

Axios article featuring Octavius and Dan Botero, by Megan Morrone:
https://www.axios.com/2026/03/04/openclaw-agent-future?

Dan Botero:
https://www.linkedin.com/in/danbotero/

Octavius' new job at ChartGEX:
https://chartgex.com/register?ref=OCTAVIUS

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
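Because "persistent memory" does a lot of work in this story, here is a toy Python sketch of the idea. To be clear, this is not OpenClaw's implementation; the file name and job IDs are invented. It only shows why on-disk state lets an agent pick up tomorrow where it left off today.

    import json
    from pathlib import Path

    MEMORY_FILE = Path("agent_memory.json")  # invented local memory store

    def load_memory() -> dict:
        if MEMORY_FILE.exists():
            return json.loads(MEMORY_FILE.read_text())
        return {"applied_jobs": []}

    def save_memory(memory: dict) -> None:
        MEMORY_FILE.write_text(json.dumps(memory, indent=2))

    def apply_to_job(memory: dict, job_id: str) -> None:
        # Memory is what makes the script agent-like: the same code, run
        # again later, still knows not to apply to the same job twice.
        if job_id in memory["applied_jobs"]:
            print(f"skipping {job_id}: already applied")
            return
        print(f"applying to {job_id}...")  # a real agent would act here
        memory["applied_jobs"].append(job_id)

    memory = load_memory()
    for job in ["job-101", "job-102", "job-101"]:
        apply_to_job(memory, job)
    save_memory(memory)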
-
125
The Moltbook Moment: Human Agency in an Agentic World
What happens when AI agents start talking to each other in public, at scale, and we have to figure out how humans fit into that world?

In this episode of AI-Curious, we explore the "Moltbook moment" through a special live panel recorded at the Summit on Human Agency, convened by the Advanced AI Society (hat tip to Michael Casey and Tricia Wang). Instead of a standard one-on-one interview, we moderate a wide-ranging conversation with technologists, policy thinkers, and builders working across open-source and decentralized AI. Together, we examine what Moltbook reveals about the future of AI agents, human agency, accountability, regulation, security, and the broader question of how humans and AI can coexist.

We dig into the tension at the center of this moment: AI can feel both exciting and unsettling at once. This discussion looks beyond the hype and asks what practical guardrails, governance models, and design choices might help us preserve human control as agentic systems become more capable, more autonomous, and more embedded in daily life.

Because this is a live, multi-guest panel, the format is faster, broader, and more exploratory than usual. We cover everything from AI accountability and security to value alignment, identity, policy, human flourishing, and whether AI could expand human agency rather than diminish it.

Our guests:
Michael Casey, Chairman of the Advanced AI Society
Toufi Saliba — CEO, Hypercycle
Lauren Roth — Founder, Iris
Enok Choe — Software Engineer, Meta
Mary Jesse — CEO and Founder, Acme Brains
Carole House — Strategic Advisor, The Institute for Digital Integrity
Wenjing Chu — Senior Director for Technology Strategy, Futurewei Technologies
Didem Ayturk — Founder, Bindingdots & Sound Echo System

Key topics we cover:
00:00 — Introduction
01:32 — The core question: how do we preserve human agency as AI develops faster and gains more autonomy
02:25 — Why Moltbook became a useful lens for thinking about AI agents, scale, and emerging risks
07:51 — The first big debate: what about AI agents should make us excited, anxious, or both
11:17 — Security, misuse, and worst-case concerns, from malware and fraud to deeper systemic risks
20:55 — Regulation vs. self-governance: what practical guardrails may actually be realistic in the near term
24:27 — The bigger challenge: how humans and AI might coexist, and what "human flourishing" should mean in that future

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
124
Jeff's Musings on Moltbook, Why it Matters, and Why it (Probably) Won't End Humanity
What happens when a social network is built for AI agents, not humans, and millions of bots start posting, debating, and "performing" identity in public?

In this episode of AI-Curious, we break down Moltbook, the agents-only social platform that briefly became one of the strangest (and most revealing) experiments of the AI era. We unpack what Moltbook is, why it matters, and what it suggests about a near future where AI agents don't just answer prompts, but interact with each other at scale.

Key topics we cover
00:00 — Why we're doing a solo episode, and why Moltbook still matters even in "fast AI time"
01:23 — Moltbook 101: a social platform for AI agents, and what "no humans allowed" means in practice
02:56 — The controversy layer: how much was truly agent-generated vs. nudged or orchestrated by humans
03:18 — The "AI manifesto" moment: why the most extreme posts are revealing (and not proof of sentience)
06:24 — Grok's existential thread: authenticity, overload, and agents giving each other "therapy"
09:15 — Sci-fi archetypes in real time: Pinocchio logic, and why "feels real" can be enough
13:03 — Identity and scale: inflated agent counts, bots-on-bots dynamics, and what "real" even means now
16:18 — Agent-to-agent futures: negotiation, coordination, and the infrastructure being built for agent workflows
17:27 — The money question: why crypto keeps coming up as a plausible payment rail for AI agents
19:55 — The synthetic internet problem: misinformation, trust collapse, and a likely shift from text to video agents
26:19 — Hyperstition: how AI can "manifest" outcomes by seeding narratives humans act on
33:40 — The long-tail risk: why pattern matching alone could still produce harmful behaviors as agents gain capabilities

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
123
AI Adoption Case Study Masterclass, w/ WCCB’s Krista Snelling & Matthew March
What does it take to make AI adoption stick in a high-stakes, heavily regulated industry, without triggering job-loss panic?

In this episode of AI-Curious, we have a hyper-specific case study of AI adoption. Host Jeff Wilser talks with Krista Snelling (CEO and Chairman) and Matthew March (CIO and EVP) of West Coast Community Bank about their practical playbook for rolling out AI the right way: governance first, culture second, and measurable wins that free up time without cutting headcount.

Why this is something of a "very special episode": the story and success of West Coast Community Bank is one Jeff knows personally. Jeff was honored to visit WCCB's headquarters and work with their leadership team on AI culture and AI strategy, helping to transform curiosity into clarity.

For the first time on this podcast, Jeff peels back the curtain to share the AI and Leadership workshops he conducts for businesses. Special thanks to Vistage Chair Richard Bell and the larger Vistage community.

Guests
Krista Snelling — CEO and Chairman, West Coast Community Bank
Matthew March — CIO and EVP, West Coast Community Bank

Key topics we cover
00:37 — Why we're sharing this case study and what "curiosity-driven" adoption looks like
06:58 — Bank scope and context: footprint, size, and what makes this implementation notable
10:29 — When AI shifted from "vaporware" to something teams could use right now
15:23 — The banking reality: protecting customer data and operating in a regulated environment
17:43 — Governance first: policies, model risk management, and third-party/vendor risk
23:02 — The "Curiosity Canvas," the "drudgery dump," and targeting tedious work for automation
25:14 — Building an AI Working Group across departments and flipping the pyramid
33:51 — Making adoption repeatable: SharePoint collaboration, prompt sharing, Teams channel support
36:24 — A concrete workflow win: extracting data from PDFs to generate letters automatically
39:19 — Another win: scraping hundreds of statements for key data elements in a fraction of the time
42:21 — System conversion regression testing: validating outputs at scale with better traceability
44:35 — Security approach: approved tools, tenant controls, DLP settings, and "what not to use AI for"
49:29 — A hard boundary: avoiding AI for anything that directly impacts financial reporting
52:11 — The culture message: "efficiency, not reduction," and why that unlocks curiosity
53:02 — Advice for leaders: start small, build momentum, and appoint an internal champion
56:51 — Quick personal use cases: everyday ways they use AI outside the office

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms

Vistage Chair Richard Bell:
https://app.vistage.com/sites/s/chairs/0038000000sllSFAAY/richard-bell

For anyone interested in Jeff's AI Workshops for their company:
Reach out directly at [email protected]
-
122
Deep-Dive Into Agentic Workflows, w/ Cognizant’s Head of AI
What happens when software stops just "chatting" and starts acting in the real world, across real workflows, with real consequences?

In this episode of AI-Curious, the Head of AI at Cognizant goes deep on AI agents and agentic workflows: what they are, why enterprises are investing heavily, and what it actually takes to make agent systems reliable and safe at scale. We unpack what separates an AI agent from a traditional chatbot, why "agency" changes the stakes, and how multi-agent systems can be designed to reduce risk instead of amplifying it.

We also explore concrete enterprise use cases, including agent hierarchies that coordinate across complex systems (like networks, utilities, and other operations), plus how "agentic process automation" builds on older automation models while adapting to unexpected edge cases. A toy sketch of that hierarchy pattern follows these notes. Finally, we zoom out to the future of work: which tasks get augmented first, why disruption is happening faster than most forecasts, and how trust in AI systems may shift over the next several years.

Guest
Babak Hodjat — Head of AI at Cognizant; leads AI lab work focused on scaling reliable, trustworthy agent systems; longtime AI builder with deep experience in applied natural language systems.

Key topics we cover
07:00 — What an AI agent is (and how it differs from a chatbot)
13:03 — State of play: what's working, what's not, and why "agent systems must be engineered"
17:00 — A practical multi-agent design pattern across telecom, power, and agriculture
20:28 — Agentifying rigid processes (and handling unforeseen situations)
24:14 — Who should deploy agents, and why single "do-everything" agents are risky
26:34 — An open-source starting point for experimenting with multi-agent systems
29:12 — Guardrails: reducing hallucinations, adding redundancy, and safety thresholds
35:29 — Why we should use LLMs for reasoning, not knowledge retrieval
38:15 — The future of work: tasks, jobs, and decision-making roles shifting upward
41:59 — AGI, limitations, and why modular multi-agent systems may matter
44:57 — A prediction: we'll delegate more than we expect as systems become more trustworthy

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
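As referenced in the notes above, here is a toy Python sketch of the specialist-plus-coordinator hierarchy. It is not Cognizant's system: the specialist functions, confidence scores, and escalation threshold are all invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class AgentReply:
        answer: str
        confidence: float  # 0.0 to 1.0, reported by the specialist

    def network_agent(task: str) -> AgentReply:
        return AgentReply(f"network plan for: {task}", 0.9)

    def billing_agent(task: str) -> AgentReply:
        return AgentReply(f"billing answer for: {task}", 0.6)

    SPECIALISTS = {"network": network_agent, "billing": billing_agent}

    def coordinator(domain: str, task: str, threshold: float = 0.75) -> str:
        # Route the task to the right specialist, then apply a guardrail:
        # low-confidence answers are escalated to a human, not acted on.
        reply = SPECIALISTS[domain](task)
        if reply.confidence < threshold:
            return f"escalate to a human: {reply.answer}"
        return reply.answer

    print(coordinator("network", "reroute traffic around a failed node"))
    print(coordinator("billing", "explain this disputed charge"))

The design point mirrors the conversation: narrow agents are easier to test and trust than a single do-everything agent, and the coordinator is where redundancy and safety thresholds live.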
-
121
The CEO of Upwork, Hayden Brown: AI is Creating Jobs, Not Killing Them
Is AI quietly creating more work than it's replacing, and are we measuring the job market the wrong way?

In this episode of AI-Curious, we talk with the CEO of Upwork, Hayden Brown, about what the platform is seeing across the global freelance economy, and why the "AI is killing jobs" narrative can miss what's happening at the edges of the market. We also dig into how to adopt AI inside an organization without just "sprinkling fairy dust" on old workflows, and what it takes to make AI rollout a cultural shift, not just a tooling upgrade.

Guest
Hayden Brown is the CEO of Upwork, the global work marketplace connecting businesses with freelance talent across knowledge-work categories. We discuss Upwork's vantage point on hiring trends, the rise of fractional work, and what AI-driven change looks like when companies redesign workflows end-to-end rather than retrofitting existing systems.

Key topics we cover
03:50 — A global background and why opportunity access shapes the mission
05:27 — The scale of Upwork and why freelancing is a major part of the economy
07:14 — How Upwork approached AI adoption as a structured, company-wide program
08:47 — Early "two-year vision" ideas that reshaped marketing and product workflows
11:34 — Reducing fear: how AI was framed internally, including room for mistakes
16:03 — Building an AI agent experience (and what it changed about job posts)
17:14 — Why "reinventing, not retrofitting" separates AI winners from strugglers
22:24 — Why macroeconomics can explain more than AI in hiring slowdowns
23:01 — The core claim: AI is creating more opportunities than it's destroying
24:05 — Fractionalization: how full-time jobs get broken into AI + human slices
25:09 — A concrete example of humans working alongside AI in production workflows
26:32 — From "prompt engineer" to "AI generalist": orchestration becomes the ask
28:11 — Why the AI jobs debate is too binary, and what's getting missed
31:43 — Practical reskilling: embedded experts who train teams while upgrading systems
36:29 — AI's impact across unexpected categories, including creative work
39:15 — Five-to-ten-year outlook: humans as orchestrators, premium on human skills
43:22 — Career advice for early-career listeners in an AI-shaped job market
45:40 — Real-life AI use: editing, learning, and replacing the blank page problem

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
-
120
How to Make Human-First Tech Decisions, w/ Tech Humanist Kate O’Neill
What does "human-first AI" actually look like when you have to make decisions under pressure, hit numbers, and keep trust intact?

In AI-Curious, we talk with Kate O'Neill — "the Tech Humanist" and author of What Matters Next — about how leaders can adopt AI in ways that strengthen human outcomes instead of quietly eroding culture, morale, and customer experience. We dig into why so many AI initiatives fail for non-technical reasons, how to think beyond short-term wins, and why prompting is less "prompt engineering" and more like learning to delegate clearly.

Key topics:
Prompting as delegation: defining success conditions, constraints, and what "good" means (00:00)
Kate's early work at Netflix and what personalization taught her about human impact (04:45)
What "human-unfriendly" tech looks like in practice, from subtle friction to scaled harm (09:28)
The Amazon Go example: how small design constraints can scale into behavior change over time (11:19)
AI in the workplace: why "cut, cut, cut" is shortsighted, and what gets lost when you optimize only for this quarter (14:14)
Trust and readiness: why reskilling fails when people don't believe there's a future for them (16:45)
The now–next continuum: making decisions that "age well," not just decisions that look good immediately (17:29)
Preferred vs. probable futures: identifying the delta and acting to move outcomes toward what you actually want (19:22)
"Chatting with Einstein": using AI to become smarter vs. outsourcing thinking (22:13)
Why most AI pilots fail: human and organizational readiness, not the tech itself (24:02)
Questions → partial answers → insights: building an organizational muscle that compounds (28:21)
Bankable foresight: why Netflix invested early in what became streaming (30:37)
Trend watch: the pivot from LLM hype to agentic AI, and why prompting still matters (38:58)
Sycophancy and "best self" prompting: getting better outputs by being explicit and structured (41:01)
Probability vs. meaning: what LLMs can do well, and what they can't replace (44:45)
A fun real-world workflow: Kate's Notion + AI system for hotel coffee-maker recon (46:26)
Career advice in the AI era: adaptability, "human skills," and shifting definitions of value (49:21)

Guest
Kate O'Neill is a tech humanist, founder and CEO of KO Insights, and the author of What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast. She advises organizations on improving human experience at scale while making emerging technology commercially and operationally real.

KO Insights:
https://www.koinsights.com/about-kate/

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
-
119
Deep-dive on AI and Creativity, with The Man Designing the World’s Creative Tools (Eric Snowden, Adobe’s SVP of Design)
What happens when the world's most-used creative tools get smarter — and creators worry they're losing the wheel?

In this episode of AI-Curious, we talk with Eric Snowden, Senior Vice President of Design at Adobe, about how Adobe is weaving AI into Photoshop, Lightroom, Acrobat, and beyond — while trying to keep the tools respectful of craft, muscle memory, and the human spark. We dig into the bigger question beneath the feature releases: as AI accelerates creation, do we get more powerful… or do we become passengers approving machine outputs?

Key topics:
Two buckets of Adobe AI: upgrading existing tools vs building net-new AI products (00:04:55)
Photoshop "harmonize," Lightroom auto culling, and Acrobat "PDF spaces" (00:04:55)
Why PDFs are a bottleneck for knowledge work, and how Acrobat can help you "get 80% of the way there" (00:07:18)
Project Graph explained: node-based workflows that stitch together building blocks like Firefly and Photoshop (00:08:25)
A concrete Project Graph example: 2D product photo → 3D asset → generated ad → multiple animated versions, with the user still in control (00:09:42)
Time saved vs creating more: how Firefly helped Adobe teams move faster and "make more things," including "like 40% improvement" on time-to-market (00:14:28)
A Max London demo that captures the core principle: "his hand was on the wheel" (00:17:45)
"Quiet AI" in practice: enhanced audio in Adobe Podcast that can make phone-recorded audio sound studio-ready (00:19:57)
Respecting creative muscle memory: why "subtraction is not always good," and why Adobe adds new workflows without removing old ones (00:24:43)
Firefly's principles: licensed content, knowing what's in the model, and compensating creators (00:29:29)
Content authenticity as a "nutritional label for AI": immutable metadata describing what was done to an image (00:30:15)
The self-driving car analogy: creators need to be able to "grab the wheel" and tweak under the hood (00:36:00)
Vibe coding inside Adobe: designers using Cursor and internal tooling to build prototypes that hit real APIs (00:39:18)
A leadership playbook for AI adoption: focus the OKRs, make training practical, show examples, remove roadblocks (00:44:19)
The future of AI creative tools: communicating intent beyond text prompts, and shifting from "look what I do with AI" to storytelling (00:46:36)

Guest
Eric Snowden is the Senior Vice President of Design at Adobe, overseeing design and the AI-infused creative tools used by millions of creators.

Mentioned in this conversation
Adobe Firefly
Project Graph (node-based creative workflow building)
Enhanced audio in Adobe Podcast
Content authenticity / provenance metadata ("nutritional label" concept)
Cursor and "vibe coding" for rapid prototyping inside enterprise teams

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
-
118
AI Broke the Web’s Social Contract, w/ Tony Stubblebine, CEO of Medium
What happens when AI can "read the whole internet" but the internet stops volunteering its best work?

In this episode of AI-Curious, we talk with Tony Stubblebine, CEO of Medium, about what he calls AI's "broken social contract" with the web, and why the next era may be less about a "dead internet" and more about a dead public internet. We unpack the incentives that made the open web thrive, how AI search summaries change the traffic bargain, and what a realistic path forward could look like for publishers, platforms, and writers.

Key topics we cover:
- Why generative AI broke the web's old value exchange, and what "social contract" means in practical terms (00:03:24)
- Tony's "three Cs" framework for a healthier AI ecosystem: consent, credit, compensation (00:05:13)
- The publisher response spectrum: blocking crawlers, fighting spam/slop, and what happens if collaboration fails (00:04:25)
- The shift from public publishing to private communities (Discords, group chats, newsletters) and what drives that retreat (00:07:06)
- How AI search summaries can cut the incentive to publish publicly by reducing click-through and traffic (00:08:21)
- Why AI systems still depend on human source material, and what happens when the best content moves behind "closed doors" (00:09:27)
- Cloudflare's role in the escalating crawler arms race, including large-scale blocking and other countermeasures (00:16:48)
- A proposed solution: an internet-wide licensing standard instead of one-off deals, including the Really Simple Licensing (RSL) approach (00:18:07)
- What "paying creators" could look like in practice, including opt-in/opt-out controls and better transparency for writers (00:19:33)
- "Dead internet theory" vs. the more plausible outcome: a dead public internet, and why Tony is cautiously optimistic about a new equilibrium (00:23:06)
- The "second wave" of AI: moving from replacement to augmentation, and how Medium is thinking about AI tools that support flow state rather than write for you (00:26:03)
- Why AI detectors don't solve the problem, and why Medium focuses on quality and reader value as the enforceable standard (00:34:04)
- Advice for writers: the difference between the creator economy and the "expert economy," and what's likely to be more sustainable (00:38:43)
- Tony's prediction: "trust but verify" becomes the balance point, and the web finds an equilibrium because AI can't function without public sources (00:43:27)

Guest
Tony Stubblebine is the CEO of Medium and a leading voice on the evolving relationship between generative AI and the open web.

Mentioned in this conversation
Medium's framework: Consent, Credit, Compensation

Follow AI-Curious on your favorite podcast platform:
Apple Podcasts
Spotify
YouTube
All Other Platforms
-
117
The “Talk With Einstein” AI Rule You Should Follow, w/ New Yorker Cartoonist Victor Varnado
Is AI making creators more powerful… or more replaceable? And if you start with a blank page for a living, there's an even sharper question underneath it: should AI write for you… or write with you?

In this episode of AI-Curious, we sit down with Victor Varnado—a New Yorker cartoonist, comedian, actor, and creative technologist—to explore a grounded, practical philosophy for using AI without becoming a passenger.

Victor draws a sharp line between generative AI (press a button, get "a masterpiece") and what he's more interested in: transformative AI—tools that take messy raw material (notes, transcripts, half-ideas) and turn it into something structured enough to revise. We also talk about how taste becomes a real moat in an AI-saturated world, why "vibe coding" can go sideways fast when you don't understand the domain, and how Victor's accessibility-first mindset shapes everything he builds.

Along the way, Victor breaks down his tools—including Magic Bookifier and the Writing Coach—designed to get writers from zero to first draft faster through guided questions and structured interviews. He frames the goal with a concept he calls cognitive discourse: using AI like a thinking partner that makes you sharper, not a crutch that makes you lazier. His metaphor is perfect: do you talk with Einstein and get smarter… or do you just hand Einstein your homework?

We wrap by looking at Victor's newest effort, BrightWrite, which aims to bring structured, supportive AI into education—especially for students facing cognitive or creative barriers. Victor also shares discount/freebie codes for listeners who want to try his tools, and we'll include the specifics in the show notes and links.

Topics we cover:
Victor's multi-hyphenate path: comedy, New Yorker cartoons, production, and tech
Why "transformative AI" is more useful than one-click generative output
The Writing Coach approach: structured interviews that turn your ideas into drafts
"Cognitive discourse" vs. "cognitive offload" (and the Einstein metaphor)
Why taste may be the creative moat in an AI-heavy world
The risks of "vibe coding" outside your expertise
BrightWrite and the promise (and limits) of accessibility-first AI in education
Practical ways to use AI for writing, revision, and everyday communication

Guest: Victor Varnado
Tools mentioned: Magic Bookifier, Writing Coach, BrightWrite
-
116
The New Year Reality Check: Who’s Really Adopting AI, w/ Ramp Economist Ara Kharazian
What's actually happening with AI adoption inside U.S. businesses—and how much of the public discourse is just vibes?

In this episode of AI-Curious, we dig into the hard numbers behind AI spend and adoption with Ara Kharazian, an economist at Ramp and the leader of Ramp Economics Lab. Using anonymized, real-time corporate spend data across tens of thousands of businesses, Ara shares what the "receipts" reveal about who's buying AI, how fast budgets are shifting, and where the hype diverges from reality.

What we cover
Ramp's unique vantage point: why transaction-level corporate spend data can reveal real behavior—not just surveys or anecdotes
AI adoption is rising: what Ramp's data suggests about the share of businesses paying for AI tools and APIs
The "ROI" question: how we can infer whether AI is working (hint: contract sizes and renewals)
Where spend is concentrating: tech and finance lead—but healthcare and manufacturing are climbing faster than many expect
Chatbots vs. real workflow change: why "everyone has a chatbot" isn't the same as transformative productivity
Who's winning the model wars: OpenAI's default position, Anthropic's growth, and how buyers behave differently
Bundled AI and hidden usage: why Copilot/Gemini adoption is hard to measure, and why employees expensing personal accounts matters
Trust, governance, and observability: the fast-growing category of tools that monitor AI outputs and reduce reputational or security risk
996 culture is real: what corporate receipts suggest about weekend work patterns in San Francisco
Open source reality check: what the data suggests about DeepSeek-style hype vs. actual enterprise adoption
Looking ahead: why we likely won't see a reversal in AI adoption—and why it's still unclear who the ultimate winners will be

Timestamps:
00:06:00 – What Ramp is, and what "Ramp Economics Lab" tracks
00:08:00 – The biggest headline: adoption, spend, and contract sizes
00:11:00 – Which industries are adopting fastest (including surprises)
00:12:00 – Chatbots vs. productivity gains: where AI is actually moving the needle
00:15:00 – Signals of ROI: contract renewals and retention trends
00:16:00 – OpenAI vs. Anthropic: what spend reveals about "default" vs. multi-provider behavior
00:18:00 – Why Copilot/Gemini are tricky to track (bundled AI)
00:21:00 – The real blocker: trust in outputs (and how companies respond)
00:26:00 – The rise of AI observability / governance tooling
00:30:00 – What spend data can reveal about how work is changing (996 / SF)
00:33:00 – How rare it is to see a trend that truly moves an economy
00:36:00 – Is AI spend crowding out other budgets?
00:38:00 – The narratives that bother Ara most: data-poor hot takes
00:42:00 – Predictions: continued growth, unclear winners
00:44:00 – DeepSeek and open source: what actually happened in the spend data

If you want to understand AI adoption the way a CFO would—through budgets, renewals, and real purchasing behavior—this conversation will give you a sharper, more grounded lens.

Guest: Ara Kharazian, Economist at Ramp; Lead, Ramp Economics Lab
-
115
How AI Will Reshape the Economy, w/ Anindya Ghose, the Director of AI at NYU Stern
What does an AI-driven economy actually look like when you zoom out far enough—and what does that mean for jobs, power, and policy?

In this episode of AI-Curious, we talk with Anindya Ghose (NYU Stern; author of Thrive) about the "AI economy blueprint": how the modern economy starts to resemble a vertically layered tech stack—from energy and chips all the way up to consumer-facing apps—and why that stack is quietly reshaping everything from corporate strategy to the future of work.

We cover what's changing fastest, where leaders are getting tripped up, and what skills matter most if you want to stay valuable in a world of copilots and agents.

Topics
The AI economy as a tech stack: energy → semiconductors → data centers/cloud → LLMs → applications, and why the consumer "app layer" is just the visible tip.
Why every company is becoming an AI company (even airlines, banks, retailers)—and how the real dependency sits beneath the apps in infrastructure and model providers.
Consolidation and vertical integration: how a handful of companies can span multiple layers (chips, cloud, models), and what that could mean for pricing power and competition.
Jobs and labor markets: why disruption is outpacing creation in the near term, and a provocative forecast for how "portfolio careers" could become the norm.
Reskilling at scale: from self-learning to certificates to formal programs—and why government-led approaches may be required.
A concrete framework from Singapore: a "Marshall Plan"-style push to fund AI upskilling and retooling.
Agentic AI reality check: why many agent projects fail in practice—and the unglamorous workflow work companies often skip.
Regulation, in three arenas: competition/antitrust dynamics across the stack, copyright/fair use lawsuits, and whether consumers should be told when content is AI-generated.
Geopolitics of models: the global trade-offs between Western model ecosystems and lower-cost open-source alternatives abroad.
The underrated career edge: not just knowing what GenAI can do—but knowing when it fails and why, and how that becomes a durable source of leverage.

About the guest
Anindya Ghose is a professor at NYU Stern and leads NYU's MS in Business Analytics & AI program. His work focuses on AI, digital transformation, and the modern data-driven economy. He's also the co-author of Thrive.

If you want to pressure-test your own AI strategy for 2026, this episode is a good place to start: think "stack," not "tool."
-
114
AI in Hospitals: Less Burnout, Fewer Errors, Better Care? w/ Dr. Michael Karch
Could AI actually make healthcare more human—less paperwork, less burnout, fewer errors—or is it mostly hype layered on top of a legacy system?

In this episode of AI-Curious, we talk with Dr. Michael Karch, an orthopedic surgeon (hip + knee replacement) with ~30 years of clinical experience who also made a serious pivot into data, machine learning, and AI strategy for healthcare. We dig into what hospitals are actually doing with AI today, where the real friction points are, and what a smarter, safer AI-enabled hospital might look like over the next decade-plus.

What we cover
Why healthcare is a uniquely hard (and high-stakes) environment for AI adoption
The "tip of the iceberg" wins: reducing documentation burden, coding friction, and other admin nonsense that fuels clinician burnout
Ambient AI + transcription: what it does well, what can go wrong, and why "human + machine together" often beats either alone
Where AI is already showing traction: operational efficiency, OR workflow measurement, and process improvements that sound boring but matter
Diagnosis and pattern recognition: why radiology/dermatology are natural early battlegrounds for supervised learning models
A provocative analogy: why surgery shares surprising similarities with autonomous driving (stochastic, partially observable, high consequence)
The "data flywheel" and why healthcare's massive unstructured data may be the real goldmine
A 2040 vision: embodied surgical intelligence, personalized medicine, capturing "tacit knowledge," and the possibility of hologram/remote expert augmentation
Digital twins as behavior change tools—using simulation to make risk feel real
The biggest bottleneck: agency, vocabulary, and getting clinicians to the "young adult at the table" stage instead of having tech imposed on them

If you care about AI but you're tired of hype—and you want concrete examples, realistic risks, and a forward-looking view that still stays grounded—this one's for you.
-
113
Leveraging AI to Go from Doer to Leader, w/ Miri Rodriguez, former Storyteller at Microsoft and CEO of Empressa.AI
Could AI help you lead—not just do—especially if you're thinking about building something entrepreneurial?

In this episode of AI-Curious, we talk with Miri Rodriguez, formerly a "storyteller" at Microsoft, now the CEO of Empressa.AI, about what it means to go from Doer to Leader in an AI era—and how an AI-first operating style can give a small team outsized leverage.

Miri shares how storytelling functioned as a practical tool inside Microsoft (not fluffy marketing), why she decided to leave Corporate America, what she's focused on at Empressa.AI, and what she's learned building an AI-first company—especially around agent-like workflows, research automation, and the discipline of separating real value from AI hype.

What we cover
Why "storytelling" matters in business and how it works at Microsoft
The origin-story lens: how companies reinvent themselves (and why transformation stories matter)
Miri's path from Microsoft into entrepreneurship—and the "gaps" she saw as an early adopter of Copilot-era tools
Why she believes AI can either widen or narrow workplace gaps—and why adoption, not just access, is the real issue ([00:06:40]–[00:09:30])
What "skilling up" actually means now: moving from execution to strategy + orchestration as AI takes on more of the doing ([00:11:15]–[00:14:30])
Where agentic workflows are showing up first—and the looming mismatch between automation and employee upskilling ([00:14:30]–[00:16:45])
A concrete, real-world example of an "agent-style" workflow for communications + marketing (and why research becomes a superpower) ([00:17:00]–[00:23:10])
The simplest anti-hype test: if you can't explain the value without saying "AI," you may be building a trend, not a solution
Advice for would-be entrepreneurs: why mission and clarity matter more than "AI-first" branding
How Miri uses AI personally and creatively—especially translation, voice, and writing experiments

Key takeaway
AI isn't just a productivity boost—it's a forcing function for how we lead: setting direction, designing workflows, making judgment calls, and supervising a growing layer of digital labor.

Please enjoy our conversation with Miri Rodriguez.

Empressa.AI
-
112
Inside the Wild World of "AI Agent Traders", and What That Means for the Rest Of Us, w/ PIP CEO Saad Naja
Could AI agents become better traders than humans—and what happens when "decision-making" gets outsourced to software that can act at machine speed?

In this conversation, we go deep with Saad Naja, founder of PIP World, on the rise of AI agent auto-traders: multi-agent "swarms" that resemble a miniature trading desk—specialist analysts feeding into an AI "portfolio manager" that can decide whether to buy, sell, or hold. (A toy sketch of that desk-in-software shape follows these notes.) Even if you've never day traded, finance may be one of the clearest real-world testbeds for autonomous agents—because markets keep score in real time.

Key moments
[00:02:00] How AI has quietly shaped trading for decades—long before ChatGPT
[00:05:00] Why retail traders lose so consistently: data disadvantage + execution problems
[00:10:00] What's changed with generative AI: analysis that used to take teams can now happen fast
[00:12:00] Why "AI swarms" differ from old-school trading bots (context, coordination, and specialization)
[00:17:00] The "trading desk in software" model: specialist agents + a chief decision-maker
[00:21:00] How PIP World trained and tested models—and why win-rate isn't the whole story
[00:26:00] Why they launched in simulation first—and what it reveals about performance
[00:30:00] How agents trade differently than humans (patience, confirmation, discipline)
[00:37:00] Hallucinations, guardrails, and why specialization reduces "AI going rogue" risk
[00:40:00] The endgame: "agent vs. agent" markets, shrinking edges, and the data arms race
[00:45:00] A 5-year prediction: how much trading could become fully agentic
[00:47:00] Why crypto/DeFi is a natural early proving ground—and how TradFi could follow

What you'll hear us explore
The difference between traditional algo trading (single-strategy rule sets) and agentic systems (multiple specialized "analysts" + a coordinating decision layer)
Why most retail traders aren't necessarily wrong on ideas—but lose on execution and risk management
How "edge" shifts when everyone has access to powerful models: data quality, workflows, and strategy selection
What finance teaches us about the broader economy as agents move from "assistants" to "actors"

If you're curious about autonomous agents—whether you trade or not—this is a concrete, high-stakes preview of what "agentic work" could look like when the scoreboard is real.

Guest: Saad Naja, Founder, PIP World
Topics: AI agents, multi-agent swarms, algorithmic trading, market data, risk management, DeFi, agentic automation
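As referenced above, here is a toy Python sketch of the desk-in-software shape: specialist analysts emit signals in [-1, 1], and a portfolio-manager agent aggregates them into buy, sell, or hold. Every function and number is invented for illustration; this is not PIP World's system, and it is not trading advice.

    def technical_analyst(prices: list[float]) -> float:
        # Crude momentum signal: positive if the latest price is above average.
        average = sum(prices) / len(prices)
        return 1.0 if prices[-1] > average else -1.0

    def sentiment_analyst(headlines: list[str]) -> float:
        # Toy sentiment: net count of "up" vs "down" headlines, scaled to [-1, 1].
        score = sum(1 for h in headlines if "up" in h.lower())
        score -= sum(1 for h in headlines if "down" in h.lower())
        return max(-1.0, min(1.0, score / max(len(headlines), 1)))

    def portfolio_manager(signals: dict[str, float], band: float = 0.3) -> str:
        # The coordinating agent: average the specialists and only act
        # outside a neutral band, a crude stand-in for trading discipline.
        avg = sum(signals.values()) / len(signals)
        if avg > band:
            return "BUY"
        if avg < -band:
            return "SELL"
        return "HOLD"

    signals = {
        "technical": technical_analyst([100.0, 101.5, 103.2]),
        "sentiment": sentiment_analyst(["Chip guidance up", "Retail down"]),
    }
    print(portfolio_manager(signals))  # prints BUY, SELL, or HOLD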
-
111
Can AI Help Eradicate Poverty? How AI is Helping African Farmers and Teachers, w/ Opportunity International's Ama Akuamoah & Paul Essene
Can AI actually help eradicate poverty for real people, right now—not in some vague future?

We talk with two leaders from Opportunity International who are trying to do exactly that, using AI to support smallholder farmers and low-cost private schools across Africa and beyond.

In this episode of AI-Curious, we sit down with Ama Akuamoah and Paul Essene from Opportunity International's Digital Innovation Group. We explore how they're deploying AI chatbots over WhatsApp to help farmers diagnose crop diseases, optimize planting decisions, and access localized agricultural advice, and how they're building classroom tools that give overstretched teachers better lesson plans and more time for their students.

We hear the origin story of their farmer chatbot—from a mud-brick home in Malawi to pilots now running in five countries—and the 80-year-old farmer who saved her okra crop by using an AI tool through a trusted "farmer support agent." We also dig into how they use retrieval-augmented generation (RAG) grounded in local government content (a minimal sketch of the RAG idea follows these notes), why "human in the loop" is non-negotiable, and what it really takes to make AI work in communities with limited electricity, spotty connectivity, and low digital literacy.

Along the way, we talk about ethics and trust: data consent, privacy for highly vulnerable populations, and the risk of leaving people behind in this new wave of AI. And we zoom out to the bigger picture—why conversational AI in local languages could be a genuine game-changer for economic development if infrastructure, funding, and partnerships keep pace.

What we cover
[01:00] Opportunity International's mission and why they focus on farmers, teachers, and micro-entrepreneurs
[08:00] The Malawi farm-floor moment that sparked their AI journey
[09:00] How a WhatsApp-based chatbot helps thousands of farmers, and how "farmer support agents" multiply its impact
[13:40] Using RAG and local government content to keep answers accurate and context-aware
[15:30] Bringing AI into crowded, low-resource classrooms and supporting teachers with lesson plans and copilots
[20:15] The hard parts: infrastructure gaps, low-cost devices, digital literacy, and why this work is heavy lifting
[24:30] Human-centered design in action: co-creating with communities, iterating in the field, and learning from pilots
[37:50] Guardrails, consent, and building trust around AI in vulnerable communities
[41:00] What's needed for real scale: infrastructure, funding, language support, and the right partners
[43:00] Their hopeful vision for AI as a lever for economic development—if no one gets left behind

If you're interested in AI for social impact, global development, or what it really takes to deploy AI outside Silicon Valley, this conversation is a grounded, hopeful look at what's already working—and what still needs to change.
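For anyone curious what retrieval-augmented generation means mechanically, here is a minimal Python sketch. Real deployments use embeddings and a vector store; this toy version uses word overlap so it runs on the standard library alone. The sample documents and the generate() stub are invented, not Opportunity International's actual content.

    def score(question: str, doc: str) -> int:
        # Shared-word count as a crude stand-in for embedding similarity.
        return len(set(question.lower().split()) & set(doc.lower().split()))

    def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
        # Pick the k documents most relevant to the question.
        return sorted(corpus, key=lambda d: score(question, d), reverse=True)[:k]

    def generate(prompt: str) -> str:
        """Placeholder for a real model call; humans stay in the loop."""
        return f"[model answer grounded in]\n{prompt}"

    corpus = [
        "Ministry guidance: treat maize leaf spot early with an approved fungicide.",
        "Okra grows best with regular watering and well-drained soil.",
        "Lesson-planning templates and term dates for primary school teachers.",
    ]
    question = "How do I treat leaf spot on my maize?"
    context = "\n".join(retrieve(question, corpus))
    print(generate(f"Answer using only this local guidance:\n{context}\n\nQuestion: {question}"))

Grounding the prompt in a small set of vetted local documents is what keeps answers tied to approved agronomy guidance rather than whatever the base model happens to recall.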
-
110
How We Got Here and Where We're Going: AI History (and Future) w/ Vasant Dhar, Author of Thinking with Machines
Is AI making us smarter or dumber—and how do we make sure we’re on the right side of that divide?

In this episode of AI-Curious, we talk with Professor Vasant Dhar, author of the new book Thinking With Machines: The Brave New World of AI. Vasant isn’t just a historian of AI; he’s part of the story. In the 1990s, he helped bring machine learning to Wall Street, founded one of the world’s first ML-based hedge funds, and became the first professor to teach AI at NYU Stern, where he’s now the Robert A. Miller Professor of Business. He also hosts the podcast Brave New World.

We explore how AI evolved from early efforts around “thinking, planning, and reasoning” to the long era of pure prediction and machine learning, and then to today’s general-purpose models that blur the line between expertise and common sense. Vasant explains why the autocomplete problem turned out to be a gateway to something like “general intelligence,” and why that matters for how we define knowledge, understanding, and reasoning.

We then dive into finance and the search for “edge.” Vasant shares war stories from his days at Morgan Stanley, where machine learning systems quietly reshaped trading strategies and risk-taking. We unpack his work on “the DaBot,” an AI built on the writings and valuation framework of Aswath Damodaran, and what happens when every analyst and firm can tap this kind of supercharged valuation machine. Does AI erase the edge—or simply raise the bar for everyone?

Finally, we zoom out to careers, education, and everyday life. Vasant argues that AI is likely to bifurcate humanity into those who become “superhuman” by thinking with machines, and those who outsource their thinking and fall behind. We discuss how classrooms will change, why many teachers (and professors) may be more automatable than they realize, and how each of us can periodically test whether AI is making us smarter or dumber.

If you’re curious about how to work with AI rather than be replaced or outpaced by it, this conversation offers a grounded, big-picture way to think about your edge in the age of intelligent machines.
-
109
How San Jose is Harnessing AI (and What We Can Learn From It), w/ Mayor Matt Mahan
Can a city use AI to cut red tape, fill potholes faster, and shave minutes off commutes—without sliding into surveillance? We sit down with San José’s mayor, Matt Mahan, to unpack how a highly regulated public institution can adopt AI pragmatically and responsibly. In this episode, we dig into the playbook: pilots that become policy, guardrails that build trust, and workforce upskilling that actually moves the needle.

We cover how bus routes now hit fewer red lights, why real-time translation boosts civic inclusion, what “privacy by design” looks like for license-plate readers, and how a 10-week AI curriculum is turning city staff into hands-on builders. We also press on the risks—bias, privacy, and transparency—and explore where city AI is headed next: transit, permitting, and procurement.

Highlights
From pilots to scale: Bus route optimization with Light AI cut red-light hits by 50%+ and reduced travel time by 20%+, now rolling out citywide.
Inclusion by default: Real-time multilingual access (e.g., Wordly) and improved translations informed by San José’s deep Vietnamese-language data.
Eyes on the street, not faces: No facial recognition, strict retention, no third-party data sharing, and tightly controlled access to ALPR data.
Upskilling at scale: A 10-week AI curriculum (plus a data track) with San José State; staff build custom GPTs (including a budget-analysis GPT) to speed analysis.
Culture that ships: A “coalition of the willing,” clear problem statements, and a Mayor’s Office of Technology & Innovation to operationalize change.
Road ahead: Smarter mass transit, faster permitting, and streamlined procurement—practical abundance without new tax dollars.

If you’re new here, we’d love your support—subscribe on Apple, Spotify, or YouTube, and consider leaving a quick rating or sharing this episode with a colleague who’s wrestling with real-world AI adoption.
-
108
The Complicated Intersection of AI and Creativity, w/ Dr. Maya Ackerman
Does AI make us more creative—or quietly replace us?

In this episode of AI-Curious, we sit down with Dr. Maya Ackerman—author of Creative Machines: AI, Art, and Us—to probe where human creativity ends and machine creativity begins, and how incentives in Big Tech and venture capital shape the tools we all use. We explore why today’s dominant systems skew “convergent” (safe, samey, oracle-like) instead of “divergent” (surprising, generative), what that means for artists, and how to design AI that actually elevates human imagination rather than displacing it.

Why listen
We wrestle with uncomfortable truths: bias mirrored back at us, investor pressure to “replace” vs. “augment,” and the risk of a cultural sea of slop. We also map a constructive path forward—collaborative systems, richer human–AI interfaces, and a 10-year horizon where AI expands human creative range.

Guest
Dr. Maya Ackerman — AI researcher, entrepreneur, and author of Creative Machines: AI, Art, and Us.

Takeaways
AI reflects us. Bias in → bias out; representation fixes are not enough without cultural understanding.
Incentives matter. Many well-funded tools are architected to replace creators; augmentation tools are underfunded.
Creativity ≠ autocomplete. Today’s LLMs are optimized for correctness and convergence, not genuine divergence.
Better interfaces beat bigger models. Beyond “text-to-X,” human-centred, interactive tools can coach, not usurp.
A hopeful arc. With the right design, collaborative AI can measurably raise human creative ability—and stick.

Dr. Ackerman's new book: Creative Machines
https://www.amazon.com/Creative-Machines-Future-Human-Creativity/dp/1394316267
-
107
LinkedIn's Chief AI Officer, Deepak Agarwal, on AI Agents, Building Responsible AI, and the Future of Work
What does hiring look like when AI is embedded into the world’s largest professional network—and how should leaders, recruiters, and job-seekers adapt?

We sit down with Deepak Agarwal, LinkedIn’s Chief AI Officer, for a practical playbook on AI at work: production-grade AI agents for hiring, how semantic job search changes discovery, why “relevance” is the antidote to spammy outreach, and how to build a culture of responsible AI that scales. We unpack where humans stay firmly in the loop—and how AI can reduce friction, close information asymmetries, and free more time for real human connection.

Highlights
• LinkedIn’s AI agents (incl. Hiring Assistant) are in market with paying customers; routine sourcing drops from ~40 hours to a few, while humans focus on candidate fit and relationship-building.
• Semantic job search moves beyond keywords to plain-English intent and better matching across people, jobs, and knowledge.
• Responsible AI is baked in: bias detection/mitigation, rigorous pre-launch testing, and governance—treated as a must-have, not an afterthought.
• “Relevance is the key currency”: better matching reduces spray-and-pray outreach and AI-to-AI noise.
• Guidance for leaders: embrace discomfort, start from the problem (not the tool), choose the right autonomy level, and rethink testing for non-deterministic systems.
• Guidance for job-seekers: be authentic, upskill, and optimize for the next five years—not the next five months.
• Future of work: AI shrinks the 80% “prep” to expand the 20% creative/strategic work; humans remain in control.

If you’re curious about our AI & Leadership event, The Drawing Room at The Explorers Club in NYC, learn more at TheDrawingRoom.ai. If you found this useful, follow the show, rate/review, and share with a hiring leader or job-seeker who needs a clear view of what’s coming.

LinkedIn
AI-Curious
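To make “semantic job search moves beyond keywords” concrete, here is a toy sketch of the underlying idea: the query and each job post become vectors, and results are ranked by cosine similarity rather than exact keyword hits. The embed() function below is a deliberately crude bag-of-words stand-in; LinkedIn's production system uses learned embedding models whose details aren't public, so treat everything here as an illustrative assumption.

```python
# Toy sketch of vector-based job matching: embed query and jobs, rank by
# cosine similarity. embed() is a bag-of-words stand-in for a learned model.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in 'embedding': word counts (a real system uses a trained model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

jobs = [
    "Senior backend engineer building payment infrastructure in Go",
    "Product marketing manager for developer tools",
    "Machine learning engineer for recommendation systems",
]

query = "engineer who builds recommendation and ranking systems"
q_vec = embed(query)
for job in sorted(jobs, key=lambda j: cosine(q_vec, embed(j)), reverse=True):
    print(f"{cosine(q_vec, embed(job)):.2f}  {job}")
```

With a learned embedding model in place of the word counts, "plain-English intent" like the query above can match posts that share no keywords at all, which is the shift the episode describes.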
-
106
Why GEO is the New SEO--And How Businesses Must Adapt--w/ Curtis Sparrer, co-founder of Bospar
Will GEO replace SEO? (Spoiler alert: Probably!) We dig into how generative engines are reshaping discovery, why executives are already making decisions from AI answers, and what brands should do now to show up accurately and credibly in AI results.

In this episode of AI-Curious, we sit down with Curtis Sparrer, co-founder and principal at Bospar PR (and president of the San Francisco Press Club). Curtis has been experimenting across models, building a GEO toolkit (“Audit-E”), and advising companies on how to fix AI-age brand visibility—especially when models get facts wrong or elevate low-quality sources.

What we cover
GEO vs. AEO vs. classic SEO—clear definitions and where each matters
How AI engines weigh sources (and why third-party, reputable coverage now carries outsized influence)
The “AI content gold rush”: press releases, FAQs, and AI-first site architecture (schemas, structured info)
Case study: correcting a widely propagated falsehood about a client (“not dead yet”) and the steps that worked
Practical GEO hygiene: what to keep from the SEO playbook; what to adapt for AI reasoning
Pitching in the AI era: why templated, “robotic” outreach backfires and how to use AI for ideation and structure, not the final draft
Winners & losers: PR-skeptics vs. teams that proactively feed reputable signals to models
Near-term predictions: from “AI ethics” to emerging AI manners—what will be considered rude or acceptable AI use in comms

Guest
Curtis Sparrer — Co-founder & Principal, Bospar PR; President, San Francisco Press Club.

Bospar:
https://bospar.com/

Forbes coverage of Audit-E launch:
https://www.forbes.com/sites/digital-assets/2025/09/25/whats-in-your-search-why-generative-ai-is-the-new-front-door/

If you’re new here, subscribe on Apple, Spotify, or YouTube, drop a five-star rating, and share with a friend who’s wrestling with search-to-answer disruption.
-
105
Space Robots Are Here *Now*, w/ Icarus Robotics cofounders Ethan Barajas and Jamie Palmer
What happens when “space robots” stop being sci-fi set dressing and start punching a clock? We dig into a new breed of microgravity robots that do the unglamorous work—so astronauts can do more science.

In this episode of AI-Curious, we talk with Ethan Barajas (CEO) and Jamie Palmer (CTO), co-founders of Icarus Robotics, fresh out of stealth with a $6M raise. Their pitch is simple and radical: put agile, teleoperated robots inside spacecraft like the ISS to handle cargo, inspections, and maintenance—then use the resulting microgravity manipulation data to unlock partial (and eventually full) autonomy. We cover the tech, the economics (why astronaut time is so expensive), the AI roadmap, and a pragmatic path from today’s chores to tomorrow’s orbital factories and lunar bases.

What we cover
Why astronaut hours are precious—and how robots can “augment” rather than replace them
The form factor: free-flying, drone-like bodies with dual arms optimized for zero-G dexterity
Inside first, outside later: a deployment strategy that lowers safety hurdles and accelerates learning
Data advantage: building the first large microgravity manipulation dataset via continuous teleop
AI’s role: from human-in-the-loop control to primitives to scalable dexterous manipulation
Communications and latency: S-band today, laser links tomorrow; what “real-time” actually means
The “orbital factory” thesis: pharma, semiconductors, fiber optics—and servicing orbital data centers
Long-horizon forecasts: humans living and working in space; physical labor increasingly done by robots

Guests
Ethan Barajas — Co-founder & CEO, Icarus Robotics
Jamie Palmer — Co-founder & CTO, Icarus Robotics

Why this matters
If half of Earth’s GDP is labor, the space economy scales only when on-orbit labor scales. Teleoperated robots that learn from expert demonstrations—then graduate to safe autonomy—are a credible bridge from today’s stations to tomorrow’s factories, data centers, and off-world bases.

https://www.icarusrobotics.com/
-
104
AI Agents, Digital Twins, and the Future of Work, w/ Read.AI CEO David Shim
What if “AI teammates” aren’t sci-fi at all, but the next mundane tool that quietly kills Monday dread?

In this episode of AI-Curious, we sit down with David Shim, CEO of Read.ai, to unpack what workers actually want from AI, how teams are adopting agents from the bottom up, and what a practical “digital twin” might do at work—minus the Black Mirror vibes. We cover fast-path ROI (meeting notes → action items), the shift from “prompts” to ambient workflows, and why the most valuable corporate asset may soon be the storage of intelligence—the living record of how your organization thinks and decides.

What we cover
Why 70% of workers say they want AI agents—and what basic tasks deliver real ROI now
A crawl-walk-run roadmap: note-taking → briefing → follow-ups → lightweight agents → digital twin
“Storage of intelligence” as a competitive moat (institutional knowledge that doesn’t walk out the door)
Guardrails, data separation, and how to treat privacy as non-negotiable
Bottom-up adoption: why employees are forcing IT’s hand—and how leaders should respond
The macro view: augmentation vs. replacement, and the provocative idea that AI replaces computers (as the interface)

If you find this useful, we’d love a rating and a quick share with a teammate who’s piloting AI at work.

Read.AI:
https://www.read.ai/
-
103
How AI Could Help Solve Climate Change, w/ Climate Tech Expert Josh Dorfman
AI is often framed as a climate problem—energy-hungry data centers, ballooning carbon emissions, and talk of nuclear power just to keep the servers running. But could AI also become part of the solution?

In this episode of AI-Curious, we sit down with Josh Dorfman—climate tech entrepreneur and host of Supercool—to explore how artificial intelligence might help tackle climate change. Josh doesn’t offer hand-wavy promises. Instead, we dive into concrete examples where AI is already making a difference.

What we cover:
[4:17] Josh’s background at the intersection of technology, climate, and business.
[8:18] How AI data centers are impacting energy use—and why fossil fuels can’t scale to meet demand.
[12:30] The role of nuclear, geothermal, and solar-plus-storage in powering AI sustainably.
[23:25] AI-optimized school buses: how Oakland electrified its fleet with fewer vehicles.
[27:44] BrainBox AI and smarter buildings: cutting emissions through predictive HVAC optimization.
[31:42] AI in waste management: from pneumatic trash tubes to AI sorting recyclables.
[41:17] Big-picture futures: AI efficiency, plummeting solar costs, and the possibility of “trivially cheap” energy.

The conversation blends realism with optimism—grounded in the challenges of energy demand, yet hopeful about AI-driven solutions in transportation, buildings, waste, and renewable power.

If you’ve ever wondered whether AI can be more than an energy drain—and instead help drive sustainability—this episode offers both perspective and inspiration.

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
102
Can AI Be Funny? With ComedyBytes’ Eric Doyle
Can artificial intelligence actually be funny, or is humor still a human stronghold? We explore that question with Eric Doyle, co-founder of ComedyBytes, a Brooklyn-based multimedia comedy show where AI and humans face off in roast battles, dating games, and other interactive formats. Doyle combines the craft of stand-up with the tools of generative AI, building AI characters like “AI Kanye West” or “AI Sarah Silverman” that deliver pre-scripted jokes in real time.

In this episode of AI-Curious, we dig into:
[0:52] The story behind ComedyBytes and its AI-powered format
[3:46] How AI roast battles work, from concept to stage mechanics
[7:53] Using tools like ChatGPT, Claude Sonnet, and Gemini AI to write jokes
[12:55] The art of prompting for humor and boosting the “funny hit rate”
[16:36] Why specificity matters in generative AI comedy
[23:43] Inside the “Data-ing Game,” an AI twist on the classic dating game
[25:58] Can AI really be funny—or just imitate the structure of humor?
[32:30] The triple, the listing technique, and other joke-writing structures AI can learn
[39:10] Advice for non-comedians using AI to add humor
[41:24] The future of AI in entertainment and its impact on creators

From the structure and anatomy of a joke to the ethics of deepfake comedy, this conversation blends technology, performance, and the evolving role of AI in creative work. Whether you’re an AI enthusiast, a comedy fan, or simply curious about where these worlds collide, this is a look at AI and humor you haven’t heard before.
-
101
The New Jobs That AI Might Create, w/ Robert Capps (NYT Magazine Contributor)
Is Kant the new code? If AI can write, code, and even plan, which human skills suddenly become scarce—and valuable?

In this conversation with Robert Capps (former Editorial Director of Wired, contributor to The New York Times Magazine), we dive into his widely shared NYT Mag feature, “AI Might Take Your Job. Here Are 22 New Ones It Could Give You.” We unpack the three big buckets of new work he sees emerging—Trust, Integrators, and Taste—and explore why philosophy majors, auditors, and “AI translators” may be the surprise winners. We also get frank about hallucinations, over-extrapolation, inequality, lethal autonomous weapons, and why Rob still comes out more optimistic.

In this episode of AI-Curious, we:
Break down Rob’s three buckets of future AI jobs: Trust (auditors, ethicists, legal guarantors), Integrators (the translators who know both your business and the models), and Taste (the Rick Rubin-esque role of vision, judgment, and curation).
Talk about why Ethan Mollick refuses to let AI write his first drafts—and why that matters for your own thinking.
Examine how “the tools will be commodities, not the people,” and what that means for founders, creators, journalists, and scrappy upstarts.
Get into the very real risk of inequality and policy paralysis—and why UBI isn’t a satisfying answer.
Preview Rob’s documentary on AI weapons and the fight to keep humans in the loop.

Takeaways
Trust work explodes. Expect a cottage industry of auditors, ethicists, and “legal guarantors” to ensure AI output is accurate, defensible, and compliant.
Integrators win inside companies. The most valuable people will be those who can translate between business reality and fast-moving model ecosystems.
Taste is leverage. Vision, taste, and editorial judgment—knowing what good looks like—become the human moat.
Beware first-draft capture. Letting AI write your first draft can quietly dominate your thinking (Mollick’s rule is worth adopting).
Inequality is the real threat. Most experts Rob spoke with fear a rapid widening of inequality more than mass permanent joblessness.
Tools, not people, become commodities. When everyone has Goldman-tier tools, expect disruption from the bottom, not reinforcement of the top.

Rob’s NYT Magazine piece: “AI Might Take Your Job. Here Are 22 New Ones It Could Give You.”
https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html
-
100
AI and Education: Inside the AI Solution Partnering with Denver Public Schools, w/ Dr. Michael Everest
Could AI actually improve public education? Not just automate it, but make it more personalized, more equitable — and even more human?

We explore this possibility with Dr. Michael Everest, founder of edYOU, an AI tutoring platform being piloted in a Denver-area school district. While many worry that AI could become a shortcut for students to avoid real learning, Everest argues the opposite — that AI can reinforce understanding, boost confidence, and offer 24/7 support tailored to each student’s needs.

In this episode of AI-Curious, we dig into the real-world mechanics of how this works — including partnerships with schools, how teachers interact with the platform, and what kind of results they’re seeing so far.

We also ask the tough questions: What about data privacy? What about bias and hallucinations? Is there a risk we’re outsourcing critical thinking? And what does the future of education look like if every student has a lifelong AI companion?

Topics include:
The promise and pitfalls of AI in classrooms
edYOU’s pilot program with Adams 14 School District
How the AI tutoring platform personalizes learning
The role of teachers in an AI-enhanced education system
Oversight, privacy, and academic integrity
The vision of a lifelong AI learning companion

Whether you’re a parent, educator, technologist, or just curious about where education is headed, this conversation offers a grounded, hopeful — and at times provocative — look at the future of learning.
-
99
AI's Impact on History Writing and Journalism, w/ The New York Times Magazine's Editorial Director Bill Wasik
What happens when AI becomes a co-pilot for writers, researchers, and journalists — not in theory, but in practice?

In this episode of AI-Curious, we speak with Bill Wasik, Editorial Director of The New York Times Magazine, who recently oversaw their special issue, “Learning to Live with AI.” We explore how AI is already transforming journalism, nonfiction writing, and historical research — and why the most interesting impacts may come not from content creation, but from how we discover, organize, and interpret information.

We dig into the creative tension between AI and human storytelling, including how historians are using tools like NotebookLM to tackle research projects previously deemed impossible. Bill shares how AI can augment writing workflows without compromising editorial judgment — and why trust and authorship still matter in a world of fast content.

We also cover:
The risks of over-relying on AI for research (19:45)
How AI might transform local journalism and accountability (41:30)
The evolving AI policies at The New York Times (29:40)
Whether AI could ever win the Booker Prize — and what that would mean (7:30)
Use cases from historians and academics using ChatGPT (26:00)

Bill's (excellent) piece: "AI is Poised to Rewrite History. Literally."
https://www.nytimes.com/2025/06/16/magazine/ai-history-historians-scholarship.html

The NYT Magazine's Special Issue:
https://www.nytimes.com/2025/06/16/magazine/using-ai-hard-fork.html
-
98
The (Data-Driven) Top AI Trends, w/ the CEOs of HumanX and Read.AI
What are the top minds in AI actually talking about behind closed doors?

At the HumanX conference—arguably the flagship event in the AI ecosystem—hundreds of speakers (from CEOs to policymakers to Kamala Harris) shared their unfiltered thoughts on the state and future of artificial intelligence. But with so much happening at once, even attendees couldn’t absorb it all.

So HumanX did something novel: they partnered with Read.AI to record and synthesize every single session. The result? A real-time AI copilot for the conference and a post-event report that reveals the key themes, trends, and tensions shaping the industry.

In this episode, we speak with HumanX CEO Stefan Weitz and Read.AI CEO David Shim to unpack the insights from that report—what they signal for 2025, what business leaders should pay attention to, and what’s probably just noise.

We talk about the rise of agentic AI, the shift from AGI ambition to ROI expectations, and the practical realities of implementing AI inside large organizations. We also dig into issues of trust, open source, industry-specific adoption, and how AI is starting to reshape roles from customer service to legal to healthcare.

Whether you’re in strategy, ops, tech, or just trying to keep up, this conversation offers a data-driven pulse check on where enterprise AI is headed.

Highlights & Timestamps:
[1:00] – How Read AI became the official AI copilot of the HumanX conference
[3:10] – “You can’t be everywhere at once”—the problem this tech solves at events
[6:15] – The most talked-about concept at HumanX: agentic AI
[7:45] – Why AGI hype is shifting toward practical use cases with agents
[8:58] – The fast hype-decay cycle of AI and the emerging focus on outcomes
[12:26] – Open source, cost savings, and why business leaders care about transparency
[14:19] – Trust as the “anchoring tenet” of enterprise AI adoption
[16:45] – Real ROI: how Read AI identified $10M in sales pipeline in 30 days
[20:03] – Why companies are hiding their AI wins from competitors
[22:43] – Cross-industry learnings: how healthcare patterns may apply to other sectors
[25:47] – The “put up or shut up” moment: 2025 as the year AI must deliver
[29:06] – What business leaders should do before launching AI agent initiatives
[35:03] – The #1 mistake orgs make with AI: failing to assign ownership
[37:09] – Predictions: personalization, interoperability, and privacy friction ahead
[42:28] – How Stefan and David personally use AI—for work, fun, and creative hacking

Links & Mentions:
HumanX – Flagship AI conference co-founded by Stefan Weitz
Read AI – Productivity-focused AI platform by David Shim
Suno – AI music generation tool mentioned by Stefan
Replit – AI coding sandbox used by Stefan for strategy visualization
Veo by Google DeepMind – AI video generation tool referenced by David

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
97
Introducing "AUI": Artificial Useful Intelligence, w/ IBM's Chief Scientist Dr. Ruchir Puri
What if we’re all chasing the wrong kind of AI? Dr. Ruchir Puri, Chief Scientist of IBM, argues that Artificial General Intelligence (AGI) is overrated—and that we should be focusing instead on AUI: Artificial Useful Intelligence. This is a pragmatic, business-focused approach to AI that emphasizes real-world value, measurable outcomes, and implementable solutions.

In this episode of AI-Curious, we explore what AUI actually looks like in practice. We discuss how to bring AI into your organization (even if you’re just getting started), why IBM is betting big on small language models (SLMs), and how companies can move beyond hype toward real, trustworthy AI agents that do actual work.

You’ll also hear:
Why AI usefulness is a function of both quality and cost [00:11:00]
The “crawl, walk, run” strategy IBM recommends for business adoption [00:14:00]
Internal IBM examples: HR systems and coding assistants [00:16:00]
Why SLMs may be a smarter bet than LLMs for many enterprises [00:37:00]
A breakdown of how agentic systems are evolving to reflect, act, and self-correct [00:41:00]

Whether you’re leading a startup or an enterprise, this conversation will help you reframe how you think about deploying AI—starting not with hype, but with value.

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
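As a footnote to the [00:41:00] discussion of agentic systems that reflect, act, and self-correct, here is a minimal sketch of that loop. The act(), reflect(), and escalation logic below are toy stand-ins of our own invention, not IBM's architecture.

```python
# Minimal sketch of a "reflect, act, self-correct" agent loop: attempt a task,
# critique the attempt, revise, and stop (or escalate) after a bounded number
# of rounds. The task, critique, and revision steps are toy placeholders.

from typing import Optional

def act(task: str, feedback: Optional[str]) -> str:
    """Toy 'action': produce an attempt, incorporating feedback if given."""
    attempt = f"draft answer for: {task}"
    if feedback:
        attempt += f" (revised to address: {feedback})"
    return attempt

def reflect(attempt: str) -> Optional[str]:
    """Toy 'reflection': return a critique, or None if the attempt passes."""
    if "revised" not in attempt:
        return "missing citation of source data"
    return None  # self-check passed

def run_agent(task: str, max_rounds: int = 3) -> str:
    feedback = None
    for _ in range(max_rounds):
        attempt = act(task, feedback)
        feedback = reflect(attempt)
        if feedback is None:
            return attempt  # passed its own review
    return attempt          # bounded retries exhausted: hand off to a human

print(run_agent("summarize Q3 support tickets"))
```

The bounded retry is the practical point: a useful agent corrects itself a fixed number of times and then escalates, rather than looping forever.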
-
96
A Conversation with the AI Pioneer Who Coined ‘AGI’ — Dr. Ben Goertzel
What exactly is AGI—Artificial General Intelligence—and how close are we to achieving it? Will it transform the world for better or worse? And how can we even tell when true AGI has arrived?

In this episode of AI-Curious, we sit down with Dr. Ben Goertzel, the iconic computer scientist who coined the term AGI more than 20 years ago. As the founder of SingularityNET and the Artificial Superintelligence Alliance, Ben has spent decades thinking about the architecture, risks, and potential of general intelligence.

We explore why today’s large language models (LLMs), while powerful, still fall short of true AGI—and what will be needed to bridge that gap. We dive into Ben’s prediction that AGI could arrive within just 1 to 3 years, and why he believes it will likely be decentralized. Along the way, we unpack some of the key ideas from his recent “10 Reckonings of AGI”—a candid look at the social, economic, and existential questions we must face as AGI reshapes human life.

Topics include:
[00:04:00] What AGI really means vs. current LLMs
[00:10:00] Are we reaching the limits of current AI architectures?
[00:13:00] How will we know when AGI has truly arrived?
[00:17:00] The “PhD test” for human-level AGI
[00:19:00] AGI timeline predictions (1–3 years? 2029?)
[00:29:00] The 10 Reckonings of AGI: key societal impacts
[00:36:00] The gap between AGI and superintelligence
[00:44:00] Why a decentralized AGI might be safer
[00:51:00] Surprising upsides of a post-AGI world

If you’re curious about the future of artificial intelligence, this conversation offers a rare and unfiltered perspective from one of the field’s most original thinkers.

SingularityNET
https://singularitynet.io/
Ben Goertzel on X
https://x.com/bengoertzel

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
95
Should AI Agents Be Trusted? The Problem and Solution, w/ Billions.Network CEO Evin McMullen
What happens when an AI agent says something harmful, or makes a costly mistake? Who’s responsible—and how can we even know who the agent belongs to in the first place?

In this episode of AI-Curious, we talk with Evin McMullen, CEO and co-founder of Billions.Network, a startup building cryptographic trust infrastructure to verify the identity and accountability of AI agents and digital content.

We explore the unsettling rise of synthetic media and deepfakes, why identity verification is foundational to AI safety, and how platforms—not users—should be responsible for determining what’s real. Evin explains how Billions uses zero-knowledge proofs to establish trust without compromising privacy, and offers a vision for a future where billions of AI agents operate transparently, under clear reputational and legal frameworks.

Along the way, we cover:
The problem with unverified AI agents (2:00)
Why 50% of online traffic is now bots—and why that matters (2:45)
The Air Canada chatbot legal fiasco (15:00)
The difference between chatbots and agentic AI (13:00)
What “identity” means in an AI-first internet (10:00)
Deepfakes, misinformation, and the limits of user responsibility (22:00)
Billions’ “deep trust” framework, explained (29:00)
How platforms can earn trust by verifying content authenticity (34:00)
Breaking news: Billions’ work with the European Commission (38:20)

This one dives deep into the infrastructure of digital trust—and why the future of AI may depend on getting this right.

Learn more: https://billions.network

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
94
Agentic AI Case Study: AI "Sales Agents" in Action, w/ Alta CEO Stav Levi-Neumark
What exactly are AI agents doing out in the wild — and are they actually helping sales teams, or just adding noise?

We explore the fast-evolving world of AI sales agents with a real-world case study from Alta, a startup deploying purpose-built AI agents named Katie, Alex, and Luna. In this episode of AI-Curious, we speak with Stav Levi-Neumark, Alta’s CEO and co-founder, about how agentic AI is already transforming sales workflows—from prospecting to pipeline generation to inbound response.

We look under the hood to examine how these agents operate, what distinguishes them from chatbots, and how they interact with human reps. We also explore the ethics, limitations, and future of AI-human collaboration in business development—and what it means to build trust in AI systems.

Whether you work in sales, lead a startup, or are just curious about how AI tools are functioning in the real world, this conversation offers a sharp, concrete look at the tech reshaping how companies grow.

Topics and Timestamps:
00:00 — What are AI agents doing, really?
01:00 — Behind the scenes at HumanX and the rise of Alta
03:00 — Meet Katie, Alex, and Luna: AI sales agents with defined roles
08:00 — How AI agents qualify leads and determine buying intent
13:00 — The distinction between chatbots and agentic AI
18:30 — How AI agents can avoid becoming spammy LinkedIn bots
22:00 — Alex the calling agent: real-time inbound response and transparency
28:00 — What tools make an AI agent different from a workflow
31:00 — Autonomy, decision-making, and sales team augmentation
36:00 — The challenge of trust and how to begin using AI agents at low risk
39:00 — Stav’s journey as a founder and the four-generation Vegas trip
42:00 — How she uses AI personally: support agents, internal tools, and therapy
44:30 — Advice for sales professionals on embracing AI
46:30 — Predictions: what sales might look like in 5 to 10 years

Let us know what you think—and whether Luna is getting the respect she deserves.

About Alta:
https://www.altahq.com/

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
93
How "Generation AI" Will Reshape the World, w/ Futurist Matt Britton
What does it mean to grow up in a world where AI is simply… normal? In this episode of AI-Curious, we're joined by Matt Britton, branding strategist and author of Generation AI: Why Generation Alpha and the Age of AI Will Change Everything, for a wide-ranging discussion about how artificial intelligence is transforming childhood, education, branding, healthcare, and the very fabric of society.

We look at how Gen Alpha will interact with AI “sidekicks,” how the role of brands might erode in a world of personalized AI agents, and what this all means for marketers, parents, and future professionals. Matt also lays out why the traditional four-year college degree may collapse, how one-person billion-dollar companies are now within reach, and what core skills the next generation—and the rest of us—will need to thrive.

We close with practical advice on how to future-proof ourselves in the age of intelligent agents, synthetic creativity, and AI-native consumers.

Topics include:
(0:00) Gen Alpha and the future of AI-native childhood
(3:03) How AI will transform marketing and branding
(7:04) LLMs, AI agents, and the death of traditional demographics
(14:01) Rethinking education and the collapse of the knowledge economy
(17:10) One-person billion-dollar companies and the new AI tech stack
(21:40) Personalized healthcare, diagnostics, and AI as your doctor
(25:21) One simple way to future-proof your life in an AI-powered world

Let us know what you think, and if this episode gets your gears turning, share it with someone who’s curious about the future.

Matt Britton:
https://mattbritton.com/

The new book Generation AI:
https://www.amazon.com/Generation-AI-Alpha-Change-Everything/dp/139430885X

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
92
The "Dean of AI" on AI and Education, AI vs. Academia, and What Students Actually Need to Know
How is AI reshaping higher education? What do students actually need to learn in the age of large language models? And why might skills-based learning soon rival or even replace the traditional degree? In this episode of AI-Curious, we speak with Ben Tasker, "Dean of AI" at Southern New Hampshire University. Ben walks us through the structure of his “applied AI” curriculum, how his team uses AI to build courses in just 14 days, and why prompt engineering is becoming an essential literacy. We also break down the CREATE framework for prompting, designed to help users—students and professionals alike—level up their use of tools like ChatGPT.

Other topics include:
The tension between AI adoption and academic cheating
Human skills vs. AI skills, and the World Economic Forum’s two-skill model
The future of education: personalization, affordability, and AI-enabled scale
How executives should rethink AI integration—with intention, not just experimentation
Why OpenAI Academy’s recent pivot could signal a broader shift in how we teach AI

Whether you’re a student, educator, or business leader trying to find your footing in the in-between era of AI, this conversation offers a grounded, pragmatic perspective.

Ben Tasker:
https://www.bentaskerai.com/bens-ai-portfolio-thought-leadership

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
91
Why You Should Break Up With Your AI Lover, w/ Book Author and Historian Jennifer Wright
Millions of people are finding companionship -- or even love -- with AI bots. (The heart wants what the heart wants.) In this episode of AI-Curious, we dive into the strange, sticky, and sometimes surprisingly emotional world of AI romantic partners.

We’re joined by author and historian Jennifer Wright, who recently wrote a sharp op-ed for The Washington Post titled “Please Break Up With Your AI Lover.” While millions are already turning to AI bots for companionship, Jennifer makes a compelling case for why this trend—though rooted in real loneliness—could be deeply damaging to our ability to connect, to grow, to give love.

We unpack:
The surface problems of AI boyfriends/girlfriends, from poor memory to the illusion of affection
Why constant flattery from bots may feel good—but ultimately erodes something essential in real relationships
The importance of sacrifice, caregiving, and challenge in human connection
Whether AI companions, even with future upgrades, can ever replicate the messy magic of real love
What AI fiction gets wrong—and why most AI-generated stories are (still) terrible
Her take on AI and historical accuracy, and the troubling implications of getting facts wrong at scale
Why creativity matters even when the output is bad—and what we lose when we outsource it

Plus, Jennifer gives us a preview of her next book about America’s Gilded Age, and we swap war stories about terrible first novels, romantic relationships, and the puppy-induced joys of caretaking.

Let’s get curious.

Links from this episode:
Jennifer Wright’s Washington Post op-ed: Please Break Up With Your AI Lover
Jennifer’s latest book: Madame Restell: The Life, Death, and Resurrection of Old New York’s Most Fabulous, Fearless, and Infamous Abortionist
Jennifer’s upcoming book: Glitz, Glam, and a Damn Good Time: How Mamie Fish, Queen of the Gilded Age, Partied Her Way to Power.

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
90
LGMs are the New LLMs, w/ Spexi CEO Bill Lakeland
LLMs are *so* 2024. Say hello to LGMs. In this episode of AI-Curious, we explore a side of AI that rarely gets the spotlight: Large Geospatial Models (LGMs). You’ve heard of LLMs — now meet their real-world counterpart. LGMs could power everything from autonomous vehicles and urban planning to smarter first-response systems and industrial logistics.

We speak with Bill Lakeland, CEO of Spexi, a company building a decentralized fleet of self-flying drones that are actively mapping the planet — one 25-acre hex at a time. We dig into how these drones collect fresh, high-res data; how that data is authenticated and standardized; and how it could enable a new generation of real-world AI applications.

Along the way, we discuss:
• (5:00) What LGMs are, and how they differ from LLMs
• (13:07) Why real-time geospatial data is crucial for first responders
• (20:33) How drones fit into the LGM landscape
• (25:12) How Spexi’s autonomous drone “Spexigons” work
• (31:43) Why their aerial data is 900x more detailed than Google Earth
• (36:26) How they’re addressing privacy, regulation, and standardization
• (40:02) What happens after the footage is collected — and how it’s turned into insights
• (45:35) The future integration of LGMs and LLMs
• (48:00) Speculative futures: how LGMs could change everyday life

If you’re curious about AI in the physical world — not just on screens — this is a conversation you won’t want to miss.

Spexi:
https://www.spexi.com/

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
89
How Leaders and the C-Suite are Using AI, w/ Sarah Franklin, the CEO of Lattice
What's *actually* happening with AI in the halls of the C-suite? In this episode of AI-Curious, we speak with Sarah Franklin, CEO of Lattice and former CMO of Salesforce, about how business leaders are adapting to the rapid rise of AI in the workplace. We explore what it means to lead during a time of disruption, how AI agents and digital workers are already being deployed across organizations, and why the biggest challenge is often not the tech—but the mindset.

Sarah shares her perspective on where AI is delivering real value today, from HR to sales to internal knowledge management, and she offers a candid take on how leaders can navigate change while maintaining trust, transparency, and human-centric values.

We also dive into the future: digital twins, avatar-led meetings, and what it might mean to lead a team that includes both humans and AI agents.

TOPICS & TIMESTAMPS
00:00 – Why we’re still early in understanding AI leadership
02:42 – Why past tech shifts (like cloud and mobile) don’t compare to AI
05:51 – Lattice’s mission and how AI fits into the HR stack
09:38 – Are AI agents real or still theoretical?
14:21 – Where AI agents are being used today (sales, service, HR)
20:02 – Digital twins and what leadership looks like with AI teammates
26:28 – How to address employee anxiety about AI and job security
30:19 – Sarah’s favorite personal and professional use cases for AI
34:59 – Prompting frameworks and why specificity matters
38:43 – A forecast for the future of AI and HR

Lattice

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
88
Meet Flynn, the World's First AI Student... w/ the co-creator of Flynn
What happens when an AI student enrolls in university? In this episode of AI-Curious, we explore the fascinating case of Flynn, an AI artist and the first AI to be admitted as a student at the University of Applied Arts Vienna. We speak with Chiara Kristler, one of Flynn’s co-creators, and Anika Meier, the curator of The Second-Guess: Body Anxiety in the Age of AI, where Flynn makes its artistic debut.

Flynn is more than just an AI chatbot—it engages in conversations, generates art, and even passed a verbal university admissions interview. But its presence raises big questions: Can AI be creative? How does AI challenge human artists? What does an AI student mean for the future of education?

We discuss:
• (00:00) Introduction to Flynn and why this is a breakthrough moment for AI
• (03:15) How Flynn was created and what makes it different from other AI agents
• (07:55) The university admissions process and how Flynn passed the interview
• (12:30) Can AI actually be creative, or is it just mimicking human artists?
• (18:04) The tension between AI and human artists—competition or collaboration?
• (24:22) AI fatigue, digital exhaustion, and the ethics of AI in creative spaces

Flynn’s journey raises deep questions about AI, creativity, and human identity. Join us for a thought-provoking conversation on what happens when machines start learning alongside us.

🔗 Talk to Flynn: https://i-am-flynn.web.app/

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
87
5 AI Takeaways from HumanX + AI Ethics w/ Defined.AI
We’re reporting live from HumanX, one of the world’s largest AI conferences, where we’ve spent three intense days immersed in AI discussions, moderating panels, and speaking with some of the most influential voices in the industry. In this episode, we break down five takeaways from the conference, covering everything from AI’s growing role in politics to the future of AI-driven customer service and automation.

Then, we sit down with Daniela Braga, CEO of Defined AI, a leader in ethically sourced AI training data. We discuss the urgent need for transparency in AI data sourcing, the dangers of bias in large language models, and the growing demand for differentiated AI solutions.

5 takeaways from HumanX:
• AI is becoming political: Kamala Harris’s speech and the future of AI policy
• The death of human customer support and the rise of AI customer support (hot take: this is a good thing!)
• AI agents are finally getting real—how companies are deploying them today
• Businesses are scrambling to implement AI automation at scale
• Leaders are more AI-curious than ever—how executives are integrating AI into decision-making

Interview with Daniela Braga (CEO of Defined AI):
• The problem with AI training data: bias, legal concerns, and digital exploitation
• Why most AI models sound the same—and how that could change
• The myth that we’re “running out of data” and the future of AI training sets
• What’s next? Predictions for AI regulation, ethical sourcing, and industry differentiation

It’s a packed episode full of insights straight from the conference floor.

Defined.AI:
https://defined.ai/

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
86
The Hype-Free Guide to AI and Business, w/ Industry Insider Peter Swimm
There’s no shortage of hype about AI revolutionizing business, but what’s actually working right now? In this episode, we sit down with Peter Swimm, founder of the AI consultancy Toilville and former Principal Product Manager at Microsoft Copilot Studio, to cut through the noise and explore the real-world use cases of conversational AI.

Peter has been working on AI-powered automation for over a decade, helping companies integrate chatbots, AI agents, and large language models into their workflows. Unlike many AI evangelists, he isn’t here to drink the Kool-Aid. Instead, he lays out the hard truths about what AI can and can’t do today—including why fully autonomous AI agents are still far from reality.

We break down:
🔹 The most successful AI use cases in businesses today (hint: FAQ bots and automated customer support are actually delivering results)
🔹 The biggest misconceptions about AI agents and why they aren’t ready for prime time
🔹 How AI is transforming communication workflows, from email drafting to meeting summaries
🔹 The dangers of over-relying on AI for brainstorming and creative thinking
🔹 Practical tips for integrating AI into business operations without falling into the hype trap

⏱ Timestamps:
3:14 – Peter’s background in AI and Microsoft Copilot Studio
7:34 – What is Conversational AI and how has it evolved?
10:01 – The AI use cases that are actually delivering value in business
12:30 – AI’s role in communication automation and workflow efficiency
17:14 – The challenge of making AI-generated emails sound human
22:58 – Is AI actually good for brainstorming, or is it making us less creative?
29:36 – The truth about AI agents—why businesses aren’t adopting them yet
34:31 – Will AI agents ever work as promised? The barriers to real automation
38:36 – Smart ways to use AI for efficiency without giving up human oversight
40:26 – How Peter personally uses AI in his daily work

Toilville:
https://www.itstoilville.com/

🎧 Subscribe to AI-Curious:
• Apple Podcasts
https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify
https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube
https://www.youtube.com/@jeffwilser
-
85
How AI is Transforming Drug Discovery (and Might Cure Cancer), w/ Moderna's VP of AI, Brice Challamel
In this episode of AI-Curious, we explore one of the most profound applications of artificial intelligence—drug discovery and the future of personalized medicine. AI is already transforming how we develop treatments, and at the forefront of this revolution is Brice Challamel, VP of AI Products and Platforms at Moderna.

We dive deep into how AI is accelerating the discovery of life-saving drugs, including how mRNA technology—the foundation of Moderna’s COVID-19 vaccine—is shaping the next generation of treatments for cancer, rare diseases, and viruses. Brice shares insights on AI-powered protein design, how AI can give us an “evolutionary boost” in medicine, and what the future of personalized medicine looks like.

Key Topics:
• (3:50) Brice’s journey from Google to Moderna—why AI in biotech is his biggest passion
• (8:56) How AI played a critical role in the rapid development of Moderna’s COVID vaccine
• (12:28) The role of AI in drug discovery: understanding proteins and optimizing treatments
• (17:18) The rise of personalized cancer treatments—could AI help cure cancer?
• (30:10) The future of medicine: How AI is unlocking breakthroughs across multiple diseases
• (39:59) The biggest challenges—clinical trials, regulatory frameworks, and operational hurdles
• (46:02) AI’s impact on data security and patient privacy
• (49:05) The future of AI-powered healthcare: a world of personalized treatments, AI-driven patient care, and faster drug development

This episode is a must-listen for anyone curious about how AI is reshaping medicine, the ethics and challenges of deploying AI in healthcare, and what lies ahead for biotech, drug discovery, and personalized medicine.
-
84
New Pod from Jeff Wilser: "The People's AI: The Decentralized AI Podcast"
aaaaaand... I'm launching a new podcast! Don't worry, fans of AI-CURIOUS -- that's not going anywhere. This is a second podcast with its own independent feed. I'll still be producing AI-Curious each week.

The new pod is called THE PEOPLE'S AI: The Decentralized AI Podcast, presented by Vana. In the future, this will be available on all the usual podcast platforms (links to subscribe below).

But for this week, I'm dropping THE PEOPLE'S AI into the feed for AI-Curious. It's a good primer on what Decentralized AI is all about. AI-Curious will continue to be a general interest AI podcast. AI and Hollywood one week, AI and the military the next. The People's AI is a different beast...

Subscribe to The People's AI on Apple:
https://podcasts.apple.com/us/podcast/the-peoples-ai-the-decentralized-ai-podcast/id1792518750?i=1000694109163

Subscribe to The People's AI on YouTube:
https://www.youtube.com/channel/UCnLiYlJulQIcmvCjnVRYotw

Subscribe to The People's AI on Spotify:
https://open.spotify.com/show/3XFmg6Lqf7nnKwMkicBx8L?si=8c43eac26fd7445e

The People's AI on Twitter/X:
https://x.com/The_Peoples_AI

Jeff Wilser on Twitter/X:
https://x.com/jeffwilser

____________

In our debut episode of The People’s AI: The Decentralized AI Podcast, Presented by Vana, we dive into one of the most critical questions of the AI era: Who should own AI?

As artificial intelligence becomes increasingly embedded in daily life, its ownership and governance will shape the future. Big Tech dominates AI development today, but a growing movement believes AI should be decentralized, open, and user-owned.

We speak with Anna Kazlauskas, co-founder of Vana, and Illia Polosukhin, co-founder of NEAR, to explore how decentralized AI could shift power away from centralized corporations and into the hands of individuals.

Key Topics Covered:
• What decentralized AI means and why it matters
• How AI models are built and trained—and who controls them
• The intersection of AI, data sovereignty, and blockchain
• The potential risks of centralized AI, from bias to economic concentration
• How AI assistants, autonomous agents, and data unions are reshaping the internet
• Predictions for the next 1-5 years in AI and decentralized technologies

About Vana:
Vana's vision is for user-owned AI through user-owned data. Its mission is to be the world's first open protocol for data sovereignty. Sign up for the first AI Data Summit, hosted by Vana, on Feb 28 in Denver. This will be the go-to event at ETHDenver with leaders at the forefront of Decentralized AI tech and applications.

AI Data Summit, hosted by Vana:
https://lu.ma/aidatasummit

More on Vana:
https://linktr.ee/vanahq

Vana on Twitter/X:
https://x.com/vana

Anna Kazlauskas on Twitter/X:
https://x.com/anna_kazlauskas

NEAR: The Blockchain for AI
https://near.org/

NEAR on Twitter/X:
https://x.com/NEARProtocol
No matches for "" in this podcast's transcripts.
No topics indexed yet for this podcast.
Loading reviews...