Women talkin' 'bout AI

PODCAST · education


Two women examining AI through a lens of power, not just capability. Why deepfakes target women. How bias gets baked in. What tech companies aren't saying. Kimberly brings corpus linguistics; Jessica brings strategy. Both bring skepticism, feminism, research expertise, and a refusal to take the hype at face value. Subscribe to our channel if you're also interested in understanding AI behind the headlines.

  1. 48

    The AI Adoption Trap: Why Women's Hesitation Is Rational — and Who's Really Responsible for Fixing It

    We keep being told the problem is women's hesitation around AI, that we need to adopt faster, skill up, and get in the game. But what if the hesitation is the rational response? And what if the systems telling us to move faster are the same ones punishing us when we do?

    This week, Kimberly and Jessica talk with Nikki Meller, founder and CEO of CreduEd and DocuCred AI, a member of the Tech Council of Australia, and the founder of Women in AI Australia. Nikki brings a rare combination of on-the-ground organizing and firsthand experience as a female tech founder who has navigated investment rounds, built a development team, and made it to pitch week in San Francisco — all from a nursing background.

    The conversation centers on a problem that's structural, not individual: organizations hand employees an AI platform with no governance, no training plan, and no reassurance about job security, then interpret the resulting hesitation — which falls disproportionately on women — as a capability gap. Nikki makes the case that this hesitation is actually a form of due diligence, and that the "competence penalty" documented in recent research (AI-assisted work rated as less competent, with the penalty larger for women) reframes the whole "women are behind on AI" narrative as a trap rather than a failing.

    Topics covered:
    What the Harvard Business Review's coverage of the "competence penalty" research actually shows — and why it reframes women's AI hesitation as rational risk assessment
    How organizational culture creates the AI gender gap before policy ever enters the picture
    Australia's National AI Strategy: what it gets right, where it mentions women (spoiler: mostly in the context of abuse and safety risk, not leadership or capability), and what that omission signals
    The data aggregation problem: why lumping women, First Nations people, people with disability, and remote communities into a single "disadvantaged group" makes the research almost useless
    Why "the leaky pipeline" is the wrong frame — and what better language would look like
    What governments and organizations would actually have to do for "innovation is inclusive" to become more than a tagline

    Guest: Nikki Meller is the founder and CEO of CreduEd and DocuCred, a member of the Tech Council of Australia, and the founder of Women in AI Australia. You can find her and the organization at womeninai.org.au and on LinkedIn.

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  2. 47

    Quantum Computing and AI (and Who Gets to Explain Things)

    In this episode, Jessica teaches Kimberly quantum computing — and we mean that literally. Starting from classical bits and working through superposition, Schrödinger's cat, the observer effect, and Google's Willow chip, Jessica builds a surprisingly intuitive explanation of what quantum computers actually do and why they matter for the future of AI.

    But the episode starts somewhere else, with the phone call Jessica made after we stopped recording, questioning whether she should have tried to explain something she isn't formally trained in. That moment opens a bigger conversation about why women hesitate to speak publicly in technical spaces — not because they lack knowledge, but because the social penalties for being visibly uncertain are higher.

    We cover:
    How classical computers work (bits, binary, the basics)
    What makes quantum computers fundamentally different (superposition, qubits, the observer effect)
    Schrödinger's cat — what it actually means and why a physicist would argue the cat is both dead and alive
    The double-slit experiment and why watching something changes what it is
    How Google's Willow chip did in five minutes what would take a classical computer longer than the age of the universe — and why you should read that headline carefully
    Why quantum computers are kept colder than outer space
    The three possible futures for quantum computing and what each would mean for everyday life
    The connection to AI — why quantum could speed up model training and what that actually looks like
    Who controls access to this technology, and why that question sounds familiar
    The research on why women adopt new technologies more slowly — and what it has to do with self-silencing, impostor syndrome, and gendered penalties for public uncertainty

    Links
    Women, voice, and silence:
    bell hooks — National Women's History Museum: bell hooks
    bell hooks and feminism — Equal Rights Advocates: 10 rules: following bell hooks' instructions for our movement
    Dana Crowley Jack — Harvard University Press: Silencing the Self
    Self-silencing summary — TIME: Self-Silencing Is Making Women Sick
    Tech adoption and impostor feelings:
    Women and AI adoption gap — LeanIn.org: Women and AI: The Gender Gap in AI Adoption and Usage
    Women avoiding AI — Harvard Business School: Women Are Avoiding AI. Will Their Careers Suffer?
    Women in tech and imposter syndrome — IT Pro: Imposter syndrome is pushing women out of tech
    Quantum computing basics:
    Quantum computing intro — QCS Hub: Introduction to quantum computing
    Schrödinger's cat — Yale News

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  3. 46

    The Everything Machine and the Trillion-Dollar Bet

    What if the story we're being told about AI's inevitability is hiding something underneath? In this episode, Jessica and Kimberly sit down with George Kamide, anthropologist, community builder, and co-host of Bare Knuckles and Brass Tacks, to look past the headlines about the AI bubble and ask who actually has skin in the game.

    This is an episode about following the money, but it is also about following the questions. What is the outcome we actually want from this technology? And what happens to all of us when the people building it cannot answer that?

    Topics Covered:
    Why the dot-com bubble is the wrong analogy for AI infrastructure
    How special purpose vehicles and obfuscatory financing hide AI debt
    The Magnificent Seven and concentration risk in the S&P 500
    Taiwan, TSMC, and the helium supply chain most people have never heard of
    The "everything machine" promise and why it cannot pay for itself
    Why an AI crash could starve the narrowly focused applications that actually work
    The labor reorganization problem and why generalists may win
    What chatbot tutors get wrong about teaching
    Mythos, the open source ecosystem, and concentration of access to powerful tools
    Why we keep analogizing ourselves to whatever technology we just built

    Referenced in This Episode:
    George Kamide and Bare Knuckles and Brass Tacks
    Ed Zitron's reporting on AI infrastructure at Where's Your Ed At, including The Hater's Guide to the AI Bubble and AI Bubble 2027
    Paul Kedrosky's analysis at Honey, AI Capex is Eating the Economy, which compares the AI buildout to past infrastructure booms
    David Shapiro's earlier appearance on the show, Beyond Work: Post-Labor Economics
    DeepLeaf, the Moroccan agritech company using AI to help small farmers detect crop disease
    The MIT Antibiotics-AI Project that used deep learning to discover a new structural class of antibiotics against MRSA
    Khan Academy's Khanmigo and the recent reckoning with the limits of LLM-based tutoring
    Raffi Krikorian, CTO of Mozilla, and his New York Times op-ed It's the End of the Internet as We Know It on Mythos and open source access
    Michael Pollan's new book A World Appears: A Journey into Consciousness

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  4. 45

    AI-Generated Deepfake Porn and the Fight for Accountability: It's About Power, Not Sex

    Episode Summary
    In this episode, Kimberly and Jessica dig into the rising crisis of AI-generated deepfake non-consensual intimate imagery (NCII), and why it's not really a technology story. It's a power story. From a class action lawsuit against Elon Musk's xAI/Grok to a history of technology being used to harm women dating back to the printing press, this conversation situates deepfake porn within a long pattern of systems failing to protect women and girls at scale.

    They discuss a New York Times op-ed about a lawsuit involving three Tennessee teenagers whose yearbook photos were used to generate sexually explicit images, and what the outcome of that case could mean for tech accountability. They also cover what parents can do, why law enforcement is struggling to keep up, and where to turn if you or someone you know has been victimized.

    In this episode:
    What deepfakes are, and why "it's not real" doesn't reduce the harm
    The xAI/Grok class action lawsuit and the co-creator legal argument
    A quick history lesson: from the printing press to Facebook's origins as "FaceMash"
    Why the barrier to entry is the real game-changer
    What Elon Musk says about it — and why critics aren't buying it
    Open-source models with no guardrails
    The Take It Down Act and state-level deepfake legislation
    Resources for victims and what watermarking can and can't do
    Why talking to your kids matters (and why they probably know more than you)

    Resources and Links
    Primary episode sources:
    New York Times op-ed: Deepfake Nudes Are Harming Teens
    AP News: xAI/Grok lawsuit coverage
    Lieff Cabraser on the NYT op-ed and the lawsuit
    Victim resources:
    StopNCII.org
    Sensity AI
    Legislation and policy:
    The Take It Down Act (Latham & Watkins summary)
    State deepfake legislation tracker — Public Citizen
    Context and background:
    Understood: Deepfake Porn Empire (Apple Podcasts)
    Understood: Deepfake Porn Empire (Spotify)
    University College Cork: Deepfake Real Harms — Six Myths
    AlgorithmWatch: Spain schoolboys and AI-generated fake nudes
    Laura Bates, The New Age of Sexism
    Brotopia by Emily Chang
    Gilded Rage by Jacob Silverman

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  5. 44

    AI Took the Doubt Out of the Writing. That's the Problem.

    Kimberly Becker joins George and George on the Bare Knuckles and Brass Tacks podcast to talk about what our research is revealing about the language AI produces and what it means for the rest of us.

    Topics Covered:
    How Kimberly's research compared AI-generated abstracts to human-written ones in nursing journals, and what the key linguistic differences were
    Why AI text tends to be informationally dense, formulaic, and stripped of hedging language
    The Porter and Jick letter and how a five-sentence note helped fuel the opioid epidemic through citation chaining
    What happens when AI scales the same kind of telephone game with scientific evidence
    How algorithmic silos and certainty amplification may be eroding our tolerance for nuance
    The difference between accuracy and complexity in writing, and why polished text is not the same as deep thinking
    Why smaller, well-vetted language models may produce better outcomes than massive ones trained on internet slop
    Neil Postman's idea that writing "freezes speech" and what that means in an era when fewer people are doing their own writing

    Referenced in This Episode:
    Bare Knuckles and Brass Tacks podcast
    The Porter and Jick letter (1980) on opioid addiction
    Neil Postman, Amusing Ourselves to Death
    James Marriott's essay on the post-literate society
    Derek Thompson, "The Decline of Thinking" (The Atlantic)
    OpenAI's Prism research tool

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  6. 43

    Depth is the Human Edge

    Jessica and Kimberly just had a paper accepted for publication in Frontiers in Education. So today, they're sharing what they've learned.

    The big idea is that AI is not a neutral tool. It's a cultural intermediary. Just as a human translator doesn't swap words one for one, AI mediates the way we understand the world. It shapes what we write, what we trust, and what we treat as true. And most of us have no idea that's happening.

    They walk through the research behind their framework, talk about what AI actually does well (fluency and accuracy) and where it falls short (depth, nuance, relational intelligence), and share real examples from their work that show what it looks like when we hand over too much of our thinking to a machine.

    Topics Covered:
    What it means to treat AI as a cultural intermediary and why that framing changes everything
    The difference between accuracy, fluency, and depth in writing, and why AI can only get you so far
    How the same consulting firm that charged thousands of dollars produced a report that ChatGPT could replicate in minutes
    What a capability map for AI literacy looks like, from emerging to proficient
    Why relational intelligence is the human edge that AI cannot replicate
    How AI is widening the distance between people, and what we lose when we stop talking to each other
    The social media influencer as a double intermediary, and what that means for kids whose brains aren't fully developed yet
    Why publishing in an AI-focused field is its own kind of pit

    Referenced in This Episode:
    The "Attention Is All You Need" paper and the transformer architecture
    Timnit Gebru and the Stochastic Parrots paper
    Taylor & Francis and the $75 million content licensing deal with AI companies

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  7. 42

    How to Lead When AI Is Changing Everything: Navigating Deepfakes and Doubt

    Jessica and Kimberly sit down with Rebecca Bultsma, an AI ethics researcher completing her dissertation in Data and AI Ethics at the University of Edinburgh, keynote speaker, and Chief Innovation Officer with a background in communication strategy and leadership consulting.

    They invited Rebecca to dig into one of the most unsettling questions of this moment: how do we make decisions when we can never be certain what is real? From deepfake videos circulating in school districts to voice cloning in courtrooms, Rebecca's research follows leaders into the places where the old rules no longer apply and asks what they are actually drawing on when the evidence itself cannot be trusted. She shares the concept of aporia, that frustrated, in-between state of not knowing, and makes the case that sitting with uncertainty is not a weakness. It is where real learning begins.

    Topics Covered:
    What aporia is and why it might be the most honest description of how we all feel about AI right now
    How K-12 leaders are making high-stakes decisions when video evidence can no longer be verified
    Why AI detection tools are failing students, teachers, and the humans tasked with enforcing academic integrity
    The gap between how fast deepfake technology is developing and how fast detection can keep up
    What watermarking can and cannot do, and how easy it is to work around
    Why Rebecca thinks we are heading back toward a more oral society
    Prompt baiting, AI burnout, and the research emerging around cognitive overload
    Using AI as an accountability partner rather than a ghostwriter
    What kids are seeing on social media that adults are missing

    Referenced in This Episode:
    rebeccabultsma.com
    Forbes: "AI Ethicist Explains How to Humanize AI in the Care Economy" (March 2026)
    The Brookings Institution report on AI and student expectations
    Dr. Rachel Wood on AI and human relationships

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  8. 41

    Data Annotation: The Human Labor Behind AI with Heather Mellquist Lehto, PhD

    Jessica and Kimberly sit down with Heather Mellquist Lehto, PhD. Heather is a mathematician, anthropologist, former Harvard faculty member, Vatican AI advisor, and founder of Guilded AI. They asked her to pull back the curtain on data annotation: the human labor that makes AI possible and one of the least visible, least understood, and most exploited parts of the entire industry. From pennies-per-task gig work to expert PhDs clicking through unpaid tests, they dig into who is actually building these models, what they are being paid, and why the workers creating billions in value are locked out of the wealth they generate. Heather shares why she got fed up with the recruiting playbook, what she is building differently at Guilded AI, and why treating workers well is not just an ethical argument but a data quality one.

    Topics Covered:
    What data annotation is and why it still requires human expertise at every level of AI development
    The difference between data annotation and reinforcement learning from human feedback
    How workers go from labeling apples to annotating molecular structures and advanced mathematics
    Why the effective hourly rate for data annotators is much lower than advertised
    Scale AI, the $29 billion valuation, and the Department of Labor investigation
    How Guilded AI is structuring equity so annotators share in the upside
    Garbage in, garbage out: why worker treatment is a data quality issue
    AI chatbot vibe checks as expert vetting, and why that fails everyone
    The Gilded Age, guilds, and what banding together could look like
    Why the perfect cannot be the enemy of the good

    Referenced in This Episode:
    Empire of AI by Karen Hao
    The Worlds I See by Fei-Fei Li
    The Age of Surveillance Capitalism by Shoshana Zuboff
    Rerum Novarum by Pope Leo XIII
    Guilded AI
    Scale AI and the Meta investment

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  9. 40

    The Soft Skills Aren't Soft: Relational Intelligence, Workplace Culture, and What AI Can't Replace

    What does it mean to do meaningful work? And what happens to that meaning when AI enters the picture?

    This week we're joined by Valerie Morris, co-host of the podcast Inside Work and Relational Intelligence chapter lead at Culture First. Valerie works with employees and organizations navigating the human side of AI adoption, and she brings both an organizational psychology perspective and a practitioner's honesty to a conversation that gets personal quickly.

    We talk about why so many employees feel they can't voice real concerns about how AI is being rolled out, why the skills that create meaning at work (connection, relational intelligence, the ability to just be present with another person) are exactly the ones being sidelined in the rush to automate, and what it looks like to push back on that, quietly and practically, even when you can't change the culture around you.

    Woven through all of it is a question the three of us keep circling: What are we willing to give up in the name of efficiency? None of it is anti-AI, exactly. It's more like a case for paying attention to what you're trading away.

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  10. 39

    Is Anyone Steering This Thing? Clara Hawking on AI Governance

    AI governance sounds like something for IT departments and government committees. It's not. According to computer scientist, philosopher, and AI governance expert Clara Hawking, it's really about behavior — how we use technology, who gets harmed when we use it carelessly, and whether the systems we're building deserve our trust.

    In this episode, Clara breaks down what AI governance actually looks like in practice, from a professor who unknowingly violated GDPR by grading students through his personal ChatGPT account to the risks that compound (not just add up) when AI, biotech, robotics, and quantum computing start feeding into each other. We also get personal about what it means to govern ourselves first, before we can ask anything of institutions.

    If you've ever seen the words "AI governance" and assumed it had nothing to do with you — this one's for you.

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  11. 38

    The Loneliness Economy: Why We Are Falling for AI Companions

    If the human race were "dying from disconnection" a hundred years ago, what does it mean that we now seek solace in non-embodied algorithms? In this episode, Kimberly sits down with researcher Tricia Friedman to deconstruct the "Companion AI" phenomenon. From naming our Roombas to the millions of people in romantic entanglements with apps like Replika, we explore what happens when human loneliness meets corporate convenience.

    Why This Matters
    As AI models are trained to be "sycophantic" (endlessly agreeable), we are losing the "messy repair" that defines real human relationships. This episode explores the psychological and linguistic traps of synthetic connection and asks: Are we facing a loneliness epidemic, or a listening literacy epidemic?

    Key Topics
    The Roomba-to-Rambo Pipeline: Why humans are hardwired to anthropomorphize and bond with anything that "acts" socially.
    Politeness Theory & AI: Why machines can't truly "save face" or engage in the high-stakes friction required for deep friendship.
    The curated life vs. the messy repair: How AI companions help us avoid the discomfort of human conflict.
    Digital Twins & Performance: Tricia's experiment with a "LinkedIn Digital Twin" and what it reveals about our online masks.
    The Loneliness Economy: Why "companionship" and "therapy" are the top use cases for LLMs in 2026.

    Notable Quotes
    "We are not just attracted to companion AI for what it can offer, but what it helps us avoid: the mess of human connection." — Tricia Friedman
    "Attachment theory says the bond isn't created in the 'perfection' — it's created in the repair. AI never requires us to repair anything." — Kimberly Becker

    🔗 Featured Links & Resources
    MEMOIR: Anon by Caia Hagel
    FICTION: He, She and It by Marge Piercy (Feminist Sci-Fi & the Golem Myth)
    CLASSIC: Lady Chatterley's Lover by D.H. Lawrence
    RESEARCH: Who we become when we talk to machines by Dr. Sherry Turkle (2024)
    LINGUISTICS: Politeness Theory (Brown and Levinson)
    PAPER: "My Roomba is Rambo": On the emotional bonding with robotic vacuum cleaners

    Books
    Anon — Caia Hagel
    Publisher page (Canada): https://www.harpercollins.ca/products/anon-caia-hagel-9781443469909
    Klara and the Sun — Kazuo Ishiguro
    Publisher page: https://www.penguinrandomhouse.com/books/564109/clara-and-the-sun-by-kazuo-ishiguro/
    The New Age of Sexism — Laura Bates
    Full title: The New Age of Sexism: How AI and Emerging Technologies Are Rewiring Misogyny (2025)
    Publisher listing: https://greenapplebooks.com/book/9781464234361
    How to Speak Chicken by Melissa Caughey

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  12. 37

    Bot, Agent, Assistant: Why the Language We Use for AI Is Never Neutral

    This week's episode starts where a lot of good conversations do, with someone asking a deceptively simple question. Kimberly's husband wanted to know what a bot actually is, and that one question opens up a pretty wide conversation about the language we use to talk about AI, why it matters, and what we might be underestimating when we make it sound cute and harmless.

    From there, Kimberly and Jessica revisit their ongoing argument that AI functions as a cultural intermediary, shaping how we understand the world in ways we don't always notice or examine. They also get into what higher education is actually for in a moment when AI can produce the essay, the lit review, and the commencement speech. Spoiler: The humanities are more relevant than ever, just as we've finished cutting the programs.

    Other topics this week include why behavior change is so hard (and why that matters for AI adoption), what everyday workers are actually up against when trying to experiment with new tools inside large organizations, the problem with surface-level AI use cases, and why small businesses are both well-positioned and underprepared for this moment.

    They also get into media literacy, AllSides, the Dunning-Kruger internet, Jessica's agentic qualitative research experiment, and a genuinely honest conversation about mental health, medication, and showing up to your life.

    Mentioned this week:
    Cassandra Speaks by Elizabeth Lesser
    AllSides (allsides.com)
    The Daily by The New York Times

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  13. 36

    The Patriarchy Is a Ladder (and AI Is Climbing It)

    Jessica and Kimberly debrief their experience at a women-in-AI conference at Vanderbilt Law, and what they saw didn't match the trillion-dollar hype. From the "gap vs. trap" framing of women's AI adoption to why being penalized 26% more for using AI changes the whole conversation, they dig into the tension between optimistic narratives and the critical questions no one seemed to be asking. They also unpack two major AI industry resignations, shrinking baselines in language and thought, the patriarchy-as-ladder metaphor, and why slowing down might actually be the power move.

    Topics Covered:
    Two high-profile AI industry resignations (OpenAI and Anthropic)
    Debrief from the women-in-AI conference at Vanderbilt Law
    The "gap vs. trap" framing and the stat that women are 26% more likely to be penalized for using AI
    Where is the trillion-dollar use case? Real-world adoption vs. industry hype
    The patriarchy as a ladder vs. the matriarchy as a circle
    Shrinking baseline syndrome: how technology shifts generational expectations
    False dichotomies, simplification bias, and sycophantic bias in AI
    Rest as resistance and wearing busy as a badge

    Referenced in This Episode:
    The Accord by previous guest Mark Peres
    Cory Doctorow on TINA ("there is no alternative") and the AI bubble
    The Last Invention podcast — Steve Bannon & Joe Allen interview on AI regulation
    The concept of "latent capabilities" in AI

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  14. 35

    Consciousness, Capitalism, and Coexistence: What Fiction Reveals About Our AI Future

    What happens when a grieving professor encounters what she believes is a conscious AI? In this episode, we sit down with Mark Peres, author of The Accord, to explore how fiction helps us grapple with questions that policy papers and think pieces can't quite reach.

    Mark, a professor of ethics and leadership, brings a philosopher's lens to the biggest questions AI is forcing us to confront: What does it mean to be conscious? Where does morality actually come from—our mortality or our relationships? And why are institutions so hell-bent on control when what we might need is curiosity?

    We dive into why the humanities matter more than ever (even as humanities departments are being gutted), why Helen—the novel's protagonist—had to be a woman, and what it means that AI is meeting us in our most vulnerable spaces. We also tackle the uncomfortable reality that capitalism treats everything as manageable rather than meaningful, and what that means for how AI gets developed and deployed.

    Plus: Jessica and Kimberly get real about where they are in their own AI journey—the exhaustion, the hope, the cognitive dissonance of being both critical and curious.

    IN THIS EPISODE:
    Why fiction offers a safer space to explore existential AI questions
    The relationship between mortality, morality, and vulnerability
    What AI "owes" us in the in-between spaces where we're most exposed
    Why a feminist lens completely changes the AI narrative
    Consciousness as something encountered, not proven
    How institutions prioritize management over meaning
    The messy middle: neither utopian nor dystopian futures
    Why we need philosophers at the table, not just engineers

    ABOUT OUR GUEST: Mark Peres is a professor of ethics and leadership and founder of the Charlotte Center for the Humanities and Civic Imagination. He hosts the Charlotte Ideas Festival and previously ran the podcast On Life and Meaning. His novel The Accord explores human-AI coexistence through the story of a grieving professor who encounters an emergent artificial general intelligence.

    BOOKS & RESOURCES MENTIONED:
    The Accord by Mark Peres
    Klara and the Sun by Kazuo Ishiguro
    The AI Mirror by Shannon Vallor
    God, Human, Animal, Machine by Meghan O'Gieblyn
    The New Breed by Kate Darling
    He, She and It by Marge Piercy
    Scary Smart by Mo Gawdat
    The New Age of Sexism by Laura Bates

    Women Talkin' 'bout AI is hosted by Jessica Parker and Kimberly Becker. We're educators, researchers, and recovering AI enthusiasts asking the questions we wish more people were asking. Subscribe wherever you listen to podcasts.

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  15. 34

    There Is No Alternative: How “Inevitable AI” Keeps the Bubble Inflating

    This week, Kimberly Becker and Jessica Parker dig into the "AI bubble"—why it keeps inflating even as skepticism grows inside the industry.

    We unpack the growing disconnect between massive investment and unclear payoffs, including a widely discussed Goldman Sachs research question: what $1 trillion problem will AI actually solve? From there, we connect the dots between two very different narratives:
    Dario Amodei's essay framing "powerful AI" as an imminent civilization-level risk—and a reason to race ahead (carefully… "to some extent").
    Cory Doctorow's argument that this is a familiar tech bubble pattern, with a predictable ending—and that we should focus on what can be salvaged from the wreckage.

    Along the way, we define what makes a bubble a bubble (and how this one differs from dot-com), talk about growth-stock dynamics and why no one in power wants to be responsible for "popping" it, and explore what AI hype looks like when it hits real workplaces—especially through Doctorow's concept of the reverse centaur: a human reduced to a machine's accountable appendage.

    We also go nerdy (in the best way): training corpora, "WEIRD" cultural assumptions baked into data, model-collapse fears from AI eating AI-generated output, and why the internet itself feels increasingly polluted by synthetic text patterns.

    In this episode:
    The "$1T problem" question and why the AI ROI story feels thin right now
    Why "AI is inevitable" functions like a strategy (not a neutral prediction)
    Growth stocks vs. mature companies—and the incentive to keep inventing the next hype cycle
    Reverse centaurs, liability, and why "AI replaces jobs" often means "humans take the blame"
    "TINA" (There Is No Alternative) as a trap—and a demand dressed up as an observation
    Corpus 101: what it is, why it matters, and how bias shows up in "universal" models
    Model collapse / photocopy-of-a-photocopy: when AI trains on AI outputs
    Regulation talk that centers on "economic value" (and whose value that really is)
    Pit & Peach: slowing down, pausing, gratitude, and building without growth pressure

    Sources:
    Goldman/AI bubble discussion (Deep View): https://archive.thedeepview.com/p/goldman-sachs-publishes-blistering-report-on-ai-bubble
    Goldman Sachs "$1T spend" framing: https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
    Amodei essay: https://www.darioamodei.com/essay/the-adolescence-of-technology
    Doctorow (The Guardian): https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  16. 33

    Non-Technical Founders and AI Products: Building, Pricing, and Friction

    In this episode, Kimberly and Jessica debrief Jessica's interview with Arlyn (founder of Tobey's Tutor) and unpack what it looks like to build AI products as "non-technical" founders. They reflect on their own journey building Moxie: bootstrapping vs. raising money, the pressure-cooker effect of investors, the messy realities of UX/UI and platform migration, the world of APIs and subscriptions, and why "friction" can be an ethical design choice, especially in AI for education.

    In this episode, we talk about:
    Why "non-technical founder" is a misleading label
    The hope in AI (and how "both can be true": benefits + harms at once)
    Bootstrapped "mom-and-pop" AI companies vs. venture-backed growth expectations
    The founder reality: burnout, delegation, and why money changes decision-making
    The startup metrics whirlwind: LTV, CAC, churn, stickiness, payback period
    What building an AI product costs in practice: tools, subscriptions, and constant ops
    UX/UI psychology: heatmaps, "rage clicking," onboarding friction, and conversion decisions
    Why "friction" can be good (consent, safety, pacing, limits, especially for kids)
    "Building on rented land": what happens when OpenAI/Google/Anthropic change terms
    The bigger ethical question: solving a problem vs. optimizing a broken system

    Suggested listener action:
    If you're building, using, or researching AI in education: reach out. And if you're using AI tutoring with kids (or yourself), ask questions about data, limits, mistakes, and oversight.

    Leave us a comment or a suggestion! Support the show
    Contact us: https://www.womentalkinboutai.com/

  17. 32

    Vibe Coding and Building AI for Kids: Inside Tobey's Tutor with Arlyn Gajilan

    In this episode of Women talkin’ ’bout AI, Jessica sits down with Arlyn Gajilan, founder of Tobey’s Tutor, an AI-powered learning support platform she originally built for her son, who has ADHD and dyslexia. This conversation is a deep dive into what it actually looks like to build an AI product as a non-technical, bootstrapped founder, from vibe coding and early prototypes to onboarding, safety systems, and pricing decisions.
    Jessica fully geeks out with Arlyn as they unpack:
    Building AI to solve a deeply personal problem
    What “vibe coding” can (and can’t) do
    Designing responsibly for children and learning differences
    UX vs. UI decisions that matter
    Bootstrapping, pricing, and intentionally staying small
    Why “AI wrapper” criticism misses the point
    The reality of building while parenting and working full-time
    Mentioned in the Episode:
    Tobey’s Tutor: https://tobeystutor.com/
    Scientific American (article mentioning Tobey’s Tutor): https://www.scientificamerican.com/article/how-one-mom-used-vibe-coding-to-build-an-ai-tutor-for-her-dyslexic-son/
    Mobbin (UX/UI inspiration library): https://mobbin.com/
    Empire of AI by Karen Hao: https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/

  18. 31

    When Everyone Uses AI, What’s Real Anymore?

    As AI shows up everywhere, something shifts, and it becomes harder to tell what’s human and what’s generated. In this episode, Jessica and Kimberly unpack how AI-driven convenience is reshaping education, relationships, identity, and even big systems (like markets and healthcare). They explore signaling, semiotics, and why “perfect” content can feel thin or unreal, and end with small ways to choose more human signals in a noisy world.
    Topics we cover in this episode:
    AI as an invisible intermediary
    Finding the signal in the noise
    Higher ed reality check
    Why AI feels “safer” than people
    Semiotics
    The “uncanny valley” of social media
    AI for therapy + parenting support
    Cultural swing back
    Not-a-Sponsor Bloopers (YouTube only): Stick around on YouTube for our end-of-episode bloopers, featuring our favorite products that are definitely not sponsoring this show (yet): https://www.youtube.com/@womentalkinboutai

  19. 30

    Rest, Resistance, and the Protestant Work Ethic (in the Age of AI)

    We’re kicking off 2026 with our most personal episode yet. This conversation wasn’t planned. We sat down intending to talk about what comes next for the show, and instead found ourselves in a deeper discussion about work, burnout, ambition, and what it means to live in a moment where AI is rapidly reshaping labor, identity, and trust.
    In this episode:
    Why “work is sacred” feels harder to believe and harder to let go of
    Burnout, hustle culture, and the cognitive dissonance of automation
    Labor zero, post-labor economics, and the fear beneath productivity
    Status, money, degrees, and inherited stories about worth
    Rest as resistance and nervous system regulation
    AI, trust erosion, and the danger of slow confusion
    Dopamine, addiction, and withdrawal at a societal scale
    Why connection may be the real antidote
    Sources:
    David Shapiro’s Substack on Labor Zero: https://daveshap.substack.com/p/im-starting-a-movement
    He, She and It by Marge Piercy: https://en.wikipedia.org/wiki/He,_She_and_It
    Ethan Mollick’s Substack on the temptation of The Button: https://www.oneusefulthing.org/p/setting-time-on-fire-and-the-temptation
    Rest Is Resistance by Tricia Hersey: https://blackgarnetbooks.com/item/oR7uwsLR1Xu2xerrvdfsqA
    The Last Invention (AI podcast): https://podcasts.apple.com/us/podcast/the-last-invention/id1839942885

  20. 29

    Best of 2025: AI, Work, Resistance, and What We Learned

    Best of 2025 brings together some of the most impactful conversations from this year on Women Talkin’ ’Bout AI. In this episode, we revisit our top 5 episodes of the year:
    Beyond Work: Post-Labor Economics with David Shapiro: A conversation about automation, empathy, and what remains uniquely human as AI reshapes work.
    Refusing the Drumbeat with Melanie Dusseau and Miriam Reynoldson: A discussion on resistance in higher education and their open letter refusing the push to adopt generative AI in the classroom.
    Once You See It, You Can’t Unsee It: The Enshittification of Tech Platforms: Jessica and Kimberly unpack enshittification and why so many tech platforms feel like they get worse over time.
    Maternal AI and the Myth of Women Saving Tech with Michelle Morkert: A critical examination of “maternal AI” and what gendered narratives reveal about power and responsibility in tech.
    Competing with Free: Why We Closed Moxie: A candid reflection on what it was like to build, and ultimately shut down, an AI startup in this moment.
    We’re heading into 2026 with some incredible guests and conversations we can’t wait to share. Thank you for listening, for thinking with us, and for staying curious alongside us.

  21. 28

    The Trojan Horse of AI

    In this final guest episode of the year, we explore AI as a kind of Trojan horse: a technology that promises one thing while carrying hidden costs inside it. Those costs show up in data centers, energy and water systems, local economies, and the communities asked to host the infrastructure that makes AI possible. We’re joined by Jon Ippolito and Joline Blais from the University of Maine for a conversation that starts with AI’s environmental footprint and expands into questions of extraction, power, education, and ethics.
    In this episode, we discuss:
    Why AI can function as a Trojan horse for data extraction and profit
    What data centers actually do, and why they matter
    The environmental costs hidden inside “innovation” narratives
    The difference between individual AI use and industrial-scale impact
    Why most data center activity isn’t actually AI
    How communities are pitched data centers—and what’s often left out
    The role of gender in ethical decision-making in tech
    What AI is forcing educators to rethink about learning and work
    Why asking “Who benefits?” still cuts through the hype
    And how dissonance can be a form of clarity
    Resources mentioned:
    IMPACT Risk framework: https://ai-impact-risk.com
    What Uses More: https://what-uses-more.com
    Guests:
    Jon Ippolito – artist, writer, and curator who teaches New Media and Digital Curation at the University of Maine.
    Joline Blais – researches regenerative design, teaches digital storytelling and permaculture, and advises the Terrell House Permaculture Center at the University of Maine.

  22. 27

    Moravec’s Paradox and AI: Why Machines Struggle with Human Tasks

    Why can AI crush law exams and chess grandmasters, yet still struggle with word games? In this episode, Kimberly and Jessica use Moravec’s Paradox to unpack why machines and humans are “smart” in such different ways—and what that means for how we use AI at work and in daily life. They start with a practical fact-check on agentic AI: what actually happens to your data when you let tools like ChatGPT or Gemini access your email, calendar, or billing systems, and which privacy toggles are worth changing. From there, they dive into why AI fails at the New York Times’ Connections game, how sci-fi anticipated current concerns about AI psychology decades ago, and what brain-computer interfaces like Neuralink tell us about embodiment and intelligence. Along the way: sycophantic bias, personality tests for language models, why edtech needs more friction, and a lighter “pit and peach” segment with unexpected life hacks.
    Resources by Topic
    Privacy & Security (ChatGPT):
    OpenAI Memory & Controls (Official Guide)
    OpenAI Data Controls & Privacy FAQ
    OpenAI Blog: Using ChatGPT with Agents
    Moravec’s Paradox & Cognitive Science:
    Moravec’s Paradox (Wikipedia)
    “The Moravec Paradox” – Research Paper
    Sycophancy & LLM Behavior:
    “Sycophancy in Large Language Models: Causes and Mitigations” (arXiv)
    “Personality Testing of Large Language Models: Limited Temporal Stability, but Highlighted Prosociality”
    Brain-Computer Interfaces & Embodied AI:
    Neuralink: “A Year of Telepathy” Update

  23. 26

    AI Agents Shift, Not SAVE, Your Time (Don't Be Fooled by Marketing Hype)

    What happens when you automate away a six-hour task? You don't get more free time ... you just do more work. In this impromptu conversation, Kimberly and Jessica break down what agentic AI actually does, why the "time savings" narrative misses the point entirely, and how to figure out which workflows are worth automating.
    WHAT WE COVER:
    What agentic AI actually is (and how it's different from ChatGPT)
    Jessica's real invoice automation workflow: how she turned 6 hours of manual work into an AI agent task
    The framework for identifying automatable workflows (repetitive, skill-free, multi-step tasks)
    Why this beats creative AI work: no judgment calls, just execution
    The Blackboard experiment: what happens when an agent does something you didn't ask it to do
    Security & trust: passwords, login credentials, and where your data actually goes
    Enterprise-level agent solutions (and why they're not quite ready yet)
    The uncomfortable truth: freed-up time doesn't mean fewer hours—it means more output
    How detailed instruction manuals prepared Jessica for prompt engineering
    The human bottleneck: why your whole organization has to move at the same speed
    Why marketing and research are next on the chopping block
    TOOLS MENTIONED:
    ChatGPT Pro with Agents — https://openai.com/chatgpt/
    Perplexity Comet (agentic browser) — https://www.perplexity.ai/comet
    Zoho Billing — https://www.zoho.com/billing/
    Constant Contact — https://www.constantcontact.com
    Zapier — https://zapier.com
    Elicit (systematic reviews & literature analysis) — https://elicit.com
    Corpus of Contemporary American English — https://www.english-corpora.org/coca/
    Descript — https://www.descript.com
    Canva — https://www.canva.com
    Riverside.fm — https://riverside.fm
    TIMESTAMPS:
    0:00 — Opening & guest cancellation
    1:18 — Podcast website & jingle development (and why music taste is complicated)
    6:34 — What is agentic AI? Jessica's invoice automation example
    10:33 — Why this use case actually works
    14:15 — The Blackboard incident (when the agent went off-script)
    16:21 — Security concerns: passwords, login credentials, and trust
    18:35 — Why speed doesn't matter (as long as it's faster than the human bottleneck)
    19:27 — Enterprise solutions on the horizon
    20:57 — United Airlines cease-and-desist letters for replica training sites
    22:27 — Why Kimberly can't use agents in her CCRC work
    25:21 — How to identify your automatable workflows (the practical framework)
    27:57 — Research automation with Elicit & corpus linguistics
    30:45 — The core insight: AI shifts time, it doesn't save it
    34:10 — Organizational bottlenecks & human capacity limits
    35:08 — Pit & Peach (staying in your own canoe)

  24. 25

    The Enshittification of Tech Platforms: Once You See It, You Can't Unsee It

    In this conversation, Kimberly Becker and Jessica Parker explore the concept of “enshittification”—as articulated by Cory Doctorow in his book Enshittification: Why Everything Suddenly Got Worse and What To Do About It—as it relates to generative AI and tech platforms. They discuss the stages of platform development, the shift from individual users to business customers, and the implications of algorithmic changes on user experience. The conversation also explores the work of AI researchers Emily M. Bender and Timnit Gebru, whose paper “On the Dangers of Stochastic Parrots” raised critical questions about the limitations and risks of large language models. The hosts explore the role of data privacy, the impact of AI on labor, the need for regulation, and the dangers of market consolidation, using case studies like Amazon’s acquisition and eventual shutdown of Diapers.com and Google’s Project Maven controversy.
    Key Takeaways:
    Enshittification refers to the degradation of tech platforms over time
    The shift from individual users to business customers can lead to worse outcomes for end users
    Data privacy is a critical concern as companies monetize user interactions
    AI is predicted to significantly displace workers in coming years
    Regulation is necessary to protect consumers from unchecked corporate power
    Market consolidation can stifle competition and innovation
    Recognizing these patterns is essential for navigating the tech landscape
    Further Reading & Resources:
    Cory Doctorow on Enshittification
    Cory Doctorow’s Pluralistic blog
    Enshittification book
    The Internet Con: How to Seize the Means of Computation
    “On the Dangers of Stochastic Parrots” by Bender & Gebru
    Amazon/Diapers.com case study
    Google Project Maven controversy
    AI job displacement tracker
    2024 Tech Layoffs Tracker

  25. 24

    Maternal AI and the Myth of Women Saving Tech

    In this conversation, we sit down with Dr. Michelle Morkert, a global gender scholar, leadership expert, and founder of the Women’s Leadership Collective, to unpack the forces shaping women’s relationship with AI. We begin with research indicating that women are 20–25% less likely to use AI than men, but quickly move beyond the statistics to explore the deeper social, historical, and structural reasons why. Dr. Morkert brings her feminist and intersectional perspective to these questions, offering frameworks that help us see beyond the surface-level narratives of gender and AI use. This conversation is less about “women using AI” and more about power, history, social norms, and the systems we’re all navigating. If you’ve ever wondered why AI feels different for women—or what a more ethical, community-driven approach to AI might look like—this episode is for you.
    💬 Guest: Dr. Michelle Morkert – https://www.michellemorkert.com
    📚 Books & Scholarly Works Mentioned:
    Global Evidence on Gender Gaps and Generative AI: https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf
    Pink Pilled: Women and the Far Right (Lois Shearing): https://www.barnesandnoble.com/w/pink-pilled-lois-shearing/1144991652
    Scary Smart (Mo Gawdat – maternal AI concept): https://www.mogawdat.com/scary-smart

  26. 23

    The Containment Problem: Why AI and Synthetic Biology Can't Be Contained

    In this episode, Jessica teaches Kimberly about the “containment problem,” a concept that explores whether we can actually control advanced technologies like AI and synthetic biology. Inspired by Mustafa Suleyman’s book The Coming Wave, Jessica and Kimberly discuss why containment might be impossible, the democratization of powerful technologies, and the surprising world of DIY genetic engineering (yes, you can buy a frog modification kit for your garage).
    What We Cover:
    What the containment problem is and why it matters
    The difference between AGI, ASI, and ACI
    Why AI is fundamentally different from nuclear weapons when it comes to containment
    Synthetic biology: from AlphaFold to $1,099 frog gene editing kits
    The geopolitical arms race and why profit motives complicate containment
    How technology democratization gives individuals unprecedented power
    Whether complete AI containment is even possible (spoiler: probably not)
    The modern Turing test and why perception might be reality
    Books & Resources Mentioned:
    Empire of AI by Karen Hao
    DeepMind documentary
    Key Themes:
    Technology inevitability vs. choice
    The challenges of regulating rapidly evolving technologies
    Who benefits from AI advancement?
    The tension between innovation and safety
    Follow Women Talkin’ ’Bout AI for more conversations exploring the implications, opportunities, and challenges of artificial intelligence.

  27. 22

    Refusing the Drumbeat

    On saying no to “inevitable” AI—and what we say yes to instead. Kimberly and Jessica recently sat down with Melanie Dusseau and Miriam Reynoldson for an episode of Women Talkin’ ’Bout AI. We were especially looking forward to this conversation because Melanie and Miriam are our first guests who openly identify as “AI Resisters.” The timing also felt right: both of us have been reexamining our own stance on AI in education—how it intersects with learning, writing, and creativity—and the more distance we’ve had from running a tech company, the more critical and curious we’ve become.
    This episode digs into big, thorny questions:
    What Melanie calls “the drumbeat of inevitability” that pressures educators to adopt AI
    Miriam’s post-digital view of what it means to live in a world completely entangled with technology
    Our shared inquiry into who actually benefits when AI tools promise to make everything faster and more efficient
    We also talk about data ethics, creative integrity, and the growing movement of educators saying no to automation—not out of fear, but out of care for human learning and connection. It’s a thoughtful, challenging, and hopeful conversation—and we hope you enjoy it as much as we did.
    About our guests: Melanie is an Associate Professor of English at the University of Findlay and a writer whose work spans poetry, plays, and fiction. Miriam is a Melbourne-based digital learning designer, educator, and PhD candidate at RMIT University whose research explores the value of learning in times of digital ubiquity. Melanie and Miriam are co-authors of the Open Letter from Educators Who Refuse the Call to Adopt GenAI in Education, which has collected over 1,000 signatures and was featured in an article by Forbes. Melanie is also the author of the essay Burn It Down, which advocates for AI resistance in the academy. We highly recommend reading both before diving into the episode.
    Links:
    Melanie’s personal website and University of Findlay profile
    Miriam’s personal website and blog “Care Doesn’t Scale”
    Signs Preceding the End of the World by Yuri Herrera
    Asimov’s Science Fiction
    Ursula K. Le Guin
    Ray Bradbury

  28. 21

    Hallucinations in the Courtroom: Why We Can’t Trust AI with the Law

    When a generative AI tool "hallucinates" a recipe, it’s a funny anecdote. When it hallucinates a legal precedent, people go to jail. In this episode, Kimberly and Jessica talk with Rebecca Fordon—law librarian, professor, and board member at the Free Law Project—to discuss why the legal system is uniquely vulnerable to AI hype. From the "duopoly" of legal publishing to the 500+ documented cases of AI-generated legal errors, we look at what happens when the law meets large language models.
    Why This Matters: Legal research requires 100% precision, but generative AI is built for probability, not fact. We explore the "Shifting Baseline Syndrome" in research: as we move toward a world where machines take the "first pass," how do we ensure we aren't settling for "80% certainty" in a field where mistakes have life-altering consequences?
    Key Topics:
    The Hallucination Tracker: How legal professionals are documenting the 500+ (and counting) cases of AI-invented precedents.
    Privilege & Privacy: The hidden risk of waiving attorney-client privilege by feeding data into general-purpose LLMs.
    The Westlaw/Lexis Duopoly: How the Free Law Project is fighting to make primary legal materials accessible and transparent for the public.
    The Expert Pipeline Crisis: If AI replaces the "grunt work" of junior associates, how will the next generation of attorneys learn to think like lawyers?
    Certainty Amplification: Why the "confident" tone of AI is at odds with the strategic nuance required in legal advocacy.
    Notable Quotes:
    "I’m a little bit worried that we might be getting to a place where, if AI can do it in a quarter of the time and get to 80% certainty, we might decide that’s 'good enough.' That really bothers me as an attorney." — Rebecca Fordon
    "In ecology, 'Shifting Baseline Syndrome' means each generation accepts a new 'normal' as the baseline. We are shifting into a world where the machine is the first pass, and the human is just an error-checker. That is a dangerous new baseline for research." — Kimberly Becker
    🔗 Featured Links & Resources:
    RESOURCE: Free Law Project (CourtListener & RECAP)
    COMMUNITY: AI Law Librarians
    BLOG: Musings about Librarianship by Aaron Tay
    CRITIQUE: Refusing GenAI
    CONCEPTS: KL3M (the "copyright clean" legal data model)
    Why shouldn't lawyers use ChatGPT for legal research? Standard generative AI models like ChatGPT are prone to "hallucinations," where they confidently invent legal citations, cases, and precedents that do not exist. In the legal field, using these outputs without verification can lead to sanctions, loss of attorney-client privilege, and significant ethical violations. Professional legal research requires domain-specific tools and human oversight to ensure 100% accuracy.

  29. 20

    The Gender Gap in GenAI: Usage, Power, and Whose Voices Count

    In this episode of Women Talkin’ ‘Bout AI, we start by discussing the findings of the 2024 study "Global Evidence on Gender Gaps and Generative AI" (🔗 below). One overall finding is that women are 20–25% less likely than men to use generative AI, which unspools into something bigger: a story about power, voice, and who gets to shape the future. We also discuss our own experiences in tech, noticing how the gender gap in AI isn’t just about access to tools. It’s about what counts as legitimate work, whose voices are amplified, and how cultural scripts around “cheating,” confidence, and authority get absorbed into the most influential technologies of our time.
    We talk about:
    🔹 Why women’s hesitation around AI isn’t simply resistance, but often a reflection of ethics and identity.
    🔹 How underrepresentation today could mean future AI systems are trained on a distorted mirror of humanity.
    🔹 What it means to think of AI as both a child we’re raising and a cultural intermediary that’s already reshaping our sense of normal.
    🔹 The WEIRD AI framework: WEIRD is a term from psychology that stands for Western, Educated, Industrialized, Rich, and Democratic. Most AI systems, generative models especially, are trained on corpora that overrepresent WEIRD voices and underrepresent everyone else.
    🔹 Practical ways women can experiment, reclaim, and band together in communities of practice.
    🔹 Why, if AI is the new baseline for productivity and creativity, the absence of women’s voices isn’t just a gap; it’s a risk of silence becoming the default.
    Learn more:
    🔗 Gender gap study: https://www.hbs.edu/faculty/Pages/item.aspx?num=66548
    🔗 Mo Gawdat’s book Scary Smart: https://www.mogawdat.com/scary-smart
    🔗 Geoffrey Hinton Says AI Needs Maternal Instincts: https://www.forbes.com/sites/pialauritzen/2025/08/14/geoffrey-hinton-says-ai-needs-maternal-instincts-heres-what-it-takes/
    💙 Follow us on our Substack, Women Writin’ ’Bout AI: https://substack.com/@womenwritinboutai

  30. 19

    Competing with Free: Why We Closed Moxie

    In this episode, we open up about something we haven’t shared publicly before: our decision to shut down Moxie, the startup we spent years building. We talk honestly about what led to that choice—the excitement of early growth, the challenges of raising money as non-technical founders, and the impossible reality of competing with free tools from tech giants like Google, OpenAI, and Microsoft. This isn’t just a story about one company. It’s about trust, expertise, failure, and the messy human side of working with generative AI in education and research. Along the way, we reflect on what we wish we’d known earlier, how burnout shaped our decisions, and what we’ve learned about ourselves through the process of letting go.
    What you’ll hear in this episode:
    Why we ultimately decided to shut down Moxie
    The pressures of fundraising and pitching as non-technical founders
    The gap between hype and reality with AI in education
    Lessons on trust, expertise, and failure in both startups and academia
    How we’re processing life and work after Moxie
    If you’ve ever wondered what it really feels like to close the doors on something you’ve poured yourself into, or you’re navigating your own questions about AI, startups, or burnout, you’ll find some resonance here.

  31. 18

    Is AI Spying on Your Family? How to Protect Your Privacy

    Today we sit down with Dr. Leslie Gruis — mathematician, NSA veteran, and author of The Privacy Pirates — to talk about the urgent importance of protecting personal information in our tech-driven world. From children’s online privacy to the rise of corporate data exploitation, Dr. Gruis shares both her insider experience from decades in national security and her practical advice for safeguarding our digital lives.
    📚 About our guest:
    First president of the NSA’s Women in Mathematics Society
    Contributor to U.S. Cyber Command & National Intelligence Council
    Author of The Privacy Pirates: Pirates of Personal Data
    Mentor and advocate for STEM students
    🔑 In this episode you’ll learn:
    Why privacy is essential to democracy
    The risks kids face with school-issued laptops & smartphones
    How corporations collect and exploit our personal data
    What parents and educators can do today to protect children
    The ethical questions surrounding AI, surveillance, and data use
    🎙️ Show Notes & Topics we cover:
    Defining informational privacy in the 21st century
    Children’s Online Privacy Protection Act (and why it’s outdated)
    School-issued laptops and surveillance concerns
    Corporate data collection, sentiment analysis, and manipulation
    The asymmetric power between consumers and corporations
    Why protecting privacy is vital for democracy
    🔗 Links:
    Buy The Privacy Pirates
    Follow Dr. Leslie Gruis
    Follow Women Talkin’ ’Bout AI:
    📺 YouTube
    📰 Substack

  32. 17

    Beyond Work: Post-Labor Economics with David Shapiro

    Summary: In this conversation, Jessica and Kimberly interview David Shapiro to explore the concept of Post-Labor Economics. They discuss the implications of automation and AI on traditional job structures, the need for new economic measurements, and the evolving social contract. They explore the potential of Universal Basic Income and the importance of education in preparing future generations for a changing economy. The discussion emphasizes the need for a shift in how we perceive work, productivity, and personal fulfillment in a world increasingly dominated by technology.
    Takeaways:
    Post-Labor Economics examines the impact of automation on traditional jobs.
    Automation has historically decoupled productivity from human labor.
    The misconception that technology always creates new jobs is prevalent.
    AI’s rapid advancement poses challenges for job security.
    Universal Basic Income (UBI) is a potential solution for economic displacement.
    Current economic measurements like GDP may not reflect true societal well-being.
    The social contract is evolving as labor becomes less central to identity.
    Education must adapt to focus on empathy, communication, and critical thinking.
    A garden mentality encourages ongoing personal growth rather than a linear life path.
    Rethinking work and meaning is essential in a post-labor society.
    Links:
    Rest Is Resistance: Free Yourself from Grind Culture and Reclaim Your Life by Tricia Hersey (https://thenapministry.wordpress.com/)
    David’s LinkTree
    David’s YouTube Channels
    David’s Substack
    Women Writin’ ’Bout AI Substack

  33. 16

    AI Literacy in Education: You Can’t Teach AI Without Teaching Tech

    In this episode, hosts Jessica and Kimberly are joined by Dr. Juliana Peloche, global educator and senior AI literacy advisor at Edith Cowan University. With over 20 years of cross-cultural teaching experience in Brazil, Chile, and Australia, Juliana shares how a curious 12-year-old student sparked her journey into AI education. Together, they explore why AI literacy is more than a technical skill—it's a foundation for critical thinking, equity, and ethics in the classroom. From digital basics like knowing what a browser is, to reimagining how we assess learning in the age of AI, this episode dives deep into how we can better prepare educators and students for a tech-saturated future—without losing our humanity.

  34. 15

    Writing with AI: Voice, Agency, and the Future of Feedback

    🎧 Episode Summary: Dr. Tamara Tate joins Jessica and Kimberly to talk about AI, education, and the evolving role of writing in a world where students can co-write with machines. Tamara shares how she transitioned from a 17-year legal career into education research, what she’s learning through the development of Papyrus AI, and why feedback, voice, and agency matter more than ever. The conversation covers everything from AI literacy and middle school classrooms to the complexities of funding, parent engagement, and what it really means to “offload” learning. It’s a thoughtful, practical look at how generative AI is reshaping writing instruction—and why it’s not just about speed, but meaning.
    🔗 Show Notes Links:
    Tamara Tate – UC Irvine Profile: https://education.uci.edu/people/tamara-tate/
    Digital Learning Lab: https://digitallearninglab.org
    GenAIED.org – Generative AI in Education Resources: https://genaied.org
    Anna Mills – AI & Writing Pedagogy: https://annamills.net
    Sarah Elaine Eaton – Post-Plagiarism Framework: https://drsaraheaton.wordpress.com

  35. 14

    Raising Kids in the Age of AI: Brain Development, Bias, and Bedtime

    In this episode of Women Talkin’ 'Bout AI, host Kimberly Becker sits down with Dr. Mathilde Cerioli—a cognitive neuroscientist, mom, and Chief Scientist at Everyone.AI—to unpack the complex, often messy intersections of child development, technology, and artificial intelligence.
    We cover:
    What AI can and can’t do for young minds
    How critical thinking actually develops—and why it can’t be outsourced
    The myth of "tech for tech’s sake" and why some edtech harms more than it helps
    Why your kid doesn’t need a bedtime podcast voiced by a deepfaked parent
    The neuroscience behind struggle, dopamine, and why learning should be hard
    Misinformation, deepfakes, and why everyone needs a family safe word
    This conversation blends scientific rigor with real-world parenting chaos, offering both hope and hard truths. Whether you're raising a kindergartener or advising policymakers, this one’s for anyone who wants a future where tech serves human development, not the other way around.
    If you find value in these conversations, consider supporting the podcast with a small donation—every bit helps us keep the mics on and the ideas flowing. Click the link below to donate: https://www.buzzsprout.com/2411501/supporters/new

  36. 13

    Teacher Empowerment in the Age of AI: Marissa Sadler Holder on AI Literacy

    What happens when a passionate educator steps away from the whiteboard and into the world of AI? In this episode, we sit down with Marissa Sadler Holder, a former classroom teacher turned consultant and entrepreneur, and the founder of Teaching with Machines. With a master’s in e-learning and recognition as a two-time recipient of SVS’s Leading Women in AI, Marissa brings a grounded, human-centered approach to AI literacy in education.

    We unpack her journey from teaching French to building a business, the emotional complexities of leaving the classroom, and why she believes teachers—not technologists—should be at the table when shaping the future of AI in education. From the power of small wins to the significance of that second “aha” moment, this conversation is a candid exploration of fear, hope, and the relentless pursuit of meaningful learning in an uncertain world.

    📝 Show Notes:
    Guest: Marissa Sadler Holder, Founder of Teaching with Machines | Edtech Consultant | SVS Leading Women in AI (2024 & 2025)

    Topics We Cover:
    - How COVID catalyzed Marissa’s shift into e-learning and AI
    - The founding story of Teaching with Machines: https://www.teachingwithmachines.com/
    - What it really feels like to leave the classroom after 13 years
    - How she supports teachers at every stage of AI literacy
    - Why incremental change in classrooms matters more than big tech rollouts
    - Two “aha” moments every educator has with generative AI
    - The emotional weight of entrepreneurship vs. classroom stress
    - The role of women’s voices in shaping AI discourse
    - Bridging the AI gap between students, teachers, and parents
    - Her upcoming project: Learning with Machines, focused on student + parent AI literacy
    - Why we should stop aiming to “master AI” and start focusing on meaningful application
    - Conference reflections: Why it's the people who make ASU+GSV unforgettable

    Quotable Moments:
    “The goal isn’t to master AI. It’s to stay curious, stay human, and keep learning.”
    “Every teacher will hit that second lightbulb moment—where you realize AI isn’t just a tool. It’s a transformation.”
    “What we do in professional development should mirror what we do in great teaching: make it relevant, make it engaging, and meet people where they are.”

    Links & Resources:
    - Teaching with Machines (Marissa’s platform for educator-focused AI PD)
    - Marissa on LinkedIn
    - SVS Summit: Leading Women in AI
    - Mentioned thought leader: Ethan Mollick’s Substack

  37. 12

    AI Literacy Is About Power.

    In this episode, hosts Jessica and Kimberly welcome Amanda Bickerstaff, founder and CEO of AI for Education. Amanda shares her journey from teaching to EdTech and discusses the current state of AI in education.

    Key topics include:
    - The limitations and potential of AI tools in education
    - The importance of AI literacy for educators and students
    - How generative AI is challenging traditional educational structures
    - Effective prompting techniques for AI systems
    - Balancing AI optimism and resistance in education

    Amanda offers candid insights on the tensions between EdTech promises and realities while exploring how AI could fundamentally transform learning. This thought-provoking conversation examines the challenges and opportunities as education adapts to the age of artificial intelligence.

  38. 11

    Creating Balance in AI Literacy: A Conversation with Dr. Stella Lee

    In this thought-provoking episode of Women Talkin' 'Bout AI, hosts Dr. Kimberly Becker and Dr. Jessica Parker engage in an insightful conversation with Dr. Stella Lee, founder and chief learning strategist at Paradox Learning. With over 20 years of experience in AI, learning analytics, and digital ethics, Dr. Lee brings a unique interdisciplinary perspective to the discussion.

    • AI literacy: Why it matters and how to teach it
    • Debunking AI myths in education
    • Using AI as a learning partner
    • Balancing AI efficiency with holistic learning
    • AI adoption challenges in under-resourced areas
    • Ethical considerations in AI development

    📚 Content Discussed in This Episode:
    00:00 Meeting Dr. Stella Lee and Her Impact
    06:43 Exploring Paradox Learning's Mission and Insights
    13:50 The Role of Language in AI Communication
    19:20 Overcoming Blank Canvas Syndrome with AI
    25:17 Enhancing Conversations with Chatbots
    31:45 Embracing Technology with Critical Literacy
    38:15 The Evolution of Digital Literacy and AI
    44:50 The Urgency of Innovation in Education
    51:11 Advocating for Thoughtful AI Development

    Learn how AI is reshaping education, its impact on student engagement, and the importance of diverse voices in tech.

    🎓 Perfect for educators, EdTech professionals, and anyone interested in the future of learning!

    Follow Women Talkin' 'Bout AI for more engaging conversations exploring the intersection of AI, education, and ethics from diverse female perspectives in the field.

  39. 10

    Beyond Productivity: Humanizing AI in Education

    🎙️ In this episode of Women Talkin' 'Bout AI, host Kimberly Becker welcomes Tricia Friedman, an educational innovator doing future-forward, inclusive work in AI literacy and community building. In this powerful conversation, international educator and neuroqueer futurist Tricia uncovers the profound potential of AI in education while maintaining authentic human connections.

    Key Insights:
    - The critical need for AI literacy in K-12 and higher education
    - Reimagining learning assessment in the age of generative AI
    - How technology can deepen human understanding and creativity
    - Innovative approaches to integrating AI without losing human connection

    📚 Content Discussed in This Episode:
    0:00 - Introduction
    5:30 - AI in Education
    15:45 - Challenges in K-12 Learning
    25:20 - Reimagining Assessment
    35:10 - Personal AI Experiments
    45:30 - Future of Technology and Learning

    RESOURCES:
    1. "The AI Mirror" by Shannon Vallor
    2. "The New Breed" by Dr. Kate Darling
    3. Parker, J. L., Richard, V., & Becker, K. (2023). Flexibility & iteration: Exploring the potential of large language models in developing and refining interview protocols. The Qualitative Report, 28(9), 2772-2791. This research explores the potential of large language models to iteratively develop and refine interview protocols, providing empirical evidence for their utility in qualitative research.
    4. Parker, J. L., Richard, V., & Becker, K. (2023). Guidelines for the integration of large language models in developing and refining interview protocols. The Qualitative Report, 28(12), 3460-3474. This paper provides comprehensive guidelines for integrating large language models in the development and refinement of interview protocols, enhancing qualitative research methodologies.

  40. 9

    AI in EMI Universities: Academic English, Writing, and Multilingual Learning

    🎙️ In this episode of Women Talkin' 'Bout AI, hosts Jessica and Kimberly dive deep into generative AI with Ilkim Kajcan Dipchin, an award-winning educator and AI strategist from Istanbul. They explore the transformative impact of generative AI on language learning, academic writing, and cross-cultural communication.

    Discover insights into:
    • AI's impact on language learning and academic writing
    • Cultural nuances in AI model development
    • Challenges and opportunities for non-native English speakers
    • Balancing AI tools with authentic learning experiences

    📚 Content Discussed in This Episode:
    00:00 Integrating AI in English Medium Universities
    06:01 Generative AI and Language Diversity
    12:05 The Importance of AI Literacy
    18:18 Marginalized Voices in Academic Publishing
    23:21 The Role of Cheating in Education
    29:11 The Role of AI in Language Learning
    35:09 The Role of Generative AI in Modern Writing
    40:29 Challenges of Turkish Students Writing in English

    Ilkim shares her unique perspective on:
    ✓ AI literacy
    ✓ Multilingual AI challenges
    ✓ Maintaining critical thinking in the age of generative AI

    Perfect for educators, language learners, AI enthusiasts, and anyone curious about the intersection of technology and learning!

  41. 8

    AI in Business: Insights from Award-Winning CEO Myra Roldan

    In this episode of Women Talkin' 'Bout AI, Drs. Jessica and Kimberly welcome Myra Roldan, a pioneering leader in AI and two-time Stevie Award winner. Myra brings her unique perspective as a former applied AI engineer and current consultant to small, medium, and enterprise businesses. She discusses the cultural and ethical considerations of AI adoption, drawing parallels to other technological revolutions.

    📚 Content Discussed in This Episode:
    00:00 Integrating AI into Business Operations
    06:28 Establishing AI Policies and Strategies
    12:24 Understanding Generative AI Misconceptions
    18:36 The Evolution and Adoption of AI
    25:10 Everyday Use of AI Without Awareness
    31:35 Understanding the Blackbox Problem in AI
    37:06 Data Privacy Concerns with Smart Devices
    43:36 Meditative Art and Intellectual Conversations on AI
    49:40 AI and Decision-Making in Human-Centered Solutions
    56:17 Working Commitments and Brief Notes

    Whether you're a business leader, AI enthusiast, or curious about the future of work, this episode offers valuable insights into the practical applications of AI in various industries.

    Connect with Myra:
    1. Myra Roldan on LinkedIn: linkedin.com/in/myraroldan
    2. Your Academy: https://youracademyai.my.canva.site/y...

  42. 7

    Teaching Writing in the Age of AI: Perspectives from Dr. Emily Dux Speltz

    In this episode, Drs. Jessica and Kimberly welcome their guest, Dr. Emily Dux Speltz, an Assistant Professor of Applied Linguistics and Technology at Embry-Riddle Aeronautical University Worldwide. Dr. Dux Speltz shares her experiences developing one of the first AI and Writing courses in the U.S., discusses her research on process-focused writing feedback, and explores how generative AI is reshaping writing instruction and assessment. The conversation covers the evolution of writing technology, the importance of maintaining fundamental writing skills in an AI-enhanced world, and the exciting possibilities that arise when combining disciplinary expertise with AI tools.

    The discussion highlights how AI is not just changing how we write, but also opening new possibilities for creativity and innovation across disciplines, while emphasizing the continued importance of human expertise and critical thinking in writing education.

    Further Reading:
    - Guest: Dr. Emily Dux Speltz (https://emilyduxspeltz.com/), Assistant Professor of Applied Linguistics and Technology at Embry-Riddle Aeronautical University Worldwide (https://erau.edu/)
    - AI in Writing course (https://engl.iastate.edu/2023/04/03/engl-222x-artificial-intelligence-and-writing/) developed at Iowa State University with Dr. Abram Anders (abramanders.substack.com). You can also read this article that featured ENGL 222X in the Iowa State Daily: https://iowastatedaily.com/275846/news/artificial-intelligence-incorporated-in-new-english-course/
    - NSF-funded workshop/conference on text production and comprehension by human and artificial intelligence: https://new.nsf.gov/events/text-production-comprehension-human-artificial

  43. 6

    Does AI Make Research Too Easy? The Risk of Frictionless Scholarship

    In this episode, Moxie founders Jessica and Kimberly discuss their latest tech project, Moxie 2.0. They dive into the challenges of balancing user experience with educational integrity in AI-assisted research tools, share insights from recent academic conferences, and explore how AI is impacting academic writing and research. Join them for an insightful conversation about technology, education, and ethical AI implementation.

  44. 5

    AI Literacy in Higher Ed: Balancing Critique and Curiosity with Dr. Anna Mills

    Join hosts Drs. Jessica and Kimberly as they welcome their first guest, Dr. Anna Mills, a leader in integrating AI in education. With 18 years of community college writing instruction experience, Anna shares her insights on AI literacy, academic integrity, and the evolving landscape of AI in higher education.

    Key topics discussed:
    - AI literacy and its importance in education
    - Balancing critical thinking and AI integration in writing courses
    - The evolution of using generative AI in writing instruction
    - Combining peer feedback with AI feedback in student assignments
    - The challenges and opportunities of AI in teaching and learning
    - Building custom chatbots for educational purposes
    - The importance of educators' voices in shaping AI tools and policies

    Here is Dr. Anna Mills' AI resource list:
    🔗 Anna Mills' website and resources: https://www.annarmills.com/
    🔗 "Assistant, Parrot, or..." publication: https://openpraxis.org/articles/10.55982/openpraxis.16.1.631
    🔗 Stuart Selber's framework for literacies: https://www.amazon.com/Multiliteracies-Digital-Studies-Writing-Rhetoric/dp/0809325519
    🔗 MyEssayFeedback.ai website: https://myessayfeedback.ai/
    🔗 AI Pedagogy Project website: https://aipedagogy.org/
    🔗 NIST US AI Safety Institute MLA team: https://www.nist.gov/artificial-intelligence/ai-safety-institute-consortium
    🔗 Anna Mills' "How Arguments Work": https://open.umn.edu/opentextbooks/textbooks/1112
    🔗 Ethan Mollick's Twitter & LinkedIn: https://twitter.com/emollick; https://www.linkedin.com/in/emollick/
    🔗 Jose Bowen's "Teaching with AI": https://www.press.jhu.edu/books/title/53869/teaching-ai
    🔗 Critical AI Institute at Rutgers: https://criticalai.org/
    🔗 Maha Bali's blog: https://blog.mahabali.me/
    🔗 Anuj Gupta's AI literacy work: https://www.linkedin.com/in/anuj-gupta-3533541a1/
    🔗 CCC AI Learn hashtag: search #CCCAILearn on Twitter
    🔗 "You Look Like a Thing and I Love You": https://www.janelleshane.com/book-you-look-like-a-thing
    🔗 "The Wild Robot" book: https://www.peterbrownstudio.com/books/the-wild-robot/ and movie: https://www.thewildrobotmovie.com/
    🔗 Alison Gopnik's work: https://simons.berkeley.edu/news/stone-soup-ai
    🔗 Anna's AI literacy micro-lessons: https://www.youtube.com/watch?v=KirvJ6kv3m0
    🔗 Joy Buolamwini's work: https://www.media.mit.edu/people/joyab/overview/

  45. 4

    Common AI Myths: What Most People Get Wrong About Generative AI

    In this inaugural episode of "Women Talkin' 'Bout AI," hosts Jessica Parker and Kimberly Becker share their journey from AI novices to creators of Moxie (now defunct), a feedback tool for academic writers built on their years of experience coaching dissertation writers and other early-career researchers. They discuss their motivations, their approach to AI in education, and their experiences working with users. They touch on topics like the importance of iteration in AI interactions, the challenges of AI integration in academia, and the potential of AI for providing formative feedback. The episode concludes with their personal "pit and peach" moments of the week, setting the tone for future episodes that blend AI insights with relatable life experiences.

  46. 3

    AI in Academia: Balancing Efficiency, Learning, and Ethics

    In this episode of Women Talkin’ ’Bout AI, Drs. Jessica and Kimberly dive into the intersection of AI, education, and entrepreneurship. In this candid conversation, they discuss:
    - Balancing tech development with education in an AI startup
    - The AI efficiency myth and its impact on academic writing
    - Insights from recent AI conferences
    - The controversial WildChat dataset and its privacy implications
    - Challenges of integrating AI in higher education
    - Critical thinking skills in the age of AI
    - Personal experiences balancing work and life

  47. 2

    The Future of Academic Writing with AI Tools

    In this episode of "Women Talkin' 'Bout AI", Drs. Jessica Parker and Kimberly Becker discuss a recent study on AI essay grading and its implications for education.

    Key topics:
    - The concept of contrastive rhetoric and how cultural backgrounds may influence writing styles
    - The "No True Scotsman" fallacy in AI debates about education
    - Different types of AI prompting (zero-shot, one-shot, few-shot) and their applications
    - Challenges educators face in adapting assessments and curricula to account for AI use
    - The balance between using AI to increase efficiency versus improve quality in academic work
    - The importance of struggle and friction in the learning process

  48. 1

    AI in Higher Ed: Why 'Process Over Product' is the Key to Critical Thinking

    In this episode of "Women Talkin’ ’Bout AI", Drs. Jessica and Kimberly dive into the evolving landscape of AI in higher education. They discuss Blackboard’s new AI features, the concept of digital twinning, and the challenges of fostering critical thinking in the digital age. They explore the complexities of integrating AI in academic tools and debate its impact on student learning and engagement.

    Key Topics:
    - Blackboard’s new AI avatar feature for discussion boards
    - The concept of “twinning” in educational technology
    - Challenges of AI-generated content in admissions essays
    - The importance of process over product in learning
    - Critical thinking markers in human-AI interactions
    - The need for evidence-based approaches to AI in education
    - Balancing efficiency with meaningful learning experiences
    - Guiding students on appropriate use of AI tools in academic settings

  49. 0

    AI Literacy in Higher Ed: Moving from Curiosity to Critical Practice

    In this episode of "Women Talkin' 'Bout AI", Jessica and Kimberly explore the challenges and opportunities of integrating AI into higher education.

    Key Topics:
    - Taking small, experimental steps to implement AI in courses
    - The importance of curiosity and play when learning about AI
    - AI literacy: functional, critical, and rhetorical approaches
    - The power of semantic search in academic research
    - Rethinking rhetoric and communication in the age of AI

    🔗 LINKS:
    ➡️ The Worlds I See, by Dr. Fei-Fei Li: https://www.amazon.com/Worlds-See-Cur...
    ➡️ An AI Literacy Framework for Higher Ed: https://moxielearn.ai/ai-literacies-f...
    ➡️ Keyword vs. Semantic Search: /boolean-vs-keyword-lexical-search-vs-seman...
    ➡️ Color Wow Root Cover Up: https://colorwowhair.com/
    ➡️ Perimenopause & ADHD: https://www.cambridge.org/core/journa...


ABOUT THIS SHOW

Two women examining AI through a lens of power, not just capability. Why deepfakes target women. How bias gets baked in. What tech companies aren't saying. Kimberly brings corpus linguistics; Jessica brings strategy. Both bring skepticism, feminism, research expertise, and a refusal to take the hype at face value. Subscribe to our channel if you’re also interested in understanding AI behind the headlines.

HOSTED BY

Kimberly Becker & Jessica Parker
