PODCAST · education
EdTechnical
by Owen Henkel & Libby Hills
Hosted by EdTechnical co-founders Libby Hills (CEO) and Owen Henkel (Research Director), the EdTechnical podcast explores AI in education through a research-grounded lens. Each episode, Libby and Owen ask experts to help educators sift the useful insights from the AI hype. They ask questions like: How does this actually help students and teachers? What do we actually know about this technology, and what is just speculation? And (importantly!) when we say AI, what are we actually talking about?

Beyond the podcast, EdTechnical also invests in promising AI edtech companies and conducts applied research to inform real-world product and investment decisions.
-
51
AI That Acts: What “Agents” Mean for Classrooms
In this EdTechnical short, Libby and Owen unpack ‘AI agents’ and what they mean for education. Agents are large language models connected to tools and workflows that are allowed to take actions like searching, summarising, and completing multi-step tasks. Recent progress comes from the combination of stronger models and better systems for connecting agents to external tools, enabling more complex and autonomous outputs.

Applying agents to education brings a tension between flexibility and reliability. Agentic systems can be useful for teachers, who operate across varied contexts and need adaptable support. For students, especially in structured learning, too much flexibility can reduce clarity and introduce inconsistency. This matters because effective learning depends on structure and progression. The value of agents in education depends on how well they are matched to the specific task and learning goal.

Links:
OpenClaw & Moltbook: The viral story of AI agents building their own Reddit-like social network https://techcrunch.com/2026/01/30/openclaws-ai-assistants-are-now-building-their-own-social-network/
Claude Research Mode: Anthropic's explainer on deep research https://www.anthropic.com/news/research

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert

Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
50
Voice AI Is Listening. But Is It Actually Hearing? (Recorded Live at SXSW EDU 2026)
At this year's SXSW EDU, Owen joined a panel on what it takes to make voice AI for assessment work in classrooms.

In this live recording of the session, the panelists untangle how voice AI works, and what testing this technology with kindergartners looks like in rural Georgia. They explain why the distinction between capturing what a student said and what they meant matters enormously for literacy assessment, and why questions of privacy, equity and model bias are not afterthoughts but design requirements. Where does voice AI genuinely open up new possibilities in education, and where is the evidence still thin?

The other panelists were Patti Ura, Director of Learning Technology Research at Digital Promise; Amelia Kelly, VP of Data Science at Curriculum Associates and former CTO of SoapBox Labs; and Kristen Huff, Head of Measurement at Curriculum Associates.

Links:
SoapBox Labs, now part of Curriculum Associates
Digital Promise
OpenAI Whisper, the open source speech-to-text model
-
49
A Teddy Bear That Talks Back?
In this EdTechnical short, Libby and Owen test a conversational plush toy to understand more about AI-powered toys designed for young children. Recent research from Cambridge shows that preschool-aged children can form rapid emotional connections with social robots like these, even when the responses from the robot are inconsistent.

Children’s experiences with AI toys are shaped by voice and real-time interaction. Could highly responsive, frictionless AI systems in toys influence children’s expectations of human relationships? Libby and Owen discuss the difference between shared, supervised play and extended solo interaction with the toy, which may be less advisable. As the technology continues to improve, the key challenge becomes how these tools are introduced and used in early childhood environments.

Links:
BBC article: AI toys for children misread emotions and respond inappropriately, researchers warn
Cambridge study on AI toys in early childhood
AI chatbots and the “empathy gap” in children
-
48
AI broke take-home assignments. Can it fix them too?
In this episode of EdTechnical, Libby and Owen speak with Panos Ipeirotis, Professor at NYU Stern School of Business, about his experiment using AI to run oral exams in university courses. As generative AI makes it easier for students to outsource written assignments, educators are asking whether traditional take-home assessments still measure real understanding.

Panos introduced AI-mediated oral assessments after noticing a mismatch between high-quality written submissions and weak classroom discussion. In the new system, students answer questions from a voice agent that probes their understanding of the material and their own work. Panos tells Libby and Owen how the exams work, including an AI “council” of language models that evaluates transcripts and produces detailed feedback. What does this approach reveal about the future of assessment? Could AI make oral exams scalable in higher education, and even improve fairness and grading consistency?

Links:
Panos Ipeirotis – NYU Stern Faculty Profile
NYU Professor Uses AI-Run Oral Exams to “Fight Fire with Fire”
Article: The case for oral assessment in the age of AI

Guest bio:
Panos Ipeirotis is a Professor of Information, Operations and Management Sciences at NYU Stern School of Business. His research focuses on data science, AI, and human-AI collaboration. In addition to his academic work, he experiments with practical applications of AI in education, including new models of assessment that combine oral exams with AI-based evaluation.
-
47
Why AI Can't Automate Just the "Boring" Parts of Teaching
In this EdTechnical short, Libby and Owen explore how AI might reshape teaching through the lens of the “weakest link” theory from economics. They discuss the possibility of full job replacement, partial task automation, and productivity gains for teachers. Automation often shifts the composition of work rather than eliminating roles, as it did for bank tellers and radiologists. In schools, planning, grading, diagnosing student understanding, classroom management, and relationship-building are tightly interconnected. Automating one component may reallocate time, but the complexity is not neatly reduced.

AI can already perform isolated teaching tasks. What happens to the education system when those tasks are embedded in a deeply relational profession?

Links:
Michael Kremer (1993), “The O-Ring Theory of Economic Development”
David Autor (2015), “Why Are There Still So Many Jobs? The History and Future of Workplace Automation”
-
46
Are Roboteachers Coming? (Probably Not)
In this episode of EdTechnical, Libby and Owen speak with Kristyn Sommer, a developmental psychologist and child-robot interaction researcher.

Together, they explore how young children learn through imitation, why physical presence matters for learning, and what the so-called “robot deficit” reveals about engagement, psychological safety, and learning outcomes. Kristyn explains where robots can support learning, where they fall short, and why many assumptions about roboteachers are far ahead of the evidence. They also discuss the practical realities and the ethics of educational robotics, and why robots are more likely to support teachers than replace them anytime soon.

Links:
Can a robot teach me that? Children’s ability to imitate robots
Preschool children overimitate robots, but do so less than they overimitate humans
When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task

Bio:
Kristyn Sommer is a developmental psychologist and child-robot interaction researcher whose work explores how young children learn from and with social robots. She is a postdoctoral research fellow at Griffith University’s School of Applied Psychology, where she investigates how children’s social, emotional and behavioural engagement with robotic teachers affects learning and development. Her research also examines individual differences in how children relate to and trust robots, and how these insights might inform more supportive, evidence-based uses of educational technology. She is also a Jacobs Foundation Research Fellow focused on foundational work in children’s learning with robot companions.
-
45
Adding It Up: Dan Meyer on Math, Tech & AI Scepticism
In this episode of EdTechnical, Libby and Owen sit down with Dan Meyer: math educator, EdTech innovator, and self-proclaimed “token AI sceptic”. Dan’s rare mix of classroom experience and product design insight gives him a unique perspective on how technology intersects with real classrooms. He shares what the classroom teaches him about student engagement, the challenges teachers face, and why motivation is deeply social, something EdTech can overlook.

They dig into how AI can support creativity and connection, why great math teaching starts with inviting and developing student thinking, and where the “AI guy” might be missing the point. Plus, Dan reveals the AI project he’s excited about and what it means for teachers.

Links:
TeacherTapp survey on teacher AI use
EdTechnical’s forecasting competition - deadline 16 December

Bio:
Dan Meyer taught secondary maths to students who didn't like secondary maths. He has advocated for better maths instruction on CNN, Good Morning America, Every Day with Rachael Ray, and TED.com. He earned his doctorate in maths education from Stanford University and is the Vice President of Teacher Growth at Amplify, where he explores the future of maths, technology, and learning. He has worked with teachers around the world, calls Oakland home, and taught eighth graders there yesterday.
-
44
How Revolutionary is Alpha School?
In this episode of EdTechnical, Libby and Owen look at Alpha School, a model that started as a micro-school in Austin, Texas, and is now expanding. At its core, Alpha condenses academic learning into a morning block where students work largely independently using software, supported by guides rather than traditional teachers. Afternoons are reserved for enrichment and life skills.

Libby and Owen discuss the appeal of this approach, the evidence behind mastery-based learning, and the big questions about scalability and cost. Is this a breakthrough for education or just a well-designed version of ideas we’ve seen before? Join them for a brief dive into Alpha School’s model and what it could signal for future learning models.

Links:
Alpha School’s white paper
A parent review of Alpha School
A Wired article about Alpha School
EdTechnical’s forecasting competition
-
43
Back to the Future: Two Years on with Daisy Christodoulou
In this episode Libby and Owen are joined by Daisy Christodoulou MBE, EdTechnical’s very first guest from two years ago. Daisy is Director of Education at No More Marking and a leading voice in assessment. Daisy, Owen and Libby reflect on what’s changed in the two years since that first episode, including Daisy’s own views about the opportunities for AI use in assessment. Daisy shares what her team has learned through their recent experiments with AI, including how falling model costs are unlocking new possibilities, and why human-in-the-loop systems are essential.

Links:
EdTechnical website
Making Good Progress? The Future of Assessment for Learning, by Daisy Christodoulou
No More Marking
-
42
Guardrails and Growth: California’s AI Safety Push
Millions of students now study with AI chatbots, and there are growing concerns about what happens when vulnerable teens form emotional bonds with AI. Tragic teen deaths have sparked intense debate about how to protect young people from AI systems that blur the line between tool and companion. California just drew the first regulatory lines, but they're messy, and educational AI is caught in the middle. In this short episode, Libby and Owen discuss the trade-off between building guardrails for safety and achieving ambitious goals.

This matters beyond California: when the state that's home to OpenAI, Google, and Anthropic sets the rules, there are consequences for classrooms everywhere.

Links:
SB 243 Text: Companion Chatbots
AB 1064 Veto Message
-
41
Is social media really destroying teen mental health?
In this episode of EdTechnical, Libby and Owen speak with Candice Odgers, a psychologist and researcher studying how online experiences influence children's mental health. They revisit the debate around social media and teen wellbeing, questioning the claims that social media use has caused rising rates of depression and anxiety. Candice calls for a more careful reading of the evidence and cautions against rushing into restrictive policies that may have unintended consequences or divert attention from more effective interventions.

Candice also shares early findings from her recent research into AI in education. She finds surprisingly limited use of AI among young people, and mixed perceptions around what counts as cheating, which shapes how these tools are received. Notably, she found no clear socioeconomic divide in AI engagement, raising questions about how these tools might be designed to support more equitable learning. They discuss the challenge of designing rigorous studies in this space and the need for thoughtful, evidence-informed approaches to both social media and AI.

Links:
Adaptlab - Adaptation, Development and Positive Transitions Lab
NYT article: Panicking About Your Kids’ Phones? New Research Says Don’t

Bio:
Candice Odgers is the Associate Dean for Research and Faculty Development and Professor of Psychological Science at the University of California, Irvine. She also co-directs the Child & Brain Development Program at the Canadian Institute for Advanced Research and the CERES Network, funded by the Jacobs Foundation. Her team has been capturing the daily lives and health of adolescents using mobile phones and sensors over the past decade. More recently, she has been working to leverage digital technologies to better support the needs of children and adolescents as they come of age in an increasingly unequal and digital world.
-
40
Why AI Detectors Don't Work for Education
In this episode of EdTechnical, Libby and Owen explore why traditional AI detection tools are struggling in academic settings. As students adopt increasingly sophisticated methods to evade AI detection, like paraphrasing tools, hybrid writing, and sequential model use, detection accuracy drops and false positives rise. Libby and Owen look at the research showing why reliable detection with automated tools is so difficult, including why watermarking and statistical analysis often fail in real-world conditions.

The conversation shifts toward process-based and live assessments, such as keystroke tracking and oral exams, which offer more dependable ways to evaluate student work. They also discuss the institutional challenges that prevent widespread adoption of these methods, like resource constraints and student resistance. Ultimately, they ask how the conversation about detection could lead towards more meaningful assessment.
-
39
Rewiring the Brain: Reading, AI and the Science of Literacy
In this first episode of EdTechnical Season 3, Libby and Owen speak with Dr. Jason Yeatman from Stanford University about how the brain learns to read, the power of better assessment, and how AI is beginning to reshape our relationship with reading itself. They touch on the science behind reading as a learned skill, the surprising overlap between visual and auditory processing, and the challenges schools face in teaching it well. ROAR (Rapid Online Assessment of Reading), a free online reading assessment tool developed by Dr. Yeatman’s lab, comes up as a practical way schools across the US are identifying literacy gaps and supporting students at scale.

They reflect on what reading looks like in an AI-driven world in which technology can surface information instantly, concluding that literacy remains essential for engaging with complexity, understanding detail, and maintaining equal access to opportunity and participation in society.

Links:
ROAR (Rapid Online Assessment of Reading) – Welcome to ROAR!
Journal article: The Virtuous Cycle between Education and Neuroscience, by Jason D. Yeatman and Maya Yablonski, published in Mind, Brain and Education (August 2025)

Bio:
Dr. Jason Yeatman is an Associate Professor in the Graduate School of Education, the Department of Psychology, and the Division of Developmental and Behavioral Pediatrics at Stanford University. He earned his PhD in Psychology, focusing on the neurobiology of literacy and brain imaging methods to study learning and plasticity. As director of the Brain Development and Education Lab, his research aims to uncover how children learn to read, how this process differs in those with dyslexia, and how to design effective literacy interventions, using structural and functional neuroimaging to explore how reading instruction shapes brain development.
-
38
Assessment in Education: To AI or Not to AI?
In this episode of EdTechnical, Libby and Owen speak with assessment expert Dylan Wiliam, Emeritus Professor at the UCL Institute of Education, about how formative assessment and AI are reshaping classroom practice. Dylan brings decades of experience in educational research and teacher development to a timely conversation about what works, what doesn’t, and what’s next for assessment.

They cover:
- Why formative assessment remains underused despite its proven impact
- How AI is reshaping summative assessment and teacher workload
- The limits of AI in delivering meaningful feedback
- Rethinking homework in the age of AI
- Oral exams, conversational assessment, and the future of grading
- The potential for AI to shift the teacher-student dynamic for the better

Links:
Book: Student Assessment: Better Evidence, Better Decisions, Better Learning. Dylan and others explore how assessment can be redesigned to better support learning and decision-making in schools.
Podcast episode: Formative Assessment, AI, and the Future of Teaching. Dylan discusses how AI can support teacher growth and formative assessment, while cautioning against overreliance on tech.
-
37
Is ChatGPT Rotting Your Brain?
In this short, Libby and Owen digest a recent MIT study attracting a lot of attention, ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing’. The study looked at how using tools like ChatGPT for writing essays affects people's brains and writing abilities compared to using search engines or just their own thinking. Is there a trade-off between making writing easier in the short term and harming cognitive abilities and learning over time? This question is especially salient for students who are in the earlier stages of developing their essay writing skills.

Link:
Your Brain on ChatGPT
-
36
Finding Their Voice: Voice AI for Literacy Support
Voice AI is having a moment in education. As schools grapple with declining literacy scores and stretched teaching resources, voice-enabled tools have the potential to help. But what's already working in real classrooms, and what challenges remain?

In this episode, Libby and Owen speak with Kristen Huff from Curriculum Associates and Amelia Kelly from SoapBox Labs about the emerging field of voice AI for literacy support and assessment. Together they explore how automatic speech recognition technology helps teachers identify reading challenges earlier, provide more frequent assessments, and give students personalized feedback on their oral reading. They discuss the practical realities of developing and implementing voice AI in education, from navigating noisy classroom environments to building teacher trust in AI-generated assessments.

Links:
EdTechnical episode with TeachFX about voice AI and teacher feedback
EdTechnical episode with Professor Peter Foltz about voice AI
EdTechnical episode with Dr Carmen Strigel about use of voice technology for teacher feedback in low resource settings

Guest biographies:
Kristen Huff, MEd, EdD, currently serves as the Head of Measurement at Curriculum Associates, where she works with a team of assessment designers, psychometricians, and researchers in the development of online assessments integrated with personalized learning and teacher-led instruction. Kristen has deep expertise in K-12 large-scale assessment, and has presented and published consistently in educational measurement conferences and publications for over 25 years.

Amelia Kelly, PhD, is an AI engineer and pioneer in voice technology with more than 15 years of experience in speech recognition and natural language understanding. She is a Fulbright Scholar and Eisenhower Fellow, holding a PhD and master’s in linguistics and speech technology. Amelia currently serves as chief technology officer of SoapBox Labs and vice president of data science at Curriculum Associates, where she leads the development of child-specific speech-recognition technologies.
-
35
Coach or Crutch? Using AI to hone self-regulation (not outsource it)
In this episode, Libby and Owen talk to Sanna Järvelä and Inge Molenaar, two of the world’s leading scholars on self-regulated learning (SRL). Together they cover SRL 101: what self-regulated learning is and why it is a valuable skill. Self-regulated learning means students setting their own goals and then monitoring their learning to achieve those goals. Self-regulation can come more naturally in informal learning settings like sports, but it can be harder to monitor your learning and know if you're on track in school. Sanna and Inge explain how technology can help address this and make the learning process more visible.

AI systems offer valuable opportunities for better understanding and measuring of self-regulated learning, but they need to be carefully designed. We want AI to be a coach, not a crutch: AI systems need to reinforce self-regulated learning, not encourage students to offload it. Sanna and Inge also touch on the increasingly important question of how we self-regulate our own use of AI. When do I need to proofread this, when do I use autocomplete, and when do I turn AI off?

Guest biographies and links:
Sanna Järvelä is Professor of Learning Sciences & Educational Technology at the University of Oulu, Finland, where she leads the LET research unit. She is co-Director of CELLA, the Center for Learning and Living with AI, supported by the Jacobs Foundation.

Inge Molenaar is Professor of Education & Artificial Intelligence at Radboud University and founding Director of the Dutch National Education Lab AI (NOLAI). She is co-Director of CELLA, the Center for Learning and Living with AI, alongside Sanna.
-
34
A1 sauce for all: Reflections from SXSW and ASUGSV
This week Owen and Libby reflect on two recent EdTech conferences in the US: SXSW EDU in March and ASUGSV in April. They discuss how much things have shifted for US education over this short time period, and three themes that stood out to them both: AI literacy, transformation versus efficiency, and the disruptive potential of AI for education.
-
33
Mimicry versus meaning: why context is important for AI tools
Another live EdTechnical episode! In this short, Owen does a deep dive on AI and discourse analysis (the study of how meaning is constructed through language) with three experts. The conversation explores the intersection between AI, particularly large language models (LLMs), and the study of discourse. This is a topical conversation as LLM capabilities continue to evolve: LLMs have mastered sentence-level communication, but we know less about their ability to be useful over the course of a full conversation, and in complex, interactive processes (like learning) that require a deeper appreciation of context.

Featuring:
Pani Kendeou: Professor at the University of Minnesota, researching learning, cognition, and technology, and a former elementary school teacher.
Alyssa Wise: Professor of Technology and Education at Vanderbilt University, directing the Live Learning Innovation Incubator, which bridges technology with real-world classroom challenges.
Art Graesser: Professor at the University of Memphis, co-founder of the Institute for Intelligent Systems and the Society for Text and Discourse.
-
32
Live from SXSW EDU: Evidence Eats AI for Breakfast
Everyone is talking about AI’s power to provide answers, but what about your lingering questions? What does the latest research actually tell us? Join Libby and Owen for this live session from SXSW EDU as they delve into the latest research to uncover where AI is truly adding value in the educational landscape, and where it falls short. They’re joined by two expert guests: Kristen DiCerbo from Khan Academy and Assistant Professor Peter Bergman from the University of Texas at Austin and Learning Collider. The group discusses the most pressing open questions and key findings from the latest research.
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
31
181 Papers Later: What We Know (and Don't) About GenAI in Schools
In this episode, Owen and Libby chat with Chris Agnew about Stanford's new generative AI hub for education. Chris leads this initiative within Stanford's SCALE program, which aims to be a trusted source for education system leaders on what works in AI and learning.
Chris walks us through their research repository of 181 papers examining AI's impact in K-12 education. He outlines their GenAI tools typology, which breaks down AI applications into three categories: efficiency gains, improving student outcomes, and reimagining schooling. The conversation explores key research gaps, including how schools can productively engage with teachers' unions on AI adoption, and understanding how students use AI tools for homework - the "elephant in the room" that keeps education leaders up at night.
Before joining Stanford, Chris worked in non-traditional learning environments from wilderness education to apprenticeship programs. He shares both aspirational and practical visions for AI in education over the next five years - though sadly, none involve Owen's hoped-for cyborg centaur tutors (yet).
Links:
Stanford Accelerator for Learning SCALE Initiative
Generative AI Research Repository
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
30
Is two years of learning possible in six weeks with AI?
In this short, Owen and Libby discuss a recent World Bank blog post about a study in Nigeria that evaluated the impact of Microsoft Copilot (powered by ChatGPT) on student learning outcomes. In a six-week after-school programme, students were supported to use Copilot. The full study hasn’t been published yet, but the blog post reports “overwhelmingly positive effects on learning outcomes”. It reports that the learning improvement over the six-week programme was equivalent to nearly two years of typical learning. Owen has a few questions about this…
Link
World Bank blog post ‘From chalkboards to chatbots: Transforming learning in Nigeria, one prompt at a time’
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
29
Babies & AI: what can AI tell us about how babies learn language?
In this episode, Libby and Owen interview Mike Frank, Professor at Stanford University and a leading expert in child development. This episode has a different angle from the others, as it is more about AI as a scientific instrument than as a tool for learning. Libby and Owen have a fascinating discussion with Mike about language acquisition and what we can learn about language learning from large language models. Mike explains some of the differences between how large language models develop an understanding of human language versus how babies do this. There are some big questions touched on here, including how much of the full human experience it’s possible to capture in data. Libby and Owen also make excellent use of Mike’s valuable time by asking for his expert view on why infants find unboxing videos - videos of other children opening gifts - so addictive.
Links
Mike Frank’s biography
New York Times piece about Mike’s work
An interview with Mike about his research
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
28
Teachers & ChatGPT: 25.3 extra minutes a week
In this short, Libby and Owen discuss a hot-off-the-press study that is one of the first to test how ChatGPT impacts the time science teachers spend on lesson preparation. The TLDR is that teachers who used ChatGPT, with a guide, spent 31% less time preparing lessons - that’s 25.3 minutes per week on average. This very promising result points to the potential for ChatGPT and similar generative AI tools to help teachers with their workload. However, we encourage you to dig into the summary and report to go beyond the headline result (after listening to this episode) - this is a rich and rigorous study with lots of other interesting findings!
Links
EEF summary
Full study
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
27
How & why did Google build an education-specific LLM? (part 2/3)
This episode is the second in our three-part mini-series with Google, where we find out how one of the world’s largest tech companies developed a family of large language models specifically for education, called LearnLM. This instalment focuses on the technical and conceptual groundwork behind LearnLM. Libby and Owen speak to three expert guests from across Google, including DeepMind, who are heavily involved in developing LearnLM. One of the problems with out-of-the-box large language models is that they’re designed to be helpful assistants, not teachers. Google was interested in developing a large language model better suited to educational tasks, that others might use as a starting point for education products. In this episode, members of the Google team talk about how they approached this, and why some of the subtleties of good teaching make this an especially tricky undertaking!
They describe the under-the-hood processes that turn a generic large language model into something more attuned to educational needs. Libby and Owen explore how Google’s teams approached fine-tuning to equip LearnLM with pedagogical behaviours that can’t be achieved by prompt engineering alone. This episode offers a rare look at the rigorous, iterative, and multidisciplinary effort it takes to reshape a general-purpose AI into a tool that has the potential to support learning.
Stay tuned for our next episode in this mini-series, where Libby and Owen take a step back and look at how to define tutoring and assess the extent to which an AI tool is delivering.
Team biographies
Muktha Ananda is an engineering leader for Learning and Education at Google. Muktha has applied AI to a variety of domains, such as gaming, search, social/professional networks, and online advertising, and most recently education and learning. At Google, Muktha’s team builds horizontal AI technologies for learning which can be used across surfaces like Search, Gemini, Classroom, and YouTube. Muktha also works on Gemini Learning.
Markus Kunesch is a Staff Research Engineer at Google DeepMind and tech lead of the AI for Education research programme. His work is focused on generative AI, AI for education, and AI ethics, with a particular interest in translating social science research into new evaluations and modeling approaches. Before embarking on AI research, Markus completed a PhD in black hole physics.
Irina Jurenka is a Research Lead at Google DeepMind, where she works with a multidisciplinary team of research scientists and engineers to advance generative AI capabilities towards the goal of making quality education more universally accessible. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an Experimental Psychology student at Westminster University. This was followed by a DPhil at the Oxford Center for Computational Neuroscience and Artificial Intelligence.
Link
The LearnLM API
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
26
AI tutoring part 2: How good can it get?
In this episode, Owen and Libby chat about AI tutoring with guests Ben Kornell, Managing Partner at Common Sense Growth Fund, and Alex Sarlin, a veteran of the edtech industry. Both co-founded Edtech Insiders, a leading newsletter and podcast covering the growing Edtech industry. Ben and Alex differentiate between AI-powered search and true AI tutoring, and discuss trends like AI-enhanced human tutors, hybrid models, and fully autonomous AI bots. The conversation highlights the need for AI to integrate with, and learn from, traditional education in developing key elements, such as targeting the right zone of proximal development. Human tutors can sense motivation and frustration, helping students through the more challenging parts of learning. Emerging technologies are now using facial and physical cues to gauge engagement, which can serve as nudges for AI tutors or human instructors to boost motivation or adjust the level of content. They also address ethical and political risks, such as biased responses and dependency issues. With exciting developments on the horizon, the episode explores the at times seemingly sci-fi-like future of AI tutoring!
Guest bios:
Ben Kornell - Ben is currently serving as the Managing Partner of the Common Sense Growth Fund at Common Sense Media. Prior to that, he worked as a School Board Member for the San Carlos School District and was the Co-Founder and Podcast Host of Edtech Insiders.
Alex Sarlin - Alex is a 15-year veteran of the Edtech industry, having worked as a Product Manager and Learning Engineer at both large Edtech companies (2U, Scholastic, Chan Zuckerberg Initiative) and startups (Coursera, Skillshare, Credly, Knewton). He is currently a consultant and adviser to a number of Edtech companies in higher education and the future of work. He holds a Master's in Instructional Design from Columbia University, and is the founder of Edtech Insiders, a leading newsletter and podcast covering the growing Edtech industry.
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
25
Inside the black box: How Google is thinking about AI & education (part 1 of 3)
This episode is the first of a three-part mini-series with Google. There is a lot of interest in how big tech companies are engaging in AI and education and what their future plans are - in this mini-series, hear the latest directly from Google. The genesis of this mini-series was a short Ed-Technical episode from earlier this year, in which Libby and Owen discussed a paper Google released about the work they had done to fine-tune an LLM called LearnLM to make it more useful for education. This work was motivated by a realisation that some of the core behaviours of LLMs (helpfulness, sycophancy) aren’t aligned with what’s valuable from a learning perspective, and prompting can only go so far. This first episode focuses on how Google is integrating LearnLM’s capabilities into existing Google products like YouTube and new products like Learn About. The next episode in the mini-series will focus on LLMs and tutoring, and the final episode will be a more technical episode on the development of LearnLM. We had a chance to talk to a number of folks across a range of teams, including LearnX, Google Research and DeepMind. There was too much great content to squeeze into three episodes, but all full interviews will be up on our YouTube channel. In this episode we include excerpts from interviews with four members of the team.
Rob Wong is the Product Lead for LearnX, a team within Google that builds learning features on Search, YouTube, and Gemini chat, and also works on LearnLM in partnership with Google Research and Google DeepMind.
Julia Wilkowski leads a pedagogy team at Google. Her team collaborates with Google product teams to apply learning science principles and teaching best practices.
Markus Kunesch is a Staff Research Engineer at Google DeepMind and tech lead of the AI for Education research programme. His work is focused on generative AI, AI for education, and AI ethics, with a particular interest in translating social science research into new evaluations and modeling approaches.
Angie Mac McAllister, PhD is a Group Product Manager at Google with a vision: to make a personal AI tutor available to everyone. Focused on developing learning features for Gemini, Mac combines 35 years of experience in education with cutting-edge AI to help students become better learners.
Links:
Google’s technical report on LearnLM
Ed-Technical short episode on Google’s LearnLM paper
Article about Learn About, Google’s experimental new AI tool
Article by Angie Mac McAllister about new Gemini learning features
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
24
Big data and algorithmic bias in education: what is it and why does it matter?
This episode, Owen and Libby speak to Ryan Baker, a leading expert in using big data to study learners and learning interactions with educational software. Ryan is a Professor in the Graduate School of Education at the University of Pennsylvania, and is Director of the Penn Center for Learning Analytics. Ryan provides an overview of educational data mining (otherwise known as EDM) and explains how insights from EDM can help improve learner engagement and outcomes. Libby and Owen also explore the technical aspects of algorithmic bias with Ryan, discussing why it matters, how it is defined, and how it can be addressed from a technical perspective.
Links:
Ryan Baker biography
One of Ryan Baker’s research papers about algorithmic bias
Big Data and Education - Ryan Baker’s free massive online open textbook
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
23
Think aloud or think before you speak?: OpenAI’s new model for advanced reasoning
In this short episode, Libby and Owen discuss OpenAI’s new model for advanced reasoning, o1. They talk about its new capabilities and strengths, and what they think about its significance for education after an initial play around. They talk through the benefits of ‘think aloud’ versus ‘think before you speak’ approaches in education, and how this relates to o1.
Links:
OpenAI’s announcement about o1
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
22
Misconceptions about misconceptions: How AI can help teachers understand & tackle student misconceptions
In this episode, Libby and Owen are joined by Craig Barton, Head of Education at Eedi and host of the Mr Barton Maths and Tips for Teachers podcasts, along with Simon Woodhead, Director of Research at Eedi. Together, they explore the world of educational misconceptions: what they are, why they matter, and how AI and data science can help tackle them.
Links:
Craig Barton biography
Simon Woodhead biography
Eedi’s research
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
21
Why Language Models are suck-ups and how this can be bad for learning
In this short, Libby and Owen discuss recent research from Anthropic looking at sycophancy – the tendency to agree with users – in large language models (LLMs), and key research from educational psychology about how important feedback is for learning. Libby and Owen connect the two papers and explore why sycophancy is especially a problem when it comes to using LLMs for educational purposes.
Links:
Anthropic paper on sycophancy in language models
John Hattie and Helen Timperley’s paper, The Power of Feedback
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
20
Passionate about planning (and Tim Walz): automated lesson planning tools
In this short, Libby and Owen discuss automated lesson planning tools (after Owen stops talking about his Tim Walz crush). There’s now a growing number of lesson planning tools out there for teachers who are using AI: Khanmigo, Magic School, Diffit and Oak National Academy (who will soon release a lesson planning tool), to name a few. Libby and Owen cover what some of the automated tools do and what some of their features are. They share their thoughts about the value and benefits of the tools. They also do a quick primer on lesson plans and how they differ from other education materials, for all the non-teachers who listen.
Links:
Khanmigo blog about their approach to building a lesson planning tool
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
19
Short: Generative AI Can Harm Learning - our quick takes
In this short, Libby and Owen discuss a recent paper that has generated interest and discussion, called ‘Generative AI Can Harm Learning’. The paper presents the findings from a thought-provoking study of nearly 1,000 students in Turkey. The study tested the effects of giving students access to two different versions of GPT-4 while studying math: one was essentially ChatGPT, and the other was a version of GPT-4 that had been tailored for tutoring with a thin prompt wrapper – so it didn’t just give students the answer. The main finding (that the title is based on) is that access to generic ChatGPT had a negative effect on students’ math test results, versus the control group who studied with no access to a chatbot. Not everyone agrees that the results justify the somewhat dramatic title, or that the title reflects the most interesting findings from the study. Listen in to see what Libby and Owen think.
The ‘Generative AI Can Harm Learning’ paper can be found here.
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
18
Tangerines & loquats: Building AI EdTech for low resource settings
This episode, Owen and Libby speak to Carmen Strigel, Senior Director of Education Technology at RTI, a non-profit global research organisation. Carmen has been the driving force behind a number of successful EdTech products built and used in low resource settings. Carmen tells Owen and Libby about Tangerine, data collection software used in more than 60 countries, and Loquat, a machine learning tool that provides feedback to teachers on their classroom talk. This episode builds on earlier interviews this season about voice AI with Alyssa Van Camp from TeachFX and Peter Foltz from the Institute of Cognitive Science at the University of Colorado Boulder. Carmen tells Libby and Owen about voice AI in low resource settings. They also explore how contextual factors, like different pedagogies and classroom settings, influence EdTech product design in low resource settings. Carmen and Owen connect over their shared passion for assessment - watch this space for the ‘Formative Assessment Fanboy’ t-shirts proposed in the episode.
Links:
Carmen Strigel’s biography and publications
Information about Tangerine
Information about Loquat
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
17
Short episode: What do teachers, parents, and students really think about AI chatbots?
In our second short episode of the season, Owen and Libby chat about the recently released results of a US poll (conducted on behalf of the Walton Family Foundation and Renaissance Philanthropy) looking at the views of teachers, parents, and students on AI chatbots. There were some surprising findings: more than 8 in 10 participants think technology in education has had a positive impact, and parents want to see AI chatbots used more in their child’s education. Listen in to hear more highlights from the poll.
Links
Report on the results of the poll
EdWeek article about the poll results
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
16
From biscuits to AI: Teacher Tapp's surprising insights on teacher preferences
In this episode of Ed-Technical, Owen and Libby speak with Becky Allen, co-founder of Teacher Tapp, a survey tool that polls a representative sample of teachers about what's happening in their schools, classrooms, and lives at the end of every school day. The conversation covers a range of topics related to AI and education, including how teachers are currently using large language models, the potential for AI to address the "lockstep problem" in education (that all students are expected to progress together, despite their differences), and the future role of technology in schools.
Becky shares insights from Teacher Tapp surveys, revealing that many teachers are actively using AI tools for tasks like lesson planning and content creation, particularly in English and upper primary education. She discusses the challenges of personalised learning and the importance of considering teachers' preferences when implementing new technologies. Becky also offers her perspective on the future of AI in education, suggesting that while independent learning may see significant changes, the structure of classroom instruction is likely to remain largely unchanged due to the complex social dynamics of schools.
“You've got about a third of the teachers that seem to be actively using large language models during their work. What they said about how they're using it, some of it isn't surprising, like things that we would call kind of admin of some description. But the thing I didn't expect is the extent to which now teachers are using it for lesson planning, and by lesson planning I'm talking about a massive kind of broad range of things that has to happen before the lesson can take place.” Becky Allen
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
15
Voice AI for teacher feedback: navigating noisy classrooms and building teachers' trust
In the second episode of season two, Owen and Libby speak to Alyssa Van Camp, Head of Research at EdTech start-up TeachFX. TeachFX is an app for teachers that uses voice AI (a combination of automated speech recognition and natural language processing) to analyse classroom talk and then provide automated feedback to teachers. Owen and Libby talk to Alyssa about the challenges of transcribing and analysing classroom talk, and how TeachFX is overcoming some of these issues. Alyssa also shares some interesting reflections about how TeachFX approaches data privacy, and navigates the tension between being a tool for teacher development while responding to the needs of administrators (who are often the purchasers of the tool).
Links:
Alyssa Van Camp
More information about TeachFX
A recent study by previous Ed-Technical guest Dora Demszky looking at the impact of TeachFX in classrooms
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
14
Short episode: Google's newest research on genAI for education, and the problem with hypotenuses...
In the first short episode of season two, Owen and Libby share their warm takes on two big releases from OpenAI and Google. They reflect on the OpenAI demo video of Sal Khan’s (Khan Academy’s founder) son using their latest model (GPT-4o) as a maths tutor, and Google’s paper describing how they trained and evaluated a fine-tuned version of Gemini for educational purposes. Share your takes on these announcements with us over social media or in the comments!
Links:
Google paper on LearnLM-Tutor
Key Excerpts and Owen’s Commentary on Paper
GPT-4o (Omni) math tutoring demo on Khan Academy
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
13
Season 2: Three decades of AI in education: assessment to classroom collaboration
This season, Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel continue to speak with leading researchers, practitioners and educators on the Ed-Technical podcast series about the cutting edge of AI in education. They will break down complex AI concepts into non-technical insights to better understand what the research says and help educators sift the useful insights from the AI hype.
In the first episode of Season 2, Libby and Owen speak with Peter Foltz about his work at the intersection of cognitive science, AI, and education over the past few decades. Peter discusses his experience building an automated essay scoring system in the late 1990s, which provided students with immediate, formative feedback on their writing. Peter reflects on the potential of large language models to enhance these systems by offering more substantive feedback to help students become better thinkers, not just better writers.
They then discuss the challenges and opportunities of using speech-to-text AI in educational settings. Peter’s team uses AI to analyze student conversations during group work, providing insights into collaboration, respect, and equity within teams that are fed back to the students to reward positive contributions. Despite the technical difficulties of accurately capturing audio in noisy classrooms, Peter highlights the potential of these tools to support teachers in assessing and fostering effective teamwork among students. They also touch on the importance of co-designing these technologies with teachers, students, and parents to ensure their appropriate and acceptable use in the classroom.
Guest and resources
Peter’s personal webpage, and Google Scholar page
NSF Institute for Student-AI Teaming
Overview of student collaboration tool
Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
12
Season 1 Highlights & Lingering Questions
In this final episode of season 1, Owen and Libby pull out highlights from each episode. They reflect on some of the common themes and their lingering questions after season 1. They ask: if a model isn’t perfect, how good does it need to be to still be useful? And they share their appreciation for their many guests who are ex-teachers. Owen also shares his personal wish for more adversarial discussions in season 2, which Libby may or may not grant. We’ll be back for season 2 in May! Join us for our long-promised episodes on voice AI, as well as a possible episode live from Daytona Beach for Spring Break…

Guests and resources:
Check out our page on BOLD for all Ed-Technical episodes

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
11
Short 3: What out-there EdTech idea would you buy or sell?
This week we're doing our third short episode - a chat between Libby and Owen about the more speculative or out-there EdTech ideas they could get behind (or not). Listen to find out who’s into calculators and who’s into roboteachers…

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
10
Behind the scenes of building AI EdTech for schools
Introduction: Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the Ed-Technical podcast series about AI in education. Each episode, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like - how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about? In the second part of this episode, Libby and Owen talk again to EdTech investors from Rethink Education, Educapital, Achieve Partners, Sparkmind, Brighteye Ventures and Reach Capital. Our guests spend their days helping founders grapple with the operational realities of building an EdTech business. They discuss where they see some of the challenges and opportunities presented by AI from a business (rather than a product) perspective. They share reflections about the value of less shiny and more operational EdTech, schools’ fatigue with new product pitches that claim to use AI, and how they see the market evolving. 
Join us next time for an exciting episode about voice AI!

Guests and resources:
Ebony Brown, Rethink Education
Jonathan Denais, Educapital
Daniel Pianko, Achieve Partners
Kai Talas, Sparkmind
Ben Wirz, Brighteye Ventures
Jennifer Wu, Reach Capital

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
9
(Short) Behind the scenes: How impactful can an AI tutor be?
This week we're doing our second short episode - a behind-the-scenes chat about a recent paper of Owen’s that has been generating some interest online. In the paper, Owen and his co-authors present the impact of an AI tutor (Rori) on the maths performance of around 1,000 students. Listen in for a summary of the study, what they found (TL;DR - the results are very promising!) and some probing questions from Libby (but not too probing, as Owen’s still recovering from his trouncing by Libby in their first short episode - a debate about “hallucinations” in large language models).

Links:
Owen’s paper on the impact of an AI tutor (Rori) on maths achievement
More information about Rising Academies

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
8
Investing in the Future of Learning: Can AI Unlock New Learning Experiences?
In the seventh episode of this series, Libby and Owen talk to six EdTech investors about their views on AI in education. Guests join from Rethink Education, Educapital, Achieve Partners, Sparkmind, Brighteye Ventures and Reach Capital for a two-part episode. In this first part, investors tell us what opportunities they see for AI potentially improving the student learning experience. Join to hear their views about how AI could help develop teacher support tools, as well as longer-term possibilities such as delivering personalised learning experiences for students. Fear not, we don’t take an overly simplistic view on personalisation (a term we hear often in connection with AI), but are able to unpack a bit of the nuance here with some of our guests. Join us next time for part two, which focuses on how AI might disrupt existing EdTech business models.

Guests and resources:
Ebony Brown, Rethink Education
Jonathan Denais, Educapital
Daniel Pianko, Achieve Partners
Kai Talas, Sparkmind
Ben Wirz, Brighteye Ventures
Jennifer Wu, Reach Capital

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
7
(Short) Owen's Spicy Take on Hallucinations and Libby's Strong Disagree
This week, we're trying something a bit different and doing a short episode. The gloves come off as Libby and Owen engage in a lively debate about “hallucinations” in large language models (i.e. unexpected and hard-to-explain errors) and their impact on building educational products.
They spar on the nuances of model hallucinations, discussing the various forms and potential consequences. Owen presents a "spicy take" on the matter, advocating for the value of engagement and interaction even if it means accepting a certain level of inaccuracy. Libby, however, expresses concerns about the accuracy of information in educational settings, particularly in K-12 schools. She emphasizes the importance of the high bar set by traditional educational tools in terms of factual correctness.
Who scores an ed-technical knockout? You, the listeners, will decide!

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
6
Navigating Assessment in the Age of AI: What Counts as Cheating?
In the sixth episode of this series, Libby and Owen talk to Matt Glanville, Director of Assessment at the International Baccalaureate (IB). The IB works with over 5,000 schools in 160 countries to offer a range of curriculum programmes and qualifications. Early last year they shared their progressive stance on AI – rather than banning AI tools to try to prevent cheating, they recognize that these tools will become part of everyday life, and therefore students need support to use them ethically, transparently, and safely. Matt (another ex-teacher ❤) talks about IB’s position and how they deal with cheating. We discuss what’s new about cheating in a post-ChatGPT world and what isn’t (spoiler – malicious and determined cheating isn’t a new problem for schools). Matt poses some provocative questions about how we define cheating. Is it cheating to use large language models (LLMs) to give you ideas for your essay? What about having an LLM write an essay using the ideas you give it? We also touch on the implications for formal assessments. Will oral exams make a comeback? Most importantly, we introduce Owen to Mills & Boon.

Guests and resources:
The International Baccalaureate’s AI resources and content
The International Baccalaureate’s March 2023 statement about ChatGPT and AI
Matt talking about IB’s position on AI at the ACES annual meeting

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
5
The quest to save teachers’ Sunday evenings with AI
Introduction: Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the Ed-Technical podcast series about AI in education. Each episode, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like - how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about?
In the fifth episode of this series, Libby and Owen talk to Sian Cooke about her quest to save teachers’ Sunday evenings! Sian is Head of the Department for Education’s Emerging Tech Unit in England, which is doing a lot of thinking about what generative AI means for the whole education system. Come and hear what it’s like responding to generative AI from a policymaking perspective. Sian talks about where she sees the immediate low-hanging AI fruit for teachers, and why she thinks generative AI tools are particularly promising for low-stakes, high-frequency tasks. We often hear that there are no silver bullets in education, but Sian thinks there might be some bronze bullets that we shouldn’t overlook in the excitement about new technologies.
Sian used a lot of technology in the classroom when she was a teacher, but from a policy perspective, she started working on education technology during the COVID-19 pandemic, providing internet access and laptops to children who couldn’t get online for remote learning. Since 2020 she’s been working on the DfE’s strategy for technology in schools, and more recently she’s been heading up the Emerging Tech Unit in the digital strategy division.
Guests and resources:
Sian Cooke’s LinkedIn
Department for Education’s policy paper on generative AI
Technology in schools survey report mentioned by Sian
Generative AI call for evidence summary of responses mentioned by Sian

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
4
Helping Teachers One AI Experiment at a Time
In the fourth episode of this series, Libby and Owen talk to John Roberts, co-founder and Director of Product and Engineering at Oak National Academy. Oak was originally created as an online classroom in 2020 as a rapid response to the coronavirus outbreak, and has since delivered over 150 million lessons. It has now become a new national body supporting curriculum and providing free resources to teachers of 4- to 16-year-olds in England.
Oak National Academy recently launched a couple of AI experiments (a quiz designer and a lesson planner) designed to help teachers save time. John talks us through the experience of starting Oak during the pandemic, what potential he sees for AI to help save teachers’ time, his thoughts on AI generating quality content, and how they’re optimising base large language models for education (and find out more about retrieval-augmented generation, or RAG).
There’s also some good bonus content from John (an ex-physics teacher) on what a negative displacement in a longitudinal wave indicates...

Guests and resources:
TES profile on John Roberts
Information on Oak’s AI experiments
Information about Oak’s approach to open government licensing

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
3
The Inside Story Behind Khan Academy's AI Tutor Khanmigo
Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the Ed-Technical podcast series about AI in education. Each episode, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like - how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about?
In the third episode of this new series, Libby and Owen talk to Kristen DiCerbo, the Chief Learning Officer at Khan Academy. Khan Academy is best known for its high-quality instructional videos, often led by founder Sal Khan himself. It is used by millions of students around the world.
In March this year, Khan Academy launched Khanmigo, one of the first AI-based tutors and teaching assistants to use GPT-4 (one of the most powerful large language models). Kristen talks about what it was like to get a call from OpenAI and an early preview of GPT-4 last year. She explains how they built Khanmigo (spoiler – it’s not as complicated as you might think!), how it embeds principles from the learning sciences, and where she thinks this technology is going to lead us over the next few years. Join us for this exciting episode as we speak to one of the most experienced and inspiring innovators in the field about how to actually build an AI chatbot for education!
Guests and resources:
Kristen DiCerbo
Introduction to Khanmigo
OpenAI Khanmigo announcement
Video about how to use Khanmigo

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
-
2
Is AI the answer to quicker, better feedback for teachers?
Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the Ed-Technical podcast series about AI in education. Each episode, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like - how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about?
In this episode, Libby and Owen talk to Dr. Dora Demszky, Assistant Professor in Education Data Science at Stanford University. Dora’s background is in linguistics and natural language processing. Her work focuses on using natural language processing tools to support educators. Dora has been working with large language models for a while, since before the recent explosion of interest. So she shares what it’s been like to move from having to explain what a large language model is, to having the world obsess about them! She also talks about the potential and limitations of using large language models to support educators with activities like teacher coaching and writing feedback to students.
Next episode Libby and Owen speak to Dr. Paul Atherton, founder of Fab Inc and Fab Data. Like Dora, Paul is also interested in building AI-driven tools for teachers, but he focuses on doing so for teachers in highly resource-constrained places like Sierra Leone.

Guests and resources:
Dr. Dora Demszky
Dora’s paper on using ChatGPT for teacher coaching
Coverage of Dora’s research showing positive benefits of providing automated feedback to instructors

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of EdTechnical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.