It's 2026, and We're Still Talking Evals

EPISODE · Apr 21, 2026 · 40 MIN

from MLOps.community · host Demetrios

Maggie Konstanty is an AI Product Manager at Prosus, one of the world's largest consumer internet companies, where she builds and evaluates AI agents for food ordering and ecommerce at scale. She's been inside the messy reality of LLM evaluation longer than most, and her take is unfiltered.

It's 2026, and We're Still Talking Evals // MLOps Podcast #372 with Maggie Konstanty, AI Product Manager at Prosus

🧪 Why accuracy metrics lie: Maggie breaks down why "95% accurate" tells you almost nothing about whether your agent is actually working in the real world, and what to measure instead.
🏗️ Pre-ship vs. production evals: your eval suite before launch will not survive first contact with real users. Maggie explains the structural disconnect and how to close the gap.
👻 The silent failure, user drop-off: users who are unhappy don't complain; they just leave. Discover why drop-off analytics are one of the most underutilized eval signals in production (see the sketch after these notes).
🎯 Instruction to fail, the 20-evaluator trap: setting up 20 types of evaluators not connected to your product goal is a fast path to wasted time. How to design evals that are tied to real outcomes.
🍽️ The "surprise me" edge case: a real example from Prosus's food-ordering agent and what it reveals about how users actually behave vs. how PMs imagine they do.
🤖 LLM-as-a-judge, the limits: why Maggie doesn't lean on LLM-as-a-judge for accuracy measurement, and what approaches she uses instead for production-grade evaluation.
🛠️ Arize/Phoenix & eval tooling critique: a candid take on the current state of eval platforms, why she spent a whole day fighting the UI, and why mature teams often go back to custom code.
🧬 Evals as team DNA: evals aren't a launch checklist. Maggie makes the case that they need to be a constant practice embedded in team culture, and why alignment on "what good looks like" is harder than any technical implementation.
🔢 When to stop optimizing: what happens when your eval score approaches 100%, and how to know when it's time to shift focus to a different metric or flow.
💬 Red teaming with incentives: a fun tactic, running adversarial eval sessions where engineers compete to break your agent for an Amazon gift card.

This is required watching for AI PMs, ML engineers, and applied AI teams who have moved past "getting evals set up" and are now struggling to make them actually matter.

---

🔗 Links & Resources
Maggie Konstanty on LinkedIn: https://www.linkedin.com/in/maggie-konstanty
Prosus: https://www.prosus.com
MLOps.community: https://mlops.community
Arize AI / Phoenix (mentioned): https://arize.com / https://phoenix.arize.com
MLOps.community Slack: https://go.mlops.community/slack

⏱️ Timestamps
[00:00] Evaluations and User Alignment
[00:18] Eval Lifecycle in Production
[06:05] LLM Accuracy and Judging
[15:30] Evals vs. Tests in AI
[22:39] Profanity as a Frustration Signal
[29:23] Impact-Weighted Performance
[32:22] Eval Tooling Pros and Cons
[38:10] Build vs. Buy Dilemma
[39:35] Wrap-up
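One of the points above, drop-off as an eval signal, lends itself to a concrete illustration. Below is a minimal sketch of how per-stage drop-off might be computed from session logs; the stage names, event schema, and funnel are hypothetical, not Prosus's actual pipeline.

```python
# A minimal sketch of drop-off as an eval signal, assuming a hypothetical
# event log in which each session records the ordered funnel stages it reached.
from collections import Counter

# Hypothetical funnel for a food-ordering agent (not Prosus's real stages).
STAGES = ["greeting", "intent_captured", "options_shown", "order_placed"]

def drop_off_rates(sessions: list[list[str]]) -> dict[str, float]:
    """For each stage, the fraction of sessions that reached it but never
    reached the next stage -- i.e. users who left without complaining."""
    reached = Counter()
    for session in sessions:
        for stage in session:
            reached[stage] += 1
    return {
        prev: 1 - reached[nxt] / reached[prev]
        for prev, nxt in zip(STAGES, STAGES[1:])
        if reached[prev]
    }

# Example: one of three users silently leaves after seeing the options.
sessions = [
    ["greeting", "intent_captured", "options_shown", "order_placed"],
    ["greeting", "intent_captured", "options_shown"],
    ["greeting", "intent_captured", "options_shown", "order_placed"],
]
print(drop_off_rates(sessions))
# {'greeting': 0.0, 'intent_captured': 0.0, 'options_shown': 0.333...}
```

A spike at a single stage is exactly the kind of silent failure a "95% accurate" score will never surface.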
