PodParley
MLOps.community

PODCAST · technology

Relaxed conversations about getting AI into production, whatever shape that may take (agentic, traditional ML, LLMs, vibes, etc.)

  1. 514

    Voice Agent Use Cases

    This episode is brought to you by Hyperbolic and the MLflow team. Check out more information at hyperbolic.ai and MLflow.org.

    What does it actually take to build voice AI at billion-interaction scale? This episode features an ex-Amazon voice AI engineer who built customer support systems handling 2 billion+ interactions — now working on next-gen voice agent platforms. Anurag digs deep into the real engineering tradeoffs, design patterns, and use cases that separate production-grade voice agents from demos.

    Voice Agent Use Cases // MLOps Podcast #372 with Anurag Beniwal, Member of the Technical Staff at ElevenLabs

    🎙️ Topics covered:
    🔹 Cascaded vs. speech-to-speech — Why cascaded systems still win in production, and how to make them feel natural without sacrificing control
    🔹 Latency masking — Foreground/background model architecture and how to buy yourself time while deep retrieval runs
    🔹 Constellation of models — Using Haiku for tool calling, fine-tuned smaller models for response generation, and why "one model for everything" breaks at scale
    🔹 Turn-taking & ASR challenges — Why voice is harder than chat: accents, noise, silence detection, and domain-specific fine-tuning
    🔹 Level 1 vs Level 2 customer support — Why today's agents max out at Level 1 and what it takes to capture Level 2 expert judgment
    🔹 Inbound vs. outbound sales agents — Where voice agents are already winning, and why inbound lead qualification beats cold outbound
    🔹 Booking, reservations & concierge — The clearest near-term wins for voice agents across hospitality, home services, and SMBs
    🔹 Continual learning from natural language feedback — How to build agents that improve from real operator feedback without ML expertise
    🔹 Conversational TTS — Why passing full conversation history to your TTS model changes everything for tone consistency
    🔹 User tiers for voice platforms — Non-technical business owners vs. developers vs. enterprise: why one interface doesn't fit all

    If you're building production voice agents, evaluating voice AI vendors, or scaling AI-first customer support, this episode is packed with hard-won lessons from someone who's done it at Amazon scale.

    🔗 Links & Resources:
    MLOps.community: https://mlops.community
    Google Scholar: https://scholar.google.com/citations?user=g_QB5WgAAAAJ&hl=en&o
    Amazon Science page: https://www.amazon.science/author/anurag-beniwal
    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    ⏱️ Timestamps:
    [00:00] Cascaded Systems Control Challenge
    [05:35] Voice vs Chat Complexity
    [14:16] MLflow's open source platform
    [15:03] AI Model Constellations
    [23:00] Model Constellations Use Cases
    [31:40] Voice vs Text Context
    [33:54] Voice as Thought Capture
    [42:11] Cascaded vs Speech-to-Speech Debate
    [50:02] Wrap up
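    The latency-masking pattern described above (a fast foreground model speaks a filler line while slower retrieval runs in the background) can be sketched in a few lines of async code. This is a toy illustration, not code from the episode: the model calls are stand-in `asyncio.sleep` delays, and all function names are invented.

```python
import asyncio

async def fast_acknowledgement(query: str) -> str:
    # Foreground model: cheap, low-latency filler that keeps the caller engaged.
    await asyncio.sleep(0.05)  # stand-in for a small, fast model call
    return "Sure, let me check that for you."

async def deep_retrieval(query: str) -> str:
    # Background path: slow retrieval plus a large-model answer.
    await asyncio.sleep(0.5)  # stand-in for RAG + LLM latency
    return f"Here is what I found about {query!r}."

async def handle_turn(query: str) -> list[str]:
    """Speak a filler line immediately while the real answer is computed."""
    background = asyncio.create_task(deep_retrieval(query))
    spoken = [await fast_acknowledgement(query)]  # masks the wait
    spoken.append(await background)               # real answer when ready
    return spoken
```

    The key design point is that the background task is launched before the filler is spoken, so the slow path and the masking utterance overlap instead of running back to back.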

  2. 513

    The Creator of Superpowers: Why Real Agentic Engineering Beats Vibe Coding

    Jesse Vincent is the Founder & CEO of Prime Radiant and creator of Superpowers — the most-used Claude Code plugin in the world. He built the first agentic software development methodology from scratch while managing MIT interns in the early 2000s, and hasn't written a line of code manually since October.

    The Creator of Superpowers: Why Real Agentic Engineering Beats Vibe Coding // MLOps Podcast #373 with Jesse Vincent, Founder & CEO of Prime Radiant

    In this conversation, Jesse walks Demetrios through the full Superpowers system: why he thinks most developers are still approaching agentic coding wrong, how he designs skills that force LLMs to stop rationalizing and actually follow rules, and what he's building next at Prime Radiant — including Green Field, an unreleased tool for reverse-engineering legacy codebases into specs. This one is for developers who want to go beyond "vibe coding" and build AI-assisted workflows that actually scale.

    🔧 Topics Covered
    🧠 The Superpowers Methodology — How the brainstorming skill extracts what you actually want before you hand work to an agent, and why most developers skip this step
    📋 Spec-Driven Development & Plan Files — Why Jesse insists on TDD, DRY, and YAGNI for every agentic task, and how planning skills generate per-task context blocks agents can actually execute on
    🐛 Debugging with Agents — Jesse's systematic approach to root cause analysis, reproduction cases, and the 30 years of debugging instinct he's baked into a skill
    🔄 Pressure Testing LLM Skills — How Claude fires up sub-agents and stress-tests its own rules to catch rationalization before it shows up in production
    🛠️ Clearance IDE — Jesse's new Markdown-native development environment built for humans working alongside AI, with a history pane for file navigation
    📦 Green Field (Unreleased) — A toolset for turning old codebases or built products into clean specs — not yet public but dropping soon from Prime Radiant
    🧑‍💼 Management as the Magic Trick — Why the real unlock of tools like Superpowers is that they make every developer a manager, and why that transition is hard the first time
    ⚖️ Software Ethics in the Agent Era — Reverse engineering, license washing, open source cloning, and whether the value of software itself is collapsing

    🔗 Links & Resources
    Prime Radiant: https://prime-radiant.com
    Superpowers on GitHub: https://github.com/prime-radiant-inc
    Clearance IDE: https://github.com/prime-radiant-inc (check repo)
    MLOps.community Slack: https://go.mlops.community/slack
    MLOps.community website: https://mlops.community

    ⏱️ Timestamps
    [00:00] Greenfield Toolset Insights
    [00:27] Superpowers Kit Evangelism
    [08:06] Hyperbolic's GPU Cloud
    [17:48] Debugging Skill Creation
    [22:12] Skill Extraction Strategy
    [31:15] Smallest Harness
    [41:06] Software supply chains
    [48:56] Visual Precision Challenges
    [54:09] Creative Feedback Loops
    [1:04:24] MLflow's Gen AI
    [1:05:55] Wrap-up

  3. 512

    It's 2026, and We're Still Talking Evals

    Maggie Konstanty is an AI Product Manager at Prosus, one of the world's largest consumer internet companies, where she builds and evaluates AI agents for food ordering and ecommerce at scale. She's been inside the messy reality of LLM evaluation longer than most — and her take is unfiltered.

    It's 2026, and We're Still Talking Evals // MLOps Podcast #372 with Maggie Konstanty, AI Product Manager at Prosus

    🧪 Why accuracy metrics lie — Maggie breaks down why "95% accurate" tells you almost nothing about whether your agent is actually working in the real world, and what to measure instead.
    🏗️ Pre-ship vs. production evals — Your eval suite before launch will not survive first contact with real users. Maggie explains the structural disconnect and how to close the gap.
    👻 The silent failure: user drop-off — Users who are unhappy don't complain — they just leave. Discover why drop-off analytics are one of the most underutilized eval signals in production.
    🎯 Instruction to fail: the 20-evaluator trap — Setting up 20 types of evaluators not connected to your product goal is a fast path to wasted time. How to design evals that are tied to real outcomes.
    🍽️ The "surprise me" edge case — A real example from Prosus's food ordering agent and what it reveals about how users actually behave vs. how PMs imagine they do.
    🤖 LLM-as-a-judge: the limits — Why Maggie doesn't lean on LLM-as-a-judge for accuracy measurement, and what approaches she uses instead for production-grade evaluation.
    🛠️ Arize/Phoenix & eval tooling critique — A candid take on the current state of eval platforms, why she spent a whole day fighting the UI, and why mature teams often go back to custom code.
    🧬 Eval as team DNA — Evals aren't a launch checklist. Maggie makes the case that they need to be a constant practice embedded in team culture — and why alignment on "what good looks like" is harder than any technical implementation.
    🔢 When to stop optimizing — What happens when your eval score approaches 100%, and how to know when it's time to shift focus to a different metric or flow.
    💬 Red teaming with incentives — A fun tactic: running adversarial eval sessions where engineers compete to break your agent for an Amazon gift card.

    This is required watching for AI PMs, ML engineers, and applied AI teams who have moved past "getting evals set up" and are now struggling with making them actually matter.

    🔗 Links & Resources
    Maggie Konstanty on LinkedIn: https://www.linkedin.com/in/maggie-konstanty
    Prosus: https://www.prosus.com
    MLOps.community: https://mlops.community
    Arize AI / Phoenix (mentioned): https://arize.com / https://phoenix.arize.com
    MLOps.community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps
    [00:00] Evaluations and User Alignment
    [00:18] Eval Lifecycle in Production
    [06:05] LLM Accuracy and Judging
    [15:30] Evals vs Tests in AI
    [22:39] Profanity as Frustration Signal
    [29:23] Impact-weighted performance
    [32:22] Eval Tooling Pros and Cons
    [38:10] Build vs Buy Dilemma
    [39:35] Wrap up
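    The drop-off signal discussed above is easy to operationalize: treat any session that never reaches a terminal success step as abandoned, and count where users bail. A minimal sketch; the step names (like `order_placed`) are invented for illustration and are not from Prosus:

```python
from collections import Counter

def drop_off_rate(sessions):
    """Share of sessions abandoned before reaching a terminal success step.

    Each session is an ordered list of step names; 'order_placed' marks
    success here (a made-up label for illustration).
    """
    if not sessions:
        return 0.0
    abandoned = sum(1 for s in sessions if not s or s[-1] != "order_placed")
    return abandoned / len(sessions)

def last_step_before_drop(sessions):
    """Where do users bail? Count the final step of each abandoned session."""
    return Counter(s[-1] for s in sessions if s and s[-1] != "order_placed")

sessions = [
    ["greet", "search", "order_placed"],
    ["greet", "search"],              # dropped after search
    ["greet", "clarify", "clarify"],  # stuck in a clarification loop
]
print(drop_off_rate(sessions))        # 2 of 3 sessions abandoned
print(last_step_before_drop(sessions))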

  4. 511

    Why Agents are Driving Software Development to the Cloud

    This episode is brought to you by Hyperbolic and the MLflow team. Check out more information at hyperbolic.ai and MLflow.org.

    Why AI Coding Agents Are Moving to the Cloud — With Zach Lloyd, CEO of Warp

    Zach Lloyd is the founder and CEO of Warp, the AI-native terminal and agentic development platform trusted by over a million developers. Before Warp, Zach was a product lead at Google on Google Docs — giving him a uniquely deep intuition for what it means to build truly collaborative developer tools at scale.

    Why Agents are Driving Software Development to the Cloud // MLOps Podcast #371 with Zach Lloyd, CEO of Warp

    What we cover:
    🏗️ Why agents belong in the cloud, not local sandboxes — Zach breaks down why the "set up a local dev box for your agent" approach is fundamentally flawed and what cloud-native agent execution actually looks like in practice.
    🚀 GitHub is losing collaborative code review — One of the episode's sharpest takes: the hero features of GitHub, like collaborative code review, are migrating into agent workbenches. Zach explains why this shift is structural, not cyclical.
    📱 "Just-in-time apps" are replacing SaaS — The era of long-lived, learn-to-use-it software may be ending. Zach argues that agents will generate ephemeral, purpose-built interfaces on demand — and why most current app categories are at risk.
    🤖 Introducing Oz — Warp's cloud orchestration platform — A first look at how Oz works, how Demetrios is already using it to automate podcast production, and what multi-agent orchestration looks like in a real team environment.
    👁️ Agent observability and why it matters — Debugging, compliance, context management, and handoff/steering: Zach outlines the pillars every engineering team needs before trusting agents with production work.
    🔐 Agent chaos is real — access control for AI — Why giving agents too much context is just as dangerous as giving them too little, and how Warp thinks about scoped agent permissions as you scale.
    📦 SaaS for agents will look nothing like SaaS for humans — The 25-year investment in human-friendly UI is irrelevant for agents. Zach explains what the new infrastructure layer for AI workers will actually need.
    ⚡ Open-weight models will commoditize the coding agent space — With Nvidia investing $2B in open-weight models, Zach believes the current cost advantage that frontier labs hold is temporary — and how Warp is positioning for that world.
    🧩 Multi-agent orchestration patterns — Parallel agents, agent-to-agent handoffs, and why there's no single "right" pattern yet. Warp's Oz platform is being built for flexibility, not prescription.

    This episode is essential for engineering leaders, platform engineers, and any developer trying to understand where their daily workflow is headed in the next 18 months.

    🔗 Links & Resources:
    Warp: https://www.warp.dev
    Warp Oz platform: https://oz.dev
    Zach Lloyd on X/Twitter: https://x.com/zachlloyd
    MLOps Community: https://mlops.community
    MLOps Community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps
    [00:00] Agentic Coding Review Shift
    [00:29] Warp Collaboration vs Sandboxes
    [05:22] Continuous Co-Creation in Teams
    [07:00] Hyperbolic's GPU Cloud
    [07:56] Skill Governance Framework
    [14:41] Agents vs Browsers Analogy
    [21:31] PR Provenance in Warp
    [27:58] Agent System Commandments
    [37:44] Harness vs ADE
    [42:03] Adversarial Review Technique
    [45:26] GitHub Limitations for Agents
    [49:07] MLflow's GenAI
    [50:06] Wrap up

  5. 510

    The Modern Software Engineer

    This episode is brought to you by the MLflow team. Check out more information at MLflow.org.

    Mihail Eric is Head of AI at Monaco and Adjunct Lecturer at Stanford University, where he teaches CS146S: "The Modern Software Developer" — the first course in the world dedicated to how AI is transforming every stage of the software development lifecycle. With 12+ years building production AI systems at Amazon Alexa, Storia AI (YC S24), and early-stage startups, Mihail has one of the most grounded, practitioner-level takes on what it actually means to be a software engineer in 2026.

    The Modern Software Engineer // MLOps Podcast #370 with Mihail Eric, Head of AI at Monaco

    🧠 What the modern software engineer actually looks like — why the job description has fundamentally shifted from writing code to designing systems and directing agents
    ⚙️ Agents require more thinking, not less — why the engineers getting the most out of coding agents are the ones who invest the most upfront in architecture, planning, and codebase structure
    🎓 Inside Stanford's "Modern Software Developer" course — what Mihail teaches in the first CS course in the world focused entirely on AI-transformed software development
    🏗️ From writing code to designing systems — how the best developers are repositioning themselves as architects of agentic workflows rather than line-by-line coders
    🔁 The Build System: how to run agents at scale — practical lessons from building multi-agent pipelines, parallel subagent batches, and automated retrospectives
    📉 What junior engineers should actually focus on — the skills that remain irreplaceable and the paths that still produce strong software engineers in an AI-first world
    🚀 Building Monaco's AI-native revenue engine — what it's like building AI infrastructure for a fast-moving $35M-funded startup disrupting enterprise CRM
    🎯 How to ace AI engineering interviews — Mihail's framework for demonstrating real AI engineering competence beyond prompt engineering basics

    Essential watching for software engineers, ML practitioners, and engineering managers who want an honest, practitioner-level view of where the profession is going — from someone who's both teaching it at Stanford and building it in production.

    🔗 Links & Resources
    Mihail Eric on LinkedIn: https://www.linkedin.com/in/mihaileric/
    Mihail's website: https://www.mihaileric.com
    Stanford course "The Modern Software Developer": https://themodernsoftware.dev/
    Maven course — AI Software Development: From First Prompt to Production Code: https://maven.com/the-modern-software-developer/ai-course
    Free AI Engineer interview prep course: https://course.aiengineermastery.com/
    Monaco (AI-native revenue engine): https://monaco.com
    MLOps.community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps
    00:00 Intro — Mihail Eric & Monaco
    04:00 What has actually changed for software engineers in 2026
    09:00 Inside Stanford's "Modern Software Developer" course
    15:00 Why agents require more human thinking, not less
    21:00 From writing code to designing systems — the architect mindset
    27:00 The Build System: running agents at scale in production
    33:00 What junior engineers should focus on right now
    39:00 Building AI infrastructure at Monaco
    44:00 How to demonstrate real AI engineering competence
    49:00 Skills that will remain irreplaceable
    52:00 Rapid fire/closing thoughts

  6. 509

    We Cut LLM Latency by 70% in Production

    Maher Hanafi is an engineering leader who went from zero AI experience to self-hosting LLMs at enterprise scale — managing GPU costs, optimizing inference with TensorRT LLM, and building an AI platform for HR tech. In this conversation, he breaks down exactly how his team cut latency by 70%, reduced GPU spend through counterintuitive scaling strategies, and navigated the messy reality of taking AI from proof-of-concept to production.

    How We Cut LLM Latency 70% With TensorRT in Production // MLOps Podcast #369 with Maher Hanafi, SVP of Engineering at Betterworks

    Key topics covered:
    The AI Iceberg — Why the invisible work behind AI (performance, latency, throughput, cost, accuracy) is harder than building the features themselves
    GPU Cost Optimization — How upgrading to more expensive GPUs actually saved money by reducing total runtime hours
    TensorRT LLM Deep Dive — Rewiring neural networks to match GPU architecture for 50-70% latency reduction
    Cold Start Solutions — Using AWS FSx, baking models into container images, and cutting minutes off spin-up times
    KV Cache & In-Flight Batching — Why using one model per GPU with maximum KV cache beats cramming multiple models together
    Scheduled & Dynamic Scaling — Pattern-based scaling for HR tech workloads (nights, weekends, end-of-quarter spikes)
    Verticalized AI Platform — Building horizontal AI infrastructure that serves multiple HR product verticals
    AI Engineering Lab — How junior vs. senior engineers adopted AI coding tools differently, and the cultural shift that followed
    Agentic Coding in Practice — Navigating AI coding agent costs, quality control, and redefining the SDLC
    Chinese Models & Compliance — Why enterprise customers block DeepSeek/Qwen and the geopolitics of model training data

    This episode is for engineering leaders building AI in production, MLOps engineers optimizing GPU infrastructure, and anyone navigating the gap between AI demos and enterprise-scale deployment.

    Links & Resources:
    TensorRT LLM: https://github.com/NVIDIA/TensorRT-LLM
    NVIDIA Run:ai Model Streamer (cold start optimization): https://developer.nvidia.com/blog/reducing-cold-start-latency-for-llm-inference-with-nvidia-runai-model-streamer/
    vLLM vs TensorRT-LLM comparison: https://northflank.com/blog/vllm-vs-tensorrt-llm-and-how-to-run-them

    Timestamps:
    [00:00] Optimizing GPU Usage and Latency
    [00:21] Learning AI as Leadership
    [04:34] AI Cost Centers
    [13:56] Throughput and Infrastructure Efficiency
    [18:10] Scaling and Unit Economics
    [24:14] Championing AI ROI
    [36:11] Queue to Value Engine
    [41:30] Failed Product Features
    [46:12] Agentic Engineering Costs
    [58:49] AI Self-Hosting in Engineering
    [1:04:40] Wrap up
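    The scheduled-scaling idea above (pattern-based replica counts for nights, weekends, and end-of-quarter spikes) reduces to a small policy function. A hypothetical sketch; the thresholds and replica counts are invented for illustration, not Betterworks' actual numbers:

```python
from datetime import datetime

# Illustrative schedule for an HR-tech workload: quiet nights and weekends,
# a spike at end of quarter. All numbers are made up, not from the episode.
BASE_REPLICAS = 2
PEAK_REPLICAS = 8
QUARTER_END_MONTHS = {3, 6, 9, 12}

def desired_replicas(now: datetime) -> int:
    """Pick a replica count from the time of day, weekday, and quarter phase."""
    if now.weekday() >= 5 or not (8 <= now.hour < 20):
        return BASE_REPLICAS              # nights and weekends
    if now.month in QUARTER_END_MONTHS and now.day >= 25:
        return PEAK_REPLICAS * 2          # end-of-quarter review spike
    return PEAK_REPLICAS                  # normal business hours

print(desired_replicas(datetime(2026, 3, 30, 14)))  # weekday at quarter end
```

    In practice a controller would evaluate this on a timer and reconcile the autoscaler toward the returned count, with dynamic (load-based) scaling layered on top for surprises the schedule misses.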

  7. 508

    Getting Humans Out of the Way: How to Work with Teams of Agents

    Rob Ennals is the creator of Broomy, an open-source IDE designed for working effectively with many agents in parallel. He previously worked at Meta, Quora, Google Search, and Intel Research. He has a PhD in Computer Science from the University of Cambridge.

    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Most people cripple coding agents by micromanaging them — reviewing every step and becoming the bottleneck. The shift isn't to better supervise agents, but to design systems where they work well on their own: parallelized, self-validating, and guided by strong processes. Done right, you don't lose control — you gain leverage. Like paving roads for cars, the real unlock is reshaping the environment so AI can move fast.

    // Related Links
    Website: https://robennals.org/
    https://broomy.org/
    https://learnai.robennals.org/ (not yet announced, but should be by the time of the podcast)

    ✌️ Connect With Us ✌️
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Rob on LinkedIn: /robennals

    Timestamps:
    [00:00] Agent Optimization Strategies
    [00:21] Visual Regression Explanation
    [05:35] Automated QA for Videos
    [13:05] Verification System Design
    [19:48] Agent Selection Strategies
    [30:48] Parallel Agent Management
    [35:30] Containerization and Cost Estimation
    [42:48] Shifting to Agent Orchestration
    [50:10] Wrap up

  8. 507

    Fixing GPU Starvation in Large-Scale Distributed Training

    Kashish Mittal is a Staff Software Engineer at Uber, working on large-scale distributed systems and core backend infrastructure.

    Fixing GPU Starvation in Large-Scale Distributed Training // MLOps Podcast #367 with Kashish Mittal, Staff Software Engineer at Uber

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Kashish zooms out to discuss a universal industry pattern: how infrastructure — specifically data loading — is almost always the hidden constraint for ML scaling. The conversation dives deep into a recent architectural war story. Kashish walks through the full-stack profiling and detective work required to solve a massive GPU starvation bottleneck. By redesigning the Petastorm caching layer to bypass CPU transformation walls and uncovering hidden distributed race conditions, his team boosted GPU utilization to 60%+ and cut training time by 80%. Kashish also shares his philosophy on the fundamental trade-offs between latency and efficiency in GPU serving.

    // Bio
    Kashish Mittal is a Staff Software Engineer at Uber, where he architects the hyperscale machine learning infrastructure that powers Uber's core mobility and delivery marketplaces. Prior to Uber, Kashish spent nearly a decade at Google building highly scalable, low-latency distributed ML systems for flagship products, including YouTube Ads and Core Search Ranking. His engineering expertise lies at the intersection of distributed systems and AI — specifically focusing on large-scale data processing, eliminating critical I/O bottlenecks, and maximizing GPU efficiency for petabyte-scale training pipelines. When he isn't hunting down distributed race conditions, he is a passionate advocate for open-source architecture and building reproducible, high-throughput ML systems.

    // Related Links
    Website: https://www.uber.com/
    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy: https://www.youtube.com/watch?v=ie1M8p-SVfM

    ✌️ Connect With Us ✌️
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Kashish on LinkedIn: /kashishmittal

    Timestamps:
    [00:00] Local dataset caching
    [00:30] Engineers Evolving Roles
    [04:44] GPU Resource Management
    [10:21] GPU Utilization Issues
    [21:49] More GPU War Stories
    [32:12] Model Serving Issues
    [39:58] Reflective Learning in Coding
    [43:23] Workflow and Reflective Skills
    [52:30] Wrap up
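    The data-loading bottleneck described above is usually attacked by overlapping I/O with compute: a producer thread stages batches into a bounded queue so the accelerator never idles waiting on reads. A minimal single-process sketch, where the sleeps stand in for real disk or network reads; this is not Uber's Petastorm code:

```python
import queue
import threading
import time

def loader(batches, q):
    # Producer thread: stages batches ahead of time so the GPU step
    # never waits on I/O. The sleep is a stand-in for a disk/network read.
    for b in batches:
        time.sleep(0.001)
        q.put(b)
    q.put(None)  # sentinel: no more data

def train(batches, prefetch=4):
    """Consume batches while the loader thread keeps the queue topped up."""
    q = queue.Queue(maxsize=prefetch)  # bounded: caps memory held in flight
    threading.Thread(target=loader, args=(batches, q), daemon=True).start()
    seen = []
    while (batch := q.get()) is not None:
        seen.append(batch)             # stand-in for the GPU training step
    return seen

print(train(list(range(8))))
```

    The bounded `maxsize` is the important knob: it fixes how far the loader may run ahead, trading host memory for tolerance to I/O jitter, which is the same trade real prefetching data loaders make.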

  9. 506

    Spec Driven Development, Workflows, and the Recent Coding Agent Conference

    Jens Bodal is a Senior Software Engineer II working independently, focusing on backend systems, software architecture, and building scalable solutions across client projects.

    This One Shift Makes Developers Obsolete // MLOps Podcast #366 with Jens Bodal, Senior Software Engineer II, Independent

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    AI agents are shifting the role of developers from writing code to defining intent. This conversation explores why specs are becoming more important than implementation, what breaks in real-world systems, and how engineering teams need to rethink workflows in an agent-driven world.

    // Bio
    Jens Bodal is a senior software engineer based in Edmonds, Washington, with nine years of experience building developer tooling, internal platforms, and web infrastructure. He spent seven years as an SDE II at Amazon, working on teams including Amazon Games Studio and the AWS Events Management Platform. His work has focused on developer tooling, CI/CD systems, testing infrastructure, and improving the developer experience for teams operating production services. He is particularly interested in developer experience and the growing ecosystem of local tools that help engineers build and run AI systems on infrastructure they control.

    // Related Links
    Website: https://bodal.dev
    https://github.com/jensbodal
    https://www.youtube.com/watch?v=Yp7LYdbOuwE

    ✌️ Connect With Us ✌️
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Jens on LinkedIn: /jensbodal

    Timestamps:
    [00:00] Specification vs Code
    [00:25] Conference Realizations and Insights
    [09:01] Agents and Orchestration Insights
    [10:39] Coding Agents and Talent
    [18:10] Sub-agent Design Concepts
    [25:18] Evaling on Vibes
    [33:23] Walled Garden and Proxies
    [41:48] Spec-Driven Development Limitations
    [46:56] Code Ownership vs Authorship
    [50:49] Engineering Ownership and PMs
    [53:47] Skill Creation and Iteration
    [58:40] Wrap up

  10. 505

    Operationalizing AI Agents: From Experimentation to Production // Databricks Roundtable

    Databricks Roundtable episode: Operationalizing AI Agents: From Experimentation to Production.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    Big shout-out to Databricks for the collaboration!

    // Abstract
    This panel discusses the real-world challenges of deploying AI agents at scale. The conversation explores technical and operational barriers that slow production adoption, including reliability, cost, governance, and security. The panelists also examine how LLMOps, AIOps, and AgentOps differ from traditional MLOps, and why new approaches are required for generative and agent-based systems. Finally, experts define success criteria for GenAI frameworks, with a focus on robust evaluation, observability, and continuous monitoring across development and staging environments.

    // Bio
    Samraj Moorjani
    Samraj is a software engineer working on the Agent Quality team. Previously, Samraj worked at Meta on ads/product classification research and AppLovin on MLOps. Samraj graduated with a BS+MS in Computer Science from UIUC, advised by Professor Hari Sundaram, where he worked on controllable natural language generation to produce appealing, interpretable science to combat the spread of misinformation. He also worked with Professor Wen-mei Hwu on accelerating LLM inference through extreme sparsification.

    Apurva Misra
    Apurva is an AI Consultant at Sentick, focusing on assisting startups with their AI strategy and building solutions. She leverages her extensive experience in machine learning and a Master's degree from the University of Waterloo, where her research bridged driving and machine learning, to offer valuable insights. Apurva's keen interest in the startup world fuels her passion for helping emerging companies incorporate AI effectively. In her free time, she is learning Spanish, and she also enjoys exploring hidden gem eateries, always eager to hear about new favourite spots!

    Ben Epstein
    Ben was the machine learning lead for Splice Machine, leading the development of their MLOps platform and Feature Store. He is now the Co-founder and CTO at GrottoAI, focused on supercharging multifamily teams and reducing vacancy loss with AI-powered guidance for leasing and renewals. Ben also works as an adjunct professor at Washington University in St. Louis, teaching concepts in cloud computing and big data analytics.

    Hosted by Adam Becker

    // Related Links
    Website: https://www.databricks.com/
    https://mlflow.org/

    ✌️ Connect With Us ✌️
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Samraj on LinkedIn: /samrajmoorjani
    Connect with Apurva on LinkedIn: /apurva-misra
    Connect with Ben on LinkedIn: /ben-epstein
    Connect with Adam on LinkedIn: /adamissimo

    Timestamps:
    [00:00] Introduction
    [02:30] AI Agents in Operations
    [04:36] AI Strategy Consulting
    [05:30] Agent Quality Focus
    [06:17] AI Agent Expectations
    [11:44] AI Use Cases Evolution
    [15:25] Agent Expectations Adjustment
    [17:41] Agent Quality Monitoring
    [23:22] Trust in GenAI Systems
    [33:33] Data Prep vs Product Thinking
    [40:27] Quality Systems Distinction
    [44:54] Q & A
    [1:00:57] Wrap up

  11. 504

    arrowspace: Vector Spaces and Graph Wiring

    Lorenzo Moriondo is a Technical Lead for AI at tuned.org.uk, working on AI agent protocols, graph-based search, and production-grade LLM systems.

    arrowspace: Vector Spaces and Graph Wiring // MLOps Podcast #365 with Lorenzo Moriondo, AI Research and Product Engineer

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Meet arrowspace — an open-source library for curating and understanding LLM datasets across the entire lifecycle, from pre-training to inference. Instead of treating embeddings as static vectors, arrowspace turns them into graphs ("graph wiring") so you can explore structure, not just similarity. That unlocks smarter RAG search (beyond basic semantic matching), dataset fingerprinting, and deeper insights into how different datasets behave. You can compare datasets, predict how changes will affect performance, detect drift early, and even safely mix data sources while measuring outcomes. In short: arrowspace helps you see your data — and make better decisions because of it.

    // Bio
    With over a decade of experience in software and data engineering across startups and early-stage projects, Lorenzo has recently turned his focus to the AI-assisted movement to automate software and data operations. He has contributed to and founded projects within various open-source communities, including work with Summer of Code, where he focused on the Semantic Web and REST APIs. A strong enthusiast of Python and Rust, he develops tools centered around LLMs and agentic systems. He is a maintainer of the SmartCore ML library, as well as the creator of arrowspace and the Topological Transformer.

    // Related Links
    Website: https://www.tuned.org.uk

    ✌️ Connect With Us ✌️
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Lorenzo on LinkedIn: /lorenzomoriondo

    Timestamps:
    [00:00] Graph Wiring for ML
    [00:32] RAG and Vector Similarity
    [08:58] Geometric Search Trade-offs
    [13:12] Vector DB Algorithm Integration
    [21:32] Feature-Based Retrieval Shift
    [26:04] Epiplexity and Embeddings
    [31:26] Epiplexity and Embedding Structure
    [40:15] Training vs Post-hoc Models
    [47:16] Discovery-Driven Development
    [51:22] Updating Mental Models
    [53:00] Vector Search vs Agents
    [55:30] Wrap up

  12. 503

    Agentic Marketplace

    Donné Stevenson is a Machine Learning Engineer at Prosus, working on scalable ML infrastructure and productionizing GenAI systems across portfolio companies.

    Pedro Chaves is a Data Science Manager at OLX Group, working on GenAI-powered search, personalization, and large-scale marketplace recommendations.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Marketplaces are about to get smarter. Agents that find your perfect house, negotiate the best deals, and even talk to other agents on your behalf. Less tedious searching. Less back-and-forth. More time for what matters. Pedro Chaves and Donné Stevenson discuss the future of buying and selling cars, homes, and everything in between - and what it'll take to get there.

    // Bio
    Donné Stevenson
    Focused on building AI-powered products that give companies the tools and expertise needed to harness the power of AI in their respective fields.

    Pedro Chaves
    Pedro is a Data Science Manager at OLX Group, where he leads teams building machine learning solutions to improve marketplace performance, pricing, and user experience at scale.

    // Related Links
    Website: https://www.prosus.com/
    Website: https://www.olxgroup.com/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    Timestamps:
    [00:00] OLX: Disrupting Buyer-Seller Experiences
    [03:33] Redefining the Home-Buying Experience
    [07:40] User Feedback and Iterative Rollouts
    [11:25] Beyond Chat: Redefining Agent Use
    [14:03] User Trust and Education Challenges
    [16:47] Learning Curve for Automoto
    [20:05] Interactive Decision-Making with AI
    [24:47] Agents Simplify Buyer-Seller Search
    [28:14] Garage Sale Treasure Hunting
    [33:43] Agent Discovery Layer Needed
    [34:53] Agents Relying on Agents
    [39:48] Reducing Friction in Selling Stuff
    [41:39] Extracting Buyer Intent Systematically
    [44:49] Optimizing Delivery with Lockers
    [50:10] Generative AI Commerce Strategies
    [51:03] Improving Chat Interaction Layer

  13. 502

    Durable Execution and Modern Distributed Systems

    Johann Schleier-Smith is the Technical Lead for AI at Temporal Technologies, working on reliable infrastructure for production AI systems and long-running agent workflows.

    Durable Execution and Modern Distributed Systems, Johann Schleier-Smith // MLOps Podcast #364

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps Merch: https://shop.mlops.community/

    Big shoutout to @Temporalio for the support, and to @trychroma for hosting us in their recording studio

    // Abstract
    A new paradigm is emerging for building applications that process large volumes of data, run for long periods of time, and interact with their environment. It’s called Durable Execution and is replacing traditional data pipelines with a more flexible approach. Durable Execution makes regular code reliable and scalable. In the past, reliability and scalability have come from restricted programming models, like SQL or MapReduce, but with Durable Execution, this is no longer the case. We can now see data pipelines that include document processing workflows, deep research with LLMs, and other complex and LLM-driven agentic patterns expressed at scale with regular Python programs. In this session, we describe Durable Execution and explain how it fits in with agents and LLMs to enable a new class of machine learning applications.

    // Related Links
    https://t.mp/hello?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann
    https://t.mp/vibe?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann
    https://t.mp/career?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Johann on LinkedIn: /jssmith/

  14. 501

    Performance Optimization and Software/Hardware Co-design across PyTorch, CUDA, and NVIDIA GPUs

    March 3rd, Computer History Museum CODING AGENTS CONFERENCE, come join us while there are still tickets left.
    https://luma.com/codingagents

    Chris Fregly is currently focused on building and scaling high-performance AI systems, writing and teaching about AI infrastructure, helping organizations adopt generative AI and performance engineering principles on AWS, and fostering large developer communities around these topics.

    Performance Optimization and Software/Hardware Co-design across PyTorch, CUDA, and NVIDIA GPUs // MLOps Podcast #363 with Chris Fregly, Founder, AI Performance Engineer, and Investor

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    In today’s era of massive generative models, it's important to understand the full scope of AI systems' performance engineering. This talk discusses the new O'Reilly book, AI Systems Performance Engineering, and the accompanying GitHub repo (https://github.com/cfregly/ai-performance-engineering). It provides engineers, researchers, and developers with a set of actionable optimization strategies. You'll learn techniques to co-design and co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems for both training and inference.

    // Bio
    Chris Fregly is an AI performance engineer and startup founder with experience at AWS, Databricks, and Netflix. He's the author of three O'Reilly books: Data Science on AWS (2021), Generative AI on AWS (2023), and AI Systems Performance Engineering (2025). He also runs the global AI Performance Engineering meetup and speaks at many AI-related conferences, including Nvidia GTC, ODSC, Big Data London, and more.

    // Related Links
    AI Systems Performance Engineering: Optimizing Model Training and Inference Workloads with GPUs, CUDA, and PyTorch 1st Edition by Chris Fregly: https://www.amazon.com/Systems-Performance-Engineering-Optimizing-Algorithms/dp/B0F47689K8/
    Coding Agents Conference: https://luma.com/codingagents

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Chris on LinkedIn: /cfregly

    Timestamps:
    [00:00] SageMaker HyperPod Resilience
    [00:27] Book Creation and Software Engineering
    [04:57] Software Engineers and Maintenance
    [11:49] AI Systems Performance Engineering
    [22:03] Cognitive Biases and Optimization / "Mechanical Sympathy"
    [29:36] GPU Rack-Scale Architecture
    [33:58] Data Center Reliability Issues
    [43:52] AI Compute Platforms
    [49:05] Hardware vs Ecosystem Choice
    [1:00:05] Claude vs Codex vs Gemini
    [1:14:53] Kernel Budget Allocation
    [1:18:49] Steerable Reasoning Challenges
    [1:24:18] Data Chain Value Awareness

  15. 500

    Serving LLMs in Production: Performance, Cost & Scale // CAST AI Roundtable

    Roundtable CAST AI episode: Serving LLMs in Production: Performance, Cost & Scale.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Experimenting with LLMs is easy. Running them reliably and cost-effectively in production is where things break. Most AI teams never make it past demos and proofs of concept. A smaller group is pushing real workloads to production—and running into very real challenges around infrastructure efficiency, runaway cloud costs, and reliability at scale. This session is for engineers and platform teams moving beyond experimentation and building AI systems that actually hold up in production.

    // Bio
    Ioana Apetrei
    Ioana is a Senior Product Manager at CAST AI, leading the AI Enabler product, an AI Gateway platform for cost-effective LLM infrastructure deployment. She brings 12 years of experience building B2C and B2B products reaching over 10 million users. Outside of work, she enjoys assembling puzzles and LEGOs and watching motorsports.

    Igor Šušić
    Igor is a founding Machine Learning Engineer at CAST AI’s AI Enabler, where he focuses on optimizing inference and training at scale. With a strong background in Natural Language Processing (NLP) and Recommender Systems, Igor has been tackling the challenges of large-scale model optimization long before transformers became mainstream. Prior to CAST AI, he worked at industry leaders like Bloomreach and Infobip, where he contributed to the development and deployment of large-scale AI and personalization systems from the early days of the field.

    // Related Links
    Website: https://cast.ai/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Ioana on LinkedIn: /ioanaapetrei/
    Connect with Igor on LinkedIn: /igor-%C5%A1u%C5%A1i%C4%87/

  16. 499

    The Future of Information Retrieval: From Dense Vectors to Cognitive Search

    Rahul Raja is a Staff Software Engineer at LinkedIn, working on large-scale search infrastructure, information retrieval systems, and integrating AI/ML to improve ranking and semantic search experiences.

    The Future of Information Retrieval: From Dense Vectors to Cognitive Search // MLOps Podcast #362 with Rahul Raja, Staff Software Engineer at LinkedIn

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Information Retrieval is evolving from keyword matching to intelligent, vector-based understanding. In this talk, Rahul Raja explores how dense retrieval, vector databases, and hybrid search systems are redefining how modern AI retrieves, ranks, and reasons over information. He discusses how retrieval now powers large language models through Retrieval-Augmented Generation (RAG) and the new MLOps challenges that arise: embedding drift, continuous evaluation, and large-scale vector maintenance. Looking ahead, the session envisions a future of Cognitive Search, where retrieval systems move beyond recall to genuine reasoning, contextual understanding, and multimodal awareness. Listeners will gain insight into how the next generation of retrieval will bridge semantics, scalability, and intelligence, powering everything from search and recommendations to generative AI.

    // Bio
    Rahul is a Staff Engineer at LinkedIn, where he focuses on search and deployment systems at scale. Rahul is a graduate of Carnegie Mellon University and has a strong background in building reliable, high-performance infrastructure. He has led many initiatives to improve search relevance and streamline ML deployment workflows.

    // Related Links
    Website: https://www.linkedin.com/
    Coding Agents Conference: https://luma.com/codingagents

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Rahul on LinkedIn: /rahulraja963/

    Timestamps:
    [00:00] Vector Search for Media
    [00:33] RAG and Search Evolution
    [04:45] Cognitive vs Semantic Search
    [08:26] High Value Search Signals
    [16:43] Scaling with Embeddings
    [22:37] BM25 Benchmark Bias
    [29:00] Video Search Use Cases
    [31:21] Context and Search Tradeoff
    [35:04] Personal Memory Augmentation
    [39:03] Future of Cognitive Search
    [44:51] Access Control in Vectors
    [49:14] Search Ranking Challenge
    [54:43] Hard Search Problems Solved
    [58:29] Freshness vs Cost
    [1:02:12] Wrap up

  17. 498

    Rethinking Notebooks Powered by AI

    Vincent Warmerdam is a Founding Engineer at marimo, working on reinventing Python notebooks as reactive, reproducible, interactive, and Git-friendly environments for data workflows and AI prototyping. He helps build the core marimo notebook platform, pushing its reactive execution model, UI interactivity, and integration with modern development and AI tooling so that notebooks behave like dependable, shareable programs and apps rather than error-prone scratchpads.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Vincent Warmerdam joins Demetrios fresh off marimo’s acquisition by Weights & Biases—and makes a bold claim: notebooks as we know them are outdated. They talk Molab (GPU-backed, cloud-hosted notebooks), LLMs that don’t just chat but actually fix your SQL and debug your code, and why most data folks are consuming tools instead of experimenting. Vincent argues we should stop treating notebooks like static scratchpads and start treating them like dynamic apps powered by AI. It’s a conversation about rethinking workflows, reclaiming creativity, and not outsourcing your brain to the model.

    // Bio
    Vincent is a senior data professional who worked as an engineer, researcher, team lead, and educator in the past. You might know him from tech talks with an attempt to defend common sense over hype in the data space. He is especially interested in understanding algorithmic systems so that one may prevent failure. As such, he has always had a preference to keep calm and check the dataset before flowing tonnes of tensors. He currently works at marimo, where he spends his time rethinking everything related to Python notebooks.

    // Related Links
    Website: https://marimo.io/
    Coding Agent Conference: https://luma.com/codingagents
    Hyperbolic GPU Cloud: app.hyperbolic.ai

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    MLOps GPU Guide: https://go.mlops.community/gpuguide
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Vincent on LinkedIn: /vincentwarmerdam/

    Timestamps:
    [00:00] Context in Notebooks
    [00:24] Acquisition and Team Continuity
    [04:43] Coding Agent Conference Announcement!
    [05:56] Hyperbolic GPU Cloud Ad
    [06:54] marimo and W&B Synergies
    [09:31] marimo Cloud Code Support
    [12:59] Hardest Code to Generate
    [16:22] Trough of Disillusionment
    [20:38] Agent Interaction in Notebooks
    [25:41] Wrap up

  18. 497

    Software Engineering in the Age of Coding Agents: Testing, Evals, and Shipping Safely at Scale

    Ereli Eran is the Founding Engineer at 7AI, where he’s focused on building and scaling the company’s agentic AI-driven cybersecurity platform — developing autonomous AI agents that triage alerts, investigate threats, enrich security data, and enable end-to-end automated security operations so human teams can focus on higher-value strategic work.

    Software Engineering in the Age of Coding Agents: Testing, Evals, and Shipping Safely at Scale // MLOps Podcast #361 with Ereli Eran, Founding Engineer at 7AI

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    A conversation on how AI coding agents are changing the way we build and operate production systems. We explore the practical boundaries between agentic and deterministic code, strategies for shared responsibility across models, engineering teams, and customers, and how to evaluate agent performance at scale. Topics include production quality gates, safety and cost tradeoffs, managing long-tail failures, and deployment patterns that let you ship agents with confidence.

    // Bio
    Ereli Eran is a founding engineer at 7AI, where he builds agentic AI systems for security operations and the production infrastructure that powers them. His work spans the full stack - from designing experiment frameworks for LLM-based alert investigation to architecting secure multi-tenant systems with proper authentication boundaries. Previously, he worked in data science and software engineering roles at Stripe and VMware Carbon Black, and was an early employee of Ravelin and Normalyze.

    // Related Links
    Website: https://7ai.com/
    Coding Agents Conference: https://luma.com/codingagents

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Ereli on LinkedIn: /erelieran/

    Timestamps:
    [00:00] Language Sensitivity in Reasoning
    [00:25] Value of Claude Code
    [01:54] AI in Security Workflows
    [06:21] Agentic Systems Failures
    [12:50] Progressive Disclosure in Voice Agents
    [16:39] LLM vs Classic ML
    [19:44] Hybrid Approach to Fraud
    [25:58] Debugging with User Feedback
    [33:52] Prompts as Code
    [42:07] LLM Security Workflow
    [45:10] Shared Memory in Security
    [49:11] Common Agent Failure Modes
    [53:34] Wrap up

  19. 496

    Physical AI: Teaching Machines to Understand the Real World

    Nick Gillian is the Co-Founder and CTO at Archetype AI, working on physical AI foundation models that understand and reason over real-world sensor data.

    Physical AI: Teaching Machines to Understand the Real World // MLOps Podcast #360 with Nick Gillian, Co-Founder and CTO of Archetype AI

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    As AI moves beyond the cloud and simulation, the next frontier is Physical AI: systems that can perceive, understand, and act within real-world environments in real time. In this conversation, Nick Gillian, Co-Founder and CTO of Archetype AI, explores what it actually takes to turn raw sensor and video data into reliable, deployable intelligence. Drawing on his experience building Google’s Soli and Jacquard and now leading development of Newton, a foundational model for Physical AI, Nick discusses how real-time physical understanding changes what’s possible across safety monitoring, infrastructure, and human–machine interaction. He’ll share lessons learned translating advanced research into products that operate safely in dynamic environments, and why many organizations underestimate the challenges and opportunities of AI in the physical world.

    // Bio
    Nick Gillian, Ph.D., is Co-Founder and CTO of Archetype AI with over 15 years of experience turning advanced AI and interaction research into real-world products. At Archetype, he leads the AI and engineering teams behind Newton—a first-of-its-kind Physical AI foundational model that can perceive, understand, and reason about the physical world. Before co-founding Archetype, Nick was a Senior Staff Machine Learning Engineer at Google and a researcher at MIT, where he developed AI and ML methods for real-time sensor understanding. At Google’s Advanced Technology and Projects group, he led machine learning research that powered breakthrough products like Soli radar and Jacquard, and helped advance sensing algorithms across Pixel, Nest, and wearable devices.

    // Related Links
    Website: https://www.archetypeai.io/
    https://www.archetypeai.io/blog/timefusion-newton
    https://www.nature.com/articles/s41598-023-44714-2
    https://www.youtube.com/watch?v=Pow4utY9teU
    https://www.youtube.com/watch?v=uE0jjdzwe9w
    https://arxiv.org/abs/2410.14724
    Coding Agents Conference: https://luma.com/codingagents

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Nick on LinkedIn: /nick-gillian-b27b1094/

    Timestamps:
    [00:00] Physical Agent Framework
    [00:56] Physical AI Clarification
    [06:53] Building a Repair Model
    [12:41] World Models and LLMs
    [17:17] Data Weighting Strategies
    [24:19] Data Diversity vs Quantity
    [38:30] R&D and Product Creation
    [41:22] Construction Site Data Shipping
    [50:33] Wrap up

  20. 495

    Speed and Scale: How Today's AI Datacenters Are Operating Through Hypergrowth

    Kris Beevers is the CEO at NetBox Labs, working on turning NetBox into the system of record and automation backbone for modern and AI-driven infrastructure.

    Speed and Scale: How Today's AI Datacenters Are Operating Through Hypergrowth // MLOps Podcast #359 with Kris Beevers, CEO of NetBox Labs

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Hundreds of neocloud operators and "AI Factory" builders have emerged to serve the insatiable demand for AI infrastructure. These teams are compressing the design, build, deploy, operate, scale cycle of their infrastructures down to months, while managing massive footprints with lean teams. How? By applying modern intent-driven infrastructure automation principles to greenfield deployments. We'll explore how these teams carry design intent through to production, and how operating and automating around consistent infrastructure data is compressing "time to first train".

    // Bio
    Kris Beevers is the Co-founder and CEO of NetBox Labs. NetBox is used by nearly every Neocloud and AI datacenter to manage their networks and infrastructure. Kris is an engineer at heart and by background, and loves the leverage infrastructure innovation creates to accelerate technology and empower engineers to do their best work. A serial entrepreneur, Kris has founded and helped lead multiple other successful businesses in internet and network infrastructure. Most recently, he co-founded and led NS1, which was acquired by IBM in 2023. He holds a Ph.D. in Computer Science from Rensselaer Polytechnic Institute and is based in New Jersey.

    // Related Links
    Website: https://netboxlabs.com/
    Coding Agents Conference: https://luma.com/codingagents

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Kris on LinkedIn: /beevek/

    Timestamps:
    [00:00] Observability and Delta Analysis
    [00:26] New World Exploration
    [04:06] Bottlenecks in AI Infrastructure
    [13:37] Data Center Optimization Challenges
    [19:58] Tech Stack Breakdown
    [25:26] Data Center Design Principles
    [31:32] Constraints and Automation in Design
    [40:00] Complexity in Data Centers
    [45:02] GPU Cloud Landscape
    [50:24] Data Centers in Containers
    [57:45] Observability Beyond Software
    [1:04:43] Tighter Integrations vs NetBox
    [1:06:47] Wrap up

  21. 494

    Cracking the Black Box: Real-Time Neuron Monitoring & Causality Traces

    Mike Oaten is the Founder and CEO of TIKOS, working on building AI assurance, explainability, and trustworthy AI infrastructure, helping organizations test, monitor, and govern AI models and systems to make them transparent, fair, robust, and compliant with emerging regulations.

    Cracking the Black Box: Real-Time Neuron Monitoring & Causality Traces // MLOps Podcast #358 with Mike Oaten, Founder and CEO of TIKOS

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter

    // Abstract
    As AI models move into high-stakes environments like Defence and Financial Services, standard input/output testing, evals, and monitoring are becoming dangerously insufficient. MLOps teams need to access and analyse the internal reasoning of their models to achieve compliance with the EU AI Act, NIST AI RMF, and other requirements. In this session, Mike introduces the company's patent-pending AI assurance technology that moves beyond statistical proxies. He will break down the architecture of the Synapses Logger, which embeds directly into the neural activation flow to capture weights, activations, and activation paths in real time.

    // Bio
    Mike Oaten serves as the CEO of TIKOS, leading the company’s mission to progress trustworthy AI through unique, high-performance AI model assurance technology. A seasoned technical and data entrepreneur, Mike brings experience from successfully co-founding and exiting two previous data science startups: Riskopy Inc. (acquired by Nasdaq-listed Coupa Software in 2017) and Regulation Technologies Limited (acquired by mnAi Data Solutions in 2022). Mike's expertise spans data, analytics, and ML product and governance leadership. At TIKOS, Mike leads a VC-backed team developing technology to test and monitor deep-learning models in high-stakes environments, such as defence and financial services, so they comply with the stringent new laws and regulations.

    // Related Links
    Website: https://tikos.tech/
    LLM guardrails: https://medium.com/tikos-tech/your-llm-output-is-confidently-wrong-heres-how-to-fix-it-08194fdf92b9
    Model Bias: https://medium.com/tikos-tech/from-hints-to-hard-evidence-finally-how-to-find-and-fix-model-bias-in-dnns-2553b072fd83
    Model Robustness: https://medium.com/tikos-tech/tikos-spots-neural-network-weaknesses-before-they-fail-the-iris-dataset-b079265c04da
    GPU Optimisation: https://medium.com/tikos-tech/400x-performance-a-lightweight-open-source-python-cuda-utility-to-break-vram-barriers-d545e5b6492f
    Hyperbolic GPU Cloud: app.hyperbolic.ai
    Coding Agents Conference: https://luma.com/codingagents

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Mike on LinkedIn: /mike-oaten/

    Timestamps:
    [00:00] Regulations as Opportunity
    [00:25] Regulation Compliance Fun
    [02:49] AI Act Layers Explained
    [05:19] Observability in Systems vs ML
    [09:05] Risk Transfer in AI
    [11:26] LLMs and Model Approval
    [14:53] LLMs in Finance
    [17:17] Hyperbolic GPU Cloud Ad
    [18:16] Stakeholder Alignment and Tech
    [22:20] AI in Regulated Environments
    [28:55] Autonomous Boat Regulations
    [34:20] Data Compliance Mapping
    [39:11] Data Capture Strategy
    [41:13] EU AI Act Insights
    [44:52] Wrap up
    [45:45] Join the Coding Agents Conference!

  22. 493

    A Playground for AI/ML Engineers

    Paulo Vasconcellos is the Principal Data Scientist for Generative AI Products at Hotmart, working on AI-powered creator and learning experiences, including intelligent tutoring, content automation, and multilingual localization at scale.

    Join us at Coding Agents: The AI Driven Developer Conference - https://luma.com/codingagents
    MLOps GPU Guide: https://go.mlops.community/gpuguide
    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter

    // Abstract
    “Agent as a product” sounds like hype, until Hotmart turns creators’ content into AI businesses that actually work.

    // Bio
    Paulo Vasconcellos is the Principal Data Scientist for Generative AI Products at Hotmart, where he leads efforts in applied AI, machine learning, and generative technologies to power intelligent experiences for creators and learners. He holds an MSc in Computer Science with a focus on artificial intelligence and is also a co-founder of Data Hackers, a prominent data science and AI community in Brazil. Paulo regularly speaks and publishes on topics spanning data science, ML infrastructure, and AI innovation.

    // Related Links
    Website: paulovasconcellos.com.br
    Coding Agent - Virtual Conference: https://home.mlops.community/home/events/coding-agents-virtual

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]
    MLOps GPU Guide: https://go.mlops.community/gpuguide
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Paulo on LinkedIn: /paulovasconcellos/

    Timestamps:
    [00:00] Hotmart Data Science Challenges
    [02:38] LLMs vs spaCy
    [11:38] Use Cases in Production
    [19:04] Coding Agents Virtual Conference Announcement!
    [29:27] ML to AI Product Shift
    [34:49] Tool-Augmented Agent Approach
    [38:28] MLOps GPU Guide
    [41:24] AI Use Cases at Hotmart
    [49:34] Agent Tool Access Explained
    [51:04] MLOps Community Gratitude
    [53:22] Wrap up

  23. 492

    How Universal Resource Management Transforms AI Infrastructure Economics

    Wilder Lopes is the CEO and Founder of Ogre.run, working on AI-driven dependency resolution and reproducible code execution across environments.

    How Universal Resource Management Transforms AI Infrastructure Economics // MLOps Podcast #357 with Wilder Lopes, CEO / Founder of Ogre.run

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter

    // Abstract
    Enterprise organizations face a critical paradox in AI deployment: while 52% struggle to access needed GPU resources with 6-12 month waitlists, 83% of existing CPU capacity sits idle. This talk introduces an approach to AI infrastructure optimization through universal resource management that reshapes applications to run efficiently on any available hardware—CPUs, GPUs, or accelerators.
    We explore how code reshaping technology can unlock the untapped potential of enterprise computing infrastructure, enabling organizations to serve 2-3x more workloads while dramatically reducing dependency on scarce GPU resources. The presentation demonstrates why CPUs often outperform GPUs for memory-intensive AI workloads, offering superior cost-effectiveness and immediate availability without architectural complexity.

    // Bio
    Wilder Lopes is a second-time founder, developer, and research engineer focused on building practical infrastructure for developers. He is currently building Ogre.run, an AI agent designed to solve code reproducibility. Ogre enables developers to package source code into fully reproducible environments in seconds. Unlike traditional tools that require extensive manual setup, Ogre uses AI to analyze codebases and automatically generate the artifacts needed to make code run reliably on any machine. The result is faster development workflows and applications that work out of the box, anywhere.

    // Related Links
    Website: https://ogre.run
    https://lopes.ai
    https://substack.com/@wilderlopes
    https://youtu.be/YCWkUub5x8c?si=7RPKqRhu0Uf9LTql

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Wilder on LinkedIn: /wilderlopes/

    Timestamps:
    [00:00] Secondhand Data Centers Challenges
    [00:27] AI Hardware Optimization Debate
    [03:40] LLMs on Older Hardware
    [07:15] CXL Tradeoffs
    [12:04] LLM on CPU Constraints
    [17:07] Leveraging Existing Hardware
    [22:31] Inference Chips Overview
    [27:57] Fundamental Innovation in AI
    [30:22] GPU CPU Combinations
    [40:19] AI Hardware Challenges
    [43:21] AI Perception Divide
    [47:25] Wrap up

  24. 491

    Conversation with the MLflow Maintainers

    Corey Zumar is a Product Manager at Databricks, working on MLflow and LLM evaluation, tracing, and lifecycle tooling for generative AI.
    Jules Damji is a Lead Developer Advocate at Databricks, working on Spark, lakehouse technologies, and developer education across the data and AI community.
    Danny Chiao is an Engineering Leader at Databricks, working on data and AI observability, quality, and production-grade governance for ML and agent systems.

    MLflow Leading Open Source // MLOps Podcast #356 with Databricks' Corey Zumar, Jules Damji, and Danny Chiao

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    Shoutout to Databricks for powering this MLOps Podcast episode.

    // Abstract
    MLflow isn’t just for data scientists anymore—and pretending otherwise is holding teams back. Corey Zumar, Jules Damji, and Danny Chiao break down how MLflow is being rebuilt for GenAI, agents, and real production systems where evals are messy, memory is risky, and governance actually matters. The takeaway: if your AI stack treats agents like fancy chatbots or splits ML and software tooling, you’re already behind.

    // Bio
    Corey Zumar
    Corey has been working as a Software Engineer at Databricks for the last 4 years and has been an active contributor to and maintainer of MLflow since its first release.
    Jules Damji
    Jules is a developer advocate at Databricks Inc., an MLflow and Apache Spark™ contributor, and Learning Spark, 2nd Edition coauthor. He is a hands-on developer with over 25 years of experience. He has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, Anyscale, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc. in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).
    Danny Chiao
    Danny is an engineering lead at Databricks, leading efforts around data observability (quality, data classification). Previously, Danny led efforts at Tecton (+ Feast, an open source feature store) and Google to build ML infrastructure and large-scale ML-powered features. Danny holds a Bachelor’s Degree in Computer Science from MIT.

    // Related Links
    Website: https://mlflow.org/
    https://www.databricks.com/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Corey on LinkedIn: /corey-zumar/
    Connect with Jules on LinkedIn: /dmatrix/
    Connect with Danny on LinkedIn: /danny-chiao/

    Timestamps:
    [00:00] MLflow Open Source Focus
    [00:49] MLflow Agents in Production
    [00:00] AI UX Design Patterns
    [12:19] Context Management in Chat
    [19:24] Human Feedback in MLflow
    [24:37] Prompt Entropy and Optimization
    [30:55] Evolving MLflow Personas
    [36:27] Persona Expansion vs Separation
    [47:27] Product Ecosystem Design
    [54:03] PII vs Business Sensitivity
    [57:51] Wrap up

  25. 490

    Leadership on AI

    Euro Beinat is the Global Head of AI and Data Science at Prosus Group, working on scaling AI-driven tools and agent-based systems across Prosus’s global portfolio, deploying internal assistants like Toqan and generative AI platforms such as PlusOne, and building initiatives like AI House Amsterdam and interdisciplinary AI residencies to explore intent-driven AI and strengthen Europe’s AI ecosystem.
    Mert Öztekin is the Chief Technology Officer at Just Eat Takeaway.com, working on advancing the company’s platform with AI-driven ordering and personalised user experiences, scaling cloud and generative AI tooling for engineering productivity, and exploring innovative delivery technologies like automation to make ordering and delivery more seamless.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Agents sound smart until millions of users show up. A real talk on tools, UX, and why autonomy is overrated.

    // Bio
    Euro Beinat
    Euro is a technology executive and entrepreneur specializing in data science, machine learning, and AI. He works with global corporations and startups to build data- and ML-driven products and businesses. His current focus is on Generative AI and the use of AI as a tool for invention and innovation.
    Mert Öztekin
    Mert is the current Chief Technology Officer at Just Eat Takeaway.com with previous experience as a CTO at Delivery Hero Germany GmbH, Director of Engineering at Delivery Hero, and IT Manager at yemeksepeti.com. He has a background in software engineering, system-business analysis, and project management, with a master's degree in Computer Engineering. Mert has also worked as an IT Project Team Lead and has experience in managing mobile teams and global expansions in the online food ordering industry.

    // Related Links
    Website: https://www.prosus.com/
    Website: https://justeattakeaway.com/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    MLOps GPU Guide: https://go.mlops.community/gpuguide
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Euro on LinkedIn: /eurobeinat/
    Connect with Mert on LinkedIn: /mertoztekin/

    Timestamps:
    [00:00] AI Transformation Challenges
    [00:29] AI Productivity
    [04:30] Developer Tool Freedom
    [09:40] AI Alignment Bottleneck
    [22:17] Exploring Agent Potential
    [25:59] Governance of AI Agents
    [33:24] Shadow AI Governance
    [40:57] AI Budgeting for Growth
    [46:27] MLOps GPU Guide announcement!

  26. 489

    Computers that Think and Take Actions for You

    Zengyi Qin is the Founder of the OpenAGI Foundation, working on computer-use models and open, agent-centric AI infrastructure.

    Computers that Think and Take Actions for You, Zengyi Qin // MLOps Podcast #355

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps Merch: https://shop.mlops.community/

    // Abstract
    What if the computer itself can think and take actions for you? You just give it a goal, and it performs every click, type, drag, and gets work done across the desktop and web. In this talk, Zengyi reveals the breakthrough technology that his company OpenAGI is developing: AI that can use computers like humans do. He talks about how his team developed the model, why it outperforms similar models from OpenAI and Google, and its wide use cases across different domains.

    // Related Links
    Website: https://www.qinzy.tech/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Zengyi on LinkedIn: /qinzy/

    Timestamps:
    [00:00] AI and Human Interaction
    [00:30] Zengyi's story
    [08:19] Why Expensive Models Lost
    [06:30] Bigger Models Are Lazy
    [10:24] Training Computer-Use vs LLMs
    [13:53] World Models and Sandboxes
    [19:42] Dealing with Non-Stationary States
    [23:56] Training with Software
    [26:44] Sandbox Training Process
    [41:33] Infrastructure for Computer Models
    [44:36] Wrap up

  27. 488

    Real-time features, AI search, Agentic similarities

    Varant Zanoyan is the Co-founder & CEO at Zipline AI, working on building a next-generation AI/ML infrastructure platform that streamlines data pipelines, model deployment, observability, and governance to accelerate enterprise AI development.
    Nikhil Simha Raprolu is the Co-founder & CTO at Zipline AI, focused on architecting and scaling the company’s AI data platform — extending the open-source Chronon engine into a developer-friendly system that simplifies building and operating production AI applications.

    Real-time features, AI search, Agentic similarities, Varant Zanoyan & Nikhil Simha Raprolu // MLOps Podcast #354

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps Swag/Merch: https://shop.mlops.community/
    And huge thanks to Chroma for hosting us in their recording studio

    // Abstract
    Feature stores might be the wrong abstraction. Varant Zanoyan and Nikhil Simha Raprolu explain why Chronon ditched “store-first” thinking and focused on compute, orchestration, and real-time correctness—born at Airbnb, battle-tested with Stripe. If embeddings, agents, and real-time ML feel painful, this episode explains why.

    // Related Links
    Website: https://zipline.ai/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Varant on LinkedIn: /vzanoyan/
    Connect with Nikhil on LinkedIn: /nikhilsimha/

    Timestamps:
    [00:00] Feature Platform Insights
    [02:00] Zipline and Feature Stores
    [05:19] Chronon and Zipline Origins
    [10:49] Feast and Feather Comparison
    [13:27] Open source challenges
    [20:52] Zipline and Iceberg Integration
    [23:54] Airbnb Agent Systems
    [28:16] Features vs Embeddings
    [29:07] Wrap up

  28. 487

    Tool definitions are the new Prompt Engineering

    Alex Salazar is the CEO and Co-Founder of Arcade.dev, working on secure AI agents and real-world automation integrations.
    Chiara Caratelli is a Data Scientist at Prosus Group, working on AI agents, web automation, and evaluation of robust multimodal models.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Agents sound smart until millions of users show up. A real talk on tools, UX, and why autonomy is overrated.

    // Bio
    Chiara Caratelli
    Chiara is a Data Scientist at Prosus, where she develops AI-driven solutions with a focus on AI agents, multimodal models, and new user experiences. With a PhD in Computational Science and a background in machine learning engineering and data science, she has worked on deploying AI-powered applications at scale, collaborating with Prosus portfolio companies to drive real-world impact. Beyond her work at Prosus, she enjoys experimenting with generative AI and art. She is also an avid climber and book reader, always eager to explore new ideas and share knowledge with the AI and ML community.
    Alex Salazar
    Alex is the CEO and co-founder of Arcade.dev, the unified agent action platform that makes AI agents production-ready. Previously, Salazar co-founded Stormpath, the first authentication API for developers, which was acquired by Okta. At Okta, he led developer products, accounting for 25% of total bookings, and launched a new auth-centric proxy server product that reached $9M in revenue within a year. He also managed Okta's network of over 7,000 auth integrations. Alex holds a computer science degree from Georgia Tech and an MBA from Stanford University.

    // Related Links
    Website: https://www.prosus.com/
    Website: https://www.arcade.dev/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Alex on LinkedIn: /alexsalazar/
    Connect with Chiara on LinkedIn: /chiara-caratelli/

    Timestamps:
    [00:00] Intro
    [00:15] Insights from iFood
    [06:22] API vs agent intention
    [09:45] Tool definition clarity
    [15:37] Preemptive context loading
    [27:50] Contextualizing agent data
    [33:27] Prompt bloat in payments
    [41:33] Agent building evolution
    [50:09] Agent program scalability
    [55:29] Why multi-agent is a dead end
    [56:17] Wrap up

  29. 486

    The Future of AI Agents is Sandboxed

    Jonathan Wall is the CEO at Runloop.ai, working on enterprise-grade infrastructure and execution environments for AI coding agents.

    The Future of AI Agents is Sandboxed // MLOps Podcast #353 with Jonathan Wall, CEO at Runloop.ai

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    Shoutout to @runloop-ai for powering this MLOps Podcast episode.

    // Abstract
    Everyone’s arguing about agents. Jonathan Wall says the real fight is about sandboxes, isolation, and why most “agent platforms” are doing it wrong.

    // Bio
    Jon was the tech lead of Google File System, a founding engineer at Google Wallet, and then the founder of Index, which was acquired by Stripe. He is building Runloop.ai to bridge the production gap for AI agents by building a one-stop sandbox infrastructure for building, deploying, and refining agents.

    // Related Links
    Website: runloop.ai
    Blogs and content at https://www.runloop.ai/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Jon on LinkedIn: /jonathantwall/

    Timestamps:
    [00:00] GitHubification of workflows
    [00:29] Sandbox definitions explained
    [04:47] Agent setup explanation
    [08:03] Sandbox vs API agent
    [13:51] Resource usage in sandbox
    [22:50] Agent evaluation setup
    [28:08] Failure cases value
    [31:06] Sandbox isolation vs multi-tenancy
    [36:14] Frameworks vs Harnesses
    [39:02] LangGraph vs Harness comparison
    [43:22] Agent flexibility and verification
    [52:51] Training data focus
    [57:10] Wrap up

  30. 485

    Context engineering 2.0, Agents + Structured Data, and the Redis Context Engine

    Simba Khadder is the founder and CEO of Featureform, now at Redis, working on real-time feature orchestration and building a context engine for AI and agents.

    Context Engineering 2.0, Simba Khadder // MLOps Podcast #352

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter

    // Abstract
    Feature stores aren’t dead — they were just misunderstood. Simba Khadder argues that the real bottleneck in agents isn’t models, it’s context, and explains why Redis is quietly turning into an AI data platform. Context engineering matters more than clever prompt hacks.

    // Bio
    Simba Khadder leads Redis Context Engine and Redis Featureform, building both the feature and context layer for production AI agents and ML models. He joined Redis via the acquisition of Featureform, where he was Founder & CEO. At Redis, he continues to lead the feature store product as well as spearhead Context Engine to deliver a unified, navigable interface connecting documents, databases, events, and live APIs for real-time, reliable agent workflows. He also loves to surf, go sailing with his wife, and hang out with his dog Chupacabra.

    // Related Links
    Website: featureform.com
    https://marketing.redis.io/blog/real-time-structured-data-for-ai-agents-featureform-is-joining-redis/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Simba on LinkedIn: /simba-k/

    Timestamps:
    [00:00] Context engineering explanation
    [00:25] MLOps and feature stores
    [03:36] Selling a company experience
    [06:34] Redis feature store evolution
    [12:42] Embedding hub
    [20:42] Human vs agent semantics
    [26:41] Enrich MCP data flow
    [29:55] Data understanding and embeddings
    [35:18] Search and context tools
    [39:45] MCP explained without hype
    [45:15] Wrap up

  31. 484

    Does AgenticRAG Really Work?

    Satish Bhambri is a Sr Data Scientist at Walmart Labs, working on large-scale recommendation systems and conversational AI, including RAG-powered GroceryBot agents, vector-search personalization, and transformer-based ad relevance models.

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter

    // Abstract
    The MLOps Community Podcast features Satish Bhambri, Senior Data Scientist with the Personalization and Ranking team at Walmart Labs and one of the emerging leaders in applied AI, in its newest episode. Satish has quietly built one of the most diverse and impactful AI portfolios in his field, spanning quantum computing, deep learning, astrophysics, computer vision, NLP, fraud detection, and enterprise-scale recommendation systems.
    Nearly a decade of Bhambri's research across deep learning, astrophysics, quantum computing, NLP, and computer vision culminated in over 10 peer-reviewed publications released in 2025 through IEEE and Springer, and his early papers are indexed by NASA ADS and Harvard SAO, marking the start of his long-term research arc. He also holds a patent for an AI-powered smart grid optimization framework that integrates deep learning, real-time IoT sensing, and adaptive control algorithms to improve grid stability and efficiency, a demonstration of his original, high-impact contributions to intelligent infrastructure.
    Bhambri leads personalization and ranking initiatives at Walmart Labs, where his AI systems serve more than 531 million users every month (roughly 5% of the world’s population, based on traffic data). His work with Transformers, Vision-Language Models, RAG and agentic-RAG systems, and GPU-accelerated pipelines has driven significant improvements in scale and performance, including increases in ad engagement, faster compute, and improved recommendation diversity.
    Satish is a Distinguished Fellow & Assessor at the Soft Computing Research Society (SCRS), a reviewer for IEEE and Springer, and has served as a judge and program evaluator for several elite platforms. He was invited to the Program Judge Committee of NeurIPS, one of the most prestigious AI conferences in the world, and to evaluate innovations for DeepInvent AI, where he reviews high-impact research and commercialization efforts. He has also judged Y Combinator Startup Hackathons, evaluating pitches for an accelerator that produced companies like Airbnb, Stripe, Coinbase, Instacart, and Reddit.
    Before Walmart, Satish built supply-chain intelligence systems at BlueYonder that reduced ETA errors and saved retailers millions while also bringing containers to the production pipeline. Earlier, at ASU’s School of Earth & Space Exploration, he collaborated with astrophysicists on galaxy emission simulations, radio burst detection, and dark matter modeling, including work alongside Dr. Lawrence Krauss, Dr. Karen Olsen, and Dr. Adam Beardsley.
    On the podcast, Bhambri discusses the evolution of deep learning architectures from RNNs and CNNs to transformers and agentic RAG systems, the design of production-grade AI architectures with examples, his long-term vision for intelligent systems that bridge research and real-world impact, and the engineering principles behind building production-grade AI at a global scale.

    // Related Links
    Papers: https://scholar.google.com/citations?user=2cpV5GUAAAAJ&hl=en
    Patent: https://search.ipindia.gov.in/DesignApplicationStatus

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
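The agentic-RAG pattern discussed in this episode can be made concrete with a small sketch. Plain RAG retrieves once and then generates; an agentic loop adds a self-check that decides whether to retrieve again with a reformulated query before answering. Everything below (the toy corpus, the keyword-overlap retriever, and the sufficiency check standing in for an LLM critic) is illustrative only, not Walmart's GroceryBot.

```python
# Toy agentic-RAG loop: retrieve, critique the context, and re-retrieve
# with a widened query when the context looks insufficient.

def retrieve(query, corpus, k=2):
    """Rank documents by keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def context_is_sufficient(question, context):
    """Toy critic: require every content word of the question to be covered."""
    covered = set(" ".join(context).lower().split())
    needed = {w for w in question.lower().split() if len(w) > 3}
    return needed <= covered

def agentic_rag(question, corpus, max_rounds=3):
    query, context = question, []
    for _ in range(max_rounds):
        context = retrieve(query, corpus)
        if context_is_sufficient(question, context):
            break
        # A real system would ask an LLM to reformulate the query here;
        # the toy version just widens it with the retrieved text.
        query = question + " " + " ".join(context)
    return context  # a real system would now generate an answer from this

corpus = [
    "grocery delivery slots open weekly",
    "grocerybot handles grocery substitutions via vector search",
    "ad relevance uses transformer models",
]
print(agentic_rag("how does grocerybot handle grocery substitutions", corpus))
```

In production the retriever would be a vector search, the critic an LLM call, and the loop bounded by latency budgets; the control flow, however, is the whole difference between RAG and agentic RAG.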

  32. 483

    How Sierra AI Does Context Engineering

    Zack Reneau-Wedeen is the Head of Product at Sierra, leading the development of enterprise-ready AI agents — from Agent Studio 2.0 to the Agent Data Platform — with a focus on richer workflows, persistent memory, and high-quality voice interactions.

    How Sierra Does Context Engineering, Zack Reneau-Wedeen // MLOps Podcast #350

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter

    // Abstract
    Sierra’s Zack Reneau-Wedeen claims we’re building AI all wrong and that “context engineering,” not bigger models, is where the real breakthroughs will come from. In this episode, he and Demetrios Brinkmann unpack why AI behaves more like a moody coworker than traditional software, why testing it with real-world chaos (noise, accents, abuse, even bad mics) matters, and how Sierra’s simulations and model “constellations” aim to fix the industry’s reliability problems. They even argue that decision trees are dead, replaced by goals, guardrails, and speculative execution tricks that make voice AI actually usable. Plus: how Sierra trains grads to become product-engineering hybrids, and why obsessing over customers might be the only way AI agents stop disappointing everyone.

    // Related Links
    Website: https://www.zackrw.com/

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Zack on LinkedIn: /zackrw/

    Timestamps:
    [00:00] Electron cloud vs energy levels
    [03:47] Simulation vs red teaming
    [06:51] Access control in models
    [10:12] Voice vs text simulations
    [13:12] Speaker-adaptive turn-taking
    [18:26] Accents and model behavior
    [23:52] Outcome-based pricing risks
    [31:40] AI cross-pollination strategies
    [41:26] Ensemble of models explanation
    [46:47] Real-time agents vs decision trees
    [50:15] Code and no-code mix
    [54:04] Goals and guardrails explained
    [56:23] Wrap up
    [57:31] APX program!
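The model "constellation" idea mentioned in the abstract boils down to routing each request to a model suited to its task instead of sending everything to one large model. A minimal sketch of that dispatch, with entirely hypothetical model names and task labels (not Sierra's actual stack):

```python
# Toy "constellation of models" router: each task type maps to a model,
# with the large reasoning model as the fallback for unrecognized tasks.

CONSTELLATION = {
    "tool_call": "small-fast-model",        # cheap, low-latency tool selection
    "response": "tuned-response-model",     # fine-tuned for tone consistency
    "escalation": "large-reasoning-model",  # rare, hard cases
}

def route(task_type):
    """Pick a model for the task, falling back to the large model."""
    return CONSTELLATION.get(task_type, CONSTELLATION["escalation"])

print(route("tool_call"))  # -> small-fast-model
print(route("unknown"))    # -> large-reasoning-model
```

The routing table, not the router, is where the engineering lives: it encodes the latency/quality trade-off per step of the conversation.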

  33. 482

    Overcoming Challenges in AI Agent Deployment: The Sweet Spot for Governance and Security // Spencer Reagan // #349

    Spencer Reagan leads R&D at Airia, working on secure AI-agent orchestration, data governance systems, and real-time signal fusion technologies for regulated and defense environments.

    Overcoming Challenges in AI Agent Deployment: The Sweet Spot for Governance and Security // MLOps Podcast #349 with Spencer Reagan, R&D at Airia

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    Shoutout to Airia for powering this MLOps Podcast episode.

    // Abstract
    Is today's agent stack over-engineered? Spencer Reagan thinks it might be, and he’s not shy about saying so. In this episode, he and Demetrios Brinkmann get real about the messy, over-engineered state of agent systems, why LLMs still struggle in the wild, and how enterprises keep tripping over their own data chaos. They unpack red-teaming, security headaches, and the uncomfortable truth that most “AI platforms” still don’t scale. If you want a sharp, no-fluff take on where agents are actually headed, this one’s worth a listen.

    // Bio
    Passionate about technology, software, and building products that improve people's lives.

    // Related Links
    Website: https://airia.com/
    Machine Learning, AI Agents, and Autonomy // Egor Kraev // MLOps Podcast #282 - https://youtu.be/zte3QDbQSek
    Re-Platforming Your Tech Stack // Michelle Marie Conway & Andrew Baker // MLOps Podcast #281 - https://youtu.be/1ouSuBETkdA

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Spencer on LinkedIn: /spencerreagan/

    Timestamps:
    [00:00] AI industry future
    [00:55] Use cases in software
    [05:44] LLMs for data normalization
    [11:02] ROI and overengineering
    [15:58] Street width history
    [20:58] High ROI examples
    [25:16] AI building challenges
    [33:37] Budget control challenges
    [39:30] Airia Orchestration platform
    [46:25] Agent evaluation breakdown
    [53:48] Wrap up

  34. 481

    Hardening Agents for E-commerce Scale: From RL Alignment to Reliability // Panel 2

    Thanks to Prosus Group for collaborating on the Agents in Production Virtual Conference 2025.

    // Abstract
    The discussion centers on highly technical yet practical themes, such as the use of advanced post-training techniques like Direct Preference Optimization (DPO) and Parameter-Efficient Fine-Tuning (PEFT) to ensure LLMs maintain stability while specializing for e-commerce domains. We compare the implementation challenges of Computer-Using Agents in automating legacy enterprise systems versus the stability issues faced by conversational agents when inputs become unpredictable in production. We will analyze the role of cloud infrastructure in supporting the continuous, iterative training loops required by Reinforcement Learning-based agents for e-commerce!

    // Bio
    Paul van der Boor (Panel Host)
    Paul van der Boor is a Senior Director of Data Science at Prosus and a member of its internal AI group.
    Arushi Jain (Panelist)
    Arushi is a Senior Applied Scientist at Microsoft, working on LLM post-training for Computer-Using Agents (CUA) through Reinforcement Learning. She previously completed Microsoft’s competitive 2-year AI Rotational Program (MAIDAP), building and shipping AI-powered features across four product teams. She holds a Master’s in Machine Learning from the University of Michigan, Ann Arbor, and a Dual Degree in Economics from IIT Kanpur. At Michigan, she led the NLG efforts for the Alexa Prize Team, securing a $250K research grant to develop a personalized, active-listening socialbot. Her research spans collaborations with Rutgers School of Information, Virginia Tech’s Economics Department, and UCLA’s Center for Digital Behavior. Beyond her technical work, Arushi is a passionate advocate for gender equity in AI. She leads the Women in Data Science (WiDS) Cambridge community, scaling participation in her ML workshops from 25 women in 2020 to 100+ in 2025—empowering women and non-binary technologists through education and mentorship.
    Swati Bhatia
    Passionate about building and investing in cutting-edge technology to drive positive impact. Currently shaping the future of AI/ML at Google Cloud. 10+ years of global experience across the U.S., EMEA, and India in product, strategy & venture capital (Google, Uber, BCG, Morpheus Ventures).
    Audi Liu
    I’m passionate about making AI more useful and safe. Why? Because AI will be ubiquitous in every workflow, powering our lives just like how electricity revolutionized our society - it’s pivotal we get it right. At Inworld AI, we believe all future software will be powered by voice. As a Sr Product Manager at Inworld, I'm focused on building a real-time voice API that empowers developers to create engaging, human-like experiences. Inworld offers state-of-the-art voice AI at a radically accessible price - No. 1 on Hugging Face and Artificial Analysis, instant voice cloning, rich multilingual support, real-time streaming, and emotion plus non-verbal control, all for just $5 per million characters.
    Isabella Piratininga
    Experienced Product Leader with over 10 years in the tech industry, shaping impactful solutions across micro-mobility, e-commerce, and leading organizations in the new economy, such as OLX, iFood, and now Nubank. I began my journey as a Product Owner during the early days of modern product management, contributing to pivotal moments like scaling startups, mergers of major tech companies, and driving innovation in digital banking. My passion lies in solving complex challenges through user-centered product strategies. I believe in creating products that serve as a bridge between user needs and business goals, fostering value and driving growth. At Nubank, I focus on redefining financial experiences and empowering users with accessible and innovative solutions.

    Check out all the talks from the conference here: https://go.mlops.community/carzle
    Get some "I hallucinate more than ChatGPT" t-shirts here: https://go.mlops.community/NL_RY25_Merch

  35. 480

    Building Cursor: A Fireside Chat with VP Solutions Ricky Doar

Ricky Doar is the VP of Solutions at Cursor, where he leads forward-deployed engineers. A seasoned product and technical leader with over a decade of experience in developer tools and data platforms, Ricky previously served as VP of Field Engineering at Vercel, where he led global technical solutions for the company's next-generation frontend platform. Prior to Vercel, Ricky held multiple leadership roles at Segment (acquired by Twilio), including Director of Product Management for Twilio Engage, Group Product Manager for Personas, and RVP of Solutions Engineering for the West and APAC regions. He also worked as a Product Engineer and Senior Sales Engineer at Mixpanel, bringing deep technical expertise to customer-facing roles.

Thanks to Prosus Group for collaborating on the Agents in Production Virtual Conference 2025.

In this session, Ricky Doar, VP of Solutions at Cursor, shares actionable insights from leading large-scale AI developer tool implementations at the world's top enterprises. Drawing on field experience with organizations at the forefront of transformation, Ricky highlights key best practices, observed power-user patterns, and deployment strategies that maximize value and ensure smooth rollout. Learn what distinguishes high-performing teams, how tailored onboarding accelerates adoption, and which support resources matter most for driving enterprise-wide success.

A Prosus | MLOps Community Production

Check out all the talks from the conference here: https://go.mlops.community/carzle
Get some "I hallucinate more than ChatGPT" t-shirts here: https://go.mlops.community/NL_RY25_Merch

  36. 479

    Relational Foundation Models: Unlocking the Next Frontier of Enterprise AI // Jure Leskovec // #348

Dr. Jure Leskovec is the Chief Scientist at Kumo.AI and a Stanford professor, working on relational foundation models and graph-transformer systems that bring enterprise databases into the foundation-model era.

Relational Foundation Models: Unlocking the Next Frontier of Enterprise AI // MLOps Podcast #348 with Jure Leskovec, Professor and Chief Scientist, Stanford University and Kumo.AI.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Today's foundation models excel at text and images, but they miss the relationships that define how the world works. In every enterprise, value emerges from connections: customers to products, suppliers to shipments, molecules to targets. This talk introduces Relational Foundation Models (RFMs), a new class of models that reason over interactions, not just data points. Drawing on advances in graph neural networks and large-scale ML systems, I'll show how RFMs capture structure, enable richer reasoning, and deliver measurable business impact. Audiences will learn where relational modeling drives the biggest wins, how to build the data backbone for it, and how to operationalize these models responsibly and at scale.

// Bio
Jure Leskovec is the co-founder of Kumo.AI, an enterprise AI company pioneering AI foundation models that can reason over structured business data. He is also a Professor of Computer Science at Stanford University and a leading researcher in artificial intelligence, best known for pioneering Graph Neural Networks and creating PyG, the most widely used graph learning toolkit. Previously, Jure served as Chief Scientist at Pinterest and as an investigator at the Chan Zuckerberg BioHub. His research has been widely adopted in industry and government, powering applications at companies such as Meta, Uber, YouTube, Amazon, and more. He has received top awards in AI and data science, including the ACM KDD Innovation Award.

// Related Links
Website: https://cs.stanford.edu/people/jure/
More talks: https://www.youtube.com/results?search_query=jure+leskovec
Jure's keynote: https://www.youtube.com/watch?v=Rcfhh-V7x2U

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jure on LinkedIn: /leskovec

Timestamps:
[00:00] Structured data value
[00:26] Breakdown of ML claims
[05:04] LLMs vs recommender systems
[10:09] Building a relational model
[15:47] Feature engineering impact
[20:42] Knowledge graph inference
[26:45] Advertising models scale
[32:57] Feature stores evolution
[38:00] Training model compute needs
[42:34] Predictive AI for agents
[45:32] Leveraging faster predictive models
[48:00] Wrap up

  37. 478

    Context Engineering, Context Rot, & Agentic Search with the CEO of Chroma, Jeff Huber

Jeff Huber is the CEO of Chroma, working on context engineering and building reliable retrieval infrastructure for AI systems.

Context Engineering, Context Rot, & Agentic Search with the CEO of Chroma, Jeff Huber // MLOps Podcast #348.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Jeff Huber drops some hard truths about "context rot": the slow decay of AI memory that's quietly breaking your favorite models. From retrieval chaos to the hidden limits of context windows, he and Demetrios Brinkmann unpack why most AI systems forget what matters and how Chroma is rethinking the entire retrieval stack. It's a bold look at whether smarter AI means cleaner context, or just better ways to hide the mess.

// Bio
Jeff Huber is the CEO and cofounder of Chroma. Chroma has raised $20M from top investors in Silicon Valley and builds modern search infrastructure for AI.

// Related Links
Website: https://www.trychroma.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jeff on LinkedIn: /jeffchuber/

Timestamps:
[00:00] AI intelligence context clarity
[00:37] Context rot explanation
[03:02] Benchmarking context windows
[05:09] Breaking down search eras
[10:50] Agent task memory issues
[17:21] Semantic search limitations
[22:54] Context hygiene in AI
[30:15] Chroma on-device functionality
[38:23] Vision for precision systems
[43:07] ML model deployment challenges
[44:17] Wrap up

  38. 477

    Reliable Voice Agents

Brooke Hopkins is the CEO of Coval, a company making voice agents more reliable.

Reliable Voice Agents // MLOps Podcast #347 with Brooke Hopkins, Founder of Coval.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Voice AI is finally growing up, but not without drama. Brooke Hopkins joins Demetrios Brinkmann to unpack why most "smart" voice systems still feel dumb, what it actually takes to make them reliable, and how startups are quietly outpacing big tech in building the next generation of voice agents.

// Bio
Brooke Hopkins is the founder of Coval, a simulation and evaluation platform for AI agents. She previously led the evaluation job infrastructure at Waymo. There, her team was responsible for the developer tools for launching and running simulations, and she engineered many of the core simulation systems from the ground up.

// Related Links
Website: https://www.coval.dev/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Brooke on LinkedIn: /bnhop/

Timestamps:
[00:00] Workshop feedback
[02:21] IVR frustration and transition
[05:06] Voice use cases in business
[11:00] Voice AI reliability challenge
[18:46] Voice AI reliability issues
[24:35] Injecting context
[27:16] Conversation flow analysis
[34:52] AI overgeneralization and confidence
[37:41] Wrap up

  39. 476

    The Future of AI Operations: Insights from PwC AI Managed Services

Rani Radhakrishnan is a Principal at PwC US, leading work on AI-managed services, autonomous agents, and data-driven transformation for enterprises.

The Future of AI Operations: Insights from PwC AI Managed Services // MLOps Podcast #345 with Rani Radhakrishnan, Principal, Technology Managed Services - AI, Data Analytics and Insights at PwC US.

Huge thanks to PwC for supporting this episode!

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
In today's data-driven IT landscape, ML lifecycle management and IT operations are converging. On this podcast, we explore how end-to-end ML lifecycle practices extend to proactive, automation-driven IT operations. We discuss key MLOps concepts (CI/CD pipelines, feature stores, model monitoring) and how they power anomaly detection, event correlation, and automated remediation.

// Bio
Rani Radhakrishnan, a Principal at PwC, currently leads the AI Managed Services and Data & Insight teams in PwC US Technology Managed Services. Rani excels at transforming data into strategic insights, driving informed decision-making, and delivering innovative solutions. Her leadership is marked by a deep understanding of emerging technologies and a commitment to leveraging them for business growth. Rani's ability to align and deliver AI solutions with organizational outcomes has established her as a thought leader in the industry. Her passion for applying technology to solve tough business challenges and dedication to excellence continue to inspire her teams and help drive success for her clients in the rapidly evolving AI landscape.

// Related Links
Website: https://www.pwc.com/us/managedservices
https://www.pwc.com/us/en/tech-effect.html

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Rani on LinkedIn: /rani-radhakrishnan-163615

Timestamps:
[00:00] Getting to know Rani
[01:54] Managed services
[03:50] AI usage reflection
[06:21] IT operations and MLOps
[11:23] MLOps and agent deployment
[14:35] Startup challenges in managed services
[16:50] Lift vs practicality in ML
[23:45] Scaling in production
[27:13] Data labeling effectiveness
[29:40] Sustainability considerations
[37:00] Product engineer roles
[40:21] Wrap up

  40. 475

    GPU Uptime with VAST Data CTO

Andy Pernsteiner is the Field CTO at VAST Data, working on large-scale AI infrastructure, serverless compute near data, and the rollout of VAST's AI Operating System.

The GPU Uptime Battle // MLOps Podcast #346 with Andy Pernsteiner, Field CTO of VAST Data.

Huge thanks to VAST Data for supporting this episode!

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Most AI projects don't fail because of bad models; they fail because of bad data plumbing. Andy Pernsteiner joins the podcast to talk about what it actually takes to build production-grade AI systems that aren't held together by brittle ETL scripts and data copies. He unpacks why unifying data, rather than moving it, is key to real-time, secure inference, and how event-driven, Kubernetes-native pipelines are reshaping the way developers build AI applications. It's a conversation about cutting out the complexity, keeping data live, and building systems smart enough to keep up with your models.

// Bio
Andy is the Field Chief Technology Officer at VAST, helping customers build, deploy, and scale some of the world's largest and most demanding computing environments. Andy has spent the past 15 years focused on supporting and building large-scale, high-performance data platform solutions. From humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical ninjas at MapR, he has consistently been on the front lines solving some of the toughest challenges customers face when implementing big data analytics and next-generation AI solutions.

// Related Links
Website: www.vastdata.com
https://www.youtube.com/watch?v=HYIEgFyHaxk
https://www.youtube.com/watch?v=RyDHIMniLro
The Mom Test by Rob Fitzpatrick: https://www.momtestbook.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Andy on LinkedIn: /andypernsteiner

Timestamps:
[00:00] Prototype to production gap
[00:21] AI expectations vs reality
[03:00] Prototype vs production costs
[07:47] Technical debt awareness
[10:13] The Mom Test
[15:40] Chaos engineering
[22:25] Data messiness reflection
[26:50] Small data value
[30:53] Platform engineer mindset shift
[34:26] Gradient description comparison
[38:12] Empathy in MLOps
[45:48] Empathy in engineering
[51:04] GPU clusters rolling updates
[1:03:14] Checkpointing strategy comparison
[1:09:44] Predictive vs Generative AI
[1:17:51] On growth, community, and new directions
[1:24:21] UX of agents
[1:32:05] Wrap up

  41. 474

    The Evolution of AI in Cyber Security // Jeff Schwartzentruber // #344

Dr. Jeff Schwartzentruber is a Senior Machine Learning Scientist at eSentire, working on anomaly detection pipelines and the use of large language models to enhance cybersecurity operations.

The Evolution of AI in Cyber Security // MLOps Podcast #344 with Jeff Schwartzentruber, Staff Machine Learning Scientist at eSentire.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Modern cyber operations can feel opaque. This talk explains, step by step, what a security operations center (SOC) actually does, how telemetry flows in from networks, endpoints, and cloud apps, and what an investigation can credibly reveal about attacker behavior, exposure, and control gaps. We then trace how AI has shown up in the SOC: from rules and classic machine learning for detection to natural-language tools that summarize alerts and turn questions like "show failed logins from new countries in the last 24 hours" into fast database queries. The core of the talk is our next step: agentic investigations. These GenAI agents plan their work, run queries across tools, cite evidence, and draft analyst-grade findings, with guardrails and a human in the loop. We close with what's next: risk-aware auto-remediation, verifiable knowledge sources, and a practical checklist for adopting these capabilities safely.

// Bio
Dr. Jeff Schwartzentruber holds the position of Sr. Machine Learning Scientist at eSentire, a Canadian cybersecurity company specializing in Managed Detection and Response (MDR). Dr. Schwartzentruber's primary academic and industry research has been concentrated on solving problems at the intersection of cybersecurity and machine learning (ML). Over his 10+ year career, Dr. Schwartzentruber has been involved in applying ML for threat detection and security analytics for several large Canadian financial institutions, public sector organizations (federal), and SMEs. In addition to his private sector work, Dr. Schwartzentruber is also an Adjunct Faculty at Dalhousie University in the Department of Computer Science, a Special Graduate Faculty member with the School of Computer Science at the University of Guelph, and a Sr. Advisor on AI at the Rogers Cyber Secure Catalysts.

// Related Links
Website: https://www.esentire.com/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jeff on LinkedIn: /jeff-schwartzentruber/

  42. 473

    Thousands of Fine-Tuned Models

Jaipal Singh Goud is the CTO at Prem AI, working on model customization and privacy-preserving compute. This episode was recorded at the Plan B studios in Lugano, Switzerland. For more information, visit https://pow.space/

How do fine-tuned models and RAG systems power personalized AI agents that learn, collaborate, and transform enterprise workflows? What kind of technical challenges do we need to examine first before this becomes real?

Host: Demetrios Brinkmann - Founder of MLOps Community

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/

  43. 472

    The Semantic Layer and AI Agents // David Jayatillake // #343

The Semantic Layer and AI Agents // MLOps Podcast #343 with David Jayatillake, VP of AI at Cube.dev.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
David Jayatillake argues that the real battle in data isn't about AI; it's about who controls the semantics. In this episode, he calls out how proprietary BI tools quietly lock companies into their ecosystems, making data less open and less useful. David and Demetrios debate whether semantic layers should live in open-source hands and how AI agents might soon replace entire chunks of manual data engineering. From feature stores to LLM-driven analytics, this conversation challenges how we think about ownership, access, and the future of data workflows.

// Bio
Experienced and world-renowned data, technology, and AI leader. Expert in the application of LLMs to the semantic layer. Writes at davidsj.substack.com about data, leadership, architecture, venture capital, and artificial intelligence. Two-time co-founder in the data space. Founded Delphi Labs, which focused on applying LLMs to semantic layers to enable data democratization. Regular data conference, podcast, panel, and webinar speaker.

// Related Links
Website: davidsj.substack.com

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with David on LinkedIn: /david-jayatillake/

  44. 471

    Building Claude Code: Origin, Story, Product Iterations, & What's Next // Siddharth Bidasaria // #342

Building Claude Code: Origin Story, Product Iterations, & What's Next // MLOps Podcast #342 with Siddharth Bidasaria, Member of Technical Staff at Anthropic.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Demetrios Brinkmann talks with Siddharth Bidasaria about Anthropic's Claude Code: how it was built, key features like file tools and Spotify control, and the team's lean, user-focused approach. They explore testing, subagents, and the future of agentic coding, plus how users are pushing its limits.

// Bio
Software engineer. Founding team of Claude Code. Ex-Robinhood and Rubrik.

// Related Links
Bio: https://sidb.io/
Sid's blog: https://sidb.io/posts/
I Let An AI Play Pokémon! - Claude Plays Pokémon creator: https://youtu.be/nRHeGJwVP18
How Data Platforms Affect ML & AI // Jake Watson // MLOps Podcast #207: https://youtu.be/xWApMuyct_4
The Agent Landscape - Lessons Learned Putting Agents Into Production: https://youtu.be/lRGldru7ohU

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Siddharth on LinkedIn: /siddharthbidasaria/

Timestamps:
[00:00] MCP servers usage creativity
[00:34] Claude Code origin story
[05:17] R&D freedom and tools
[09:08] Model potential discovery
[12:06] Model adaptation strategies
[19:13] Steerability vs pattern alignment
[22:09] Features to delete
[24:12] Moore's law in LLMs
[32:42] Power user surprises
[35:56] Sub-agent evolution insights
[39:54] Agent communication governance
[45:26] At-scale agent coordination
[49:56] Wrap up

  45. 470

    Building an Agentic AI Memory Framework

What if AI could actually remember like humans do?

Biswaroop Bhattacharjee joins Demetrios Brinkmann to challenge how we think about memory in AI. From building Cortex, a system inspired by human cognition, to exploring whether AI should forget, this conversation questions the limits of agentic memory and how far we should go in mimicking the mind.

Guest speaker: Biswaroop Bhattacharjee - Senior ML Engineer at Prem AI
Host: Demetrios Brinkmann - Founder of MLOps Community

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/

#podcast #aiinfrastructure #aiagents #memory

  46. 469

    LLMs at Scale: Infrastructure That Keeps AI Safe, Smart & Affordable // Marco Palladino// # 341

LLMs at Scale: Infrastructure That Keeps AI Safe, Smart & Affordable // MLOps Podcast #341 with Marco Palladino, Kong's Co-Founder and CTO.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
While conversations around AI regulations continue to evolve, the responsibility for AI continues to rest with developers. In this episode, Marco Palladino, CTO and co-founder of Kong Inc., explores what it means to build and scale AI responsibly when the rulebook is still being written. He explains that infrastructure should be the frontline defense for enforcing governance, security, and reliability in AI deployments. Marco shares how Kong's technologies, including AI Gateway and AI Manager, help organizations rein in shadow AI, reduce LLM hallucinations, improve observability, and act as the foundation for agentic workflows.

// Bio
Marco Palladino is an inventor, software developer, and internet entrepreneur. As the CTO and co-founder of Kong, he is Kong's co-author, responsible for the design and delivery of the company's products, while also providing technical thought leadership around APIs and microservices within both Kong and the external software community. Prior to Kong, Marco co-founded Mashape in 2010, which became the largest API marketplace and was acquired by RapidAPI in 2017.

// Related Links
Website: https://konghq.com/
https://www.youtube.com/watch?v=odpPVeQZjHU
https://www.thestack.technology/the-big-interview-kong-cto-marco-palladino/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Marco on LinkedIn: /marcopalladino/

Timestamps:
[00:00] Agent-mediated interactions shift
[01:17] Kong connectivity and agents
[04:36] Transcript cleanup request
[08:11] MCP server use cases
[12:37] Agent world possibilities
[15:55] Business communication evolution
[18:55] System optimization
[25:36] AI gateway patterns
[31:30] Investment decision making
[35:54] Building conviction process
[41:34] Polished customer conversation
[46:37] AI gateway R&D future
[50:52] Wrap up

  47. 468

    Best AI Hackathon Project Ever? [Bite Size Episode]

AI Conversations Powered by Prosus Group

Unicorn Mafia won the recent hackathon at Raise Summit and explained to me what they built, including all the tech they used under the hood to make their AI agents work.

Winners:
Charlie Cheesman - Co-founder at 60x.ai
Marissa Liu - Tech Lead, Reporting at Watershed
Ana Shevchenko - Software Engineer II at Spotify
Fergus McKenzie-Wilson - Co-founder at 60x.ai
Alex Choi - Founding Engineer at Medfin

Host: Demetrios Brinkmann - Founder of MLOps Community

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/

  48. 467

    On-Device AI Agents in Production: Privacy, Performance, and Scale // Varun Khare & Neeraj Poddar // #340

On-Device AI Agents in Production: Privacy, Performance, and Scale // MLOps Podcast #340 with NimbleEdge's Varun Khare, Founder/CEO, and Neeraj Poddar, Co-founder & CTO.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
AI agents are transitioning from experimental stages to performing real work in production; however, they have largely been limited to backend task automation. A critical frontier in this evolution is the on-device AI agent, enabling sophisticated, AI-native experiences directly on mobile and embedded devices. While cloud-based AI faces challenges like constant connectivity demands, increased latency, privacy risks, and high operational costs, on-device AI breaks through these trade-offs. We delve into the practical side of building and deploying AI agents with DeliteAI, an open-source on-device agentic AI framework. We explore how lightweight Python runtimes facilitate the seamless orchestration of end-to-end workflows directly on devices, allowing AI/ML teams to define data preprocessing, feature computation, model execution, and post-processing logic independently of frontend code. This architecture empowers agents to adapt to varying tasks and user contexts through an ecosystem of tools natively supported on Android and iOS, handling permissions, model lifecycles, and more.

// Bio
Varun Khare
Varun is the Founder and CEO of NimbleEdge, an AI startup pioneering privacy-first, on-device intelligence. With an academic foundation in AI and neuroscience from UC Berkeley, MPI Frankfurt, and IIT Kanpur, Varun brings deep expertise at the intersection of technology and science. Before founding NimbleEdge, Varun led open-source projects at OpenMined, focusing on privacy-aware AI, and published research in computer vision.

Neeraj Poddar
Neeraj Poddar is the Co-founder and CTO at NimbleEdge. Prior to NimbleEdge, he was the Co-founder of Aspen Mesh, VP of Engineering at Solo.io, and led the Istio open source community. He has worked on various aspects of AI, networking, security, and distributed systems over the span of his career. Neeraj focuses on the application of open source technologies across different industries in terms of scalability and security. When not working on AI, you can find him playing racquetball and gaining back the calories spent playing by trying out new restaurants.

// Related Links
Website: https://www.nimbleedge.com/
https://www.nimbleedge.com/blog/why-ai-is-not-working-for-you
https://www.nimbleedge.com/blog/state-of-on-device-ai
https://www.youtube.com/watch?v=Qqj_Nl2MihE
https://www.linkedin.com/events/7343237917982527488/comments/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Varun on LinkedIn: /vkkhare/
Connect with Neeraj on LinkedIn: /nrjpoddar/

Timestamps:
[00:00] On-device AI skepticism
[02:47] Word suggestion for AI
[06:40] Optimizing unique challenges
[13:39] LLM on-device challenges
[20:34] Agent overlord tension
[23:56] AI app constraints
[29:23] Siri limitations and trust gap
[32:01] Voice-driven app privacy
[35:49] Platform lock-in vs aggregation
[42:26] On-device AI optimizations
[45:38] Wrap up

  49. 466

    Are Evals Dead?

AI Conversations Powered by Prosus Group

Your AI agent isn't failing because it's dumb; it's failing because you refuse to test it. Chiara Caratelli cuts through the hype to show why evaluations, not bigger models or fancier prompts, decide whether agents succeed in the real world. If you're not stress-testing, simulating, and iterating on failures, you're not building AI, you're shipping experiments disguised as products.

Guest speaker: Chiara Caratelli - Data Scientist at Prosus Group
Host: Demetrios Brinkmann - Founder of MLOps Community

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/

  50. 465

    The DuckLake Lakehouse Format // Hannes Mühleisen // #339

The DuckLake Lakehouse Format // MLOps Podcast #339 with Hannes Mühleisen, Co-founder and CEO of DuckDB Labs.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Managing data on object stores has been a painful affair. Users had to choose between data-swamp chaos or a maze of metadata files with catalog servers on top. DuckLake is a new paradigm for managing data on object stores: first, it uses classical SQL data management systems to manage metadata. Second, actual data is stored in Parquet files on pretty arbitrary storage. Third, query processing happens client-side, or anywhere really. DuckDB is the first system to integrate with DuckLake, using an extension of the same name. Conceptually, DuckLake enables central control over truth while decentralizing compute and storage entirely. DuckLake turns data warehouse architecture upside down by departing from the integrated metadata/compute layer towards a fully disconnected operation with only centralized metadata. For the first time, DuckLake allows a "multi-player" experience with DuckDB, where computation stays fully local but transactional control is centralized.

// Bio
Hannes Mühleisen is a creator of the DuckDB database management system and Co-founder and CEO of DuckDB Labs. He is a senior researcher at the Centrum Wiskunde & Informatica (CWI) in Amsterdam. He is also Professor of Data Engineering at Radboud University Nijmegen.

// Related Links
Website: https://hannes.muehleisen.org
Unleashing Unconstrained News Knowledge Graphs to Combat Misinformation // Robert Caulk // #279: https://youtu.be/pF8zTI867EI

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Hannes on LinkedIn: /hfmuehleisen

Timestamps:
[00:00] Spooky ease in tech
[00:29] DuckDB and DuckLake
[07:50] Pain vs trust factors
[13:12] Prioritizing project features
[16:16] Platform growth tension
[22:06] Building principles
[25:26] OSS vs system reliability
[30:27] Creative uses of DuckDB
[35:35] Tecton product strategy
[43:30] Mindset shift
[52:25] DuckDB future shifts
[55:37] Wrap up



ABOUT THIS SHOW

Relaxed Conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, Vibes, etc)

HOSTED BY

Demetrios
