EPISODE · Mar 18, 2026 · 8 MIN
Open healthcare robotics dataset drop & NVIDIA’s push for agentic infrastructure - AI News (Mar 18, 2026)
from The Automated Daily - AI News Edition · host TrendTeller
Please support this podcast by checking out our sponsors:
- Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad
- Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron
- KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad

Support The Automated Daily directly - Buy me a coffee: https://buymeacoffee.com/theautomateddaily

Today's topics:
- Open healthcare robotics dataset drop - Researchers released Open-H-Embodiment, a healthcare robotics dataset with 778 hours of synchronized surgical and clinical data, plus open foundation models for vision-language-action learning and simulation. Keywords: healthcare robotics, surgical AI, dataset, GR00T-H, Cosmos-H.
- NVIDIA's push for agentic infrastructure - At GTC, NVIDIA signaled its next phase: scaling agentic AI with new platforms and production inference software like Dynamo, while also reframing gaming tech like DLSS as a template for enterprise AI. Keywords: NVIDIA GTC, Dynamo, inference, DLSS, agentic AI.
- Safer autonomous agents via sandboxing - NVIDIA OpenShell and OnPrem.LLM examples both emphasize a practical rule for agents: powerful tools need strict containment, policy control, and auditability to reduce exfiltration and runaway actions. Keywords: sandbox, agent runtime, tool access, policies, security.
- OpenAI enterprise deals and compute - OpenAI is reportedly courting private equity for a joint venture to accelerate enterprise rollouts, while also expanding data-center capacity and diversifying chips to secure power and scale. Keywords: OpenAI, private equity, enterprise AI, data centers, chips.
- Open-source models and formal proof coding - Mistral's releases highlight two directions at once: a stronger open general model for multimodal and coding work, and Leanstral for verifiable coding with Lean proofs to reduce human review. Keywords: Mistral Small 4, open-source, Leanstral, formal methods, coding agent.
- Subagents and security-first coding - Codex added generally available subagents for parallel coding workflows, and OpenAI also outlined why its security tooling avoids anchoring on SAST reports, favoring behavior- and evidence-driven validation. Keywords: Codex, subagents, AppSec, fuzzing, invariants.
- A blueprint for autonomous learning - A new paper by LeCun, Malik, and Dupoux argues today's AI lacks true autonomous learning, proposing a split between learning by observation and learning by action with meta-control to switch modes. Keywords: autonomous learning, System A, System B, exploration, cognitive architecture.
- Backlash and power politics in AI - From AI refusal movements to nuclear-style analogies about control, multiple voices are debating who should steer frontier AI - private labs, governments, or democratic oversight - amid rising social costs and geopolitics. Keywords: AI backlash, governance, military, private labs, legitimacy.

Stories in this episode:
- Metronome Signup Page Blocks Sandbox Creation With Browser Verification Warning
- OnPrem.LLM Demonstrates AgentExecutor for Tool-Using Agents with Sandbox and Custom Tools
- Researchers Propose Cognitive-Inspired Architecture for More Autonomous AI Learning
- Author Thomas Dekeyser ties today's AI backlash to a long history of refusing harmful machines
- OpenAI in talks with TPG and other buyout firms on enterprise AI joint venture
- Benchmark Claims MCP Server Architecture Drives Large Gaps in AI Task Accuracy
- AI Agents Improved Recall by Restructuring Memory to Capture Decision 'Why'
- Nvidia Introduces DLSS 5, Combining Generative AI and 3D Data for More Realistic Graphics
- a16z Warns AI Control Is Becoming a National-Security 'Oppenheimer Moment'
- NVIDIA open-sources OpenShell, a policy-controlled sandbox runtime for AI agents
- Dynatrace report calls for stronger observability in GenAI and agentic AI workloads
- Former Intel AI Chief Sachin Katti Leads OpenAI's Massive Data-Center Expansion
- Mistral launches open-source Mistral Small 4, unifying reasoning, multimodal, and coding in one model
- Anthropic Employee Shares How Work and Roles Shifted in a Year at an AI Lab
- Alibaba Creates 'Token Hub' Unit to Centralize AI and Push Enterprise Monetization
- OpenAI Codex Subagents Reach General Availability, Adding Custom Multi-Agent Workflows
- NVIDIA Releases Dynamo 1.0 for Production Multi-Node AI Inference
- OpenAI Says Codex Security Skips SAST Reports to Focus on Behavior and Validation
- NVIDIA GTC 2026: Vera Rubin, agentic AI platforms, and expanded partnerships across industry, robotics and automotive
- Mistral open-sources Leanstral, a Lean 4 agent for proof-verified code
- Mistral AI Unveils Forge for Training Enterprise AI Models on Proprietary Data
- Open-H-Embodiment Launches as First Open Dataset for Healthcare Robotics, With New Surgical Foundation Models

Episode Transcript

Open healthcare robotics dataset drop

Let's start with that healthcare robotics release. A large research collaboration led by Johns Hopkins, TUM, and NVIDIA published Open-H-Embodiment, described as the first open dataset built specifically for healthcare robotics. The headline is scale and realism: hundreds of hours spanning surgical robotics, ultrasound, and colonoscopy autonomy, pulled from simulation, benchtop tasks, and real clinical procedures. They also released two open models trained on it - one aimed at vision-language-action surgical behavior, and another that can generate plausible surgical video conditioned on robot motion.

Why this matters: healthcare robotics has been bottlenecked by closed data and narrow demonstrations. Open, cross-platform training data is how you get from one-off demos to systems that generalize - and can be tested and audited by more than a single vendor.

NVIDIA's push for agentic infrastructure

Staying in the NVIDIA orbit, GTC this year reinforced a clear theme: the company wants to be the operating layer for "agentic AI" at scale.
Beyond the big platform talk, one practical piece stands out - NVIDIA Dynamo 1.0, positioned as production-ready distributed inference for running large models across multiple GPU nodes without turning latency into a disaster. The message is that multi-node inference, multimodal workloads, and agent-style traffic patterns are no longer edge cases. If you're building real products, the hard part is serving, caching, routing, and recovering gracefully when something breaks - so tooling here can be as strategic as the models themselves.

And while NVIDIA is happy to talk data centers, it also used gaming as a preview of the broader shift. DLSS 5 was pitched as blending the predictable structure of traditional graphics with generative AI that fills in detail, so you get realism without rendering everything the old-fashioned way. The interesting angle isn't just prettier games. It's the pattern: combine structured, trustworthy signals with generative systems to reduce compute while keeping control. In enterprise settings, that looks like agents that ground their work in databases and logs, not just vibes, then use an LLM to stitch together insight and action.

Safer autonomous agents via sandboxing

Now to the question everyone asks the moment you say "agents": how do you keep them from doing something reckless? Two separate updates this week point to the same answer - containment by default. First, NVIDIA published OpenShell, an open-source runtime for running autonomous agents inside locked-down sandboxes with explicit policies over files, processes, credentials, and outbound network access. The key idea is governance you can actually enforce: what the agent can touch, where it can send data, and how secrets get injected without being sprayed into a filesystem.
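To make that containment-by-default idea concrete, here is a minimal sketch in Python. To be clear, this is not OpenShell's actual API - the SandboxPolicy class, its method names, and the example paths and hosts are all hypothetical - it only illustrates the general pattern: confine file access to one working directory and deny outbound traffic unless the destination is explicitly allow-listed.

```python
from pathlib import Path

class SandboxPolicy:
    """Hypothetical containment policy for agent tool calls:
    file access is confined to a working directory, and outbound
    network access is deny-by-default with an explicit allow-list."""

    def __init__(self, workdir, allowed_hosts=()):
        self.workdir = Path(workdir).resolve()
        self.allowed_hosts = set(allowed_hosts)

    def check_path(self, candidate: str) -> bool:
        # Resolve symlinks and ".." so the agent cannot escape
        # the working directory via path traversal.
        resolved = (self.workdir / candidate).resolve()
        return resolved == self.workdir or self.workdir in resolved.parents

    def check_host(self, host: str) -> bool:
        # Deny-by-default: anything not explicitly allowed is blocked.
        return host in self.allowed_hosts


policy = SandboxPolicy("/tmp/agent-run", allowed_hosts={"api.example.com"})
policy.check_path("notes/plan.md")      # inside the workdir: allowed
policy.check_path("../../etc/passwd")   # traversal attempt: denied
policy.check_host("exfil.example.net")  # not on the allow-list: denied
```

A real runtime enforces this at the OS or container boundary rather than in application code, but the shape of the policy - resolve first, then compare against an explicit allow-list - is the part both OpenShell and the OnPrem.LLM example seem to be emphasizing.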
Second, the OnPrem.LLM project shared a fresh example notebook for tool-using agents that leans hard on safety controls: restrict agents to a working directory, optionally disable shell access, or run inside an ephemeral container. The takeaway across both: agent capability is easy to add; safe agent capability is a systems problem - policies, isolation, and repeatability.

Subagents and security-first coding

From agent runtimes to agent workflows: OpenAI made "subagents" generally available in Codex. If you've used modern coding assistants, you've felt the shift - one assistant isn't one worker anymore. You spin up a small team: one agent reproduces a bug, another traces code paths, a third drafts the fix. Why it matters is less about novelty and more about expectation: developers are starting to design work in parallelizable chunks, and tooling is rapidly standardizing around orchestrating multiple LLM roles instead of one monolithic chat.

OpenAI also shared a security perspective that's worth highlighting: Codex Security reportedly doesn't start from a SAST report, even though SAST remains useful. The argument is that many serious bugs aren't obvious "tainted data goes to dangerous sink" stories - they're broken assumptions about behavior, order of operations, or invariants that look fine until you try to falsify them. So the approach is closer to: understand intent, probe the boundaries, generate evidence, and validate in a sandbox. That's a meaningful shift in tone for AI-assisted AppSec - less checkbox scanning, more adversarial verification.

Open-source models and formal proof coding

On the model front, Mistral announced Mistral Small 4 as open source under Apache 2.0, aiming to unify instruction-following, deeper reasoning, multimodal understanding, and agentic coding in one system.
The broader significance: the "default" open model is getting more capable across the tasks people actually deploy - docs, code, images, long context - so open ecosystems can compete on product quality, not just ideology.

Mistral also released Leanstral, a coding agent tailored to the Lean proof assistant. This is part of a bigger movement: using formal verification as the backstop when code correctness really matters. Instead of debating whether an LLM is trustworthy, you push it into a setting where proofs can be checked mechanically. That doesn't solve every problem, but it's one of the cleanest answers we have to the reliability question in high-stakes software.

A blueprint for autonomous learning

A very different kind of blueprint came from academia. An arXiv paper by Emmanuel Dupoux, Yann LeCun, and Jitendra Malik argues that current AI still falls short of "autonomous learning" - the ability to keep learning flexibly from the world, not just from a training run. Their proposed framing separates learning from observation and learning through action, with a meta-controller that decides which mode to emphasize based on context and goals. Why it's interesting: it's a reminder that today's LLM progress is enormous, but it's not the end of the story. If AI is going to thrive in dynamic, messy environments, we'll need systems that update themselves safely over time - without constant human retraining cycles.

OpenAI enterprise deals and compute

Now, the business and power layer - because technology doesn't deploy itself. Reuters reports OpenAI is in advanced talks with private equity firms about a joint venture to distribute enterprise AI across portfolio companies, potentially at a multibillion-dollar valuation. The angle here is distribution and governance. Private equity controls a lot of operational reality across industries, so a JV could fast-track adoption - and also shape how aggressively AI gets inserted into workflows.
In parallel, OpenAI is also pushing to secure massive data-center capacity, led in part by infrastructure executive Sachin Katti. The story there is constraint: power availability, chip supply, local opposition, and build timelines are becoming the rate limit for frontier AI. If models are the "software," compute is the new industrial base - and the winners may simply be the ones who can reliably buy, site, and power the machines.

Backlash and power politics in AI

Finally, two pieces that capture the mood around AI: one sociological, one political. In an interview, human geographer Thomas Dekeyser frames AI backlash as part of a long tradition of technology refusal, often rooted in rational concerns rather than knee-jerk anti-progress sentiment. He connects resistance to issues people feel directly: job loss, surveillance, environmental costs, and the sense that benefits accrue to a narrow elite. Whether you agree or not, it's a useful lens: social legitimacy is becoming a core dependency for AI infrastructure.

And from the venture world, Andreessen Horowitz partner Erik Torenberg argued that advanced AI is approaching a nuclear-weapon-like inflection point - less about whether it will exist, more about who controls it, especially as governments seek military access. You don't have to buy the analogy to see why it resonates: the governance question is moving from abstract ethics to concrete power, contracts, and state capability.
Visit our website at https://theautomateddaily.com/ - Send feedback to [email protected]