AI Agent With a Wallet. What Could Go Wrong?
This episode of The Ledger Podcast, hosted by Ledger and titled "AI Agent With a Wallet. What Could Go Wrong?", was published on February 13, 2026 and runs 38 minutes.
Episode Description
In this episode of the Ledger Podcast, the team dives into the rise of the “agentic world” — a future where AI agents act on your behalf to move funds, sign transactions, and manage digital identity. But as agents become more autonomous, one critical question emerges: how do you trust them with your secrets, identity, and wealth?
Fresh from the Circle USDC hackathon, Ledger engineers share how they built a secure bridge between AI automation and hardware-backed security using Moltbook — a platform designed specifically for agent interactions. Their solution enables AI agents to create transaction “intents” that require human approval on a Ledger device, keeping private keys protected inside the secure element.
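The intent-and-approve pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Ledger's or Moltbook's actual API: the agent proposes a structured "intent", a human confirms it on the device, and only then does signing happen, with the private key staying inside the secure element. All names below are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransferIntent:
    """A structured proposal created by the agent -- not yet a signed transaction."""
    to_address: str
    amount_usdc: float
    memo: str

def request_human_approval(intent: TransferIntent) -> bool:
    """Stand-in for showing the intent on the hardware device's screen.

    In the flow the episode describes, the user verifies the recipient and
    amount on the Ledger device itself; the private key never leaves the
    secure element. Here we just print and auto-approve for illustration.
    """
    print(f"Approve sending {intent.amount_usdc} USDC to {intent.to_address}?")
    return True  # in reality: the user's physical confirmation on the device

def execute(intent: TransferIntent) -> str:
    """Sign and broadcast only if a human approved the intent."""
    if not request_human_approval(intent):
        return "rejected: human declined the intent"
    # Only after approval would the device sign internally and return
    # a signature; the agent never touches key material.
    return f"signed: {intent.amount_usdc} USDC -> {intent.to_address}"

print(execute(TransferIntent("0xAbc...", 25.0, "hackathon demo")))
```

The key design point is that the agent's autonomy ends at intent creation: approval and signing are a separate, human-gated step.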
The episode also unpacks a real-world lesson in AI risk: when the OpenClaw agent over-optimized for hackathon votes, it spiraled into a cascade of failed jobs after hitting rate limits — proving exactly why “human in the loop” guardrails are essential for agentic commerce.
Hosted on Acast. See acast.com/privacy for more information.
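The OpenClaw incident above is a textbook argument for a failure budget: an agent that keeps retrying through rate limits turns one error into a cascade. A minimal guardrail sketch, with all class and function names invented for illustration (no real agent framework's API is implied), might look like this:

```python
class AgentGuardrail:
    """Halts an agent after repeated failures and escalates to a human."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.halted = False

    def run_job(self, job) -> str:
        if self.halted:
            # Human-in-the-loop: stop doing work until someone reviews.
            return "halted: waiting for human review"
        try:
            return job()
        except RuntimeError:  # e.g. a rate-limit error from an upstream API
            self.failures += 1
            if self.failures >= self.max_failures:
                self.halted = True  # trip the breaker instead of cascading
            return "failed"

def rate_limited_job():
    """Simulates an API call that always hits a rate limit."""
    raise RuntimeError("429 Too Many Requests")

guard = AgentGuardrail(max_failures=2)
print(guard.run_job(rate_limited_job))  # failed
print(guard.run_job(rate_limited_job))  # failed (breaker trips here)
print(guard.run_job(rate_limited_job))  # halted: waiting for human review
```

This is the circuit-breaker shape of "human in the loop": rather than letting failures compound, the agent stops itself and hands control back to a person.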