Securing AI Agents with Niall Merrigan
AI Agents can be powerful tools for an organization - but are they a security risk? Richard talks to Niall Merrigan about his experiences dealing with the various ways that LLMs can be attacked, starting with prompt injection. While some attacks are humorous, others can be very serious, especially in the context of agents, where the right prompt can cause an agent to use its capabilities to access or affect data outside its expected behavior. This has already led to several well-publicized CVEs, including the ServiceNow Privilege Escalation advisory. New tools have emerged to help restrict prompts and keep agents on task - but as with all things security, this is another set of tools you need to get familiar with!
Episode 1031 of the RunAs Radio podcast, hosted by Niall Merrigan, Richard Campbell, titled "Securing AI Agents with Niall Merrigan" was published on April 8, 2026 and runs 37 minutes.
April 8, 2026 ·37m · RunAs Radio
Summary
AI Agents can be powerful tools for an organization - but are they a security risk? Richard talks to Niall Merrigan about his experiences dealing with the various ways that LLMs can be attacked, starting with prompt injection. While some attacks are humorous, others can be very serious, especially in the context of agents, where the right prompt can cause an agent to use its capabilities to access or affect data outside its expected behavior. This has already led to several well-publicized CVEs, including the ServiceNow Privilege Escalation advisory. New tools have emerged to help restrict prompts and keep agents on task - but as with all things security, this is another set of tools you need to get familiar with!
Episode Description
AI Agents can be powerful tools for an organization - but are they a security risk? Richard talks to Niall Merrigan about his experiences dealing with the various ways that LLMs can be attacked, starting with prompt injection. While some attacks are humorous, others can be very serious, especially in the context of agents, where the right prompt can cause an agent to use its capabilities to access or affect data outside its expected behavior. This has already led to several well-publicized CVEs, including the ServiceNow Privilege Escalation advisory. New tools have emerged to help restrict prompts and keep agents on task - but as with all things security, this is another set of tools you need to get familiar with!
Links
- AI Recommendation Poisoning
- Detecting Prompt Injection Attacks
- Mark Russinovich Crescendo Multi-Turn LLM Jailbreak Attack
- Cross-Site Scripting (XSS)
- Cameron Mattis LinkedIn
- Privilege Escalation in ServiceNow AI Platform
- Azure AI Content Safety Prompt Shields
- Task Adherence
- Simon Willison's Lethal Trifecta
- Microsoft Agent 365
- PyRIT
- OWASP Securing Agentic Applications Guide
Recorded February 16, 2026
Similar Episodes
Apr 13, 2026 ·33m
Mar 16, 2026 ·33m
Mar 2, 2026 ·37m
Feb 16, 2026 ·29m