EPISODE · Feb 16, 2026 · 55 MIN
=Coffee
from Hacked · host Hacked
A lot of modern AI models have a kind of security guard layer that sits in front of them. Its job? A binary choice as to whether the prompt heading into the model is safe or not. Kasimir Schulz, a lead security researcher at HiddenLayer, has been researching how to trick these models. Their solution, a technique called "Echogram" involves words with such positive statistical sentiment — such overwhelming good vibes — that it flips that verdict. Learn more about your ad choices. Visit podcastchoices.com/adchoices
NOW PLAYING
=Coffee
No transcript for this episode yet
Similar Episodes
May 7, 2026 ·1m
May 5, 2026 ·0m
May 3, 2026 ·0m
May 3, 2026 ·40m
May 1, 2026 ·64m
Apr 3, 2026 ·59m