EPISODE · Apr 27, 2026 · 31 MIN
“Fail safe(r) at alignment by channeling reward-hacking into a “spillway” motivation” by Anders Cairns Woodruff, Alex Mallen
It's plausible that flawed RL processes will select for misaligned AI motivations.[1] Some misaligned motivations are much more dangerous than others. So, developers should plausibly aim to control which kinds of misaligned motivations emerge in this case. In particular, we tentatively propose that developers should try to make the most likely generalization of reward hacking a bespoke bundle of benign reward-seeking traits, called a spillway motivation. We call this process spillway design. We think spillway design could have two major benefits:

- Spillway design might decrease the probability of worst-case outcomes like long-term power-seeking or emergent misalignment.
- Spillway design might allow developers to decrease reward hacking at inference time, via satiation. Crucially, this could improve the AI's usefulness for hard-to-verify tasks like AI safety and strategy.

Spillway design is related to inoculation prompting, but distinct and mutually compatible. Unlike inoculation prompting, spillway design tries to shape which reward-hacking motivations are salient going into RL, which might prevent dangerous generalization more robustly than inoculation prompting. I'll say more about this in the third section.

In this article I'll:

- Explain the concept of a spillway motivation
- Propose spillway design methods
- Compare spillway design to inoculation prompting
- Discuss some potential [...]

---

Outline:

(02:07) What is a spillway motivation
(02:21) The role of a spillway motivation
(04:47) What should the spillway motivation be?
(07:53) How a spillway motivation might make models safer
(12:16) Implementing spillway design
(15:26) Spillway design might work when inoculation prompting doesn't
(17:53) The drawbacks of spillway design
(20:31) Conclusion
(21:45) Appendix A: Other traits of the spillway motivation
(22:29) Appendix B: Other training interventions to increase safety
(24:09) Appendix C: Proposed amendment to an AI's model spec
(30:14) Appendix D: Proposed inference-time prompt

The original text contained 5 footnotes which were omitted from this narration.

---

First published: April 27th, 2026

Source: https://www.lesswrong.com/posts/rABTMovhz4miHiAyk/fail-safe-r-at-alignment-by-channeling-reward-hacking-into-a

---

Narrated by TYPE III AUDIO.