
AF - $20 Million in NSF Grants for Safety Research by Dan H

<a href="https://www.alignmentforum.org/posts/jwe6jpubuMiuSRqff/usd20-million-in-nsf-grants-for-safety-research">Link to original article</a><br/><br/>Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $20 Million in NSF Grants for Safety Research, published by Dan H on February 28, 2023 on The AI Alignment Forum. After a year of negotiation, the NSF has announced a $20 million request for proposals for empirical AI safety research. Here is the detailed program description. The request for proposals is broad, as is common for NSF RfPs. Many safety avenues, such as transparency and anomaly detection, are in scope: "reverse-engineering, inspecting, and interpreting the internal logic of learned models to identify unexpected behavior that could not be found by black-box testing alone" and "Safety also requires... methods for monitoring for unexpected environmental hazards or anomalous system behaviors, including during deployment." Note that research that has high capabilities externalities is explicitly out of scope: "Proposals that increase safety primarily as a downstream effect of improving standard system performance metrics unrelated to safety (e.g., accuracy on standard tasks) are not in scope." Thanks to OpenPhil for funding a portion of the RfP; their support was essential to creating this opportunity! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.



First published

02/28/2023

Genres

education

Duration

84 minutes

Parent Podcast

The Nonlinear Library: Alignment Forum Weekly


Similar Episodes

  • AI alignment landscape by Paul Christiano

    11/19/2021

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

  • AMA: Paul Christiano, alignment researcher by Paul Christiano

    12/06/2021

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

  • Announcing the Alignment Research Center by Paul Christiano

    11/19/2021

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Alignment Research Center, published by Paul Christiano on the AI Alignment Forum. (Cross-post from ai-alignment.com) I’m now working full-time on the Alignment Research Center (ARC), a new non-profit focused on intent alignment research. I left OpenAI at the end of January and I’ve spent the last few months planning, doing some theoretical research, doing some logistical set-up, and taking time off. For now it’s just me, focusing on theoretical research. I’m currently feeling pretty optimistic about this work: I think there’s a good chance that it will yield big alignment improvements within the next few years, and a good chance that those improvements will be integrated into practice at leading ML labs. My current goal is to build a small team working productively on theory. I’m not yet sure how we’ll approach hiring, but if you’re potentially interested in joining you can fill out this tiny form to get notified when we’re ready. Over the medium term (and maybe starting quite soon) I also expect to implement and study techniques that emerge from theoretical work, to help ML labs adopt alignment techniques, and to work on alignment forecasting and strategy. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

  • What is the alternative to intent alignment called? Q by Richard Ngo

    11/17/2021

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.


Similar Podcasts

  • Vineyard Richmond Weekly Message

    08/12/2020

    Vineyard Community Church Richmond

    Each week, you can listen along to the most recent message from Vineyard Richmond.

  • Shepherd of the Valley Bible Church

    08/12/2020

    Tommy Moon

    Weekly messages from the pastor at Shepherd of the Valley Bible Church in Hood River, Oregon.

  • Muhammad West

    08/12/2020

    Muslim Central

    Muhammad West, FREE Audio Podcast brought to you by Muslim Central. Muslim Central is a private Audio Podcast Publisher. Our Audio Library consists of Islamic Lectures, Interviews, Debates and more, with over 100 Speakers and Shows from around the World.

  • Retro Blissed

    08/12/2020

    The Network

    Two gamers with a love for the history of video games play and discuss the best and worst titles in existence. Blow out your NES cartridge, check the batteries in your remote, and memorize your Game Genie codes… It’s time to get RETRO!

  • Grace Chicago Church

    08/12/2020

    Grace Chicago Church

    Sermons based on the weekly lectionary from a reformed church in the heart of Chicago.

  • Rock The Walls

    08/12/2020

    idobi Network

    Always on the frontlines, Rock The Walls is hosted by music fan and devoted radio host Patrick Walford. Over one thousand interviews are already in the can, from being the first ever radio interview for bands like I Prevail & The Story So Far, to speaking with heavy & alternative music legends such as The Used, Anthrax, Parkway Drive, Godsmack, Korn, Sum 41, Bring Me The Horizon, A Day To Remember, and hundreds more. After doing the show for over a decade, hosting Warped Radio, bringing you your idobi Music News, and Music Directing idobi Howl, along with hitting the road for coverage on the Warped at Sea Cruise in 2017 & the final Vans Warped Tour in 2018, Walford is a long-trusted voice in the music scene. Tune in to hear in-depth interviews you won't hear anywhere else with all your favorite heavy & alternative artists, along with spinning the best in new music.

  • Skaana with Mark Leiren-Young | Oceans, Eco-Ethics & The Environment

    08/12/2020

    Mark Leiren-Young | Oceans, orcas, eco-ethics and the environment.

    Mark Leiren-Young, author of Orcas Everywhere and director of The Hundred-Year-Old Whale, meets the humans who are fighting to save our oceans, orcas and environment. Find out how you can make waves. Join the Pod: https://www.patreon.com/mobydoll | Skaana home: https://www.skaana.org | Facebook: https://www.facebook.com/skaanapod/ | Twitter: @leirenyoung

  • Sutton Vineyard

    08/12/2020

    Sutton Vineyard Church

    Weekly podcasts from Sutton Vineyard Church UK

  • Bethesda Lutheran Brethren Church Podcast

    08/12/2020

    Bethesda LBC

    Weekly Sermons from Bethesda Lutheran Brethren Church

  • Current Federal Tax Developments

    08/12/2020

    Edward Zollars, CPA

    Weekly update on federal income tax developments

  • Lucky Words

    08/12/2020

    Jeffrey Windsor

    A weekly* email newsletter about literature, art, walking or riding or just sitting in the mountains or the desert of the American southwest, and poetry. luckywords.substack.com

  • Doctrine and Devotion

    08/12/2020

    Joe Thorn & Jimmy Fowler

    Doctrine and Devotion is a weekly podcast exploring Christian faith and practice from an experiential perspective marked by the fun and humor that characterize real friendship.


