AF - Difficulties in making powerful aligned AI by DanielFilan
<a href="https://www.alignmentforum.org/posts/nwjtqoox7JAcvMynx/difficulties-in-making-powerful-aligned-ai">Link to original article</a><br/><br/>Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Difficulties in making powerful aligned AI, published by DanielFilan on May 14, 2023 on The AI Alignment Forum. Here’s my breakdown of the difficulties involved in ensuring powerful AI makes our lives radically better, rather than taking over the world, as well as some reasons why I think they’re hard. Here are things it’s not: It’s not primarily a justification of why very powerful AI is possible or scary (altho it briefly discusses why very powerful AI would be scary). It’s not primarily a list of underlying factors that cause these difficulties (altho it does include and gesture to some of those). It’s not at all original - basically everything here has been said many times before, plausibly more eloquently. That said, it is my attempt to group the problems in my own words, in a configuration that I haven’t seen before, with enough high-level motivation that one can hopefully tell the extent to which advances in the state of the art address them. 1. What sort of thinking do we want? The first difficulty: we don’t have a sense of what sort of thinking we would want AI systems to use, in sufficient detail that one could (for instance) write python code to execute it. Of course, some of the difficulty here is that we don’t know how smart machines think, but we can give ourselves access to subroutines like “do perfect Bayesian inference on a specified prior and likelihood” or “take a function from vectors to real numbers and find the vector that minimizes the function” and still not solve the problem. To illustrate: Take a hard-coded goal predicate, consider a bunch of plans you could take, and execute the plan that best achieves the goal? Unfortunately, the vast majority of goals you could think of writing down in an executable way will incentivize behaviour like gaining control over sources of usable energy (so that you definitely have enough to achieve your goal, and to double- and triple-check that you’ve really achieved it) and stopping other agents from being able to meddle with your plans (because if they could, maybe they’d stop you from achieving your goal). Do things that maximize the number of thumbs up you get from humans?1 Best plan: take control of the humans, force them to give you a thumbs up, or trick them into doing so. Presumably this is possible if you’re much smarter than humans, and it’s more reliable than doing good things - some people might not see why your good thing is actually good if left to their own devices. Look at humans, figure out what they want based on what they’re doing, and do whatever that is? Main problem: people don’t do the literally optimal thing for what they want. For instance, when people play chess, they usually don’t play perfect moves - even if they’re experts! You need some rule that tells you what people would do if they wanted some goal or another, but it’s not clear what this rule would be, it’s not clear how you make this rule more in line with reality if you never observe “wanting”, and so this ends up having essentially the same problems as plans 1 and 2. 
Plan 4: Read some text written by humans about what they’d like you to do, and do that?[2] This is passing the buck to the text written by humans to specify how we want the AI to think, but that’s precisely the problem we’re trying to solve.

Concretely, one way you could imagine doing this is to write something relatively informal like “Please be helpful and harmless to your human operators”, and have your AI correctly understand what we mean by that. That (a) presumes that there is a coherent thing that we mean by that (which doesn’t seem obvious to me, given our difficulty in explicitly formalizing this request), and (b) passes the specification buck to the problem of specifying how you should understand this request. It’s not a priori definitely impossible to build a ...
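To make the buck-passing in plan 4 visible, here is a structural sketch (mine, not the post's); every name in it is hypothetical. Writing the plan as code shows that the entire original difficulty reappears inside the one function left unimplemented.

```python
# Structural sketch of plan 4 (illustrative assumption, not the post's code):
# the goal the agent optimizes is whatever an interpreter extracts from
# human-written text. All names here are hypothetical.

from typing import Callable, Iterable, Optional

Plan = str
GoalPredicate = Callable[[Plan], bool]

INSTRUCTION = "Please be helpful and harmless to your human operators."

def interpret(instruction: str) -> GoalPredicate:
    """Turn informal text into a goal predicate over plans.

    This is exactly the unspecified part: whatever we write here is our
    answer to "what sort of thinking do we want?", so plan 4 relocates the
    problem rather than solving it.
    """
    raise NotImplementedError("specifying this is the original problem")

def choose_plan(candidate_plans: Iterable[Plan]) -> Optional[Plan]:
    # The agent loop itself is trivial; everything hard lives in interpret().
    goal_satisfied = interpret(INSTRUCTION)
    for plan in candidate_plans:
        if goal_satisfied(plan):
            return plan
    return None
```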
First published
05/14/2023
Genres:
education
Duration
14 minutes
Parent Podcast
The Nonlinear Library: Alignment Forum Daily
Similar Episodes
AMA: Paul Christiano, alignment researcher by Paul Christiano
Release Date: 12/06/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
What is the alternative to intent alignment called? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AI alignment landscape by Paul Christiano
Release Date: 11/19/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Would an option to publish to AF users only be a useful feature? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Would an option to publish to AF users only be a useful feature? Q, published by Richard Ngo on the AI Alignment Forum. Right now there are quite a few private safety docs floating around. There's evidently demand for a privacy setting lower than "only people I personally approve", but higher than "anyone on the internet gets to see it". But this means that safety researchers might not see relevant arguments and information. And as the field grows, passing on access to such documents on a personal basis will become even less efficient. My guess is that in most cases, the authors of these documents don't have a problem with other safety researchers seeing them, as long as everyone agrees not to distribute them more widely. One solution could be to have a checkbox for new posts which makes them only visible to verified Alignment Forum users. Would people use this? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Similar Podcasts
The Nonlinear Library
Release Date: 10/07/2021
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Section
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong
Release Date: 03/03/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Top Posts
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Nonlinear Library: LessWrong Top Posts
Release Date: 02/15/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
sasodgy
Release Date: 04/14/2021
Description: Audio Recordings from the Students Against Sexual Orientation Discrimination (SASOD) Public Forum with Members of Parliament at the National Library in Georgetown, Guyana
Explicit: No