AF - $500 Bounty/Prize Problem: Channel Capacity Using "Insensitive" Functions by johnswentworth
Link to original article: https://www.alignmentforum.org/posts/YDSjSpD7yBoivMHay/usd500-bounty-prize-problem-channel-capacity-using

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: $500 Bounty/Prize Problem: Channel Capacity Using "Insensitive" Functions, published by johnswentworth on May 16, 2023 on The AI Alignment Forum.

Informal Problem Statement

We have an information channel between Alice and Bob. Alice picks a function. Bob gets to see the value of that function at some randomly chosen input values... but doesn't know exactly which randomly chosen input values. He does get to see the randomly chosen values of some of the input variables, but not all of them. The problem is to find which functions Alice should pick with what frequencies, in order to maximize the channel capacity.

Why Am I Interested In This?

I'm interested in characterizing functions which are "insensitive" to subsets of their input variables, especially in high-dimensional spaces. For instance, xor of a bunch of random bits is maximally sensitive: if we have a 50/50 distribution over any one of the bits but know all the others, then all information about the output is wiped out. On the other end of the spectrum, a majority function of a bunch of random bits is highly insensitive: if we have a 50/50 distribution over, say, 10% of the bits, but know all the others, then in most cases we can correctly guess the function's output. I have an argument here that the vast majority of functions f : {0,1}^n → {0,1} are pretty highly sensitive: as the number of unknown inputs increases, information falls off exponentially quickly. On the other hand, the example of majority functions shows that this is not the case for all functions. Intuitively, in the problem, Alice needs to mostly pick from "insensitive" functions, since Bob mostly can't distinguish between "sensitive" functions.

... And Why Am I Interested In That?

I expect that natural abstractions have to be insensitive features of the world. After all, different agents don't all have exactly the same input data. So, a feature has to be fairly insensitive in order for different agents to agree on its value. In fact, we could view the problem statement itself as a very rough way of formulating the coordination problem of language: Alice has to pick some function f which takes in an image and returns 0/1 representing whether the image contains an apple. (The choice of function defines what "apple" means, for our purposes.) Then Alice wants to teach baby Bob what "apple" means. So, there's some random stuff around them, and Alice points at the random stuff and says "apple" for some of it, and says something besides "apple" the rest of the time. Baby Bob is effectively observing the value of the function at some randomly-chosen points, and needs to back out which function Alice intended. And Bob doesn't have perfect access to all the bits Alice is seeing, so the function has to be robust.
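To make the xor-vs-majority contrast concrete, here is a minimal simulation sketch. It is not from the original post; the parameter values (101 bits, 91 revealed) and the crude guessing strategy are illustrative assumptions. It draws a random input, hides a random subset of bits, and measures how often a guess of f(x) based only on the revealed bits is correct.

```python
import random

def sample_observation(f, n, s):
    """One channel use: nature draws x uniformly from {0,1}^n and a size-s
    subset S of coordinates; Bob sees (f(x), x_S, S)."""
    x = [random.randint(0, 1) for _ in range(n)]
    S = sorted(random.sample(range(n), s))
    return f(x), [x[i] for i in S], S

def xor(x):
    # Parity of the bits: maximally sensitive to any single unknown bit.
    return sum(x) % 2

def majority(x):
    # Majority vote: insensitive to a small unknown minority of bits.
    return int(sum(x) > len(x) / 2)

def guess_accuracy(f, n, s, trials=20000):
    """How often a crude guess of f(x) from the revealed bits alone is right:
    fill the hidden coordinates with fresh coin flips and apply f."""
    correct = 0
    for _ in range(trials):
        y_val, x_S, S = sample_observation(f, n, s)
        x_guess = [random.randint(0, 1) for _ in range(n)]
        for i, b in zip(S, x_S):
            x_guess[i] = b
        correct += (f(x_guess) == y_val)
    return correct / trials

n, s = 101, 91  # illustrative: 101 input bits, 91 revealed, ~10% hidden
print("xor:     ", guess_accuracy(xor, n, s))       # ~0.5: no information left
print("majority:", guess_accuracy(majority, n, s))  # well above 0.5
```

With roughly 10% of the bits hidden, the xor guess sits at chance while the majority guess stays well above chance, matching the sensitivity intuition above.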
Formal Problem Statement

Consider the following information channel between Alice and Bob:
- Alice picks a function f : {0,1}^n → {0,1}.
- Nature generates m possible inputs x_1, ..., x_m, each sampled uniformly and independently from {0,1}^n.
- Nature also generates m subsets S_1, ..., S_m of {1, ..., n}, each sampled uniformly and independently from the subsets of size s.
- Bob observes Y = (Y_1, ..., Y_m), where Y_i = (f(x_i), x_{S_i}, S_i).

The problem is to compute the distribution over f which achieves the channel capacity, i.e.

\operatorname*{argmax}_{P[f]} \; \sum_{f, Y} P[f]\, P[Y \mid f]\, \ln \frac{P[Y \mid f]}{\sum_{f'} P[Y \mid f']\, P[f']}

Bounty/Prize Info

The problem is to characterize the channel-throughput-maximizing distribution P[f]. The characterization should make clear the answers to questions like:
- What functions have the highest probability?
- How quickly does the probability fall off as we move "away" from the most probable functions, and what do marginally-less-probable functions look like?
- How much probability is assigned to a typical function chosen uniformly at random?
- Which functions, if any, are assigned zero probability?
All of these should ...
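To pin down the objective, here is a minimal sketch, again my own illustration rather than code from the post. It computes the quantity inside the argmax exactly, by brute-force enumeration, for tiny illustrative parameters (n = 2 input bits, a single observation m = 1, revealed-subset size s = 1) and an arbitrary candidate distribution P[f]; the capacity-achieving distribution is whatever P[f] maximizes this value.

```python
from itertools import product, combinations
from math import log

n, s = 2, 1  # illustrative: 2 input bits, 1 revealed bit, m = 1 observation
inputs = list(product([0, 1], repeat=n))               # all x in {0,1}^n
subsets = list(combinations(range(n), s))              # all size-s subsets S
functions = list(product([0, 1], repeat=len(inputs)))  # each f as a truth table

def likelihood(y, f):
    """P[Y = y | f] for a single observation Y = (f(x), x_S, S),
    with x and S drawn uniformly and independently."""
    total = 0.0
    for x in inputs:
        for S in subsets:
            obs = (f[inputs.index(x)], tuple(x[i] for i in S), S)
            if obs == y:
                total += 1.0 / (len(inputs) * len(subsets))
    return total

def throughput(p_f):
    """The objective: sum over f, Y of P[f] P[Y|f] ln( P[Y|f] / P[Y] )."""
    ys = [(v, bits, S) for v in (0, 1)
          for S in subsets
          for bits in product([0, 1], repeat=s)]
    info = 0.0
    for y in ys:
        p_y = sum(p_f[f] * likelihood(y, f) for f in functions)
        for f in functions:
            p_y_f = likelihood(y, f)
            if p_f[f] > 0 and p_y_f > 0:
                info += p_f[f] * p_y_f * log(p_y_f / p_y)
    return info

# Throughput (in nats) of the uniform distribution over all 16 functions;
# the capacity-achieving P[f] is whatever distribution maximizes this value.
uniform = {f: 1.0 / len(functions) for f in functions}
print(throughput(uniform))
```

Even at this toy scale, the enumeration over all 2^(2^n) functions blows up quickly as n grows, which is why a structural characterization of the optimal P[f] is wanted rather than brute force.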
First published: 05/16/2023
Genres: education
Duration: 4 hours and 39 minutes
Parent Podcast
The Nonlinear Library: Alignment Forum Daily
Similar Episodes
AMA: Paul Christiano, alignment researcher by Paul Christiano
Release Date: 12/06/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
What is the alternative to intent alignment called? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AI alignment landscape by Paul Christiano
Release Date: 11/19/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Would an option to publish to AF users only be a useful feature? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Would an option to publish to AF users only be a useful feature? Q, published by Richard Ngo on the AI Alignment Forum. Right now there are quite a few private safety docs floating around. There's evidently demand for a privacy setting lower than "only people I personally approve", but higher than "anyone on the internet gets to see it". But this means that safety researchers might not see relevant arguments and information. And as the field grows, passing on access to such documents on a personal basis will become even less efficient. My guess is that in most cases, the authors of these documents don't have a problem with other safety researchers seeing them, as long as everyone agrees not to distribute them more widely. One solution could be to have a checkbox for new posts which makes them only visible to verified Alignment Forum users. Would people use this? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Similar Podcasts
The Nonlinear Library
Release Date: 10/07/2021
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Section
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong
Release Date: 03/03/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Top Posts
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Nonlinear Library: LessWrong Top Posts
Release Date: 02/15/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
sasodgy
Release Date: 04/14/2021
Description: Audio Recordings from the Students Against Sexual Orientation Discrimination (SASOD) Public Forum with Members of Parliament at the National Library in Georgetown, Guyana
Explicit: No