AF - Conjecture internal survey | AGI timelines and estimations of probability of human extinction from unaligned AGI by Maris Sala

<a href="https://www.alignmentforum.org/posts/kygEPBDrGGoM8rz9a/conjecture-internal-survey-or-agi-timelines-and-estimations">Link to original article</a><br/><br/>Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conjecture internal survey | AGI timelines and estimations of probability of human extinction from unaligned AGI, published by Maris Sala on May 22, 2023 on The AI Alignment Forum. We put together a survey to study the opinions of timelines and probability of human extinction of the employees at Conjecture. The questions were based on previous public surveys and prediction markets, to ensure that the results are comparable with people’s opinions outside of Conjecture. The survey results were polled in April, 2023. There were 23 unique responses from people across teams. Section 1. Probability of human extinction from AI Setup and limitations The specific questions the survey asked were: What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species? What probability do A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species in general (not just because inability to control, but also stuff like people intentionally using AI systems in harmful ways)? The difference between the two questions is that the first focuses on risk from misalignment, whereas the second captures risk from misalignment and misuse. The main caveats of these questions are the following: The questions were not explicitly time bound. I'd expect differences in people’s estimates of risk of extinction this century, in the next 1000 years, and anytime in the future. The longer of a timeframe we consider, the higher the values would be. I suspect employees were considering extinction risk roughly within this century when answering. The first question is a subset of the second question. One employee gave a higher probability for the second question than the first; this was probably a misinterpretation. The questions factor in interventions such as how Conjecture and others’ safety work will impact extinction risk. The expectation is the numbers would be higher if factored out their own or others’ safety work. Responses Out of the 23 respondents, one rejected the premise, and two people did not respond to one of the two questions but answered the other one. The main issue respondents raised was answering without a time constraint. Generally, people estimate the extinction risk from autonomous AI / AI getting out of control to be quite high at Conjecture. The median estimation is 70% and the average estimation is 59%. The plurality estimates the risk to be between 60% to 80%. A few people believe extinction risk from AGI is higher than 80%. The second question surveying extinction risk from AI in general, which includes misalignment and misuse. The median estimate is 80% and the average is 71%. The plurality estimates the risk to be over 80%. Section 2. When will we have AGI? Setup and limitations For this question, we asked respondents to predict when AGI will be built using this specification used on Metaculus, enabling us to compare to the community baseline (Figure 3). The respondents were instructed to toggle with the probability density as seen in Figure 4. 
Section 2. When will we have AGI?

Setup and limitations

For this question, we asked respondents to predict when AGI will be built, using the specification used on Metaculus, which enables us to compare against the community baseline (Figure 3). Respondents were instructed to adjust the probability density as seen in Figure 4. This was a deliberate choice, to allow for differences in confidence towards lower or higher values under uncertainty.

The main caveats of this question were:

- The responses are probably anchored to the Metaculus community prediction. The community prediction is 2031, i.e. 8-year timelines, so Conjecture responses centering around a similar prediction should not come as a surprise.
- The question allows for a prediction that AGI is already here.
- It's unclear whether respondents paid close attention to their lower and upper predictions to ensure that both are sensible. They probably focused on making their median prediction accurate, and might not have noticed how that affected lower and u...
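The exact handling of the lower / median / upper elicitation is not described above. Purely as an illustration, under the assumption that the three points are the 25th, 50th, and 75th percentiles and that uncertainty is roughly normal on the year axis (assumptions of this sketch, not of the survey or of Metaculus), such a three-point forecast can be turned into a full distribution for comparison against a community prediction:

```python
# Illustrative only: fit a normal distribution over AGI arrival year from a
# (lower, median, upper) forecast, here ASSUMED to be the 25th, 50th and 75th
# percentiles. This is not the method used in the survey or on Metaculus.
from scipy.stats import norm

def fit_from_quartiles(lower, median, upper):
    """Return a frozen normal matching the given quartiles of arrival year."""
    sigma = (upper - lower) / (2 * norm.ppf(0.75))  # IQR of a normal is ~1.349 * sigma
    return norm(loc=median, scale=sigma)

# Hypothetical respondent: median 2031 with quartiles 2027 and 2035.
dist = fit_from_quartiles(2027, 2031, 2035)
print(f"P(AGI before 2031): {dist.cdf(2031):.2f}")  # 0.50 by construction
print(f"P(AGI before 2040): {dist.cdf(2040):.2f}")
```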

First published

05/22/2023

Genres:

education

Duration

4 hours and 55 minutes

Parent Podcast

The Nonlinear Library: Alignment Forum Daily

Similar Episodes

    AMA: Paul Christiano, alignment researcher by Paul Christiano

    Release Date: 12/06/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    What is the alternative to intent alignment called? Q by Richard Ngo

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    AI alignment landscape by Paul Christiano

    Release Date: 11/19/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    Would an option to publish to AF users only be a useful feature? Q by Richard Ngo

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Would an option to publish to AF users only be a useful feature? Q, published by Richard Ngo on the AI Alignment Forum. Right now there are quite a few private safety docs floating around. There's evidently demand for a privacy setting lower than "only people I personally approve", but higher than "anyone on the internet gets to see it". But this means that safety researchers might not see relevant arguments and information. And as the field grows, passing on access to such documents on a personal basis will become even less efficient. My guess is that in most cases, the authors of these documents don't have a problem with other safety researchers seeing them, as long as everyone agrees not to distribute them more widely. One solution could be to have a checkbox for new posts which makes them only visible to verified Alignment Forum users. Would people use this? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

Similar Podcasts

    The Nonlinear Library

    Release Date: 10/07/2021

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Section

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong

    Release Date: 03/03/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: EA Forum Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: EA Forum Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Top Posts

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    The Nonlinear Library: LessWrong Top Posts

    Release Date: 02/15/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    sasodgy

    Release Date: 04/14/2021

    Description: Audio Recordings from the Students Against Sexual Orientation Discrimination (SASOD) Public Forum with Members of Parliament at the National Library in Georgetown, Guyana

    Explicit: No