AF - Catastrophic Risks from AI #4: Organizational Risks by Dan H
Link to original article: https://www.alignmentforum.org/posts/wpsGprQCRffRKG92v/catastrophic-risks-from-ai-4-organizational-risks

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Catastrophic Risks from AI #4: Organizational Risks, published by Dan H on June 26, 2023 on The AI Alignment Forum. This is the fourth post in a sequence of posts giving an overview of catastrophic AI risks.

4 Organizational Risks

In January 1986, tens of millions of people tuned in to watch the launch of the Challenger Space Shuttle. Approximately 73 seconds after liftoff, the shuttle exploded, resulting in the deaths of everyone on board. The loss would have been tragic enough on its own, but one of the crew members was a school teacher named Sharon Christa McAuliffe. McAuliffe had been selected from over 10,000 applicants for the NASA Teacher in Space Project and was scheduled to become the first teacher to fly in space. As a result, millions of those watching were schoolchildren. NASA had the best scientists and engineers in the world, and if there was ever a mission NASA didn't want to go wrong, it was this one [70].

The Challenger disaster, alongside other catastrophes, serves as a chilling reminder that even with the best expertise and intentions, accidents can still occur. As we progress in developing advanced AI systems, it is crucial to remember that these systems are not immune to catastrophic accidents. An essential factor in preventing accidents and maintaining low levels of risk lies in the organizations responsible for these technologies. In this section, we discuss how organizational safety plays a critical role in the safety of AI systems. First, we discuss how accidents can happen even without competitive pressures or malicious actors; in fact, they are inevitable. We then discuss how improving organizational factors can reduce the likelihood of AI catastrophes.

Catastrophes occur even when competitive pressures are low. Even in the absence of competitive pressures or malicious actors, factors like human error or unforeseen circumstances can still bring about catastrophe. The Challenger disaster illustrates that organizational negligence can lead to loss of life even when there is no urgent need to compete or outperform rivals. By January 1986, the space race between the US and USSR had largely wound down, yet the tragedy still happened due to errors in judgment and insufficient safety precautions.

Similarly, the Chernobyl nuclear disaster in April 1986 highlights how catastrophic accidents can occur in the absence of external pressures. Chernobyl was a state-run project without the pressures of international competition, yet the disaster happened when a safety test involving the reactor's cooling system was mishandled by an inadequately prepared night shift crew. This led to an unstable reactor core, causing explosions and the release of radioactive particles that contaminated large swathes of Europe [71]. Seven years earlier, America came close to experiencing its own Chernobyl when, in March 1979, a partial meltdown occurred at the Three Mile Island nuclear power plant. Though less catastrophic than Chernobyl, both events highlight how, even with extensive safety measures in place and few outside influences, catastrophic accidents can still occur. Another costly lesson in organizational safety came just one month after the accident at Three Mile Island.
In April 1979, spores of Bacillus anthracis (commonly known simply as "anthrax") were accidentally released from a Soviet military research facility in the city of Sverdlovsk. This led to an anthrax outbreak that resulted in at least 66 confirmed deaths [72]. Investigations into the incident revealed that the release was caused by a procedural failure and poor maintenance of the facility's biosecurity systems, even though the facility was operated by the state and not subject to significant competitive pressures. The unsettling reality is that AI is far less understood and AI industry standards are far less stringent th...