EA - On Living Without Idols by Rockwell

Link to original article: https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Living Without Idols, published by Rockwell on January 13, 2023 on The Effective Altruism Forum.

For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.

Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical of the Forum, but this post is written the way I actually like to write. If this style doesn't work for you, you might want to read the first section, "Anarchists have no idols," and then skip ahead to the section "Living without idols, Pt. 1" toward the end. You'll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.

Anarchists have no idols.

I wrote a Facebook post in July 2019 following a blowup in one of my communities: "Anarchists have no idols." Years ago, I heard this expression (which weirdly doesn't seem to exist in Google) and it really stuck with me. I think about it often. It's something I try to live by, and it feels extremely timely. Whether you agree with anarchism or not, I think this is a philosophy everyone might benefit from. What this means to me: Never put someone on a pedestal. Never believe anyone is incapable of doing wrong. Always create mechanisms for accountability, even if you don't anticipate ever needing to use them. Allow people to be multifaceted. Exist in nuance. Operate with an understanding of that nuance. Cherish the good while recognizing it doesn't mean there is no bad. Remember not to hero-worship. Remember your fave is probably problematic. Remember no one is too big to fail, too big for flaws. Remember that when you idolize someone, it depersonalizes the idolized and erodes your autonomy. Hold on to your autonomy. Cultivate a culture of liberty. Idolize no one. Idolize no one. Idolize no one.

My mentor, Pt. 1.

When I was in college, I had a boss I considered my mentor. She was intelligent, ethical, and skilled. She shared her expertise with me and I eagerly learned from her. She gave me responsibility and trusted me to use it well. She oversaw me without micromanaging me, and used a gentle hand to correct my course and steer my development. She saw my potential and helped me to see it, too.

She also lied to me. Directly to my face. She violated an ethical principle she had previously imparted to me, involved me in the violation, and then lied to me about it. I was made an unwitting participant in something I deeply morally opposed, and I experienced a major, life-shattering breach of trust from someone I deeply respected. She was my boss and my friend, but in a sense, she was also my idol. And since then, I have refused to have another.

Abusive people do not exist.

A month after my mentor ceased to be my mentor, I took a semester-long course, "Domestic Violence". It stands as one of the most formative experiences in shaping how I think about the world. There's a lot I could write about it, but I want to share one small tidbit here, which I wrote about a few years after the course concluded: More and more people are promoting a shift in our language away from talking about "abusive relationships" and toward relationships with "abusive people." This is a small but powerful way to locate where culpability lies. It is not the relationship that is to blame, but one individual in it. I suggest taking this a step further and selectively avoiding use of...

First published

01/14/2023

Genres:

education

Duration

9 minutes

Parent Podcast

The Nonlinear Library: EA Forum Weekly

Similar Episodes

    AMA: Paul Christiano, alignment researcher by Paul Christiano

    Release Date: 12/06/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    AI alignment landscape by Paul Christiano

    Release Date: 11/19/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    What is the alternative to intent alignment called? by Richard Ngo

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called?, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map onto the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

Similar Podcasts

    The Nonlinear Library

    Release Date: 10/07/2021

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Section

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong

    Release Date: 03/03/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: EA Forum Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Top Posts

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    The Nonlinear Library: LessWrong Top Posts

    Release Date: 02/15/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    Effective Altruism Forum Podcast

    Release Date: 07/17/2021

    Authors: Garrett Baker

    Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

    Explicit: No