EA - Consider Earning Less by ElliotJDavies
<a href="https://forum.effectivealtruism.org/posts/GxRcKACcJuLBEJPmE/consider-earning-less">Link to original article</a><br/><br/>Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Earning Less, published by ElliotJDavies on July 1, 2023 on The Effective Altruism Forum. This post is aimed at those working in jobs which are funded by EA donors who might be interested in voluntarily earning less. This post isn't aimed to influence pay scales at organisations, or at those not interested in earning less. When the Future Fund was founded in 2022, there was a simultaneous upwards pressure on both ambitiousness and net-earnings in the wider EA community. The pressure to be ambitious resulted in EAs really considering the opportunity cost of key decisions. Meanwhile, the discussions around why EAs should consider ordering food or investing in a new laptop pointed towards a common solution: EAs in direct work earning more.The funding situation has significantly shifted from then, as has the supply-demand curve for EA jobs. This should put a deflationary pressure on EAs' salaries, but I'd argue we largely haven't seen this effect, likely because people's salaries are "sticky". One result of this is that there are a lot of impactful projects which are unable to find funding right now, and in a similar vein, there's a lot of productive potential employees who are unable to get hired right now. There's even a significant proportion of employees who will be made redundant. This seems a shame, since there's no good reasons for salaries to be sticky. It seems especially bad if we do in fact see significant redundancies, since under a "veil of ignorance" the optimal behaviour would be to voluntarily lower your salary (assuming you could get your colleagues to do the same). Members of German labour unions quite commonly do something similar (Kurzarbeit) during economic downturns, to avoid layoffs and enable faster growth during an upturn Some Reasons you Might Want to Earn Less: You want to do as much good as possible, and suspect your organisation will do more good if it had more money at hand. Your Organisation is likely to make redundancies, which could include you. You have short timelines, and you suspect that by earning less, more people could work on alignment. You can consider your voluntary pay-cut a donation, which you can report on your GWWC account. (The great thing about pay-cut donations is you essentially get a 100% tax refund, which is particularly nice if you live somewhere with high income tax). Some Reasons you May Not Want to Earn Less: It would cause you financial hardship. You would experience a significant drop in productivity. You suspect it would promote an unhealthy culture in your organisation. You expect you're much better than the next-best candidate, and you'd be less likely to work in a high impact role if you had to earn less. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
First published
07/01/2023
Genres:
education
Duration
2 hours and 36 minutes
Parent Podcast
The Nonlinear Library: EA Forum Daily
Similar Episodes
AMA: Paul Christiano, alignment researcher by Paul Christiano
Release Date: 12/06/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AI alignment landscape by Paul Christiano
Release Date: 11/19/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. This is a linkpost for Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
What is the alternative to intent alignment called? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Similar Podcasts
The Nonlinear Library
Release Date: 10/07/2021
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Section
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong
Release Date: 03/03/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Top Posts
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Nonlinear Library: LessWrong Top Posts
Release Date: 02/15/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
Effective Altruism Forum Podcast
Release Date: 07/17/2021
Authors: Garrett Baker
Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.
Explicit: No