EA - SoGive rates Open-Phil-funded charity NTI “too rich” by Sanjay
Link to original article: https://forum.effectivealtruism.org/posts/DpQFod5P9e5yJxeCP/sogive-rates-open-phil-funded-charity-nti-too-rich

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive rates Open-Phil-funded charity NTI "too rich", published by Sanjay on June 18, 2023 on The Effective Altruism Forum.

Exec summary

- Under SoGive's methodology, charities holding more than 1.5 years' expenditure are typically rated "too rich", in the absence of a strong reason to judge otherwise. (more)
- Our level of confidence in the appropriateness of this policy depends on fundamental ethical considerations: we could be clearly (c. 95%) very well justified, or only c. 50% to c. 90% confident in this policy, depending on the charity. (more)
- We understand that the Nuclear Threat Initiative (NTI) holds more than 4 years of spend (c. $85m) as at its most recently published Form 990, well in excess of our warning threshold. (more)
- We are now around 90% confident that NTI's reserves are well in excess of our warning threshold, indeed more than 3x annual spend, although there are some caveats. (more)
- Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of "too rich". (more)
- It is possible that NTI could show us forecasts of their future income and spend that would make us less concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more)
- We do not typically recommend that donors donate to NTI. However, we do think it is valuable for donors to communicate that they are interested in supporting NTI's work but are avoiding donating because of its high reserves. (more)

Although this post is primarily intended to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems.

We thank NTI for agreeing to discuss this with us knowing that there was a good chance we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with.

0. Intent of this post

Although this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them. Rather, we have argued that it is often a good idea for donors to "coattail" (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it is useful to know which Open Phil grantees we might give lower or higher priority to.

1. Background on SoGive's methodology for assessing reserves

The SoGive ratings scale has a category called "too rich". It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend (i.e. if a charity's unrestricted reserves are one and a half times as large as its annual spend, we typically deem the charity "too rich"). To be clear, this allows the charity carte blanche to hold as much money as it likes, as long as it indicates that it has a non-binding plan for that money.

So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. Our threshold considers the scenario where a charity has such large reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities. In this scenario, we think it is likely better for donors to send their donations elsewhere and allow the charity to use up its reserves.

Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...
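The threshold test described above is simple arithmetic: compare unrestricted reserves to 1.5 times annual spend. The sketch below is a minimal illustration of that check, not SoGive's actual tooling. The reserves figure of roughly $85m is taken from the post; the annual spend figure is a hypothetical placeholder chosen to be consistent with the post's claim that NTI's reserves exceed four years of spend.

```python
# Minimal sketch of the "too rich" reserves check described in the post.
# Figures are illustrative placeholders, not audited Form 990 numbers.

TOO_RICH_THRESHOLD_YEARS = 1.5  # 18 months of spend

def reserves_ratio(unrestricted_reserves: float, annual_spend: float) -> float:
    """Years of spending covered by unrestricted reserves."""
    return unrestricted_reserves / annual_spend

def is_too_rich(unrestricted_reserves: float, annual_spend: float) -> bool:
    """Default rating: 'too rich' if reserves exceed 1.5x annual spend,
    absent a strong reason to judge otherwise."""
    return reserves_ratio(unrestricted_reserves, annual_spend) > TOO_RICH_THRESHOLD_YEARS

if __name__ == "__main__":
    reserves = 85_000_000  # c. $85m, per the post
    spend = 20_000_000     # hypothetical annual spend consistent with ">4 years of spend"
    ratio = reserves_ratio(reserves, spend)
    print(f"Reserves cover {ratio:.1f} years of spend; too rich: {is_too_rich(reserves, spend)}")
```

Under these placeholder figures the ratio comes out at roughly 4.3 years, which is why the post treats NTI as well above the 1.5-year warning threshold.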
First published: 06/18/2023
Genres: education
Duration: 24 minutes
Parent Podcast: The Nonlinear Library: EA Forum Daily