EA - Learning from our mistakes: how HLI plans to improve by PeterBrietbart
Link to original article: https://forum.effectivealtruism.org/posts/4edCygGHya4rGx6xa/learning-from-our-mistakes-how-hli-plans-to-improve

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning from our mistakes: how HLI plans to improve, published by PeterBrietbart on September 1, 2023 on The Effective Altruism Forum.

Hi folks, in this post we'd like to describe our views as the Chair (Peter) and Director (Michael) of HLI in light of the recent conversations around HLI's work. The purpose of this post is to reflect on HLI's work and its role within the EA community in response to community member feedback, highlight what we're doing about it, and engage in further constructive dialogue on how HLI can improve moving forward.

HLI hasn't always got things right. Indeed, we think there have been some noteworthy errors (quick note: our goal here isn't to delve into details but to highlight broad lessons learnt, so this isn't an exhaustive list):

- Most importantly, we were overconfident and defensive in communication, particularly around our 2022 giving season post. We described our recommendation for StrongMinds using language that was too strong: "We're now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money". We agree with feedback that this level of confidence was and is not commensurate with the strength of the evidence and the depth of our analysis. The post's original title was "Don't give well, give WELLBYs". Though this was intended in a playful manner, it was tone-deaf, and we apologise.
- We made mistakes in our analysis. We made a data entry error: in our meta-analysis, we recorded that Kemp et al. (2009) found a positive effect, when in fact it was a negative effect. This correction reduced our estimated 'spillover effect' for psychotherapy (the effect that someone receiving an intervention has on other people) from 53% to 38%, and therefore reduced the total cost-effectiveness estimate from 9.5x cash transfers to 7.5x. We also did not include standard diagnostic tests of publication bias (a sketch of one such test appears after this summary); if we had done this, we would have decreased our confidence in the quality of the psychotherapy literature we were using.
- After receiving feedback about necessary corrections to our cost-effectiveness estimates for psychotherapy and StrongMinds, we failed to update the materials on our website in a timely manner.

As a community, EA prides itself on its commitment to epistemic rigour, and we're both grateful and glad that folks will speak up to maintain high standards. We have heard these constructive critiques, and we are making changes in response. We'd like to give a short outline of what HLI has done, and is doing next, to improve its epistemic health and comms processes.

- We've added an "Our Blunders" page to the HLI website, which lists the errors and missteps mentioned above. The goal of this page is to be transparent about our mistakes and to keep us accountable for making improvements.
- We've added the following text to the places on our website where we discuss StrongMinds: "Our current estimation for StrongMinds is that a donation of $1,000 produces 62 WELLBYs (or 7.5 times GiveDirectly cash transfers). See our changelog. However, we have been working on an update to our analysis since July 2023 and expect to be ready by the end of 2023. This will include using new data and improving our methods. We expect our cost-effectiveness estimate will decrease by about 25% or more - although this is a prediction we are very uncertain about as the analysis is yet to be done. While we expect the cost-effectiveness of StrongMinds will decrease, we think it is unlikely that the cost-effectiveness will be lower than GiveDirectly. Donors may want to wait to make funding decisions until the updated report is finished."
- We have added more and higher-quality controls to our work: since the initial StrongMinds report, we've added Samuel Dupret (researcher) and Dr Ryan Dwyer (senior research...
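For readers unfamiliar with what the post means by "standard diagnostic tests of publication bias", here is a minimal sketch of one common diagnostic, Egger's regression test, in Python. The effect sizes and standard errors below are made-up placeholders, not HLI's psychotherapy data; the point is only to illustrate the kind of check the post says was missing.

```python
# Minimal sketch of Egger's regression test, a standard diagnostic for
# publication bias in a meta-analysis. The numbers below are illustrative
# placeholders, not HLI's data.
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes (standardised mean differences) and standard errors.
effects = np.array([0.45, 0.30, 0.62, 0.15, 0.50, 0.28, 0.70, 0.10])
std_errs = np.array([0.10, 0.12, 0.20, 0.08, 0.18, 0.11, 0.25, 0.07])

# Egger's test regresses the standardised effect (effect / SE) on precision (1 / SE).
# An intercept significantly different from zero indicates funnel-plot asymmetry,
# which is consistent with publication bias or small-study effects.
y = effects / std_errs
X = sm.add_constant(1.0 / std_errs)
result = sm.OLS(y, X).fit()

intercept, intercept_p = result.params[0], result.pvalues[0]
print(f"Egger intercept: {intercept:.2f} (p = {intercept_p:.3f})")
```

A significant intercept would be one reason to down-weight the pooled effect of the literature, as the post describes. For reference, the figures quoted above also imply a GiveDirectly baseline of roughly 62 / 7.5 ≈ 8.3 WELLBYs per $1,000 donated.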
First published
09/01/2023
Genres:
education
Duration
7 minutes
Parent Podcast
The Nonlinear Library: EA Forum Weekly
Similar Episodes
AMA: Paul Christiano, alignment researcher by Paul Christiano
Release Date: 12/06/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AI alignment landscape by Paul Christiano
Release Date: 11/19/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. This is a linkpost for Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
What is the alternative to intent alignment called? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Similar Podcasts
The Nonlinear Library
Release Date: 10/07/2021
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Section
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong
Release Date: 03/03/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Top Posts
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Nonlinear Library: LessWrong Top Posts
Release Date: 02/15/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
Effective Altruism Forum Podcast
Release Date: 07/17/2021
Authors: Garrett Baker
Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.
Explicit: No