
EA - Prioritising animal welfare over global health and development? by Vasco Grilo

Link to original article: https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritising animal welfare over global health and development?, published by Vasco Grilo on May 13, 2023 on The Effective Altruism Forum.

Summary

Corporate campaigns for chicken welfare increase wellbeing way more cost-effectively than the best global health and development (GHD) interventions. In addition, the effects of GHD interventions on farmed animals can influence which countries they should target, and their effects on wild animals might determine whether they are beneficial or harmful. I encourage Charity Entrepreneurship (CE), Founders Pledge (FP), GiveWell (GW), Open Philanthropy (OP) and Rethink Priorities (RP) to:
- Increase their support of animal welfare interventions relative to those of GHD (at the margin).
- Account for effects on animals in the cost-effectiveness analyses of GHD interventions.

Corporate campaigns for chicken welfare increase nearterm wellbeing way more cost-effectively than GiveWell’s top charities

Corporate campaigns for chicken welfare are considered one of the most effective animal welfare interventions. A key supporter of these is The Humane League (THL), which is one of the 3 top charities of Animal Charity Evaluators. I calculated the cost-effectiveness of corporate campaigns for broiler welfare, in human-years per dollar, as the product of:
- Chicken-years affected per dollar, which I set to 15, as estimated here by Saulius Simcikas.
- The improvement in welfare, as a fraction of the median welfare range, when broilers go from a conventional to a reformed scenario, assuming:
  - The time broilers experience each level of pain defined here (search for “definitions”) in a conventional and a reformed scenario is given by these data (search for “pain-tracks”) from the Welfare Footprint Project (WFP).
  - The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience.
  - Excruciating pain is 1 k times as bad as disabling pain.
  - Disabling pain is 100 times as bad as hurtful pain.
  - Hurtful pain is 10 times as bad as annoying pain.
  - The lifespan of broilers is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.
  - Broilers sleep 8 h each day, and have a neutral experience during that time.
  - Broilers being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.
- The median welfare range of chickens, which I set to RP’s median estimate of 0.332.
- The reciprocal of the intensity of the mean human experience, which I obtained supposing humans:
  - Sleep 8 h each day, and have a neutral experience during that time.
  - Being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.

I computed the cost-effectiveness in the same metric for the lowest cost to save a life among GW’s top charities as the ratio between:
- Life expectancy at birth in Africa in 2021, which was 61.7 years according to these data from OWID.
- The lowest cost to save a life, 3.5 k$ (from Helen Keller International), as stated by GW here.

The results are in the tables below, after a short sketch of the intensity model implied by the assumptions above.
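The assumptions above pin down the intensity model exactly for humans, and up to the WFP pain-track durations for broilers. Here is a minimal Python sketch of that model; it is a reconstruction from the stated assumptions, not the author’s linked spreadsheet, and it reproduces the human value in the first table. Broiler values would plug in the WFP pain-track hours, which are not reproduced here.

```python
# Minimal sketch of the pain-intensity model described above.
# Reconstruction from the stated assumptions, not the author's spreadsheet.

# The welfare range is symmetric around the neutral point and excruciating
# pain is the worst possible experience, so its intensity is half the
# welfare range. The other levels follow from the stated 1000x/100x/10x ratios.
EXCRUCIATING = 0.5
DISABLING = EXCRUCIATING / 1_000
HURTFUL = DISABLING / 100
ANNOYING = HURTFUL / 10

WEIGHTS = {"annoying": ANNOYING, "hurtful": HURTFUL,
           "disabling": DISABLING, "excruciating": EXCRUCIATING}

def mean_intensity(hours_awake, hours_total, pain_hours):
    """Mean experience as a fraction of the welfare range.

    Being awake is as good as hurtful pain is bad, so every awake hour
    contributes +HURTFUL; pain hours subtract their intensity on top,
    which makes "awake with hurtful pain" net out to neutral.
    pain_hours maps a pain level's name to hours spent at that level.
    """
    positive = hours_awake * HURTFUL
    negative = sum(WEIGHTS[level] * h for level, h in pain_hours.items())
    return (positive - negative) / hours_total

# Humans: 8 h of neutral sleep per day and no pain terms, so the mean
# experience is (16 / 24) * HURTFUL = 3.33e-6 of the welfare range,
# matching the "Human" entry in the first table below.
human = mean_intensity(hours_awake=16, hours_total=24, pain_hours={})
print(f"Human mean intensity: {human:.3g}")  # 3.33e-06

# Broiler values would use the WFP pain-track hours for each pain level
# over the 42-day lifespan (hours_total = 42 * 24).
```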
The data and calculations are here (see tab “Cost-effectiveness”).

Intensity of the mean experience as a fraction of the median welfare range:
- Broiler in a conventional scenario: 2.59×10^-5
- Broiler in a reformed scenario: 5.77×10^-6
- Human: 3.33×10^-6

The same intensities relative to one another:
- Broiler in a conventional scenario relative to a human: 7.77
- Broiler in a reformed scenario relative to a human: 1.73
- Broiler in a conventional scenario relative to a reformed scenario: 4.49

Improvement in chicken welfare when broilers go from a conventional to a reformed scenario as a fraction of...
- The median welfare range of chickens: 2…
- The intensity of the mean human experience: …
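Combining the table’s intensities with the stated inputs gives the headline comparison. The sketch below is again a reconstruction rather than the author’s spreadsheet: the intensity values are taken from the first table above, and the remaining inputs (15 chicken-years per dollar, a welfare range of 0.332, 61.7 years of life expectancy, 3.5 k$ per life saved) are those stated in the text.

```python
# Headline comparison, reconstructed from the figures above
# (a sketch, not the author's linked spreadsheet).

chicken_years_per_dollar = 15   # Saulius Simcikas' estimate
welfare_range_chicken = 0.332   # RP's median estimate for chickens
conventional = 2.59e-5          # broiler mean intensity, conventional scenario
reformed = 5.77e-6              # broiler mean intensity, reformed scenario
human = 3.33e-6                 # human mean intensity

# Product described in the text: chicken-years per dollar, times the
# improvement as a fraction of the chicken welfare range, times the
# chicken welfare range, times the reciprocal of the human intensity.
human_years_per_dollar = (chicken_years_per_dollar
                          * (conventional - reformed)
                          * welfare_range_chicken
                          / human)

# GiveWell's lowest cost to save a life: 3.5 k$ buys 61.7 years
# (life expectancy at birth in Africa in 2021).
givewell_years_per_dollar = 61.7 / 3_500

print(f"Broiler campaigns: {human_years_per_dollar:.3g} human-years per $")
print(f"GW top charity:    {givewell_years_per_dollar:.3g} human-years per $")
print(f"Ratio:             {human_years_per_dollar / givewell_years_per_dollar:.3g}")
# -> roughly 30 vs 0.018 human-years per dollar, a ratio on the order of 1.7e3
```

Under these inputs, broiler campaigns come out around three orders of magnitude more cost-effective than GW’s lowest cost to save a life, which is the comparison behind the summary’s first claim.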

First published

05/13/2023

Genres:

education

Duration

30 minutes

Parent Podcast

The Nonlinear Library: EA Forum Weekly

Similar Episodes

    AMA: Paul Christiano, alignment researcher by Paul Christiano

    Release Date: 12/06/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    AI alignment landscape by Paul Christiano

    Release Date: 11/19/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. This is a linkpost for Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    What is the alternative to intent alignment called? Q by Richard Ngo

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

Similar Podcasts

    The Nonlinear Library

    Release Date: 10/07/2021

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Section

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong

    Release Date: 03/03/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: EA Forum Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Top Posts

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    The Nonlinear Library: LessWrong Top Posts

    Release Date: 02/15/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    Effective Altruism Forum Podcast

    Release Date: 07/17/2021

    Authors: Garrett Baker

    Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

    Explicit: No