EA - Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity by MHR

Link to original article: https://forum.effectivealtruism.org/posts/CmAexqqvnRLcBojpB/electric-shrimp-stunning-a-potential-high-impact-donation

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity, published by MHR on July 13, 2023 on The Effective Altruism Forum.

Epistemic status: layperson's attempt to understand the relevant considerations. I welcome corrections from anyone with a better understanding of welfare biology.

Summary

The Shrimp Welfare Project (SWP) has a novel opportunity to spend up to $115,500 to purchase and install electric stunners at multiple shrimp farms. The stunners would be used to stun shrimp prior to slaughter, likely rendering them unconscious and thereby preventing the suffering that shrimp currently experience when they asphyxiate or freeze without effective analgesics. Based on formal agreements SWP has signed with multiple producers, raising $115,500 would enable the stunning (rather than conventional slaughter) of 1.7 billion shrimp over the next three years, a ratio of nearly 15,000 shrimp per dollar.

I performed a preliminary cost-effectiveness analysis of this initiative and reached the following three tentative conclusions:

1. The expected cost-effectiveness distribution for electric shrimp stunning likely overlaps that of corporate hen welfare campaigns.
2. The cost-effectiveness of electric shrimp stunning is more likely to be lower than that of corporate hen welfare campaigns than it is to be higher.
3. Shrimp stunning is a very heavy-tailed intervention. The mean cost-effectiveness of stunning is significantly influenced by a few extreme cases, which mostly represent instances in which the undiluted experience model of welfare turns out to be correct.

Given these results, electric shrimp stunning might be worth supporting as a somewhat speculative bet in the animal welfare space. Considerations that might drive donor decisions on this project include risk tolerance, credence in the undiluted experience model of welfare, and willingness to take a hits-based giving approach.

Description of the Opportunity

The following information is quoted from the project description written by Marcus Abramovitch on the Manifund donation platform, based on information provided by Andrés Jiménez Zorrilla (CEO of SWP):

Project summary

Shrimp Welfare Project is an organization of people who believe that shrimps are capable of suffering and deserve our moral consideration [1]. We aim to cost-effectively reduce the suffering of billions of shrimps and envision a world where shrimps don't suffer needlessly.

Programme: our current most impactful intervention is to place electrical stunners with producers ($60k/stunner). We have signed agreements with 2 producers willing and able to use electrical stunning technology as part of their slaughter process, which will materially reduce the acute suffering in the last few minutes/hours of shrimps' lives. Collectively, these 2 agreements will impact more than half a billion animals per year at a rate of more than 4,000 shrimps/dollar/annum. Please take a look at our blog post on the first agreement here. We are in advanced negotiations with 2 more producers, which would take the number of animals to more than 1 billion shrimps per annum. See our back-of-the-envelope calculation for the number of shrimps and cost-effectiveness analysis here.

Project goals

Simplified end-game of this programme: the interim goal of placing these stunners with selected producers in different contexts/systems is to remove some obstacles perceived by the industry and to show major retailers and other shrimp buyers that electrical stunning is something they can demand from their supply chain. The ultimate goal is for electrical stunning to be widely adopted by medium to large shrimp producers in their slaughter process (pushed by their buyers), included by certifiers in their standards, and eventually considered an obvious requirement by legislat...
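The headline ratios above are easy to check, and the heavy-tail claim can be illustrated with a short simulation. Below is a minimal Python sketch, not SWP's or MHR's actual model: the $115,500 budget and 1.7 billion shrimp figures come from the post, while the lognormal welfare distribution and its parameters are invented purely to show how a heavy-tailed cost-effectiveness estimate behaves.

```python
# Minimal sketch (not SWP's actual model). Only the budget and shrimp
# counts come from the post; the welfare distribution is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# 1. Headline ratios quoted in the post.
budget_usd = 115_500
shrimp_stunned = 1.7e9  # over three years
print(f"{shrimp_stunned / budget_usd:,.0f} shrimp per dollar")          # ~14,719 (~15,000)
print(f"{shrimp_stunned / 3 / budget_usd:,.0f} shrimp per dollar/year")  # ~4,906 (>4,000)

# 2. Why a heavy-tailed estimate's mean is dominated by rare draws.
#    Hypothetically, let welfare gained per shrimp (arbitrary units) be
#    lognormal, with rare large values standing in for worlds where the
#    undiluted experience model of welfare is correct.
n = 1_000_000
welfare_per_shrimp = rng.lognormal(mean=-8.0, sigma=3.0, size=n)
ce = welfare_per_shrimp * shrimp_stunned / budget_usd  # welfare per dollar

print(f"mean   cost-effectiveness: {ce.mean():.2f}")
print(f"median cost-effectiveness: {np.median(ce):.2f}")
print(f"share of mean from top 1% of draws: "
      f"{np.sort(ce)[-n // 100:].sum() / ce.sum():.0%}")
```

With these made-up parameters the mean lands far above the median, and most of the mean comes from the top percentile of draws. That is the pattern the post describes: an expected value driven by low-probability, high-welfare scenarios rather than by the typical case.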

First published

07/13/2023

Genres:

education



Duration

18 minutes

Parent Podcast

The Nonlinear Library: EA Forum Weekly

Similar Episodes

    AMA: Paul Christiano, alignment researcher by Paul Christiano

    Release Date: 12/06/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    AI alignment landscape by Paul Christiano

    Release Date: 11/19/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. This is a linkpost for Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

    What is the alternative to intent alignment called? Q by Richard Ngo

    Release Date: 11/17/2021

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Explicit: No

Similar Podcasts

    The Nonlinear Library

    Release Date: 10/07/2021

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Section

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong

    Release Date: 03/03/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: EA Forum Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Daily

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Weekly

    Release Date: 05/02/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Top Posts

    Release Date: 02/10/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    The Nonlinear Library: LessWrong Top Posts

    Release Date: 02/15/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    Effective Altruism Forum Podcast

    Release Date: 07/17/2021

    Authors: Garrett Baker

    Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

    Explicit: No