The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
en-us
05/02/2022 17:19:05
The Nonlinear Fund
education
Release Date: 9/1/2023
Duration: 422 Mins
Authors: PeterBrietbart
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning from our mistakes: how HLI plans to improve, published by PeterBrietbart on September 1, 2023 on The Effective Altruism Forum. Hi folks, in this post we'd like to describe our views as the Chair (Peter) and Director (Michael) of HLI in light of the recent conversations around HLI's work. The purpose of this post is to reflect on HLI's work and its role within the EA community in response to community member feedback, highlight what we're doing about it, and engage in further constructive dialogue on how HLI can improve moving forward. HLI hasn't always got things right. Indeed, we think there have been some noteworthy errors (quick note: our goal here isn't to delve into details but to highlight broad lessons learnt, so this isn't an exhaustive list): Most importantly, we were overconfident and defensive in communication, particularly around our 2022 giving season post. We described our recommendation for StrongMinds using language that was too strong: "We're now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money". We agree with feedback that this level of confidence was and is not commensurate with the strength of the evidence and the depth of our analysis. The post's original title was "Don't give well, give WELLBYs". Though this was intended in a playful manner, it was tone-deaf, and we apologise. We made mistakes in our analysis. We made a data entry error. In our meta-analysis, we recorded that Kemp et al. (2009) found a positive effect, but in fact it was a negative effect.
This correction reduced our estimated 'spillover effect' for psychotherapy (the effect that someone receiving an intervention had on other people) from 53% to 38% and therefore reduced the total cost-effectiveness estimate from 9.5x cash transfers to 7.5x. We did not include standard diagnostic tests of publication bias. If we had done this, we would have decreased our confidence in the quality of the literature on psychotherapy that we were using. After receiving feedback about necessary corrections to our cost-effectiveness estimates for psychotherapy and StrongMinds, we failed to update our materials on our website in a timely manner. As a community, EA prides itself on its commitment to epistemic rigour, and we're both grateful and glad that folks will speak up to maintain high standards. We have heard these constructive critiques, and we are making changes in response. We'd like to give a short outline of what HLI has done and is doing next to improve its epistemic health and comms processes. We've added an "Our Blunders" page on the HLI website, which lists the errors and missteps we mentioned above. The goal of this page is to be transparent about our mistakes, and to hold us accountable for making improvements. We've added the following text to the places on our website where we discuss StrongMinds: "Our current estimation for StrongMinds is that a donation of $1,000 produces 62 WELLBYs (or 7.5 times GiveDirectly cash transfers). See our changelog. However, we have been working on an update to our analysis since July 2023 and expect to be ready by the end of 2023. This will include using new data and improving our methods. We expect our cost-effectiveness estimate will decrease by about 25% or more - although this is a prediction we are very uncertain about as the analysis is yet to be done. While we expect the cost-effectiveness of StrongMinds will decrease, we think it is unlikely that the cost-effectiveness will be lower than GiveDirectly.
Donors may want to wait to make funding decisions until the updated report is finished." We have added more and higher-quality controls to our work: since the initial StrongMinds report, we've added Samuel Dupret (researcher) and Dr Ryan Dwyer (senior research...
Is Closed Captioned: No
Explicit: No
Release Date: 8/23/2023
Duration: 459 Mins
Authors: Linch
Description: Link to original article. This is: Select examples of adverse selection in longtermist grantmaking, published by Linch on August 23, 2023 on The Effective Altruism Forum. Sometimes, there is a reason other grantmakers aren't funding a fairly well-known EA (-adjacent) project. This post is written in a professional capacity, as a volunteer/sometimes contractor for EA Funds' Long-Term Future Fund (LTFF), which is a fiscally sponsored project of Effective Ventures Foundation (UK) and Effective Ventures Foundation USA Inc. I am not and have never been an employee at either Effective Ventures entity. Opinions are my own and do not necessarily represent those of any of my employers or of either Effective Ventures entity. I originally wanted to make this post a personal shortform, but Caleb Parikh encouraged me to make it a top-level post instead. There is an increasing number of new grantmakers popping up, and also some fairly rich donors in longtermist EA who are thinking of playing a more active role in their own giving (instead of deferring). I am broadly excited about the diversification of funding in longtermist EA.
There are many advantages of having a diverse pool of funding:
- Potentially increases financial stability of projects and charities
- Allows for a diversification of worldviews
- Encourages accountability, particularly of donors and grantmakers - if there's only one or a few funders, people might be scared of offering justified criticisms
- Access to more or better networks - more diverse grantmakers might mean access to a greater diversity of networks, allowing otherwise overlooked and potentially extremely high-impact projects to be funded
- Greater competition and race to excellence and speed among grantmakers - I've personally been on both sides of being faster and much slower than other grantmakers, and it's helpful to have a competitive ecosystem to improve grantee and/or donor experience
However, this comment will mostly talk about the disadvantages. I want to address adverse selection: in particular, if a project that you've heard of through normal EA channels hasn't been funded by existing grantmakers like LTFF, there is a decently high likelihood that other grantmakers have already evaluated the grant and (sometimes for sensitive private reasons) have decided it is not worth funding.
Reasons against broadly sharing reasons for rejection
From my perspective as an LTFF grantmaker, it is frequently imprudent, impractical, or straightforwardly unethical to directly make public our reasons for rejection. For example:
- Our assessments may include private information that we are not able to share with other funders.
- Writing up our reasons for rejection of specific projects may be time-consuming, politically unwise, and/or encourage additional ire ("punching down").
- We don't want to reify our highly subjective choices too much, and public writeups of rejections can cause informational cascades.
- Often other funders don't even think to ask about whether the project has already been rejected by us, and why (and/or rejected grantees don't pass on that they've been rejected by another funder).
- Sharing negative information about applicants would make applying to EA Funds more costly and could discourage promising applicants.
Select examples
Here are some (highly) anonymized examples of grants I have personally observed being rejected by a centralized grantmaker. For further anonymization, in some cases I've switched details around or collapsed multiple examples into one. Most, although not all, of the examples are personal experiences from working on the LTFF. Many of these examples are grants that have later been funded by other grantmakers or private donors. An academic wants funding for a promising-sounding existential safety research intervention in an area of study that none of the LTFF grantmakers ...
Is Closed Captioned: No
Explicit: No
Release Date: 8/15/2023
Duration: 71 Mins
Authors: Jacob_Peacock
Description: Link to original article. This is: Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat, published by Jacob Peacock on August 15, 2023 on The Effective Altruism Forum. Also available on the Rethink Priorities website.
Executive summary
Plant-based meats, like the Beyond Sausage or Impossible Burger, and cultivated meats have become a source of optimism for reducing animal-based meat usage. Public health, environmental, and animal welfare advocates aim to mitigate the myriad harms of meat usage. The price, taste, and convenience (PTC) hypothesis posits that if plant-based meat is competitive with animal-based meat on these three criteria, the large majority of current consumers would replace animal-based meat with plant-based meat. The PTC hypothesis rests on the premise that PTC primarily drive food choice. The PTC hypothesis and premise are both likely false. A majority of current consumers would continue eating primarily animal-based meat even if plant-based meats were PTC-competitive. PTC do not mainly determine food choices of current consumers; social and psychological factors also play important roles. Although not examined here, there may exist other viable approaches to drive the replacement of animal-based meats with plant-based meats. There is insufficient empirical evidence to more precisely estimate or optimize the current (or future) impacts of plant-based meat. To rectify this, consider funding:
- Research measuring the effects of plant-based meat sales on displacement of animal-based meat.
- Research comparing the effects of plant-based meats with other interventions to reduce animal-based meat usage.
- Informed (non-blinded) taste tests to benchmark current plant-based meats and enable measurements of taste improvement over time.
Introduction
Plant-based meats, like the Beyond Sausage or Impossible Burger, and cultivated meats[1] have been identified as important means of reducing the public health, environmental, and animal welfare harms associated with animal-based meat production (Rubio et al., 2020). By providing competitive alternatives, these products might displace the consumption of animal-based meats. Since cultivated meats are not currently widely available on the public market, this paper will focus on plant-based meats, although many of the arguments might also apply to cultivated meats. Animal welfare, environmental, and public health advocates believe plant-based meats present a valuable opportunity to mitigate significant negative externalities of industrial animal agriculture, like animal suffering, greenhouse gas emissions, and antimicrobial resistance. For example, Animal Charity Evaluators lists "[cultivated] and plant-based food tech" as a priority cause area (Animal Charity Evaluators, 2022b), and a 2018 survey of 30 animal advocacy leaders and researchers ranked creating plant-based (and cultivated) meats third (after only research and corporate outreach) in their top priorities (Savoie, 2018). Non-profits working to research and support plant-based and cultivated meat production have received millions of dollars in funding (Animal Charity Evaluators, 2022a; New Harvest, 2021). Hu et al. (2019) describes plant-based meats as a potentially "vital" means to reduce the risks of diabetes, cardiovascular disease, and some cancers. Others have focused on reducing the climate impact of food production and "the need to de-risk global food systems" (Zane Swanson et al., 2023). The private and public sectors have taken note as well; in 2022, plant-based meat, seafood, egg, and dairy companies attracted at least $1.2 billion in private investment activity and at least $874 million in public funding (The Good Food Institute, 2022, pp. 55, 85-88).
This enthusiasm has been propelled in some significant part by the informa...
Is Closed Captioned: No
Explicit: No
Release Date: 8/3/2023
Duration: 995 Mins
Authors: Dave Banerjee
Description: Link to original article. This is: University EA Groups Need Fixing, published by Dave Banerjee on August 3, 2023 on The Effective Altruism Forum. (Cross-posted from my website.) I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don't claim to have a complete picture of the university group ecosystem. Disclaimer: I've written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal.
My EA Experience
During my freshman year, I heard about a club called Columbia Effective Altruism. Rumor on the street told me it was a cult, but I was intrigued. Every week, my friend would return from the fellowship and share what he learned. I was fascinated. Once spring rolled around, I applied for the spring Arete (Introductory) Fellowship. After enrolling in the fellowship, I quickly fell in love with effective altruism. Everything about EA seemed just right - it was the perfect club for me. EAs were talking about the biggest and most important ideas of our time. The EA community was everything I hoped college to be. I felt like I found my people. I found people who actually cared about improving the world. I found people who strived to tear down the sellout culture at Columbia.
After completing the Arete Fellowship, I reached out to the organizers asking how I could get more involved. They told me about EA Global San Francisco (EAG SF) and a longtermist community builder retreat. Excited, I applied to both and was accepted. Just three months after getting involved with EA, I was flown out to San Francisco to a fancy conference and a seemingly exclusive retreat. EAG SF was a lovely experience. I met many people who inspired me to be more ambitious. My love for EA further cemented itself. I felt psychologically safe and welcomed. After about thirty one-on-ones, the conference was over, and I was on my way to an ~exclusive~ retreat. I like to think I can navigate social situations elegantly, but at this retreat, I felt totally lost. All these people around me were talking about so many weird ideas I knew nothing about. When I'd hear these ideas, I didn't really know what to do besides nod my head and occasionally say "that makes sense." After each one-on-one, I knew that I shouldn't update my beliefs too much, but after hearing almost every person talk about how AI safety is the most important cause area, I couldn't help but be convinced. By the end of the retreat, I went home a self-proclaimed longtermist who prioritized AI safety. It took several months to sober up. After rereading some notable EA criticisms (Bad Omens, Doing EA Better, etc.), I realized I got duped. My poor epistemics led me astray, but weirdly enough, my poor epistemics gained me some social points in EA circles. While at the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety. Nowadays, when I tell people I might not be a longtermist and don't prioritize AI safety, the burden of proof is on me to explain why I "dissent" from EA. If you're a longtermist AI safety person, there's no need to offer evidence to defend your view. 
(I would be really excited if more experienced EAs asked EA newbies why they take AI safety seriously more often. I think what normally happens is that the experienced EA gets su...
Is Closed Captioned: No
Explicit: No
Release Date: 7/26/2023
Duration: 184 Mins
Authors: Joey
Description: Link to original article. This is: Launching the meta charity funding circle (MCF): Apply for funding or join as a donor!, published by Joey on July 26, 2023 on The Effective Altruism Forum.
Summary
We are launching the Meta Charity Funders, a growing network of donors sharing knowledge and discussing funding opportunities in the EA meta space. Apply for funding by August 27th or join the circle as a donor. See below or visit our website to learn more! If you are doing EA-aligned "meta" work, and have not received substantial funding for several years, you might be worried about funding. Over the past 10 years, Open Philanthropy and EA Funds have comprised a large percentage of total meta funding and are far from independent of each other. This lack of diversity means potentially effective projects outside their priorities often struggle to stay afloat or scale, and the beliefs of just a few grant-makers can massively shape the EA movement's trajectory. It can be difficult for funders within meta as well. Individual donors often don't know where to give if they don't share EA Funds' approach. Thorough vetting is scarce and expensive, with only a handful of grant-makers deploying tens of millions per year in meta grants, resulting in sub-optimal allocations. This is why we are launching the Meta Charity Funders, a growing network of donors sharing knowledge, discussing funding opportunities, and running joint open grant rounds in the EA meta space. We believe many charitable projects create a huge impact by working at one level removed from direct impact to instead enhance the impact of others. Often these projects cut across causes and don't fit neatly into a box, thus being neglected by funders.
Well-known examples of meta organizations include charity evaluators like GiveWell, incubators like Charity Entrepreneurship, cause prioritization research organizations like Rethink Priorities, or field-building projects promoting effective giving or impactful careers.
Grantees: Apply to many HNW donors at once - 1st round closes August 27.
We are open to funding meta work across a range of causes, organizational stages, strategies, etc. We are most interested in applications that have not already been substantially supported by similar actors such as EA Funds or Open Philanthropy, though we will still consider these. We expect most of our grants to range from $10,000 to $500,000 and consider grants to both individuals and organizations. We expect our first round to be between $500,000 and $1.5m of total funding. Please lean in favor of applying if you are unsure whether you would be a good fit!
Donors: Join us!
Find neglected opportunities, get help with ops and vetting, and give on your own terms. People who are unable to commit to regular meetings are still encouraged to apply and may be invited to our Slack and email list and gain access to our grant opportunities database. Meta Charity Funding Circle is a project of Charity Entrepreneurship and Impactful Grantmaking. It is organized by this post's authors: Gage Weston, Vilhelm Skoglund, and Joey Savoie. Our members are anonymous. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Is Closed Captioned: No
Explicit: No
Release Date: 7/18/2023
Duration: 188 Mins
Authors: Toby_Ord
Description: Link to original article. This is: Shaping Humanity's Longterm Trajectory, published by Toby Ord on July 18, 2023 on The Effective Altruism Forum. Since writing The Precipice, one of my aims has been to better understand how reducing existential risk compares with other ways of influencing the longterm future. Helping avert a catastrophe can have profound value due to the way that the short-run effects of our actions can have a systematic influence on the long-run future. But it isn't the only way that could happen. For example, if we advanced human progress by a year, perhaps we should expect to see us reach each subsequent milestone a year earlier. And if things are generally becoming better over time, then this may make all years across the whole future better on average. I've developed a clean mathematical framework in which possibilities like this can be made precise, the assumptions behind them can be clearly stated, and their value can be compared. The starting point is the longterm trajectory of humanity, understood as how the instantaneous value of humanity unfolds over time. In this framework, the value of our future is equal to the area under this curve and the value of altering our trajectory is equal to the area between the original curve and the altered curve. This allows us to compare the value of reducing existential risk to other ways our actions might improve the longterm future, such as improving the values that guide humanity, or advancing progress. Ultimately, I draw out and name four idealised ways our short-term actions could change the longterm trajectory: advancements, speed-ups, gains, and enhancements. And I show how these compare to each other, and to reducing existential risk.
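The area framing described above can be written compactly. The notation here is an assumption for illustration (the description states the definitions only in words): let v(t) be the instantaneous value of humanity and ṽ(t) the altered trajectory.

```latex
% Value of the future: area under the trajectory curve
V = \int_{t_0}^{\infty} v(t)\,\mathrm{d}t
% Value of altering the trajectory: area between altered and original curves
\Delta V = \int_{t_0}^{\infty} \bigl(\tilde{v}(t) - v(t)\bigr)\,\mathrm{d}t
```

On this accounting, an intervention is valuable exactly to the extent that the signed area between the two curves is positive, which is what allows risk reduction, advancements, speed-ups, gains, and enhancements to be compared in one framework.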
While the framework is mathematical, the maths in these four cases turns out to simplify dramatically, so anyone should be able to follow it. My hope is that this framework, and this categorisation of some of the key ways we might hope to shape the longterm future, can improve our thinking about longtermism. Some upshots of the work:
- Some ways of altering our trajectory only scale with humanity's duration or its average value - but not both. There is a serious advantage to those that scale with both: speed-ups, enhancements, and reducing existential risk.
- When people talk about 'speed-ups', they are often conflating two different concepts. I disentangle these into advancements and speed-ups, showing that we mainly have advancements in mind, but that true speed-ups may yet be possible.
- The value of advancements and speed-ups depends crucially on whether they also bring forward the end of humanity. When they do, they have negative value.
- It is hard for pure advancements to compete with reducing existential risk as their value turns out not to scale with the duration of humanity's future. Advancements are competitive in outcomes where value increases exponentially up until the end time, but this isn't likely over the very long run.
- Work on creating longterm value via advancing progress is most likely to compete with reducing risk if the focus is on increasing the relative progress of some areas over others, in order to make a more radical change to the trajectory.
The work is appearing as a chapter for the forthcoming book, Essays on Longtermism, but as of today, you can also read it online here.
Is Closed Captioned: No
Explicit: No
Release Date: 7/13/2023
Duration: 1084 Mins
Authors: MHR
Description: Link to original article. This is: Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity, published by MHR on July 13, 2023 on The Effective Altruism Forum. Epistemic status: layperson's attempt to understand the relevant considerations. I welcome corrections from anyone with a better understanding of welfare biology.
Summary
- The Shrimp Welfare Project (SWP) has a novel opportunity to spend up to $115,500 to purchase and install electric stunners at multiple shrimp farms
- The stunners would be used to stun shrimp prior to slaughter, likely rendering them unconscious and thereby preventing suffering that is currently experienced when shrimp asphyxiate or freeze without effective analgesics
- Based on formal agreements SWP has signed with multiple producers, raising $115,500 would enable the stunning (rather than conventional slaughtering) of 1.7 billion shrimp over the next three years, for a ratio of nearly 15,000 shrimp/dollar
I performed a preliminary cost-effectiveness analysis of this initiative and reached the following three tentative conclusions:
- The expected cost-effectiveness distribution for electric shrimp stunning likely overlaps that of corporate hen welfare campaigns
- The cost-effectiveness of electric shrimp stunning is more likely to be lower than that of corporate hen welfare campaigns than it is to be higher
- Shrimp stunning is a very heavy-tailed intervention. The mean cost-effectiveness of stunning is significantly influenced by a few extreme cases, which mostly represent instances in which the undiluted experience model of welfare turns out to be correct
Given these results, electric shrimp stunning might be worth supporting as a somewhat speculative bet in the animal welfare space.
Considerations that might drive donor decisions on this project include risk tolerance, credence in the undiluted experience model of welfare, and willingness to take a hits-based giving approach.
Description of the Opportunity
The following information is quoted from the project description written by Marcus Abramovitch on the Manifund donation platform, based on information provided by Andrés Jiménez Zorrilla (CEO of SWP):
Project summary
Shrimp Welfare Project is an organization of people who believe that shrimps are capable of suffering and deserve our moral consideration [1]. We aim to cost-effectively reduce the suffering of billions of shrimps and envision a world where shrimps don't suffer needlessly. Programme: our current most impactful intervention is to place electrical stunners with producers ($60k/stunner): We have signed agreements with 2 producers willing and able to use electrical stunning technology as part of their slaughter process, which will materially reduce the acute suffering at the last few minutes / hours of shrimps' lives. Collectively, these 2 agreements will impact more than half a billion animals per year at a rate of more than 4,000 shrimps/dollar/annum. Please take a look at our blog post on the first agreement here. We are in advanced negotiations with 2 more producers which would take the number of animals to more than 1 billion shrimps per annum.
See our back-of-the-envelope calculation for the number of shrimps and cost-effectiveness analysis here.
Project goals
Simplified end-game of this programme: the interim goal of placing these stunners with selected producers in different contexts/systems is to remove some perceived obstacles to the industry and show major retailers and other shrimp buyers that electrical stunning is something they can demand from their supply chain. The ultimate goal is for electrical stunning to be: widely adopted by medium to large shrimp producers in their slaughter process (pushed by their buyers), included by certifiers in their standards, and eventually considered to be an obvious requirement by legislat...
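As a quick sanity check on the ratios quoted in this episode's description (the dollar figures and shrimp counts come from the text above; this is plain arithmetic, not a reconstruction of SWP's or MHR's cost-effectiveness model), the headline numbers can be sketched as:

```python
# Back-of-the-envelope check of the shrimp-per-dollar figures quoted above.
# Inputs are taken directly from the description; nothing here models welfare.

total_cost = 115_500                     # dollars to purchase and install stunners
shrimp_over_three_years = 1_700_000_000  # shrimp stunned under signed agreements

ratio = shrimp_over_three_years / total_cost
print(f"{ratio:,.0f} shrimp per dollar")  # ~14,719, i.e. "nearly 15,000 shrimp/dollar"

# The two signed agreements: >0.5 billion shrimp/year at $60k per stunner
annual_shrimp = 500_000_000
cost_two_stunners = 2 * 60_000
annual_ratio = annual_shrimp / cost_two_stunners
print(f"{annual_ratio:,.0f} shrimp per dollar per annum")  # ~4,167, "more than 4,000"
```

Both quoted ratios are consistent with the underlying figures given in the description.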
Is Closed Captioned: No
Explicit: No
Release Date: 6/27/2023
Duration: 383 Mins
Authors: KarolinaSarek
Description: Link to original article. This is: Announcing CE’s new Research Training Program - Apply Now!, published by KarolinaSarek on June 27, 2023 on The Effective Altruism Forum. TL;DR: We are excited to announce our Research Training Program. This online program is designed to equip participants with the tools and skills needed to identify, compare, and recommend the most effective charities and interventions. It is a full-time, fully cost-covered program that will run online for 11 weeks. Apply here! Deadline for application: July 17, 2023. The program dates are: October 2 - December 17, 2023. So far, Charity Entrepreneurship has launched and run two successful training programs: a Charity Incubation Program and a Foundation Program. Now we are piloting a third - a Research Training Program, which will tackle a different problem.
The Problem:
People: Many individuals are eager to enter research careers, level up their current knowledge and skills from junior to senior, or simply make their existing skills more applicable to work within EA frameworks/organizations. At the same time, research organizations have trouble filling a senior-level researcher talent gap. There is a scarcity of specific training opportunities for the niche skills required, such as intervention prioritization and cost-effectiveness analyses, which are hard to learn through traditional avenues.
Ideas: A lack of capacity for exhaustive investigation means there is a multitude of potentially impactful intervention ideas that remain unexplored. There may be great ideas being missed, as with limited time, we will only get to the most obvious solutions that other people are likely to have thought of as well.
Evaluation: Unlike the for-profit sector, the nonprofit sector lacks clear metrics for assessing an organization's actual impact.
External evaluations can help nonprofits evaluate and reorganize their own effectiveness and also allow funders to choose the highest impact opportunities available to them - potentially unlocking more funding (sometimes limited by lack of public external evaluation). There are some great organizations that carry out evaluations (e.g., GiveWell), but they are constrained by capacity and have limited scope; this results in several potentially worthwhile organizations remaining unassessed.
Who Is This Program For?
- Motivated researchers who want to produce trusted research outputs to improve the prioritization and allocation decisions of effectiveness-minded organizations
- Early career individuals who are seeking to build their research toolkits and gain practical experience through real projects
- Existing researchers in the broader Global Health and Well-being communities (global health, animal advocacy, mental health, health/biosecurity, etc.) who are interested in approaching research from an effectiveness-minded perspective
What Does Being a Fellow Involve?
Similar to our Charity Incubation Program, the first month focuses on learning generalizable and specific research skills. It involves watching training videos, reading materials, and practicing by applying those skills to concrete mini-research projects. Participants learn by doing while we provide guidance and lots of feedback. The second month is focused on applying skills, working on different stages of the research process, and producing final research reports that could be used to guide real decision-making.
- Frequent feedback on your projects from expert researchers
- Regular check-in calls with a mentor for troubleshooting, guidance on research, and your career
- Writing reports on selected topics
- Opportunities to connect with established researchers and explore potential job opportunities
- Assistance with editing your cause area report for publication and dissemination
What Are We Offering?
11 weeks of online, full-time training with practical research assig...
Is Closed Captioned: No
Explicit: No
Release Date: 6/22/2023
Duration: 39 Mins
Authors: Ben_West
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lab-grown meat is cleared for sale in the United States, published by Ben West on June 22, 2023 on The Effective Altruism Forum. Upside Foods and Good Meat, two companies that make what they call “cultivated chicken,” said Wednesday that they have gotten approval from the US Department of Agriculture to start producing their cell-based proteins. Good Meat, which is owned by plant-based egg substitute maker Eat Just, said that production is starting immediately. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Is Closed Captioned: No
Explicit: No
Release Date: 6/14/2023
Duration: 321 Mins
Authors: Joey
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA organizations should have a transparent scope, published by Joey on June 14, 2023 on The Effective Altruism Forum. Executive summary One of the biggest challenges of being in a community that really cares about counterfactuals is knowing where the most important gaps are and which areas are already effectively covered. This can be even more complex with meta organizations and funders that often have broad scopes that change over time. However, I think it is really important for every meta organization to clearly establish what they cover and thus where these gaps are; there is a substantial negative flow-through effect when a community thinks an area is covered when it is not. Why this matters The topic of having a transparent scope recently came up at a conference as one of the top concerns with many EA meta orgs. Some negative effects that have been felt by the community are in large part due to unclear scopes, including: Organizations leaving a space thinking it's covered when it's not. Funders reducing funding in an area due to an assumption that someone else is covering it when there are still major gaps. Two organizations working on the same thing without knowledge of each other, due to both having a broad mandate, but simultaneously putting resources into an overlapping subcomponent of this mandate. Talent being turned off or feeling misled by EA when they think an org misrepresents itself. Talent ‘dropping out of the funnel’ when they go to what they believe is the primary organization covering an area and finding that what they care about isn’t covered, due to the organization claiming too broad a mandate.
There can also be a significant amount of general frustration caused when people think an organization will cover, or is covering, an area and then the organization fails to deliver (often on something they did not even plan on doing). What do I mean when I say that organizations should have a transparent scope: Broadly, I mean organizations being publicly clear and specific about what they are planning to cover, both in terms of action and cause area. In a relevant timeframe: I think this is most important in the short term (e.g., there is a ton of value in an organization saying what they are going to cover over the next 12 months, and what they have covered over the past months). For the most important questions: This clarity needs to be present both in priorities (e.g., cause prioritization) and planned actions (e.g., working with student chapters). This can include things the organization might like or think is impactful to do but is not doing due to capacity constraints or its current strategic direction. For the areas people are most likely to confuse: It is particularly important to provide clarity about things that people think one might be doing (for example, Charity Entrepreneurship probably doesn’t need to clarify that it doesn’t sell flowers, but should really be transparent over whether it plans to incubate projects in a certain cause area or not). How to do this When I have talked to organizations about this, I sometimes think that the “perfect” becomes the enemy of the good and they do not want to share a scope that is not set in stone. All prioritizations can change, and it can sometimes even be hard internally to have a sense of where the majority of your resources are going. However, given the importance of counterfactuals and the number of aspects that can help proxy these factors, I think a pretty solid template can be created.
Given that CE is also often asked this question, I made a quick template below that I think gives a lot of transparency if answered clearly and can give people a pretty clear sense of an organization's focus. It's worth noting that what I am suggesting is more about clarity than justification. While an org can choo...
Is Closed Captioned: No
Explicit: No
Release Date: 5/30/2023
Duration: 75 Mins
Authors: Center for AI Safety
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures, published by Center for AI Safety on May 30, 2023 on The Effective Altruism Forum. Today, the Center for AI Safety released the AI Extinction Statement, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders. Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta). The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” We hope this statement will bring AI x-risk further into the Overton window and open up discussion around AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Is Closed Captioned: No
Explicit: No
Release Date: 5/25/2023
Duration: 99 Mins
Authors: alene
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: KFC Supplier Sued for Cruelty, published by alene on May 24, 2023 on The Effective Altruism Forum. Dear EA Forum readers, The EA charity Legal Impact for Chickens (LIC) just filed our second lawsuit! As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability. We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering. Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty. The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Case Farms hatchery in Morganton, N.C., that processes more than 200,000 chicks daily. Our lawsuit attacks the notion that Big Ag is above the law. We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty. Case Farms was documented knowingly operating faulty equipment, including a machine piston which repeatedly smashes chicks to death and a dangerous metal conveyor belt which traps and kills young birds. Case Farms was also documented crushing chicks’ necks between heavy plastic trays. Case Farms supplies its chicken to KFC, Taco Bell, and Boar’s Head, among other customers. Thank you so much to all the EA Forum readers who helped make this happen, by donating to, and volunteering for, Legal Impact for Chickens! Sincerely, Alene Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Is Closed Captioned: No
Explicit: No
Release Date: 5/19/2023
Duration: 277 Mins
Authors: Peter Singer
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Publication of Animal Liberation Now, published by Peter Singer on May 19, 2023 on The Effective Altruism Forum. Summary My new book, Animal Liberation Now, will be out next Tuesday (May 23). I consider ALN to be a new book, rather than just a revision, because so much of the material in the book is new. Pre-ordering from Amazon or other online booksellers (US only) or ordering/purchasing within the first week of publication will increase the chance of the book getting on the NYT best-seller list. (Doing the same in other countries may increase the prospects of the book getting on that country’s best-seller list.) Along with the publication of the book, I will be doing a speaking tour with the same title as the book. You can book tickets here, with a 50% discount if you use the code SINGER50 (profits will be 100% donated to effective charities opposing intensive animal production). Please spread the word (and links) about the book and the speaking tour to help give the book a strong start. Why a new book? My major motivation for writing the new book is to have a book about animal ethics that is relevant in the 21st century. Compared with Animal Liberation, there are major updates on the situation of animals used in research and factory farming and on people’s attitudes toward animals, as well as new research on the capacities of animals to suffer and on the contribution of meat to climate change. What’s different? The animal movement emerged after the 1975 version of AL. In particular, concern for farmed animals has developed rapidly over the last two decades. These developments deserve to be reported and discussed. Some of the issues discussed in AL have seen many changes since then. Some animal experiments are going out of fashion, while others have emerged.
On factory farming, there were wins for the farmed animal movement, such as the partially successful “cage-free movement” and various wins in legislative reforms. But the number of animals raised in factory farms increased rapidly during the same time. A significant portion of this increase came from aquaculture, in other words, fish factory farms. New developments were also seen regarding replacing factory farming, in particular the development of plant-based meat alternatives and cultivated meat. ALN has a more global perspective than AL, most notably discussing what happened in China. Since the last edition of AL, China has greatly increased the use of animals in research and factory farming. There are also changes in my views about a number of issues. Firstly, since 1990 (the year of publication for the last full revision of the 1975 version of AL), scientists have gained more evidence that suggests the sentience of fish and some invertebrates. Accordingly, I have updated my attitudes toward the probability of sentience of these animals. Secondly, I have changed my views toward the suffering of wild animals, in particular the possibility and tractability of helping them. Thirdly, I have added a discussion of the relation between climate change and meat consumption. Last but not least, Effective Altruism, as an idea or as a movement, did not exist when the versions of Animal Liberation were written, so I have added some discussions of the EA movement and EA principles in the new book. Is the book relevant to EA? Animal welfare is, and should be, one of the major cause areas within EA, for reasons I do not need to repeat here. I will explain why ALN is relevant to EA. Firstly, ALN contains some of the arguments commonly used by EAs who work on animal welfare for why the issue of animal suffering is important.
Reading ALN provides an opportunity for newcomers to the EA community to learn about animal ethics and why some (hopefully most) EAs think that animals matter morally and that they are...
Is Closed Captioned: No
Explicit: No
Release Date: 5/13/2023
Duration: 1839 Mins
Authors: Vasco Grilo
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritising animal welfare over global health and development?, published by Vasco Grilo on May 13, 2023 on The Effective Altruism Forum. Summary Corporate campaigns for chicken welfare increase wellbeing way more cost-effectively than the best global health and development (GHD) interventions. In addition, the effects on farmed animals of such interventions can influence which countries they should target, and those on wild animals might determine whether they are beneficial or harmful. I encourage Charity Entrepreneurship (CE), Founders Pledge (FP), GiveWell (GW), Open Philanthropy (OP) and Rethink Priorities (RP) to: Increase their support of animal welfare interventions relative to those of GHD (at the margin). Account for effects on animals in the cost-effectiveness analyses of GHD interventions. Corporate campaigns for chicken welfare increase nearterm wellbeing way more cost-effectively than GiveWell’s top charities Corporate campaigns for chicken welfare are considered one of the most effective animal welfare interventions. A key supporter of these is The Humane League (THL), which is one of the 3 top charities of Animal Charity Evaluators. I calculated the cost-effectiveness of corporate campaigns for broiler welfare in human-years per dollar as the product of: Chicken-years affected per dollar, which I set to 15 as estimated here by Saulius Simcikas. Improvement in welfare, as a fraction of the median welfare range, when broilers go from a conventional to a reformed scenario, assuming: The time broilers experience each level of pain defined here (search for “definitions”) in a conventional and reformed scenario is given by these data (search for “pain-tracks”) from the Welfare Footprint Project (WFP).
The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience. Excruciating pain is 1,000 times as bad as disabling pain. Disabling pain is 100 times as bad as hurtful pain. Hurtful pain is 10 times as bad as annoying pain. The lifespan of broilers is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso. Broilers sleep 8 h each day, and have a neutral experience during that time. Broilers being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences. Median welfare range of chickens, which I set to RP's median estimate of 0.332. Reciprocal of the intensity of the mean human experience, which I obtained supposing humans: Sleep 8 h each day, and have a neutral experience during that time. Being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences. I computed the cost-effectiveness in the same metric for the lowest cost to save a life among GW's top charities as the ratio of: Life expectancy at birth in Africa in 2021, which was 61.7 years according to these data from OWID. Lowest cost to save a life of $3,500 (from Helen Keller International), as stated by GW here. The results are in the tables below. The data and calculations are here (see tab “Cost-effectiveness”).
Intensity of the mean experience as a fraction of the median welfare range: broiler in a conventional scenario, 2.59*10^-5; broiler in a reformed scenario, 5.77*10^-6; human, 3.33*10^-6. Broiler in a conventional scenario relative to a human: 7.77. Broiler in a reformed scenario relative to a human: 1.73. Broiler in a conventional scenario relative to a reformed scenario: 4.49. Improvement in chicken welfare when broilers go from a conventional to a reformed scenario as a fraction of... The median welfare range of chickens The intensity of the mean human experience 2....
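The comparison described above can be reproduced in a few lines. The sketch below is not the author's spreadsheet: the variable names are invented, the inputs are the rounded figures quoted in the transcript, and the final ratio is simply what those rounded inputs imply.

```python
# Sketch of the post's cost-effectiveness comparison, using its quoted figures.
# Intensities are time-averaged experience as a fraction of the median welfare range.
intensity_conventional = 2.59e-5  # broiler, conventional scenario
intensity_reformed = 5.77e-6      # broiler, reformed scenario
intensity_human = 3.33e-6         # mean human experience

chicken_years_per_dollar = 15   # Saulius Simcikas' estimate
chicken_welfare_range = 0.332   # Rethink Priorities' median estimate for chickens
life_expectancy_africa = 61.7   # years at birth, Africa, 2021 (OWID)
cost_to_save_a_life = 3500      # $, Helen Keller International (per GiveWell)

# Welfare gain per chicken-year of campaign effect, converted into
# human-experience-years via the chicken welfare range.
gain_per_chicken_year = (
    (intensity_conventional - intensity_reformed)
    * chicken_welfare_range
    / intensity_human
)
campaigns = chicken_years_per_dollar * gain_per_chicken_year  # human-years per $
givewell = life_expectancy_africa / cost_to_save_a_life       # human-years per $

print(campaigns / givewell)  # cost-effectiveness ratio, campaigns vs. benchmark
```

Under these rounded inputs the campaigns come out over a thousand times as cost-effective as the GiveWell benchmark; treat the post's own spreadsheet as authoritative for the exact multiple.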
Is Closed Captioned: No
Explicit: No
Release Date: 4/26/2023
Duration: 317 Mins
Authors: Eva
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current plans as the incoming director of the Global Priorities Institute, published by Eva on April 26, 2023 on The Effective Altruism Forum. Cross-posted from my blog. I am taking leave from the University of Toronto to serve as the Director of the Global Priorities Institute (GPI) at the University of Oxford. I can't express enough gratitude to the University of Toronto for enabling this. (I'll be back in the fall to fulfill my teaching obligations, though - keep inviting me to seminars and such!) GPI is an interdisciplinary research institute focusing on academic research that informs decision-makers on how to do good more effectively. In its first few years, under the leadership of its founding director, Hilary Greaves, GPI created and grew a community of academics in philosophy and economics interested in global priorities research. I am excited to build from this strong foundation and, in particular, to further develop the economics side. There are several areas I would like to focus on while at GPI. The below items reflect my current views; however, I expect these views to be refined over time. These items are not intended to be an exhaustive list, but they are things I would like GPI to do more of on the margin. 1) Research on decision-making under uncertainty There is a lot of uncertainty in estimates of the effects of various actions. My views here are coloured by my past work. In the early 2010s, I tried to compile estimates of the effects of popular development interventions such as insecticide-treated bed nets for malaria, deworming drugs, and unconditional cash transfers. My initial thought was that by synthesizing the evidence, I'd be able to say something more conclusive about "the best" intervention for a given outcome.
Unfortunately, I found that results varied, a lot (you can read more about it in my JEEA paper). If it's really hard to predict effects in global development, which is a very well-studied area, it would seem even harder to know what to do in other areas with less evidence. Yet, decisions still have to be made. One of the core areas GPI has focused on in the past is decision-making under uncertainty, and I expect that to continue to be a priority research area. Some work on robustness might also fall under this category. 2) Increasing empirical research GPI is an interdisciplinary institute combining philosophy and economics. To date, the economics side has largely focused on theoretical issues. But I think it's important for there to be careful, rigorous empirical work at GPI. I think there are relevant hypotheses that can be tested that pertain to global priorities research. Many economists interested in global priorities research come from applied fields like development economics, and there's a talented pool of people who can do empirical work on, e.g., encouraging better uptake of evidence or forecasting. There's simply a lot to be done here, and I look forward to working with colleagues like Julian Jamison (on leave from Exeter), Benjamin Tereick, and Mattie Toma (visiting from Warwick Business School), among many others. 3) Expanding GPI’s network in economics There is an existing program at GPI for senior research affiliates based at other institutions. However, I think a lot more can be done with this, especially on the economics side. I'm still exploring the right structures, but suffice it to say, if you are an academic economist interested in global priorities research, please do get in touch. I am envisioning a network of loosely affiliated individuals in core fields of interest who would be sent notifications about research and funding opportunities. There may also be the occasional workshop or conference invitation. 
4) Exploring expanding to other fields and topics There are a number of topics that appear relevant to gl...
Is Closed Captioned: No
Explicit: No
Release Date: 1/14/2023
Duration: 559 Mins
Authors: Rockwell
Description: Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Living Without Idols, published by Rockwell on January 13, 2023 on The Effective Altruism Forum. For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others. Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here. Anarchists have no idols. I wrote a Facebook post in July 2019 following a blowup in one of my communities: "Anarchists have no idols." Years ago, I heard this expression (that weirdly doesn't seem to exist in Google) and it really stuck with me. I think about it often. It's something I try to live by and it feels extremely timely. Whether you agree with anarchism or not, I think this is a philosophy everyone might benefit from. What this means to me: Never put someone on a pedestal. Never believe anyone is incapable of doing wrong. Always create mechanisms for accountability, even if you don't anticipate ever needing to use them. Allow people to be multifaceted.
Exist in nuance. Operate with an understanding of that nuance. Cherish the good while recognizing it doesn't mean there is no bad. Remember not to hero worship. Remember your fave is probably problematic. Remember no one is too big to fail, too big for flaws. Remember that when you idolize someone, it depersonalizes the idolized and erodes your autonomy. Hold on to your autonomy. Cultivate a culture of liberty. Idolize no one. Idolize no one. Idolize no one. My mentor, Pt. 1. When I was in college, I had a boss I considered my mentor. She was intelligent, ethical, and skilled. She shared her expertise with me and I eagerly learned from her. She gave me responsibility and trusted me to use it well. She oversaw me without micromanaging me, and used a gentle hand to correct my course and steer my development. She saw my potential and helped me to see it, too. She also lied to me. Directly to my face. She violated an ethical principle she had previously imparted to me, involved me in the violation, and then lied to me about it. I was made an unwitting participant in something I deeply morally opposed and I experienced a major, life-shattering breach of trust from someone I deeply respected. She was my boss and my friend, but in a sense, she was also my idol. And since then, I have refused to have another. Abusive people do not exist. A month after my mentor ceased to be my mentor, I took a semester-long course, "Domestic Violence". It stands as one of the most formative experiences in my way of thinking about the world. There's a lot I could write about it, but I want to share one small tidbit here, that I wrote about a few years after the course concluded: More and more people are promoting a shift in our language away from talking about “abusive relationships” and toward relationships with “abusive people.” This is a small but powerful way to locate where culpability lies. It is not the relationship that is to blame, but one individual in it.
I suggest taking this a step further and selectively avoiding use of...