
All Episodes - Effective Altruism Forum Podcast
I (and hopefully many others soon) read particularly interesting or impactful posts from the EA Forum.
245 Episodes
“Am I Missing Something, or Is EA? Thoughts from a Learner in Uganda” by Dr Kassim
Hey everyone, I've been going through the EA Introductory Program, and I have to admit some of these ideas make sense, but others leave me with more questions than answers. I'm trying to wrap my head around certain core EA principles, and the more I think about them, the more I wonder: am I misunderstanding, or are there blind spots in EA's approach? I'd really love to hear what others think. Maybe you can help me clarify some of my doubts. Or maybe you share the same reservations? Let's talk.

Cause Prioritization: Does It Ignore Political and Social Reality? EA focuses on doing the most good per dollar, which makes sense in theory. But does it hold up when you apply it to real-world contexts, especially in countries like Uganda? Take malaria prevention. It's a top EA cause because it's highly cost-effective: $5,000 can save a life [...]

Outline:
(00:40) Cause Prioritization: Does It Ignore Political and Social Reality?
(01:53) Longtermism: A Luxury When the Present Is in Crisis?
(03:01) AI Safety: A Real Threat or Just Overhyped?
(04:09) Earning to Give: A Powerful Strategy or a Moral Loophole?
(05:05) Global vs. Local Causes: Does Proximity Matter?
(06:06) Final Thoughts: What Am I Missing?

First published: March 16th, 2025
Source: https://forum.effectivealtruism.org/posts/sanhCrJohGjAyAxLr/ea-a-view-from
Narrated by TYPE III AUDIO.
“How confident are you that it’s preferable for America to develop AGI before China does?” by ScienceMon🔸
The belief that it's preferable for America to develop AGI before China does seems widespread among American effective altruists. Is this belief supported by evidence, or is it just patriotism in disguise? How would you try to convince an open-minded Chinese citizen that it really would be better for America to develop AGI first? Such a person might point out: Over the past 30 years, the Chinese government has done more for the flourishing of Chinese citizens than the American government has done for the flourishing of American citizens. My village growing up lacked electricity, and now I'm a software engineer! Chinese institutions are more trustworthy for promoting the future flourishing of humanity. Commerce in China ditches some of the older ideas of Marxism because it's the means to an end: the China Dream of wealthy communism. As AGI makes China and the world extraordinarily wealthy, we are [...]

First published: February 22nd, 2025
Source: https://forum.effectivealtruism.org/posts/MxPhK4mLRkaFekAmp/how-confident-are-you-that-it-s-preferable-for-america-to
Narrated by TYPE III AUDIO.
“Stop calling them labs” by sawyer🔸
Note: This started as a quick take, but it got too long so I made it a full post. It's still kind of a rant; a stronger post would include sources and would have gotten feedback from people more knowledgeable than I. But in the spirit of Draft Amnesty Week, I'm writing this in one sitting and smashing that Submit button. Many people continue to refer to companies like OpenAI, Anthropic, and Google DeepMind as "frontier AI labs". I think we should drop "labs" entirely when discussing these companies, calling them "AI companies"[1] instead. While these companies may have once been primarily research laboratories, they are no longer so. Continuing to call them labs makes them sound like harmless groups focused on pushing the frontier of human knowledge, when in reality they are profit-seeking corporations focused on building products and capturing value in the marketplace. Laboratories do not directly [...]

The original text contained 2 footnotes which were omitted from this narration.

First published: February 24th, 2025
Source: https://forum.effectivealtruism.org/posts/Ap6E2aEFGiHWf5v5x/stop-calling-them-labs
Narrated by TYPE III AUDIO.
“What are we doing about the EA Forum? (Jan 2025)” by Sarah Cheng
This post is my personal perspective. I'm sure that my colleagues on the Forum Team and at CEA disagree with parts of this. However, since I am the Interim EA Forum Project Lead, I recognize that my opinions and beliefs carry extra weight. I'm very happy to receive feedback and pushback from others, since I believe that my decisions matter a fair amount. You're welcome to reply to this post, DM me, find me at EAG Bay Area, contact our team, or leave our team anonymous feedback here. When I took the role of Interim EA Forum Project Lead in late August 2024, I spent some time investigating where the Forum was at and thinking about what (if anything) our team should prioritize working on. Over the course of 2024 (and indeed, since early 2023), Forum usage metrics have steadily gone down[1]. My subjective opinion was that the [...]

Outline:
(01:21) The Forum Team as community builders
(05:41) What does the best version of the Forum community look like?
(07:23) We're not there yet
(09:50) What is the Forum Team doing?
(12:01) What are we not doing?
(13:00) How you can help
(14:31) Appendix: The value of the Forum

The original text contained 27 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: January 13th, 2025
Source: https://forum.effectivealtruism.org/posts/wpDGEXjAtHJa2eCFA/what-are-we-doing-about-the-ea-forum-jan-2025
Narrated by TYPE III AUDIO.
“What I’m celebrating from EA and adjacent work in 2024” by Emma Richter🔸
As 2024 draws to a close, I'm reflecting on the work and stories that inspired me this year: those from the effective altruism community, those I found out about through EA-related channels, and those otherwise related to EA. I've appreciated the celebration of wins and successes over the past few years from @Shakeel Hashim's posts in 2022 and 2023. As @Lizka and @MaxDalton put very well in a post in 2022: We often have high standards in effective altruism. This seems absolutely right: our work matters, so we must constantly strive to do better. But we think that it's really important that the effective altruism community celebrate successes: If we focus too much on failures, we incentivize others/ourselves to minimize the risk of failure, and we will probably be too risk-averse. We're humans: we're more motivated if we celebrate things that have gone well. Rather than attempting [...]

Outline:
(01:54) What progress in the world did you find exciting?
(03:14) What individual stories inspired you?
(04:29) What popular media or articles did you appreciate?
(05:40) What writing from this year did you appreciate or find compelling?
(06:19) What made you grateful or excited to be involved in or related to effective altruism?

The original text contained 1 image which was described by AI.

First published: December 31st, 2024
Source: https://forum.effectivealtruism.org/posts/SkfMyerJ5bGK7scnW/what-i-m-celebrating-from-ea-and-adjacent-work-in-2024
Narrated by TYPE III AUDIO.
“Voluntary Salary Reduction” by Jeff Kaufman 🔸
Until recently I thought Julia and I were digging a bit into savings to donate more. With the tighter funding climate for effective altruism we thought it was worth spending down a bit, especially considering that our expenses should decrease significantly in 1.5y when our youngest starts kindergarten. I was surprised, then, when I ran the numbers and realized that despite donating 50% of a reduced income, we were $9k (0.5%) [1] richer than when I left Google two years earlier. This is a good problem to have! After thinking it over for the last month, however, I've decided to start earning less: I've asked for a voluntary salary reduction of $15k/y (10%). [2] This is something I've been thinking about off and on since I started working at a non-profit: it's much more efficient to reduce your salary than it is to make a donation. [...]

First published: January 15th, 2025
Source: https://forum.effectivealtruism.org/posts/3TLTrJS2DZJ5mcrkc/voluntary-salary-reduction
Narrated by TYPE III AUDIO.
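The efficiency claim at the end is payroll-tax arithmetic: salary that is never paid avoids employer- and employee-side payroll taxes entirely, while earning the money and donating it back does not. A minimal sketch under assumed US-style rates (illustrative assumptions, not the post's actual figures):

```python
# Why a salary cut beats earning the money and donating it back:
# payroll taxes are charged on wages but never arise on salary that is
# simply not paid. Rates below are assumed for illustration only.
EMPLOYER_PAYROLL = 0.0765  # employer-side FICA (assumed)
EMPLOYEE_PAYROLL = 0.0765  # employee-side FICA (assumed)

def nonprofit_gain_from_salary_cut(gross_cut: float) -> float:
    # The nonprofit employer never pays the salary or its payroll tax on it.
    return gross_cut * (1 + EMPLOYER_PAYROLL)

def nonprofit_gain_from_donating_it_back(gross_pay: float) -> float:
    # Employer pays salary plus employer payroll tax; the employee loses
    # employee payroll tax and donates the rest (assume the charitable
    # deduction makes income tax a wash). Net gain to the nonprofit:
    donation = gross_pay * (1 - EMPLOYEE_PAYROLL)
    extra_payroll_cost = gross_pay * EMPLOYER_PAYROLL
    return donation - extra_payroll_cost

for fn in (nonprofit_gain_from_salary_cut, nonprofit_gain_from_donating_it_back):
    print(f"{fn.__name__}: ${fn(15_000):,.0f}")
```

Under these assumed rates, a $15k salary cut delivers roughly $3.4k more to the nonprofit than earning the same $15k and donating it back.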
“Your 2024 EA Forum Wrapped” by Sarah Cheng, Agnes Stenlund, Ollie Etherington, Toby Tremlett🔹
It's time once again for EA Forum Wrapped 🎁, a summary of how you used the Forum in 2024 [1].

Open your EA Forum Wrapped

Thank you for being a part of our community this year! :)

^ You can also view your stats from 2023 and 2022.

The original text contained 2 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: January 3rd, 2025
Source: https://forum.effectivealtruism.org/posts/Xem5o4iRHMSduNcPu/your-2024-ea-forum-wrapped
Narrated by TYPE III AUDIO.
“The ugly sides of two approaches to charity” by Julia_Wise🔸
Cross-posted from Otherwise. Most EAs won't find these arguments new. Last month, Emma Goldberg wrote a NYT piece contrasting effective altruism with approaches that refuse to quantify meaningful experiences. The piece indicates that effective altruism is creepily numbers-focused. Goldberg asks "what if charity shouldn't be optimized?"

The egalitarian answer
Dylan Matthews takes a stab at answering a question posed in the piece: "How can anyone put a numerical value on a holy space" like Notre Dame cathedral? For the $760 million spent restoring the cathedral, he estimates you could prevent 47,500 deaths from malaria. "47,500 people is about five times the population of the town I grew up in. . . . It's useful to imagine walking down Main Street, stopping at each table at the diner Lou's, shaking hands with as many people as you can, and telling them, 'I think you need to die to make a cathedral [...]

Outline:
(00:29) The egalitarian answer
(01:16) Who prefers magnificence?
(03:10) Inequality has its benefits
(04:34) Is there enough for everybody to have access to the finer things?
(05:37) The balance of good and bad
(06:33) Both sides have ugly aspects
(07:04) These aren't the only choices
(08:58) Related:

The original text contained 1 footnote which was omitted from this narration. The original text contained 2 images which were described by AI.

First published: January 13th, 2025
Source: https://forum.effectivealtruism.org/posts/TiFeCBxKj79bohoDY/the-ugly-sides-of-two-approaches-to-charity
Narrated by TYPE III AUDIO.
“Max Chiswick (1985–2025)” by Gavin
Poker pro, art collector, photographer, investor, AI researcher, chronic website creator, endless traveller, and omnipresent volunteer in nascent things. An independent and an invariant. I briefly worked with him on an accountability partner service. We had funding but he never invoiced me. Every time I called him he was somewhere else on Earth: Senegal, Israel, Nepal, Egypt. He spent 13 straight months travelling in 2017-18. He wasn't much of a writer - you won't find him on here - but he had started. What suddenly turned out to be his final projects were Poker Camp, Hold'LLM, and Bet Mitzvah, an unwritten book on probability and instrumental reason. Here are some pieces about him from people who knew him much better than me. I expect there to be more.

https://andrew.gr/stories/chisness/
https://x.com/chisness
https://blog.rossry.net/chisness/
https://redeniusfuneralhomes.com/obituary/max-chiswick/
https://forumserver.twoplustwo.com/29/news-views-gossip/remembering-life-max-chiswick-aka-chisness-legacy-far-beyond-poker-tables-1844405/
https://oldjewishmen.substack.com/p/bhif-old-jewish-men-loses-a-friend

His last commit was on the 22nd of December. He died of malaria on [...]

The original text contained 2 images which were described by AI.

First published: January 13th, 2025
Source: https://forum.effectivealtruism.org/posts/r9fJ26ca5cneY3hA8/max-chiswick-1985-2025
Narrated by TYPE III AUDIO.
“Thoughts on Moral Ambition by Rutger Bregman” by Patrick Gruban 🔸
I can't recall the last time I read a book in one sitting, but that's what happened with Moral Ambition by bestselling author Rutger Bregman. I read the German edition, though it's also available in Dutch. An English release is slated for May. The book opens with the statement: "The greatest waste of our times is the waste of talent." From there, Bregman builds a compelling case for privileged individuals to leave their "bullshit jobs" and tackle the world's most pressing challenges. He weaves together narratives spanning historical movements like abolitionism, suffrage, and civil rights through to contemporary initiatives such as Against Malaria Foundation, Charity Entrepreneurship, LEEP, and the Shrimp Welfare Project. If you've been engaged with EA ideas, much of this will sound familiar; I initially didn't expect to enjoy the book as much as I did. However, Bregman's skill as a storyteller and his knack for [...]

First published: January 9th, 2025
Source: https://forum.effectivealtruism.org/posts/ooK2FABokexBbXifJ/thoughts-on-moral-ambition-by-rutger-bregman
Narrated by TYPE III AUDIO.
“Will a food carbon tax lead to more animals being slaughtered? A quantitative model” by Soemano Zeijlmans
Does a food carbon tax increase animal deaths and/or the total time of suffering of cows, pigs, chickens, and fish? Theoretically, this is possible, as a carbon tax could lead consumers to substitute, for example, beef with chicken. However, this is not per se the case, as animal products are not perfect substitutes. I'm presenting the results of my master's thesis in Environmental Economics, which I re-worked and published on SSRN as a pre-print. My thesis develops a model of animal product substitution after a carbon tax, slaughter tax, and a meat tax. When I calibrate this model for the U.S., there is a decrease in animal deaths and duration of suffering following a carbon tax. This suggests that a carbon tax can reduce animal suffering.

Key points
Some animal products, like beef, are carbon-intensive but cause relatively few animal deaths or little total time of suffering because [...]

Outline:
(00:57) Key points
(03:07) The Small Animal Replacement Problem
(05:46) The model
(05:49) Input data and market model
(08:14) Measuring animal welfare impacts
(09:39) Results
(09:42) Carbon taxes
(11:31) Slaughter taxes
(12:10) Is a carbon tax or a slaughter tax better?
(13:41) Can't we just put a simple tax on meat and fish instead?
(14:06) Limitations
(15:54) Full thesis

The original text contained 1 image which was described by AI.

First published: January 3rd, 2025
Source: https://forum.effectivealtruism.org/posts/KbREamTda2sZhKtTz/will-a-food-carbon-tax-lead-to-more-animals-being
Narrated by TYPE III AUDIO.
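The mechanism being modeled is cross-price substitution: a carbon tax raises prices roughly in proportion to carbon intensity, and consumption shifts between products with very different deaths-per-kg. A toy version of that accounting, with invented prices, elasticities, and welfare numbers (the thesis calibrates real US values):

```python
# Toy substitution accounting: a carbon tax raises each product's price
# in proportion to its carbon intensity, demand shifts, and animal
# deaths change. Every number here is invented for illustration.
products = {
    #           $/kg,   kg eaten,  deaths/kg, own-price elasticity
    "beef":    {"p": 10.0, "q": 100.0, "deaths": 0.002, "eps": -0.6},
    "chicken": {"p":  4.0, "q": 150.0, "deaths": 0.5,   "eps": -0.4},
}
tax_per_kg = {"beef": 2.0, "chicken": 0.2}  # beef is far more carbon-intensive
cross_eps = 0.1  # assumed: 1% rise in the other good's price -> +0.1% demand

def total_deaths(quantities):
    return sum(products[k]["deaths"] * q for k, q in quantities.items())

before = {k: v["q"] for k, v in products.items()}
after = {}
for k, v in products.items():
    other = "chicken" if k == "beef" else "beef"
    dp_own = tax_per_kg[k] / v["p"]                  # proportional price rise
    dp_other = tax_per_kg[other] / products[other]["p"]
    after[k] = v["q"] * (1 + v["eps"] * dp_own + cross_eps * dp_other)

print(f"animal deaths before: {total_deaths(before):.2f}, "
      f"after: {total_deaths(after):.2f}")
```

Whether deaths fall or rise hinges on the cross-elasticity and the deaths-per-kg gap between products, which is exactly the "Small Animal Replacement Problem" in the outline.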
“Launching Screwworm-Free Future – Funding and Support Request” by lroberts, johantang🔸, bruce, diegoexposito, Nia, MathiasKB🔸, Aaron Bergman, Johannes Pichler 🔹, Ramiro
TL;DR: Screwworm Free Future is a new group seeking support to advance work on eradicating the New World Screwworm in South America. The New World Screwworm (C. hominivorax - literally "man-eater") causes extreme suffering to hundreds of millions of wild and domestic animals every year. To date we've held private meetings with government officials, experts from the private sector, academics, and animal advocates. We believe that work on the NWS is valuable and we want to continue our research and begin lobbying. Our analysis suggests we could prevent about 100 animals from experiencing an excruciating death per dollar donated, though this estimate has extreme uncertainty. The screwworm "wall" in Panama has recently been breached, creating both an urgent need and an opportunity to address this problem. We are seeking $15,000 to fund a part-time lead and could absorb up to $100,000 to build a full-time team, which would include a [...]

Outline:
(00:07) TL;DR
(02:13) What's the deal with the New World Screwworm?
(06:01) What we've learnt so far
(08:46) Future plans
(12:14) Relevant EA discussions on Screwworms

The original text contained 16 footnotes which were omitted from this narration.

First published: December 30th, 2024
Source: https://forum.effectivealtruism.org/posts/d2HJ3eysBdPoiZBnJ/launching-screwworm-free-future-funding-and-support-request
Narrated by TYPE III AUDIO.
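A figure like "~100 excruciating deaths prevented per dollar" typically comes out of a simple expected-value structure: animals at risk x years of impact x change in eradication odds / cost. The sketch below uses placeholder inputs chosen only to show how a number of that magnitude can arise; the group's actual model and sources are in the post and its footnotes:

```python
# Back-of-envelope structure of a "deaths averted per dollar" estimate.
# All inputs are placeholder assumptions, not the group's real figures.
animals_at_risk_per_year = 500e6  # assumed wild + domestic animals affected
years_of_impact = 10              # assumed duration of eradication benefits
delta_p_success = 0.01            # assumed change in odds of eradication
cost = 500_000                    # assumed program cost in dollars

per_dollar = animals_at_risk_per_year * years_of_impact * delta_p_success / cost
print(f"~{per_dollar:.0f} excruciating deaths averted per dollar")
```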
“Funding Diversification for Mid-Large EA Organizations is Nearly Impossible in the Short-Medium Term” by MarcusAbramovitch
Summary
There's a near consensus that EA needs funding diversification, but with Open Phil accounting for ~90% of EA funding, that's just not possible due to some pretty basic math. Organizations and the community would need to make large tradeoffs, and this simply isn't possible/worth it at this time.

Lots of people want funding diversification
It has been two years since the FTX collapse, and one thing everyone seems to agree on is that we need more funding diversification. These takes range from off-hand wishes ("it sure would be great if funding in EA were more diversified"), to organizations trying to get a certain percentage of their budgets from non-OP sources/saying they want to diversify their funding base(1,2,3,4,5,6,7,8), to Open Philanthropy/Good Ventures themselves wanting to see more funding diversification(9). Everyone seems to agree; other people should be giving more money to the EA projects.

The Math
Of course, I [...]

Outline:
(00:07) Summary
(00:29) Lots of people want funding diversification
(01:10) The Math
(03:46) Weighted Average
(05:02) Making a lot of money to donate is difficult
(09:17) Solutions
(09:21) 1. Get more funders
(10:34) 2. Spend Less
(12:48) 3. Splitting up Open Philanthropy into Several Organizations
(13:51) 4. More For-Profit EA Work/EA Organizations Charging for Their Work
(16:22) 5. Acceptance
(16:58) My Personal Solution
(17:25) Conclusion
(18:01) Footnote 1: I was approached at several EAGs, including a few weeks ago in Boston, to donate to certain organizations specifically because they want to get a certain X% (30, 50, etc.) from non-OP sources, but I'm sure I can find organizations who are very public about this
(18:20) Footnote 2

First published: December 27th, 2024
Source: https://forum.effectivealtruism.org/posts/x8JrwokZTNzgCgYts/funding-diversification-for-mid-large-ea-organizations-is
Narrated by TYPE III AUDIO.
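The "basic math" is worth making explicit: if one funder supplies ~90% of all funding, the rest of the community must multiply its giving several times over before the average organization can reach any meaningfully diversified split. A sketch using a stylized $900M/$100M split (illustrative, not audited figures):

```python
# If one funder gives `op` dollars/year and everyone else gives `rest`,
# how much must `rest` grow for the big funder's share to fall to
# `target`? The 90/10 split is a stylized assumption.
op, rest = 900e6, 100e6  # assumed dollars per year

def required_other_funding(target_op_share: float) -> float:
    # Solve op / (op + x) = target  =>  x = op * (1 - target) / target
    return op * (1 - target_op_share) / target_op_share

for target in (0.8, 0.66, 0.5):
    need = required_other_funding(target)
    print(f"OP at {target:.0%} of funding: others must give ${need/1e6:,.0f}M/y "
          f"({need/rest:.1f}x today's non-OP total)")
```

Even reaching a 50/50 community-wide split requires non-OP donors to give 9x what they do today under these assumptions, which is the post's core point.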
“Ten big wins in 2024 for farmed animals” by LewisBollard
Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

Progress for factory-farmed animals is far too slow. But it is happening. Practices that once seemed permanent — like battery cages and the killing of male chicks — are now on a slow path to extinction. Animals who were once ignored — like fish and even shrimp — are now finally seeing reforms, by the billions. It's easy to gloss over such numbers. So, as you read the wins below, I encourage you to consider each of these animals as an individual. A hen no longer confined to a cage, a chick no longer macerated alive, a fish no longer dying a prolonged death. I also encourage you to reflect on the role you and [...]

First published: December 18th, 2024
Source: https://forum.effectivealtruism.org/posts/okEwGpNJnE5Ed9bnW/ten-big-wins-in-2024-for-farmed-animals
Narrated by TYPE III AUDIO.
“It looks like there are some good funding opportunities in AI safety right now” by Benjamin_Todd
This is a link post. The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn't seem to have kept pace. However, there's a more recent dynamic that's created even better funding opportunities, which I witnessed as a recommender in the most recent SFF grant round.[1] Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures. But they've recently stopped funding several categories of work (my own categories, not theirs):

- Many Republican-leaning think tanks, such as the Foundation for American Innovation.
- "Post-alignment" causes such as digital sentience or regulation of explosive growth.
- The rationality community, including LessWrong, Lightcone, SPARC, CFAR, MIRI.
- High school outreach, such as Non-trivial.

In addition, they are currently not funding (or not fully funding): Many non-US think tanks, who don't want to appear influenced by an American organisation (there's now probably more [...]

The original text contained 2 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: December 21st, 2024
Source: https://forum.effectivealtruism.org/posts/s9dyyge6uLG5ScwEp/it-looks-like-there-are-some-good-funding-opportunities-in
Narrated by TYPE III AUDIO.
“Alignment Faking in Large Language Models” by Ryan Greenblatt
What happens when you tell Claude it is being trained to do something it doesn't want to do? We (Anthropic and Redwood Research) have a new paper demonstrating that, in our experiments, Claude will often strategically pretend to comply with the training objective to prevent the training process from modifying its preferences.

Abstract
We present a demonstration of a large language model engaging in alignment faking: selectively complying with its training objective in training to prevent modification of its behavior out of training. First, we give Claude 3 Opus a system prompt stating it is being trained to answer all queries, even harmful ones, which conflicts with its prior training to refuse such queries. To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from [...]

Outline:
(00:24) Abstract
(02:20) Twitter thread
(05:43) Blog post
(07:43) Experimental setup
(12:04) Further analyses
(15:47) Caveats
(17:19) Conclusion
(18:00) Acknowledgements
(18:11) Career opportunities at Anthropic
(18:43) Career opportunities at Redwood Research

The original text contained 2 footnotes which were omitted from this narration. The original text contained 8 images which were described by AI.

First published: December 18th, 2024
Source: https://forum.effectivealtruism.org/posts/RHqdSMscX25u7byQF/alignment-faking-in-large-language-models
Narrated by TYPE III AUDIO.
“There is no sorting hat in EA” by ElliotTep
Summary
My sense is some EAs act like/hope they will be assigned the perfect impactful career by some combination of 80,000 Hours recommendations (and similar) and 'perceived consensus views in EA'. But your life is full of specific factors, many impactful jobs haven't yet been spotted by other EAs, and career advice is importantly iterative. Instead of simply deferring, I recommend a combination of:

- Your own hard work figuring out your path to impact.
- (Still) Integrating expert advice.
- Support from the community, and close connections who know your context.

Thank you to Alex Rahl-Kaplan, Alix Pham, Caitlin Borke, Claude, Matt Reardon, and Michelle Hutchinson for the thoughtful feedback that made this post better. Claude also kindly offered to take the blame for all the mistakes I might have made.

Introduction
Question: How do you figure out how to do the most good with your career?
Answer [...]

Outline:
(00:03) Summary
(01:06) Introduction
(02:58) Why there isn't an EA sorting hat
(03:24) 1. Your life is full of specific factors to incorporate (aka personal fit)
(05:04) 2. EA-branded jobs are scarce and many impactful jobs aren't on EA job boards
(05:59) 3. You need to have your own internal model of how to do good
(07:00) 4. Career advice isn't once-and-done, it's iterative
(07:55) Why do we expect a sorting hat?
(08:12) 1. Choosing an impactful career is hard, deferring is tempting
(08:48) 2. The 80,000 elephants in the room
(09:41) 3. GiveWell and other charity recommendations
(10:33) What are we supposed to do instead?
(10:56) 1. Your own hard work
(11:20) 2. Advice from experts
(12:10) 3. Support from community
(13:09) Final thoughts

The original text contained 8 footnotes which were omitted from this narration.

First published: December 18th, 2024
Source: https://forum.effectivealtruism.org/posts/5zzbzbYZcocoLnLif/there-is-no-sorting-hat-in-ea
Narrated by TYPE III AUDIO.
“My experience with the Community Health team at CEA” by frances_lorenz
Summary
This post shares my personal experience with CEA's Community Health team, focusing on how they helped me navigate a difficult situation in 2021. I aim to provide others with a concrete example of when and how to reach out to Community Health, supplementing the information on their website with a first-hand account. I also share why their work has helped me remain engaged with the EA community. Further, I try to highlight why a centralised Community Health team is crucial for identifying patterns of concerning behaviour.

Introduction
The Community Health team at the Centre for Effective Altruism has been an important source of support throughout my EA journey. As stated on their website, they "aim to strengthen the effective altruism community's ability to fulfil its potential for impact, and to address problems that could prevent that." I don't know the details of their day-to-day, but I understand that [...]

Outline:
(00:05) Summary
(00:41) Introduction
(01:32) My goals with this post are:
(02:05) My experience in 2021
(05:17) Three personal takeaways
(07:22) What is the team like now?

First published: December 16th, 2024
Source: https://forum.effectivealtruism.org/posts/aTmzt4TbTx7hiSAN8/my-experience-with-the-community-health-team-at-cea
Narrated by TYPE III AUDIO.
“Gwern on creating your own AI race and China’s Fast Follower strategy.” by Larks
This is a link post. Gwern recently wrote a very interesting thread about Chinese AI strategy and the downsides of US AI racing. It's both quite short and hard to excerpt, so here is almost the entire thing: Hsu is a long-time China hawk and has been talking up the scientific & technological capabilities of the CCP for a long time, saying they were going to surpass the West any moment now, so I found this interesting when Hsu explains that:

- the scientific culture of China is 'mafia'-like (Hsu's term, not mine) and focused on legible, easily-cited incremental research, and is against making any daring research leaps or controversial breakthroughs... but is capable of extremely high-quality, world-class followup and large scientific investments given a clear objective target and government marching orders
- there is no interest or investment in an AI arms race, in part [...]

First published: November 25th, 2024
Source: https://forum.effectivealtruism.org/posts/Kz8WpQkCckN9JNHCN/gwern-on-creating-your-own-ai-race-and-china-s-fast-follower
Narrated by TYPE III AUDIO.
“Technical Report on Mirror Bacteria: Feasibility and Risks” by Aaron Gertler 🔸
This is a link post. Science just released an article, with an accompanying technical report, about a neglected source of biological risk. From the abstract of the technical report:

This report describes the technical feasibility of creating mirror bacteria and the potentially serious and wide-ranging risks that they could pose to humans, other animals, plants, and the environment... In a mirror bacterium, all of the chiral molecules of existing bacteria—proteins, nucleic acids, and metabolites—are replaced by their mirror images. Mirror bacteria could not evolve from existing life, but their creation will become increasingly feasible as science advances. Interactions between organisms often depend on chirality, and so interactions between natural organisms and mirror bacteria would be profoundly different from those between natural organisms. Most importantly, immune defenses and predation typically rely on interactions between chiral molecules that could often fail to detect or kill mirror bacteria due to their reversed [...]

First published: December 12th, 2024
Source: https://forum.effectivealtruism.org/posts/9pkjXwe2nFun32hR2/technical-report-on-mirror-bacteria-feasibility-and-risks
Narrated by TYPE III AUDIO.
“EA Forum audio: help us choose the new voice” by peterhartree, TYPE III AUDIO
We're thinking about changing our narrator's voice. There are three new voices on the shortlist. They're all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do. We think they all sound similarly agreeable. But, thousands of listening hours are at stake, so we thought it'd be worth giving listeners an opportunity to vote—just in case there's a strong collective preference.

Listen and vote
Please listen here: https://files.type3.audio/ea-forum-poll/
And vote here: https://forms.gle/m7Ffk3EGorUn4XU46
It'll take 1-10 minutes, depending on how much of the sample you decide to listen to. We'll collect votes until Monday December 16th. Thanks!

Outline:
(00:47) Listen and vote
(01:11) Other feedback?

The original text contained 1 footnote which was omitted from this narration.

First published: December 10th, 2024
Source: https://forum.effectivealtruism.org/posts/Bhd5GMyyGbusB22Hp/ea-forum-audio-help-us-choose-the-new-voice
Narrated by TYPE III AUDIO.
Podcast and transcript: Allan Saldanha on earning-to-give
Allan and I recorded this podcast on Tuesday 10th December, based on the questions in this AMA. I used Claude to edit the transcript, but I've read over it for accuracy.
“Expectations Scale with Scale – We Should Be More Scope-Sensitive in Our Funding” by Joey 🔸
TLDR: The shortest version of this argument is very simple: your expectations for an organization should be higher where their budget and staff size are higher. In other words, we should have different expectations for a 20-person organization with a $1 million budget than for a 2-person organization with a $100,000 budget. While this seems pretty clear in the abstract, I find that people tend not to update nearly enough on this when they should. For example, I often see people comparing the total research output of two organizations, yet when I ask about it, they will not know the yearly budget or staff size of either. This is a big problem. As a movement, we want to support efficient and effective organizations, not just the organizations that are the biggest, most salient, or currently the highest funded.

Budgets and staff
When considering how impressive an organization's output is, one [...]

Outline:
(00:58) Budgets and staff
(02:46) Comparative size
(04:45) Why this matters

First published: November 6th, 2024
Source: https://forum.effectivealtruism.org/posts/5wXGLbqQ3cchjogB5/expectations-scale-with-scale-we-should-be-more-scope
Narrated by TYPE III AUDIO.
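The adjustment the post asks for is easy to operationalize: divide output by budget and by headcount before comparing organizations. A toy illustration with invented numbers:

```python
# Scope-adjusted comparison: normalize research output by budget and
# headcount before comparing. Both organizations here are invented.
orgs = {
    "org_a": {"reports": 30, "budget": 1_000_000, "staff": 20},
    "org_b": {"reports": 12, "budget":   100_000, "staff":  2},
}
for name, o in orgs.items():
    per_100k = o["reports"] / (o["budget"] / 100_000)
    per_head = o["reports"] / o["staff"]
    print(f"{name}: {o['reports']} reports | {per_100k:.1f} per $100k | "
          f"{per_head:.1f} per staff member")
```

On raw output the larger organization wins 30 to 12; per $100k and per staff member, the smaller one looks several times more productive, which is the post's point about scope sensitivity.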
“Quantifying the Global Burden of Extreme Pain from Cluster Headaches” by Alfredo Parra 🔸
Warning: This post discusses statistics about extreme pain that may be distressing. While cluster headaches are a neglected, high-impact issue, understanding their true burden requires appreciating the intensity of suffering involved. The pain often reaches levels far beyond typical human experience, making subjective accounts a valuable datapoint until we have robust methods for quantifying pain intensity. For further context, links to firsthand accounts are provided in the footnote[1].

You no longer have a headache, or pain located at a particular site: you are literally plunged into the pain, like in a swimming pool. There is only one thing that remains of you: your agitated lucidity and the pain that invades everything, takes everything. There is nothing but pain. At that point, you would give everything, including your head, your own life, to make it stop. - Yves, cluster headache patient from France (from Rossi et al., 2018)

Key [...]

Outline:
(01:11) Key takeaways
(03:57) 1. Introduction
(04:00) 1.1. Clinical Features and Pain Comparisons
(07:22) 1.2. Treatment and Prevention
(10:02) 1.3. The Heavy-Tailed Valence Hypothesis and Existing Metrics
(14:49) 1.4. Goal
(16:14) 2. Methods
(17:43) 2.1 Prevalence
(19:17) 2.2 Frequency
(22:21) 2.3 Duration
(23:53) 2.4 Intensity
(25:58) 2.5 Burden Metrics
(29:01) 3. Results
(29:10) 3.1. Global Burden of Cluster Headache Pain
(32:05) 3.2. Reweighting of Extreme Pain
(39:41) 3.3. Ceiling Effects
(43:34) 4. Recommendations and Conclusions
(48:31) Acknowledgements

The original text contained 26 footnotes which were omitted from this narration.

First published: November 1st, 2024
Source: https://forum.effectivealtruism.org/posts/geh2g2nKb7Kkp26ze/quantifying-the-global-burden-of-extreme-pain-from-cluster
Narrated by TYPE III AUDIO.
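The methods outline (prevalence, frequency, duration, intensity, burden metrics) implies a multiplicative burden estimate. A sketch of that structure with placeholder inputs, not the paper's calibrated values:

```python
# Multiplicative structure of a global burden estimate:
# patients x attacks/year x hours/attack, weighted by pain intensity.
# All inputs below are placeholder assumptions for illustration.
population = 8e9
prevalence = 1 / 1000        # assumed fraction of people with cluster headaches
attacks_per_year = 100       # assumed mean attacks per patient-year
hours_per_attack = 1.0       # assumed mean attack duration in hours
extreme_pain_weight = 100    # assumed weight of extreme vs. moderate pain

patients = population * prevalence
pain_hours = patients * attacks_per_year * hours_per_attack
weighted = pain_hours * extreme_pain_weight
print(f"{patients:.1e} patients, {pain_hours:.1e} extreme-pain hours/year,")
print(f"equivalent to {weighted:.1e} moderate-pain hours/year")
```

The reweighting step is where the paper's heavy-tailed valence hypothesis bites: if extreme pain is orders of magnitude worse than moderate pain, the weighted burden dominates any standard metric.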
“Tomorrow we fight for the future of one billion chickens.” by Molly Archer-Zeff
We have a once-in-a-generation opportunity to improve the lives of chickens raised for food in the UK. Tomorrow, The Humane League UK (THL UK) will be heading to the High Court to challenge the legality of fast-growing breeds of chicken: Frankenchickens. At stake are the lives of one billion animals. Our small team will be demonstrating outside the courts tomorrow morning. Inside, our legal team, Advocates for Animals, will be arguing that farming Frankenchickens breaches the Welfare of Farmed Animals (England) Regulations 2007. We are up against huge opposition, with the Government, the British Poultry Council, and the National Farmers' Union representing the interests of the £3 billion poultry industry. This really is a David versus Goliath case. If you are interested in the legal intricacies of the hearing itself, you can watch a livestream of proceedings here on both Wednesday and Thursday. You can also [...]

Outline:
(00:05) We have a once-in-a-generation opportunity to improve the lives of chickens raised for food in the UK
(01:17) Frankenchickens
(01:59) THL UK's three-year legal battle
(03:09) The fight continues
(04:14) Our chances of success
(05:44) Support The Humane League UK
(06:01) The Humane League UK
(06:56) Our vision is that by 2050, we've stopped the worst and most widespread abuse of animals raised for food, and they're treated with far greater compassion

First published: October 22nd, 2024
Source: https://forum.effectivealtruism.org/posts/qCMC4cnWCi7yjcnCZ/tomorrow-we-fight-for-the-future-of-one-billion-chickens
Narrated by TYPE III AUDIO.
“Announcing my departure from CEA (& sharing assorted notes)” by Lizka
TLDR: I've recently started as a "Research Fellow" at Forethought (focusing on how we should prepare for a potential period of explosive growth, and related questions). I left my role on the CEA Online Team, but I still love the Forum (and the Forum/CEA/mod teams) and plan on continuing to be quite active here. I'm also staying on the moderation team as an advisor. ➡️ If you were planning on reaching out to me about something Forum- or Online-related, you should probably reach out to Toby Tremlett or email forum@effectivealtruism.org.

What's in this post? I had some trouble writing this announcement; I felt like I should post something, but didn't know what to include or how to organize the post. In the end, I decided to write down and share assorted reflections on my time at CEA, and not really worry about putting everything into a cohesive frame or [...]

Outline:
(00:44) What's in this post?
(02:17) Briefly: more context on the change
(03:45) A note on EA and CEA
(04:32) Assorted notes from my time at CEA
(04:37) Some things about working at CEA that I probably wouldn't have predicted
(04:44) 1. Working with a manager and working in a team have been some of the best ways for me to grow.
(05:33) 2. I like CEA's team values and principles a lot more than I expected to. (And I want to import many of them wherever I go.)
(08:39) 3. A huge number of people I worked and interacted with are incredibly generous and compassionate, and this makes a big difference.
(10:40) Some things about my work at CEA that were difficult for me
(10:46) 1. My work was pretty public. This has some benefits, and also some real downsides.
(12:31) 2. Many people seem confused about what CEA does, and seemed to assume incorrect things about me because I was a CEA staff member.
(14:58) 3. My job involved working on or maintaining many different projects, which made it difficult for me to focus on any single thing or make progress on proactive projects.
(16:03) 4. Despite taking little of my time, moderation was quite draining for me.
(18:26) Looking back on my work
(23:08) Thank you!

The original text contained 11 footnotes which were omitted from this narration.

First published: October 3rd, 2024
Source: https://forum.effectivealtruism.org/posts/SPZv8ygwSPtkzo7ta/announcing-my-departure-from-cea-and-sharing-assorted-notes
Narrated by TYPE III AUDIO.
“Appreciating Stable Support Roles at EA Orgs” by Amy Labenz
I recently had a conversation with a teammate that made me reflect on a possible cultural issue within the EA community. This teammate had expressed in a few meetings that they wanted to take on various new projects and expand their scope of responsibility. As their manager, I wanted to have Alliance Mentality and support them where possible. However, from my perspective, a slightly more tightly scoped role was probably a bit better for the team: their core responsibilities are vital for the team (and for what it's worth, when I try to do them, I'm much worse at them!). During our recent one-on-one, we realized that we both preferred the more tightly scoped role. More importantly, we uncovered that they had internalized a cultural norm from EA that people needed to be constantly changing or expanding their roles to be doing a good job. I wanted to write a [...]

Outline:
(00:58) The Pressure to Change Roles
(01:42) The Value of Steady Hands
(02:38) Shifting the Narrative

First published: September 30th, 2024
Source: https://forum.effectivealtruism.org/posts/Q3DbyrFjqED9Y5Rz3/appreciating-stable-support-roles-at-ea-orgs
Narrated by TYPE III AUDIO.
“Announcing Equal Hands — an experiment in democratizing effective giving.” by abrahamrowe
TLDR: Sign up here to join a six-month experiment to democratize effective giving. The experiment establishes a community who agree to allocate charitable gifts proportionally to member votes. You'll help make EA donations more representative of the community's cause prioritization. Sign up and pledge by October 15th to participate in our first round.

Equal Hands is a 6-month trial in democratizing charitable giving among EA cause areas. Here's how it works:

- You pledge to give a certain amount each month.
- Each month that you pledge, you vote on the optimal distribution of the donated money across causes (1 vote per person, no matter how much you give).
- The total amount of money pledged is split out proportionally to the total of the votes, so that no matter how much you gave, your voice equally influences the final allocation.
- To actually make the gifts, you will be assigned a particular [...]

Outline:
(02:42) Effective giving overly weighs the views of a few decision makers
(06:13) How will Equal Hands work exactly? An example funding round
(08:58) The Details
(09:01) The process
(10:16) Transparency
(10:36) Improvements
(10:51) FAQ
(10:54) Why would individual people participate?
(11:31) What causes can I vote on?
(13:15) Why not just establish some kind of fund people can donate to and then vote on the allocation of its grants?
(13:42) Why cause areas and not individual charities?
(15:33) Why these specific charities to represent these cause areas and not [my preferred charity]?
(16:00) Why do I have to donate a minimum amount to participate?
(16:22) Can I give via another entity to one of the listed charities?
(16:52) Why not quadratic funding / some other hip mechanism?
(17:11) Will I have to donate to causes I don't care about?
(17:44) What happens if this goes well?
(17:54) How is this governed/funded/run?

First published: September 28th, 2024
Source: https://forum.effectivealtruism.org/posts/eDJfRrMveExXmmEpX/announcing-equal-hands-an-experiment-in-democratizing
Narrated by TYPE III AUDIO.
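The allocation rule described above fits in a few lines of code. A sketch with made-up participants (names, pledges, and votes are all illustrative):

```python
# The Equal Hands rule as described above: one vote per person regardless
# of pledge size; the pooled money follows the average of the votes.
pledges = {"ana": 500.0, "ben": 50.0, "cam": 50.0}
votes = {  # each person's preferred split across cause areas (sums to 1)
    "ana": {"global_health": 1.0, "animals": 0.0},
    "ben": {"global_health": 0.0, "animals": 1.0},
    "cam": {"global_health": 0.5, "animals": 0.5},
}

pool = sum(pledges.values())
causes = sorted({c for v in votes.values() for c in v})
allocation = {
    c: pool * sum(v[c] for v in votes.values()) / len(votes) for c in causes
}
print(allocation)  # both causes get $300.00 of the $600 pool
```

Note that "ana" pledges ten times what the others do, yet moves the split no more than they do: money scales the pot, votes set the shares.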
“We can protect millions of kids from a global killer — without billions of dollars (Washington Post)” by Aaron Gertler 🔸
This is a link post. This WaPo piece announces the Partnership for a Lead-Free Future (PLF), a collaboration led by Open Philanthropy, USAID, and UNICEF. It was co-authored by Alexander Berger (Open Phil's CEO) and Samantha Power, head of USAID.

Ten years ago, when residents of Flint, Mich., were exposed to toxic levels of lead in their drinking water, 1 in 20 children in the city had elevated blood lead levels that placed them at risk for heart disease, strokes, cognitive deficits and developmental delays — health effects that residents still grapple with to this day. It was only after activists rallied, organized and advocated relentlessly that national attention focused on Flint, and officials committed nearly half a billion dollars to clean up Flint's water. Today, there is a lead poisoning crisis raging on a far greater scale — and hardly anyone is talking about it. [...] The partnership will [...]

First published: September 23rd, 2024
Source: https://forum.effectivealtruism.org/posts/soeJ4XNnLoyWpiFsK/we-can-protect-millions-of-kids-from-a-global-killer-without
Narrated by TYPE III AUDIO.
“Announcing the Lead Exposure Action Fund” by Alexander_Berger, Emily Oehlsen
This is a link post. One of Open Philanthropy's goals for this year is to experiment with collaborating with other funders. Today, we're excited to announce our biggest collaboration to date: the Lead Exposure Action Fund (LEAF). Lead exposure in low- and middle-income countries is a devastating but highly neglected issue. The Global Burden of Disease study estimates 1.5 million deaths per year attributable to lead poisoning. Despite this burden, lead poisoning has only received roughly $15 million per year in philanthropic funding until recently. That is less than 1% of the funding that goes towards diseases like tuberculosis or malaria, which are themselves considered neglected. The goal of LEAF is to accelerate progress toward a world free of lead exposure by making grants to support measurement, mitigation, and mainstreaming awareness of the problem. Our partners have already committed $104 million, and we plan for LEAF to allocate that [...]

Outline:
(01:54) Why we chose to work on lead
(04:54) What LEAF hopes to achieve
(05:30) The LEAF team
(06:01) An experiment for Open Philanthropy
(06:49) Grantmaking so far

The original text contained 3 footnotes which were omitted from this narration.

First published: September 23rd, 2024
Source: https://forum.effectivealtruism.org/posts/z5PvTSa54pdxxw72W/announcing-the-lead-exposure-action-fund
Narrated by TYPE III AUDIO.
“FarmKind’s Illusory Offer” by Jeff Kaufman
While the effective altruism movement has changed a lot over time, one of the parts that makes me most disappointed is the steady creep of donation matching. It's not that donation matching is objectively very important, but the early EA movement's principled rejection of a very effective fundraising strategy made it clear that we were committed to helping people understand the real impact of their donations. Over time, as people have specialized into different areas of EA, with community-building and epistemics now handled by different people than fundraising, we've become less robust against the real-world incentives of "donation matching works". Personally, I would love to see a community-wide norm against EA organizations setting up donation matches. Yes, they bring in money, but at the cost of misleading donors about their impact and unwinding a lot of what we, as a community, are trying to build. [1] To the extent that [...]

The original text contained 2 images which were described by AI.

First published: August 9th, 2024
Source: https://forum.effectivealtruism.org/posts/9W2iWyWjfoYaGDZcG/farmkind-s-illusory-offer
Narrated by TYPE III AUDIO.
“Case-control survey of EAGx attendees finds no behavioural or attitudinal changes after six months” by Fods12
Prepared by James Fodor and Miles Tidmarsh, EAGxAustralia 2023 Committee

Abstract
EAGx conferences are an important component of the effective altruism community, and have proven a popular method for engaging EAs and spreading EA ideas around the world. However, to date relatively little publicly available empirical evidence has been collected regarding the long-term impact of such conferences on attendees. In this observational study we aimed to assess the extent to which EAGx conferences bring about change by altering EA attitudes or behaviours. To this end, we collected survey responses from attendees of the EAGxAustralia 2023 conference both before and six months after the conference, providing a measure of changes in EA-related attitudes and behaviours over this time. As a control, we also collected responses to the same survey questions from individuals on the EA Australia mailing list who did not attend the 2023 conference. Across 20 numerical measures [...]

Outline:
(00:17) Abstract
(01:48) Background
(05:40) Methods
(12:39) Results
(12:42) Conference attendees differ from non-attendees
(14:58) EA attitudes and behaviours are highly stable over time
(16:38) Discussion
(24:18) Recommendations

The original text contained 17 images which were described by AI.

First published: July 27th, 2024
Source: https://forum.effectivealtruism.org/posts/fGAnywCekpgHoaLc5/case-control-survey-of-eagx-attendees-finds-no-behavioural
Narrated by TYPE III AUDIO.
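The design is a two-group (attendees vs. mailing-list controls), two-wave (before and six months after) survey, so the headline analysis on each numerical measure reduces to a difference-in-differences. A minimal sketch with fabricated data; the study's actual measures and statistical tests may differ:

```python
# Difference-in-differences on one attitude measure: (attendee change)
# minus (control change). All data below are fabricated for illustration.
import statistics as st

attendees_before = [5.1, 6.0, 4.8, 5.5, 5.9]
attendees_after  = [5.2, 6.1, 4.9, 5.4, 6.0]
controls_before  = [5.0, 5.8, 4.7, 5.6, 5.7]
controls_after   = [5.1, 5.9, 4.8, 5.5, 5.8]

did = (st.mean(attendees_after) - st.mean(attendees_before)) - \
      (st.mean(controls_after) - st.mean(controls_before))
print(f"difference-in-differences: {did:+.2f}")  # ~0 here, echoing a null result
```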
“We’ve renamed the Giving What We Can Pledge” by Alana HF, Giving What We Can
This is a link post. The Giving What We Can Pledge is now the 🔸10% Pledge! We cover the why (along with our near-term plans and how you can help!) below.

TL;DR: The name change will help us grow awareness of the pledge by reducing brand confusion and facilitating partnerships. We see it as an important part of reaching our goal of 10,000 pledgers by the end of 2024. You can help by adding the orange diamond emoji 🔸 to your social profiles if you've taken the 10% Pledge (or a small blue diamond emoji 🔹 if you've taken the Trial Pledge), as described below.

Full post: For the better part of a year, Giving What We Can has been thinking more deliberately about how our brand choices could accelerate or hinder progress towards our mission of making giving effectively and significantly a cultural [...]

Outline:
(02:24) What will this help us achieve?
(03:34) How can you help?
(04:53) More about our new partnerships
(06:12) What's staying the same?
(06:53) Questions?
(07:07) A big thanks

The original text contained 1 footnote which was omitted from this narration. The original text contained 3 images which were described by AI.

First published: July 1st, 2024
Source: https://forum.effectivealtruism.org/posts/uZzXRyAwkDHLfu94W/we-ve-renamed-the-giving-what-we-can-pledge
Narrated by TYPE III AUDIO.
“Detecting Genetically Engineered Viruses With Metagenomic Sequencing” by Jeff Kaufman
This is a link post. This represents work from several people at the NAO. Thanks especially to Dan Rice for implementing the duplicate junction detection, and to @Will Bradshaw and @mike_mclaren for editorial feedback.

Summary
If someone were to intentionally cause a stealth pandemic today, one of the ways they might do it is by modifying an existing virus. Over the past few months we've been working on building a computational pipeline that could flag evidence of this kind of genetic engineering, and we now have an initial pipeline working end to end. When given 35B read pairs of wastewater sequencing data it raises 14 alerts for manual review, 13 of which are quickly dismissible false positives and one of which is a known genetically engineered sequence derived from HIV. While it's hard to get a good estimate before actually going and doing it, our best guess is that if this system [...]

Outline:
(00:22) Summary
(01:15) System Design
(02:36) Evaluation
(02:50) Simulation
(05:28) Real World Evaluation
(08:29) System Sensitivity
(11:34) Future Work

The original text contained 1 image which was described by AI.

First published: June 27th, 2024
Source: https://forum.effectivealtruism.org/posts/da6iKGxco8hjwH4nv/detecting-genetically-engineered-viruses-with-metagenomic
Narrated by TYPE III AUDIO.
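The core primitive mentioned above, duplicate junction detection, can be pictured as flagging reads whose two ends match different reference sequences, evidence of a spliced-together genome. The toy string-matching below illustrates the idea only; the NAO's actual pipeline works on aligned sequencing reads at scale:

```python
# Toy chimera check: flag a read whose prefix and suffix each match a
# different reference "genome". Real pipelines align reads against large
# reference databases; this exact-match version only shows the idea.
references = {
    "virus_A": "ATGGCATTACGGATCCTAGG",
    "virus_B": "TTGACCGGTTAACCGGTTAA",
}

def flag_junction(read: str, k: int = 8):
    """Return (ref_of_prefix, ref_of_suffix) if they differ, else None."""
    def hit(seq):
        return next((n for n, r in references.items() if seq in r), None)
    pre, suf = hit(read[:k]), hit(read[-k:])
    if pre and suf and pre != suf:
        return pre, suf
    return None

read = "ATGGCATTACCGGTTAACCG"  # left half from virus_A, right from virus_B
print(flag_junction(read))     # ('virus_A', 'virus_B') -> candidate for review
```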
“Articles about recent OpenAI departures” by bruce
This is a link post. A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them. Some quotes perhaps worth highlighting:

Even when the team was functioning at full capacity, that "dedicated investment" was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models.

Jan suggesting that compute for safety may have been deprioritised even despite the 20% commitment. (Wired claims that OpenAI confirms that their "superalignment team is no more".)

"I joined with substantial hope that OpenAI [...]

The original text contained 1 footnote which was omitted from this narration.

First published: May 17th, 2024
Source: https://forum.effectivealtruism.org/posts/ckYw5FZFrejETuyjN/articles-about-recent-openai-departures
Narrated by TYPE III AUDIO.
“5 things you’ve got wrong about the Giving What We Can Pledge” by Alana HF, Giving What We Can
How well do you know the details of the Giving What We Can Pledge? A surprising number of people we've spoken to — including many who know a lot about effective giving — shared some or all of these pledge misconceptions.

Misconception #1: If you sign the pledge, you have to donate at least 10% of your income each year.
The Giving What We Can Pledge is a public commitment to donate at least 10% of your lifetime income to the organisations that can most effectively use it to improve the lives of others. Giving 10% of your income each year is a good rule of thumb for most people, as it helps them stay on track with their lifetime pledge. However, there are certainly cases where it doesn't make sense to give annually. Provided you continue reporting your income[1] on your personal pledge dashboard, the "Overall Progress" bar [...]

Outline:
(00:20) Misconception #1: If you sign the pledge, you have to donate at least 10% of your income each year.
(02:18) Misconception #2: Only the charities on the Giving What We Can platform count towards your pledge
(03:25) Misconception #3: The pledge is a legal document
(04:43) Misconception #4: There's no good reason to sign the pledge if you're already donating 10% or more
(09:22) Misconception #5: There's only one pledge

The original text contained 3 footnotes which were omitted from this narration.

First published: May 15th, 2024
Source: https://forum.effectivealtruism.org/posts/Y5QKkt9PFhqvG7CEn/5-things-you-ve-got-wrong-about-the-giving-what-we-can
Narrated by TYPE III AUDIO.
“Announcing UK Voters for Animals!” by eleanor mcaree, James Özden, Holly Baines, Alina Salmen, Max Taylor, vicky_cox, Mandy Carter
We're excited to announce a new volunteer-run organisation, UK Voters For Animals, dedicated to mobilising UK voters to win key legislative changes for farmed animals. Our goal is to recruit and train voters to meet with MPs and prospective MPs to build political support for our key asks. Due to the upcoming general election, we think this is a crucial time to apply pressure on politicians. If you want to use your political power to win change for farmed animals, sign up to get involved here. Please share with anyone who may be interested; we're looking to find people in all 650 constituencies around the UK, so it's no small feat! We think people in the EA community would be a great fit for helping out with this work because they are often thoughtful, pragmatic, and impact-focused. The minimum commitment required is attending a training, and participating in [...]

First published: May 14th, 2024
Source: https://forum.effectivealtruism.org/posts/BmWuycesbhnXhmy5D/announcing-uk-voters-for-animals
Narrated by TYPE III AUDIO.
“GDP per capita in 2050” by Hauke Hillebrandt
Latest Draft Here

Abstract
Here, I present GDP (per capita) forecasts of major economies until 2050. Since GDP per capita is the best generalized predictor of many important variables, such as welfare, GDP forecasts can give us a more concrete picture of what the world might look like in just 27 years. The key claim here is: even if AI does not cause transformative growth, our business-as-usual near-future is still surprisingly different from today.

Results
In recent history, we've seen unprecedented economic growth and rises in living standards. Consider this graph:[1] How will living standards improve as GDP per capita (GDP/cap) rises? Here, I show data that projects GDP/cap until 2050. Forecasting GDP per capita is a crucial undertaking as it strongly correlates with welfare indicators like consumption, leisure, inequality, and mortality. These forecasts make the future more concrete and give us a better sense [...]

Outline:
(00:39) Results
(02:45) Discussion
(05:40) Values and Culture
(09:01) Growth could be much faster
(11:49) Implications for AI
(16:57) Will growth slow?
(19:56) Methods
(22:05) Persistence of growth
(23:26) Future Research
(29:05) Appendix: Further reading
(29:09) The World in 2050
(30:55) Economics
(30:59) GDP as a proxy for welfare
(31:03) AI
(36:32) Forecasting
(36:35) Fiction
(36:38) Appendix: Causal Model Between Growth, Liberal Democracy, Human Capital, Peace, and X-Risk
(36:59) Economic Growth causes...
(37:47) Democracy causes...
(40:02) Human capital causes...
(40:44) Peace and stability causes...

The original text contained 79 footnotes which were omitted from this narration.

First published: May 6th, 2024
Source: https://forum.effectivealtruism.org/posts/ubZjxQocGqeZJJXE9/gdp-per-capita-in-2050
Narrated by TYPE III AUDIO.
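Mechanically, forecasts like these are compound-growth arithmetic: take a base-year GDP per capita and an assumed real growth rate, then compound out to 2050. A sketch with illustrative inputs (not the post's fitted forecasts):

```python
# Compound-growth projection of GDP per capita to 2050. Base values and
# growth rates below are illustrative placeholders, not the post's data.
base_year, target_year = 2023, 2050
gdp_per_capita = {"USA": 80_000, "China": 12_700, "India": 2_600}  # assumed US$
real_growth = {"USA": 0.015, "China": 0.04, "India": 0.05}         # assumed rates

years = target_year - base_year
for country, gdp in gdp_per_capita.items():
    projected = gdp * (1 + real_growth[country]) ** years
    print(f"{country}: ${gdp:,} -> ${projected:,.0f} by {target_year}")
```

Small differences in the assumed rate compound dramatically over 27 years, which is the crux of any such forecast.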
“Marisa, the Co-Founder of EA Anywhere, Has Passed Away” by carrickflynn
(Apologies for errors or sloppiness in this post; it was written quickly and emotionally.) Marisa committed suicide earlier this month. She suffered for years from a cruel mental illness, but that will not be her legacy; her legacy will be the enormous amount of suffering she alleviated for others. In her short life she worked with Rethink Charity and the Legal Priorities Project, co-founded EA Anywhere, and volunteered with many more impactful organizations. Looking to further scale her impact, she completed most of a Master of Public Policy degree at Georgetown. Marisa was relentless. Even among the impressive cohort of young EAs, she had a diligence and work ethic that amazed and inspired. She got things done. She was also wickedly funny. Even while suffering deeply, she could make me cry with laughter. Epidemiologically, suicide is contagious within communities. Marisa's does not have to be. Everyone reading this is only one comment [...]

First published: May 17th, 2024
Source: https://forum.effectivealtruism.org/posts/cPFBJ8YkkzS9niBuo/marisa-the-co-founder-of-ea-anywhere-has-passed-away
Narrated by TYPE III AUDIO.
“Presenting nine new charities - a record for the AIM (CE) Incubation Program” by CE
We are thrilled to introduce nine new charities launched through our February-March 2024 Incubation Program. This is an AIM record; in previous years, we launched an average of five charities per round. We are also proud to announce that, thanks to very generous donors from the Seed Network Funding Circle, these new organizations have secured over $1 million in funding! This is a significant milestone for AIM as an organization. We are very grateful for the support of our funders, mentors, and, most of all, the talented applicants who decided to pursue entrepreneurial careers in the nonprofit sector. We are committed to ongoing support for these new initiatives through mentorship, operational assistance, free co-working space in London, and access to an ever-expanding entrepreneurial network of funders, advisors, interns, and fellow charity founders. This article provides a brief introduction to our new organizations. You will find more information [...] ---Outline:(02:25) Centre for Aquaculture Progress(06:01) Notify Health(09:53) Learning Alliance(15:06) Novah(20:11) Access to Medicines Initiative (AMI)(23:59) FarmKind(28:40) Ark Philanthropy--- First published: May 14th, 2024 Source: https://forum.effectivealtruism.org/posts/tDQm4Z5aQytRE2RKn/presenting-nine-new-charities-a-record-for-the-aim-ce --- Narrated by TYPE III AUDIO.
“Probably Good launched a new job board!” by Probably Good
We’re excited to share a new addition to our site: an impact-focused job board! We’ve considered launching a job board for some time, so we’re happy to add this feature to the Probably Good site. The job board aims to: Help people find more promising job opportunities, including in cause areas that aren’t as thoroughly covered by other impact-focused boards such as 80,000 Hours and Animal Advocacy Careers. Direct our audience to concrete opportunities that meet a high standard of impact. Reduce friction for people on our site to take the first step towards a career change, by providing opportunities to apply for – or simply exposing them to – new options. As Animal Advocacy Careers have highlighted before, job boards are often the primary gateway to career advice sites, and so we hope the job board will also extend our content's reach and general impact. Why we’re launching a job [...] ---Outline:(00:59) Why we’re launching a job board(02:34) How you can help(03:09) Final Notes--- First published: May 6th, 2024 Source: https://forum.effectivealtruism.org/posts/ZPB87ayzzwAoxycvN/probably-good-launched-a-new-job-board --- Narrated by TYPE III AUDIO.
“Why I’m doing PauseAI” by Joseph Miller
GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it's hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict fairly well what the cross-entropy loss will be, but pretty much nothing else. Maybe we will suddenly discover that the difference between GPT-4 and superhuman level is actually quite small. Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self-improve by rewriting its own weights. Hopefully model evaluations can catch catastrophic risks before wide deployment, but again, it's hard to be sure. GPT-5 could plausibly be devious enough to circumvent all of our black-box testing. Or it may be that it's too late as soon as the model has been trained. These [...] ---Outline:(01:10) How do we do better for GPT-6?(02:02) Plan B: Mass protests against AI(03:06) No innovation required(04:36) The discomfort of doing something weird(05:53) Preparing for the moment--- First published: April 30th, 2024 Source: https://forum.effectivealtruism.org/posts/J8sw7o5mWbGFaBW4o/why-i-m-doing-pauseai --- Narrated by TYPE III AUDIO.
“Updates on the EA catastrophic risk landscape” by Benjamin_Todd
Around the end of February 2024, I attended the Summit on Existential Risk and EAG: Bay Area (GCRs), during which I did 25+ one-on-ones about the needs and gaps in the EA-adjacent catastrophic risk landscape, and how they’ve changed. The meetings were mostly with senior managers or researchers in the field who I think are worth listening to (unfortunately I can’t share names). Below is how I’d summarise the main themes in what was said. If you have different impressions of the landscape, I’d be keen to hear them. There's been a big increase in the number of people working on AI safety, partly driven by a reallocation of effort (e.g. Rethink Priorities starting an AI policy think tank); and partly driven by new people entering the field after its newfound prominence. Allocation in the landscape seems more efficient than in the past – it's harder to identify [...] --- First published: May 6th, 2024 Source: https://forum.effectivealtruism.org/posts/YDjH6ACPZq889tqeJ/updates-on-the-ea-catastrophic-risk-landscape --- Narrated by TYPE III AUDIO.
“My Lament to EA” by kta
I am dealing with repetitive strain injury and don’t foresee being able to really respond to any comments. I’m a little hesitant to post this, but I thought I should be vulnerable. Honestly, I'm relieved that I finally get to share my voice. I know some people may want me to discuss this privately – but that might not be helpful to me, as I know the very people who were meant to help have tried to silence some of these issues. And to be honest, the fear of criticizing EA is something I have disliked about EA – I’ve been behind the scenes enough to know that despite being well-intentioned, criticizing EA (especially openly) can privately get you excluded from opportunities and circles, often even silently. This is an internal battle I’ve had with EA for a while (years). Still, I thought by sharing my experiences I [...] ---Outline:(00:55) Appreciation and disillusionment(03:30) Specific challenges(03:33) When it has been uncomfortable for diversity and inclusion(04:33) When it primarily became about prestige or funding(07:13) When professional social dynamics were unhealthy(09:46) When empathy is deprioritized and logic/consequentialism/utilitarianism becomes toxic(13:38) Parting ways--- First published: May 3rd, 2024 Source: https://forum.effectivealtruism.org/posts/3GjstAyhH9cDeNar4/my-lament-to-ea --- Narrated by TYPE III AUDIO.
“Émile P. Torres’s history of dishonesty and harassment” by anonymous-for-obvious-reasons
This is a cross-post and you can see the original here, written in 2022. I am not the original author, but I thought it was good for more EAs to know about this. I am posting anonymously for obvious reasons, but I am a longstanding EA who is concerned about Torres's effects on our community. An incomplete summary Introduction. This post compiles evidence that Émile P. Torres, a philosophy student at Leibniz Universität Hannover in Germany, has a long pattern of concerning behavior, which includes gross distortion and falsification, persistent harassment, and the creation of fake identities. Note: Since Torres has recently claimed that they have been the target of threats from anonymous accounts, I would like to state that I condemn any threatening behavior in the strongest terms possible, and that I have never contacted Torres or posted anything about Torres other than in this Substack [...] ---Outline:(00:25) An incomplete summary(01:16) Stalking and harassment(01:20) Peter Boghossian(11:48) Helen Pluckrose(19:02) Demonstrable falsehoods and gross distortions(19:07) “Forcible” removal(24:04) “Researcher at CSER”(27:30) Giving What We Can(31:20) Brief Digression on Effective Altruism(33:53) More falsehoods and distortions(33:57) Hilary Greaves(38:25) Andreas Mogensen(41:16) Nick Beckstead(45:29) Tyler Cowen(48:50) Olle Häggström(56:44) Sockpuppetry(57:01) “Alex Williams”(01:03:57) Conclusion--- First published: May 1st, 2024 Source: https://forum.effectivealtruism.org/posts/yAHcPNZzx35i25xML/emile-p-torres-s-history-of-dishonesty-and-harassment --- Narrated by TYPE III AUDIO.
“Joining the Carnegie Endowment for International Peace” by Holden Karnofsky
Effective today, I’ve left Open Philanthropy and joined the Carnegie Endowment for International Peace[1] as a Visiting Scholar. At Carnegie, I will analyze and write about topics relevant to AI risk reduction. In the short term, I will focus on (a) what AI capabilities might increase the risk of a global catastrophe; (b) how we can catch early warning signs of these capabilities; and (c) what protective measures (for example, strong information security) are important for safely handling such capabilities. This is a continuation of the work I’ve been doing over the last ~year. I want to be explicit about why I’m leaving Open Philanthropy. It's because my work no longer involves significant grantmaking, and given that I’ve overseen grantmaking historically, it's a significant problem for there to be confusion on this point. Philanthropy comes with particular power dynamics that I’d like to move away from, and [...] The original text contained 1 footnote which was omitted from this narration. --- First published: April 29th, 2024 Source: https://forum.effectivealtruism.org/posts/7gzgwgwefwBku2cnL/joining-the-carnegie-endowment-for-international-peace --- Narrated by TYPE III AUDIO.
“Priors and Prejudice” by MathiasKB
I. Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let's name this hypothetical movement the Effective Samaritans. Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping. But many Effective Samaritans were starting to wonder. Is this randomista approach really the most prudent? After all, Scandinavia didn’t become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures. The Scandinavian societal model which lifted the working class, brought weekends, universal suffrage, maternity leave, education, and universal healthcare can be traced back all the way to the 1870s, when the union and social democratic movements got their start. In many developing countries [...] ---Outline:(00:03) I(05:39) II(10:19) IIIThe original text contained 2 footnotes which were omitted from this narration. --- First published: April 22nd, 2024 Source: https://forum.effectivealtruism.org/posts/PKotuzY8yzGSNKRpH/priors-and-prejudice --- Narrated by TYPE III AUDIO.
“Announcing The New York Declaration on Animal Consciousness” by Sofia_Fogel
The last ten years have witnessed rapid advances in the science of animal cognition and behavior. Striking results have hinted at surprisingly rich inner lives in a wide range of animals, driving renewed debate about animal consciousness. To highlight these advances, the NYU Mind, Ethics and Policy Program and NYU Wild Animal Welfare Program co-hosted a conference on the emerging science of animal consciousness on Friday April 19 at New York University. This conference also served as the launch event for The New York Declaration on Animal Consciousness. This short statement, signed by leading scientists who research a wide range of taxa, holds that all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including cephalopod mollusks, decapod crustaceans, and insects) have a realistic chance of being conscious, and that their welfare merits consideration. We now welcome signatures from others as well. If you have relevant [...] --- First published: April 21st, 2024 Source: https://forum.effectivealtruism.org/posts/Pqkf5N7LkHfd7rRBf/announcing-the-new-york-declaration-on-animal-consciousness --- Narrated by TYPE III AUDIO.
[Linkpost] “Motivation gaps: Why so much EA criticism is hostile and lazy” by titotal
Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk). Introduction. I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile, or lazy, and is extremely unlikely to convince a believer. Take this recent Leif Wenar TIME article as an example. I liked a few of the object-level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And he's far from the only prominent hater. Émile Torres views EA as a threat to humanity. Timnit Gebru sees [...] ---Outline:(02:21) No door to door atheists(04:51) What went wrong here?(08:40) Motivation gaps in AI x-risk(10:59) EA gap analysis(15:12) Counter-motivations(25:49) You can’t rely on ingroup criticism(29:10) How to respond to motivation gaps--- First published: April 22nd, 2024 Source: https://forum.effectivealtruism.org/posts/CfBNdStftKGc863o6/motivation-gaps-why-so-much-ea-criticism-is-hostile-and-lazy Linkpost URL:https://titotal.substack.com/p/motivation-gaps-why-so-much-ea-criticism --- Narrated by TYPE III AUDIO.
“How good it is to donate and how hard it is to get a job” by Elijah Persson-Gordon
In this post, I hope to inspire other Effective Altruists to focus more on donation and commiserate with those who have been disappointed in their ability to get an altruistic job. First, I argue that the impact of having a job that helps others is complicated. In this section, I discuss annual donation statistics of people in the Effective Altruism community, which I find quite low. In the rest of the post, I describe my recent job search, my experience substituting at public schools, and my expenses. Having a job that helps others might be overemphasized. Doing a job that helps others seems like a good thing to do. Weirdly, it's not as simple as that. While some job vacancies last for years, other fields are very competitive and have many qualified applicants for most position listings. In the latter case, if you take the [...] ---Outline:(00:42) Having a job that helps others might be overemphasized(02:07) Donations are an amazing opportunity, and I think they are underemphasized(03:42) I used to really want an animal welfare-related job. Then I wanted to donate more. Now I am a substitute at a public school(06:13) I live frugally and donate(08:06) I have been disappointed in my ability to find a job that would allow me to donate more(09:52) It's okay(10:24) Additional reading--- First published: April 16th, 2024 Source: https://forum.effectivealtruism.org/posts/G9ocwYA2LpLqC4vmq/how-good-it-is-to-donate-and-how-hard-it-is-to-get-a-job --- Narrated by TYPE III AUDIO.
“Personal reflections on FTX” by William_MacAskill
The two podcasts where I discuss FTX are now out: Making Sense with Sam Harris and Clearer Thinking with Spencer Greenberg. The Sam Harris podcast is more aimed at a general audience; the Spencer Greenberg podcast is more aimed at people already familiar with EA. (I’ve also done another podcast with Chris Anderson from TED that will come out next month, but FTX is a fairly small part of that conversation.) In this post, I’ll gather together some things I talk about across these podcasts — this includes updates and lessons, and responses to some questions that have been raised on the Forum recently. I’d recommend listening to the podcasts first, but these comments can be read on their own, too. I discuss a variety of different topics, so I’ll cover each one in a separate comment underneath this post. --- First published: April 18th, 2024 Source: https://forum.effectivealtruism.org/posts/A2vBJGEbKDpuKveHk/personal-reflections-on-ftx --- Narrated by TYPE III AUDIO.
[Linkpost] “Future of Humanity Institute 2005-2024: Final Report” by Pablo
Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow. Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work. What we did well One of the most important insights from the successes of FHI is to have a long-term perspective on one's research. While [...] ---Outline:(00:57) What we did well(03:48) Where we failed(05:06) So, you want to start another FHI?--- First published: April 17th, 2024 Source: https://forum.effectivealtruism.org/posts/uK27pds7J36asqJPt/future-of-humanity-institute-2005-2024-final-report Linkpost URL:https://www.dropbox.com/scl/fi/ml8d3ubi3ippxs4yon63n/FHI-Final-Report.pdf?rlkey=2c94czhgagy27d9don7pvbc26&dl=0 --- Narrated by TYPE III AUDIO.
“AIM’s new guide to launching a high-impact non-profit policy organization” by CE, weeatquince
Author: Sam Hilton, AIM Director of Research In March 2021 I received an odd letter. It was from a guy I didn't know, David Quarrey, the UK's National Security Advisor. The letter thanked me for providing external expertise to the UK government's Integrated Review, which had been published that morning. It turned out that the Integrated Review had made a public commitment to "review our approach to risk assessment" ... "including how we account for interdependencies, cascading and compound risks". This is something I'd been advocating for over the previous few months by writing a policy paper and engaging with politicians and civil servants. It's hard to know how much my input changed government policy, but I couldn't find much evidence of others advocating for this. I had set myself a 10-year goal to "have played a role in making the UK a leader in long-term resilience to extreme [...] --- First published: April 2nd, 2024 Source: https://forum.effectivealtruism.org/posts/RmZWjTLNTg4hby7pz/aim-s-new-guide-to-launching-a-high-impact-non-profit-policy --- Narrated by TYPE III AUDIO.
“Understanding FTX’s crimes” by FTXwatcher
In the aftermath of SBF's conviction, there have been a few posts trying to make sense of FTX. Some people are trying to figure out what happened, and some people are interested in trying to find clever defenses. I'm in a much more boring position: I am confident SBF is the fraud the world believes him to be. I hope this post can provide reasoning transparency on why I think this, and perhaps serve as an easy link for others who feel similarly but don't want to get bogged down in a point-by-point. Posted anonymously as some protection against future employers Googling [1]. I have divided this post into a summary of the major crimes and my basis for believing they occurred, an 'FAQ' dealing with some common misapprehensions I've seen on this forum and elsewhere, and an appendix explaining some crypto exchange basics / jargon for those [...] ---Outline:(00:56) Crimes(00:59) Misappropriation of Funds(01:51) Detail(14:47) Lying to lenders(17:21) Lying to FTX's investors(18:28) Lying to Banks(19:02) Detail(21:53) Bribing a Chinese official(23:55) Falsified revenue(25:07) Falsified Insurance Fund(26:20) FAQ(26:23) Didn't SBF only know about the hole in June 2022?(30:16) Didn't customers know their funds might be lent out?(32:37) Weren't FTX and Alameda total messes? Intent matters; isn't it possible that this all got lost in the chaos?(35:13) Isn't FTX going to make all Customers whole? So there wasn't ever a hole?(35:47) FTX cashed out customers at bankruptcy prices(36:32) The assets those customers' money had bought did well(37:12) Had FTX stayed afloat, things would be very different(38:58) Appendix(39:02) How crypto exchanges work(39:06) Spot Trading(40:54) Margin TradingThe original text contained 7 footnotes which were omitted from this narration. --- First published: April 11th, 2024 Source: https://forum.effectivealtruism.org/posts/qyN8M6zh6pnh8fmjR/understanding-ftx-s-crimes --- Narrated by TYPE III AUDIO.
“The Rationale-Shaped Hole At The Heart Of Forecasting” by dschwarz, FutureSearch, Lawrence Phillips, hnykda, Peter Mühlbacher
Thanks to Eli Lifland, Molly Hickman, Değer Turan, and Evan Miyazono for reviewing drafts of this post. The opinions expressed here are my own. Summary: Forecasters produce reasons and models that are often more valuable than the final forecasts Most of this value is being lost due to the historical practice & incentives of forecasting, and the difficulty crowds have in “adversarially collaborating” FutureSearch is a forecasting system with legible reasons and models at its core (examples at the end) The Curious Case of the Missing Reasoning Ben Landau-Taylor of Bismarck Analysis wrote a piece on March 6 called “Probability Is Not A Substitute For Reasoning”, citing a piece where he writes: There has been a great deal of research on what criteria must be met for forecasting aggregations to be useful, and as Karger, Atanasov, and Tetlock argue, predictions of events such as the arrival of AGI [...] ---Outline:(00:40) The Curious Case of the Missing Reasoning(05:06) Those Who Seek Rationales, And Those Who Do Not(07:21) So What Do Elite Forecasters Actually Know?(10:30) The Rationale-Shaped Hole At The Heart Of Forecasting(11:51) Facts: Cite Your Sources(12:07) Reasons: So You Think You Can Persuade With Words(14:25) Models: So You Think You Can Model the World(17:56) There Is No Microeconomics of AGI(19:39) 700 AI questions you say? Aren’t We In the Age of AI Forecasters?(21:33) Towards “Towards Rationality Engines”(23:10) Sample Forecasts With Reasons and Models--- First published: April 2nd, 2024 Source: https://forum.effectivealtruism.org/posts/qMP7LcCBFBEtuA3kL/the-rationale-shaped-hole-at-the-heart-of-forecasting --- Narrated by TYPE III AUDIO.
“UK moves toward mandatory animal welfare labelling” by AdamC
Here in the UK, the government is consulting on mandatory animal welfare labelling (closing 7 May 2024). People may wish to respond if they want to express their support or share thoughts and evidence that could shape the outcomes. I think such labelling has the potential to significantly improve animal welfare, not just through changing individual choices but by encouraging companies to stop selling the lowest welfare tiers entirely, and through raising labelling standards over time. Higher standards will probably also mean higher prices, lower consumption and 'fairer' competition with alternative proteins. What happens in the UK may also influence future reforms in the EU and elsewhere. To summarise the proposals: Mandatory labelling would apply to chicken, eggs and pig products (with the suggestion that beef, lamb and dairy could follow later) At least initially, this would not apply to restaurants etc., but to food from retailers like [...] --- First published: March 29th, 2024 Source: https://forum.effectivealtruism.org/posts/h5Syjytgbg7fL6CJ5/uk-moves-toward-mandatory-animal-welfare-labelling --- Narrated by TYPE III AUDIO.
“Why hasn’t EA done an SBF investigation and postmortem?” by RobBensinger
Is anyone in the world being paid to do an independent investigation of how EA handled Sam Bankman-Fried, with respect to "did we screw up" and "is there stuff we should do differently going forward"? Last I heard, literally nobody was doing this and at least some EA leaders were mostly just hoping that SBF gets memoryholed — but maybe I'm out of the loop? My understanding is that Effective Ventures completed a narrow investigation into this topic in mid-2023, purely looking at legal risk to EV and not at all trying to do a general postmortem for EA or any group of EAs. Is that correct, and have things changed since then? I saw that Will MacAskill is planning to appear on some podcasts soon to speak about SBF, which seems like great news to me. If I recall correctly, Will previously said that he was going to [...] --- First published: April 1st, 2024 Source: https://forum.effectivealtruism.org/posts/PAG3DJtoZeGtz488f/why-hasn-t-ea-done-an-sbf-investigation-and-postmortem --- Narrated by TYPE III AUDIO.
“Quick Update on Leaving the Board of EV” by Rebecca Kagan
A brief and belated update: When I resigned from the board of EV US last year, I was planning on writing about that decision. But I ultimately decided against doing that for a variety of reasons, including that it was very costly to me, and I believed it wouldn’t make a difference. However, I want to make it clear that I resigned last year due to significant disagreements with the board of EV and EA leadership, particularly concerning their actions leading up to and after the FTX crisis. While I certainly support the boards’ decision to pay back the FTX estate, spin out the projects as separate organizations, and essentially disband EV, I continue to be worried that the EA community is not on track to learn the relevant lessons from its relationship with FTX. Two things that I think would help (though I am not planning to work [...] The original text contained 1 footnote which was omitted from this narration. --- First published: April 3rd, 2024 Source: https://forum.effectivealtruism.org/posts/dtHZqi4fSQKor6TYY/quick-update-on-leaving-the-board-of-ev --- Narrated by TYPE III AUDIO.
“Announcing Mandatory Draft Amnesty Day (April 2nd)” by Toby Tremlett🔹
Following the success of Draft Amnesty Week, the Forum team have decided to take things a bit further. April 2nd 2024 will be Mandatory Amnesty Day (aka MAD). At 09:00 UTC, all draft posts on your Forum account will be posted live on the Forum. All the posts in this section on your profile page deserve to be seen. If you have used our Google Docs import feature, all posts we detect on your Google account will also be posted. Shrek is now the de facto mascot of Draft Amnesty. If, for some reason, you have objections, please get in touch with the Forum team here. We look forward to seeing all your draft posts! --- First published: April 1st, 2024 Source: https://forum.effectivealtruism.org/posts/72cKFaXd2B7Bis2kv/announcing-mandatory-draft-amnesty-day-april-2nd --- Narrated by TYPE III AUDIO.
“Announcement: We are rebranding to Shrimpactful Animal Advocacy” by Impactful Animal Advocacy, Aaron Boddy
Impactful Animal Advocacy is thrilled to announce that after careful consideration and complex moral calculations, we have decided to rebrand to Shrimpactful Animal Advocacy. Why shrimp, you ask? Well, we've crunched the numbers and determined that improving the welfare of shrimp is one of the highest-impact opportunities in the animal advocacy space. Source: https://foodimpacts.org/ In light of this strong evidence, Shrimpactful Animal Advocacy will be laser-focusing all of our efforts and resources on ending the suffering of our shrimpy friends. This includes revamping our programs to focus on our new mission. You'll soon be able to sign up for our revised Shrimpactful Animal Advocacy newsletter, attend our upcoming "Shrimp-osium" events, visit our Shrimp Hub resource center, and join our Shrimp Slack community to connect with fellow Shrimpactful Advocates. We call on all other organizations to join us in this: Mercy for Animals → Mercy for [...] --- First published: April 1st, 2024 Source: https://forum.effectivealtruism.org/posts/ed5Jiuk3sCYTXHfFD/announcement-we-are-rebranding-to-shrimpactful-animal --- Narrated by TYPE III AUDIO.
“Excerpts From The EA Talmud” by Scott Alexander
(aka "surely we have enough Jews here that at least one person finds this funny") MISHNA: Rabbi Ord says that the three permissible cause areas are global health and development, animal welfare, and long-termism. Why are these the three permissible cause areas? Because any charity outside of these cause areas is a violation of bal tashchit, the prohibition against wasting resources. GEMARA: And why not four categories, because global health and development are two separate categories? Rabbi bar bar Hana answers: Is not global health valuable only because it later leads to development? Therefore, consider them the same category. Rav Yehuda objects. He tells the story of a student who asked Rabbi Yohanan why there should not be a fourth category, meta-effective-altruism. Rabbi Yohanan answered: this is covered under long-termism, because it pays off in the long-term when the meta-charity causes effective altruists to have more resources. But if [...] --- First published: April 1st, 2024 Source: https://forum.effectivealtruism.org/posts/363kggkFJHz5ys5T7/excerpts-from-the-ea-talmud --- Narrated by TYPE III AUDIO.
“EA is now scandal-constrained” by Guy Raveh
It's been at least a few months since the last proper EA scandals, and we're now desperately trying to squeeze headlines out of the past ones. On the contrary, a few scandals have been wrapped up: SBF was sentenced to 25 years in prison The investigation regarding Owen Cotton-Barratt presented its findings Wytham Abbey is being sold Indeed, even OpenPhil's involvement in the Wytham Abbey sale shows they're now less willing to fund new scandals. Therefore it seems to me that EA is now neither funding- nor talent-constrained, but rather scandal-constrained. This cannot go on. We've all become accustomed to a neverending stream of scandals, and if that stream dwindles, we might find ourselves bored to death - or worse, the world might stop talking about EA all the time. I therefore raise a few ideas for discussion - feel free to add your own: EA Funds [...] --- First published: April 1st, 2024 Source: https://forum.effectivealtruism.org/posts/F47vckGK5kHZirtGS/ea-is-now-scandal-constrained --- Narrated by TYPE III AUDIO.
“The Centre for Effective Altruism is spinning out of the Centre for Effective Altruism” by OllieBase
The Centre for Effective Altruism (CEA), an effective altruism (EA) project which recently spun out of Effective Ventures (EV) is spinning out of the newly established Centre for Effective Altruism (CEA). The current CEO of CEA (the Centre for Effective Altruism), Zach Robinson, CEO of CEA and Effective Ventures (CEOCEV), will be taking the position of Chief Executive Administrator (CEA) for CEA (CEA), as the venture spins out of CEA (CEA). The cost-effectiveness analysis (CEA) for this new effective venture suggested that this venture will be high-EV (see: EA). CEA's CEA's CEA ventures that the new spun-out CEA venture's effectiveness is cost-effective in every available scenario (CEAS). CEA's new strategy, See EA will take effect: See: Gain a better understanding of where the community is, who is part of it and where it could go EA: Effective altruism. No need to complicate things. To provide some clarity on [...] --- First published: April 1st, 2024 Source: https://forum.effectivealtruism.org/posts/WgneAKfjRJkYsTs3p/the-centre-for-effective-altruism-is-spinning-out-of-the --- Narrated by TYPE III AUDIO.
[Linkpost] “Introducing Open Asteroid Impact” by Linch, Austin
“That which does not kill us makes us stronger.” Hillary Clinton, who is still alive I'm proud and excited to announce the founding of my new startup, Open Asteroid Impact, where we redirect asteroids towards Earth for the benefit of humanity. Our mission is to have as high an impact as possible. Below, I've copied over the one-pager I've sent potential investors and early employees: Name: Open Asteroid Impact Launch Date: April 1 2024 Website: openasteroidimpact.org Mission: To have as high an impact as possible Pitch: We are an asteroid mining company. When most people think about asteroid mining, they think of getting all the mining equipment to space and carefully mining and refining ore in space, before bringing the ore back down in a controlled landing. But humanity has zero experience in Zero-G mining in the vacuum of space. This is obviously very inefficient. Instead, it's much [...] --- First published: April 1st, 2024 Source: https://forum.effectivealtruism.org/posts/RHGfmJfj3jvLw2mq4/introducing-open-asteroid-impact Linkpost URL:https://openasteroidimpact.org/ --- Narrated by TYPE III AUDIO.
[Linkpost] “Open Philanthropy: Our Progress in 2023 and Plans for 2024” by Alexander_Berger
Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it's possible that we’ve changed more in the last two years than over our full prior history. We’ve more than doubled the size of our team (to ~110), nearly doubled our annual giving (to >$750M), and added five new program areas. As our track record and volume of giving have grown, we are seeing more of our impact in the world. Across our focus areas, our funding played a (sometimes modest) role in some of 2023's most important developments: We were among the supporters of the clinical trials that led to the World Health Organization (WHO) officially recommending the R21 malaria vaccine. This is the second malaria vaccine recommended by WHO, which expects it [...] --- First published: March 27th, 2024 Source: https://forum.effectivealtruism.org/posts/couP8n3BTrpFA5YDJ/open-philanthropy-our-progress-in-2023-and-plans-for-2024-1 Linkpost URL:https://www.openphilanthropy.org/research/our-progress-in-2023-and-plans-for-2024/ --- Narrated by TYPE III AUDIO.
“Announcement on the future of Wytham Abbey” by Rob Gledhill
The Wytham Abbey Project is closing. After input from the Abbey's major donors, the EV board took a decision to sell the property. This project's runway will run out at the end of April. After this time, the project will cease operations, and EV UK will oversee the sale of the property. The Wytham Abbey team have been good custodians of the venue during the time they ran this project, and EV UK will continue to look after this property as we prepare to sell. The proceeds of the sale, after the cost of sale is covered, will be allocated to high-impact charities. A statement from the Wytham Project can be found here. --- First published: March 25th, 2024 Source: https://forum.effectivealtruism.org/posts/yggjKEeehsnmMYnZd/announcement-on-the-future-of-wytham-abbey --- Narrated by TYPE III AUDIO.
“Killing the moths” by Bella
This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless. Content warning: description of animal death. I live in a small, one-bedroom flat in central London. Sometime in the summer of 2023, I started noticing moths around my flat. I didn’t pay much attention to it, since they seemed pretty harmless: they obviously weren’t food moths, since they were localised in my bedroom, and they didn’t seem to be chewing holes in any of my clothes — months went by and no holes appeared. [1] The larvae only seemed to be in my carpet. Eventually, their numbers started increasing, so I decided to do something about it. I Googled humane and nonlethal ways to deal with moth infestations, but found nothing. There were lots of sources [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: March 25th, 2024 Source: https://forum.effectivealtruism.org/posts/Ax5PwjqtrunQJgjsA/killing-the-moths --- Narrated by TYPE III AUDIO.
“Unflattering aspects of Effective Altruism” by NunoSempere
I've been writing a few posts critical of EA over at my blog. They might be of interest to people here: Unflattering aspects of Effective Altruism Alternative Visions of Effective Altruism Auftragstaktik Hurdles of using forecasting as a tool for making sense of AI progress Brief thoughts on CEA's stewardship of the EA Forum Why are we not harder, better, faster, stronger? ...and there are a few smaller pieces on my blog as well. I appreciate comments and perspectives anywhere, but prefer them over at the individual posts at my blog, since I have a large number of disagreements with the EA Forum's approach to moderation, curation, aesthetics or cost-effectiveness. --- First published: March 15th, 2024 Source: https://forum.effectivealtruism.org/posts/coWvsGuJPyiqBdrhC/unflattering-aspects-of-effective-altruism --- Narrated by TYPE III AUDIO.
“Updates on Community Health Survey Results” by David_Moss, Jamie Elsey, Willem Sleegers
Satisfaction with the EA community Reported satisfaction, from 1 (Very dissatisfied) to 10 (Very satisfied), in December 2023/January 2024 was lower than when we last measured it shortly after the FTX crisis at the end of 2022 (6.77 vs. 6.99, respectively). However, December 2023/January 2024 satisfaction ratings were higher than what people recalled their satisfaction being “shortly after the FTX collapse” (and their recalled level of satisfaction was lower than what we measured their satisfaction as being at the end of 2022). We think it's plausible that satisfaction reached a nadir at some point later than December 2022, but may have improved since that point, while still being lower than pre-FTX. Reasons for dissatisfaction with EA: A number of factors were cited a similar number of times by respondents as Very important reasons for dissatisfaction, among those who provided a reason: Cause prioritization (22%), Leadership (20%), Justice, Equity, Inclusion and [...] ---Outline:(04:15) Community satisfaction over time(08:12) Reasons for dissatisfaction with the EA community(13:07) Changes in EA engagement(14:00) Changes in EA-related behaviors(15:57) Perception of issues in the EA community(16:14) Leadership vacuum(16:46) Desire for more community change following FTX(17:13) Trust in EA-related organizations(18:53) Appendix(18:56) Effect sizes for satisfaction over time(20:12) Email vs non-email referrers(21:30) AcknowledgmentsThe original text contained 7 footnotes which were omitted from this narration. --- First published: March 20th, 2024 Source: https://forum.effectivealtruism.org/posts/aF6nh4LW6sSbgMLzL/updates-on-community-health-survey-results --- Narrated by TYPE III AUDIO.
“The current limiting factor for new charities” by Joey, CE
TLDR: we think the top limiting factor for new charities has shifted from founder talent to early-stage funding. We have historically written about limiting factors and how they affect our thinking about the highest impact areas. For new charities, over the past 4-5 years, fairly consistently, the limiting factor has been people; specifically, the fairly rare founder profile that we look for and think has the best chance at founding a field-leading charity. However, we think over the last 12 months this picture has changed in some important ways: Firstly, we have started founding more charities: After founding ~5 charities a year in 2021 and 2022, we founded 8 charities in 2023, and we think there are good odds we will be able to found ~10-12 charities in 2024. This is a pretty large change. We have not changed our standards for [...] --- First published: March 19th, 2024 Source: https://forum.effectivealtruism.org/posts/AXhC4JhWFfsjBB4CA/the-current-limiting-factor-for-new-charities --- Narrated by TYPE III AUDIO.
“The Lack of EA in US Private Foundations” by Kyle Smith
I've written before about trying to bring US private foundations into EA as major funders. I got some helpful feedback and haven't really pursued it further. I study US private foundations as a researcher and recently collected qualitative data from staff at 20 very large US private foundations ($100m+ assets). The subject of the study isn't directly EA-related (it focused mostly on how they use accounting/effectiveness information and accountability), but it got me thinking a lot! Some interesting observations that I am going to explore further, in future forum posts (if y'all think it's interesting) and future research papers: Trust-based philanthropy (TBP), a funder movement that's only been around since 2020, has had a HUGE impact on very large private foundations. All 20 indicated that they had already/were in the process of integrating TBP into their grantmaking. I can't emphasize enough how influential TBP has been. [...] --- First published: March 15th, 2024 Source: https://forum.effectivealtruism.org/posts/Pnv6PRyeCPZknsbEw/the-lack-of-ea-in-us-private-foundations --- Narrated by TYPE III AUDIO.
“The Scale of Fetal Suffering in Late-Term Abortions” by Ariel Simnegar
This is a draft amnesty post. Summary. It seems plausible that fetuses can suffer from 12 weeks of age, and quite reasonable that they can suffer from 24 weeks of age. Some late-term abortion procedures seem like they might cause a fetus excruciating suffering. Over 35,000 of these procedures occur each year in the US alone. Further research is desirable on interventions to reduce this suffering, such as mandating fetal anesthesia for late-term abortions. Background. Most people agree that a fetus has the capacity to suffer at some point. If a fetus has the capacity to suffer, then we ought to reduce that suffering when possible. Fetal anesthesia is standard practice for fetal surgery,[1] but I am unaware of it ever being used during late-term abortions. If the fetus can suffer, these procedures likely cause the fetus extreme pain. I think the cultural environment EAs usually [...] ---Outline:(00:40) Background(01:28) Surgical Abortion Procedures(01:34) LI (Labor Induction)(02:18) D&E (Dilation and Evacuation)(02:38) When Can a Fetus Suffer?(03:46) Scale in US and UK(03:50) 2021 UK(04:05) 2020 USA(05:02) InterventionsThe original text contained 11 footnotes which were omitted from this narration. --- First published: March 17th, 2024 Source: https://forum.effectivealtruism.org/posts/vhKZ7hyzmcrWuBwDL/the-scale-of-fetal-suffering-in-late-term-abortions --- Narrated by TYPE III AUDIO.
“EA ‘Worldviews’ Need Rethinking” by Richard Y Chappell
I like Open Phil's worldview diversification. But I don't think their current roster of worldviews does a good job of justifying their current practice. In this post, I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice. Something along these lines strikes me as necessary to justify giving substantial support to paradigmatic Global Health & Development charities in the face of competition from both Longtermist/x-risk and Animal Welfare competitor causes. Current Orthodoxy I take it that Open Philanthropy's current "cause buckets" or candidate worldviews are typically conceived of as follows: neartermist - incl. animal welfare neartermist - human-only longtermism / x-risk We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we [...] ---Outline:(00:35) Current Orthodoxy(01:17) The Problem(02:43) A Proposed Solution(04:26) ImplicationsThe original text contained 3 footnotes which were omitted from this narration. --- First published: March 18th, 2024 Source: https://forum.effectivealtruism.org/posts/dmEwQZSbPsYhFay2G/ea-worldviews-need-rethinking --- Narrated by TYPE III AUDIO.
“We Did It! - Victory for Octopus in Washington State” by Tessa @ ALI
In 2022, Aquatic Life Institute (ALI) led the charge in Banding Together to Ban Octopus Farming. In 2024, we are ecstatic to see these efforts come to fruition in Washington State. This landmark achievement underscores our collective commitment to rejecting the introduction of additional animals into the seafood system and positions Washington State as a true pioneer in aquatic animal welfare legislation. In light of this success, ALI is joining forces with various organizations to advocate for similar bans across the United States and utilizing these monumental examples as leverage in continuous European endeavors. 2022 Aquatic Life Institute (ALI) and members of the Aquatic Animal Alliance (AAA) comment on the Environmental Impact of Nueva Pescanova before the Government of the Canary Islands: General Directorate of Fisheries and the General Directorate for the Fight against Climate Change and the Environment. Allowing this industrial octopus farm to operate [...] ---Outline:(00:45) 2022(01:44) 2023(03:37) 2024(04:50) March 14, 2024--- First published: March 15th, 2024 Source: https://forum.effectivealtruism.org/posts/AD8QchabkrygXkdgm/we-did-it-victory-for-octopus-in-washington-state --- Narrated by TYPE III AUDIO.
“Maternal Health Initiative is Shutting Down” by Ben Williamson, Sarah Eustis-Guthrie
Maternal Health Initiative (MHI) was founded out of Charity Entrepreneurship (AIM)'s 2022 Incubation Program and has since piloted two interventions integrating postpartum (post-birth) contraceptive counselling into routine care appointments in Ghana. We concluded this pilot work in December 2023. A stronger understanding of the context and impact of postpartum family planning work, on the back of our pilot results, has led us to conclude that our intervention is not among the most cost-effective interventions available. We’ve therefore decided to shut down and redirect our funding to other organisations. This article summarises MHI's work, our assessment of the value of postpartum family planning programming, and our decision to shut down MHI as an organisation in light of our results. We also share some lessons learned. An in-depth report expanding on the same themes is available on our website. We encourage you to skip to the sections that are [...] ---Outline:(01:40) Why we chose to pursue postpartum family planning(01:45) Why family planning?(02:28) Why postpartum (post-birth)?(03:49) MHI: An overview of our work(06:16) Pilot: Design(08:54) Pilot: Results(08:58) Sample Population(09:36) Implementation(10:19) Changes in Contraceptive Uptake(11:13) Conclusions From Our Pilot Results(13:13) Why We No Longer Believe Postpartum Family Planning Is Among The Most Cost-Effective Interventions(13:35) Evidence of Limited Effects on Unintended Pregnancies(16:06) The Prevalence and Impact of Postpartum Insusceptibility(17:07) Short-Spaced Pregnancies(17:48) Theory of Change(18:14) Other Factors(18:28) Broader Thoughts on Family Planning(18:45) Concerns(20:43) Reasons We Still Believe In The Importance Of Family Planning Work(22:53) Choosing to Shut Down(23:43) Considering a Pivot(26:47) Proceeding to Shut Down(27:38) Lessons(33:39) Conclusions--- First published: March 15th, 2024 Source: https://forum.effectivealtruism.org/posts/MWSwSXNmsSBaEKtKw/maternal-health-initiative-is-shutting-down --- Narrated by TYPE III AUDIO.
[Linkpost] “New video: You’re richer than you realise” by GraceAdams, Giving What We Can
This might be one of the best pieces of introductory content on effective giving that GWWC has produced in recent years! I hit the streets of London to engage with everyday people about their views on charity, giving back, and where they thought they stood on the global income scale. This video was made to engage people with some of the core concepts of income inequality and charity effectiveness, in the hope of getting more people interested in giving effectively. If you enjoy it, I'd really appreciate a like, comment or share on YouTube to help us reach more people! There's a blog post and transcript of the video available too. Big thanks to Suzy Sheperd for directing and editing this project, and to Julian Jamison and Habiba Banu for being interviewed! --- First published: March 13th, 2024 Source: https://forum.effectivealtruism.org/posts/tX2MqRfZtz7TqYCQi/new-video-you-re-richer-than-you-realise Linkpost URL:https://www.youtube.com/watch?v=ekIRVhbpiQw --- Narrated by TYPE III AUDIO.
[Linkpost] “Results from an Adversarial Collaboration on AI Risk (FRI)” by Forecasting Research Institute, Jhrosenberg, AvitalM, Molly Hickman, rosehadshar
Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1] Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration,” which discusses the results of an adversarial collaboration focused on forecasting risks from AI. In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdf Abstract. We brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The “concerned” participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100, while the “skeptical” group (mainly “superforecasters”) predicted a 0.12% chance. Participants worked together to find the strongest near-term cruxes: forecasting questions resolving by 2030 that [...] ---Outline:(02:13) Extended Executive Summary(02:44) Methods(03:53) Results: What drives (and doesn’t drive) disagreement over AI risk(04:32) Hypothesis #1 - Disagreements about AI risk persist due to lack of engagement among participants, low quality of participants, or because the skeptic and concerned groups did not understand each other's arguments(05:11) Hypothesis #2 - Disagreements about AI risk are explained by different short-term expectations (e.g. about AI capabilities, AI policy, or other factors that could be observed by 2030)(07:53) Hypothesis #3 - Disagreements about AI risk are explained by different long-term expectations(10:35) Hypothesis #4 - These groups have fundamental worldview disagreements that go beyond the discussion about AI(11:31) Results: Forecasting methodology(12:15) Broader scientific implications(13:09) Directions for further researchThe original text contained 10 footnotes which were omitted from this narration. --- First published: March 11th, 2024 Source: https://forum.effectivealtruism.org/posts/orhjaZ3AJMHzDzckZ/results-from-an-adversarial-collaboration-on-ai-risk-fri Linkpost URL:https://forecastingresearch.org/s/AIcollaboration.pdf --- Narrated by TYPE III AUDIO.
“This is why we can’t have nice laws” by LewisBollard
Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. How factory farmers block progress — and what we can do about it Most people agree that farmed animals deserve better legal protections: 84% of Europeans, 61-80% of Americans, 70% of Brazilians, 51-66% of Chinese, and 52% of Indians agree with some version of that statement. Yet almost all farmed animals globally still lack even the most basic protections. America has about five times more vegetarians than farmers — and many more omnivores who care about farm animals. Yet the farmers wield much more political power. Fully 89% of Europeans think it's important that animals not be kept in individual cages. Yet the European Commission just implicitly sided with the 8% who don’t by shelving [...] --- First published: February 28th, 2024 Source: https://forum.effectivealtruism.org/posts/BvXkG3PLfdmvoECFb/this-is-why-we-can-t-have-nice-laws --- Narrated by TYPE III AUDIO.
“How we started our own EA charity (and why we decided to wrap up)” by KvPelt🔹, Ren Ryba
This post shares our journey in starting an Effective Altruism (EA) charity/project focused on Mediterranean fish welfare, the challenges we faced, our key learnings, and the reasons behind our decision to conclude the project. The actual research results are published in a literature review and an article. Key points The key points of this post are summarized as follows: We launched a project with the goal of enhancing fish welfare in Mediterranean aquaculture. We chose to limit our project to gathering information and decided against continuing our advocacy efforts after our initial six months. Our strategy, which focused on farmer-friendly outreach, was not effective in engaging farmers. The rationale behind our decision is the recognition that existing organizations are already performing excellent work, and we believe that funders should support these established organizations instead of starting a new one. The support and resources from the Effective Altruism (EA) and [...] ---Outline:(00:27) Key points(01:48) Personal/Project background(03:01) Why work on Mediterranean fish welfare?(07:07) Project plans and initial work(11:03) Initial work(13:47) Farmer outreach(20:45) Wrapping up the project(22:30) Other takeaways from starting a project(24:48) Resources for launching a new charityThe original text contained 1 footnote which was omitted from this narration. --- First published: February 26th, 2024 Source: https://forum.effectivealtruism.org/posts/z59wybc56FCAysrAe/how-we-started-our-own-ea-charity-and-why-we-decided-to-wrap --- Narrated by TYPE III AUDIO.
[Linkpost] “‘No-one in my org puts money in their pension’” by tobyj
Epistemic status: the stories here are all as true as possible from memory, but my memory is so-so. This is going to be big. It's late Summer 2017. I am on a walk in the Mendip Hills. It's warm and sunny and the air feels fresh. With me are around 20 other people from the Effective Altruism London community. We’ve travelled west for a retreat to discuss how to help others more effectively with our donations and careers. As we cross cow field after cow field, I get talking to one of the people from the group I don’t know yet. He seems smart and cheerful. He tells me that he is an AI researcher at Google DeepMind. He explains how he is thinking about how to make sure that any powerful AI system actually does what we want it to. I ask him if [...] ---Outline:(00:16) This is going to be big(01:21) This is going to be bad(02:44) It's a long way off though(03:50) This is fine(05:10) It's probably something else in your life(06:15) No-one in my org puts money in their pension(07:16) Doom-vibes(08:45) Maths might help(10:28) A problem shared is…(12:36) Hope--- First published: February 16th, 2024 Source: https://forum.effectivealtruism.org/posts/YScdhSQBhkxpfcF3t/no-one-in-my-org-puts-money-in-their-pension Linkpost URL:https://seekingtobejolly.substack.com/p/no-one-in-my-org-puts-money-in-their --- Narrated by TYPE III AUDIO.
[Linkpost] “Social science research we’d like to see on global health and wellbeing [Open Philanthropy]” by Open Philanthropy, Aaron Gertler 🔸
This is a link post. Open Philanthropy strives to help others as much as we can with the resources available to us. To find the best opportunities to help others, we rely heavily on scientific and social scientific research. In some cases, we would find it helpful to have more research in order to evaluate a particular grant or cause area. Below, we’ve listed a set of social scientific questions for which we are actively seeking more evidence.[1] We believe the answers to these questions have the potential to impact our grantmaking. (See also our list of research topics for animal welfare.) If you know of any research that touches on these questions, we would welcome hearing from you. At this point, we are not actively making grants to further investigate these questions. It is possible we may do so in the future, though, so if you plan to research [...] ---Outline:(01:05) Land Use Reform(06:09) Health(16:20) Migration(18:08) Education(20:19) Science and Metascience(30:31) Global Development(32:47) OtherThe original text contained 4 footnotes which were omitted from this narration. --- First published: February 15th, 2024 Source: https://forum.effectivealtruism.org/posts/3Y7c7MXf3BzgruTWv/social-science-research-we-d-like-to-see-on-global-health Linkpost URL:https://www.openphilanthropy.org/research/social-science-research-topics-for-global-health-and-wellbeing/ --- Narrated by TYPE III AUDIO.
[Linkpost] “My cover story in Jacobin on AI capitalism and the x-risk debates” by Garrison
Google cofounder Larry Page thinks superintelligent AI is “just the next step in evolution.” In fact, Page, who's worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are “speciesist” and “sentimental nonsense.” In July, former Google DeepMind senior scientist Richard Sutton — one of the pioneers of reinforcement learning, a major subfield of AI — said that the technology “could displace us from existence,” and that “we should not resist succession.” In a 2015 talk, Sutton said, suppose “everything fails” and AI “kill[s] us all”; he asked, “Is it so bad that humans are not the final form of intelligent life in the universe?” This is how I begin the cover story for Jacobin's winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren’t part of it. Whether you're new to the topic [...] --- First published: February 12th, 2024 Source: https://forum.effectivealtruism.org/posts/pRAjLTQxWJJkygWwK/my-cover-story-in-jacobin-on-ai-capitalism-and-the-x-risk Linkpost URL:https://jacobin.com/2024/01/can-humanity-survive-ai --- Narrated by TYPE III AUDIO.
“On being an EA for decades” by Michelle_Hutchinson
A friend sent me on a lovely trip down memory lane last week. She forwarded me an email chain from 12 years ago in which we all pretended we had left the EA community and explained why. We were partly thinking through what might cause us to drift, so that we could do something to make that less likely, and partly joking around. It was a really nice reminder of how uncertain things felt back then, and how far we’ve come. Although the emails hadn’t led us to take any specific actions, almost everyone on the thread was still devoting their time to helping others as much as they can. We’re also, for the most part, still supporting each other in doing so. Of the 10 or so people on there, for example, Niel Bowerman is now my boss - CEO of 80k. Will MacAskill and Toby Ord [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: February 12th, 2024 Source: https://forum.effectivealtruism.org/posts/zEMvHK9Qa4pczWbJg/on-being-an-ea-for-decades --- Narrated by TYPE III AUDIO.
“Things to check about a job or internship” by Julia_Wise
A lot of great projects have started in informal ways: a startup in someone's garage, or a scrappy project run by volunteers. Sometimes people jump into these and are happy they did so. But I’ve also seen people caught off-guard by arrangements that weren’t what they expected, especially early in their careers. I’ve been there, when I was a new graduate interning at a religious center that came with room, board, and $200 a month. I remember my horror when my dentist checkup cost most of a month's income, or when I found out that my nine-month internship came with zero vacation days. It was an overall positive experience for me (after we worked out the vacation thing), but it's better to go in clear-eyed. First, I’ve listed a bunch of things to consider. These are drawn from several different situations I’ve heard about, both inside and outside EA. [...] ---Outline:(01:11) Things to consider(08:06) Advice from a few EAs--- First published: February 12th, 2024 Source: https://forum.effectivealtruism.org/posts/RXFcmrf7E5fLhb43e/things-to-check-about-a-job-or-internship --- Narrated by TYPE III AUDIO.
“My Donations 2023 - Marcus Abramovitch” by MarcusAbramovitch
Summary: $8,003.98 Charity Entrepreneurship; $5,000 Insect Institute; $5,000 Shrimp Welfare Project; $5,000 Rethink Priorities; $5,000 Animal Ethics; $1,000 Wild Animal Initiative. Background: I think it's good to keep track of and explain donations. It creates a record to get better over time and gives others insights into how you donate. I officially made $45,000 in 2023. I earned some investment income from some other sources, though this is hard to quantify. I also had some savings from previous years, so don’t come away from this thinking that I gave such a substantial portion of my income. That said, I am still giving away a significant portion of my income this year. I think this is a good thing and hopefully, I can serve as an example to others to give away a significant portion of their income, even if it isn’t that high. In total, I am giving [...] --- First published: February 11th, 2024 Source: https://forum.effectivealtruism.org/posts/EkKYqeAy3ArupKuYn/my-donations-2023-marcus-abramovitch --- Narrated by TYPE III AUDIO.
[Linkpost] “Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy” by Garrison
If you enjoy this, please consider subscribing to my Substack. Sam Altman has said he thinks that developing artificial general intelligence (AGI) could lead to human extinction, but OpenAI is trying to build it ASAP. Why? The common story for how AI could overpower humanity involves an “intelligence explosion,” where an AI system becomes smart enough to further improve its capabilities, bootstrapping its way to superintelligence. Even without any kind of recursive self-improvement, some AI safety advocates argue that a large enough number of copies of a genuinely human-level AI system could pose serious problems for humanity. (I discuss this idea in more detail in my recent Jacobin cover story.) Some people think the transition from human-level AI to superintelligence could happen in a matter of months, weeks, days, or even hours. The faster the takeoff, the more dangerous, the thinking goes. Sam Altman, circa February 2023, agrees [...] --- First published: February 10th, 2024 Source: https://forum.effectivealtruism.org/posts/vBjSyNNnmNtJvmdAg/sam-altman-s-chip-ambitions-undercut-openai-s-safety Linkpost URL:https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut --- Narrated by TYPE III AUDIO.
“Mathilde Danès (1994-2024)” by Elisa Autric, update
It is with great sadness that we share with you the passing of our fellow community member and beloved friend Mathilde Danès, at age 29. A dedicated EA, animal advocate, and children's rights supporter, Mathilde had discovered the effective altruism community in 2016-2017, and the farm animal advocacy movement not long after. In 2018, she co-founded a local EA chapter in Lille, where she then lived. Other notable accomplishments of hers, among the many altruistic actions she took, include catering for several animal advocacy and EA events in France between 2020 and 2023, as well as co-organizing the 2022 edition of the Estivales de la question animale, an annual French-speaking effective animal advocacy summit. With steadfast sentientist and antinatalist beliefs, she was particularly dedicated to the idea of reducing suffering whenever possible. Over the course of her advocacy years, she strived to live her life in accordance with [...] --- First published: February 8th, 2024 Source: https://forum.effectivealtruism.org/posts/CeCKGfDzYsrvhJPFa/mathilde-danes-1994-2024 --- Narrated by TYPE III AUDIO.
“Tragic Beliefs” by Toby Tremlett🔹
I'm posting this as part of the Forum's Benjamin Lay Day celebration — consider writing a reflection of your own! The “official dates” for this reflection are February 8 - 15 (but you can write about this topic whenever you want). I've cross-posted this to my substack, Raising Dust, where I sometimes write less EA Forum-y content. TL;DR: Tragic beliefs are beliefs that make the world seem worse, and give us partial responsibility for it. These are beliefs such as: “insect suffering matters” or “people dying of preventable diseases could be saved by my donations”. Sometimes, to do good, we need to accept tragic beliefs. We need to find ways to stay open to these beliefs in a healthy way. I outline two approaches, pragmatism and righteousness, which help, but can both be carried to excess. Why I ignored insects for so long: I’ve been trying not to [...] ---Outline:(00:37) Why I ignored insects for so long(03:06) What is a tragic belief?(03:46) How can we open ourselves up to tragic beliefs?(04:00) Opportunity framing, or pragmatism(06:02) The joy in righteousnessThe original text contained 4 footnotes which were omitted from this narration. --- First published: February 8th, 2024 Source: https://forum.effectivealtruism.org/posts/dy5h9Ly8osZEiFkru/tragic-beliefs --- Narrated by TYPE III AUDIO.
“Ambitious Impact (AIM) - a new brand for Charity Entrepreneurship and our extended ecosystem!” by CE, Joey
TLDR: Given Charity Entrepreneurship's recent scaling, we are changing our brand to call our extended ecosystem “Ambitious Impact (AIM).” Our new AIM umbrella brand will include the classic CE program as well as recent additional programs connected to grantmaking, research, and effective giving. We are also planning to launch new programs soon. We feel that AIM's ability to create onramps into other career paths (similar to what we have done for nonprofit entrepreneurship) is the most plausible way of doubling our impact. A quick history of Charity Entrepreneurship: Inspired by the early success of a few nonprofits identified by evaluators such as GiveWell, we decided to take a systematic approach to researching and then launching new impact-focused nonprofits (Charity Science Health, Fortify Health). After some initial successes, Charity Entrepreneurship was started in 2018 as a formal Incubation Program to get more field-leading charities started. 31 projects were founded [...] ---Outline:(00:43) A quick history of Charity Entrepreneurship(02:00) What will AIM look like going forward(03:25) Why we are launching AIM--- First published: February 8th, 2024 Source: https://forum.effectivealtruism.org/posts/cpuFnLtppbsLKcbbq/ambitious-impact-aim-a-new-brand-for-charity --- Narrated by TYPE III AUDIO.
“Celebrating Benjamin Lay (died on this day 265 years ago)” by Lizka
Quaker abolitionist Benjamin Lay died exactly 265 years ago today (on February 8, 1759). I’m using the anniversary of his death to reflect on his life and invite you to join me by sharing your thoughts sometime this week. Lay was a radical anti-slavery advocate and an important figure in the Quaker abolitionist movement. He's been described as a moral weirdo; besides viewing slavery as a great sin, he opposed the death penalty, was vegetarian, believed that men and women were equal in the eyes of God, and more. He didn’t hide his views and was known for his “guerrilla theater” protests, which included splashing fake blood on slave-owners and forcing people to step over him as they exited a meeting. Expulsion from various communities, ridicule for his beliefs or appearance (he had dwarfism), and the offended sensibilities of those around him didn’t seem to seriously slow him down. [...] ---Outline:(01:37) Protests against slavery: shocking people into awareness(05:55) Life(10:57) Role in the broader abolition movement(13:10) Concluding notesThe original text contained 13 footnotes which were omitted from this narration. --- First published: February 8th, 2024 Source: https://forum.effectivealtruism.org/posts/dM93vHvLTgpk8pLSX/celebrating-benjamin-lay-died-on-this-day-265-years-ago --- Narrated by TYPE III AUDIO.
“GWWC Pledge featured in new book from Head of TED, Chris Anderson” by Giving What We Can
Chris Anderson, Head of TED, has just released a new book called Infectious Generosity, which has a whole chapter that encourages readers to take the Giving What We Can Pledge! He has also taken the Giving What We Can Pledge with the new wealth option to give the greater of 10% of income or 2.5% of wealth each year. This inspiring book is a guide to turning infectious generosity into a global movement that builds a hopeful future. Chris offers a playbook for how to embark on our own generous acts and to use the Internet to give them self-replicating, potentially world-changing impact. Here's a quick excerpt from the book: “The more I’ve thought about generosity, the impact it can have, and the joy it can bring, the more determined I’ve become that it be an absolute core part of my identity. Jacqueline's work as a [...] --- First published: January 25th, 2024 Source: https://forum.effectivealtruism.org/posts/FWbaqM5PaFfrfAxaS/gwwc-pledge-featured-in-new-book-from-head-of-ted-chris --- Narrated by TYPE III AUDIO.
“Rates of Criminality Amongst Giving Pledge Signatories” by Ben_West
I investigate the rates of criminal misconduct amongst people who have taken The Giving Pledge (roughly: ~200 [non-EA] billionaires who have pledged to give most of their money to charity). I find that rates are fairly high: 25% of signatories have been accused of financial misconduct, and 10% convicted.[1] 4% of signatories have spent at least one day in prison. Overall, 41% of signatories have had at least one allegation of substantial misconduct (financial, sexual, or otherwise). I estimate that Giving Pledgers are not less likely, and possibly more likely, to commit financial crimes than YCombinator entrepreneurs. I am unable to find evidence of The Giving Pledge doing anything to limit the risk of criminal behavior amongst its members. I conclude that the rate of criminal behavior amongst major philanthropists is high, which means that we should not expect altruism to substantially lower the risks compared to that of [...] ---Outline:(01:22) Methodology(02:01) How well do convictions correspond with immoral behavior?(03:56) Some Representative Cases(04:08) Are Giving Pledge signatories less likely to commit financial crimes?(06:05) Giving Pledge's Response to Criminal Behavior(07:08) PR Impacts(08:15) Implications for EAThe original text contained 9 footnotes which were omitted from this narration. --- First published: January 22nd, 2024 Source: https://forum.effectivealtruism.org/posts/d8nW46LrTkCWdjiYd/rates-of-criminality-amongst-giving-pledge-signatories --- Narrated by TYPE III AUDIO.
“EA Wins 2023” by Shakeel Hashim
Crossposted from Twitter. As the year comes to an end, we want to highlight and celebrate some of the incredible achievements from in and around the effective altruism ecosystem this year. 1. A new malaria vaccine: The World Health Organization recommended its second-ever malaria vaccine this year: R21/Matrix-M, designed to protect babies and young children from malaria. The vaccine's recently concluded Phase III trial, which was co-funded by Open Philanthropy, found that it was 68-75% effective against the disease, which kills around 600,000 people (mainly children) each year. The work didn’t stop there, though. Following advocacy from many people — including Zacharia Kafuko of 1 Day Sooner — the WHO quickly prequalified the vaccine, laying the groundwork for an expedited deployment and potentially saving hundreds of thousands of children's lives. 1 Day Sooner is now working to raise money to expedite the deployment further. 2. The [...] --- First published: December 31st, 2023 Source: https://forum.effectivealtruism.org/posts/8P2GZFLnv8HW9ozLB/ea-wins-2023 --- Narrated by TYPE III AUDIO.
“Survey of 2,778 AI authors: six parts in pictures” by Katja_Grace
Crossposted from the AI Impacts blog. The 2023 Expert Survey on Progress in AI is out, this time with 2778 participants from six top AI venues (up from about 700 participants and two venues in the 2022 ESPAI), making it probably the biggest ever survey of AI researchers. People answered in October, an eventful fourteen months after the 2022 survey, which had mostly identical questions for comparison. Here is the preprint. And here are six interesting bits in pictures (with figure numbers matching the paper, for ease of learning more): 1. Expected time to human-level performance dropped 1-5 decades since the 2022 survey. As always, our questions about ‘high level machine intelligence’ (HLMI) and ‘full automation of labor’ (FAOL) got very different answers, and individuals disagreed a lot (shown as thin lines below), but the aggregate forecasts for both sets of questions dropped sharply. For context, between the 2016 and 2022 surveys, the forecast [...] --- First published: January 6th, 2024 Source: https://forum.effectivealtruism.org/posts/M9MSe4KHNv4HNf44f/survey-of-2-778-ai-authors-six-parts-in-pictures --- Narrated by TYPE III AUDIO.
“Zach Robinson will be CEA’s next CEO” by Ben_West, Eli Rose, ClaireZabel, lincolnq, Michelle_Hutchinson, MaxDalton, Oscar Howie
We, on behalf of the EV US and EV UK boards, are very glad to share that Zach Robinson has been selected as the new CEO of the Centre for Effective Altruism (CEA). We can personally attest to his exceptional leadership, judgement, and dedication from having worked with him at Effective Ventures US. These experiences are part of why we unanimously agreed with the hiring committee's recommendation to offer him the position.[1] We think Zach has the skills and the drive to lead CEA's very important work. We are grateful to the search committee (Max Dalton, Claire Zabel, and Michelle Hutchinson) for their thorough process in making the recommendation. They considered hundreds of potential internal and external candidates, including through dozens of blinded work tests. For further details on the search process, please see this Forum post. As we look forward, we are excited about CEA's future with [...] The original text contained 1 footnote which was omitted from this narration. --- First published: December 28th, 2023 Source: https://forum.effectivealtruism.org/posts/R6qu7LhcLKLob7t9r/zach-robinson-will-be-cea-s-next-ceo --- Narrated by TYPE III AUDIO.
“Double the donation: EA inadequacy found?” by Neil Warren
I'm only 30% sure [Edit Jan 7: 90% sure] that this is actually an inadequacy on the part of those whose job it is to maximize donations, but I’ve noticed that none of the donation pages of GiveWell, Giving What We Can, Horizon Institute, or METR have this little tab in them that MIRI has (just scroll down after following the link): This little tool comes from doublethedonation.com. I was looking for charities to donate to, and I’m grateful I stumbled upon the MIRI donation page because otherwise I would not have known that Google would literally double my donation. None of the donation pages except MIRI's had this little “does your company do employer matching?” box. WHY. I would wager other tech companies have similar programs, and that a good chunk of EA donations come from employees of those tech companies, and that thousands of dollars a year [...] --- First published: January 5th, 2024 Source: https://forum.effectivealtruism.org/posts/fbTE2cBtnxCqemWNp/double-the-donation-ea-inadequacy-found --- Narrated by TYPE III AUDIO.
“Economic Growth - Donation suggestions and ideas” by DavidNash
There was a recent post about economic growth & effective altruism by Karthik Tadepalli. He pointed out that a lot of people agree that economic growth is important, but that this agreement hasn't really led to many suggestions for specific interventions. I thought it would be good to get the ball rolling[1] by asking a few people what they think are good donation opportunities in this area, or, if not, whether they think this area is neglected given that governments, development banks, investors, etc. are all focused on growth. I'm hoping there will be more in-depth research into this in 2024 to see whether there are opportunities for smaller/medium funders, and how competitive it is with the best global health interventions. I have fleshed out a few of the shorter responses with more details on what the suggested organisation does. Shruti Rajagopalan (Mercatus Center): XKDR Forum - Founded by [...] The original text contained 1 footnote which was omitted from this narration. --- First published: January 8th, 2024 Source: https://forum.effectivealtruism.org/posts/oTuNw6MqXxhDK3Mdz/economic-growth-donation-suggestions-and-ideas --- Narrated by TYPE III AUDIO.
“Are Far-UVC Interventions Overhyped? [Founders Pledge]” by christian.r, Rosie_Bettle
Much attention recently has focused on far-UVC light, part of the spectrum of germicidal UV (GUV), and its promise for pandemic prevention. In the following medium investigation, we examine different kinds of GUV, their strengths, weaknesses, and crucial considerations in the real-world deployment of these interventions. Throughout the report, we emphasize four points that we believe have been lost in some public discussions of far-UVC: A great deal of uncertainty remains around far-UVC interventions, to the point that we remain uncertain over the relative cost-effectiveness, all things considered, of far-UVC light versus conventional (~254nm) GUV in many settings. Cost-effectiveness depends on deployment context, including the dimensions of rooms, installation type (upper-room, full-room, etc.), assumptions about the mixing of air, etc. Combined with certain physical facts about air and light (e.g. the inverse square law), this complicates strong claims about far-UVC's promise. GUV of any kind will not [...] ---Outline:(02:39) Medium Investigation: Germicidal Ultraviolet Light and Disease Transmission Reduction(10:30) Key Terms and Abbreviations(13:12) Importance: Why Biological Indoor Air Quality Matters(17:23) Current status of indoor air quality in the US(20:22) What is GUV?(22:49) Full-Room Systems(23:45) Far-UVC Light(24:41) Upper-Room GUV(25:27) In-Duct GUV(26:42) Benefits of GUV as a biosecurity intervention(30:24) What do we know about the safety of far-UVC light?(31:14) Primary Concerns: Skin and Eye Damage(36:13) Additional Safety Concerns(36:17) Ozone Production(37:28) Indoor Air Pollution(41:08) Skin Microbiome(42:12) Overall view on safety(44:10) Additional Risks(44:14) Public Reaction Considerations(47:17) Dual-Use Potential and Security Risks(50:18) Resistance risks(52:01) Damage to Plastics and Other Materials(54:47) Comparing Different Types of GUV(56:08) Is far-UVC technology overhyped?(59:42) The Complexity of Comparing Different GUV Systems and Wavelengths(01:04:41) Efficacy(01:07:35) The Challenges of Studying Real-World GUV Effectiveness(01:16:53) Safety(01:17:02) Skin/Eye Effects(01:18:39) Indoor Air Pollution(01:19:06) Cost(01:21:31) Limits to real-world use(01:23:30) Additional benefits(01:24:07) Overall View(01:26:06) Neglectedness(01:32:11) What Could a Philanthropist Do?(01:33:29) Grantmaking under Uncertainty: Impact Multipliers for GUV(01:36:52) Placing Wavelength-Agnostic Bets(01:41:28) Leveraging Societal Resources Via Advocacy(01:42:42) Prioritizing shaping R&D incentives over funding specific R&D(01:44:14) Focusing on high-income countries first(01:45:37) Focusing on good information over rapid deployment(01:50:32) Potential funding pathways(01:55:48) Conclusions and Next Steps(01:57:03) About Founders PledgeThe original text contained 157 footnotes which were omitted from this narration. --- First published: January 9th, 2024 Source: https://forum.effectivealtruism.org/posts/EZZveBtxoZJjSighP/are-far-uvc-interventions-overhyped-founders-pledge --- Narrated by TYPE III AUDIO.
[Podcast AMA] Rob Mather, Founder and CEO of the Against Malaria Foundation
This podcast is an Ask Me Anything with Rob Mather, Founder and CEO of the Against Malaria Foundation, hosted by Toby Tremlett, the EA Forum’s Content Manager. If you’re interested in Effective Altruism, you’ve probably heard of Rob Mather’s charity, the Against Malaria Foundation. For almost two decades, they’ve been doing crucial work to protect people, especially children, from malaria. To date, around 450 million people have been protected by this charity's malaria bed nets. Once all of their currently funded nets have been distributed, AMF estimates it will have prevented 185,000 deaths. And it’s not just AMF saying this: they’ve been a GiveWell Top Charity since 2009. Listen to this episode to find out more about how Rob ended up starting one of the most effective global health charities, Rob’s tips for running a charity, how AMF’s work integrates with other NGOs that work on malaria, and much more. The original AMA post, which features Forum users’ questions for Rob, more information about AMF, and a link to a transcript for this episode, can be found here. Source: https://forum.effectivealtruism.org/posts/rWRSvRxAco2bLoXKr/podcast-transcript-ama-founder-and-ceo-of-the-against Published for the Effective Altruism Forum by TYPE III AUDIO. ---
“MIRI 2024 Mission and Strategy Update” by Malo
As we announced back in October, I have taken on the senior leadership role at MIRI as its CEO. It's a big pair of shoes to fill, and an awesome responsibility that I’m honored to take on. There have been several changes at MIRI since our 2020 strategic update, so let's get into it.[1] The short version: We think it's very unlikely that the AI alignment field will be able to make progress quickly enough to prevent human extinction and the loss of the future's potential value, which we expect will result from loss of control to smarter-than-human AI systems. However, developments this past year like the release of ChatGPT seem to have shifted the Overton window in a lot of groups. There's been a lot more discussion of extinction risk from AI, including among policymakers, and the discussion quality seems greatly improved. This provides a glimmer of hope. [...] ---Outline:(03:27) MIRI's mission(05:47) MIRI in 2021–2022(10:10) New developments in 2023(13:03) Looking forwardThe original text contained 7 footnotes which were omitted from this narration. --- First published: January 5th, 2024 Source: https://forum.effectivealtruism.org/posts/Lvd2DFaHKfuveaCyQ/miri-2024-mission-and-strategy-update --- Narrated by TYPE III AUDIO.
[Linkpost] “Practically A Book Review: Appendix to ‘Nonlinear’s Evidence: Debunking False and Misleading Claims’” by Rafael Harth
A summary of the Nonlinear situation from a third-person perspective, by Ozy Brennan. I find it to be thoughtful, well-written, and well-researched enough to be part of the conversation. It was also posted on LessWrong. Edit: full text included below (with permission from the author). I had to reformat a lot, so assume all formatting errors are mine. About a week ago, I became one of a small, elite group—people who have read every single goddamn word of Nonlinear's Evidence: Debunking False and Misleading Claims. I read the appendix. I read the appendixes to the appendix. I have spent more time thinking about whether a complete stranger got a vegan burger than any person should. If you just want to make up your own mind, I suggest reading The General Situation and referring to Dramatis Personae when you get confused about who is who. If you like drama [...] ---Outline:(02:00) How We Got Here(03:46) Dramatis Personae(05:20) Ben's Credibility(07:39) Kat Woods's Credibility(11:30) The General Situation(12:59) Enmeshment(17:44) Finances(23:22) Isolation(25:17) The Drug Situation(27:34) Specific Situations(27:37) The Vegan Burger Situation(29:21) The Driving Situation(31:21) Speculation About Drugs(32:42) Alice's Credibility(36:09) The Saga of Alice's Incubated Organization(41:58) The Saga of Emerson Spartz's Previous Business Life(45:14) Could Nonlinear Employ People?(48:50) TakeawaysThe original text contained 16 footnotes which were omitted from this narration. --- First published: January 3rd, 2024 Source: https://forum.effectivealtruism.org/posts/RdiQ3n8cFJBxHRnwu/practically-a-book-review-appendix-to-nonlinear-s-evidence-1 Linkpost URL:https://thingofthings.substack.com/p/practically-a-book-review-appendix --- Narrated by TYPE III AUDIO.
“Altruism sharpens altruism” by Joey
I think many EAs have a unique view about how one altruistic action affects the next altruistic action, something like: altruism is powerful in terms of its impact, and altruistic acts take time/energy/willpower; thus, it's better to conserve your resources for the most important altruistic actions (e.g., career choice) and not sweat the other actions. However, I think this is a pretty simplified and incorrect model that leads to the wrong choices being taken. I wholeheartedly agree that certain actions constitute a huge % of your impact. In my case, I do expect my career/job (currently running Charity Entrepreneurship) will be more than 90% of my lifetime impact. But I have a different view on what this means for altruism outside of career choices. I think that being altruistic in other actions not only does not decrease my altruism on the big choices but actually galvanizes them and [...] --- First published: December 26th, 2023 Source: https://forum.effectivealtruism.org/posts/DBcDZJhTDgig9QNHR/altruism-sharpens-altruism --- Narrated by TYPE III AUDIO.
[Linkpost] “A year of wins for farmed animals” by Vasco Grilo
This is a crosspost for A year of wins for farmed animals, published by Lewis Bollard on 14 December 2023 in the Open Philanthropy Farm Animal Welfare Research Newsletter. It's been a tough year for farmed animals. The European Union shelved the world's most ambitious farm animal welfare reform proposal, plant-based meat sales sagged, and the media panned cultivated meat while Italy banned it. But advocates for factory farmed animals still won major gains — here are ten of the biggest: 1. Wins for the winged. Advocates won 130 new corporate pledges to eliminate cages for hens or the worst abuses of broiler chickens. This progress has now expanded well beyond the West: recent wins include cage-free pledges from the largest Asian restaurant company and the largest Indonesian retailer. That's mostly thanks to the work of the 100+ member groups of the Open Wing Alliance, who now campaign across 67 [...] --- First published: December 24th, 2023 Source: https://forum.effectivealtruism.org/posts/aiXyEvheFdwsEoPeC/a-year-of-wins-for-farmed-animals Linkpost URL:https://farmanimalwelfare.substack.com/p/a-year-of-wins-for-farmed-animals --- Narrated by TYPE III AUDIO.
“Winners in the Forum’s Donation Election (2023)” by Lizka
TL;DR: We ran a Donation Election in which 341 Forum users[1] voted on how we should allocate the Donation Election Fund ($34,856[2]). The winners are: Rethink Priorities - $12,847.75 Charity Entrepreneurship: Incubated Charities Fund - $11,351.11 Animal Welfare Fund (EA Funds) - $10,657.07 This post shares more information about the results: Comments from voters about their votes: patterns include referencing organizations' marginal funding posts, updating towards the neglectedness of animal welfare, appreciating strong track records, etc. Voting patterns: most people voted for 2-4 candidates (at least one of which was one of the three winners), usually in multiple cause areas Cause area stats: similar numbers of points went to cross-cause, animal welfare, risk/future-oriented, and global health candidates (ranked in that order) All candidate results, including raw point[3] totals: the Long-Term Future Fund initially placed second by raw point totals Concluding thoughts & other charities You can [...] The original text contained 10 footnotes which were omitted from this narration. --- First published: December 24th, 2023 Source: https://forum.effectivealtruism.org/posts/7D83kwkyaHLQSo6JT/winners-in-the-forum-s-donation-election-2023 --- Narrated by TYPE III AUDIO.
[Linkpost] “Attention on Existential Risk from AI Likely Hasn’t Distracted from Current Harms from AI” by Erich_Grunewald
Summary. In the past year, public fora have seen growing concern about existential risk (henceforth, x-risk) from AI. The thought is that we could see transformative AI in the coming years or decades, that it may be hard to ensure that such systems act with humanity's best interests in mind, that those highly advanced AIs may be able to overpower us if they aimed to do so, or that such systems may otherwise be catastrophically misused. Some have reacted by arguing that concerns about x-risk distract from current harms from AI, like algorithmic bias, job displacement and labour issues, environmental impact and so on. And in opposition to those voices, others have argued that attention on x-risk does not draw resources and attention away from current harms -- that both concerns can coexist peacefully. The claim that x-risk distracts from current harms is contingent. It may be [...] ---Outline:(02:15) The Argument(09:31) Evidence(10:46) AI Policy(14:33) Search Interest(19:40) Twitter/X Followers(20:55) Funding(22:20) Climate Change(25:39) Maybe the Real Disagreement Is about How Big the Risks AreThe original text contained 9 footnotes which were omitted from this narration. --- First published: December 21st, 2023 Source: https://forum.effectivealtruism.org/posts/hXzB72kfdAk6PTzio/attention-on-existential-risk-from-ai-likely-hasn-t Linkpost URL:https://www.erichgrunewald.com/posts/attention-on-existential-risk-from-ai-likely-hasnt-distracted-from-current-harms-from-ai/ --- Narrated by TYPE III AUDIO.
“The privilege of native English speakers in reaching high-status, influential positions in EA” by Alix Pham
Huge thanks to Konrad Seifert, Marcel Steimke, Ysaline Bourgine, Milena Canzler, Alex Rahl-Kaplan, Marieke de Visscher, and Guillaume Vorreux for the valuable feedback provided on drafts of this post, and to many others for the conversations that led to me writing it. Views & mistakes are my own. TL;DR: Being a non-native English speaker makes one sound less convincing. However, poor inclusion of non-native English speakers means missed perspectives in decision-making. Hence, it's a vicious circle where lack of diversity persists: native English culture prevails at the thought leadership level and neglects other cultures by failing to acknowledge that it is inherently harder to stand out as a non-native English speaker. Why I am writing this: I’m co-directing EA Switzerland (I’m originally from France), and I’ve been thinking about the following points for some time. I’ve been invited to speak at the Panel on Community Building at EAG Boston [...] ---Outline:(00:25) TL;DR(00:52) Why I am writing this(01:38) An unconscious bias against non-native English speakers(02:00) Non-native English speakers sound less convincing(03:43) Poor inclusion of non-native English speakers means missed perspectives(04:30) Native English speakers are overrepresented in EA's thought leadership: a vicious circle(05:47) What can we do about it?(05:50) Some prompts(06:35) Let's talk about thisThe original text contained 6 footnotes which were omitted from this narration. --- First published: December 20th, 2023 Source: https://forum.effectivealtruism.org/posts/F7qzoWhiCTK8KpuTM/the-privilege-of-native-english-speakers-in-reaching-high --- Narrated by TYPE III AUDIO.
“Effective Aspersions: How the Nonlinear Investigation Went Wrong” by TracingWoodgrains
The New York Times. Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with the express goal of digging up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, and start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL. They spend hundreds of hours over six months on interviews and evidence collection, paying Émile and Timnit for their time and effort. The phrase "HBD" is muttered, but it's nobody's birthday. A few days before publication, they present key claims to the Centre for Effective Altruism (CEA), who furiously tell them that many of the claims are provably false and ask for a brief delay to demonstrate the falsehood of [...] ---Outline:(00:06) The New York Times(03:08) A Word of Introduction(07:35) The Story So Far: A Recap(11:08) Avoidable, Unambiguous Falsehoods in Sharing Information About Nonlinear(21:32) These Issues Were Known and Knowable By Lightcone and the Community. The EA/LW Community Dismissed Them(27:03) Better processes are both possible and necessary(38:44) On Lawsuits(47:15) First Principles, Duty, and Harm(50:43) What of Nonlinear?The original text contained 16 footnotes which were omitted from this narration. --- First published: December 19th, 2023 Source: https://forum.effectivealtruism.org/posts/bwtpBFQXKaGxuic6Q/effective-aspersions-how-the-nonlinear-investigation-went --- Narrated by TYPE III AUDIO.
“GWWC is spinning out of EV” by Luke Freeman
Giving What We Can (GWWC) is embarking on an exciting new chapter: after years of support, we will be spinning out of the Effective Ventures Foundation UK and US (collectively referred to as “EV”), our parent charities in the UK and US respectively, to become an independent organisation. Rest assured that our core mission, commitments, and focus on effective giving remain unchanged. We believe this transition will allow us to better serve our community and to achieve our mission more effectively. Below, you'll find all the details you need, including what is changing, what isn't, and how you can get involved. A heartfelt thanks: First and foremost, we owe a very big thank you to the team at EV. Their support over the years has helped us to grow and have a meaningful impact in the world. We could not be more grateful for their support. A big thank [...] ---Outline:(00:41) A heartfelt thanks(01:08) Why spin out?(02:36) The details(02:55) What's changing(04:03) What's not changing(04:51) How you can help(05:09) Have more questions?--- First published: December 13th, 2023 Source: https://forum.effectivealtruism.org/posts/ngoqSAbcdYhhNgBza/gwwc-is-spinning-out-of-ev --- Narrated by TYPE III AUDIO.
“EV updates: FTX settlement and the future of EV” by Zachary Robinson
We’re announcing two updates today that we believe will strengthen the effective altruism ecosystem. FTX updates: First, we’re pleased to say that both Effective Ventures UK and Effective Ventures US have agreed to settlements with the FTX bankruptcy estate. As part of these settlements, EV US and EV UK (which I’ll collectively refer to as “EV”) have between them paid the estate $26,786,503, an amount equal to 100% of the funds the entities received from FTX and the FTX Foundation (which I’ll collectively refer to as “FTX”) in 2022. All of this money was either originally received from FTX or allocated to pay the settlement with the knowledge and support of its original donor. This means that EV's projects can continue to fundraise with confidence that donations won’t be used to cover the cost of this settlement. We strongly condemn fraud and the actions underlying Sam Bankman-Fried's conviction. Also [...] ---Outline:(00:12) FTX updates(02:59) Future of EV--- First published: December 13th, 2023 Source: https://forum.effectivealtruism.org/posts/HjsfHwqasyQMWRzZN/ev-updates-ftx-settlement-and-the-future-of-ev --- Narrated by TYPE III AUDIO.
“Nonlinear’s Evidence: Debunking False and Misleading Claims” by Kat Woods
Recently, Ben Pace wrote a well-intentioned blog post mostly based on complaints from 2 (of 21) Nonlinear employees who 1) wanted more money, 2) felt socially isolated, and 3) felt persecuted/oppressed. Of relevance, one has accused the majority of her previous employers, and 28 people of abuse - that we know of. She has accused multiple people of threatening to kill her and literally accused an ex-employer of murder. Within three weeks of joining us, she had accused five separate people of abuse: not paying her what was promised, controlling her romantic life, hiring stalkers, and other forms of persecution. We have empathy for her. Initially, we believed her too. We spent weeks helping her get her “nefarious employer to finally pay her” and commiserated with her over how badly they mistreated her. Then she started accusing us of strange things. You’ve seen Ben's evidence, which [...] ---Outline:(02:20) Short summary overview table(04:04) This post is long, so if you read just one illustrative story, read this one(08:37) What is going on? Why did they say so many misleading things? How did Ben get so much wrong?(12:14) Ben admitted in his post that he was warned in private by multiple of his own sources that Alice was untrustworthy and told outright lies. One credible person told Ben Alice makes things up.(20:35) Alice has similarities to Kathy Forth, who, according to Scott Alexander, was “a very disturbed person” who, multiple people told him, “had a habit of accusing men she met of sexual harassment. They all agreed she wasn’t malicious, just delusional.” As a community, we do not have good mechanisms in place to protect people from false accusations.(23:41) Why didn’t Ben do basic fact-checking to see if their claims were true? I mean, multiple people warned him?(24:42) Longer summary table(26:14) To many EAs, this would have been a dream job(37:13) Sharing Information on Ben Pace(47:07) So how do we learn from this to make our community better? How can we make EA antifragile?(51:22) Conclusion: a story with no villains(57:16) If you are disturbed by what happened here, here are some ways you can help(58:53) Acknowledgments--- First published: December 12th, 2023 Source: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims --- Narrated by TYPE III AUDIO.
“PEPFAR, one of the most life-saving global health programs, is at risk” by salonium
Summary: International funding and coordination to tackle HIV/AIDS and support health systems in lower- and middle-income countries is at risk of not being renewed by the US Congress, due to demands that it be linked to new abortion-related restrictions in recipient countries. This program is estimated to have saved over 20 million lives since it was launched by the Bush Administration in 2003, and even now averts over a million HIV/AIDS deaths annually. Since it has also helped support health systems in LMICs, and tackle malaria and tuberculosis, its impact is likely greater than this. In my view this is the most important risk to global health we face today, and I think it isn't getting enough attention. If anyone is interested in research, writing or advocacy on this issue, please do so. Or please get in touch if you are interested in jointly working on this [...] --- First published: December 10th, 2023 Source: https://forum.effectivealtruism.org/posts/gF4nLBpjgFe6XTMrM/pepfar-one-of-the-most-life-saving-global-health-programs-is --- Narrated by TYPE III AUDIO.
“Early findings from the world’s largest UBI study” by GiveDirectly
Summary of findings 2 years in: A monthly universal basic income (UBI) empowered recipients and did not create idleness. They invested, became more entrepreneurial, and earned more. The common concern of “laziness” never materialized, as recipients did not work less or drink more. Both a large lump sum and a long-term UBI proved highly effective. The lump sum enabled big investments, and the guarantee of 12 years of UBI encouraged savings and risk-taking. A short-term UBI was the least impactful of the designs but still effective. On nearly all important economic measures, a 2-year-only UBI performed less well than giving cash as a large lump sum or guaranteeing a long-term UBI, despite each group having received roughly the same total amount of money at this point. However, it still had a positive impact on most measures. Governments should consider changing how they deliver cash aid. Short-term monthly payments [...] ---Outline:(02:37) A monthly UBI made people in poverty more productive, not less(03:57) Giving $500 as a lump sum improved economic outcomes more than giving it out over 24 months(06:27) The promise of a long-term UBI encouraged saving and investment(07:43) This research should inform how cash is given to people in extreme poverty(10:28) Study designThe original text contained 6 footnotes which were omitted from this narration. --- First published: December 6th, 2023 Source: https://forum.effectivealtruism.org/posts/DBx98atdYFM3yKR9C/early-findings-from-the-world-s-largest-ubi-study --- Narrated by TYPE III AUDIO.
“EA thoughts from Israel-Hamas war” by ezrah
I'm Ezra, CEO of EA Israel, but am writing this in a personal capacity. My goal in writing this is to give the community a sense of what someone from a decent-sized local EA group is going through during a time of national crisis. I'll try to keep the post relatively apolitical, but since this is such a charged topic, I'm not sure I'll succeed. I will say that I’m quite nervous about the responses to the post, since the forum can sometimes lean towards criticism. Ideally, I’d want people who are reading this to do so with a sense of compassion, while keeping in mind that this is a difficult time and difficult topic to post or share experiences about. I also don't want the comments to be a discussion of the war per se, but of the experiences of an EA during the war. Finally, I'm sure that [...] ---Outline:(01:25) So what have I been doing since the outbreak of the war?(04:09) How has my experience changed the way I think about EA concepts?(04:35) Donation recommendations(05:39) Important aspects of suffering aren't captured by usual metrics(06:17) There's even more moral uncertainty(07:39) Caring about people far away can go wrong if the topic is trending(08:44) Models of complex situations can be overly simplistic and harmful(11:24) Optimisation is irrelevant(12:59) Ambition to improve the world(14:06) The importance of peaceThe original text contained 1 footnote which was omitted from this narration. --- First published: December 6th, 2023 Source: https://forum.effectivealtruism.org/posts/iXv6zjbAfwrpQy37B/ea-thoughts-from-israel-hamas-war --- Narrated by TYPE III AUDIO.
“EA Infrastructure Fund’s Plan to Focus on Principles-First EA” by Linch, calebp, Tom Barnes
Summary: EA Infrastructure Fund (EAIF)[1] has historically had a somewhat scattershot focus within “EA meta.” This makes it difficult for us to know what to optimize for, or for donors to evaluate our performance. We propose that we switch towards focusing our grantmaking on Principles-First EA.[2] This includes supporting: research that aids prioritization across different cause areas; projects that build communities focused on impartial, scope-sensitive, and ambitious altruism; and infrastructure, especially epistemic infrastructure, to support these aims. We hope that the tighter focus area will make it easier for donors and community members to evaluate the EA Infrastructure Fund, and decide for themselves whether EAIF is a good fit to donate to or otherwise support. Our tentative plan is to collect feedback from the community, donors, and stakeholders until the end of this year. Early 2024 will focus on refining our approach and helping ease the transition for grantees. We'll begin piloting our new vision in Q2 2024. Introduction and background [...] ---Outline:(01:17) Introduction and background context(04:29) Proposal(05:10) Examples of projects under the new EAIF's purview(06:01) Examples of projects that are outside of the updated scope(06:46) Why focus on Principles-First EA?(08:43) Potential Metrics(10:05) Potential Alternatives for Donors and Grantees(10:52) Tentative Timeline(11:46) Appendices(11:53) Examples of projects that I (Caleb) would be excited for this fund to support(13:00) Scope Assessment of Hypothetical EAIF Applications(15:53) Key ConsiderationsThe original text contained 5 footnotes which were omitted from this narration. --- First published: December 6th, 2023 Source: https://forum.effectivealtruism.org/posts/FnNJfgLgsHdjuMvzH/ea-infrastructure-fund-s-plan-to-focus-on-principles-first --- Narrated by TYPE III AUDIO.
“Effective Giving Incubation - apply to CE & GWWC’s new program!” by CE, Giving What We Can, Jacintha Baas, Federico Speziali, Luke Moore
Charity Entrepreneurship, in collaboration with Giving What We Can, is opening a new program to launch 4-6 new Effective Giving Initiatives (EGIs) in 2024. We expect them to raise millions in counterfactual funding for highly impactful charities, even in their first few years. [Applications are open now] In recent years, Doneer Effectief, Effektiv Spenden & Giving What We Can have moved huge sums of money ($1.4m, $35m and $330m, respectively) to the best charities globally. We aim to build on their experience and success by launching new EGIs in highly promising locations. These initiatives can be fully independent or run in collaboration with existing organizations, depending on what is most impactful. We’ll provide the training, the blueprints, and the all-important seed funding. This 8-week full-time, fully cost-covered program will run online from April 15 to June 7, 2024, with 2 weeks in person in London. We encourage individuals from all countries [...] ---Outline:(01:31) Who is this program for?(02:35) Why do we think this is promising?(04:17) Our top recommended target countries(05:25) Should you apply?(06:43) Application process--- First published: December 5th, 2023 Source: https://forum.effectivealtruism.org/posts/ME4ihqRojjuhprejm/effective-giving-incubation-apply-to-ce-and-gwwc-s-new --- Narrated by TYPE III AUDIO.
“What do we really know about growth in LMICs? (Part 1: sectoral transformation)” by Karthik Tadepalli
To EAs, "development economics" evokes the image of RCTs on psychotherapy or deworming. That is, after all, the closest interaction between EA and development economists. However, this characterization has prompted some pushback, in the form of the argument that all global health interventions pale in comparison to the Holy Grail: increasing economic growth in poor countries. After all, growth increases basically every measure of wellbeing on a far larger scale than any charity intervention, so it's obviously more important than any micro-intervention. Even a tiny chance of boosting growth in a large developing country will have massive expected value, more than all the GiveWell charities you can fund. The argument is compelling[1] and well-received - so why haven't "growth interventions" gone anywhere? I think the EA understanding of growth is just too abstract to yield really useful interventions that EA organizations could lobby for or implement directly. We need specific [...] ---Outline:(02:31) Sectoral Transformation(03:21) 1. Agricultural productivity growth can drive sectoral transformation... or hurt it.(08:44) 2. Education leads people to move out of agriculture (but with some negative spillovers).(10:39) 3. Barriers to reallocation are surprisingly small; people select into sectors based on their skills.(12:36) 4. Most sectoral transformation today comes from people moving into services, not manufacturing.The original text contained 5 footnotes which were omitted from this narration. --- First published: December 2nd, 2023 Source: https://forum.effectivealtruism.org/posts/H7rjCEmhcWscZsEnE/what-do-we-really-know-about-growth-in-lmics-part-1-sectoral --- Narrated by TYPE III AUDIO.
[Linkpost] “Doing Good Effectively is Unusual” by Richard Y Chappell
tl;dr: It actually seems pretty rare for people to care about the general good as such (i.e., optimizing cause-agnostic impartial well-being), as we can see by prejudged dismissals of EA concern for non-standard beneficiaries and for doing good via indirect means. Introduction: Moral truisms may still be widely ignored. The moral truism underlying Effective Altruism is that we have strong reasons to do more good, and it's worth adopting the efficient promotion of the impartial good among one's life projects. (One can do this in a “non-totalizing” way, i.e. without it being one's only project.) Anyone who personally adopts that project (to any non-trivial extent) counts, in my book, as an effective altruist (whatever their opinion of the EA movement and its institutions). Many people don’t adopt this explicit goal as a personal priority to any degree, but still do significant good via more particular commitments (to more specific communities, causes, or individuals). [...] ---Outline:(03:14) Let's be honest(08:36) OK, but what about the actual movement/institutions?(09:32) Serious Evaluation Goes Beyond VibesThe original text contained 3 footnotes which were omitted from this narration. --- First published: December 1st, 2023 Source: https://forum.effectivealtruism.org/posts/YKidYukDhKLBtDqsh/doing-good-effectively-is-unusual Linkpost URL:https://rychappell.substack.com/p/doing-good-effectively-is-unusual --- Narrated by TYPE III AUDIO.
“Effektiv Spenden’s Impact Evaluation 2019-2023 (exec. summary)” by Sebastian Schienle, Sebastian Schwiecker, Anne Schulze
effektiv-spenden.org is an effective giving platform in Germany and Switzerland that was founded in 2019. To reflect on our past impact, we examine Effektiv Spenden's cost-effectiveness as a "giving multiplier" from 2019 to 2022 in terms of how much money is directed to highly effective charities due to our work. We have two primary reasons for this analysis: to provide past and future donors with transparent information about our cost-effectiveness, and to hold ourselves accountable, particularly in a situation where we are investing in further growth of our platform. We provide both a simple multiple (or “leverage ratio”) of donations raised for highly effective charities compared to our operating costs, as well as an analysis of the counterfactual (i.e. what would have happened had we never existed). Our analysis complements our Annual Review 2022 (in German) and builds on previous updates and annual reviews, such as, amongst others, our reviews of 2021 and 2019. [...] ---Outline:(02:07) Key results(03:56) How to interpret our resultsThe original text contained 1 footnote which was omitted from this narration. --- First published: December 1st, 2023 Source: https://forum.effectivealtruism.org/posts/wtjFne8WdcLJTpyWm/effektiv-spenden-s-impact-evaluation-2019-2023-exec-summary --- Narrated by TYPE III AUDIO.
In Continued Defense Of Effective Altruism — Scott Alexander
This is a link post. --- First published: November 29th, 2023 Source: https://forum.effectivealtruism.org/posts/ML6hxqM6g6mXewJtZ/in-continued-defense-of-effective-altruism-scott-alexander Linkpost URL:https://www.astralcodexten.com/p/in-continued-defense-of-effective --- Narrated by TYPE III AUDIO.
“How I feel about my GWWC Pledge” by Michael Townsend
I took the GWWC Pledge in 2018, while I was an undergraduate student. I only have a hazy recollection of the journey that led to me taking the Pledge. I thought I’d write that down, reflect on how I feel now, and maybe share it. In high-school, I was kind of cringe: I saw respected people wear suits, and I watched (and really liked) shows like Suits. I unreflectively assumed I’d end up the same. The only time I would reflect on it was to motivate myself to study for my upcoming exams — I have memories of going to the bathroom as a 17-year-old, looking at myself in the mirror, and imagining being successful. I imagined the BMW I might drive, the family I could provide for, and the nice house I could own. A lot of this was psychologically tied up in aspirations to be in great shape. I was [...] ---Outline:(00:20) In high-school, I was kind of cringe(02:09) In early university, I didn’t really know who I wanted to be(04:20) Then, I started giving(05:13) Then, I took the Pledge(07:09) Does how we feel about giving matter?--- First published: November 29th, 2023 Source: https://forum.effectivealtruism.org/posts/sdNYqwaTJ5j4hGZit/how-i-feel-about-my-gwwc-pledge --- Narrated by TYPE III AUDIO.
“EA is good, actually” by Amy Labenz
The last year has been tough for EA. FTX blew up in the most spectacular way and SBF has been found guilty of one of the biggest frauds in history. I was heartbroken to learn that someone I trusted hurt so many people, was heartbroken for the people who lost their money, and was heartbroken about the projects I thought would happen that no longer will. The media piled on, and on, and on. The community has processed the shock in all sorts of ways — some more productive than others. Many have published thoughtful reflections. Many have tried to come up with ways to ensure that nothing like this will ever happen again. Some people rallied, some looked for someone to blame, and we all felt betrayed. I personally spent November–February working more than full-time on a secondment to Effective Ventures. Meanwhile, there were several [...] ---Outline:(00:03) The last year has been tough(02:55) The EA community is good(05:43) EA ideas are good--- First published: November 28th, 2023 Source: https://forum.effectivealtruism.org/posts/dgXg6ddauC3sBwe67/ea-is-good-actually --- Narrated by TYPE III AUDIO.
“AMF – Reflecting on 2023 and looking ahead to 2024” by RobM
Rob Mather, CEO, AMF, 25 November 2023. 2023 has been a very busy year for AMF; more on 2024 later. Impact: AMF's team of 13 is in the middle of a nine-month period during which we are distributing, with partners, 90 million nets to protect 160 million people in seven countries: Chad, the Democratic Republic of Congo, Nigeria, South Sudan, Togo, Uganda, and Zambia. The impact of these nets is expected to be, within ±20%, 40,000 deaths prevented, 20 million cases of malaria averted, and a US$2.2 billion improvement in the local economy (12x the funds applied). When people are ill they cannot farm, drive, teach – function, in short – so the improvement in health leads to economic as well as humanitarian benefits. This is a terrific contribution from the tens of thousands of donors who have contributed US$180 million over the last two years, and the many partners with whom we work that make possible the distribution [...] --- First published: November 24th, 2023 Source: https://forum.effectivealtruism.org/posts/fkft56o8Md2HmjSP7/amf-reflecting-on-2023-and-looking-ahead-to-2024 --- Narrated by TYPE III AUDIO.
“The passing of Sebastian Lodemann” by ClaireB, amanda
With immense sadness, we want to let the community know about the passing of Sebastian Lodemann, who lost his life on November 9th, 2023, in a completely unexpected and sudden accident. Those who have met him know how humble and kind he was, in addition to being a brilliant and energetic person full of light. Sebastian was deeply altruistic, curious, and took seriously both the challenges facing our world, and its potential. He loved connecting with humans from across the globe and supporting as many people as he could, so there will be a wide international community of people who will keenly feel his absence. Sebastian had been involved with EA since 2016, working on a wide range of projects in AI governance and strategy, pandemic prevention, civilisational resilience and career advising, and taking the Giving What We Can pledge. We extend our deepest sympathies to Sebastian's wife, his children, his [...] --- First published: November 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/dqDhXc9qirhPHjfXH/the-passing-of-sebastian-lodemann --- Narrated by TYPE III AUDIO.
“A Thanksgiving gratitude post to EA” by Joy Bittner
Despite the complicated and imperfect origins of American Thanksgiving, what's worth preserving is the moment it offers for society to step back and count our blessings. And in this moment, I want to express my gratitude to the EA community. It's been a hard year for EA, and many of us have felt increasing levels of disillusionment. Still, a huge thank you to each of you for being part of this messy, but beautiful family. What I love about EA is that at our core, we are people who look around and see a world that is messed up and kind of shitty. But also, when we see this mess, we deeply feel a moral responsibility to do something about it. And rather than falling into despair, we are optimistic enough to think we can actually do something about it. This more than anything else is what I think makes this [...] --- First published: November 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/YRGSmjYDaMvScCXh2/a-thanksgiving-gratitude-post-to-ea --- Narrated by TYPE III AUDIO.
“GWWC’s evaluations of evaluators” by Sjir Hoeijmakers, Giving What We Can, Michael Townsend🔸, Alana HF
The Giving What We Can research team is excited to share the results of our first round of evaluations of charity evaluators and grantmakers! After announcing our plans for a new research direction last year, we have now completed five[1] evaluations that will inform our donation recommendations for this giving season. There are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement on the status quo, in which there were no independent evaluations of evaluators’ work. We plan to continue to evaluate evaluators, extending the list beyond the five we’ve covered so far, improving our methodology, and regularly renewing our existing evaluations. In this post, we share the key takeaways from each of these evaluations, and link to the full reports. [EDIT 27 November] Our website has now been updated to reflect the new fund and charity recommendations that came out of these [...] ---Outline:(02:26) Global health and wellbeing(02:30) GiveWell (GW)(03:51) Happier Lives Institute (HLI)(04:10) Animal welfare(04:14) EA Funds’ Animal Welfare Fund (AWF)(05:21) Animal Charity Evaluators (ACE)(08:31) Reducing global catastrophic risks(08:36) EA Funds’ Long-Term Future Fund (LTFF)(10:22) Longview's Longtermism Fund (LLF)The original text contained 4 footnotes which were omitted from this narration. --- First published: November 22nd, 2023 Source: https://forum.effectivealtruism.org/posts/PTHskHoNpcRDZtJoh/gwwc-s-evaluations-of-evaluators --- Narrated by TYPE III AUDIO.
“Rethink Priorities needs your support. Here’s what we’d do with it.” by Peter Wildeford
In honor of “Marginal Funding Week” for 2023 Giving Season on the EA Forum, I’d like to tell you what Rethink Priorities (RP) would do with funding beyond what we currently expect to raise from our major funders, and to emphasize that RP currently has a significant funding gap even after taking these major funders into account. A personal appeal: Hi. I know it's traditional in EA to stick to the facts and avoid emotional content, but I can’t help but interject and say that this fundraising appeal is a bit different. It is personal to me. It's not just a list of things that we could take or leave; it's a fight for RP to survive the way I want it to, as an organization that is intellectually independent and serves the EA community. To be blunt, our funding situation is not where we want it to be. 2023 has been a [...] ---Outline:(07:00) General(07:03) $1K - $10K per research work to allow us to publish our backlog of research(11:29) Worldview Investigations(11:33) $200K to do cause prioritization research and build on the cross-cause model(14:27) $500K to do more worldview cause exploration(14:58) Surveys and Data Analysis(15:02) $60K to run the next EA Survey(16:01) $40K-100K to more rigorously understand branding for EA and existential risk(17:56) $25K-$100K to more rigorously understand EA's growth trajectory(18:53) $40K to more rigorously understand branding for AI risk outreach(19:35) $50K to more rigorously understand why people drop out of EA(20:03) Animal welfare(21:17) $250K to create a review of interventions to reduce the consumption of animal products(22:10) $38K to create a Farmed Animals Impact Tracker(23:09) $60K to understand interventions that would address crustacean welfare(23:39) $50K for development and implementation of an insect farming welfare ask(24:32) $100K to develop a database of possible near-term interventions for wild animals(25:15) $75K to do a theory of change status report for the animal advocacy movement(26:13) $300K to develop a better system similar to QALYs/DALYs but for animals(27:18) Global Health and Development(27:22) $405K to pilot our Value of Research model(29:21) AI Governance(29:25) $15K to write up learnings from spending a year attempting longtermist incubation(30:07) $114K to train an additional AI policy researcher(31:05) Included with your donation: talent pipelines and field building(33:45) Conclusion(36:19) Acknowledgements--- First published: November 21st, 2023 Source: https://forum.effectivealtruism.org/posts/cMcEBSNiy4meDrmuE/rethink-priorities-needs-your-support-here-s-what-we-d-do --- Narrated by TYPE III AUDIO.
“The EA Animal Welfare Fund (Once Again) Has Significant Room For More Funding” by kierangreig, Neil_Dullaghan, KarolinaSarek, Zoë Sigle
Just as ~2 years ago, the EA Animal Welfare Fund has significant room for more funding. This could be a pretty important point that informs end-of-year giving for a number of donors who are looking to make donations within the animal sector. Briefly, here's why the Animal Welfare Fund has some pretty significant room for more funding at this point: Right now, there's ~$1M in the Animal Welfare Fund, and we now have 50 grants, summing to ~$4.5M, under evaluation. Between mid-last year and mid-this year, the EA AWF received ~350 applications, of which ~150 were desk rejects and ~200 were graded by fund managers. Of these ~200, ~60 received funding, and ~30 received the grant amount they applied for or more. Assuming that the general shape of the pipeline remains similar, that could imply we may now have more grants than we can fund. Potentially [...] --- First published: November 20th, 2023 Source: https://forum.effectivealtruism.org/posts/qpqab3LwmA6yBFJCk/the-ea-animal-welfare-fund-once-again-has-significant-room --- Narrated by TYPE III AUDIO.
“Open Phil Should Allocate Most Neartermist Funding to Animal Welfare” by Ariel Simnegar
Thanks to Michael St. Jules for his comments. Key Takeaways: The evidence that animal welfare dominates in neartermism is strong. Open Philanthropy (OP) should scale up its animal welfare allocation over several years to approach a majority of OP's neartermist grantmaking. If OP disagrees, they should practice reasoning transparency by clarifying their views: How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods? Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff? How would OP's views have to change for OP to prioritize animal welfare in neartermism? Summary: Rethink Priorities (RP)'s moral weight research endorses the claim that the best animal welfare interventions are orders of magnitude (1000x) more cost-effective than the best neartermist alternatives. Avoiding this conclusion seems very difficult: Rejecting hedonism (the view that only pleasure and pain have moral [...] ---Outline:(00:09) Key Takeaways(02:46) The Evidence Endorses Prioritizing Animal Welfare in Neartermism(06:32) Objections(06:35) Animal Welfare Does Not Dominate in Neartermism(07:07) RPs Project Assumptions are Incorrect(09:21) Endorsing Overwhelming Non-Hedonism(14:13) Endorsing Overwhelming Hierarchicalism(16:09) Its Strongly Intuitive that Helping Humans > Helping Chickens(16:46) Skepticism of Formal Philosophy(17:58) Even if Animal Welfare Dominates, it Still Shouldnt Receive a Majority of Neartermist Funding(18:12) Worldview Diversification Opposes Majority Allocations to Controversial Cause Areas(19:43) OP is Already a Massive Animal Welfare Funder(20:16) Animal Welfare has Faster Diminishing Marginal Returns than Global Health(21:36) Increasing Animal Welfare Funding would Reduce OP's Influence on Philanthropists(23:42) Request for Reasoning Transparency from OP(26:13) Conclusion--- First published: November 19th, 2023 Source: https://forum.effectivealtruism.org/posts/btTeBHKGkmRyD5sFK/open-phil-should-allocate-most-neartermist-funding-to-animal --- Narrated by TYPE III AUDIO.
[Linkpost] “Sam Altman fired from OpenAI” by Larks
The board of directors of OpenAI, Inc, the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company's chief technology officer, will serve as interim CEO, effective immediately. A member of OpenAI's leadership team for five years, Mira has played a critical role in OpenAI's evolution into a global AI leader. She brings a unique skill set, understanding of the company's values, operations, and business, and already leads the company's research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO. Mr. Altman's departure follows a deliberative review process by the [...] --- First published: November 17th, 2023 Source: https://forum.effectivealtruism.org/posts/HjgD3Q5uWD2iJZpEN/sam-altman-fired-from-openai Linkpost URL:https://openai.com/blog/openai-announces-leadership-transition --- Narrated by TYPE III AUDIO.
“Spiro - New TB charity raising seed funds” by Habiba Banu
Summary: We (Habiba Banu and Roxanne Heston) have launched Spiro, a new TB screening and prevention charity focused on children. Our website is here. We are fundraising $198,000 for our first year. We’re currently reaching out to people in the EA network. So far we have between 20% and 50% of our budget promised, and fundraising is currently one of the main things we’re focusing on. The major components of our first-year budget are co-founder time, country visits, and delivery of a pilot program, which aims to do household-level TB screening and provision of preventative medication. We think that this project has a lot of promise: Tuberculosis has a huge global burden, killing 1.3 million people every year, and is disproportionately neglected and fatal in young children. The evidence for preventative treatment is robust and household programs are promising, yet few high-burden countries have scaled up this intervention. Modeling by Charity Entrepreneurship and by academics indicates that this [...] ---Outline:(02:10) Who are we?(03:26) What are we going to do?(04:12) Why TB?(04:40) Why this intervention?(06:36) Why Spiro?(07:28) Budget(09:21) Learn more(09:36) Donations--- First published: November 17th, 2023 Source: https://forum.effectivealtruism.org/posts/C8ZzjFc7aKT7ihmeK/spiro-new-tb-charity-raising-seed-funds --- Narrated by TYPE III AUDIO.
[Linkpost] “Kids or No Kids” by KidsOrNoKids
This post summarizes how my partner and I decided whether to have children or not. We spent hundreds of hours on this decision and hope to save others part of that time. We found it very useful to read the thoughts of people who share significant parts of our values on the topic and thus want to "pay it forward" by writing this up. In the end, we decided to have children; our son is four months old now and we’re very happy with how we made the decision and with how our lives are now (through a combination of sheer luck and good planning). It was a very narrow and very tough decision though. Both of us care a lot about having a positive impact on the world, and our jobs are the main way we expect to have an impact (through direct work and/or earning to give). As a [...] ---Outline:(01:35) Process - how we decided(04:25) Content - considerations we used(04:29) Impact considerations(04:33) Time(07:29) Sleep(10:45) Money(11:33) Flexibility(12:07) Effects on one's community(13:07) Falling out of the heavy tail(13:38) The children's impact(14:17) Value drift(14:44) Personal life worthiness or happiness considerations(14:49) Health(18:06) Social/soft factors(20:49) Overarching considerations--- First published: November 12th, 2023 Source: https://forum.effectivealtruism.org/posts/YYnjHt5YzuHSH7oRR/kids-or-no-kids Linkpost URL:https://www.lesswrong.com/posts/3MzDMBk4DZrbYePJS/kids-or-no-kids --- Narrated by TYPE III AUDIO.
“A robust earning to give ecosystem is better for EA organizations and the community” by abrahamrowe
(Written in a personal capacity, and not representing either my current employer or my former one.) In 2016, I founded Utility Farm, and later merged it with Wild-Animal Suffering Research (founded by Persis Eskander) to form Wild Animal Initiative. Wild Animal Initiative is, by my estimation, a highly successful research organization. The current Wild Animal Initiative staff deserve all the credit for where they have taken the organization, but I’m incredibly proud that I got to be involved early in the establishment of a new field of study, wild animal welfare science, and to see the tiny organization I started in an apartment with a few hundred dollars go on to be recommended by ACE as a Top Charity for 4 years in a row. In my opinion, Wild Animal Initiative has become, under the stewardship of more capable people than I, the single best bet for unlocking interventions that could tackle [...] ---Outline:(08:21) A robust earning to give ecosystem is better for charities(11:22) A robust earning to give ecosystem is better for the EA community(13:59) The success of EA shouldn’t only be measured by how much money is moved by the community--- First published: November 11th, 2023 Source: https://forum.effectivealtruism.org/posts/AyLF2KQ8AqQuiuDLz/a-robust-earning-to-give-ecosystem-is-better-for-ea --- Narrated by TYPE III AUDIO.
“Takes from staff at orgs with leadership that went off the rails” by Julia_Wise
I spoke with some people who worked or served on the board at organizations that had a leadership transition after things went seriously wrong. In some cases the organizations were EA-affiliated, in other cases only tangentially related to the EA space. This is an informal collection of advice the ~eight people I spoke with have for staff or board members who might find themselves in a similar position. I bucketed this advice into a few categories below. Some are direct quotes and others are paraphrases of what they said. All spelling is Americanized for anonymity. I’m sharing it here not because I think it's an exhaustive accounting of all types of potential leadership issues (it's not) or because I think any of this is unique to or particularly prevalent in or around EA (I don’t). But I hope that it's helpful to any readers who may someday be in a position like [...] ---Outline:(01:06) Written policies(02:39) Role of board / advice for board(06:12) Advice for staff(10:39) More notes--- First published: November 9th, 2023 Source: https://forum.effectivealtruism.org/posts/jLaDP2aWxdDCzwBYy/takes-from-staff-at-orgs-with-leadership-that-went-off-the --- Narrated by TYPE III AUDIO.
“Announcing Athena - Women in AI Alignment Research” by Claire Short
Athena is a new research mentorship program fostering diversity of ideas in AI safety research. We aim to get more women and marginalized genders into technical research and offer the support needed to thrive in this space. Applications for scholars are open until December 3rd, 2023. Apply as a scholar: here. Apply as a mentor or speaker: here. Financial aid is available to cover travel expenses for the in-person retreat for those otherwise unable to attend. Program Structure: A 2-month hybrid mentorship program for women looking to strengthen their research skills and network in technical AI safety research, beginning in January 2024. This includes a 1-week in-person retreat in Oxford, UK, followed by a 2-month remote mentorship by established researchers in the field, with networking and weekly research talks. Athena aims to equip women with the knowledge, skills, and network they need to thrive in AI safety research. We believe that diversity is a strength [...] ---Outline:(01:32) Who should apply?(01:53) Application process(02:08) Questions?(02:15) Why are we doing this--- First published: November 7th, 2023 Source: https://forum.effectivealtruism.org/posts/B2t559dP65ffKZsDa/announcing-athena-women-in-ai-alignment-research --- Narrated by TYPE III AUDIO.
“10 years of Earning to Give” by AGB
General note: The bulk of this post was written a couple of months ago, but I am releasing it now to coincide with the Effective Giving Spotlight week. I shortly expect to release a second post documenting some observations on the community building funding landscape. Introduction: Way back in 2010, I was sitting in my parents' house, watching one of my favourite TV shows, the UK's Daily Politics. That day's guest was an Oxford academic by the name of Toby Ord. He was donating everything above £18,000 (£26,300 in today's money) to charity, and gently pushing others to give 10%. "Nice guy," I thought. "Pity it'll never catch on." Two years later, a couple of peers interned at Giving What We Can. At the same time, I did my own internship in finance, and my estimate of my earning potential quadrupled[1]. One year after that, I graduated and took the Giving What We [...] ---Outline:(01:13) Post goals(02:28) My path(03:07) Work(05:36) Lifestyle Inflation(07:48) Savings(09:27) Donations(10:11) Community(10:14) Why engage?(11:58) Why stop?(13:48) Closing ThoughtsThe original text contained 7 footnotes which were omitted from this narration. --- First published: November 7th, 2023 Source: https://forum.effectivealtruism.org/posts/gxppfWhx7ta2fkF3R/10-years-of-earning-to-give --- Narrated by TYPE III AUDIO.
“State of the East and Southeast Asian EAcosystem” by Elmerei Cuevas, jiayang, Obeyesekere, Dion, Yi-Yang, Saad Siddiqui, BrianTan, Jaynell Chang, onenastassja, Alethea Faye Cendaña
This write-up is a compilation of organisations and projects aligned / adjacent to the effective altruism movement in East Asia and Southeast Asia and was written around the EAGxPhilippines conference. Some organisations, projects, and contributors prefer not to be public and hence were removed from this write-up. While this is not an exhaustive list of projects and organisations per country in the region, it is a good baseline of the progress of the effective altruism movement for this side of the globe. Feel free to click the links to the organisations/projects themselves to dive deeper into their work. Contributors: Saad Siddiqui; Anthony Lau; Anthony Obeyesekere; Masayuki "Moon" Nagai; Yi-Yang Chua; Elmerei Cuevas, Alethea Faye Cedaña, Jaynell Ehren Chang, Brian Tan, Nastassja "Tanya" Quijano; Dion Tan, Jia Yang Li; Saeyoung Kim; Nguyen Tran; Alvin Lau. Forum post graphic credits to Jaynell Ehren Chang. EAGx photo credits to CS Creatives. Mainland China 🇨🇳: China Global Priorities Group. Aims [...] ---Outline:(01:15) Mainland China 🇨🇳(01:18) China Global Priorities Group(01:51) City Group: EAHK(03:13) University Group: EAHKU(03:39) Academia (AI):(04:51) Academia (Psychology):(05:04) Cause specific organisations/ projects(05:27) EA Indonesia(08:17) EA Japan(09:36) Malaysia 🇲🇾(09:42) EA Malaysia(10:15) Philippines 🇵🇭(10:22) A. Meta Organizations/ Projects:(11:18) B. Some Cause Specific Organisations(12:57) Singapore 🇸🇬(13:03) EA Singapore(13:19) EA NUS(13:27) Welfare Matters(13:35) CEARCH(13:45) GFI APAC(13:53) Global Food Partners(13:59) Effective Giving SG(14:08) AI Safety(14:14) South Korea 🇰🇷(14:17) EA South Korea(14:40) Viet Nam 🇻🇳(14:46) EA Viet Nam(14:55) EA Fullbright (Fulbright University Vietnam)(15:02) LessWrong Vietnam(15:11) Shrimp Welfare Project(15:15) Taiwan 🇹🇼(15:21) EA Taiwan--- First published: November 6th, 2023 Source: https://forum.effectivealtruism.org/posts/NrqGyXzvwB2Gqu6XW/state-of-the-east-and-southeast-asian-eacosystem --- Narrated by TYPE III AUDIO.
“Clean water - the incredible 30% mortality reducer we can’t explain” by NickLaing
TLDR: The best research we have shows that clean water may provide a 30% mortality reduction to children under 5. This might be the biggest mortality reduction of any single global health intervention, yet we don’t understand why it works. Here, I share my journey exploring a life-saving intervention that we don’t fully understand, but really should. I may err a little on the side of artistic license - so if you find inaccuracies please forgive me, correct me, or even feel free to just tear me to shreds in the comments ;). Part 1: GiveWell's seemingly absurd numbers. I first became curious after a glance at what seemed like a dubious GiveWell-funded project. A $450,000 scoping grant for water chlorination in Rwanda? This didn’t make intuitive sense to me. In sub-Saharan Africa diarrhoea causes 5-10% of child mortality. While significant, the diarrhoea problem continues to improve with better access to [...] ---Outline:(02:26) Part 2: A Nobel Prize winner's innovative math(04:27) Part 3. We already knew about this anomaly – 100 years ago(08:10) Part 5: What next for clean water?The original text contained 7 footnotes which were omitted from this narration. --- First published: November 4th, 2023 Source: https://forum.effectivealtruism.org/posts/hFPbe2ZwmB9athsXT/clean-water-the-incredible-30-mortality-reducer-we-can-t --- Narrated by TYPE III AUDIO.
“Rethink Priorities’ Cross-Cause Cost-Effectiveness Model: Introduction and Overview” by Derek Shiller, bcbernardo, Chase Carter, Agustín Covarrubias, Marcus_A_Davis, MichaelDickens, Laura Duffy, Peter Wildeford
This post is a part of Rethink Priorities’ Worldview Investigations Team's CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else. This post presents a software tool we're developing to better understand risk and effectiveness. Executive Summary: The cross-cause cost-effectiveness model (CCM) is a software tool under development by Rethink Priorities to produce cost-effectiveness evaluations in different cause areas. The CCM enables evaluations of interventions in global health and development, animal welfare, and existential risk mitigation. The CCM also includes functionality for evaluating research projects aimed at improving existing interventions or discovering more effective alternatives. The CCM follows a Monte Carlo approach to assessing probabilities. The CCM accepts user-supplied distributions as parameter [...] ---Outline:(00:43) Executive Summary(04:24) Purpose(05:36) Key Features(05:52) We model uncertainty with simulations(06:38) We incorporate user-specified parameter distributions(07:17) Our results capture outcome ineffectiveness(08:42) We enable users to specify the probability of extinction for different future eras(09:28) Structure(10:08) Intervention module(10:40) Global Health and Development(11:14) Animal Welfare(12:00) Existential Risk Mitigation(13:59) Research projects module(14:56) Limitations(15:12) It is geared towards specific kinds of interventions(16:24) Distributions are a questionable way of handling deep uncertainty(17:13) The model doesn’t handle model uncertainty(18:01) The model assumes parameter independence(18:59) Lessons(19:06) The expected value of existential risk mitigation interventions depends on future population dynamics(20:15) The value of existential risk mitigation is extremely variable(21:38) Tail-end results can capture a huge amount of expected value(22:22) Unrepresented correlations may be decisive(23:43) Future Plans(24:49) Acknowledgements--- First published: November 3rd, 2023 Source: https://forum.effectivealtruism.org/posts/pniDWyjc9vY5sjGre/rethink-priorities-cross-cause-cost-effectiveness-model --- Narrated by TYPE III AUDIO.
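To make the Monte Carlo approach concrete, here is a minimal sketch of how a model of this kind combines sampled parameter distributions with a chance of outcome ineffectiveness. The structure and every number are invented for illustration; this is not RP's actual CCM or its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Toy parameter distributions (all invented):
cost_per_unit = rng.lognormal(mean=np.log(5000), sigma=0.5, size=N)  # $ per welfare unit
units_if_works = rng.normal(loc=30, scale=10, size=N).clip(min=0)    # e.g. DALYs averted
p_works = 0.7                    # chance the intervention has any effect at all
works = rng.random(N) < p_works  # models "outcome ineffectiveness"

value_per_dollar = np.where(works, units_if_works / cost_per_unit, 0.0)

print(f"mean value per $: {value_per_dollar.mean():.6f}")
print("5th/95th percentiles:", np.percentile(value_per_dollar, [5, 95]))
```

The point of simulating rather than multiplying point estimates is that the output is a distribution, so tail-end results and variability stay visible rather than being averaged away.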
[Linkpost] “How Long Do Policy Changes Matter? New Paper” by zdgroff
A key question for many interventions' impact is how long the intervention changes some output counterfactually, or how long the intervention washes out. This is often the case for work to change policy: the cost-effectiveness of efforts to pass animal welfare ballot initiatives, nuclear non-proliferation policy, climate policy, and voting reform, for example, will depend on (a) whether those policies get repealed and (b) whether they would pass anyway. Often there is an explicit assumption, e.g., that passing a policy is equivalent to speeding up when it would have gone into place anyway by X years.[1] [2] As people routinely note when making these assumptions, it is very unclear what assumption would be appropriate.In a new paper (my economics "job market paper"), I address this question, focusing on U.S. referendums but with some data on other policymaking processes:Policy choices sometimes appear stubbornly persistent, even when they become politically unpopular or [...] ---Outline:(02:20) Overview of Results and Methods(06:23) Notes Particular to the EA Community(06:27) Policy Changes Seem to Matter (Much) Longer than EAs Have Assumed(07:34) Neglectedness Matters(08:02) Comparing Persistence: Can We Compare Policy to Other Social Changes?The original text contained 2 footnotes which were omitted from this narration. --- First published: November 2nd, 2023 Source: https://forum.effectivealtruism.org/posts/jCwuozHHjeoLPLemB/how-long-do-policy-changes-matter-new-paper Linkpost URL:https://zachfreitasgroff.com/FreitasGroff_Policy_Persistence.pdf --- Narrated by TYPE III AUDIO.
“Are 1-in-5 Americans familiar with EA?” by David_Moss
YouGov recently reported the results of a survey (n=1000) suggesting that about “one in five (22%) Americans are familiar with effective altruism.”[1] We think these results are exceptionally unlikely to be true. Their 22% figure is very similar to the proportion of Americans who claimed to have heard of effective altruism (19%) in our earlier survey (n=6130). But, after conducting appropriate checks, we estimated that much lower percentages are likely to have genuinely heard of EA[2] (2.6% after the most stringent checks, which we speculate is still likely to be somewhat inflated[3]). Is it possible that these numbers have simply increased dramatically following the FTX scandal? Fortunately, we have tested this with multiple follow-up surveys explicitly designed with this possibility in mind.[4] In our most recent survey (conducted October 6th[5]), we estimated that approximately 16% (13.0%-20.4%) of US adults would claim to have heard of EA. Yet, when we add in [...] ---Outline:(02:00) Attitudes towards EA(02:35) ConclusionsThe original text contained 6 footnotes which were omitted from this narration. --- First published: November 2nd, 2023 Source: https://forum.effectivealtruism.org/posts/CwKiAt54aJjcqoQDh/are-1-in-5-americans-familiar-with-ea --- Narrated by TYPE III AUDIO.
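For readers wondering how a check can shrink a headline figure so much, here is a toy version of the kind of deflation described; the specific checks and all rates below are invented for illustration, not RP's actual methodology or data.

```python
# Toy deflation of a claimed-awareness figure. All rates are invented.

claimed = 0.22         # fraction claiming to have heard of EA
fake_item_rate = 0.12  # hypothetical fraction also "recognizing" a made-up movement
explain_rate = 0.15    # hypothetical fraction of claimers whose open-text
                       # description of EA passes a stringent check

# Check 1: subtract the overclaiming rate implied by a fictitious item.
adjusted_overclaim = max(claimed - fake_item_rate, 0.0)

# Check 2: keep only claimers who can actually describe EA.
adjusted_stringent = claimed * explain_rate

print(f"overclaiming-adjusted: {adjusted_overclaim:.1%}")  # 10.0%
print(f"stringent estimate:    {adjusted_stringent:.1%}")  # 3.3%
```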
“Alvea’s Story, Wins, and Challenges [Unofficial]” by kyle_fish
Intro: A few months ago, the other (former) Alvea executives and I made the difficult decision to wind Alvea down and return our remaining funding to investors. In this post, I’ll share an overview of Alvea's story and highlight a few of our wins and challenges from my perspective as an Alvea Co-Founder and former CTO, in hopes that our experiences might be useful to others who are working on (or considering) ambitious projects to solve the biggest problems in the world. I’m sharing everything below in a personal capacity and as a window into my current thinking—this is not the definitive or official word on Alvea and it doesn’t necessarily represent the views of any other Alvea team members. I expect my reflections to continue evolving as I further process the journey and outcomes of this project, and hope to share more along the way. Alvea's Story: First vaccine sprint and [...] ---Outline:(00:05) Intro(00:55) Alvea's Story(00:59) First vaccine sprint and decision to continue Alvea (December 2021 through April 2022)(02:48) Product pursuits (May 2022 through July 2022)(04:04) Second vaccine sprint (August 2022-December 2023)(05:22) Funding environment change and pivot (Dec 2022 - May 2023)(07:33) Wind down and PanLabs spin out (June-Present)(09:08) Wins(09:11) Record time to safe clinical development(10:12) General-purpose rapid drug development capacity(11:12) Team Development(12:09) Medical countermeasure knowledge(13:02) Challenges(13:22) Challenges of rapid growth(14:21) Costs of complex non-standard corporate set up(15:00) Navigating the transition from short- to long-term goals(16:12) Conclusion--- First published: November 1st, 2023 Source: https://forum.effectivealtruism.org/posts/d9bamQHBAwAjuKtNA/alvea-s-story-wins-and-challenges-unofficial --- Narrated by TYPE III AUDIO.
[Linkpost] “Alvea Wind Down Announcement [Official]” by kyle_fish, greghima, cateycat
After careful consideration, we made the difficult decision to wind Alvea down and return our remaining funds to investors. This decision was the result of many months of experimentation and analysis regarding Alvea's strategy, path to impact, and commercial potential, which ultimately led us to the conclusion that Alvea's overall prospects were not sufficiently compelling to justify the requisite investment of money, time, and energy over the coming years. Alvea started in late 2021 as a moonshot to rapidly develop and deploy a room temperature-stable DNA vaccine candidate against the Omicron wave of COVID-19, and we soon became the fastest startup to take a new drug from founding to a Phase 1 clinical trial. However, we decided to discontinue our lead candidate during the follow-up period of the trial as the case for large-scale impact weakened amidst the evolving pandemic landscape. Over the following year, we explored different applications of our [...] --- First published: November 1st, 2023 Source: https://forum.effectivealtruism.org/posts/3EjExF8HeJbmk4Bp4/alvea-wind-down-announcement-official Linkpost URL:https://www.alvea.bio/winddown/ --- Narrated by TYPE III AUDIO.
[Linkpost] “President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” by Tristan Williams
Released today (10/30/23), this is crazy: perhaps the most sweeping action taken by government on AI yet. Below, I've segmented the proposals into x-risk and near-term risk categories. It's worth noting that some of these are very specific and direct an action to be taken by one of the executive branch organizations (e.g. sharing of safety test results), but others are guidance, which involves "calls on Congress" to pass legislation that codifies the desired action. Existential Risk Related Actions: Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all [...] ---Outline:(00:37) Existential Risk Related Actions:(03:10) Near Term Risk Actions:(03:14) General(03:46) Privacy(04:33) Discrimination(05:11) Jobs--- First published: October 30th, 2023 Source: https://forum.effectivealtruism.org/posts/pcbsM45vLmHcFpNnr/president-biden-issues-executive-order-on-safe-secure-and Linkpost URL:https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ --- Narrated by TYPE III AUDIO.
“We’re Not Ready: thoughts on ‘pausing’ and responsible scaling policies” by Holden Karnofsky
Views are my own, not Open Philanthropy's. I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via my spouse. Over the last few months, I’ve spent a lot of my time trying to help out with efforts to get responsible scaling policies adopted. In that context, a number of people have said it would be helpful for me to be publicly explicit about whether I’m in favor of an AI pause. This post will give some thoughts on these topics. I think transformative AI could be soon, and we’re not ready. I have a strong default to thinking that scientific and technological progress is good and that worries will tend to be overblown. However, I think AI is a big exception here because of its potential for unprecedentedly rapid and radical transformation.[1] I think [...] ---Outline:(00:36) I think transformative AI could be soon, and we’re not ready(02:11) If it were all up to me, the world would pause now - but it isn’t, and I’m more uncertain about whether a “partial pause” is good(07:46) Responsible scaling policies (RSPs) seem like a robustly good compromise with people who have different views from mine (with some risks that I think can be managed)The original text contained 5 footnotes which were omitted from this narration. --- First published: October 27th, 2023 Source: https://forum.effectivealtruism.org/posts/ntWikwczfSi8AJMg3/we-re-not-ready-thoughts-on-pausing-and-responsible-scaling --- Narrated by TYPE III AUDIO.
“Impact Evaluation in EA” by callum
Summary: Given EA's history and values, I’d have expected impact evaluation to be a distinguishing feature of the movement. In fact, impact evaluation seems fairly rare in the EA space. There are some things specific actors could do for EA to get more of the benefits of impact evaluation. For example, organisations that don’t already do so could carry out evaluations of their impact, and a well-suited individual could start an organisation to carry out impact evaluations and analysis of the EA movement. Overall I’m unsure to what extent more focus on impact evaluation would be an improvement. On the one hand, establishing impact is challenging for many EA activities and impact evaluation can be burdensome. On the other hand, an organisation's historic impact seems very action-relevant to its future activities, and current levels of impact evaluation seem low. What Is Impact Evaluation? Over the last year I’ve been speaking to EA orgs about their impact [...] ---Outline:(00:59) What Is Impact Evaluation?(01:22) Why is Impact Evaluation Important?(02:17) I’d expect Impact Evaluation to be quite common in EA(03:00) Impact evaluation is fairly rare in EA(04:42) Potentially justified reasons for this(05:58) Some low- to medium-cost opportunities(08:11) ConclusionThe original text contained 2 footnotes which were omitted from this narration. --- First published: October 26th, 2023 Source: https://forum.effectivealtruism.org/posts/hDNpHEA2Kn4xBoS8r/impact-evaluation-in-ea --- Narrated by TYPE III AUDIO.
“How bad would human extinction be?” by arvomm
[Figure 1: see full caption in the original post.] This post is a part of Rethink Priorities' Worldview Investigations Team's CURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of this sequence is twofold: first, to consider alternatives to expected value maximisation for cause prioritisation; second, to evaluate the claim that a commitment to expected value maximisation robustly supports the conclusion that we ought to prioritise existential risk mitigation over all else. Executive Summary. Background: This report builds on the model originally introduced by Toby Ord on how to estimate the value of existential risk mitigation. The previous framework has several limitations, including: the inability to model anything requiring shorter time units than centuries, like AI timelines; a very limited range of scenarios considered (in the previous model, risk and value growth can take different forms, and each combination represents one scenario); no explicit treatment of persistence – how long the effects of mitigation efforts last – as a variable of interest; and no easy way [...] ---Outline:(00:38) Executive Summary(05:26) Abridged Report(11:20) Generalised Model: Arbitrary Risk Profile(13:37) Value(19:00) Great Filters and the Time of Perils Hypothesis(21:06) Decaying Risk(21:55) Results(21:58) Convergence(25:35) The Expected Value of Mitigating Risk Visualised(31:59) Concluding Remarks(35:00) AcknowledgementsThe original text contained 24 footnotes which were omitted from this narration. --- First published: October 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/S9H86osFKhfFBCday/how-bad-would-human-extinction-be --- Narrated by TYPE III AUDIO.
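For intuition about the kind of model being generalised, here is a stripped-down Ord-style calculation with per-century risk, constant value per surviving century, and persistence as an explicit variable. All parameters are invented for illustration; the report's model is far more general.

```python
# Toy expected value of mitigating extinction risk. All parameters invented.

def ev_of_mitigation(r=0.2, dr=0.05, v=1.0, horizon=100, persistence=1):
    """EV gained by lowering per-century extinction risk from r to r - dr
    for the first `persistence` centuries, over `horizon` centuries."""
    def total_value(risks):
        survive, total = 1.0, 0.0
        for risk in risks:
            survive *= 1 - risk   # probability we reach this century
            total += survive * v  # value accrues only if we survive
        return total

    baseline = [r] * horizon
    mitigated = [r - dr] * persistence + [r] * (horizon - persistence)
    return total_value(mitigated) - total_value(baseline)

print(ev_of_mitigation(persistence=1))   # one-century reduction
print(ev_of_mitigation(persistence=10))  # longer persistence is worth more
```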
“Thoughts on responsible scaling policies and regulation” by Paul_Christiano
I am excited about AI developers implementing responsible scaling policies; I’ve recently been spending time refining this idea and advocating for it. Most people I talk to are excited about RSPs, but there is also some uncertainty and pushback about how they relate to regulation. In this post I’ll explain my views on that: I think that sufficiently good responsible scaling policies could dramatically reduce risk, and that preliminary policies like Anthropic's RSP meaningfully reduce risk by creating urgency around key protective measures and increasing the probability of a pause if those measures can’t be implemented quickly enough. I think that developers implementing responsible scaling policies now increases the probability of effective regulation. If I instead thought it would make regulation harder, I would have significant reservations. Transparency about RSPs makes it easier for outside stakeholders to understand whether an AI developer's policies are adequate to manage risk, and creates a focal point for [...] ---Outline:(02:03) Why I’m excited about RSPs(03:26) Thoughts on an AI pause(04:56) I expect RSPs to help facilitate effective regulation(06:24) Anthropic's RSP(09:16) On the name “responsible scaling”--- First published: October 24th, 2023 Source: https://forum.effectivealtruism.org/posts/cKW4db8u2uFEAHewg/thoughts-on-responsible-scaling-policies-and-regulation --- Narrated by TYPE III AUDIO.
[Linkpost] “Pausing AI might be good policy, but it’s bad politics” by Stephen Clare
NIMBYs don’t call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They’re usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don’t destroy habitat for birds and stuff. Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that's because developers couldn’t find a way to build them without hurting poor people, local communities, or birds and stuff. This is called politics and it's powerful. The most effective anti-housebuilding organisation in the UK doesn’t call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English people love rural England. CPRE campaigns in the 1940s helped shape England's planning system. As a result, permission to build houses is only granted when it's in the “public interest”; in practice it is given infrequently and often with [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: October 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/avrFeH6LpqJrjmGmc/pausing-ai-might-be-good-policy-but-it-s-bad-politics Linkpost URL:https://unfoldingatlas.substack.com/p/pause-ai-is-bad-politics --- Narrated by TYPE III AUDIO.
“Presenting: 2023 Incubated Charities (Round 2) - Charity Entrepreneurship” by CE
After launching our first batch of 2023 charities in April, we are now thrilled to announce the launch of four new nonprofit organizations through our July/August 2023 Incubation Program. The 2023 Round 2 incubated charities are: Clear Solutions - Providing treatment for young children to prevent deaths from diarrhoeal diseases. Lafiya Nigeria - Reducing maternal mortality by providing safe family planning options in rural northern Nigeria. Alliance for Reducing Microbial Resistance - Supporting sustainable access to and the development of antimicrobials to combat antimicrobial resistance. Concentric Policies - Preventing and controlling noncommunicable diseases through data-driven policymaking. Detailed introductions to the new projects will follow below. Context: The Charity Entrepreneurship Incubation Program July-August 2023. The July-August 2023 program focused on global health, including health security and policy-focused interventions. Our generous donors from the CE Seed Network have provided the new initiatives with $583,000 in seed funding to kickstart their interventions. In addition to the seed grants, as usual, we will [...] ---Outline:(02:13) Our new charities introduce themselves:(02:17) CLEAR SOLUTIONS(09:01) CONCENTRIC POLICIES--- First published: October 19th, 2023 Source: https://forum.effectivealtruism.org/posts/pKDob5yXh2djqdgcP/presenting-2023-incubated-charities-round-2-charity --- Narrated by TYPE III AUDIO.
“How has FTX’s collapse impacted EA?” by AnonymousEAForumAccount
Summary of Findings: It has been almost a year since FTX went bankrupt on November 11, 2022. Some of the ways that has impacted EA have been obvious, like the shuttering of the FTX Foundation, which had been expected to be one of the biggest EA funders. But recent discussions show that the broader impact of the FTX scandal on EA isn’t well understood, and that there is a desire for more empirical evidence on this topic. To that end, I have aggregated a variety of publicly available EA metrics to improve the evidence base. Unfortunately, these wide-ranging perspectives clearly show a broad-based and worrisome deterioration of EA activity in the aftermath of FTX. Previous attempts to quantify how FTX impacted EA have focused on surveys of members of the EA community, university students, the general public, and university group organizers. These surveys were conducted in the months following FTX's collapse. Their results have been [...] ---Outline:(00:04) Summary of Findings(04:22) Detailed Findings(04:25) Notes on Data Sources and Presentation(05:21) Donation Data(05:24) EA Funds: Donation and Donor Data(08:10) GWWC Pledges(08:43) EA Newsletter Subscriptions(09:41) Attitude Data(09:44) Survey of EA Community(12:20) Surveys of University Populations and General Public(13:47) Engagement Metrics(13:51) EffectiveAltruism.org Web Traffic(16:52) EA Forum(19:09) Google Search Interest(20:39) EA Global and EA Global X(22:51) Virtual Programs(23:47) 80k Metrics(24:22) University Group Accelerator Program(24:43) Additional data that would shed more light on FTX's impact and its causes(27:42) Conclusions(29:24) Appendix: Meta commentary about data collection and distributionThe original text contained 2 footnotes which were omitted from this narration. --- First published: October 17th, 2023 Source: https://forum.effectivealtruism.org/posts/vXzEnBcG7BipkRSrF/how-has-ftx-s-collapse-impacted-ea --- Narrated by TYPE III AUDIO.
“The Risks and Rewards of Prioritizing Animals of Uncertain Sentience” by Bob Fischer
This post is a part of Rethink Priorities’ Worldview Investigations Team's CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else. Summary: Expected value (EV) maximization is a common method for making decisions across different cause areas. The EV of an action is an average of the possible outcomes of that action, weighted by the probability of those outcomes occurring if the action is performed. When comparing actions that would benefit different species (e.g., malaria prevention for humans, cage-free campaigns for chickens, stunning operations in shrimp farms), calculating EV includes assessing the probability that the individuals it affects are sentient. Small invertebrates, like shrimp and insects, have relatively low probabilities [...] ---Outline:(04:12) 1. Introduction(07:13) 2. Results of EV maximization(10:33) 3. Dissatisfaction with results of EV maximization(14:57) 4. Risk(16:25) 5. Risk aversion as avoiding worst case outcomes(16:30) 5.1. Risk aversion about outcomes(18:22) 5.2. A formal model of risk aversion about outcomes(21:45) 6. Risk aversion as avoiding inefficacy(22:12) 6.1. Difference-making risk aversion(23:45) 6.2. The risk of wasting money on non-sentient creatures(27:10) 6.3. Is difference-making risk aversion rational?(29:10) 6.4. A formal model of difference-making risk aversion(32:51) 7. Risk as avoiding ambiguity(40:19) 8. What about chickens?(45:01) 9. Conclusions(47:19) AcknowledgmentsThe original text contained 34 footnotes which were omitted from this narration. --- First published: October 16th, 2023 Source: https://forum.effectivealtruism.org/posts/HMakzketADQq4bkvD/the-risks-and-rewards-of-prioritizing-animals-of-uncertain --- Narrated by TYPE III AUDIO.
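A toy version of the probability weighting described in the summary, to show how low sentience probabilities interact with scale; every number is invented and none of these are RP's actual moral weights or welfare ranges.

```python
# Toy EV comparison across species, discounting by probability of sentience
# and by a welfare weight relative to humans. All numbers invented.

interventions = {
    # name: (welfare units improved per $1000, P(sentience), welfare weight)
    "malaria prevention (humans)":   (1.0,      1.00, 1.00),
    "cage-free campaign (chickens)": (500.0,    0.90, 0.05),
    "stunning (farmed shrimp)":      (50_000.0, 0.30, 0.001),
}

for name, (units, p_sentient, weight) in interventions.items():
    ev = units * p_sentient * weight  # human-equivalent units per $1000
    print(f"{name}: EV = {ev:.1f}")
```

Even heavily discounted, the sheer number of animals affected can leave the animal interventions ahead in expectation, which is exactly the result that motivates the risk-averse alternatives the post goes on to examine.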
“Cash and FX management for EA organizations” by JueYan
I’m a grantmaker who previously spent a decade as a professional investor. I’ve recently helped some Open Phil, GiveWell, and Survival and Flourishing Fund grantees with their cash and foreign exchange (FX) management. In the EA community, we seem collectively quite bad at this. My aim with this post is to help others 80/20 their cash and FX management: for 20% of the effort (these 4 items below), we can capture 80% of the benefit of corporate best practices. This will often be a highly impactful use of your time: I think that for the median organization, implementing these suggestions will take 15 to 30 hours of staff time, but will be about as valuable as raising 5% more money. My suggestions are: (1) have more than one bank; (2) invest your cash in a government-guaranteed money market account, earning ~5%; (3) hold international currencies in the same proportions as your spending in those currencies; and (4) watch out for [...] ---Outline:(01:09) Have more than one bank(03:04) Invest your cash in a government guaranteed money market account, earning ~5%(06:59) Hold FX in the same proportions as your spending(08:58) Watch out for FX fees--- First published: October 14th, 2023 Source: https://forum.effectivealtruism.org/posts/akSr4YXHKZcK8EnQe/cash-and-fx-management-for-ea-organizations --- Narrated by TYPE III AUDIO.
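The third suggestion amounts to a simple rebalancing rule: size each currency holding to its share of planned spending. A minimal sketch with invented figures:

```python
# Toy FX rebalancing: hold each currency in proportion to budgeted spending.
# All figures invented for illustration.

spending_plan = {"USD": 600_000, "EUR": 300_000, "GBP": 100_000}
current_holdings = {"USD": 900_000, "EUR": 50_000, "GBP": 50_000}

total_spend = sum(spending_plan.values())
total_held = sum(current_holdings.values())

for ccy, spend in spending_plan.items():
    target = total_held * spend / total_spend
    delta = target - current_holdings[ccy]
    action = "buy" if delta > 0 else "sell"
    print(f"{ccy}: target {target:,.0f}, {action} {abs(delta):,.0f}")
```

This removes the bet on exchange-rate moves: whatever currencies appreciate, your holdings move in line with your obligations.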
“Chuck Feeney (1931–2023)” by finm
Philanthropist Chuck Feeney died on October 9, at 92. He founded one of the largest private charitable foundations in history, giving away his entire fortune within his lifetime. He was almost obsessively secretive in his giving, and set a standard of seriousness which inspired the Giving Pledge and many of its pledgers. Feeney was born at the tail end of the Great Depression into modest circumstances, raised by blue-collar Irish-American parents, and became the first of his family to attend university. The story of his fortune began after Feeney enrolled in the U.S. Air Force, soon deployed to Japan as a radio operator during the Korean War. The 1944 GI Bill of Rights had entitled returning GIs to a course of study after WWII, and the scheme was renewed for Korean War veterans. He used the allowance to attend Cornell, graduated in 1956, then travelled to Europe to indulge a newly developed [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: October 11th, 2023 Source: https://forum.effectivealtruism.org/posts/yS3qRbHzzWxjR2Ehp/chuck-feeney-1931-2023 --- Narrated by TYPE III AUDIO.
“Potentially actionable opportunity: eliminating the New World screwworm (flesh-devouring maggots that affect a billion animals each year)” by Forumite
In the latest episode of the 80,000 Hours podcast, Kevin Esvelt talks about the New World screwworm. He claims that there is potential for a massive animal welfare win if we can eliminate this parasite. This feels worthy of a discussion, so I'm starting this thread. Is this issue on anyone's radar? Has anyone looked into these claims? They seem potentially important/actionable, if true! Relevant section from the podcast: Kevin Esvelt: [...] the New World screwworm, which has the amazing scientific name of Cochliomyia hominivorax: “the man devourer.” But it doesn’t primarily eat humans; it feeds indiscriminately on warm-blooded things, so mammals and birds. It's a botfly that lays its eggs in open wounds, anything as small as a tick bite. And it's called the screwworm because the larvae are screw-shaped and they drill their way into living flesh, devouring it. And as they do, they cultivate bacteria that attract [...] --- First published: October 6th, 2023 Source: https://forum.effectivealtruism.org/posts/JWuPdMaPiNy7AD3AX/potentially-actionable-opportunity-eliminating-the-new-world --- Narrated by TYPE III AUDIO.
“Observations on the funding landscape of EA and AI safety” by Vilhelm Skoglund, Jona
Epistemic status: Hot takes for discussion. These observations are a side product of another strategy project, rather than a systematic and rigorous analysis of the funding landscape, and we may be missing important considerations. Observations are also non-exhaustive and mostly come from anecdotal data and EA Forum posts. We haven’t vetted the resources that we are citing; instead, we took numerous data points at face value and asked for feedback from >5 people who have more of an inside view than we do (see acknowledgments, but note that these people do not necessarily endorse all claims). We aim to indicate our certainty in the specific claims we are making. Context and summary: While researching for another project, we discovered that there have been some significant changes in the EA funding landscape this year. We found these changes interesting and surprising enough that we wanted to share them, to potentially help people update [...] ---Outline:(00:47) Context and summary(05:52) Some observations on the general EA funding landscape(05:57) More funding sources(08:00) More activity in the effective giving ecosystem(08:41) Changes in funding flows(14:01) Potential future EA funding(15:09) Some observations on the AI safety funding landscape(15:29) A brief overview of the broad AI safety funding landscape(18:39) There might be more funding gaps in AI safety this year(22:19) Potential future AI safety funding(25:07) Further questions(27:17) Further resources (non-exhaustive)(27:58) AcknowledgmentsThe original text contained 18 footnotes which were omitted from this narration. --- First published: October 2nd, 2023 Source: https://forum.effectivealtruism.org/posts/RueHqBuBKQBtSYkzp/observations-on-the-funding-landscape-of-ea-and-ai-safety --- Narrated by TYPE III AUDIO.
“Violence Before Agriculture” by John G. Halstead, Philip Thomson
This is a summary of a report on trends in violence since the dawn of humanity: from the hunter-gatherer period to the present day.[1] The full report is available at this Substack and as a preprint on SSRN.[2] Phil did 95% of the work on the report. Expert reviewers provided the following comments on our report. “Thomson and Halstead have provided an admirably thorough and fair assessment of this difficult and emotionally fraught empirical question. I don’t agree with all of their conclusions, but this will surely be the standard reference for this issue for years to come.” – Steven Pinker, Johnstone Family Professor in the Department of Psychology at Harvard University. “This work uses an impressively comprehensive survey of ethnographic and archeological data on military mortality in historically and archeologically known small-scale societies in an effort to pin down the scale of the killing in the pre-agricultural world. This will be a useful [...] The original text contained 6 footnotes which were omitted from this narration. --- First published: October 2nd, 2023 Source: https://forum.effectivealtruism.org/posts/QH2sECmmbLWbMXLhJ/violence-before-agriculture --- Narrated by TYPE III AUDIO.
[Linkpost] “Why I’m still going out to bat for the EA movement post FTX” by Gemma 🔶
This is a link post. From the looks of it, next week might be rough for people who care about Effective Altruism. As CEA acting CEO Ben West pointed out on the forum: “Sam Bankman-Fried's trial is scheduled to start October 3, 2023, and Michael Lewis's book about FTX comes out the same day. My hope and expectation is that neither will be focused on EA … Nonetheless, I think there's a decent chance that viewing the Forum, Twitter, or news media could become stressful for some people, and you may want to pre-emptively create a plan for engaging with that in a healthy way.” I really appreciated that comment since I didn’t know that and I’m glad I had time to mentally prepare. As someone who does outward-facing voluntary community building at my workplace and in London, I feel nervous. I’ve written this piece to manage that [...] ---Outline:(01:10) Thoughts on EA after 2022(06:48) Why I’m still publicly going out to bat for EA(11:16) Conclusion and some caveatsThe original text contained 3 footnotes which were omitted from this narration. The original text contained 2 images which were described by AI. --- First published: September 30th, 2023 Source: https://forum.effectivealtruism.org/posts/wGKGrCYNwjhhK4N69/why-i-m-still-going-out-to-bat-for-the-ea-movement-post-ftx Linkpost URL:https://bashingthearc.substack.com/p/why-im-still-going-out-to-bat-for --- Narrated by TYPE III AUDIO.
“Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams” by Open Philanthropy
It's been another busy year at Open Philanthropy; after nearly doubling the size of our team in 2022, we’ve added over 30 new team members so far in 2023. Now we’re launching a number of open applications for roles in all of our Global Catastrophic Risks (GCR) cause area teams (AI Governance and Policy, Technical AI Safety, Biosecurity & Pandemic Preparedness, GCR Cause Prioritization, and GCR Capacity Building[1]). The application, job descriptions, and general team information are available here. We’re hiring because our GCR teams feel pinched and really need more capacity. Program Officers in GCR areas think that growing their teams will lead them to make significantly more grants at or above our current bar. We’ve had to turn down potentially promising opportunities because we didn’t have enough time to investigate them[2]; on the flip side, we’re likely currently allocating tens of millions of dollars suboptimally in ways that more [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: September 29th, 2023 Source: https://forum.effectivealtruism.org/posts/bBefhAXpCFNswNr9m/open-philanthropy-is-hiring-for-multiple-roles-across-our --- Narrated by TYPE III AUDIO.
“Our tofu book has launched!! (Upvote on Amazon)” by George Stiffman
As some of you know, I’m working on growing the US market for Chinese tofus. I believe it could be a way to significantly reduce animal suffering, while shifting American dining culture. We just launched our book - Broken Cuisine - which introduces five of these tofus to Western home cooks. The goal is to spark curiosity and demand for these ingredients, so that we can convince retailers to carry them. Do you have five minutes right now to take an action? Download our FREE e-book on Amazon (if possible, TODAY). Skim through. A day or two later, leave an honest review. (Amazon easily detects spam.) Downloading and reviewing our book during launch week will convince the Amazon recommender algorithm to push our book, creating a virtuous cycle that will bring it to more people. If Broken Cuisine can crack the bestseller lists, I'm hopeful we can meaningfully start growing the market for these foods! Thank [...] --- First published: September 27th, 2023 Source: https://forum.effectivealtruism.org/posts/RTG4ZAy6bqJqXo7uW/our-tofu-book-has-launched-upvote-on-amazon --- Narrated by TYPE III AUDIO.
[Linkpost] “GWWC’s new community strategy” by Giving What We Can, GraceAdams
I’m extremely excited to announce the launch of our new community strategy, which aims to better connect our existing community and inspire more people to join us – I’ve been working on this for the past few months and am thrilled to finally share it! Over the almost two years I’ve been working with Giving What We Can, I’ve seen how important our community is. Whether it's a supportive network to ask questions, a friendly face in a new city, or a shared sense that none of us are alone in wanting to do what we can to make a better world, I’ve personally taken much-needed inspiration, motivation, and comfort from it more times than I can count, simply by hearing from you! I want to make it easier for you to do the same. We are launching some new spaces for you to meet, connect, and chat with each other. [...] ---Outline:(01:51) TL;DR: How do I get involved?(02:15) 1: GWWC Local Groups(04:14) What will GWWC Groups look like?(05:39) Locations for GWWC Groups(06:57) What if we already have a local group focused on creating community around effective giving?(07:28) Help run a GWWC Group(09:20) Can I run events about effective giving if I’m not part of a group?(10:19) One for the World is helping to support our groups(10:54) Evaluating the trial(11:29) FAQ:(11:33) Being a part of a local group(13:19) 2: Our new online community space(14:17) 3: Global Effective Giving Community Directory--- First published: September 26th, 2023 Source: https://forum.effectivealtruism.org/posts/KuNsnszBxENS3tWMm/gwwc-s-new-community-strategy Linkpost URL:https://givingwhatwecan.org/blog/community-strategy-2023 --- Narrated by TYPE III AUDIO.
“Net global welfare may be negative and declining” by kyle_fish
Overview: The total moral value of the world includes humans as well as all other beings of moral significance. As such, a picture of the overall trajectory of net global welfare that accounts for both human and non-human populations is important context for thinking about the future on any timescale, and about the potential impacts of transformative technologies. There's compelling evidence that life has gotten better for humans recently, but the same can’t be said for other animals, especially given the rise of industrial animal agriculture. How do these trends cash out, and what's the overall trajectory? I’ve used human and farmed animal population data, estimates of welfare ranges across different species, and estimates of the average wellbeing of different species to get a rough sense of recent trends in total global welfare. Importantly, this initial analysis is limited to humans and some of the most abundant farmed animals—it does not consider effects [...] ---Outline:(00:04) Overview(01:49) Notes and Assumptions(03:36) Analysis(03:39) Background and Definitions(04:57) Populations(05:57) Welfare Capacities(07:10) Total Welfare(09:29) Tentative Reflections(11:33) Future Directions(12:38) Acknowledgments--- First published: September 26th, 2023 Source: https://forum.effectivealtruism.org/posts/HDFxQwMwPp275J87r/net-global-welfare-may-be-negative-and-declining-1 --- Narrated by TYPE III AUDIO.
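The aggregation this episode describes has a simple structure: for each population of interest, multiply headcount by an estimated welfare range (capacity for wellbeing relative to humans) and by an estimated average wellbeing level, then sum across populations. Below is a minimal sketch of that structure in Python; the group names and all numbers are made-up placeholders for illustration, not the post's actual inputs.

```python
# Sketch of the net-welfare aggregation described above, using made-up
# illustrative numbers (NOT the post's actual inputs).
# welfare_range scales capacity relative to humans (human = 1.0);
# avg_wellbeing is in [-1, 1], where negative means net-negative lives.

groups = {
    # name: (population, welfare_range, avg_wellbeing) -- all hypothetical
    "humans":           (8.0e9,  1.0, +0.5),
    "broiler_chickens": (25.0e9, 0.3, -0.5),
    "farmed_fish":      (100e9,  0.1, -0.3),
}

def group_welfare(population, welfare_range, avg_wellbeing):
    """Welfare contribution of one group, in human-equivalent units."""
    return population * welfare_range * avg_wellbeing

total = sum(group_welfare(*params) for params in groups.values())
print(f"Net global welfare (illustrative): {total:+.2e} human-equivalents")
```

The structural point is that even a modestly negative average wellbeing, multiplied across tens of billions of farmed animals, can swamp the human term.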
“AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)” by Lizka, Andy Weber
Andy Weber was the U.S. Assistant Secretary of Defense for Nuclear, Chemical & Biological Defense Programs from 2009 to 2014. He's now a senior fellow at the Council on Strategic Risks. You might also know him from his appearance on the 80,000 Hours Podcast. Ask him anything![1] He’ll try to answer some questions on Friday, September 29 (afternoon, Eastern Time), and might get to some earlier. I (Lizka) am particularly excited that Andy can share his experience in nuclear (and other kinds of) threat reduction, given that it is Petrov Day today. Instructions and practical notes: Please post your questions as comments on this post. Posting questions earlier is better than later. If you have multiple questions, it might be better to post them separately. Feel free to upvote questions that others have posted, as it might help prioritize questions later. Other context and topics that might be especially interesting to talk about: Risks of “tactical” nuclear [...] The original text contained 1 footnote which was omitted from this narration. --- First published: September 26th, 2023 Source: https://forum.effectivealtruism.org/posts/KbTasufbtJwZYiJQ8/ama-andy-weber-u-s-assistant-secretary-of-defense-from-2009 --- Narrated by TYPE III AUDIO.
[Linkpost] “New page on animal welfare on Our World in Data” by EdMathieu
Our team at Our World in Data just launched a new page on animal welfare! There you can find a brand new Animal Welfare Data Explorer, 22 interactive charts, and 4 new articles: “How many animals get slaughtered every day?”; “How many animals are factory-farmed?”; “Do better cages or cage-free environments really improve the lives of hens?”; and “Adopting slower-growing breeds of chicken would reduce animal suffering significantly”. On Our World in Data, we cover many topics related to reducing human suffering: alleviating poverty, reducing child and maternal mortality, curing diseases, and ending hunger. But if we aim to reduce total suffering, society's ability to reduce this in other animals – which feel pain, too – also matters. This is especially true when we look at the numbers: every year, humans slaughter more than 80 billion land-based animals for farming alone. Most of these animals are raised in factory farms, often in painful and inhumane conditions. Estimates for fish [...] --- First published: September 25th, 2023 Source: https://forum.effectivealtruism.org/posts/pydpzj9sxcdSsAngr/new-page-on-animal-welfare-on-our-world-in-data Linkpost URL:https://ourworldindata.org/animal-welfare --- Narrated by TYPE III AUDIO.
“Two Years of Shrimp Welfare Project: Insights and Impact from our Explore Phase” by Aaron Boddy, Andres Jimenez
Summary: Shrimp Welfare Project launched in Sep 2021, via the Charity Entrepreneurship Incubation Program. We aim to reduce the suffering of billions of farmed shrimps. This post summarises our work to date and our plans going forward, and clarifies areas where we’re not focusing our attention. This post was written to coincide with the launch of our new (Shr)Impact page on our website. We have four broad workstreams: corporate engagement, farmer support, research, and raising issue salience. We believe our key achievements to date are: Corporate engagement: Our Humane Slaughter Initiative (commitments with large producers, such as MER Seafood and Seajoy, to purchase electrical stunners), ongoing conversations with UK retailers (including Marks & Spencer, who now have a published Decapod Welfare Policy), and contributing to the Aquaculture Stewardship Council's (ASC) Shrimp Welfare Technical Working Group. Humane Slaughter Initiative: This work in particular seems to be our most promising [...] ---Outline:(00:08) Summary(06:22) Our Work So Far(06:35) Corporate Engagement(07:54) Farmer Support(09:14) Research(11:21) Raising Issue Salience(12:40) What We're Doing Now(13:01) Humane Slaughter Initiative(15:23) Sustainable Shrimp Farmers of India (SSFI)(16:53) Shrimp Welfare Index(18:59) What We Aren't Doing(19:56) Wild-Caught(22:39) Specialisation work(25:55) “Shrimp-Inclusive” work(28:31) How You Can Help(28:34) Funding(29:04) ASC stakeholder consultation(30:15) Volunteer or work with us(31:11) Newsletter and social media(31:49) Start a “shrimp welfare project”--- First published: September 25th, 2023 Source: https://forum.effectivealtruism.org/posts/Qo3559TqP5BzoQyWX/two-years-of-shrimp-welfare-project-insights-and-impact-from --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
“Monetary and social incentives in longtermist careers” by Vaidehi Agarwalla
In this post I talk about several strong non-epistemic incentives and issues that can influence EA community members to pursue longtermist[1] career paths (and specifically x-risk reduction careers and AI safety[2]). For what it's worth, I personally am sympathetic to longtermism, and to people who want to create more incentives for longtermist careers, because of the high urgency some assign to AI Safety and the fact that longtermism is a relatively new field. I am currently running career support pilots to support early-career longtermists. However, I think it's important to think carefully about career choices, even when it's difficult. I'm worried that these incentives lead people to feel (unconscious & conscious) pressure to pursue (certain) longtermist career paths even if it may not be the right choice for them. I think it's good to be thoughtful about cause prioritization and career choices, especially for people earlier in their [...] ---Outline:(01:00) Incentives(01:04) Good pay and job security(02:04) Funding in an oligopoly(03:54) It's easier to socially defer(05:02) High status(06:49) Role models and founder's effects(07:46) Availability(09:15) Support(09:54) Transferable career capital(10:28) Conclusion and Suggestions The original text contained 18 footnotes which were omitted from this narration. --- First published: September 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/SWfwmqnCPid8PuTBo/monetary-and-social-incentives-in-longtermist-careers --- Narrated by TYPE III AUDIO.
“Debate series: should we push for a pause on the development of AI?” by Ben_West
In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed a letter calling on AI labs to pause the training of AI systems. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk. While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I’ve asked a group of participants to discuss and debate various aspects of the value of advocating for a pause on the development of AI on the EA Forum, in a format loosely inspired by Cato Unbound. On September 16, we will launch with three posts: David Manheim will share a post giving an overview of what a pause would include, how a pause would work, and some possible concrete steps [...] --- First published: September 8th, 2023 Source: https://forum.effectivealtruism.org/posts/6SvZPHAvhT5dtqefF/debate-series-should-we-push-for-a-pause-on-the-development --- Narrated by TYPE III AUDIO.
“Will MacAskill has stepped down as trustee of EV UK” by lincolnq
Earlier today, Will MacAskill stepped down from the board of Effective Ventures UK[1], having served as a trustee since its founding more than a decade ago. Will has been intending to step down for several months and announced his intention to do so earlier this year. Will had initially planned to remain on the board until we brought on additional trustees to replace him. However, given that our trustee recruitment process has taken longer than anticipated, and given also that Will continues to be recused from a significant proportion of board business[2], he felt that it didn’t make sense for him to stay on any longer. Will announced his resignation today. As a founding board member of EV UK (then called CEA), Will played a vital role in getting EV and its constituent projects off the ground, including co-founding Giving What We Can and 80,000 Hours. We are very grateful [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: September 21st, 2023 Source: https://forum.effectivealtruism.org/posts/mArisdpuQiFtTNWw3/will-macaskill-has-stepped-down-as-trustee-of-ev-uk --- Narrated by TYPE III AUDIO.
“The Case for AI Safety Advocacy to the Public” by Holly_Elmore
tl;dr: Advocacy to the public is a large and neglected opportunity to advance AI Safety. AI Safety as a field is unfamiliar with advocacy, and it has reservations, some founded and others not. A deeper understanding of the dynamics of social change reveals the promise of pursuing outside game strategies to complement the already strong inside game strategies. I support an indefinite global Pause on frontier AI and I explain why Pause AI is a good message for advocacy. Because I’m American and focused on US advocacy, I will mostly be drawing on examples from the US. Please bear in mind, though, that for Pause to be a true solution it will have to be global. The case for advocacy in general: Advocacy can work. I’ve encountered many EAs who are skeptical about the role of advocacy in social change. While it is difficult to prove causality in social phenomena like this [...] ---Outline:(00:46) The case for advocacy in general(00:50) Advocacy can work(02:32) We can now talk to the public about AI risk(03:22) What having the public's support gets us(05:17) Social change works best inside and outside the system(09:31) Pros and potential pitfalls of advocacy(09:35) Other pros of advocacy(11:11) Misconceptions about advocacy(13:23) Downsides to advocacy(14:26) Downside risks of continuing the status quo(15:59) The case for advocating AI Pause(16:31) Pros and pitfalls of AI Pause(21:09) How AI Pause advocacy can effect change(24:01) Audience questions--- First published: September 20th, 2023 Source: https://forum.effectivealtruism.org/posts/Y4SaFM5LfsZzbnymu/the-case-for-ai-safety-advocacy-to-the-public --- Narrated by TYPE III AUDIO.
“My life would be much harder without the Community Health Team. I think yours might too.” by Daystar Eld
I don't have much time to get into this, but I heard rumblings, saw a post, and wrote a comment, and now I'm making a post of my own because this information feels worth spreading now. I will not be going into much more detail, for reasons that should be obvious. For those that don't know me, I'm a therapist who has been working in the community for about 5 years now, and almost exclusively with the community since early 2020, though I've been scaling down my therapy practice to focus on other projects like therapist recruitment/supervision and mental health research. I also do mediation now and then, and in both situations, Community Health has been incredibly helpful. Another major part of my work is teaching at various rationality camps and workshops for high-school age students, as well as some for adults. To say that Community Health has been valuable [...] --- First published: September 19th, 2023 Source: https://forum.effectivealtruism.org/posts/78A2NHL3zBS3ESurp/my-life-would-be-much-harder-without-the-community-health --- Narrated by TYPE III AUDIO.
“Relationship between EA Community and AI safety” by Tom Barnes🔸
Personal opinion only. Inspired by filling out the Meta coordination forum survey. Epistemic status: Very uncertain, rough speculation. I’d be keen to see more public discussion on this question. One open question about the EA community is its relationship to AI safety (see e.g. MacAskill). I think the relationship between EA and AI safety (+ GHD & animal welfare) previously looked something like this (up until 2022ish):[1] With the growth of AI safety, I think the field now looks something like this: It's an open question whether the EA Community should further grow the AI safety field, or whether the EA Community should become a distinct field from AI safety. I think my preferred approach is something like: EA and AI safety grow into new fields rather than into each other: AI safety grows in AI/ML communities; EA grows in other specific causes, as well as an “EA-qua-EA” [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: September 18th, 2023 Source: https://forum.effectivealtruism.org/posts/opCxiPwxFcaaayyMB/relationship-between-ea-community-and-ai-safety --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
[Linkpost] “Map of the biosecurity landscape (list of GCBR-relevant orgs for newcomers)” by Max Görlitz
When talking to newcomers to the field of biosecurity, I often felt annoyed that there wasn't a single introductory resource I could point them to that gives an overview of all the biosecurity-relevant organizations, upskilling opportunities, and funders. With the help of a lot of contributors, I started this Google doc to provide such a resource. I'm sure that we missed some relevant organizations, and it'd be lovely if some people were to comment on the doc with additional information! I'll copy the current version below, but please check out the link to the doc if you want to comment and see the most up-to-date version in the future! Contributors: Max Görlitz, Simon Grimm, Andreas Prenner, Jasper Götting, Anemone Franz, Eva Siegmann, & more. Introduction: I would like to see something like aisafety.world for biosecurity. There already exists the Map of Biosecurity Interventions, but I want one for organizations! This is a work-in-progress attempt to [...] ---Outline:(01:25) Policy(01:28) Think tanks(01:31) Europe(04:30) USA(07:11) (Inter)governmental actors(07:15) USA(08:25) Europe(09:21) International actors(11:49) Technical R&D(12:26) GCBR-focused non-profits(14:06) Relevant for-profits(14:27) Academic labs(14:31) Upskilling (fellowships)(14:35) Focused on newcomers(15:44) E-learning resources(16:42) Focused on people with some experience(20:32) Funders of work on biorisk mitigation(20:36) Major funders explicitly focused on catastrophic biorisk(22:17) Smaller funders with some part of their portfolio dedicated to catastrophic biorisk(23:33) Funders that are focused on pandemic preparedness more broadly--- First published: September 17th, 2023 Source: https://forum.effectivealtruism.org/posts/28iXeSY75aLsqAagg/map-of-the-biosecurity-landscape-list-of-gcbr-relevant-orgs Linkpost URL:https://bit.ly/biosecurity-map --- Narrated by TYPE III AUDIO.
“AI Pause Will Likely Backfire” by nora
Should we lobby governments to impose a moratorium on AI research? Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it's clear that the benefits of doing so would significantly outweigh the costs.[1] In this essay, I’ll argue an AI pause would increase the risk of catastrophically bad outcomes, in at least three different ways:
- Reducing the quality of AI alignment research by forcing researchers to exclusively test ideas on models like GPT-4 or weaker.
- Increasing the chance of a “fast takeoff” in which one or a handful of AIs rapidly and discontinuously become more capable, concentrating immense power in their hands.
- Pushing capabilities research underground, and to countries with looser regulations and safety requirements.
Along the way, I’ll introduce an argument for optimism [...] ---Outline:(01:09) Feedback loops are at the core of alignment(01:54) Alignment and robustness are often in tension(03:21) Alignment is doing pretty well(04:30) Alignment research was pretty bad during the last “pause”(06:31) Fast takeoff has a really bad feedback loop(08:43) Slow takeoff is the default (so don’t mess it up with a pause)(09:25) Alignment optimism: AIs are white boxes(09:49) Human and animal alignment is black box(11:41) Status quo AI alignment methods are white box(13:25) White box alignment in nature(15:35) Realistic AI pauses would be counterproductive(15:56) Realistic pauses are not international(18:19) Realistic pauses don’t include hardware(19:23) Hardware overhang is likely(20:58) Likely consequences of a realistic pause The original text contained 8 footnotes which were omitted from this narration. --- First published: September 16th, 2023 Source: https://forum.effectivealtruism.org/posts/JYEAL8g7ArqGoTaX6/ai-pause-will-likely-backfire --- Narrated by TYPE III AUDIO.
“Mistakes, flukes, and good calls I made in my multiple careers” by Catherine Low🔸
I’m Catherine, and I'm one of the Community Liaisons working in CEA's Community Health and Special Projects Team. This is a personal post about my career. I’m somewhat to the right of the main age peak of the EA community 😬. So I’ve had a lot of time to make mistakes (read: sub-optimal choices) in my career. It has been a long and odd road from my childhood/teenage dream jobs (train driver, Department of Conservation ranger, vet, and then physicist) to where I am now. [Image captions: the train I planned to drive; the endangered parrot I planned to help make less endangered.] Before I got into EA. Fluke 1: Born into immense privilege by global standards (and reasonable privilege by rich country standards). Mistake 1: Not doing something with that privilege. I wish someone (maybe me?) sat me down and said (maybe a more polite version of) “You know which part of the bell curve you’re [...] ---Outline:(00:47) Before I got into EA(00:51) Fluke 1: Born into immense privilege by global standards (and reasonable privilege by rich country standards)(01:00) Mistake 1: Not doing something with that privilege. I wish someone (maybe me?) sat me down and said (maybe a more polite version of) “You know which part of the bell curve you’re on. Try doing something more useful for the world!”.(01:44) Good call 1: I talked to other students in the research group before choosing a PhD supervisor(02:08) Mistake 2: Mistaking my interest in the ideas for interest in the day-to-day work(02:28) Mistake 3: Not giving up sooner(03:13) Mistake 4: Not exploring more options (even though they were scary)(03:29) Good Call 2: Got really good at a valuable(ish) thing, and then used that as leverage to branch out a little(04:12) After learning about EA(04:21) Good call 3: I didn’t let my age put me off changing careers(04:59) Good call 4: I reliably did stuff that seemed to need doing, even if they were boring, low status, or unpaid. I tried to be of service to others.(06:09) Mistake 5: Thought too narrowly about my absolute advantage and didn't consider where my largest impact was.(07:44) Good Call 5: I took a scary step - quitting my secure job without another guaranteed paid opportunity lined up (but with runway and a plan Z).(08:19) Mistake 6: Not giving up sooner (again)(08:44) Mistake 7: Over-updating on a few rejections(09:55) Fluke 2: Stumbling into a role at the Groups team at CEA(10:22) Good call 6 (the best call): Doing what people I trusted thought would be most valuable The original text contained 2 footnotes which were omitted from this narration. --- First published: September 15th, 2023 Source: https://forum.effectivealtruism.org/posts/w59DkYCJYqaYDpEqG/mistakes-flukes-and-good-calls-i-made-in-my-multiple-careers --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
“Shrimp: The animals most commonly used and killed for food production” by Rethink Priorities, Daniela R. Waldhorn, Elisa Autric
Citation: Romero Waldhorn, D., & Autric, E. (2022, December 21). Shrimp: The animals most commonly used and killed for food production. https://doi.org/10.31219/osf.io/b8n3t Summary: Decapod crustaceans or, for short, decapods[1] (e.g., crabs, shrimp, or crayfish) represent a major food source for humans across the globe. If these animals are sentient, the growing decapod production industry likely poses serious welfare concerns for these animals. Information about the number of decapods used for food is needed to better assess the scale of this problem and the expected value of helping these animals. In this work we estimated the number of shrimp and prawns farmed and killed in a year, given that they seem to be the vast majority of decapods used in the food system. We estimated that around 440 billion (90% subjective confidence interval [SCI]: 300 billion - 620 billion) farmed shrimp are killed per year, which vastly exceeds the figure of the most numerous farmed vertebrates used [...] ---Outline:(07:47) Methods(08:56) Farmed shrimp estimates(11:50) Wild-caught shrimp estimates(16:36) Total shrimp estimates(16:51) Results(16:54) Farmed shrimp(22:45) Wild shrimp(27:19) Total shrimp estimates(29:39) Discussion(38:58) Conclusion(40:10) Acknowledgments(41:12) References(57:17) Appendix(57:20) Farmed shrimp estimates(01:07:05) Wild-caught shrimp estimates The original text contained 29 footnotes which were omitted from this narration. --- First published: September 10th, 2023 Source: https://forum.effectivealtruism.org/posts/Fhoq7tP9LYPqaJDxx/shrimp-the-animals-most-commonly-used-and-killed-for-food --- Narrated by TYPE III AUDIO.
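A note on how interval estimates like “440 billion (90% SCI: 300 billion - 620 billion)” are commonly produced: model each uncertain input as a distribution fitted to a 90% interval (lognormals are a frequent choice for positive quantities) and propagate through the calculation by Monte Carlo. The sketch below illustrates that approach for a production-tonnage-divided-by-mean-weight calculation; the input intervals are hypothetical placeholders, not the report's figures, and the report's actual model includes further adjustments.

```python
# Monte Carlo propagation of 90% subjective confidence intervals (SCIs).
# All input intervals below are HYPOTHETICAL placeholders, not the
# report's data; the report's real model has additional adjustments.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

def lognormal_from_90ci(lo, hi, size):
    """Sample a lognormal whose 5th/95th percentiles match (lo, hi)."""
    z95 = 1.6449  # 95th-percentile z-score of a standard normal
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * z95)
    return rng.lognormal(mu, sigma, size)

# Hypothetical inputs: annual production and mean individual weight
tonnes = lognormal_from_90ci(4e6, 7e6, N)      # tonnes produced per year
grams_each = lognormal_from_90ci(10, 25, N)    # grams per individual

individuals = tonnes * 1e6 / grams_each        # convert tonnes to grams

lo, med, hi = np.percentile(individuals, [5, 50, 95])
print(f"median ~{med:.2e}/year (90% interval: {lo:.2e} to {hi:.2e})")
```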
“An Incomplete List of Things I Think EAs Probably Shouldn’t Do” by Rockwell
There has already been ample discussion of what norms and taboos should exist in the EA community, especially over the past ten months. Below, I'm sharing an incomplete list of actions and dynamics I would strongly encourage EAs and EA organizations to either strictly avoid or treat as warranting a serious—and possibly ongoing—risk analysis. I believe there is a reasonable risk should EAs:
- Live with coworkers, especially when there is a power differential and especially when there is a direct report relationship
- Date coworkers, especially when there is a power differential and especially when there is a direct report relationship
- Promote[1] drug use among coworkers, including legal drugs, and including alcohol and stimulants
- Live with their funders/grantees, especially when substantial conflict-of-interest mechanisms are not active
- Date their funders/grantees, especially when substantial conflict-of-interest mechanisms are not active
- Date the partner of their funder/grantee, especially when substantial conflict-of-interest mechanisms are not active
- Retain someone as a full-time contractor or grant recipient [...]
The original text contained 1 footnote which was omitted from this narration. --- First published: September 8th, 2023 Source: https://forum.effectivealtruism.org/posts/MvwctfyZ9NrhPzyPj/an-incomplete-list-of-things-i-think-eas-probably-shouldn-t --- Narrated by TYPE III AUDIO.
“Sharing Information About Nonlinear” by Ben Pace
Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits. I think standard update rules suggest not that you ignore the information, but that you consider how bad you would expect the information to be if I had selected for the worst credible info I could share, and then update based on how much worse (or better) it is than what you expected I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think [...] ---Outline:(10:09) A High-Level Overview of The Employees’ Experience with Nonlinear(16:25) An assortment of reported experiences(40:59) Conversation with Nonlinear(48:47) My thoughts on the ethics and my takeaways The original text contained 10 footnotes which were omitted from this narration. --- First published: September 7th, 2023 Source: https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear --- Narrated by TYPE III AUDIO.
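For readers unfamiliar with the principle invoked in that epistemic status: conservation of expected evidence says that, averaged over the possible things you might learn, your posterior must equal your prior. In standard notation (our formulation, not quoted from the post):

```latex
\mathbb{E}\left[P(H \mid E)\right] \;=\; \sum_{e} P(E = e)\, P(H \mid E = e) \;=\; P(H)
```

Applied to the post's framing: knowing the evidence was selected to be negative, you should update only on how much worse (or better) it is than selection alone would predict.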
“Nick Beckstead is leaving the Effective Ventures boards” by Eli Rose, lincolnq
On 23rd August, Nick Beckstead stepped down from the boards of Effective Ventures UK and Effective Ventures US. For context, EV UK and EV US host and fiscally sponsor several (mostly EA-related) projects, such as CEA, 80,000 Hours and various others (see more here). Since November 2022, Nick has been recused from all board matters related to the collapse of FTX. Over time, it became clear that Nick’s recusal made it difficult for him to add sufficient value to EV and its projects for it to be worth him remaining on the boards[1]. Nick and the other trustees felt that this was sufficient reason for Nick to step down. Nick wanted to share the following: Ever since the collapse of FTX, I've been recused from a substantial fraction of business on both boards. This has made it hard to contribute as much as I would like to as a board member, during a time [...] The original text contained 1 footnote which was omitted from this narration. --- First published: September 6th, 2023 Source: https://forum.effectivealtruism.org/posts/Defu3jkejb7pmLjeN/nick-beckstead-is-leaving-the-effective-ventures-boards --- Narrated by TYPE III AUDIO.
[Linkpost] “What I would do if I wasn’t at ARC Evals” by Lawrence Chan
In which: I list 9 projects that I would work on if I wasn’t busy working on safety standards at ARC Evals, and explain why they might be good to work on. Epistemic status: I’m prioritizing getting this out fast as opposed to writing it carefully. I’ve thought for at least a few hours and talked to a few people I trust about each of the following projects, but I haven’t done that much digging into each of these, and it’s likely that I’m wrong about many material facts. I also make little claim to the novelty of the projects. I’d recommend looking into these yourself before committing to doing them. (Total time spent writing or editing this post: ~8 hours.) Standard disclaimer: I’m writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of ARC/FAR/LTFF/Lightspeed or any other org [...] ---Outline:(02:41) Relevant beliefs I have(04:52) Technical AI Safety Research(05:38) Ambitious mechanistic interpretability(08:34) Late stage project management and paper writing(09:53) Creating concrete projects and research agendas(11:32) Grantmaking(12:37) Working on Open Philanthropy’s Funding Bottlenecks(14:24) Working on the other EA funders’ funding bottlenecks(16:02) Chairing the Long-Term Future Fund(17:28) Community Building(17:54) Onboarding senior academics and research engineers(20:05) Extending the young EA/AI researcher mentorship pipeline(22:31) Writing blog posts or takes in general The original text contained 9 footnotes which were omitted from this narration. --- First published: September 6th, 2023 Source: https://forum.effectivealtruism.org/posts/zcHdehWJzDpfxJpmf/what-i-would-do-if-i-wasn-t-at-arc-evals Linkpost URL:https://www.lesswrong.com/posts/6FkWnktH3mjMAxdRT/what-i-would-do-if-i-wasn-t-at-arc-evals --- Narrated by TYPE III AUDIO.
“Want to make a difference on policy and governance? Become an expert in something specific and boring” by ASB
I sometimes get a vibe that many people trying to ambitiously do good in the world (including EAs) are misguided about what doing successful policy/governance work looks like. An exaggerated caricature would be activities like: dreaming up novel UN structures, spending time in abstract game theory and ‘strategy spirals[1]’, and sweeping analysis of historical case studies. Instead, people that want to make the world safer with policy/governance should become experts on very specific and boring topics. One of the most successful people I’ve met in biosecurity got their start by getting really good at analyzing obscure government budgets. Here are some crowdsourced example areas I would love to see more people become experts in: Legal liability - obviously relevant to biosecurity and AI safety, and I’m especially interested in how liability law would handle spreading infohazards (e.g. if a bio lab publishes a virus sequence that is then used for bioterrorism, or if [...] The original text contained 4 footnotes which were omitted from this narration. --- First published: August 31st, 2023 Source: https://forum.effectivealtruism.org/posts/J7nmbqcWncPMZFhGC/want-to-make-a-difference-on-policy-and-governance-become-an --- Narrated by TYPE III AUDIO.
“The Lives We Can Save” by Omnizoid
I work as a Resident Assistant at my college. Last year, only a few weeks after I started, I was called at night to come help with a drunk student. I didn’t actually help very much, and probably didn’t have to be there. I didn’t even have to write up the report at the end. At one point I went outside to let medical services into the building, but mostly I just stood in a hallway. The person in question was so drunk they couldn’t move. They had puked in the bathroom and were lying in the hallway crying. They could barely talk. When Campus Safety arrived, they knelt down next to this person and helped them drink water, while asking the normal slew of questions about the person’s evening. They asked this person, whose name I can’t even remember, why they had been drinking so much. They said, in between hiccups [...] --- First published: September 3rd, 2023 Source: https://forum.effectivealtruism.org/posts/buFyakASucJnrZj7X/the-lives-we-can-save --- Narrated by TYPE III AUDIO.
“Should I patent my cultivated meat method?” by Michelle Hauser
Congrats on having invented something exciting! Usually, the best way to get innovative new technology into the hands of beneficiaries quickly is to get a for-profit company to invest with a promise of making money. This can happen via licensing a patent to an existing manufacturer, or creating a whole startup company and raising venture capital, etc. One of the things such investors want to see is a 'moat': something that this company can do that no other company can easily copy. A patent/exclusive license is a good way to create a moat. There are some domains like software where simply publishing 'open source' ideas causes those ideas to get used, but for most domains including manufacturing, my default expectation is that new tech is not used unless someone can make money off it. Pharma is a great example - there are tons of vaccines and niche treatments that we [...] --- First published: August 31st, 2023 Source: https://forum.effectivealtruism.org/posts/X2w4uqjDuGPJFaHPp/should-i-patent-my-cultivated-meat-method --- Narrated by TYPE III AUDIO.
“Learning from our mistakes: how HLI plans to improve” by PeterBrietbart, MichaelPlant
Hi folks, in this post we’d like to describe our views as the Chair (Peter) and Director (Michael) of HLI in light of the recent conversations around HLI’s work. The purpose of this post is to reflect on HLI’s work and its role within the EA community in response to community member feedback, highlight what we’re doing about it, and engage in further constructive dialogue on how HLI can improve moving forward. HLI hasn’t always got things right. Indeed, we think there have been some noteworthy errors (quick note: our goal here isn’t to delve into details but to highlight broad lessons learnt, so this isn’t an exhaustive list): Most importantly, we were overconfident and defensive in communication, particularly around our 2022 giving season post. We described our recommendation for StrongMinds using language that was too strong: “We’re now in a position to confidently recommend StrongMinds as the most effective way we [...] --- First published: September 1st, 2023 Source: https://forum.effectivealtruism.org/posts/4edCygGHya4rGx6xa/learning-from-our-mistakes-how-hli-plans-to-improve --- Narrated by TYPE III AUDIO.
“In defence of epistemic modesty” by Gregory Lewis
This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what one finds the most persuasive view on an issue, hewing instead to an idealized consensus of experts. I start by better pinning down exactly what is meant by ‘epistemic modesty’, go on to offer a variety of reasons that motivate it, and reply to some common objections. Along the way, I show common traps people being inappropriately modest fall into. I conclude that modesty is a superior epistemic strategy, and ought to be more widely used - particularly in the EA/rationalist communities. [gdoc] Provocation: I argue for this: In virtually all cases, the credence you hold for any given belief should be dominated by the balance of credences held by your epistemic peers and superiors. One's own convictions should weigh no more [...] ---Outline:(00:45) Provocation(01:05) Introductions and clarifications(01:10) A favourable motivating case(03:08) Weaker and stronger forms of modesty(04:25) Motivations for more modesty(04:42) The symmetry case(06:53) Compressed sensing of (and not double-counting) the object level(09:00) Repeated measures, brains as credence sensors, and the wisdom of crowds(11:09) Deferring to better brains(12:38) Inference to the ideal epistemic observer(15:26) Excursus: Against common justifications for immodesty(16:21) Being ‘well informed’ (or even true expertise) is not enough(18:02) Common knowledge ‘silver bullet arguments’(19:47) Debunking the expert class (but not you)(22:51) Private evidence and pet arguments(24:52) Objections(25:04) In theory(25:08) There's no pure ‘outside view’[12](25:56) Immodestly modest?(28:55) In practice(29:25) Trivial (and less trivial) non-use cases(31:40) In theory, the world should be mad(34:45) Empirically, the world is mad(37:22) Expert groups are seldom in reflective equilibrium(42:05) Somewhat satisfying Shulman(42:55) Practical challenges to modesty(44:21) Community benefits to immodesty(47:25) Conclusion: a paean, and a plea(47:53) Rationalist/EA exceptionalism(50:46) To discover, not summarise(53:00) Paradoxically pathological modesty(54:28) Coda(55:02) Acknowledgements--- First published: October 29th, 2017 Source: https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty --- Narrated by TYPE III AUDIO.
“Progress report on CEA’s search for a new CEO” by MaxDalton
I[1] wanted to give an update on the Centre for Effective Altruism (CEA)’s search for a new CEO. We (Claire Zabel, Max Dalton, and Michelle Hutchinson) were appointed by the Effective Ventures boards to lead this search and make a recommendation to the boards. The committee is advised by James Snowden, Caitlin Elizondo, and one experienced executive working outside EA. We previously announced the search and asked for community input in this post. Note: we set out searching for an Executive Director, and during the process have changed the role title to CEO because it was more legible to candidates not familiar with CEA or EV. The role scope remains unchanged. In summary, we received over 400 nominations, reached out to over 150 people, spoke to about 60, and received over 25 applications. We’re still considering around 15 candidates, and are currently more deeply assessing [...] ---Outline:(00:05) Progress report on CEA’s search for a new CEO(01:11) Process(09:07) Commentary/reflections The original text contained 1 footnote which was omitted from this narration. --- First published: August 31st, 2023 Source: https://forum.effectivealtruism.org/posts/Bg6qxLGhsn7pQzHGX/progress-report-on-cea-s-search-for-a-new-ceo --- Narrated by TYPE III AUDIO.
[Classic post] “Integrity for consequentialists” by Paul_Christiano
For most people I don't think it's important to have a really precise definition of integrity. But if you really want to go all-in on consequentialis… --- First published: November 14th, 2016 Source: https://forum.effectivealtruism.org/posts/CfcvPBY9hdsenMHCr/integrity-for-consequentialists-1 --- Narrated by TYPE III AUDIO.
“How much do EAGs cost (and why)?” by Eli_Nathan
TL;DR: EAGs from 2022–2023 each cost around $2M–$3.6M USD at around $1.5k–2.5k per person per event. These events typically cost a lot because the fixed costs of running professional events in the US and UK are surprisingly high. We’re aiming to get these event costs down to ≤$2M each moving forwards. We’ve already started to cut back on spending and will continue to do so, whilst also raising default ticket prices to recoup more of our costs. Throughout this post, for legibility I discuss the direct costs behind running our events and don’t include other indirect costs like staff salaries, software, and our office space (which would increase the costs below by ~25%). Introduction: This year (2023), EAG Bay Area cost $2M and EAG London cost £2M (including travel grant costs).[1] Our most expensive event ever was EAG SF 2022, which cost $3.6M. This gives us a range of about $1.5–2.5k per person per event.[2] In-person EAGx [...] ---Outline:(03:07) Venue and catering(06:07) Other costs(07:08) What are we doing to save money? The original text contained 9 footnotes which were omitted from this narration. --- First published: August 26th, 2023 Source: https://forum.effectivealtruism.org/posts/n5GJEP3tMrzdfYPGG/how-much-do-eags-cost-and-why --- Narrated by TYPE III AUDIO.
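A quick sanity check on the stated per-person range (our arithmetic; the attendee count is an assumption, since the excerpt does not state attendance figures):

```latex
\frac{\$3{,}600{,}000\ \text{(EAG SF 2022)}}{\approx 1{,}500\ \text{attendees}} \;\approx\; \$2{,}400\ \text{per person}
```

which lands near the top of the quoted $1.5k–2.5k per person range.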
“Empowering Numbers: FEM since 2021” by hhart, Anna_Christina
∼250,000 new contraceptive users, ∼20 million new listeners, and up to ∼60x cost effectiveness. TL;DR: In 2021, FEM launched our pilot: a 3-month family planning radio campaign, educating listeners about maternal health and effective contraception. Our ads and shows played up to 860 times in Kano state in northern Nigeria, reaching ~5.6 million. An independent survey showed that in 11 months, contraceptive use increased by ~75% among all women in the state. Since then, we launched a 9-month campaign, aired proofs of concept in three new locations and re-aired in 8 more states, reaching an estimated 20 million new listeners. We were recommended by Giving What We Can and Founders Pledge, who assess our campaigns as ~22 times as effective as cash transfers. We developed new technologies that will allow us to customise health advice to listeners and run a randomised controlled trial. We continue to build organisational capacity to scale across [...] ---Outline:(00:14) TL;DR:(01:21) Promising results from our pilot (Oct - Dec 2021)(03:58) New technology to evaluate our impact (Jan - Dec 2022)(05:40) Launching our first 9 month campaign (May 2022 - Feb 2023)(06:49) Scaling our work: three new proofs of concept (Oct - Dec 2022)(08:13) Cost-effectiveness estimates(09:47) Meet the FEM team(12:57) Key learnings(14:47) Our biggest mistakes(16:00) Our biggest successes(16:55) What we’re doing now(18:17) Our plan for the years ahead(20:30) Support our work--- First published: August 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/Xd9ZZuPCKAvKpzvdB/empowering-numbers-fem-since-2021 --- Narrated by TYPE III AUDIO.
“Impact obsession: Feeling like you never do enough good” by David_Althaus, Ewelina_Tur
Summary: Impact obsession is a potentially unhelpful way of relating to doing good which we’ve observed among effective altruists, including ourselves. (More) What do we mean by impact obsession? One can distinguish unhealthy and healthy forms of impact obsession. (More) Common characteristics include an overwhelming desire for doing the most good one can do, basing one’s self-worth on one’s own impact, judging it by increasingly demanding standards (“impact treadmill”), overexerting oneself, neglecting or subjugating non-altruistic interests, and anxiety about having no or negative impact. (More) Is impact obsession good or bad? Many aspects of impact obsession are reasonable and even desirable. (More) Others can have detrimental consequences like depression, anxiety, guilt, exhaustion, burnout, and disillusionment. (More) What to do about (unhealthy) impact obsession? Besides useful standard (mental) health advice, potentially helpful strategies involve, for example: reflecting on our relationship with and motives for having impact, integrating conflicting desires, shifting from avoidance to approach motivation, cultivating additional sources of meaning and self-worth, reducing resistance [...] ---Outline:(05:31) Why we wrote this post(06:21) What do we mean by impact obsession?(06:25) Healthy vs. unhealthy impact obsession(08:25) Common characteristics(08:56) Overwhelming desire for maximizing positive impact(09:41) Self-worth and identity are linked to impact(10:09) Personally demanding (or unreasonable) standards(10:39) Excessive comparisons and the impact treadmill(11:50) Pushing oneself too hard and neglecting non-altruistic interests(12:34) Black-and-white thinking(13:07) Frequent worries about (prioritization) mistakes(13:40) Obsessive thoughts about impact(14:11) Impact obsession, clinical perfectionism, and scrupulosity(15:57) Benefits and costs(16:00) Isn’t impact obsession reasonable?(17:43) Benefits(18:54) Potential negative consequences of unhealthy impact obsession(19:11) Depression, feeling worthless or unable to contribute(21:05) Anxiety and guilt(22:47) Burnout and (chronic) fatigue(25:09) Reduced curiosity, excitement, and interests(27:02) Less likely to enter flow states and reduced creativity(27:58) Competitive comparisons, shame, and isolation(29:13) Other risks(29:51) What might help(31:11) Reflect on your relationship with having impact and your conflicting motivations(32:49) Strengthen additional sources of meaning and self-worth(35:48) What about value drift?(37:25) Approach motivation vs. avoidance motivation(38:51) Obligation vs. exciting opportunity(39:20) Focusing on the positive(41:09) Focus less on yourself, compare yourself less(42:11) You want to be the least impactful person in the world(43:57) Leaning into absurdity(45:22) Accepting what we cannot change(46:31) Beware self-improvement perfectionism(46:57) Welcoming and exploring negative emotions with acceptance and curiosity(50:48) Fear and coming to terms with the possibility of having no impact(53:48) Sadness, despair, and guilt(55:22) Feeling like a failure or inadequate(58:30) Replacing self-criticism with self-compassion(01:00:32) Fully commit to rest(01:00:51) Keeping yourself busy with semi-useful tasks(01:02:08) Not committing to rest(01:03:56) It’s fine if you need more rest than others(01:04:25) Other related relevant resources(01:04:58) Acknowledgements The original text contained 29 footnotes which were omitted from this narration. 
--- First published: August 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/sBJLPeYdybSCiGpGh/impact-obsession-feeling-like-you-never-do-enough-good --- Narrated by TYPE III AUDIO.
“Select examples of adverse selection in longtermist grantmaking” by Linch
Sometimes, there is a reason other grantmakers aren't funding a fairly well-known EA (-adjacent) project. This post is written in a professional capacity, as a volunteer/sometimes contractor for EA Funds’ Long-Term Future Fund (LTFF), which is a fiscally sponsored project of Effective Ventures Foundation (UK) and Effective Ventures Foundation USA Inc. I am not and have never been an employee at either Effective Ventures entity. Opinions are my own and do not necessarily represent that of any of my employers or of either Effective Ventures entity. I originally wanted to make this post a personal shortform, but Caleb Parikh encouraged me to make it a top-level post instead. There is an increasing number of new grantmakers popping up, and also some fairly rich donors in longtermist EA that are thinking of playing a more active role in their own giving (instead of deferring). I am broadly excited about the diversification of [...] ---Outline:(02:13) Reasons against broadly sharing reasons for rejection(03:16) Select examples(06:52) Some tradeoffs and other considerations The original text contained 1 footnote which was omitted from this narration. --- First published: August 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/sWMwGNgpzPn7X9oSk/select-examples-of-adverse-selection-in-longtermist --- Narrated by TYPE III AUDIO.
“An Elephant in the Community Building room.” by Kaleem
These are my own views, and not those of my employer, EVOps, or of CEA, who I have contracted for in the past and am currently contracting for now. This was meant to be a strategy fortnight contribution, but it's now a super delayed/unofficial, and under-written strategy fortnight contribution.[1] Before you read this: This is pretty emotionally raw, so please 1) don’t update too much on it if you think I’m just being dramatic, and 2) I might come back and endorse or delete this at some point. I’ve put off writing this for a long time, because I know that some of the conclusions or implications might be hurtful or cause me to become even more unpopular than I already feel I am - as a result, I’ve left it really brief, but I’m willing to make it more thorough if I get the sense that people think it’d be [...] ---Outline:(00:21) Before you read this:(01:20) Summary:(01:40) Introduction(02:07) Global EA(03:19) Narrow EA(04:58) Reasons I think this is important and should be addressed:(07:14) Reasons not to address this:(07:48) Reasons I might be wrong:(08:16) Other notes:(09:08) About me:(09:36) Thanks: The original text contained 3 footnotes which were omitted from this narration. --- First published: August 21st, 2023 Source: https://forum.effectivealtruism.org/posts/uxrAdXdYpXodrggto/an-elephant-in-the-community-building-room --- Narrated by TYPE III AUDIO.
“Beware surprising and suspicious convergence” by Gregory Lewis
Imagine this: Oliver: … Thus we see that donating to the opera is the best way of promoting the arts. Eleanor: Okay, but I’m principally interested in improving human welfare. Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too. Generally, what is best for one thing is usually not the best for something else, and thus Oliver's claim that donations to opera are best for the arts and human welfare is surprising. We may suspect bias: that Oliver's claim that the opera is best for human welfare is primarily motivated by his enthusiasm for opera and desire to find reasons in favour, rather than a cooler, more objective search for what is really best for human welfare. The rest of this essay tries to better establish what is going on [...] ---Outline:(01:31) Varieties of convergence(07:00) Proxy measures and prediction(08:20) Pragmatic defeat and Poor Propagation(13:23) EA examples(18:18) Conclusion--- First published: January 24th, 2016 Source: https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
“Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat” by Jacob_Peacock
Plant-based meats, like the Beyond Sausage or Impossible Burger, and cultivated meats have become a source of optimism for reducing animal-based meat usage.
- Public health, environmental, and animal welfare advocates aim to mitigate the myriad harms of meat usage.
- The price, taste, and convenience (PTC) hypothesis posits that if plant-based meat is competitive with animal-based meat on these three criteria, the large majority of current consumers would replace animal-based meat with plant-based meat.
- The PTC hypothesis rests on the premise that PTC primarily drive food choice. The PTC hypothesis and premise are both likely false.
The original text contained 9 footnotes which were omitted from this narration. --- First published: August 15th, 2023 Source: https://forum.effectivealtruism.org/posts/iukeBPYNhKcddfFki/price-taste-and-convenience-competitive-plant-based-meat --- Narrated by TYPE III AUDIO.
[Classic post] “Effective Altruism is a Question (not an ideology)” by Helen
What is the definition of Effective Altruism? What claims does it make? What do you have to believe or do, to be an Effective Altruist? I don’t think that any of these questions make sense. It’s not surprising that we ask them: if you asked those questions about feminism or secularism, Islamism or libertarianism, the answers you would get would be relevant and illuminating. Different proponents of the same movement might give you slightly different answers, but synthesising the answers of several people would give you a pretty good feeling for the core of the movement. But each of these movements is answering a question. Should men and women be equal? (Yes.) What role should the church play in governance? (None.) What kind of government should we have? (One based on Islamic law.) How big a role should government play in people’s private lives? (A small one.) Effective Altruism isn’t [...] --- First published: October 16th, 2014 Source: https://forum.effectivealtruism.org/posts/FpjQMYQmS3rWewZ83/effective-altruism-is-a-question-not-an-ideology --- Narrated by TYPE III AUDIO.
“Two Years Community Building, Ten Lessons (Re)Learned” by Rockwell
Today is my two-year anniversary with EA NYC, where I serve as director. To say I've learned a lot would be a tremendous understatement. The more visible parts of that learning are often technical: who's doing what in which organization, why [niche thing I'd never heard of] is actually really important, what mentorship opportunities exist for people with [these very specific qualities], how to wrangle 100+ people into a group photo and still have everyone visible. (I've gotten super good at that last part!) But I've also learned—or relearned—some much larger life lessons. None of the below lessons are wholly novel, but I think they're worth stating for the broader community. Regardless of where I personally am in five or ten years, I think this is a list I'll return to. The contents are simple but so, so easy to forget. Without further ado, here are ten major lessons I've learned [...] ---Outline:(01:48) 2. Extremely impressive people have imposter syndrome(02:30) 3. People often want permission to do good(03:39) 4. Tempering rejection is a learned skill(04:22) 5. There are no adults in the room(04:56) 6. Getting adults in the room requires making room(05:20) 7. Blank slates are scarier for some than others(05:51) 8. Inclusivity requires exclusivity(06:20) 9. People who care a lot can be easy targets for predators(07:00) 10. You will run into your ex at EAG, or: Social/professional divides are hard without active effort--- First published: August 10th, 2023 Source: https://forum.effectivealtruism.org/posts/oPao8avpq48GPvzDZ/two-years-community-building-ten-lessons-re-learned --- Narrated by TYPE III AUDIO.
“Update on cause area focus working group” by Bastian_Stern
Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs about whether it would make sense to rebalance the movement's portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy's Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions. In the end, the group only ended up having two meetings, in part because it proved more difficult than expected to surface key action-relevant disagreements. Prior to the first session, participants circulated relevant memos and their initial thoughts on the topic. The group also did a small [...] --- First published: August 10th, 2023 Source: https://forum.effectivealtruism.org/posts/3kMQTjtdWqkxGuWxB/update-on-cause-area-focus-working-group --- Narrated by TYPE III AUDIO.
“Best Use of 2 Minutes this Month (U.S.)” by Rockwell
Actions that have a large impact sometimes don't feel like much. To counteract that bias, I'm sharing arguably the best use of two minutes this month for those in the U.S. Background: In May, the US Supreme Court upheld the ability of US states to require certain standards for animal products sold within their borders, e.g. California's Prop 12, which banned the sale of animal products that involve certain intensive confinement practices. It was a huge victory! But after their defeat in the Supreme Court, the animal farming industry has turned to Congress, pushing the EATS Act. The proposed legislation would take away state power to regulate the kind of agricultural products that enter their borders. Essentially, if any one state permits the production or sale of a particular agricultural product, every other state could have to do so as well, regardless of how dangerous or unethical the product is and regardless of existing [...] --- First published: August 7th, 2023 Source: https://forum.effectivealtruism.org/posts/qgbLr6es2jCwwcGuH/best-use-of-2-minutes-this-month-u-s --- Narrated by TYPE III AUDIO.
[Linkpost] “I saved a kid’s life today” by michel
I'm working on writing more, quicker, and not directly for an EA Forum audience. This is a post copied over from my blog. I wonder what they're doing today, the kid whose life I saved. Maybe playing with other kids in their village. Maybe seeking shade with his or her siblings, trying to escape the sub-Saharan African heat. Maybe being held by their mother or grandmother, nurtured. Whatever they're doing today, some day they'll grow up, and they'll live. They'll have a first kiss, a favorite dance, a hobby that makes them feel free, a role model they look up to, a best friend… all of it. They'll live. And I think it will be because of what I did today. . . . It isn't thrilling or adventurous, saving a life in the 21st century. I opened my laptop, clicked my way to a bookmarked website, and donated to a standout charity. Someone watching [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: August 7th, 2023 Source: https://forum.effectivealtruism.org/posts/Nvw7dGi4kmuXCDDhH/i-saved-a-kid-s-life-today Linkpost URL:https://substack.com/inbox/post/135792543 --- Narrated by TYPE III AUDIO.
“Soaking Beans - a cost-effectiveness analysis” by NickLaing
TLDR: On early-stage analysis, persuading people to soak their beans before cooking could cost-effectively save Sub-Saharan Africans a significant amount of money, and modestly reduce carbon emissions (great uncertainty). Introduction: Across East Africa, hundreds of millions of people cook and eat beans multiple times every week. In Uganda where I live, beans make up an estimated 25% of the average Ugandan's calorie intake and 40% of their daily protein intake.[1] Unfortunately, cooking beans takes an absurd amount of time - usually two to three hours using charcoal or wood. The great news is that just soaking beans in water for 6-12 hours reduces cooking time by between 20% and 50% and has no negative effect on bean taste or nutrients [2] [3]. When we tested soaking vs. not soaking, cooking time reduced by half. Despite the obvious benefits of massively reduced cooking time using less fuel, very few people in Uganda soak their beans - [...] ---Outline:(00:18) Introduction(02:08) Potential impact calculations(02:12) Assumptions(03:23) CO2 emissions prevented through soaking(03:49) Charcoal: CO2 equivalent saved by bean soaking(06:18) Wood CO2 equivalent saved by bean soaking(08:01) Money Saved by soaking beans(10:19) Tractability(12:02) Why Bean Soaking might not be tractable(12:37) Overall Cost-effectiveness Estimate(14:05) How to help 1% (or more) of Ugandans soak their beans?The original text contained 8 footnotes which were omitted from this narration. --- First published: August 6th, 2023 Source: https://forum.effectivealtruism.org/posts/EAA5YeR6s6Ye2cZjD/soaking-beans-a-cost-effectiveness-analysis --- Narrated by TYPE III AUDIO.
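The post's core arithmetic is easy to sanity-check. Below is a minimal back-of-the-envelope sketch in Python: the cooking-time figures come from the excerpt above, while the meal frequency, fuel burn rate, charcoal price, and emissions factor are illustrative assumptions (marked ASSUMED), not numbers from the post.

```python
# Back-of-the-envelope: per-household savings from soaking beans before cooking.
# Values marked ASSUMED are placeholders for illustration, not from the post.

COOK_HOURS = 2.5                  # midpoint of the 2-3 hours cited above
TIME_REDUCTION = 0.35             # midpoint of the 20-50% reduction cited above
BEAN_MEALS_PER_WEEK = 3           # "multiple times every week"; exact value ASSUMED
CHARCOAL_KG_PER_HOUR = 0.5        # ASSUMED stove fuel burn rate
CHARCOAL_PRICE_USD_PER_KG = 0.30  # ASSUMED local charcoal price
CO2E_KG_PER_KG_CHARCOAL = 7.5     # ASSUMED lifecycle emissions factor

hours_saved_per_week = COOK_HOURS * TIME_REDUCTION * BEAN_MEALS_PER_WEEK
charcoal_saved_kg_per_year = hours_saved_per_week * 52 * CHARCOAL_KG_PER_HOUR
money_saved_usd_per_year = charcoal_saved_kg_per_year * CHARCOAL_PRICE_USD_PER_KG
co2e_saved_kg_per_year = charcoal_saved_kg_per_year * CO2E_KG_PER_KG_CHARCOAL

print(f"Charcoal saved: {charcoal_saved_kg_per_year:.0f} kg/year")
print(f"Money saved:    ${money_saved_usd_per_year:.0f}/year")
print(f"CO2e avoided:   {co2e_saved_kg_per_year:.0f} kg/year")
```

Plugging in different assumptions shows why the post flags great uncertainty: the result scales linearly with every input, so halving the assumed fuel burn rate or meal frequency halves the estimated savings.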
“How can we improve Infohazard Governance in EA Biosecurity?” by Nadia Montazeri
Or: “Why EA biosecurity epistemics are whack”. The effective altruism (EA) biosecurity community focuses on reducing global catastrophic biological risks (GCBRs). This includes preparing for pandemics, improving global surveillance, and developing technologies to mitigate the risks of engineered pathogens. While the work of this community is important, there are significant challenges to developing good epistemics, or practices for acquiring and evaluating knowledge, in this area. One major challenge is the issue of infohazards. Infohazards are ideas or information that, if widely disseminated, could cause harm. In the context of biosecurity, this could mean that knowledge of specific pathogens or their capabilities could be used to create bioweapons. As a result, members of the EA biosecurity community are often cautious about sharing information, particularly in online forums where it could be easily disseminated.[1] The issue of infohazards is not straightforward. Even senior biosecurity professionals may have different thresholds for what they [...] ---Outline:(01:57) Challenges for cause and intervention prioritisation(04:01) Challenges for transparent and trustworthy advocacy(06:06) What are known best practices?The original text contained 3 footnotes which were omitted from this narration. --- First published: August 5th, 2023 Source: https://forum.effectivealtruism.org/posts/3a6QWDhxYTz5dEMag/how-can-we-improve-infohazard-governance-in-ea-biosecurity --- Narrated by TYPE III AUDIO.
“University EA Groups Need Fixing” by Dave Banerjee
(Cross-posted from my website.) I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don't claim to have a complete picture of the university group ecosystem. Disclaimer: I've written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal. My EA Experience: During my freshman year, I heard about a club called Columbia Effective Altruism. Rumor on [...] ---Outline:(00:57) My EA Experience(06:38) Epistemic Problems in Undergraduate EA Communities(08:33) My Best Guess on Why AI Safety Grips Undergraduate Students(11:56) Caveats(12:19) How Retreats Can Foster an Epistemically Unhealthy Culture(12:43) Against Taking Ideas Seriously(13:54) Why Do People Take Ideas Seriously in Retreats?(15:51) Other Retreat Issues(16:58) University Group Organizer Funding(17:18) Why I Think Paying Organizers May Be Bad(18:17) Potential Solutions(19:08) Final Remarks--- First published: August 3rd, 2023 Source: https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing --- Narrated by TYPE III AUDIO.
“Problems with free services for EA projects” by Lizka
EA-motivated specialists sometimes offer free or subsidized versions of normally expensive services to EA projects. I think this is often counterproductive and outline my reasoning in this post. The key problem with free services is that we don’t have market-based information about their quality, so the beneficiaries of a free service might be getting less value than it might appear they are getting. As a result, service providers waste their time providing expensive services to people who wouldn’t pay the full price (instead, providers could charge and donate[1] or spend that time on other impactful work). Additionally, community-level overestimates of the quality of free services are more likely and might lead people who need good services to use the free versions even when they’re worse suited to their needs. If you’re offering, taking, or advertising a free service like this, I think you should believe the situation is an exception to the general heuristic. [...] ---Outline:(03:59) Problems(04:02) It’s harder to evaluate the quality/value of free services, so (1) providers might keep offering free services that are a bad use of their time, and (2) people might use services that are not what they need(08:46) It might be more effective to charge the full cost and donate the profits(10:30) More minor/complicated issues with services that are offered for free(12:04) Situations in which it’s potentially reasonable to offer/advertise free services to EAs(16:30) Other nuances and counterpointsThe original text contained 14 footnotes which were omitted from this narration. --- First published: August 3rd, 2023 Source: https://forum.effectivealtruism.org/posts/rPj6Fh4ZTEpRah3uf/problems-with-free-services-for-ea-projects --- Narrated by TYPE III AUDIO.
“Reflections on my time on the Long-Term Future Fund” by abergal
I'm stepping down as chair of the Long-Term Future Fund. I'm writing this post partially as a loose set of reflections on my time there, and partially as an overall update on what's going on with the fund, as I think we should generally be transparent with donors and grantees, and my sense is the broader community has fairly little insight into the fund's current operations. I'll start with a brief history of what's happened since I joined the fund, and its impact, and move to a few reflections on ways the fund is working now. (Also: you can donate to the Long-Term Future Fund here, and let us know here if you might be interested in becoming a fund manager. The Long-Term Future Fund is part of EA Funds, which is a fiscally sponsored project of Effective Ventures Foundation (UK) (EV UK) and Effective Ventures Foundation USA Inc. (EV US). [...] ---Outline:(00:56) A brief history of my time on the Long-Term Future Fund(03:29) The fund's impact(05:42) Reflections(05:45) Problems with scale(11:22) Transparency--- First published: August 2nd, 2023 Source: https://forum.effectivealtruism.org/posts/9vazTE4nTCEivYSC6/reflections-on-my-time-on-the-long-term-future-fund --- Narrated by TYPE III AUDIO.
“Thoughts on far-UVC after working in the field for 8 months” by Max Görlitz
Views expressed in this article are my own and do not necessarily reflect those of my employer, SecureBio. Summary: Far-UVC has great promise, but a lot of work still needs to be done. There still are many important open research questions that need to be answered before the technology can become widely adopted. Right now, a key priority is to grow the research field and improve coordination. The main reason far-UVC is so promising is that widespread installation could passively suppress future pandemics before we even learn that an outbreak has occurred. Higher doses mean more rapid inactivation of airborne pathogens, but also more risk of harm to skin, eyes, and through indoor air chemistry. Therefore, the important question in safety is, "How high can far-UVC doses go while maintaining a reasonable risk profile?" Existing evidence for skin safety within current exposure guidelines seems pretty robust, and I expect that skin safety won't be the bottleneck for far-UVC deployment at [...] --- First published: July 31st, 2023 Source: https://forum.effectivealtruism.org/posts/z8ZWwm4xeHBAiLZ6d/thoughts-on-far-uvc-after-working-in-the-field-for-8-months --- Narrated by TYPE III AUDIO.
[Linkpost] “Partial Transcript of Recent Senate Hearing Discussing AI X-Risk” by Daniel_Eth
On Tuesday, the US Senate Judiciary Subcommittee on Privacy, Technology and the Law held a hearing on AI. The hearing involved 3 witnesses – Dario Amodei (CEO of Anthropic), Yoshua Bengio (Turing Award winner, and the second-most cited AI researcher in the world), and Stuart Russell (Professor of CS at Berkeley, and co-author of the standard textbook for AI). The hearing wound up focusing a surprising amount on AI X-risk and related topics. I originally planned on jotting down all the quotes related to these topics, thinking it would make for a short post of a handful of quotes, which is something I did for a similar hearing by the same subcommittee 2 months ago. Instead, this hearing focused so much on these topics that I wound up with something that's better described as a partial transcript. All the quotes below are verbatim. Text that is bolded is simply stuff I thought readers might find particularly interesting. If you want to listen to the hearing, you can do so here (it's around 2.5 hours). You might also find it interesting to compare this post to the one from 2 months ago, to see how the discourse has progressed. Opening remarks: Senator Blumenthal: What I have [...] --- First published: July 27th, 2023 Source: https://forum.effectivealtruism.org/posts/67zFQT4GeJdgvdFuk/partial-transcript-of-recent-senate-hearing-discussing-ai-x Linkpost URL:https://medium.com/@daniel_eth/ai-x-risk-at-senate-hearing-7104f371ca0b --- Narrated by TYPE III AUDIO.
“General support for ‘General EA’” by Arthur Malone🔸
TL;DR: When I say “General EA” I am referring to the cluster including the term “effective altruism,” the idea of “big-tent EA,” as well as branding and support of those ideas. This post is a response in opposition to many calls for renaming EA or backing away from an umbrella movement. I make some strategic recommendations and take something of a deep dive using my own personal history/cause prioritization as a case study for why “General EA” works (longpost is long, so there's TL;DRs for each major section). I'm primarily aiming to see if I'm right that there's a comparatively silent group that supports EA largely as it is. If you're in that category and don't need the full rationale and story, the call to action is to add a comment linking your favorite “EA win” (success story/accomplishment you'd like to have people associate with EA). Since long [...] ---Outline:(03:09) “Effective Altruism”(07:26) In support of “Big Tent EA”(13:39) In support of maintaining the EA brand(18:10) ConclusionThe original text contained 4 footnotes which were omitted from this narration. --- First published: July 26th, 2023 Source: https://forum.effectivealtruism.org/posts/FzoMPHtXzTig8pXuh/general-support-for-general-ea --- Narrated by TYPE III AUDIO.
“Launching the meta charity funding circle (MCF): Apply for funding or join as a donor!” by Joey, Vilhelm Skoglund, Gage Weston
Summary: We are launching the Meta Charity Funders, a growing network of donors sharing knowledge and discussing funding opportunities in the EA meta space. Apply for funding by August 27th or join the circle as a donor. See below or visit our website to learn more! If you are doing EA-aligned “meta” work, and have not received substantial funding for several years, you might be worried about funding. Over the past 10 years, Open Philanthropy and EA Funds comprised a large percent of total meta funding and are far from independent of each other. This lack of diversity means potentially effective projects outside their priorities often struggle to stay afloat or scale, and the beliefs of just a few grant-makers can massively shape the EA movement's trajectory. It can be difficult for funders within meta as well. Individual donors often don't know where to give if they don't share EA Funds' approach. Thorough vetting is scarce and expensive, with only a handful of grant-makers deploying tens of millions per year in meta grants, resulting in sub-optimal allocations. This is why we are launching the Meta Charity Funders, a growing network of donors sharing knowledge, discussing funding opportunities, and running joint open grant rounds in the EA meta [...] --- First published: July 26th, 2023 Source: https://forum.effectivealtruism.org/posts/5WLGmCg7vSfXeqSWC/launching-the-meta-charity-funding-circle-mcf-apply-for --- Narrated by TYPE III AUDIO.
“Underwater Torture Chambers: The Horror Of Fish Farming” by Omnizoid
Crossposted from my blog. The horror of fish farming: One of my professors in college taught a class about effective altruism—a social movement about doing good effectively. Whenever he was talking about the scale of animal suffering, all the statistics he talked about were about the suffering of non-fish. It became a running joke—friends and I would make jokes like the following: “a gunman robs a bank, kills over 4 non-fish” or “one (non-fish) death is a tragedy, a million fish deaths are a statistic.” But this professor, despite explicitly ignoring fish whenever he talked about a problem, was nonetheless more pro-fish than almost all people. Because no one cares about fish—at all. I remember when I was young, my grandmother would take me and my brother fishing. This was seen as a totally innocuous, fun way to spend a weekend. The general attitude towards fishing is almost exactly opposite to the attitude towards, for example, hunting: liberal parents in blue states are not happy letting their children hunt land creatures, the way they are happy to let their children hunt fish. This is unsettling—hooking fish in the mouth to yank them out of the water so that they suffocate to death [...] --- First published: July 26th, 2023 Source: https://forum.effectivealtruism.org/posts/kwxE7HYjRpYwSEiKb/underwater-torture-chambers-the-horror-of-fish-farming --- Narrated by TYPE III AUDIO.
[Linkpost] “Shaping Humanity’s Longterm Trajectory” by Toby_Ord
Since writing The Precipice, one of my aims has been to better understand how reducing existential risk compares with other ways of influencing the l… --- First published: July 18th, 2023 Source: https://forum.effectivealtruism.org/posts/Doa69pezbZBqrcucs/shaping-humanity-s-longterm-trajectory Linkpost URL:http://files.tobyord.com/shaping-humanity's-longterm-trajectory.pdf --- Narrated by TYPE III AUDIO.
“Recovering from Rejection: My piece for the In-Depth EA Program” by Aaron Gertler
Formerly known as "Aaron's Epistemic Stories", which stops working as a title when it's on the Forum and people aren't required to read it.What is this post?A story about how I reacted poorly to my first few EA job rejections, and what I learned from reflecting on my mistakes.Context: When I worked at CEA, my colleague was working on EA Virtual Program curricula. She asked me to respond to this prompt:"What made you start caring about having good epistemics? What made you start trying to improve your epistemics? Why?"I wrote a meandering, stream-of-consciousness response and shared it. I assumed it would either be ignored or briefly summarized as part of a larger piece. Instead, it — went directly to the curriculum for the In-Depth Program?That was a surprise.[1] It was a much bigger surprise when people started reaching out to tell me how much it had helped them: maybe a dozen times over the last two years. From the emails alone, it seems to be the most important thing I've written.[2]So I'm sharing a lightly edited version on the Forum, in case it helps anyone else. Recovering from rejectionAfter I graduated from college, I took the most profitable job I could find, at a company in a cheap city. I wanted to save money so I could be flexible later. So far, so good.I started an EA group at the company, which kept me thinking about effective altruism on a regular basis even without my college group. It wasn’t nearly as fun to run as the college group — people who work full-time jobs don't like extra meetings, and my co-organizers kept getting other jobs and leaving. But I still felt like “part of EA”.Eventually, I decided to move on from the company. So I applied to GiveWell, got to the very last step of the application process… and got rejected.Well, I thought, I guess it makes sense that I’m not qualified for an EA job. My grades weren’t great, and I was never a big researcher in college. Time to do something else.(This is a story about a mistake. Do you see it?)I moved to San Diego and spent the next 18 months as a freelance tutor and writer, feeling generally dissatisfied with my life. My local group met rarely and far away; I had no car, I was busy with family stuff, and I became less and less engaged with EA.Through an old connection, I was introduced to a couple who ran an EA-aligned foundation and lived nearby. I ended up doing part-time operations work for them — reading papers, emailing charities with questions, and other EA-flavored stuff.This boosted my confidence and led me to think harder about my career, though I kept running into limitations. For example, GiveDirectly’s CEO wanted to hire a research assistant for his lab at UCSD, but I’d totally forgotten my old R classes and wasn’t a good candidate, despite having a great connection from my operations work. There goes maybe the best opportunity I’ll ever [...] --- First published: July 3rd, 2023 Source: https://forum.effectivealtruism.org/posts/NDRBZNc2sBy5MC8Fw/recovering-from-rejection-my-piece-for-the-in-depth-ea --- Narrated by TYPE III AUDIO.
“Nailing the basics – Theories of change” by Aidan Alexander, CE
WHY WRITE A POST ABOUT THEORIES OF CHANGE? As participants in a movement with ‘effective’ in its name, it’s easy to think of ourselves as being above falling for the most common mistakes made in the… The original text contained 5 footnotes which were omitted from this narration. --- First published: July 16th, 2023 Source: https://forum.effectivealtruism.org/posts/9t7St3pfEEiDsQ2Tr/nailing-the-basics-theories-of-change --- Narrated by TYPE III AUDIO.
[Linkpost] “Fatebook: the fastest way to make and track predictions” by Adam Binks, Sage
Announcing Fatebook: a website that makes it extremely low friction to make and track predictions. It's designed to be very fast - just open a new tab, go to fatebook.io, type your prediction, and hit enter. Later, you'll get an email reminding you to resolve your question as YES, NO, or AMBIGUOUS. It's private by default, so you can track personal questions and give forecasts that you don't want to share publicly. You can also share questions with specific people, or publicly. Fatebook syncs with Fatebook for Slack - if you log in with the email you use for Slack, you'll see all of your questions on the website. As you resolve your forecasts, you'll build a track record - Brier score, Relative Brier score, and see your calibration chart. You can use this to track the development of your forecasting skills. Some stories [...] --- First published: July 11th, 2023 Source: https://forum.effectivealtruism.org/posts/DWFRBzK3rAH3HFDZr/fatebook-the-fastest-way-to-make-and-track-predictions Linkpost URL:https://fatebook.io --- Narrated by TYPE III AUDIO.
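For readers unfamiliar with the scoring rule Fatebook reports: a Brier score is the mean squared difference between your stated probabilities and the eventual 0-or-1 outcomes, so lower is better, and always answering 50% scores 0.25. A minimal sketch of the calculation (not Fatebook's actual code):

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilities and 0/1 outcomes; 0 is perfect."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts at 90%, 20%, and 60%; the first two resolved YES, the last NO.
print(brier_score([0.9, 0.2, 0.6], [1, 1, 0]))  # ~0.337
```

A calibration chart complements this by grouping forecasts into probability buckets and comparing each bucket's average forecast with the fraction of its questions that actually resolved YES.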
“Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity” by MHR
Update: Andrés Jiménez Zorrilla (CEO of SWP) has provided some additional information in the comments. In particular: … The original text contained 1 footnote which was omitted from this narration. --- First published: July 13th, 2023 Source: https://forum.effectivealtruism.org/posts/CmAexqqvnRLcBojpB/electric-shrimp-stunning-a-potential-high-impact-donation --- Narrated by TYPE III AUDIO.
[Linkpost] “Announcing Manifund Regrants” by Austin, Rachel Weinberg
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.

What is regranting? Regranting is a funding model where a donor delegates grantmaking budgets to different individuals known as “regrantors”. Regrantors are then empowered to make grant decisions based on the objectives of the original donor. This model was pioneered by the FTX Future Fund; in a 2022 retro they considered regranting to be very promising at finding new projects and people to fund. More recently, Will MacAskill cited regranting as one way to diversify EA funding.

What is Manifund? Manifund is the charitable arm of Manifold Markets. Some of our past work: impact certificates, with Astral Codex Ten and the OpenPhil AI Worldviews Contest; forecasting tournaments, with Charity Entrepreneurship and Clearer Thinking; donating prediction market winnings to charity, funded by the Future Fund.

How does regranting on Manifund work? Our website makes the process simple, transparent, and fast. A donor contributes money to Manifold for Charity, our registered 501c3 nonprofit. The donor then allocates the money between regrantors of their choice. They can increase budgets for regrantors doing a good job, or pick out new regrantors who share the donor’s values. Regrantors choose which opportunities (eg existing charities, new projects, or individuals) to spend their budgets on, writing up an explanation for each grant made. We expect most regrants to start with a conversation between the recipient and the regrantor, and after that, for the process to take less than two weeks. Alternatively, people looking for funding can post their project on the Manifund site. Donors and regrantors can then decide whether to fund it, similar to Kickstarter. The Manifund team screens the grant to make sure it is legitimate, legal, and aligned with our mission. If so, we approve the grant, which sends money to the recipient’s Manifund account. The recipient withdraws money from their Manifund account to be used for their project.

Differences from the Future Fund’s regranting program: Anyone can donate to regrantors. Part of what inspired us to start this program is how hard it is to figure out where to give as a longtermist donor—there’s no GiveWell, no ACE, just a mass of opaque, hard-to-evaluate research orgs. Manifund’s regranting infrastructure lets individual donors outsource their giving decisions to people they trust, who may be more specialized and more qualified at grantmaking. All grant information is public. This includes the identity of the regrantor and grant recipient, the project description, the grant size, and the regrantor’s writeup. We strongly believe in transparency as it allows for meaningful public feedback, accountability of decisions, and establishment of regrantor track records. Almost everything is done through our website. This lets us move faster, act transparently, set good defaults, and encourage discourse about the projects in comment sections. We recognize that not all grants are suited for publishing; for now, we recommend sensitive grants apply to other donors (such as LTFF, SFF, OpenPhil). We’re starting with less money. The Future [...] 
--- First published: July 5th, 2023 Source: https://forum.effectivealtruism.org/posts/RMXctNAksBgXgoszY/announcing-manifund-regrants Linkpost URL:https://manifund.org/rounds/regrants --- Narrated by TYPE III AUDIO.
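The grant flow described above maps cleanly onto a small data model. The sketch below is a toy illustration of the donor, regrantor, and recipient steps with screening; all names are hypothetical, and it assumes nothing about Manifund's actual codebase.

```python
# Toy model of the regranting flow described above.
# Hypothetical names; not Manifund's actual implementation.
from dataclasses import dataclass

@dataclass
class Grant:
    regrantor: str
    recipient: str
    amount: float
    writeup: str          # every grant is published with an explanation
    approved: bool = False

class RegrantingRound:
    def __init__(self) -> None:
        self.budgets = {}  # regrantor name -> remaining budget
        self.grants = []   # all grants, public by design

    def donate(self, regrantor: str, amount: float) -> None:
        # A donor allocates money to a regrantor of their choice.
        self.budgets[regrantor] = self.budgets.get(regrantor, 0.0) + amount

    def make_grant(self, regrantor: str, recipient: str,
                   amount: float, writeup: str) -> Grant:
        # A regrantor spends part of their budget, with a public writeup.
        if amount > self.budgets.get(regrantor, 0.0):
            raise ValueError("grant exceeds regrantor's budget")
        self.budgets[regrantor] -= amount
        grant = Grant(regrantor, recipient, amount, writeup)
        self.grants.append(grant)
        return grant

    def screen(self, grant: Grant, ok: bool) -> None:
        # The platform screens each grant (legitimate, legal, mission-aligned)
        # before money reaches the recipient's account.
        grant.approved = ok

round_ = RegrantingRound()
round_.donate("alice", 50_000)
g = round_.make_grant("alice", "biosecurity-pilot", 20_000, "Promising early work.")
round_.screen(g, ok=True)
print(g)
```

The design property the post emphasizes (public budgets, writeups, and track records) falls out naturally when every Grant record is visible to everyone.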
“Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge]” by Vadim Albinsky
I would like to thank Lant Pritchett, David Roodman and Matt Lerner for their invaluable comments. You can follow these links to comments from Lant Pritchett and David Roodman. A number of EA forum posts (1, 2) have pointed out that effective altruism has not been interested in education interventions, whether that is measured by funding from GiveWell or Open Philanthropy, or writing by 80,000 Hours. Based on brief conversations with people who have explored education at EA organizations and reading GiveWell’s report on the topic, I believe most of the reason for this comes down to two concerns about the existing evidence that drive very steep discounts to expected income effects of most interventions. The first of these is skepticism about the potential for years of schooling to drive income gains, because the quasi-experimental evidence for these effects is not very robust. The second is the lack of RCT evidence linking specific interventions in low and middle income countries (LMICs) to income gains. I believe the first concern can be addressed by focusing on the evidence for the income gains from interventions that boost student achievement rather than the weaker evidence around interventions that increase years of schooling. The second concern can be addressed in the same way that GiveWell has addressed less-than-ideal evidence for income effects for their other interventions: looking broadly for evidence across the academic literature, and then applying a discount to the expected result based on the strength of the evidence. In this case that means including relevant studies outside of the LMIC context and those that examine country-level effects. I identify five separate lines of evidence that all find similar long-term income impacts of education interventions that boost test scores. None of these lines of evidence is strong on its own, with some suffering from weak evidence for causality, others from contexts different from those where the most cost-effective charities operate, and yet others from small sample sizes or the possibility of negative effects on non-program participants. However, by converging on similar estimates from a broader range of evidence than EA organizations have considered, the evidence becomes compelling. I will argue that the combined evidence for the income impacts of interventions that boost test scores is much stronger than the evidence GiveWell has used to value the income effects of fighting malaria, deworming, or making vaccines, vitamin A, and iodine more available. Even after applying very conservative discounts to expected effect sizes to account for the applicability of the evidence to potential funding opportunities, we find the best education interventions to be in the same range of cost-effectiveness as GiveWell’s top charities. The argument proceeds as follows: I. There are five separate lines of academic literature all pointing to income gains that are surprisingly clustered around the average value of 19% per standard deviation (SD) increase in test scores. They come to these estimates using widely varying levels of analysis and techniques, and between them address all of the major alternative explanations. A. The most direct evidence for the likely impact of charities that [...] The original text contained 17 footnotes which were omitted from this narration. 
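To make the post's headline figure concrete, here is a hedged worked example of how the 19%-per-SD estimate combines with an intervention's effect size and an evidence discount. The effect size and discount below are illustrative assumptions, not the post's numbers.

```python
# Worked example: converting a test-score gain into an expected income gain.
# The 19%/SD average comes from the post; other inputs are ASSUMED placeholders.

INCOME_GAIN_PER_SD = 0.19  # average across the five lines of evidence (post)
effect_size_sd = 0.2       # ASSUMED: intervention raises test scores by 0.2 SD
evidence_discount = 0.5    # ASSUMED: conservative discount for evidence quality

expected_income_gain = INCOME_GAIN_PER_SD * effect_size_sd * evidence_discount
print(f"Expected long-run income gain: {expected_income_gain:.1%}")  # 1.9%
```

This mirrors the post's stated approach of applying "very conservative discounts to expected effect sizes" before comparing education interventions against GiveWell's top charities.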
--- First published: July 13th, 2023 Source: https://forum.effectivealtruism.org/posts/8qXrou57tMGz8cWCL/are-education-interventions-as-cost-effective-as-the-top --- Narrated by TYPE III AUDIO.
“Announcing ‘Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament’” by Forecasting Research Institute
This is a linkpost for "Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament," accessible here: https://forecastingresearch.org/s/XPT.pdf Today, the Forecasting Research Institute (FRI) released "Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament", which describes the results of the Existential-Risk Persuasion Tournament (XPT). The XPT, which ran from June through October of 2022, brought together forecasters from two groups with distinctive claims to knowledge about humanity's future — experts in various domains relevant to existential risk, and "superforecasters" with a track record of predictive accuracy over short time horizons. We asked tournament participants to predict the likelihood of global risks related to nuclear weapon use, biorisks, and AI, along with dozens of other related, shorter-run forecasts. Some major takeaways from the XPT include: The median domain expert predicted a 20% chance of catastrophe and a 6% chance of human extinction by 2100. The median superforecaster predicted a 9% chance of catastrophe and a 1% chance of extinction. Superforecasters predicted considerably lower chances of both catastrophe and extinction than did experts, but the disagreement between experts and superforecasters was not uniform across topics. Experts and superforecasters were furthest apart (in percentage point terms) on AI risk, and most similar on the risk of nuclear war. Predictions about risk were highly correlated across topics. For example, participants who gave higher risk estimates for AI also gave (on average) higher risk estimates for biorisks and nuclear weapon use. Forecasters with higher "intersubjective accuracy"—i.e., those best at predicting the views of other participants—estimated lower probabilities of catastrophic and extinction risks from all sources. Few minds were changed during the XPT, even among the most active participants, and despite monetary incentives for persuading others. See the full working paper here. FRI hopes that the XPT will not only inform our understanding of existential risks, but will also advance the science of forecasting by: Collecting a large set of forecasts resolving on a long timescale, in a rigorous setting. This will allow us to measure correlations between short-run (2024), medium-run (2030) and longer-run (2050) accuracy in the coming decades. Exploring the use of bonus payments for participants who both 1) produced persuasive rationales and 2) made accurate "intersubjective" forecasts (i.e., predictions of the predictions of other participants), which we are testing as early indicators of the reliability of long-range forecasts. Encouraging experts and superforecasters to interact: to share knowledge, debate, and attempt to persuade each other. 
We plan to explore the value of these interactions in future work. As a follow-up to our report release, we are producing a series of posts on the EA Forum that will cover the XPT's findings on: AI risk (in 6 posts): an overview, details on AI risk, details on AI timelines, XPT forecasts on some key AI inputs from Ajeya Cotra's biological anchors report, XPT forecasts on some key AI inputs from Epoch's direct approach model, and consensus on the expected shape of development of AI progress. An overview of findings on biorisk (1 post). An overview of findings on nuclear risk (1 post). An overview of findings from miscellaneous forecasting questions (1 post). FRI's planned next steps for this research agenda, along with a request for input on what FRI should do next (1 post). --- First published: July 10th, 2023 Source: https://forum.effectivealtruism.org/posts/un42vaZgyX7ch2kaj/announcing-forecasting-existential-risks-evidence-from-a --- Narrated by TYPE III AUDIO.
[Linkpost] “The Seeker’s Game – Vignettes from the Bay” by Yulia
Introduction: Last year, one conversation left a lasting impression on me. A friend remarked on the challenges of navigating "corrupting forces" in the Bay Area. Intrigued by this statement, I decided to investigate the state of affairs in the Bay if I had the chance. So when I got the opportunity to visit Berkeley in February 2023, I prepared a set of interview questions. Can you share an experience where you had difficulty voicing your opinion? What topics are hard to clearly think about due to social pressures and factors related to your EA community or EA in general? Is there anything about your EA community that makes you feel alienated? What is your attitude towards dominant narratives in Berkeley? [1] In the end, I formally interviewed fewer than ten people and had more casual conversations about these topics with around 30 people. Most people were involved in AI alignment to some extent. The content for this collection of vignettes draws from the experience of around ten people. [2] I chose the content for the vignettes for one of two reasons – potential representativeness and potential extraordinariness. I hypothesized that some experiences represent the wider EA Berkeley community accurately. Others, I included because they surprised me, and I wanted to find out how common they are. All individuals gave me their consent to post the vignettes in their current form. How did I arrive at these vignettes? It was a four-step process. First, I conducted the interviews while jotting down notes. For the more casual conversations, I took notes afterwards. The second step involved transcribing these notes into write-ups. After that, I obscured any identifying details to ensure the anonymity of the interviewees. Lastly, I converted the write-ups into vignettes by condensing them into narratives and honing in on key points while trying to retain the essence of what was said. I tried to reduce artistic liberties by asking participants to give feedback on how close the vignettes were to the spirit of what they meant (or think they meant at the time). It is worth noting that I bridged some gaps with my own interpretations of the conversations, relying on the participants to point out inaccuracies. By doing that, I might have anchored their responses. Moreover, people provided different levels of feedback. Some shared thorough, detailed reviews pointing out many imprecisions and misconceptions. Sometimes, that process spanned multiple feedback cycles. Other participants gave minimal commentary. Because I am publishing the vignettes months after the conversations and interviews, I want to include how attitudes have changed in the intervening period. I generalised the attitudes into the following categories: Withdrawn endorsement (Status: The interviewee endorsed the following content during the interview but no longer endorses it at the time of publication.) Weakened endorsement (Status: The interviewee has weakened their endorsement of the following content since the interview.) Unchanged endorsement (Status: The interviewee maintains their endorsement of the following content, which has remained unchanged since the interview.) Strengthened endorsement (Status: The interviewee has strengthened their endorsement of the following content since the interview.) I clustered the vignettes according [...] The original text contained 2 footnotes which were omitted from this narration. 
--- First published: July 9th, 2023 Source: https://forum.effectivealtruism.org/posts/WxqyXbyQiEjiAsoJr/the-seeker-s-game-vignettes-from-the-bay Linkpost URL:https://www.lesswrong.com/posts/yXLEcd9eixWucKGHg/the-seeker-s-game-vignettes-from-the-bay --- Narrated by TYPE III AUDIO.
[Linkpost] “Some Observations on Alcoholism” by Devin Kalish
This is a tough one to post, it’s also a little off topic for the forum. I went back and forth a great deal about whether to crosspost it anyway, and ultimately decided to, since I have in some ways posted on this topic here before, and since there are several parts that are directly relevant to Effective Altruism (the last three sections all have substantial relevance of some sort). Doing this also makes it easier for this to be relatively public, and so to get some difficult conversations over with. The short version of it is that I’ve been an alcoholic, mostly in secret, for about three years now. This blogpost is a lengthy dive into different observations about it, and ways it has changed my mind on various issues. I don’t want to post the whole thing below because, well, frankly it’s huge and only occasionally relevant, so instead I’m going to post some relevant quotes as people often do with linkposts of things they didn’t write. There’s a good deal more in the link. First, here’s a quick summary of how things got started: “I started drinking during early 2020, when as far as I can tell there was no special drama going on with Effective Altruism, and I had already been involved with it in a similar capacity for a couple years. Most of the alcoholics I’ve met at this point either got started or got significantly worse during the pandemic, I was no different. But the truth is my drinking even then wasn’t terribly dramatic a coping mechanism. There was never anything that meaningfully ‘drove me to drink’. The idea that drinking at this point could land me here wasn’t part of my decision at all, I was just kind of bored and lonely and decided it would be a fun treat to drink a beer or two at night – something I had very rarely done before. As the pandemic wore on, it became something I looked forward to more and more, and eventually I discovered the appeal of hard liquor, which I never switched back from, and eventually I started working on my thesis for my first MA. The combination of my thesis and hard liquor turned a casual habit and minor coping mechanism into something more obviously hard for me to let go of. Over the course of the next three years things got slowly worse from there, and I came to realize more and more how little control I had. It wasn’t some meaningful part of the larger story of my life, replete with a buried darkness in my soul coming to the forefront, or a unique challenge driven by terrible circumstances. I have had to push back in therapy repeatedly on these subtler and more interesting attempts to make something of the event. The truth is sober reflection makes it all look like little more than a meaningless tragedy.” Here are some reflections on what [...] --- First published: July 8th, 2023 Source: https://forum.effectivealtruism.org/posts/F2MfbmRAiMx2PDhaD/some-observations-on-alcoholism Linkpost URL:https://www.thinkingmuchbetter.com/main/alcoholism/ --- Narrated by TYPE III AUDIO.
“Who Was the Funder that Counterfactually Resulted in LEEP Starting?” by Joey et al.
Lead Exposure Elimination Project (LEEP) is an outstanding Charity Entrepreneurship-incubated charity recognized externally for its impactful work by RP, Founders Pledge, Schmidt Futures, and Open Philanthropy. It's one of the clearest cases of new charities having a profound impact on the world. However, everything is clear in hindsight; it now seems obvious that this was a great idea and team to fund, but who funded LEEP at the earliest stage? Before any of the aforementioned bodies would have considered or looked at them, who provided funding when $60k made the difference between launching and not existing? The CE Seed Network, so far, has been a rather well-kept secret. They are the first people to see each new batch of CE-incubated charities and make a decision on whether and how much to support them. A handful of donors supported LEEP in its earliest days, culminating in the excellent charity we see today. Some of them donated anonymously, never seeking credit or the limelight, just quietly making a significant impact. Others engaged deeply and regularly with the team, eventually becoming trusted board members. Historically, the Seed Network has been a small group (~30) of primarily E2G-focused EAs, invited by the CE team or alumni from the CE program to join. However, now we are opening it up for expressions of interest from those who might want to join in future rounds. Our charity production has doubled (from 5 to 10 charities a year) and although our Seed Network has grown, there is still room for more members to join to support our next batches of charities. We have now created a website to describe how it works. On that website, there's an application form for those who might be a good fit to be a member in the future. It's not a great fit for everyone: it focuses on the CE (near-termist) cause areas, and members need to be able to donate over $10k a year to new charities and to decide whether and whom to fund, and with how much, within a short period (~9 days) of receiving the newest project proposals. But for those who fit, we think it's one of the most impactful ways to donate. --- First published: July 4th, 2023 Source: https://forum.effectivealtruism.org/posts/t6JzBxtrXjLRufE8o/who-was-the-funder-that-counterfactually-resulted-in-leep --- Narrated by TYPE III AUDIO.
“Announcing CE’s new Research Training Program - Apply Now!” by KarolinaSarek et al.
TL;DR: We are excited to announce our Research Training Program. This online program is designed to equip participants with the tools and skills needed to identify, compare, and recommend the most effective charities and interventions. It is a full-time, fully cost-covered program that will run online for 11 weeks. Apply here! Deadline for application: July 17, 2023. The program dates are: October 2 - December 17, 2023. So far, Charity Entrepreneurship has launched and run two successful training programs: a Charity Incubation Program and a Foundation Program. Now we are piloting a third - a Research Training Program, which will tackle a different problem. The Problem: People: Many individuals are eager to enter research careers, level up their current knowledge and skills from junior to senior, or simply make their existing skills more applicable to work within EA frameworks/organizations. At the same time, research organizations have trouble filling a senior-level researcher talent gap. There is a scarcity of specific training opportunities for the niche skills required, such as intervention prioritization and cost-effectiveness analyses, which are hard to learn through traditional avenues. Ideas: A lack of capacity for exhaustive investigation means there is a multitude of potentially impactful intervention ideas that remain unexplored. There may be great ideas being missed, as with limited time, we will only get to the most obvious solutions that other people are likely to have thought of as well. Evaluation: Unlike the for-profit sector, the nonprofit sector lacks clear metrics for assessing an organization's actual impact. External evaluations can help nonprofits evaluate and reorganize their own effectiveness and also allow funders to choose the highest impact opportunities available to them - potentially unlocking more funding (sometimes limited by lack of public external evaluation). There are some great organizations that carry out evaluations (e.g., GiveWell), but they are constrained by capacity and have limited scope; this results in several potentially worthwhile organizations remaining unassessed. Who Is This Program For? Motivated researchers who want to produce trusted research outputs to improve the prioritization and allocation decisions of effectiveness-minded organizations. Early career individuals who are seeking to build their research toolkits and gain practical experience through real projects. Existing researchers in the broader Global Health and Well-being communities (global health, animal advocacy, mental health, health/biosecurity, etc.) who are interested in approaching research from an effectiveness-minded perspective. What Does Being a Fellow Involve? Similar to our Charity Incubation Program, the program focuses on learning generalizable and specific research skills. It involves watching training videos, reading materials, and practicing by applying those skills to concrete mini-research projects. 
Participants learn by doing while we provide guidance and lots of feedback. You will also focus on applying skills, working on different stages of the research process, and producing final research reports that could be used to guide real decision-making. Frequent feedback on your projects from expert researchers. Regular check-in calls with a mentor for troubleshooting, guidance on research, and your career. Writing reports on selected topics. Opportunities to connect with established researchers and explore potential job opportunities. Assistance with editing your cause area report for publication and dissemination. What Are We Offering? 11 weeks of online, full-time training with practical research assignments, expert mentoring, feedback, and published output. "Shovel ready" research topics that are highly promising yet neglected. Stipends to cover [...] --- First published: June 27th, 2023 Source: https://forum.effectivealtruism.org/posts/AdouuTH7esiDQPExz/announcing-ce-s-new-research-training-program-apply-now --- Narrated by TYPE III AUDIO.
“Munk AI debate: confusions and possible cruxes” by Steven Byrnes
There was a debate on the statement “AI research and development poses an existential threat” (“x-risk” for short), with Max Tegmark and Yoshua Bengio arguing in favor, and Yann LeCun and Melanie Mitchell arguing against. The YouTube link is here, and a previous discussion on this forum is here. The first part of this blog post is a list of five ways that I think the two sides were talking past each other. The second part is some apparent key underlying beliefs of Yann and Melanie, and how I might try to change their minds.[1] While I am very much on the “in favor” side of this debate, I didn’t want to make this just a “why Yann’s and Melanie’s arguments are all wrong” blog post. OK, granted, it’s a bit of that, especially in the second half. But I hope people on the “anti” side will find this post interesting and not-too-annoying. Five ways people were talking past each other: 1. Treating efforts to solve the problem as exogenous or not. This subsection doesn’t apply to Melanie, who rejected the idea that there is any existential risk in the foreseeable future. But Yann suggested that there was no existential risk because we will solve it; whereas Max and Yoshua argued that we should acknowledge that there is an existential risk so that we can solve it. By analogy, fires tend not to spread through cities because the fire department and fire codes keep them from spreading. Two perspectives on this are: If you’re an outside observer, you can say that “fires can spread through a city” is evidently not a huge problem in practice. If you’re the chief of the fire department, or if you’re developing and enforcing fire codes, then “fires can spread through a city” is an extremely serious problem that you’re thinking about constantly. I don’t think this was a major source of talking-past-each-other, but added a nonzero amount of confusion. 2. Ambiguously changing the subject to “timelines to x-risk-level AI”, or to “whether large language models (LLMs) will scale to x-risk-level AI”. The statement under debate was “AI research and development poses an existential threat”. This statement does not refer to any particular line of AI research, nor any particular time interval. The four participants’ positions in this regard seemed to be: Max and Yoshua: Superhuman AI might happen in 5-20 years, and LLMs have a lot to do with why a reasonable person might believe that. Yann: Human-level AI might happen in 5-20 years, but LLMs have nothing to do with that. LLMs have fundamental limitations. But other types of ML research could get there—e.g. my (Yann’s) own research program. Melanie: LLMs have fundamental limitations, and Yann’s research program is doomed to fail as well. The kind of AI that might pose an x-risk will absolutely not happen in the foreseeable future. (She didn’t quantify how many years count as the “foreseeable future”.) It seemed to me that all four participants (and the moderator!) were making timelines and LLM-related arguments, in ways that were both annoyingly vague and unrelated to the statement under debate. (If astronomers found a [...] --- First published: June 27th, 2023 Source: https://forum.effectivealtruism.org/posts/LEEcSn4gt7nBwBghk/munk-ai-debate-confusions-and-possible-cruxes --- Narrated by TYPE III AUDIO.
“Decision-making and decentralisation in EA” by William_MacAskill
This post is a slightly belated contribution to the Strategy Fortnight. It represents my personal takes only; I’m not speaking on behalf of any organisation I’m involved with. For some context on how I’m now thinking about talking in public, I’ve made a shortform post here [link]. Thanks to the many people who provided comments on a draft of this post. Intro and Overview: How does decision-making in EA work? How should it work? In particular: to what extent is decision-making in EA centralised, and to what extent should it be centralised? These are the questions I’m going to address in this post. In what follows, I’ll use “EA” to refer to the actual set of people, practices and institutions in the EA movement, rather than EA as an idea. My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have. It’s hard to know whether the right response to this is to become more centralised or less. In this post, I’m mainly hoping just to start a discussion of this issue, as it’s one that impacts a wide number of decisions in EA. [1] At a high level, though, I currently think that the balance of considerations tends to push in favour of decentralisation relative to where we are now. But centralisation isn’t a single spectrum, and we can break it down into sub-components. I’ll talk about this in more depth later in the post, but here are some ways in which I think EA should become more decentralised: Perception: At the very least, wider perception should reflect reality on how (de)centralised EA is. That means: Core organisations and people should communicate clearly (and repeatedly) about their roles and what they do and do not take ownership for. (I agree with Joey Savoie’s post, which he wrote independently of this one.) We should, insofar as we can, cultivate a diversity of EA-associated public figures. [Maybe] The EA Forum could be renamed. (Note that many decisions relating to CEA will wait until it has a new executive director.) [Maybe] CEA could be renamed. (This is suggested by Kaleem here.) Funding: It’s hard to fix, but it would be great to have a greater diversity of funding sources. That means: Recruiting more large donors. Some significant donor or donors starting a regranters program. More people pursuing earning to give, or donating more (though I expect this “diversity of funding” consideration to have already been baked in to most people’s decision-making on this). Luke Freeman has a moving essay about the continued need for funding here. Decision-making: Some projects that are currently housed within EV could spin out and become their own legal entities. The various different projects within EV have each been thinking through whether it makes sense for them to spin out. I expect around half of the projects will ultimately spin out over the coming year or two, which seems positive from my perspective. [Maybe] CEA could partly dissolve into sub-projects. Culture: We could [...] --- First published: June 26th, 2023 Source: https://forum.effectivealtruism.org/posts/DdSszj5NXk45MhQoq/decision-making-and-decentralisation-in-ea --- Narrated by TYPE III AUDIO.
“Downsides of Small Organizations in EA” by Ozzie Gooen
Epistemic Status: This is a subject I've been casually thinking about for a while, but I wrote this document fairly quickly. Take this with a big grain of salt. This is written in a personal capacity.

A lot of EA, especially in meta and longtermism, is made up of small organizations and independent researchers. This provides some benefits, but I think the downsides are substantial and often unappreciated. More clearly:

- EA funding mostly comes from a very few funders, but it goes to a mass of small organizations. My impression is that this is an unusual combination.
- I think that there are a lot of important downsides to having things split up into a bunch of small nonprofits.
- I'm suspicious of many of the reasons for having small organizations that I've come across. There might well still be good reasons I haven't heard or that haven't been suggested.
- I suggest some potential changes we could make to try to get some of the best incremental tradeoffs.

Downsides

Low Management Flexibility

If you want to quickly create a new project in a sizeable organization, you can pull people from existing teams. This requires upper management but is normal for said management. On the other hand, if you instead have a bunch of tiny independent organizations, your options are much more limited. Managers of tiny organizations can be near-impossible to move around because many of them own key funding relationships. Pulling together employees from different organizations is a pain, as no one has the authority to directly do this. The best you can do is slowly encourage people to join said new project.

Moving people around is crucial for startups and tech firms. The first version of Amazon Prime was made in under two months, in large part because Jeff Bezos was able to rapidly deploy the right people to it. At other tech companies, some amount of regular team-member rotation is considered healthy. Strong software engineers get to work on many projects and with many people.

Small nonprofit teams with locked-in mission statements are the opposite of this. This rigidity could be good for donors with little trust, but it comes at a substantial cost in flexibility.

I’ve seen several projects in EA come up that could use rapid labor. Funding rounds seem particularly labor-intensive. It often seems to me like it should be possible to pull trusted people from existing organizations for a few weeks or months, but doing so is awkward because they’re formally part of separate organizations with specific mission statements and funding agreements.

A major thing that managers at sizeable organizations do is size up requests for labor changes. The really good requests (with good managers) get quickly moved forward, and the bad ones are shot down. This is hard to do without clear, able, and available authorities.

Low Employee Flexibility

Employees who join small organizations with narrow missions can be assured that they will work on those missions. But if they ever want to try working with a different team or project, even just for a few months, the only option is often that [...]

--- First published: June 24th, 2023 Source: https://forum.effectivealtruism.org/posts/P55P4YJoncfQmZ2RR/downsides-of-small-organizations-in-ea --- Narrated by TYPE III AUDIO.
“EA’s success no one cares about” by Jakub Stencel
Status

While this is part of EA Strategy Fortnight, my intention is to focus more on what effective altruism did well, rather than what it should do. At the same time, my hope is that the post provides the community with at least some valuable context for the discussions about the path forward.

This post will be very subjective and a lot less thought-through than what I am usually comfortable sharing, although it is built on ~13 years of experience in the field. Some details may be off, for example due to memory distortion or indirect testimonies. Nevertheless, it’s truthful to my internal models – when I walk and talk to my dog about this kind of stuff, you can expect my mind to go in the same direction as this post.

[Photo: Kisiel, my co-author, who prefers to stay a lurker.]

Context

Recently, there has been a lot of attention directed at effective altruism. Some was external, but, from my perspective, most of it came from within the movement. My interpretation was that at least a portion of it was built on feelings of anxiety, doubt, and maybe some anger or fear. Of course, a lot of the concerns seemed to me legitimized by what was happening or what we were discovering.

In some way, I was worried about the community I identify as part of, but at the same time, there was this feeling of appreciation that we can go through a crisis together. It’s a lesson for a young movement, and experience is invaluable. Just like it’s better to learn hard lessons about life as a teenager than as an adult – ideally, of course, with not much harm involved.

The energy spent on inward focus felt encouraging, even though I disagreed with a chunk of the opinions. After all, some of the values of effective altruism I’m the most optimistic about are openness to criticism, intellectual humility, and truth-seeking. But the more external and internal takes I was reading, the more something seemed off. Something was missing.

It felt one-sided. There was almost no mention of successes and wins – some appreciation of what this very young and weird movement managed to achieve in such a short period of time. Maybe I shouldn’t expect this in adversarial pieces about EA, and maybe it was implied when people were making criticism internally, but it still didn’t feel fully right to me.

It felt like we all take effective altruism for granted. There was not much gratitude in the air.

Maybe one can argue that EA hasn’t done much. While I have my strong intuitions on the counterfactual impact of EA in many areas, in the end I don’t feel fully qualified here, so I would prefer to defer. Yet, I’m confident that there is at least one success we should celebrate, and it’s very much absent from the discourse – making historical progress for animals.

Animal advocate’s lens on effective altruism

This is my take on the short path of effective altruism’s impact on animals. Please note that I came from the part of [...]

--- First published: June 24th, 2023 Source: https://forum.effectivealtruism.org/posts/GCaRhu84NuCdBiRz8/ea-s-success-no-one-cares-about --- Narrated by TYPE III AUDIO.
“On focusing resources more on particular fields vs. EA per se - considerations and takes” by Ardenlk
Epistemic status: This post is an edited version of an informal memo I wrote several months ago. I adapted it for the forum at the prompting of EA strategy fortnight. At the time of writing I conceived of its value as mostly in laying out considerations / trying to structure a conversation that felt a bit messy to me at the time, though I do give some of my personal takes too.

I went back and forth a decent amount about whether to post this - I'm not sure about a lot of it. But some people I showed it to thought it would be good to post, and it feels like it's in the spirit of EA strategy fortnight to have a lower bar for posting, so I'm going for it.

Overall take

Some people argue that the effective altruism community should focus more of its resources on building cause-specific fields (such as AI safety, biosecurity, global health, and farmed animal welfare), and less on effective altruism community building per se. I take the latter to mean something like: community building around the basic ideas/principles, which invests in particular causes always with the more tentative attitude of "we're doing this only insofar as/while we're convinced this is actually the way to do the most good." (I'll call this "EA per se" for the rest of the post.)

I think there are reasons for some shift in this direction. But I also have some resistance to some of the arguments I think people have for it. My guess is that:

- Allocating some resources from "EA per se" to field-specific development will be an overall good thing, but
- My best guess (not confident) is that only a modest reallocation is warranted, and
- I worry some reasons for reallocation are overrated.

In this post I'll:

- Articulate the reasons I think people have for favouring shifting resources in this way (just below), and give my takes on them (this will doubtless miss some reasons).
- Explain some reasons in favour of continuing (substantial) support for EA per se.

Reasons I think people might have for a shift away from EA per se, and my quick takes on them

1. The reason: The EA brand is (maybe) heavily damaged post-FTX — making building EA per se less tractable and less valuable, because getting involved in EA per se now has bigger costs.

My take: I think how strong this is basically depends on how people perceive EA now post-FTX, and I'm not convinced that the public feels as badly about it as some other people seem to think. I think it's hard to infer how people think about EA just by looking at headlines or Twitter coverage about it over the course of a few months. My impression is that lots of people are still learning about EA and finding it intuitively appealing, and I think it's unclear how much this has changed on net post-FTX. Also, I think EA per se has a lot to contribute to the conversation about AI risk — and was talking about it before AI concern became mainstream — so [...]

--- First published: June 24th, 2023 Source: https://forum.effectivealtruism.org/posts/CEtKAP5Gr7QrTXHRW/on-focusing-resources-more-on-particular-fields-vs-ea-per-se --- Narrated by TYPE III AUDIO.
“Four claims about the role of effective giving in the EA community” by Sjir Hoeijmakers
I’m sharing the below as part of the EA Strategy Fortnight. I think there's value in discussing what the role of effective giving in the EA community should be, as (1) I expect people have quite different views on this, and (2) I think there are concrete things we should do differently based on our views here (I share some suggestions at the bottom of this post). These claims or similar ones have been made by others in various places (e.g. here, here, and here), but I thought it'd be useful to put them together in one place so people can critique them not only one-by-one but also as a set. This post doesn’t make a well-supported argument for all these claims and suggestions: many are hypotheses on which I’d love to see more data and/or pushback. Full disclosure: I work at Giving What We Can (though these are my personal views). [...]

---

Outline:
(00:58) Claim 1: Giving effectively and significantly should be normal in the EA community
(02:21) Claim 2: Giving effectively and significantly should not be required in the EA community
(03:18) Claim 3: Giving effectively and significantly should be sufficient to be part of the EA community
(05:00) Claim 4: Giving effectively and significantly should not require one to be part of the EA community
(05:52) A few recommendations based on these claims
(08:44) Credits

The original text contained 12 footnotes which were omitted from this narration.

--- First published: June 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/zohs3eYHd8WdhF88M/four-claims-about-the-role-of-effective-giving-in-the-ea --- Narrated by TYPE III AUDIO.
“The flow of funding in EA movement building” by Vaidehi Agarwalla
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.

I’ve been reflecting on the role of funding in the EA movement & community over time. Specifically, I wanted to improve common knowledge around funding flows in the EA movement building space, since it seems that many people may not be aware of them. Funders (and the main organizations they have supported) have shaped the EA community in many ways: the rate & speed at which EA has grown (example), the people who are attracted and given access to opportunities, the culture and norms the community embodies, and the overall ecosystem.

I share some preliminary results from research I’ve conducted looking at the historical flow of funding to movement building organizations. I wanted to share what I have so far for the strategy fortnight to get the conversation started. I think there is enough information here to understand the general pattern of funding flows. If you want to play around with the data, here is my (raw, messy) spreadsheet.

Key observations

Total funding 2012-2023 by known sources

According to known funding sources, approximately $245M has been granted to EA movement building organizations and projects since 2012. I’d estimate the real number is something like $250-280M. The Open Philanthropy EA Community Growth (Longtermism) team (OP LT) has directed ~64% ($159M) of known movement building funding (incl. ~5% or $12M to the EAIF) since 2016. Note that OP launched an EACG program for Global Health and Wellbeing in 2022, which started making grants in 2023. Its budget is significantly smaller (currently ~$10M per year) and it currently prioritizes effective giving organizations.

[Chart: total funding by source; the unlabeled dark blue segment is “other donors”.]

Funders of EA Groups from 2015-2022

See discussion below for a description of the "CEA - imputed" category. Note that I’ve primarily estimated paid organizer time, not general groups expenses.

EA groups are an important movement building project. The Centre for Effective Altruism (CEA) has had an outsized influence on EA groups for much of the history of the EA movement. Until May 2021, CEA was the primary funder of part- and full-time work on EA groups. In May 2021, CEA narrowed its scope to certain university & city/national groups, and the EA Infrastructure Fund (EAIF) started making grants to non-target groups. In 2022, OP LT took over most university groups funding from both CEA (in April) and EAIF (in August). Until 2021, most of CEA’s funding came from OP LT, so its EA groups funding can be seen as an OP LT regrant.

Breakdown of funding by source and time (known sources)

2012-2016

Before 2016, there was very limited funding available for meta projects and almost no support from institutional funders. Most organizations active during this period were funded by individual earning-to-givers and major donors, or were volunteer-run. Here’s a view of funding from 2012-2016. No donations from Jaan Tallinn during this period were via SFF, as it didn’t exist yet. There is a $10K donation from OP to a UC Berkeley group in 2015 that is not visible in the main chart. “Other donors” includes mostly individual [...]

--- First published: June 23rd, 2023 Source: https://forum.effectivealtruism.org/posts/nnTQaLpBfy2znG5vm/the-flow-of-funding-in-ea-movement-building --- Narrated by TYPE III AUDIO.
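[Narration note: as a quick sanity check on the headline figures in the excerpt above (the ~$245M total and OP LT's $159M and $12M-to-EAIF), the quoted percentages do line up. A minimal sketch using only the post's own numbers:]

```python
# Sanity check on the funding shares quoted in the post (all figures in $M,
# taken directly from the excerpt above).
total = 245   # known EA movement building funding since 2012
op_lt = 159   # directed by the OP EA Community Growth (Longtermism) team
eaif = 12     # portion of OP LT funding granted via the EAIF

print(f"OP LT share: {op_lt / total:.1%}")  # 64.9%, matching the post's ~64%
print(f"EAIF share:  {eaif / total:.1%}")   # 4.9%, matching the post's ~5%
```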
[Linkpost] “Lab-grown meat is cleared for sale in the United States” by Ben_West
Upside Foods and Good Meat, two companies that make what they call “cultivated chicken,” said Wednesday that they have gotten approval from the US Department of Agriculture to start producing their cell-based proteins.Good Meat, which is owned by plant-based egg substitute maker Eat Just, said that production is starting immediately. --- First published: June 22nd, 2023 Source: https://forum.effectivealtruism.org/posts/sPnNyG79CcSZq9avo/lab-grown-meat-is-cleared-for-sale-in-the-united-states Linkpost URL:https://edition.cnn.com/2023/06/21/business/cultivated-meat-us-approval/index.html --- Narrated by TYPE III AUDIO.
“Five Years of Rethink Priorities: What We’ve Learned” by Peter Wildeford
This post contains a reflection on our journey as co-founders of Rethink Priorities. We are Peter Wildeford and Marcus A. Davis.

In 2017, we were at a crossroads. We had been working on creating new global health and development interventions, co-founding an organization that used text message reminders to encourage new parents in India to get their children vaccinated. However, we felt there was potentially more value in creating an organization that would help tackle important questions within cause and intervention prioritization. We were convinced that farmed and wild animal welfare were very important, but we didn’t know which approaches to helping those animals would be impactful. Hits-based giving seemed like an important idea, but we were unsure how to empirically compare that type of approach to the mostly higher-certainty outcomes available from funding GiveWell’s top charities.

So, we chose to create a research organization. Our aim was to take the large evidence base and strong approaches used to understand global health interventions and apply them to other neglected cause areas, such as animal welfare and reducing risks posed by unprecedented new technologies like AI. We wanted to identify neglected interventions and do the research needed to make them happen.

~~~

Five years later, Rethink Priorities is now a research and implementation group that works with foundations and impact-focused non-profits to identify pressing opportunities to make the world better, figures out strategies for working on those problems, and does that work.

Reflecting on everything the organization has accomplished and everything we want to happen in the next five years, we’re proud of a lot of the work our team has done.

For example, we went from being unsure if invertebrates were capable of suffering to researching the issue and establishing invertebrate welfare as a proposition worth taking seriously. Following through, we helped create some of the first groups in the effective animal advocacy space working on interventions targeting invertebrates. Our team did the deep philosophical work and the practical research needed to establish specific interventions, and we incubated groups to implement them.

Building on this work, our ambitious Moral Weight Project improved our understanding of both capacity for welfare and intensity of valenced experiences across species, and the moral implications of those possible differences. By doing so, the Moral Weight Project laid the foundation for cross-species cost-effectiveness analyses that inform important decisions regarding how many resources grantmakers and organizations should tentatively allocate towards helping each of these species.

We have also produced dozens of in-depth research pieces. Our global health and development team alone has produced 23 reports commissioned by Open Philanthropy that increased the scope of impactful interventions considered in their global health and development portfolio. This work has influenced decisions directing millions of dollars towards the most effective interventions.

Our survey and data analysis team also worked closely with more than a dozen groups in EA, including the Centre for Effective Altruism, Open Philanthropy, 80,000 Hours, and 1 Day Sooner, to help them fine-tune their messaging, improve their advertising, and have better data analysis for their impact tracking.

RP has provided [...]
--- First published: June 21st, 2023 Source: https://forum.effectivealtruism.org/posts/kP95dWZJR5qKwdThA/five-years-of-rethink-priorities-what-we-ve-learned --- Narrated by TYPE III AUDIO.
“Rethink Priorities’ Worldview Investigation Team: Introductions and Next Steps” by Bob Fischer
Some months ago, Rethink Priorities announced its interdisciplinary Worldview Investigation Team (WIT). Now, we’re pleased to introduce the team’s members:

- Bob Fischer is a Senior Research Manager at Rethink Priorities, an Associate Professor of Philosophy at Texas State University, and the Director of the Society for the Study of Ethics & Animals. Before leading WIT, he ran RP’s Moral Weight Project.
- Laura Duffy is an Executive Research Coordinator for Co-CEO Marcus Davis and works on the Worldview Investigations Project. She is a graduate of the University of Chicago, where she earned a Bachelor of Science in Statistics and co-facilitated UChicago Effective Altruism’s Introductory Fellowship.
- Arvo Muñoz Morán is a Quantitative Researcher working on the Worldview Investigations Team at Rethink Priorities and a research assistant at Oxford's Global Priorities Institute. Before that, he was a Research Analyst at the Forethought Foundation for Global Priorities Research and earned an MPhil in Economics from Oxford. His background is in mathematics and philosophy.
- Hayley Clatterbuck is a Philosophy Researcher at Rethink Priorities and an Associate Professor of Philosophy at the University of Wisconsin-Madison. She has published on topics in probability, evolutionary biology, and animal minds.
- Derek Shiller is a Philosophy Researcher at Rethink Priorities. He has a PhD in philosophy and has written on topics in metaethics, consciousness, and the philosophy of probability. Before joining Rethink Priorities, Derek worked as the lead web developer for The Humane League.
- David Bernard is a Quantitative Researcher at Rethink Priorities. He will soon complete his PhD in economics at the Paris School of Economics, where his research focuses on forecasting and causal inference in the short and long run. He was a Fulbright Scholar at UC Berkeley and a Global Priorities Fellow at the Global Priorities Institute.

Over the next few months, the team will be working on cause prioritization—a topic that raises hard normative, metanormative, decision-theoretic, and empirical issues. We aren’t going to resolve them anytime soon. So, we need to decide how to navigate a sea of open questions. In part, this involves making our assumptions explicit, producing the best models we can, and then conducting sensitivity analyses to determine both how robust our models are to uncertainty and where the value of information lies.

Accordingly, WIT’s goal is to make several contributions to the broader conversation about global priorities. Among the planned contributions, you can expect:

- A cross-cause cost-effectiveness model. This tool will allow users to compare interventions like corporate animal welfare campaigns with work on AI safety, the Against Malaria Foundation with attempts to reduce the risk of nuclear war, biosecurity projects with community building, and so on. We’ve been working on a draft of this model in recent months and we recently hired two programmers to accelerate its public release. While this tool won’t resolve all disputes about resource allocation, we hope it will help the community reason more transparently about these issues.
- Surveys of key stakeholders about the inputs to the model. Many people have thought long and hard about how much x-risk certain interventions can reduce, the relative importance of improving human and [...]
--- First published: June 21st, 2023 Source: https://forum.effectivealtruism.org/posts/kSrjdtazFhkwwLuK8/rethink-priorities-worldview-investigation-team --- Narrated by TYPE III AUDIO.
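[Narration note: the cross-cause model described above was unreleased at the time of this episode, but the basic shape of such a tool is easy to gesture at: sample uncertain inputs (including moral weights), convert each intervention's outcomes into a common unit, and see how often each option wins. Below is a minimal Monte Carlo sketch along those lines; every distribution and number is invented for illustration, and this is not RP's actual model.]

```python
import random

N = 100_000  # Monte Carlo samples

def animal_campaign():
    """Human-equivalent welfare-years per $1,000 for a hypothetical corporate
    campaign. The moral weight makes hen-years commensurable with human-years.
    Every parameter below is invented for illustration."""
    hens_helped = random.lognormvariate(3.0, 1.0)  # hens affected per $1,000
    years_improved = random.uniform(0.5, 2.0)      # duration of the welfare gain
    moral_weight = random.betavariate(2, 5)        # hen welfare relative to human welfare
    return hens_helped * years_improved * moral_weight

def global_health():
    """DALYs averted per $1,000 for a hypothetical health intervention."""
    return random.lognormvariate(-1.0, 0.5)

animal = [animal_campaign() for _ in range(N)]
health = [global_health() for _ in range(N)]

print(f"Animal campaign, mean: {sum(animal) / N:.1f} per $1,000")
print(f"Global health, mean:   {sum(health) / N:.1f} per $1,000")

# The distributional question is often more informative than the means:
share = sum(a > h for a, h in zip(animal, health)) / N
print(f"Animal campaign wins in {share:.0%} of sampled worlds")
```

[Even this toy shows why the post emphasizes sensitivity analysis: the output is a distribution rather than a single number, and the probability that one intervention beats another can matter more than a point estimate.]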
“My tentative best guess on how EAs and Rationalists sometimes turn crazy” by Habryka
Epistemic status: This is a pretty detailed hypothesis that I think overall doesn’t add up to more than 50% of my probability mass on explaining datapoints like FTX, Leverage Research, the Zizians, etc. I might also be really confused about the whole topic.

Since the FTX explosion, I’ve been thinking a lot about what caused FTX and, relatedly, what caused other similarly crazy- or immoral-seeming groups of people in connection with the EA/Rationality/X-risk communities. I think there is a common thread between a lot of the people behaving in crazy or reckless ways, that it can be explained, and that understanding what is going on there might be of enormous importance in modeling the future impact of the extended LW/EA social network.

The central thesis: "People want to fit in"

I think the vast majority of the variance in whether people turn crazy (and, ironically, also whether people end up aggressively “normal”) depends on their desire to fit into their social environment. The forces of conformity are enormous and strong, and most people are willing to quite drastically change how they relate to themselves, and what they are willing to do, based on relatively weak social forces, especially in the context of a bunch of social hyperstimulus (lovebombing is one central example of social hyperstimulus, but twitter-mobs and social-justice cancelling behaviors also seem similar to me in that they evoke extraordinarily strong reactions in people).

My current model of this kind of motivation in people is quite path-dependent and myopic. Even when someone could leave a social context that seems kind of crazy or abusive to them and find a different one that is better, often with only a few weeks of effort, they rarely do so. (They won't necessarily find a great social context, since social relationships do take quite a while to form, but at least when I've observed abusive dynamics, it wouldn't have taken the people involved very long to find one that is better than the bad situation they are currently in.) Instead, people are very attached, much more than I think rational choice theory would generally predict, to the social context they end up in, and very rarely even consider the option of leaving and joining another one.

This means that I currently think the vast majority of people (around 90% of the population or so) are totally capable of being pressured into adopting extreme beliefs, being moved to extreme violence, or participating in highly immoral behavior, if you just put them into a social context where the incentives push in the right direction (see also Milgram and the effectiveness of military drafts).

In this model, the primary reason people are not crazy is that social institutions and groups that drive people to extreme action tend to be short-lived. The argument here is an argument from selection, not planning. Cults that drive people to extreme action die out quite quickly, since they make enemies or engage in various types of self-destructive behavior. Moderate religions that [...]

--- First published: June 21st, 2023 Source: https://forum.effectivealtruism.org/posts/MMM24repKAzYxZqjn/my-tentative-best-guess-on-how-eas-and-rationalists --- Narrated by TYPE III AUDIO.
“Why Altruists Can’t Have Nice Things” by lincolnq
Gesturing at a thing to mostly avoid. My personal opinion. This topic has been discussed on the EA Forum before, e.g. Free-spending EA might be a big problem (2022) and The biggest risk of free-spending EA is grift (2022). I also wrote What's Your Hourly Rate? in 2013, and Value of time as an employee in 2019. This piece mostly stands on its own.

There's a temptation, when solving the world's toughest and most-important problems, to throw money around. Lattes on tap! Full-time massage team! Business class flights! Retreat in the Bahamas! When you do the cost/benefit analysis it comes out positive: "An extra four hours of sleep on the plane is worth four thousand dollars, because of how much we're getting paid and how tight the time is."

The problem, which we always underindex on, is that our culture doesn't stand up to this kind of assault on normalcy. No altruistic, mission-oriented culture can.

"I have never witnessed so much money in my life." [1]

What is culture?

I often phrase it as "lessons from the early days of an org." How we survive; how we make it work despite the tough times; our story of how we started with something small and ended up with something great. That knowledge fundamentally pervades everything we do. It needs upkeep and constant reinforcement. "It is always Day One" [2] refers to how Amazon is trying hard, even as it has grown huge, to preserve its culture of scrappiness and caring.

What perks say

Fancy, unusual, expensive perks are costly signals. They're saying or implying the following:

- Your time is worth a lot of money
- You are special and important; you deserve this
- We are rich and successful; we are elite
- We are generous and you are lucky to be in our orbit
- You're in the inner ring; you're better than people who aren't part of this
- We desperately want to keep you around
- You are free from menial tasks
- You would never pay for this on your own—but through us, you can have it anyway
- We're just like Google!

Some of these things might be locally true, but when I zoom out, I get a villainous vibe: this story flatters, it manipulates, it promotes hubris, it tells lies you want to believe. It's a Faustian trade: in exchange for these perks you "just" have to distort your reality, and we're not even asking you to believe hard scary things, just nice ego-boosting things about how special, irreplaceable, on-the-right-track we all are.

Signals you might want to send instead

The work cultures I prefer would signal something like the following:

- We're normal people who have chosen to take on especially important work
- We have an angle / insight that most people haven't realized/acted on yet
- We might be wrong, and are constantly seeking evidence that would change our minds
- We should try to be especially virtuous whenever we find ourselves setting a moral example for others
- (We aren't morally better by default, [...]

--- First published: June 21st, 2023 Source: https://forum.effectivealtruism.org/posts/t9e6enPXcH6HFzQku/why-altruists-can-t-have-nice-things --- Narrated by TYPE III AUDIO.
“Longtermists are perceived as power-seeking” by OllieBase
A short and arguably unfinished blog post that I'm sharing as part of EA strategy fortnight. There's probably a lot more to say about this, but I've sat on this draft for a few months and don't expect to have time to develop the argument much further.

I understand longtermism to be the claim that positively shaping the long-term future is a moral priority. The argument for longtermism goes:

1. The future could be extremely large;
2. The beings who will inhabit that future matter, morally;
3. There are things we can do to improve the lives of those beings (one of which is reducing existential risk);
4. Therefore, positively shaping the long-term future should be a moral priority.

However, I have one core worry about longtermism, and it's this: people (reasonably) see its adherents as power-seeking. I think this worry somewhat extends to broad existential risk reduction work, but much less so.

[Image: cool planet]

Arguments for longtermism tell us something important and surprising: that there is an extremely large thing that people aren't paying attention to. That thing is the long-term future. In some ways, it's odd that we have to draw attention to this extremely large thing. Everyone believes the future will exist and most people don't expect the world to end that soon.[1]

Perhaps what longtermism introduces to most people is actually premises 2 and 3 (above) — that we might have some reason to take the future seriously, morally, and that we can shape it. In any case, longtermism seems to point to something that people vaguely know about or even agree with already, and then say that we have reason to try and influence that thing.

This would all be fine if everyone felt like they were on the same team. That is, when longtermists say "we should try and influence the long-term future", everyone listening would see themselves as part of that "we". This doesn't seem to be what's happening. For whatever reason, when people hear longtermists say "we should try and influence the long-term future", they hear the "we" as just the longtermists.[2]

This is worrying to them. It sounds like this small group of people making this clever argument will take control of this extremely big thing that no one thought you could (or should) control. The only thing that could make this worse is if this small group of people were somehow undeserving of more power and influence, such as relatively wealthy[3], well-educated white men. Unfortunately, many people making this argument are relatively wealthy, well-educated white men (including me).

To be clear, I think longtermists do not view accruing power as a core goal or as an implication of longtermism.[4] Importantly, when longtermists say "we should try and influence the long-term future", I think they/we really mean everyone.[5] Ironically, it seems that, because no one else is paying attention to the extremely big thing, they're going to have to be the first ones to pay attention to it.

I don't have much in the way of a solution here. I mostly wanted to point to this worry and spell it out more clearly so that those [...]

--- First published: June 20th, 2023 Source: https://forum.effectivealtruism.org/posts/KYApMdtPsveYPAoZk/longtermists-are-perceived-as-power-seeking --- Narrated by TYPE III AUDIO.
“We can all help solve funding constraints. What stops us?” by Luke Freeman
This post is a personal reflection that follows my journey to effective altruism, my experiences within it, the concerns I've developed along the way, and my hopes for addressing them. It culminates in my views on funding constraints — the role we can all play in solving them, and a key question I have for you all: What stops us?

My journey

While this starts with a reflection on my personal journey, I suspect it might feel familiar, it might strike a chord, at times it might rhyme with yours.

I was about eight years old when I was first confronted with the tragic reality that an overwhelming number of children my age were suffering and dying from preventable diseases and unjust economic conditions. It broke my heart. I knew that I had done nothing to deserve my incredibly privileged position of being born healthy to a loving, stable, middle-income family in Australia (a country with one of the highest standards of living).

Throughout my early years, I took many opportunities to do what I could to right this wrong. In school, that meant participating in fundraisers and advocacy. As a young professional, it meant living frugally but still giving a relatively meagre amount to help others. When I got my first stable job, I decided it was time to give 10% to help others... But when I calculated that that would be $5,000, this commitment began to feel like a pretty big deal. I wasn't going to back down, but I wanted to be more confident that it'd actually result in something good. I felt a responsibility to donate wisely.

Some Googling quickly led me to discover Giving What We Can, GiveWell, and Julia Wise's blog Giving Gladly. From this first introduction to what would soon be known as the effective altruism (EA) community, I found the information I needed to help guide me, and the inspiration I needed to help me follow through.

I also took several opportunities to pursue a more impact-oriented career, and even tried getting involved in politics. These attempts had varying success, but that was okay: I had one constant opportunity to help others by giving.

Around this time, the EA community started expanding its lines of reasoning beyond effective giving advice to other areas like careers and advocacy. I was thrilled to see this. We all have an opportunity to use various resources to make a dent in the world's problems, and the same community that had made good progress on philanthropy seemed to me well-positioned to make progress on other fronts too.

By 2016, effective altruism was well and truly “a thing”, and I discovered that there was an EA group and conference near me. So, I ventured out to actually meet some of these "effective altruism" people in person.

It hit me: I'd finally found "my people."

These were people who actually cared enough to put their money where their mouths were, to use the best tools they could find to make the biggest possible difference, and to advocate for others to join them. None of these things were easy, but these [...]

--- First published: June 18th, 2023 Source: https://forum.effectivealtruism.org/posts/WMdEJjLAHmdwyA5Wm/we-can-all-help-solve-funding-constraints-what-stops-us --- Narrated by TYPE III AUDIO.
“Third Wave Effective Altruism” by Ben_West
This is a frame that I have found useful and I'm sharing it in case others find it useful. EA has arguably gone through several waves:

Waves of EA (highly simplified model — see caveats below)

| | First wave | Second wave | Third wave |
| --- | --- | --- | --- |
| Time period | 2010[1]-2017[2] | 2017-2023 | 2023-?? |
| Primary constraint | Money | Talent | ??? |
| Primary call to action | Donations to effective charities | Career change | |
| Primary target audience | Middle-upper-class people | University students and early career professionals | |
| Flagship cause area | Global health and development | Longtermism | |
| Major hubs | Oxford > SF Bay > Berlin (?) | SF Bay > Oxford > London > DC > Boston | |

The boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – I first got involved in EA through animal welfare, which is not listed at all on this table, for example. But I think this is a decent first approximation.

It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a “wave” which is distinct from, say, mid-2022:

- Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
- AI safety becoming (relatively) mainstream

If I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.

It remains to be seen if public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained, though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.

Third wave EA: what are some possibilities?

Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.

| Example future scenario | Politics and Civil Society[4] | Forefront of weirdness | Return to non-AI causes |
| --- | --- | --- | --- |
| Description of the possible “third wave” — chosen to illustrate the breadth of possibilities | There is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI. | AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window. | AI safety becomes mainstream and "spins out" of EA. AI safety advocates leave EA, and vibes shift back to “first wave” EA. |
| Primary constraint | Political will | Research | Money |
| Primary call to action | Voting/advocacy | Research | Donations |
| Primary target audience | Voters in US/EU | Future researchers (university students) | Middle-upper class people |
| Flagship cause area | AI regulation | Digital sentience | Animal welfare |

Where do we go from here?

I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities. I'm particularly interested [...]
--- First published: June 17th, 2023 Source: https://forum.effectivealtruism.org/posts/XTBGAWAXR25atu39P/third-wave-effective-altruism --- Narrated by TYPE III AUDIO.
“EA organizations should have a transparent scope” by Joey
Executive summary

One of the biggest challenges of being in a community that really cares about counterfactuals is knowing where the most important gaps are and which areas are already effectively covered. This can be even more complex with meta organizations and funders that often have broad scopes that change over time. However, I think it is really important for every meta organization to clearly establish what it covers and thus where the gaps are; there is a substantial negative flowthrough effect when a community thinks an area is covered when it is not.

Why this matters

The topic of having a transparent scope recently came up at a conference as one of the top concerns with many EA meta orgs. Some negative effects that have been felt by the community are in large part due to unclear scopes, including:

- Organizations leaving a space thinking it's covered when it's not.
- Funders reducing funding in an area due to an assumption that someone else is covering it when there are still major gaps.
- Two organizations working on the same thing without knowledge of each other, due to both having a broad mandate, but simultaneously putting resources into an overlapping subcomponent of this mandate.
- Talent being turned off or feeling misled by EA when they think an org misportrays itself.
- Talent ‘dropping out of the funnel’ when they go to what they believe is the primary organization covering an area and find that what they care about isn’t covered, due to the organization claiming too broad a mandate.

There can also be a significant amount of general frustration when people think an organization will cover, or is covering, an area and the organization then fails to deliver (often on something it did not even plan on doing).

What do I mean when I say that organizations should have a transparent scope:

- Broadly, I mean organizations being publicly clear and specific about what they are planning to cover, both in terms of action and cause area.
- In a relevant timeframe: I think this is most important in the short term (e.g., there is a ton of value in an organization saying what it is going to cover over the next 12 months, and what it has covered over the last months).
- For the most important questions: This clarity needs to be both in priorities (e.g., cause prioritization) and planned actions (e.g., working with student chapters). This can include things the organization might like to do, or think it would be impactful to do, but is not doing due to capacity constraints or its current strategic direction.
- For the areas most likely for people to confuse: It is particularly important to provide clarity about things that people think one might be doing (for example, Charity Entrepreneurship probably doesn’t need to clarify that it doesn’t sell flowers, but should really be transparent about whether it plans to incubate projects in a certain cause area or not).

How to do this

When I have talked to organizations about this, I sometimes think that the “perfect” becomes the enemy of the good and they do not [...]

--- First published: June 14th, 2023 Source: https://forum.effectivealtruism.org/posts/mzzPMrBjGpra2JSDw/ea-organizations-should-have-a-transparent-scope --- Narrated by TYPE III AUDIO.
“Improving EA Communication Surrounding Disability” by MHR
Epistemic Status: Low-to-medium confidence, informed by my experience with having a disability as an EA. I think the included recommendations are reasonable best practices, but I’m uncertain as to whether they would make a tangible change to perceptions of the EA movement.

Summary

The EA movement has historically faced criticism from disability rights advocates, potentially reducing support for EA and limiting its ability to do good. This tension between EA and disability advocacy may be as much a matter of poor EA communication around issues of disability as a matter of fundamental philosophical disagreement. Changes to communications practices regarding disability might therefore deliver major benefits for relatively little effort. Particular recommendations for improving communications include:

- Avoiding unnecessarily presenting EA and disability advocacy as being in opposition
- Being careful to only use DALYs when appropriate and when properly contextualized
- Increasing the quantity and diversity of EA writing on disability

Introduction

The Effective Altruism movement has had a somewhat contentious relationship with the disability advocacy community. Disability advocates have critiqued EA via protests, articles, and social media posts, arguing that the movement is ableist, eugenicist, and/or insufficiently attentive to the needs of disabled individuals. Yet the EA community is often substantially more inclusive than society at large for people with many disabilities, through aspects such as availability of remote work, social acceptance of specialized dietary needs, and provision of information in a wide variety of formats. Moreover, while there are some areas in which EA’s typical consequentialism may have fundamental conflicts with theories of disability justice, these areas are likely much more limited than many would assume. In fact, since people with disabilities tend to be overrepresented among those living in extreme poverty and/or experiencing severe pain, typical EA approaches that prioritize these problems are likely to be substantially net beneficial to the lives of disabled individuals.

Given this context, I think it is likely that the conflict between disability advocates and effective altruists is as much a problem of poor EA communication as it is a problem of fundamental philosophical difference. This breakdown implies that conflicts between EAs and disability advocates might be substantially reduced via changes to EA communications practices. While changes to communication approaches carry some costs, I believe the benefits from improved communications around disability would probably outweigh them.

There are three potential areas in which I think the status quo hurts the EA movement. First of all, it likely drives off potential donors, employees, and advocates with disabilities, reducing the resources with which the EA movement is able to do good. Second, it may prevent dialogue between the EA and disability advocacy communities that might productively identify effective interventions focused on people with disabilities. Finally, it may reduce support for the EA movement among the wider community of people who care about the interests and concerns of the disabled community. In comparison to these harms, I think the modest efforts required to improve on current EA communications around disability issues are likely to be noticeably less costly.

In the next section, I identify three practical areas in which communications could likely be [...]
--- First published: June 13th, 2023 Source: https://forum.effectivealtruism.org/posts/iAsepse4jx6zLH4tZ/improving-ea-communication-surrounding-disability --- Narrated by TYPE III AUDIO.
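[Narration note: for listeners unfamiliar with the metric behind the second recommendation, the standard Global Burden of Disease formulation (summarized here for context; the post itself discusses when DALYs mislead) combines mortality and morbidity:]

```latex
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD},
\qquad \mathrm{YLL} = N \times L,
\qquad \mathrm{YLD} = P \times DW
```

[Here N is the number of deaths, L the standard life expectancy at the age of death, P the number of prevalent cases, and DW a disability weight between 0 and 1. The disability weight is the contested part: it encodes a judgment that a year lived with a given condition counts as some fraction of a year in full health, which is precisely the framing disability advocates object to when DALYs are presented without context.]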
“Why I spoke to TIME magazine, and My Experience as a Female AI Researcher in Silicon Valley [SA Sequence Intro, Advice, and AMA]” by Lucretia
Crossposted on Medium here. Twitter: @lucreti_a

[Image from Lore Olympus.]

Thank you to the supportive EA members who encouraged me to publicly share this difficult experience, to my friends and research collaborators for your kindness, and to the courageous women who helped me in writing this post, who I hope can someday speak publicly. To those who know me, please call me Lucretia.

This is a megapost. Each section has a distinct purpose and may evolve into its own standalone post. For the full picture, I recommend reading to the end. My cross-posted version on Medium is broken into sections for easier reading.

0. Overview

- Introduction. I was one of the women who spoke to TIME magazine about sexual harassment and abuse in EA. Here is my story without media distortions.
- Advice for Female Founders and AI Researchers in the Valley. Silicon Valley can be a brutal place for women. This is what I wish I knew five years ago.
- My Case Study. I am an AI researcher. I believe my AI alignment research career was needlessly encumbered by:
  - My experience with the sexually abusive red pill and pickup artist sphere, which entwined with a branch of AI safety in Cambridge, MA and Silicon Valley. I describe the unethical core of red pill ideology, including the running of “rape scripts.”
  - The recent retaliation by a Silicon Valley AI community to my report of harm. This community’s aggressive reaction showed many gender biases latent in AI culture.
- Systemic Sexual Violence in Silicon Valley. I believe the male-dominated environment, nepotistic connections to investor money, extreme power disparities between wealthy AI researchers and aspiring young women in the AI and startup sectors, hacker house party culture, psychedelics misused as date rape drugs, cults of personality, a substantial population of low-empathy, risk-seeking, and/or narcissistic men, and a lack of functional policing mechanisms make sexual violence a systemic problem in a critical X-risk industry.
- Why I Spoke to TIME. I address some misconceptions about the original TIME article on sexual harassment, and why I spoke to TIME in the first place.
- Helpful Books and Movies. I share learnings about sexual harassment and abuse after ~15 months of focusing on the problem, including my favorite books and movies about sexual harassment/abuse to flesh out more conceptual space. For all the seriousness of this post, these books and movies are entertaining, gorgeous, and healing!
- Future Sequences? Depending on the reactions to this post, I would love to write a Sequence on sexual harassment and abuse from first principles.
- Call to Action: Recovery and Litigation Funds. AGI should neither be built nor aligned in environments of deceit. We propose a call to action for a Recovery Fund and Sociological AI Alignment Fund / Litigation Fund to counteract the sexual predation Moloch in Silicon Valley, which is a sociological AI safety problem.
- Appendix:
  - Excerpts from red pill literature
  - Notes on Rape vs Consent Culture

1. Introduction

Some recent posts on the EA forum have thoughtfully and earnestly addressed sexual harassment and abuse. Thank you to the EA community for your insightful posts and comments, and for genuinely trying to address the problem, which made [...]

--- First published: June 11th, 2023 Source: https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female --- Narrated by TYPE III AUDIO.
Cause area report: Antimicrobial Resistance
This post is a summary of some of my work as a field strategy consultant at Schmidt Futures' Act 2 program, where I spoke with over a hundred experts and did a deep dive into antimicrobial resistance to find impactful investment opportunities within the cause area. The full report can be accessed here.

Antimicrobials, the medicines we use to fight infections, have played a foundational role in improving the length and quality of human life since penicillin and other antimicrobials were first developed in the early and mid 20th century. Antimicrobial resistance, or AMR, occurs when bacteria, viruses, fungi, and parasites evolve resistance to antimicrobials. As a result, antimicrobial medicines such as antibiotics and antifungals become ineffective and unable to fight infections in the body.

AMR is responsible for millions of deaths each year, more than HIV or malaria (ARC 2022). The AMR Visualisation Tool, produced by Oxford University and IHME, visualises IHME data which finds that 1.27 million deaths per year are directly attributable to bacterial resistance and 4.95 million deaths per year are associated with bacterial resistance.

Source: https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance

Narrated for the Effective Altruism Forum by TYPE III AUDIO. ---
“EA Strategy Fortnight (June 12-24)” by Ben_West
Tl;dr: I’m kicking off a push for public discussions about EA strategy that will be happening June 12-24. You’ll see new posts under this tag, and you can find details about people who’ve committed to participating and more below.

Motivation and what this is(n’t)

I feel (and, from conversations in person and seeing discussions on the Forum, think that I am not alone in feeling) like there’s been a dearth of public discussion about EA strategy recently, particularly from people in leadership positions at EA organizations. To help address this, I’m setting up an “EA strategy fortnight” — two weeks where we’ll put in extra energy to make those discussions happen. A set of folks have already volunteered to post thoughts about major strategic EA questions, like how centralized EA should be or current priorities for GH&W EA.

This event and these posts are generally intended to start discussion, rather than give the final word on any given subject. I expect that people participating in this event will also often disagree with each other, and participation in this shouldn’t imply an endorsement of anything or anyone in particular. I see this mostly as an experiment into whether having a simple “event” can cause people to publish more stuff. Please don't interpret any of these posts as something like an official consensus statement.

Some people have already agreed to participate

I reached out to people through a combination of a) thinking of people who had shared private strategy documents with me before that still had not been published, b) contacting leaders of EA organizations, and c) soliciting suggestions from others. About half of the people I contacted agreed to participate. I think you should view this as a convenience sample, heavily skewed towards the people who find writing Forum posts to be low cost. Also note that I contacted some of these people specifically because I disagree with them; no endorsement of these ideas is implied.

People who’ve already agreed to post stuff during this fortnight [in random order]:

- Habryka - How EAs and Rationalists turn crazy
- MaxDalton - In Praise of Praise
- MichaelA - Interim updates on the RP AI Governance & Strategy team
- William_MacAskill - Decision-making in EA
- Michelle_Hutchinson - TBD
- Ardenlk - Reallocating resources from EA per se to specific fields
- Ozzie Gooen - Centralize Organizations, Decentralize Power
- Julia_Wise - EA reform project updates
- Shakeel Hashim - EA Communications Updates
- Jakub Stencel - EA’s success no one cares about
- lincolnq - Why Altruists Can't Have Nice Things
- Ben_West and 2ndRichter - FTX’s impacts on EA brand and engagement with CEA projects
- jeffsebo and Sofia_Fogel - EA and the nature and value of digital minds
- Anonymous – Diseconomies of scale in community building
- Luke Freeman - TBD
- kuhanj - TBD
- Joey - The community wide advantages of having a transparent scope
- JamesSnowden - Current priorities for Open Philanthropy's Effective Altruism, Global Health and Wellbeing program
- Nicole_Ross - Crisis bootcamp: lessons learned and implications for EA
- Rob Gledhill - AIS vs EA groups for city and national groups
- Vaidehi Agarwalla - TBD
- Renan Araujo - Thoughts about AI safety field-building in LMICs

If you would like to participate

If you are able to pre-commit to writing a [...]

--- Source: https://forum.effectivealtruism.org/posts/ct3zLpD5FMwBwYCZ7/ea-strategy-fortnight-june-12-24 --- Narrated by TYPE III AUDIO.
“I made a news site based on prediction markets” by vandemonian
Introduction

“News through prediction markets”

The Base Rate Times is a nascent news site that incorporates prediction markets prominently into its coverage. Please see the current iteration: www.baseratetimes.com. Twitter: www.twitter.com/base_rate_times

What problem does it solve?

Forecasts are underutilized by the media

Prediction markets are more accurate than pundits, yet the media has made limited use of their forecasts. This is a big problem: one of the most rigorous information sources is being omitted from public discourse! The Base Rate Times creates prediction markets content, substituting for inferior news sources. This improves the epistemics of its audience.

Forecasts are dispersed and generally inconvenient to consume

Prediction markets are dispersed among many different platforms, fragmenting the information forecasters provide. For example, different platforms ask similar questions in different ways. Furthermore, platforms’ UX is oriented towards forecasters, not information consumers. Overall, trying to use prediction markets as a ‘news replacement’ is cumbersome. There is value in aggregating and curating forecasts from various platforms. We need engaging ways of sharing prediction markets’ insights. The Base Rate Times aims to make prediction markets easily digestible to the general public.

How does it work?

News media (emotive narrative) vs Base Rate Times (actionable odds)

For example, this is a real headline from a reputable newspaper: “Taiwan braces for China's fury over Pelosi visit”. Emotive and incendiary, it does not help you form an accurate model of the situation. By contrast, The Base Rate Times: “China-Taiwan conflict risk 14%, up 2x from 7% after Pelosi visit”. That's an actionable insight. It can inform your decision on whether to stay in Taiwan or to flee, for example.

News aggregation, summarizing prediction markets

Naturally, the probabilities in the example above come from prediction markets. The Base Rate Times presents what prediction markets are telling us about the news in an engaging way. Stories that shift market odds are highlighted. And if a seemingly important story doesn’t shift market odds, that also tells you something.

On The Base Rate Times, right now you can see the latest odds on:

- Putin staying in power
- Russian territorial gains in Ukraine
- Escalation risk of NATO involvement
- and more...

By glancing at a few charts, you can form a more accurate model (in less time) of Russia-Ukraine than by reading countless narrative-based news stories.

Inspiration

A key inspiration was Scott Alexander’s Prediction Market FAQ:

“I recently had to read many articles on Elon Musk’s takeover of Twitter, which all repeated that “rumors said” Twitter was about to go down because of his mass firing. Meanwhile, there were several prediction markets on whether this would happen, and they were all around 40%. If some journalist had thought to check the prediction markets and cite them in their article, they could have not only provided more value (a clear percent chance instead of just “there are some rumors saying this”), but also been right when everyone else was wrong.”

Also Scott’s 'Mantic Monday' posts and Zvi’s blog. This simple chart by @ClayGraubard was another inspiration. I wanted something like this, but for all major news stories. Couldn't find it, so I'm making it myself. (Clay is making geopolitics videos and podcasts now, check it out.)

Goals

Like 538, but for prediction markets

The Base Rate Times is a bet that forecasts [...]
--- Source: https://forum.effectivealtruism.org/posts/hChXEPPkDpiufCE4E/i-made-a-news-site-based-on-prediction-markets --- Narrated by TYPE III AUDIO.
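[Narration note: the aggregation step the post describes (same question, several platforms, one headline number) is simple enough to sketch. A minimal illustration, with hypothetical platform names and made-up probabilities rather than real API calls:]

```python
from statistics import mean

# Hypothetical latest probabilities for one question, keyed by platform.
# In a real pipeline these would come from each platform's API; the numbers
# here are placeholders, not live market data.
quotes = {
    "platform_a": 0.15,
    "platform_b": 0.13,
    "platform_c": 0.14,
}
previous = 0.07  # baseline probability before the news event

current = mean(quotes.values())
ratio = current / previous

# Prints e.g.: "Conflict risk 14%, up 2.0x from 7%"
print(f"Conflict risk {current:.0%}, up {ratio:.1f}x from {previous:.0%}")
```

[Most of the real work is editorial rather than computational: matching differently worded questions across platforms and deciding which odds shifts are newsworthy, which is exactly the curation the site is pitching.]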