EA - Launching the meta charity funding circle (MCF): Apply for funding or join as a donor! by Joey
<a href="https://forum.effectivealtruism.org/posts/5WLGmCg7vSfXeqSWC/launching-the-meta-charity-funding-circle-mcf-apply-for">Link to original article</a><br/><br/>Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching the meta charity funding circle (MCF): Apply for funding or join as a donor!, published by Joey on July 26, 2023 on The Effective Altruism Forum. Summary We are launching the Meta Charity Funders, a growing network of donors sharing knowledge and discussing funding opportunities in the EA meta space. Apply for funding by August 27th or join the circle as a donor. See below or visit our website to learn more! If you are doing EA-aligned "meta" work, and have not received substantial funding for several years, you might be worried about funding. Over the past 10 years, Open Philanthropy and EA Funds comprised a large percent of total meta funding and are far from independent of each other. This lack of diversity means potentially effective projects outside their priorities often struggle to stay afloat or scale, and the beliefs of just a few grant-makers can massively shape the EA movement's trajectory. It can be difficult for funders within meta as well. Individual donors often don't know where to give if they don't share EA Funds' approach. Thorough vetting is scarce and expensive, with only a handful of grant-makers deploying tens of millions per year in meta grants, resulting in sub-optimal allocations. This is why we are launching the Meta Charity Funders, a growing network of donors sharing knowledge, discussing funding opportunities, and running joint open grant rounds in the EA meta space. We believe many charitable projects create a huge impact by working at one level removed from direct impact to instead enhance the impact of others. Often these projects cut across causes and don't fit neatly into a box, thus being neglected by funders. Well known examples of meta organizations include charity evaluators like GiveWell, incubators like Charity Entrepreneurship, cause prioritization research organizations like Rethink Priorities, or field-building projects promoting effective giving or impactful careers. Grantees: Apply to many HNW donors at once - 1st round closes August 27. We are open to funding meta work across a range of causes, organizational stages, strategies, etc. We are most interested in applications that have not already been substantially supported by similar actors such as EA Funds or Open Philanthropy, though we will still consider these. We expect most of our grants to range from $10,000 to $500,000 and consider grants to both individuals and organizations. We expect our first round to be between $500,000 and $1.5m of total funding. Please lean in favor of applying if you are unsure if you would be a good fit! Donors: Join us! Find neglected opportunities, get help with ops and vetting, and give on your own terms. People who are unable to commit to regular meetings are still encouraged to apply and may be invited to our Slack and email list and gain access to our grant opportunities database. Meta Charity Funding Circle is a project of Charity Entrepreneurship and Impactful Grantmaking. It is organized by this post's authors: Gage Weston, Vilhelm Skoglund, and Joey Savoie. Our members are anonymous. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
First published
07/26/2023
Genres:
education
Summary
The Meta Charity Funders is a new network of donors sharing knowledge, discussing funding opportunities, and running joint open grant rounds in the EA meta space. Meta projects can apply for funding by August 27th, and prospective donors can apply to join the circle.
Duration
3 hours and 4 minutes
Parent Podcast
The Nonlinear Library: EA Forum Weekly
Similar Episodes
AMA: Paul Christiano, alignment researcher by Paul Christiano
Release Date: 12/06/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AI alignment landscape by Paul Christiano
Release Date: 11/19/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. This is a linkpost. Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
What is the alternative to intent alignment called? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Similar Podcasts
The Nonlinear Library
Release Date: 10/07/2021
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Section
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong
Release Date: 03/03/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Top Posts
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Nonlinear Library: LessWrong Top Posts
Release Date: 02/15/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
Effective Altruism Forum Podcast
Release Date: 07/17/2021
Authors: Garrett Baker
Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.
Explicit: No