The Nonlinear Library: EA Forum Daily

Description

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

Details

Language:

en-us

Release Date:

05/02/2022 17:19:04

Authors:

The Nonlinear Fund

Genres:

education

Episodes

    EA - EffectiveAltruismData.com is now a spreadsheet by Hamish Doodles

    Release Date: 7/23/2023

    Duration: 143 Mins

    Authors: Hamish Doodles

    Description: Link to original article. Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EffectiveAltruismData.com is now a spreadsheet, published by Hamish Doodles on July 23, 2023, on the Effective Altruism Forum.

    A few years ago I built EffectiveAltruismData.com (a screenshot appears in the original post). A few people told me they liked the web app. Some even said they found it useful, especially the bits that made the funding landscape more legible. But I never got around to automating the data scraping, and the website ended up hopelessly out of date. So I killed it. It recently occurred to me, though, that I could do the data scraping, data aggregation, and data visualisation all within Google Sheets. So with a bit of help from Chatty G, I put together a spreadsheet which:

    - Downloads the latest grant data from the Open Philanthropy website every 24 hours (via Google Apps Script).
    - Aggregates funding by cause area.
    - Aggregates funding by organisation.
    - Visualises all grant data in a pivot table that lets you expand/collapse by Cause Area, then Organization Name, then individual grants. (Note that expanding/collapsing counts as editing the spreadsheet, so you'll have to make a copy to do this.)

    You can also change the scale of the bar chart using the dropdown, and you can sort grants by size or by date using the "Sort Sheet Z to A" option on the Amount or Date columns. Here's a link to the spreadsheet. You can also find it at www.effectivealtruismdata.com.

    Other funding legibility projects: Here's another thing I made. It gives time-series and cumulative bar charts for funding based on funder and cause area. You can hover over points on the time series to get the total funding per cause/org per year. The data comes from this spreadsheet by TylerMaule. Another thing which may be of interest is openbook.fyi by Rachel Weinberg & Austin Chen, which lets you search and view individual grants from a range of EA-flavoured sources. Openbook gets its data from donations.vipulnaik.com by Vipul Naik. I'm currently working on another spreadsheet which scrapes, aggregates, and visualises all of Vipul Naik's data.

    Feedback & requests: I enjoy working on spreadsheets and data viz and stuff. Let me know if you can think of any other stuff in this area which would be useful. This is a joke. This is also a joke.

    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
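    For readers curious what the scrape-and-aggregate step could look like outside of Sheets, here is a minimal Python sketch of the same idea. It is not the author's Apps Script: the CSV URL and column names are assumptions for illustration, and pandas stands in for the spreadsheet's pivot table.

    ```python
    # Illustrative sketch of the spreadsheet's pipeline: fetch grant data,
    # then aggregate funding by cause area and by organisation.
    # NOTE: not the author's Apps Script. The CSV URL and the column names
    # ("Focus Area", "Organization Name", "Amount") are assumptions.
    import pandas as pd

    GRANTS_CSV_URL = "https://example.org/open-phil-grants.csv"  # hypothetical export URL

    def aggregate_grants(url: str = GRANTS_CSV_URL) -> dict[str, pd.Series]:
        grants = pd.read_csv(url)
        # Strip currency formatting like "$1,234,567" before summing.
        grants["Amount"] = (
            grants["Amount"].astype(str).str.replace(r"[\$,]", "", regex=True).astype(float)
        )
        return {
            "by_cause": grants.groupby("Focus Area")["Amount"].sum().sort_values(ascending=False),
            "by_org": grants.groupby("Organization Name")["Amount"].sum().sort_values(ascending=False),
        }

    if __name__ == "__main__":
        totals = aggregate_grants()
        print(totals["by_cause"].head(10))  # ten best-funded cause areas
    ```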

    Is Closed Captioned: No

    Explicit: No

    EA - Thoughts on yesterday's UN Security Council meeting on AI by Greg Colbourn

    Release Date: 7/22/2023

    Duration: 149 Mins

    Authors: Greg_Colbourn

    Description: Link to original article. Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on yesterday's UN Security Council meeting on AI, published by Greg Colbourn on July 22, 2023, on the Effective Altruism Forum.

    Firstly, it's encouraging that AI is being discussed as a threat at the highest global body dedicated to ensuring global peace and security. This seemed like a remote possibility just 4 months ago. However, throughout the meeting, (possibly near-term) extinction risk from uncontrollable superintelligent AI was the elephant in the room: it got ~1% of the air time, when it needs to be ~99%, given the venue and its power to stop it. Let's hope future meetings improve on this. Ultimately we need the UNSC to put together a global non-proliferation treaty on AGI if we are to stand a reasonable chance of making it out of this decade alive.

    There was plenty of mention of using AI for peacekeeping. However, this seems naive in light of the offence-defence asymmetry facilitated by generative AI (especially when it comes to threats like bio-terror/engineered pandemics and cybercrime/warfare). And in the limit of outsourcing intelligence gathering and strategy recommendations to AI (whilst still keeping a human in the loop), you get scenarios like this.

    Highlights: China mentioned a pause: "The international community needs to ... ensure that risks beyond human control don't occur. We need to strengthen the detection and evaluation of the entire lifecycle of AI, ensuring that mankind has the ability to press the pause button at critical moments." (Zhang Jun, representing China at the UN Security Council meeting on AI.) Mozambique mentioned the Sorcerer's Apprentice, human loss of control, recursive self-improvement, accidents, and catastrophic and existential risk: "In the event that credible evidence emerges indicating that AI poses an existential risk, it's crucial to negotiate an intergovernmental treaty to govern and monitor its use." (Manuel Gonçalves, Deputy Minister for Foreign Affairs of Mozambique, at the UN Security Council meeting on AI.)

    (A bunch of us protesting about this outside the UK Foreign Office last week.) (PauseAI's comments on the meeting on Twitter.) (Discussion with Jack Clark on Twitter re his lack of mention of x-risk. Note that the post-war atomic settlement, the Baruch Plan, would probably have been quite different if the first nuclear detonation had been assessed to have a significant chance of igniting the entire atmosphere!) (My Tweet version of this post. I'm Tweeting more as I think it's time for mass public engagement on AGI x-risk.)

    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    Is Closed Captioned: No

    Explicit: No

    EA - EA EDA: What do Forum Topics tell us about changes in EA? by JWS

    Release Date: 7/15/2023

    Duration: 760 Mins

    Authors: JWS

    Description: Link to original article. Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA EDA: What do Forum Topics tell us about changes in EA?, published by JWS on July 15, 2023, on the Effective Altruism Forum.

    tl;dr2: Data on EA Forum posts and topics doesn't show clear 'waves' of EA.

    tl;dr: I used the Forum API to collect data on the trends of EA Forum topics over time. While this analysis is by no means definitive, it doesn't support the simple narrative that there was a golden age of EA that has been abandoned for a much worse one. There has been a rise in AI Safety posts, but that has been fairly recent (within the last ~2 years).

    1. Introduction

    I really liked Ben West's recent post about 'Third Wave Effective Altruism', especially for its historical reflection on what First and Second Wave EA looked like. This characterisation of EA's history seemed to strike a chord with many Forum users, and has been reflected in recent critical coverage of EA that claims the movement has abandoned its well-intentioned roots (e.g. donations for bed nets) and decided to focus fully on bizarre risks to save a distant, hypothetical future. I've always been a bit sceptical of how common this sort of framing seems to be, especially since the best evidence we have on overall EA funding shows that most of it is still going to Global Health areas. As something of a (data) scientist myself, I thought I'd turn to one of the primary sources of information on what EAs think to shed some more light on this problem: the Forum itself! This post is a write-up of the initial data collection and analysis that followed. It's not meant to be the definitive word on how either EA or use of the EA Forum has changed over time. Instead, I hope it will challenge some assumptions and intuitions, prompt some interesting discussion, and hopefully lead to future posts in a similar direction, either from myself or others.

    2. Methodology

    (Feel free to skip this section if you're not interested in all the caveats.) You may not be aware, but the Forum has an API! While I couldn't find clear documentation on how to use it or a fully defined schema, people have used it in the past for interesting projects, and some have very kindly shared their results and methods. I found the following three especially useful (the first two have linked GitHubs with their code): The Tree of Tags by Filip Sondej; Effective Altruism Data from Hamish; and this LessWrong tutorial from Issa Rice. With these examples to help me, I created my own code to get every post made on the EA Forum to date (excluding those that have been deleted). There are various caveats to make about the data representation and data quality, including:

    - I extracted the data on July 7th, so any totals (e.g. number of posts, post score, etc.) and other details are only correct as of that date.
    - I could only extract the postedAt date, which isn't always when the post in question was actually posted. A case in point: I'm pretty sure this post wasn't actually posted in 1972. However, it's the best data I could find, so hopefully for the vast majority of posts the display date is the posted date.
    - In looking for a starting point for the data, there was a discontinuity between August and September 2014, but the data was a lot more continuous after that. I analyse the data in terms of monthly totals, so I threw out the one week of data I had for July. The final dataset is therefore 106 months, from September 2014 to June 2023 (inclusive).
    - There are around ~950 distinct tags/topics in my data, which are far too many to plot concisely and share useful information. I've decided to take the top 50 topics in terms of times used, which collectively account for 56% of all Forum tags and 92% of posts in the above time period.
    - I only extracted the first listed author of a post - however, only 1 graph shared below relies on a user-level aggregat...
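    Since the post leans on the Forum's GraphQL API, a short sketch may help readers who want to reproduce the data pull. This is not JWS's code: the endpoint is the Forum's public GraphQL URL, but the query shape and field names are assumptions drawn from the kind of community tutorials cited above, so check them against the live schema.

    ```python
    # A hedged sketch: query the EA Forum's GraphQL API for post dates,
    # then bucket posts by month, in the spirit of the post's methodology.
    # The query shape and field names are assumptions; verify against the
    # live schema before relying on them.
    from collections import Counter

    import requests

    ENDPOINT = "https://forum.effectivealtruism.org/graphql"

    QUERY = """
    {
      posts(input: {terms: {view: "new", limit: 50}}) {
        results {
          title
          postedAt
        }
      }
    }
    """

    def monthly_post_counts() -> Counter:
        resp = requests.post(ENDPOINT, json={"query": QUERY}, timeout=30)
        resp.raise_for_status()
        results = resp.json()["data"]["posts"]["results"]
        # postedAt is an ISO timestamp, so its "YYYY-MM" prefix is the month.
        return Counter(p["postedAt"][:7] for p in results if p.get("postedAt"))

    if __name__ == "__main__":
        for month, n in sorted(monthly_post_counts().items()):
            print(month, n)
    ```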

    Is Closed Captioned: No

    Explicit: No

    EA - Consider Earning Less by ElliotJDavies

    Release Date: 7/1/2023

    Duration: 156 Mins

    Authors: ElliotJDavies

    Description: Link to original article. Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider Earning Less, published by ElliotJDavies on July 1, 2023, on the Effective Altruism Forum.

    This post is aimed at those working in jobs funded by EA donors who might be interested in voluntarily earning less. It isn't aimed at influencing pay scales at organisations, or at those not interested in earning less.

    When the Future Fund was founded in 2022, there was a simultaneous upwards pressure on both ambitiousness and net earnings in the wider EA community. The pressure to be ambitious resulted in EAs really considering the opportunity cost of key decisions. Meanwhile, the discussions around why EAs should consider ordering food or investing in a new laptop pointed towards a common solution: EAs in direct work earning more. The funding situation has shifted significantly since then, as has the supply-demand curve for EA jobs. This should put a deflationary pressure on EAs' salaries, but I'd argue we largely haven't seen this effect, likely because people's salaries are "sticky". One result of this is that there are a lot of impactful projects which are unable to find funding right now, and in a similar vein, there are a lot of productive potential employees who are unable to get hired right now. There is even a significant proportion of employees who will be made redundant. This seems a shame, since there's no good reason for salaries to be sticky. It seems especially bad if we do in fact see significant redundancies, since under a "veil of ignorance" the optimal behaviour would be to voluntarily lower your salary (assuming you could get your colleagues to do the same). Members of German labour unions quite commonly do something similar (Kurzarbeit) during economic downturns, to avoid layoffs and enable faster growth during an upturn.

    Some reasons you might want to earn less:

    - You want to do as much good as possible, and suspect your organisation will do more good if it has more money at hand.
    - Your organisation is likely to make redundancies, which could include you.
    - You have short timelines, and you suspect that by earning less, more people could work on alignment.
    - You can consider your voluntary pay cut a donation, which you can report on your GWWC account. (The great thing about pay-cut donations is you essentially get a 100% tax refund, which is particularly nice if you live somewhere with high income tax.)

    Some reasons you may not want to earn less:

    - It would cause you financial hardship.
    - You would experience a significant drop in productivity.
    - You suspect it would promote an unhealthy culture in your organisation.
    - You expect you're much better than the next-best candidate, and you'd be less likely to work in a high-impact role if you had to earn less.

    Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
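    The "100% tax refund" remark in the first list above is just arithmetic, and a small worked example may make it concrete. The 40% marginal tax rate is an assumed figure for illustration, and country-specific donation tax relief is ignored:

    ```python
    # Worked example of the "100% tax refund" point: a pay cut delivers the
    # full gross amount to the organisation, while the employee only forgoes
    # the post-tax portion of that salary. The 40% marginal rate is an
    # assumption for the example, not a figure from the post.
    MARGINAL_TAX_RATE = 0.40

    def net_cost_to_employee(amount_to_org: float) -> dict[str, float]:
        """Compare what it costs the employee to get `amount_to_org` to the org."""
        pay_cut_cost = amount_to_org * (1 - MARGINAL_TAX_RATE)  # forgone net pay
        donation_cost = amount_to_org  # paid out of already-taxed income
        return {"pay_cut": pay_cut_cost, "post_tax_donation": donation_cost}

    print(net_cost_to_employee(5000.0))
    # {'pay_cut': 3000.0, 'post_tax_donation': 5000.0}
    ```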

    Is Closed Captioned: No

    Explicit: No

    EA - SoGive rates Open-Phil-funded charity NTI “too rich” by Sanjay

    Release Date: 6/18/2023

    Duration: 1493 Mins

    Authors: Sanjay

    Description: Link to original article. Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SoGive rates Open-Phil-funded charity NTI "too rich", published by Sanjay on June 18, 2023, on the Effective Altruism Forum.

    Exec summary:

    - Under SoGive's methodology, charities holding more than 1.5 years' expenditure are typically rated "too rich", in the absence of a strong reason to judge otherwise. (more)
    - Our level of confidence in the appropriateness of this policy depends on fundamental ethical considerations, and could be "clearly (c.95%) very well justified" or "c.50% to c.90% confident in this policy, depending on the charity". (more)
    - We understand that the Nuclear Threat Initiative (NTI) holds more than 4 years of spend (c.$85m), as at the most recently published Form 990, well in excess of our warning threshold. (more)
    - We are now around 90% confident that NTI's reserves are well in excess of our warning threshold, indeed more than 3x annual spend, although there are some caveats. (more)
    - Our conversation with NTI about this provides little reason to believe that we should deviate from our default rating of "too rich". (more)
    - It is possible that NTI could show us forecasts of their future income and spend that might make us less concerned about the value of donations to NTI, although this seems unlikely since they have already indicated that they do not wish to share this. (more)
    - We do not typically recommend that donors donate to NTI. However, we do think it's valuable for donors to communicate that they are interested in supporting NTI's work but are avoiding donating because of its high reserves. (more)

    Although this post is primarily to help donors decide whether to donate to NTI, readers may find it interesting for understanding SoGive's approach to charities which are too rich, and how this interacts with different ethical systems. We thank NTI for agreeing to discuss this with us, knowing that there was a good chance we might publish something on the back of the discussion. We showed them a draft of this post before publishing; they indicated that they disagree with the premise of the piece, but declined to indicate what specifically they disagreed with.

    0. Intent of this post

    Although this post highlights the fact that NTI has received funding from Open Philanthropy (Open Phil), the aim is not to put Open Philanthropy on the spot or demand any response from them. Rather, we have argued that it is often a good idea for donors to "coattail" (i.e. copy) donations made by Open Phil. For donors doing this, including donors supported by SoGive, we think it's useful to know which Open Phil grantees we might give lower or higher priority to.

    1. Background on SoGive's methodology for assessing reserves

    The SoGive ratings scale has a category called "too rich". It is used for charities which we deem to have a large enough amount of money that it no longer makes sense for donors to provide them with funds. We set this threshold at 18 months of spend: if a charity's unrestricted reserves are one and a half times as big as its annual spend, then we typically deem it "too rich". To be clear, this allows the charity carte blanche to hold as much money as it likes, as long as it indicates that it has a non-binding plan for that money. So, having generously ignored the designated reserves, we then notionally apply the (normally severe) stress of all the income disappearing overnight. Our threshold considers the scenario where the charity has so much in reserves that it could go for one and a half years without even having to take management actions such as downsizing its activities. In this scenario, we think it is likely better for donors to send their donations elsewhere and allow the charity to use up its reserves.

    Originally we considered a different, possibly more lenient policy. We considered that charities should be considered too rich if they...
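    The reserves rule described in section 1 reduces to a single ratio check; here is a minimal sketch of that rule as stated. The function and the NTI spend figure are illustrative, not SoGive's actual tooling:

    ```python
    # Minimal encoding of the default reserves rule described above:
    # unrestricted reserves above 1.5x annual spend => rated "too rich"
    # absent a strong reason to judge otherwise. Names and figures are
    # illustrative, not SoGive's actual tooling.
    RESERVES_THRESHOLD_YEARS = 1.5

    def default_reserves_rating(unrestricted_reserves: float, annual_spend: float) -> str:
        years_of_runway = unrestricted_reserves / annual_spend
        return "too rich" if years_of_runway > RESERVES_THRESHOLD_YEARS else "not flagged"

    # Assumed annual spend of c.$20m, consistent with the post's ">4 years
    # of spend" at reserves of c.$85m.
    print(default_reserves_rating(unrestricted_reserves=85e6, annual_spend=20e6))  # too rich
    ```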

    Is Closed Captioned: No

    Explicit: No

Similar Podcasts

    The Nonlinear Library

    Release Date: 10/7/2021

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Section

    Release Date: 2/10/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong

    Release Date: 3/3/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Daily

    Release Date: 5/2/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Weekly

    Release Date: 5/2/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: EA Forum Weekly

    Release Date: 5/2/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Daily

    Release Date: 5/2/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: LessWrong Weekly

    Release Date: 5/2/2022

    Authors: The Nonlinear Fund

    Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

    Explicit: No

    The Nonlinear Library: Alignment Forum Top Posts

    Release Date: 2/10/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    The Nonlinear Library: LessWrong Top Posts

    Release Date: 2/15/2022

    Authors: The Nonlinear Fund

    Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.

    Explicit: No

    Effective Altruism Forum Podcast

    Release Date: 7/17/2021

    Authors: Garrett Baker

    Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.

    Explicit: No
