EA - Current plans as the incoming director of the Global Priorities Institute by Eva
Link to original article: https://forum.effectivealtruism.org/posts/sSGdKNPDEupfcoHNN/current-plans-as-the-incoming-director-of-the-global

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Current plans as the incoming director of the Global Priorities Institute, published by Eva on April 26, 2023 on The Effective Altruism Forum.

Cross-posted from my blog. I am taking leave from the University of Toronto to serve as the Director of the Global Priorities Institute (GPI) at the University of Oxford. I can't express enough gratitude to the University of Toronto for enabling this. (I'll be back in the fall to fulfill my teaching obligations, though - keep inviting me to seminars and such!)

GPI is an interdisciplinary research institute focused on academic research that informs decision-makers on how to do good more effectively. In its first few years, under the leadership of its founding director, Hilary Greaves, GPI created and grew a community of academics in philosophy and economics interested in global priorities research. I am excited to build on this strong foundation and, in particular, to further develop the economics side.

There are several areas I would like to focus on while at GPI. The items below reflect my current views, though I expect these views to be refined over time. They are not an exhaustive list, but they are things I would like GPI to do more of on the margin.

1) Research on decision-making under uncertainty

There is a lot of uncertainty in estimates of the effects of various actions. My views here are coloured by my past work. In the early 2010s, I tried to compile estimates of the effects of popular development interventions such as insecticide-treated bed nets for malaria, deworming drugs, and unconditional cash transfers. My initial thought was that by synthesizing the evidence, I'd be able to say something more conclusive about "the best" intervention for a given outcome. Unfortunately, I found that results varied, a lot (you can read more about it in my JEEA paper). If it's really hard to predict effects in global development, which is a very well-studied area, it would seem even harder to know what to do in other areas with less evidence. Yet decisions still have to be made. One of the core areas GPI has focused on in the past is decision-making under uncertainty, and I expect that to continue to be a priority research area. Some work on robustness might also fall under this category.

2) Increasing empirical research

GPI is an interdisciplinary institute combining philosophy and economics. To date, the economics side has largely focused on theoretical issues, but I think it's important for there to be careful, rigorous empirical work at GPI as well. There are testable hypotheses relevant to global priorities research. Many economists interested in global priorities research come from applied fields like development economics, and there's a talented pool of people who can do empirical work on, e.g., encouraging better uptake of evidence or forecasting. There's simply a lot to be done here, and I look forward to working with colleagues like Julian Jamison (on leave from Exeter), Benjamin Tereick, and Mattie Toma (visiting from Warwick Business School), among many others.

3) Expanding GPI’s network in economics

GPI has an existing program for senior research affiliates based at other institutions, but I think a lot more can be done with it, especially on the economics side. I'm still exploring the right structures, but suffice it to say: if you are an academic economist interested in global priorities research, please do get in touch. I am envisioning a network of loosely affiliated individuals in core fields of interest who would be sent notifications about research and funding opportunities. There may also be the occasional workshop or conference invitation.

4) Exploring expanding to other fields and topics

There are a number of topics that appear relevant to gl...
First published: 04/26/2023
Genres: education
Duration: 5 minutes
Parent Podcast: The Nonlinear Library: EA Forum Weekly
Similar Episodes
AMA: Paul Christiano, alignment researcher by Paul Christiano
Release Date: 12/06/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AI alignment landscape by Paul Christiano
Release Date: 11/19/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. This is a linkpost for Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
What is the alternative to intent alignment called? Q by Richard Ngo
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is the alternative to intent alignment called? Q, published by Richard Ngo on the AI Alignment Forum. Paul defines intent alignment of an AI A to a human H as the criterion that A is trying to do what H wants it to do. What term do people use for the definition of alignment in which A is trying to achieve H's goals (whether or not H intends for A to achieve H's goals)? Secondly, this seems to basically map on to the distinction between an aligned genie and an aligned sovereign. Is this a fair characterisation? (Intent alignment definition from) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Similar Podcasts
The Nonlinear Library
Release Date: 10/07/2021
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Section
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong
Release Date: 03/03/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Top Posts
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Nonlinear Library: LessWrong Top Posts
Release Date: 02/15/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
Effective Altruism Forum Podcast
Release Date: 07/17/2021
Authors: Garrett Baker
Description: I (and hopefully many others soon) read particularly interesting or impactful posts from the EA forum.
Explicit: No