LW - Lessons On How To Get Things Right On The First Try by johnswentworth
<a href="https://www.lesswrong.com/posts/f3kM7NM5eGMTp3KtZ/lessons-on-how-to-get-things-right-on-the-first-try">Link to original article</a><br/><br/>Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons On How To Get Things Right On The First Try, published by johnswentworth on June 19, 2023 on LessWrong.

This post is based on several true stories, from a workshop which John has run a few times over the past year.

John: Welcome to the Ball -> Cup workshop! Your task for today is simple: I'm going to roll this metal ball down this Hot Wheels ramp and off the edge. Your job is to tell me how far from the bottom of the ramp to place a cup on the floor, such that the ball lands in the cup. Oh, and you only get one try.

General notes: I won't try to be tricky with this exercise. You are welcome to make whatever measurements you want of the ball, ramp, etc. You can even do partial runs, e.g. roll the ball down the ramp and stop it at the bottom, or throw the ball through the air. But you only get one full end-to-end run, and anything too close to an end-to-end run is discouraged. After all, in the AI situation for which the exercise is a metaphor, we don't know exactly when something might foom; we want elbow room.

That's it! Good luck, and let me know when you're ready to give it a shot.

[At this point readers may wish to stop and consider the problem themselves.]

Alison: Let's get that ball in that cup. It looks like this is probably supposed to be a basic physics kind of problem... but there's got to be some kind of twist, or else why would he be having us do it? Maybe the ball is surprisingly light... or maybe the camera angle is misleading and we are supposed to think of something wacky like that?

The Unnoticed Observer: Muahahaha.

Alison: That seems... hard. I'll just start with the basic physics thing, and if I run out of time before I can consider the wacky stuff, so be it. So I should probably split this problem into two parts. The part where the ball arcs through the air once off the table is pretty easy.

The Unnoticed: True in this case, but how would you notice if it were false? What evidence have you seen?

Alison: ...but the trouble is getting the exact velocity. What information do I have? Well, I can ask whatever I want, so I should be able to get all the parameters I need for the standard equations. Let's make a shopping list: I want the starting height of the ball on the ramp (from the table), the mass of the ball, the height of the ramp off the table from multiple points along it (to estimate the curvature), uhhh... oh shit, maybe the bendiness matters! That seems really tricky. I'll look at that first. Hey, John, can you poke the ramp a bit to demonstrate how much it flexes?

John pokes at the ramp, and the ramp bends.

Alison: Well, it did flex, but... it can't have that much of an effect.

The Unnoticed: False in this case. Such is the danger of guessing without checking.

Alison: Calculating the effect of the ramp's bendiness seems unreasonably difficult, and this workshop is only meant to take an hour or so, so let's forget that.

The Unnoticed: I am reminded of a parable about a quarter and a streetlight.

Alison: On to curve estimation!

The Unnoticed: Why on earth is she estimating the ramp's curve anyway?

Alison: ...Well, I don't actually know how to do much better than the linear approximation I got from the direct measurements. I guess I can treat part of the ramp as linear and then the end part as part of a circle. That will probably be good enough. Ooh, if I take a frame from the video, I can just directly measure the radius of the best-fit circle for that arc! Okay, now that I've got that... well, I guess it's time to look up how to do these physics problems; guess I'm rustier than I thought. I'll go do that now.

Arrrgh, okay, I didn't need to do any of that curve stuff after all; I just needed to do some potential/kinetic energy calculations (ignoring friction and air resistance, etc.) and that's it! I should have figured it wouldn't be that hard, this is just a workshop ...
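Alison's final approach can be sketched in a few lines. Here is a minimal, hedged sketch under the same assumptions she names (no friction, no air resistance, no ramp flex), plus one she doesn't: the ball is a uniform solid sphere. Energy conservation converts the ball's drop along the ramp into launch speed; projectile motion handles the arc off the table. The function name and parameters are illustrative, not from the post.

```python
import math

def landing_distance(ramp_drop, table_height, g=9.81, rolling=True):
    """Horizontal distance from the table edge to where the ball lands.

    ramp_drop: vertical drop of the ball along the ramp (m)
    table_height: height of the table edge above the floor (m)
    rolling: if True, treat the ball as a uniform solid sphere rolling
             without slipping (2/7 of its energy goes into rotation);
             if False, treat it as sliding frictionlessly.
    """
    # Energy conservation: m*g*h = (1/2)*m*v^2 [+ (1/2)*I*w^2 if rolling]
    if rolling:
        v = math.sqrt(10 * g * ramp_drop / 7)
    else:
        v = math.sqrt(2 * g * ramp_drop)
    # Projectile motion: the ball leaves the table edge horizontally,
    # falls table_height in time t, and travels v*t sideways.
    t = math.sqrt(2 * table_height / g)
    return v * t
```

Two things the algebra makes visible: g cancels entirely for a horizontal launch (the sliding case reduces to 2*sqrt(ramp_drop*table_height)), and a rolling ball lands noticeably short of a sliding one, one of several ways a "simple" first-try estimate can quietly be off.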
First published
06/20/2023
Genres:
education
Duration
14 minutes
Parent Podcast
The Nonlinear Library: LessWrong Weekly
Similar Episodes
Announcing AlignmentForum.org Beta by Raymond Arnold
Release Date: 12/03/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing AlignmentForum.org Beta, published by Raymond Arnold on the AI Alignment Forum. We've just launched the beta for AlignmentForum.org. Much of the value of LessWrong has come from the development of technical research on AI Alignment. In particular, having those discussions be in an accessible place has allowed newcomers to get up to speed and involved. But the alignment research community has at least some needs that are best met with a semi-private forum. For the past few years, agentfoundations.org has served as a space for highly technical discussion of AI safety. But some aspects of the site design have made it a bit difficult to maintain, and harder to onboard new researchers. Meanwhile, as the AI landscape has shifted, it seemed valuable to expand the scope of the site. Agent Foundations is one particular paradigm with respect to AGI alignment, and it seemed important for researchers in other paradigms to be in communication with each other. So for several months, the LessWrong and AgentFoundations teams have been discussing the possibility of using the LW codebase as the basis for a new alignment forum. Over the past couple weeks we've gotten ready for a closed beta test, both to iron out bugs and (more importantly) get feedback from researchers on whether the overall approach makes sense. The current features of the Alignment Forum (subject to change) are: A small number of admins can invite new members, granting them posting and commenting permissions. This will be the case during the beta - the exact mechanism of curation after launch is still under discussion. When a researcher posts on AlignmentForum, the post is shared with LessWrong. On LessWrong, anyone can comment. On AlignmentForum, only AF members can comment. (AF comments are also crossposted to LW). 
The intent is for AF members to have a focused, technical discussion, while still allowing newcomers to LessWrong to see and discuss what's going on. AlignmentForum posts and comments on LW will be marked as such. AF members will have a separate karma total for AlignmentForum (so AF karma will more closely represent what technical researchers think about a given topic). On AlignmentForum, only AF Karma is visible. (note: not currently implemented but will be by end of day) On LessWrong, AF Karma will be displayed (smaller) alongside regular karma. If a commenter on LessWrong is making particularly good contributions to an AF discussion, an AF Admin can tag the comment as an AF comment, which will be visible on the AlignmentForum. The LessWrong user will then have voting privileges (but not necessarily posting privileges), allowing them to start to accrue AF karma, and to vote on AF comments and threads. We’ve currently copied over some LessWrong posts that seemed like a good fit, and invited a few people to write posts today. (These don’t necessarily represent the longterm vision of the site, but seemed like a good way to begin the beta test) This is a fairly major experiment, and we’re interested in feedback both from AI alignment researchers (who we’ll be reaching out to more individually in the next two weeks) and LessWrong users, about the overall approach and the integration with LessWrong. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AMA on EA Forum: Ajeya Cotra, researcher at Open Phil by Ajeya Cotra
Release Date: 11/17/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA on EA Forum: Ajeya Cotra, researcher at Open Phil, published by Ajeya Cotra on the AI Alignment Forum. This is a linkpost for Hi all, I'm Ajeya, and I'll be doing an AMA on the EA Forum (this is a linkpost for my announcement there). I would love to get questions from LessWrong and Alignment Forum users as well -- please head on over if you have any questions for me! I’ll plan to start answering questions Monday Feb 1 at 10 AM Pacific. I will be blocking off much of Monday and Tuesday for question-answering, and may continue to answer a few more questions through the week if there are ones left, though I might not get to everything. About me: I’m a Senior Research Analyst at Open Philanthropy, where I focus on cause prioritization and AI. 80,000 Hours released a podcast episode with me last week discussing some of my work, and last September I put out a draft report on AI timelines which is discussed in the podcast. Currently, I’m trying to think about AI threat models and how much x-risk reduction we could expect the “last long-termist dollar” to buy. I joined Open Phil in the summer of 2016, and before that I was a student at UC Berkeley, where I studied computer science, co-ran the Effective Altruists of Berkeley student group, and taught a student-run course on EA. I’m most excited about answering questions related to AI timelines, AI risk more broadly, and cause prioritization, but feel free to ask me anything! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AMA: Paul Christiano, alignment researcher by Paul Christiano
Release Date: 12/06/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Paul Christiano, alignment researcher, published by Paul Christiano on the AI Alignment Forum. I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1). If you want to ask something just post a top-level comment; I'll spend at least a day answering questions. You can find some background about me here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
AI alignment landscape by Paul Christiano
Release Date: 11/19/2021
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment landscape, published by Paul Christiano on the AI Alignment Forum. Here (link) is a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions. Here is the main overview slide from the talk: The highlighted boxes are where I spend most of my time. Here are the full slides from the talk. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Explicit: No
Similar Podcasts
The Nonlinear Library: LessWrong
Release Date: 03/03/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library
Release Date: 10/07/2021
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Section
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: EA Forum Weekly
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: Alignment Forum Daily
Release Date: 05/02/2022
Authors: The Nonlinear Fund
Description: The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Explicit: No
The Nonlinear Library: LessWrong Top Posts
Release Date: 02/15/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Nonlinear Library: Alignment Forum Top Posts
Release Date: 02/10/2022
Authors: The Nonlinear Fund
Description: Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
Explicit: No
The Library Laura Podcast
Release Date: 09/25/2020
Authors: Library Laura
Description: The Library Laura Podcast brings you your weekly dose of book recommendations, library love, and literary enthusiasm.
Explicit: No