"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16

EPISODE · Feb 21, 2024 · 44 MIN


From For Humanity: An AI Risk Podcast · hosted by The AI Risk Network · AI Safety

In Episode #16, "AI Risk Denier Down," things get weird. This show did not have to be like this.

Our guest is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:

https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...

Tim was not prepared to discuss this work, which is when things started to go off the rails.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

MY QUESTIONS FOR TIM (We didn't even get halfway through, lol. YouTube won't let me post all of them, so I'm just including the questions about the second essay.)

OK, let's get into your second essay, "Why I'm not afraid of superintelligent AI taking over the world," from 11/15/23.

- You cite chess as a striking example of why AI will not take over the world, but I'd like to talk about AI safety researcher Steve Omohundro's take on chess.
- He says that if you asked an unaligned AGI to get better at chess, it would first break into other servers to steal computing power so it could improve at chess. Then, when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you. Where is he wrong?
- You wrote: "Think about a hypothetical graduate student. Let's say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, but they don't contain very much knowledge she doesn't already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they 'run out of data,' but because they reached the frontiers of human knowledge."
- Here you seem to assume that a single human can master all the knowledge in a subject area better than any AI, because you seem to believe that one human can hold ALL of the knowledge available on a given subject. This is ludicrous to me. You think humans are far too special.
- AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE.
- How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
- Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
- Can a human who has read 1,000 books in one area really compete on knowledge with an AGI that has read millions of books in thousands of areas?
- You wrote: "AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It's locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world."
- Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special?
- Does this claim rest on the security protocols of the big AI companies?
- Security protocols, even at OpenAI, are seen as highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into anything. An AGI's ability to spot and exploit vulnerabilities in human-written code is widely predicted.
- Let's see if we can leave this conversation on a note of agreement. Is there anything you think we can agree on?

This is a public episode.
If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe

