Scott Aaronson: Against AI Doomerism

In episode 72 of The Gradient Podcast, Daniel Bashir speaks to Professor Scott Aaronson. Scott is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and director of its Quantum Information Center. His research interests focus on the capabilities and limits of quantum computers and computational complexity theory more broadly. He has recently been on leave to work at OpenAI, where he is researching theoretical foundations of AI safety.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro
* (01:45) Scott’s background
* (02:50) Starting grad school in AI, transitioning to quantum computing and the AI / quantum computing intersection
* (05:30) Where quantum computers can give us exponential speedups, simulation overhead, Grover’s algorithm
* (10:50) Overselling of quantum computing applied to AI, Scott’s analysis on quantum machine learning
* (18:45) ML problems that involve quantum mechanics and Scott’s work
* (21:50) Scott’s recent work at OpenAI
* (22:30) Why Scott was skeptical of AI alignment work early on
* (26:30) Unexpected improvements in modern AI and Scott’s belief update
* (32:30) Preliminary Analysis of DALL-E 2 (Marcus & Davis)
* (34:15) Watermarking GPT outputs
* (41:00) Motivations for watermarking and language model detection
* (45:00) Ways around watermarking
* (46:40) Other aspects of Scott’s experience with OpenAI, theoretical problems
* (49:10) Thoughts on definitions for humanistic concepts in AI
* (58:45) Scott’s “reform AI alignment stance” and Eliezer Yudkowsky’s recent comments (+ Daniel pronounces Eliezer wrong), orthogonality thesis, cases for stopping scaling
* (1:08:45) Outro

Links:

* Scott’s blog
* AI-related work
  * Quantum Machine Learning Algorithms: Read the Fine Print
  * A very preliminary analysis of DALL-E 2 (w/ Marcus and Davis)
  * New AI classifier for indicating AI-written text and Watermarking GPT Outputs
* Writing
  * Should GPT exist?
  * AI Safety Lecture
  * Why I’m not terrified of AI

Get full access to The Gradient at thegradientpub.substack.com/subscribe


