Nathan Labenz dives in with Jaan Tallinn, a technologist, entrepreneur (Kazaa, Skype), and investor (DeepMind and more) whose unique life journey has intersected with some of the most important social and technological events of our collective lifetime. Jaan has invested in nearly 180 startups, including dozens of AI application-layer companies and a half dozen startup labs focused on fundamental AI research, all in an effort to support the teams he believes are most likely to lead us to AI safety, and to have a seat at the table at organizations he worries might take on too much risk. He's also founded several philanthropic nonprofits, including the Future of Life Institute, which recently published the open letter calling for a six-month pause on training new AI systems. In this discussion, we focused on:
- The current state of AI development and safety
- Jaan's expectations for possible economic transformation
- What catastrophic failure modes worry him most in the near term
- How big of a bullet we dodged with the training of GPT-4
- Which organizations really matter for immediate-term pause purposes
- How AI race dynamics are likely to evolve over the next couple of years
Also, check out the debut of co-host Erik's new long-form interview podcast Upstream, whose guests in the first three episodes were Ezra Klein, Balaji Srinivasan, and Marc Andreessen. This coming season will feature interviews with David Sacks, Katherine Boyle, and more. Subscribe here: https://www.youtube.com/@UpstreamwithErikTorenberg
LINKS REFERENCED IN THE EPISODE:
Future of Life's open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Eliezer Yudkowsky's TIME article: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Daniela and Dario Amodei Podcast: https://podcasts.apple.com/ie/podcast/daniela-and-dario-amodei-on-anthropic/id1170991978?i=1000552976406
Zvi on the pause: https://thezvi.substack.com/p/on-the-fli-ai-risk-open-letter
TIMESTAMPS:
(0:00) Episode Preview
(1:30) Jaan's impressive entrepreneurial career and his role in the recent AI Open Letter
(3:26) AI safety and Future of Life Institute
(6:55) Jaan's first meeting with Eliezer Yudkowsky and the founding of the Future of Life Institute
(13:00) Future of AI evolution
(15:55) Sponsor: Omneky
(17:20) Jaan's investments in AI companies
(24:22) The emerging danger paradigm
(28:10) Economic transformation with AI
(33:48) AI supervising itself
(35:23) Language models and validation
(40:06) Evolution, useful heuristics, and lack of insight into selection process
(43:13) Current estimate for life-ending catastrophe
(46:09) Inverse scaling law
(54:20) Our luck given the softness of language models
(56:24) Future of Language Models
(1:01:00) The Moore’s law of mad science
(1:03:02) GPT-5 type project
(1:09:00) The AI race dynamics
(1:11:00) AI alignment with the latest models
(1:14:31) AI research investment and safety
(1:21:00) What a six-month pause buys us
(1:27:01) AI passing the Turing test
(1:29:33) AI safety and risk
(1:33:18) Responsible AI development
(1:41:20) Neuralink implant technology
TWITTER:
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)
Thank you to Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
More show notes and reading material are available on our Substack: https://cognitiverevolution.substack.com/
Music Credit: OpenAI's Jukebox