**Nathan Labenz** (0:00)
Hello, and welcome to The Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas, and together, we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my co-host, Erik Torenberg. Hello, and welcome back to The Cognitive Revolution. Today, I'm pleased to share part two of my recent appearance on the 80,000 Hours Podcast, which presents in-depth conversations about the world's most pressing problems and what you can do to solve them.
It's my view, and the premise of this show, that the pace of change in AI is making it nearly impossible for leaders, both in society at large and even within the field itself, to keep up with all of the latest developments. The growing disconnect between what exists and what people understand represents an increasingly pressing problem, which, if not effectively addressed, will likely lead to increasingly dysfunctional discourse and ultimately major blunders by key decision makers.
It was a real honor to be invited on the 80,000 Hours Podcast, which I've listened to for years, and I think this conversation with Rob Wiblin summarizes my worldview far more thoroughly than a typical Cognitive Revolution episode would. In this episode, we cover: what AI systems can and can't do as of late 2023, spanning language and vision, medicine, scientific research, self-driving cars, robotics, and even weapons; what the next big breakthroughs could be; the state of AI discourse and the need for positions that combine the best of accelerationist and safety-focused perspectives; the chance that irresponsible development provokes a societal backlash and/or heavy-handed regulation; a bunch of shout-outs to the folks I follow and trust to keep me up to speed with everything that's going on; and lots more along the way. I definitely encourage you to subscribe to the 80,000 Hours Podcast feed, where you can find part one of this conversation, which centered on OpenAI's leadership drama and safety record, along with lots more conversations with inspiring changemakers.
As always, I would ask that you take a moment to share the Cognitive Revolution with your friends. For now, I hope you enjoy this wide-ranging AI Scouting report from my appearance on the 80,000 Hours Podcast with host Rob Wiblin.
**Rob Wiblin** (2:15)
Hey, listeners. Rob here, Head of Research at 80,000 Hours.
Today we continue my interview with Nathan Labenz. If you missed part one, which was released right before Christmas, do go back and listen to it. That's episode 176, Nathan Labenz on the final push for AGI and understanding OpenAI's leadership drama. But you don't have to listen to that one to follow the conversation here. We've designed it so that each part stands alone just fine.
All right, and buckle up, because without further ado, I again bring you Nathan Labenz.
Nathan, a message that you've been pushing on the show recently is that people just don't pay enough attention. They don't spend enough time just stopping and asking the question: what can AI do? On one level, of course, this is something that people are very focused on. But it doesn't seem like there are that many people who keep abreast of it at a high level. And it's quite hard to keep track of, because the results are coming out through all kinds of different channels. So this is something you have an unusual level of expertise in. Why do you think it would behoove us as a society to have more of the people who might have to think about governing, regulating, or incorporating really advanced AI into society stop and just find out what is possible?
**Nathan Labenz** (3:24)
Well, a lot of reasons really. I mean, the first is just, again, to give voice to the positive side of all of this.
There's a lot of utility that is just waiting to be picked up. Organizations of all kinds, and individuals in a million different roles, stand to become more productive, to do a better job, and to make fewer mistakes if they can make effective use of AI. Just one example from last night: I was texting with a friend about the city of Detroit. I live in the city of Detroit, famously once an auto boom town, then a big bust town, and one that has had a high poverty rate and a huge amount of social problems.
And one big problem is just identifying what benefits individuals qualify for and helping people access them. Something that AI could do a very good job of, if somebody could figure out how to get it implemented at the city level, would be working through all the case files and identifying the different benefits that people likely qualify for. I'll say "likely" because we don't necessarily want to fully trust the AI, but we can certainly do very good and much wider screens for things that people may qualify for with AI than the human staff they have can. They've got a stack of cases that are just not getting the attention that, in an ideal world, they might, and AI could really bring us a lot closer to that ideal world. So I think there's just a lot there: wherever you are, if you take some time to think about the really annoying operational pain points you have, the work that's kind of routine and just a bit of drudgery, AI might be able to help alleviate that problem.