**Shane Legg** (0:06)
Is human intelligence going to be the upper limit of what's possible? I think absolutely not. As our understanding of how to build intelligent systems develops, we're going to see these AIs go far beyond human intelligence.
**Hannah Fry** (0:22)
Welcome to Google DeepMind: The Podcast, with me, your host, Professor Hannah Fry. AGI is coming. That's what everyone seems to be saying. Well, today, my guest on the podcast is Shane Legg, chief AGI scientist and co-founder of Google DeepMind. Shane has been talking about AGI for decades, even back when it was considered, in his words, the lunatic fringe. He is credited with popularizing the term and making some of the earliest attempts to work out what it might actually be. Now, in the conversation today, we're going to talk to him about how AGI should be defined, how we might recognize it when it arrives, how to make sure that it is safe and ethical, and then crucially, what the world looks like once we get there. And I have to tell you, Shane was remarkably candid about the ways that the whole of society will be impacted over the coming decade. It's definitely worth staying with us for that discussion.
Welcome to the podcast, Shane. We last spoke to you five years ago, and back then you were telling us your sort of vision for what AGI might look like. In terms of the AIs that we've got today, do you think that they're showing little sparks of being AGI?
**Shane Legg** (1:36)
Yeah, I think it's a lot more than sparks.
**Hannah Fry** (1:37)
More than sparks?
**Shane Legg** (1:38)
Oh, yeah, yeah. So my definition of AGI, or what I sometimes call minimal AGI, is an artificial agent that can at least do the kinds of cognitive things people can typically do. And I like that bar because if it's less than that, it feels like, well, it's failing to do cognitive things that we'd expect people to be able to do. So it feels like we're not really there yet. On the other hand, if I set the minimal bar much higher than that, I'm setting it at a level where a lot of people wouldn't actually be able to do some of the things we're requiring of the AGI. We believe people have some sort of general intelligence, you might call it. So it feels like if an AI can do the kinds of cognitive things people can typically do, at least, possibly more, then we should consider it within that kind of class.
**Hannah Fry** (2:25)
The stuff that we have now, where is it on those levels?
**Shane Legg** (2:28)
So it's uneven. It's already much, much better than people at, say, speaking languages. It'll speak 150 languages. I mean, nobody can do that. And its general knowledge is phenomenal. I can ask it about the suburb I grew up in, in a small town in New Zealand, and it happens to know things about it, right? On the other hand, they still fail to do things that we would expect people to typically be able to do. They're not very good at continual learning, learning new skills over an extended period of time. And that's incredibly important. For example, if you're taking on a new job, you're not expected to know everything needed to perform the job when you arrive, but you have to learn over time how to do it. They also have some weaknesses in reasoning, particularly things like visual reasoning. So the AIs are very good at recognizing objects. They can recognize cats and dogs and all these sorts of things. They've done that for a while. But if you ask them to reason about things within a scene, they get a lot more shaky. So you might say, well, you can see a red car and a blue car, and you ask them which car is bigger. People understand that there's perspective involved, and maybe the blue car is bigger but looks smaller because it's further away, right? AIs are not so good at that. Or if you have some sort of diagram with nodes and edges between them-
**Hannah Fry** (3:38)
Like a network.
**Shane Legg** (3:39)
A network, yeah, or a graph as a mathematician would say, and you ask questions about that, it has to count the number of-
**Hannah Fry** (3:45)
Spokes, as it were.
**Shane Legg** (3:46)
Spokes that are coming out of one of the nodes on the graph. A person does that by paying attention to different points, and then maybe mentally counting them, or what have you. The AIs are not very good at doing that type of thing. I don't think there are fundamental blockers on any of these things, and we have ideas on how to develop systems that can do these things, and we see metrics improving over time in a bunch of these areas. So my expectation is that over a number of years, these things will all get addressed, but they're not there yet. I think it's going to take a little bit of time to go through that, because there's quite a long tail of all sorts of cognitive things that people can do where the AIs are still below human performance. As we reach that, and I think that's coming in a few years, unclear exactly when, the AIs will be a lot more reliable, and that will increase their value quite a lot in many ways. But they will also, during that period, become increasingly capable, like to professional level and beyond, maybe in coding and mathematics already, in knowing languages, in general knowledge of the world, and stuff like this. It's an uneven thing.