**SPEAKER_1** (0:00)
You co-founded Circle, and now you're working on something new, an AI bank.
**Sean Neville** (0:04)
The internet itself is going agent-native. Really, agents doing anything. We think they'll need to get paid, they'll need to make payments, they'll need to generate some sort of return or be rewarded for holding a balance. They may want to lend, they may want to apply for credit in certain situations. They'll want to do all the things that a bank might do for a business. Ultimately, we'll need an AI bank that is actually for other AIs.
**SPEAKER_1** (0:22)
You believe that these agents are going to be the only ones that we actually trust with our assets. What is going to win people over to bots?
**Sean Neville** (0:29)
Why would you ever trust it with your money? Worst outcome is you build a lot of amazing stuff that nobody wants.
**SPEAKER_1** (0:38)
Sean, thank you so much for being here.
**Sean Neville** (0:40)
It's a pleasure.
**SPEAKER_1** (0:40)
You co-founded Circle, you architected USDC, a stablecoin that lots and lots of people throughout crypto and beyond are now using as crypto and TradFi intersect. And now you're working on something new. You're building what you've described as an AI bank. What do you mean by that?
**Sean Neville** (0:57)
Well, we're taking the next step.
Once we have stablecoins and we have the ability to represent dollars on Internet rails, what kind of new opportunities does that unlock? At the same time we were contemplating that, you could see clearly that the web, the Internet itself, is going agent-native. And after working in AI for quite some time, I developed conviction that as AI actors become economic participants, they will ultimately be the primary, dominant economic participants in the world for all kinds of activities, payments and otherwise. Flash forward a few years, and I think they may be the only actors that we trust with our assets and the only actors that are capable of generating meaningful return on our assets. We're certainly not there today. So what do we need to do in order to get there, to unlock a level of prosperity the world hasn't seen yet? It turns out there are some fundamental things missing. So one of the things we're doing is working on infrastructure to make it safe and trustworthy for AI actors to participate in the economy. And then we're building on top of that foundation what has been called an AI bank, or an AI-native bank. So that's what we're working on.
**SPEAKER_1** (2:01)
So you said that you believe these agents are going to be the only ones that we actually trust with our assets. I mean, we're living in a world right now where famous Gallup polls show trust in institutions just crashing, utterly crashing, over recent decades. Trust is a hard-to-come-by resource nowadays. What is going to win people over to bots?
**Sean Neville** (2:22)
It's similar in some ways to trust in money flows.
The old way of managing trust was: let's have a whole bunch of regulations that tie humans and the businesses they create to a set of rules, so that at least when they prove they're not trustworthy, we have clear liability paths and repercussions. Now we have something that's an improvement on that: the ability to encode trust into software, using cryptography, on rails that no one can control. That's a common good, a public utility for the world. Similarly, when we look forward to semi-autonomous or autonomous actors acting on our behalf to do all kinds of things, a hyper-personalized bank for you that's different from mine, but those things interact, what are the elements of trust that enable that sort of thing to happen? You mentioned the word bots. Today, one of the reasons that world is impossible is the existing risk infrastructure in financial institutions, which is designed to make sure no bots can participate, because they're all assumed to be bad. Make sure you're a KYC'd individual or a KYB'd business.
**SPEAKER_1** (3:20)
I use the word intentionally because people have a bad association with it.
**Sean Neville** (3:23)
Absolutely, and for good reason. But what we really need is a system of risk that can say, let's assume the only participants will be bots, but still keep the bad bots out, the bad actors out, but have some way of identifying the good bots that we want to interact with. And then beyond that, apply policies to them so we can say, I would like to interact with the Amazon bot agent. No one can agree on what an agent is, but let's stick with the word bot for now.