#726: Mapping The Mind Of The Machine with Brian Murray & Paul Itoi

TFTC: A Bitcoin Podcast

March 14, 2026

Marty sits down with Brian Murray and Paul Itoi to discuss the convergence of AI agents, graph databases as a solution to LLM memory limitations, and Bitcoin's Lightning Network as the native payment rail for the emerging agentic economy. Paul on X: https://x.com/paulitoi Brian on X: https://x.
Speakers: Marty Bent, Brian Murray, Paul Itoi
**Marty Bent** (0:07)
You've had a dynamic where money's become freer than free. When you talk about a Fed just gone nuts, all the central banks going nuts. So it's all acting like safe haven.

**SPEAKER_2** (0:18)
I believe that in a world where central bankers are tripping over themselves to devalue their currency, Bitcoin wins. In the world of fiat currencies, Bitcoin is the victor.

**Marty Bent** (0:29)
I mean, that's part of the bull case for Bitcoin.

**SPEAKER_2** (0:31)
If you're not paying attention, you probably should be.

**Marty Bent** (0:36)
I don't know. We're like two months into it. I'm like, I'm going to have to set up a new one at some point because this is going to be outdated.

**Brian Murray** (0:42)
Really? OK.

**Marty Bent** (0:43)
I think that's the conclusion I'm coming to. You can switch out the models and stuff like that. But I think the context memory is the big problem, I think. End users like me who aren't as technically competent need to figure out, like, how do we nail that out of the gate?

**Paul Itoi** (0:57)
Sounds like a problem you're familiar with.

**Brian Murray** (0:58)
Sounds familiar, yeah.

**Paul Itoi** (1:01)
That's what I mean. That feels like a good starting point.

**Brian Murray** (1:04)
I think, yeah, what would be most helpful? I think about just catching up overall on what's happened in the space and talking through the pieces. You've heard me talk about it for a long time. So can we talk about the 1031 offsite we're at sometimes? Yeah. Yeah, I mean, I get a little embarrassed getting up there and showing graph stuff in front of everyone. I see everyone go, "Oh, God, Paul, another year."

**Paul Itoi** (1:27)
The graph guy, the graph man.

**Brian Murray** (1:30)
And you get a lot of grief for graphs online because they've been tried so many times. But I've worked with Neo4j since 2010, 2011, sometime around then. I think they were just starting out. One of our technical guys brought it into the company. And I hated it because it just crashed all the time. But now, 15 years later, it's kind of seeing its day in the light. So I think you just have to have all these kind of primitives in your toolbox. And then when the time's right, you pull them out. And so we just think that the memory issue you just brought up, graph databases just serve as a great scratch pad for that. And it doesn't have to be in a graph database. It can be in Obsidian files. It's just the whole thing is relating one thing to another. But anyway.
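The "scratch pad" idea Brian describes, relating one thing to another and pulling it back out later, can be sketched without a full database like Neo4j. Below is a minimal, hypothetical in-memory triple store (all names here are illustrative, not from any real agent framework):

```python
from collections import defaultdict

class GraphMemory:
    """Toy scratch-pad memory: facts stored as (subject, relation, object) triples."""

    def __init__(self):
        # subject -> list of (relation, object) edges
        self.edges = defaultdict(list)

    def remember(self, subject, relation, obj):
        """Record one relationship between two things."""
        self.edges[subject].append((relation, obj))

    def recall(self, subject):
        """Return everything linked to a subject, ready to paste into a prompt."""
        return self.edges.get(subject, [])

mem = GraphMemory()
mem.remember("Neo4j", "is_a", "graph database")
mem.remember("Neo4j", "used_since", "2010")
print(mem.recall("Neo4j"))
```

The same shape works whether the backing store is a real graph database or, as Brian notes, a folder of linked Obsidian files; what matters is that relationships are explicit and queryable.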

**Marty Bent** (2:19)
Well, I think it would be worthwhile to go into differentiating like LLMs, how they work from these graph, this graph approach, because I think, again, we were just discussing before we hit record, ton of capital, time, and effort has been put into LLMs specifically. But I think some would argue, I think, yourself included, that LLMs may not be the best way to go about this problem.

**Brian Murray** (2:50)
Yeah, I think people anthropomorphize LLMs a lot. Because it's speaking language, because you can talk to it, you think that it's actually reasoning, and especially when they call it a reasoning model. And it does do an amazing job of mimicking logic, but it does not know why it's saying what it's saying. It's just a statistical output of the next word.

**Marty Bent** (3:15)
Yeah.

**Brian Murray** (3:16)
Yeah. I mean, how do you see it? When you think of an LLM, do you think of it as, do you find yourself thinking of it as a machine spitting out words, or do you think of it as, you know, especially when you name a bot or something like that, it really starts to feel like a human or something?

**Marty Bent** (3:31)
No, I definitely don't think it's human. Anytime I interact with our open call, I'm like, okay, what context do I need to feed it to make sure that it gives me the right response? Like, that's what I think most about. It's like, what do I need to preload this thing with?
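Marty's habit of asking "what do I need to preload this thing with?" is, in effect, manual prompt assembly. A hypothetical sketch of that step (function and field names are illustrative, not any specific tool's API):

```python
def build_prompt(system, notes, question):
    """Preload retrieved context ahead of the user's question."""
    context = "\n".join(f"- {n}" for n in notes)
    return f"{system}\n\nRelevant context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt(
    "You are a podcast research assistant.",
    ["Guest Paul Itoi works on graph-based agent memory."],
    "What does Paul work on?",
)
print(prompt)
```

A graph memory like the one discussed above would simply be the source of the `notes` list.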

**Brian Murray** (3:43)
Maybe we should start with what's everyone running right now. Like, that would be a good thing. You're running a bunch of cool stuff, right?

**Paul Itoi** (3:48)
Yeah. I mean, I'm using Hive, of course. Yep. Doing some Claude Code.
But I think the context issue is something that everybody's kind of running into. They're able to just like stumble through it or hack their way through it. But I think, like, we're all going to feel this need of something better, like something that's going to be more accurate, give us better answers, help us do the next thing better. So, and you're starting to see it, like you open X, and there's more and more visualizations of graphs. I feel like the whole, like, ecosystem is drawn that direction. But yeah, those are some of the things I've been messing around with.

64 more minutes of transcript below

Feed this to your agent

Try it now — copy, paste, done:

```
curl -H "x-api-key: pt_demo" \
  https://spoken.md/transcripts/1000755276472
```

Works with Claude, ChatGPT, Cursor, and any agent that makes HTTP calls.
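For an agent that speaks Python rather than shell, the same request can be built with the standard library. The endpoint and `x-api-key` header are taken from the curl example above; swap `pt_demo` for your own key to fetch the full transcript:

```python
import urllib.request

# Build the same request as the curl example above.
req = urllib.request.Request(
    "https://spoken.md/transcripts/1000755276472",
    headers={"x-api-key": "pt_demo"},
)

# An agent would then read the body with:
#   text = urllib.request.urlopen(req).read().decode()
print(req.full_url)
```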

Get the full transcript

From $0.10 per transcript. No subscription. Credits never expire.

Using your own key:

```
curl -H "x-api-key: YOUR_KEY" \
  https://spoken.md/transcripts/1000755276472
```