Samsung Invests $70B in AI Chips, The Cubanator Joins, Apple's Do Nothing Win AI Strategy | Mark Cuban, John Kim, Eugen Alpeza, Ari Herbert-Voss, Alex Konrad, Carl Eschenbach & Pat Grady, Jim Cantrell, Tom Hulme

TBPN

March 19, 2026

Sign up for TBPN's daily newsletter at TBPN.com

(01:45) - Samsung Invests $70B in AI Chips
(09:15) - Composer 2 Available in Cursor
(17:49) - 𝕏 Timeline Reactions
(28:43) - Apple: Behind in AI, Ahead in Revenue
(40:23) - 𝕏 Timeline Reactions
(57:09) - Carl Eschenbach & Pat Grady
Speakers: Jordi Hays, John Coogan, Jim Cantrell, Mark Cuban
**Jordi Hays** (0:00)
You're watching TBPN.

**John Coogan** (0:02)
Today is Thursday, March 19th, 2026. We are live from the TBPN Ultra Dome, the Temple of Technology, the Fortress of Finance, the Capital of Capital. Let me tell you about ramp.com. Time is money, save both. Easy-to-use corporate cards, bill pay, accounting, and a whole lot more. Don't test me with the soundboard. Don't go soundboard for soundboard with me. You know I got you.

**Jim Cantrell** (0:20)
That's a narrative violation.

**John Coogan** (0:22)
No, no. We're having some fun.

**Mark Cuban** (0:26)
We're out of control.

**John Coogan** (0:27)
We got a great show for you today, folks. Carl Eschenbach is at Sequoia. We love to see it. We had the pleasure of chatting with Carl a couple months ago. And I've always been a big fan of his, but we'll let him introduce himself. Let's pull up the Linear lineup. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents, and you should be too. We also have Mark Cuban coming on the show.
And what a fantastic return to form for us, because the first time we had him on the show, we discussed, and we can talk about Cuban in a second, but of course we have our Lambda lightning round, and Alex Konrad from Upstarts Media is joining as well. Anyway, last time we had Mark Cuban on the show, we were debating ads in LLMs. And since then, we've gotten a bunch of data points about ads in LLMs, and I think that some of his takes have probably aged well, some of our takes have probably aged well. It will be an interesting time to re-evaluate what's actually happening. There have been a lot more data points.

**Mark Cuban** (1:29)
I don't know, John.

**John Coogan** (1:30)
We just have more evidence.

**Mark Cuban** (1:30)
I said that ads would be fine. And now the world is ending. Yes.

**John Coogan** (1:38)
It's not because of the ads, though. It's not because of the ads.
It is much more complicated than that. But here's a white pill. Samsung is investing $70 billion to advance their fab capacity. They're getting back in the AI chips game. They've always been in the AI chips game. So brief history of Samsung. You probably know them from the phones, from the TVs. They, of course, are a major player in HBM, high bandwidth memory. They are a massive company, over a quarter million employees. They're close to touching a trillion dollars in USD market cap. They pull in around 200 billion USD a year in revenue. Maybe 250 billion this year in revenue. Really good. All that's USD. When you look up Samsung, you get South Korean won. But I like to think in USD, because I'm an American. And it's actually kind of complicated thinking in foreign currency. They're the global leaders in memory and OLED displays as well. So a lot of the displays that you see in other electronics, even if it has a different brand name, it's still Samsung actually making that OLED display. But they're second in smartphones to the iPhone and Apple, and they're second in the semiconductor foundry business to TSMC. Semiconductors still make up 30 to 40 percent of their business, and they supply HBM to NVIDIA for the H100 and Blackwell systems. So it's not like they're sitting out the AI bull market. They are doing great. They are definitely participating.
They're incredibly important in the AI build out. But if TSMC is bottlenecked, and TSMC is sort of risk off, and they're not going to be guiding to insane capex numbers while every American hyperscaler is, well, that creates an opportunity for Samsung. And so Samsung is stepping up, and they're announcing that, hey, we're going to put another 70 billion to work on this particular business. Now, Tesla has been working with Samsung on the foundry side in AI for a while, but Samsung has never really been on the frontier with a direct competitor to the H100 or the Blackwell chip. That's been more of AMD's game, and AMD also fabs at TSMC, so there hasn't really been this neck-and-neck battle between TSMC and Samsung. But you can do AI inference on a Samsung chip, and we know that because Tesla went to Samsung years ago and said, we need a chip that can take in pictures from the road, decide where the lines are, and decide.

**Mark Cuban** (4:09)
They want their chips with the dip.

**John Coogan** (4:10)
They want their chips with the dip, and that's how Samsung does it too, you know. And so the FSD system, if you have a Tesla, you might be familiar with HW3, Hardware 3, that has been deployed into millions of cars, and it was fabbed on Samsung's 14 nanometer process, which is a lagging node. We're not in the three nanometer, the crazy frontier stuff, but it's working and it's on the road. And according to a US regulatory probe, there were 3.2 million Teslas on the road in America with FSD systems that were basically all running Samsung chips inside.

And so now, to be clear, Tesla, just like any foundation model lab company, they have training and then they also have inference. They're a little bit different than many of the labs that you know and love, because they do training in a data center using what's called the Dojo chip, and that is fabbed at TSMC. So they train the system. They take all the data in from every Tesla camera, every road, all the information that they have. Every time that there's a disengagement, that's feedback to the reinforcement learning system. It says, hey, we were in FSD mode, but then someone grabbed the wheel, or someone stepped on the brakes. You made a mistake. Understand what happened to get you to that point where you made that mistake.

And so all that data gets collected in a Tesla data center, runs on these Dojo chips, they do the training, and then they deploy the model onto the Samsung chips in the actual cars. So if you're driving a Tesla, you have a Samsung chip in there, and the model it runs was trained on TSMC chips. And so the Dojo D1 is one example of their training chip. That was fabbed at TSMC on 7 nanometer, and it's completely separate from the in-car FSD chip.
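The disengagement-feedback loop described above is, at its core, a data pipeline: the in-car system records the moment a human overrides it, and that event ships back to the data center as a negative training signal. A minimal sketch of that shape, where every name is hypothetical and nothing here reflects Tesla's actual software:

```python
from dataclasses import dataclass, field

@dataclass
class DisengagementEvent:
    """A moment where the human driver overrode the self-driving system."""
    timestamp: float
    camera_frames: list  # sensor context leading up to the override
    cause: str           # e.g. "wheel_grab" or "brake_press"

@dataclass
class TrainingQueue:
    """Server-side buffer of override events used as feedback at training time."""
    events: list = field(default_factory=list)

    def ingest(self, event: DisengagementEvent) -> None:
        # Each override is a signal that the deployed model made a decision
        # the human disagreed with; its context gets replayed during training.
        self.events.append(event)

queue = TrainingQueue()
queue.ingest(DisengagementEvent(timestamp=12.5, camera_frames=[], cause="brake_press"))
print(len(queue.events))  # prints 1
```

The design choice the sketch illustrates is the split John describes: collection and training happen server-side (the Dojo/TSMC side), while the model that results is what gets deployed to the in-car Samsung silicon.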
So with the backdrop of NVIDIA's massive GTC news cycle, they've done so much press around GTC and so many different launches, you know that NVIDIA is just gonna suck a lot of the air out of the semiconductor discussion this week.

188 more minutes of transcript below
