**John Coogan** (0:00)
Last time we had Mark Cuban on the show, we were debating ads in LLMs. And since then, we've gotten a bunch of data points about ads in LLMs. And I think that some of his takes have probably aged well. Some of our takes have probably aged well. It'll be an interesting time to reevaluate what's actually happening. There have been a lot more data points.
**Jordi Hays** (0:18)
I don't know, John.
**John Coogan** (0:19)
We just have more evidence.
**Jordi Hays** (0:19)
I said that ads would be fine, and now the world is ending.
**John Coogan** (0:25)
Here's a white pill. Samsung is investing $70 billion to advance their fab capacity.
They're getting back in the AI chips game. They've always been in the AI chips game. So brief history of Samsung. You probably know them from the phones, from the TVs. They, of course, are a major player in HBM, high bandwidth memory. They are a massive company, over a quarter million employees. They're close to touching a trillion dollars in USD market cap. They pull in around 200 billion USD a year in revenue, maybe 250 billion this year. Really good. All that's USD. I like to think in USD because I'm an American. They're the global leader in memory and in OLED displays as well. So a lot of the displays that you see in other electronics, even if it has a different brand name, it's still Samsung actually making that OLED display. But they're second in smartphones to the iPhone and Apple. And they're second in the semiconductor foundry business to TSMC. Semiconductors still make up 30% to 40% of their business, and they supply HBM to NVIDIA for the H100 and Blackwell systems. So it's not like they're sitting out the AI bull market. They are doing great. They are definitely participating. They're incredibly important in the AI build-out. But if TSMC is bottlenecked, and TSMC is sort of risk-off and not going to be guiding to insane capex numbers while every American hyperscaler is, well, that creates an opportunity for Samsung. And so Samsung is stepping up and announcing, hey, we're going to put another 70 billion to work on this particular business. Now, Tesla has been working with Samsung on the foundry side in AI for a while. Samsung has never really been on the frontier with a direct competitor to the H100 or the Blackwell chip. That's been more of AMD's game, and AMD also fabs at TSMC. So there hasn't really been this neck-and-neck battle between TSMC and Samsung. But you can do AI inference on a Samsung chip.
And we know that because Tesla went to Samsung years ago and said, we need a chip that can take in pictures from the road, decide where the lines are, and decide.
**Jordi Hays** (2:30)
They want their chips with the dip.
**John Coogan** (2:31)
They want their chips with the dip, and that's how Samsung does it too. That's all you know. And so the FSD system, if you have a Tesla, you might be familiar with HW3, hardware 3, which has been deployed into millions of cars. And it was fabbed on Samsung's 14 nanometer process, which is a lagging node. It's not on 3 nanometer, the crazy frontier stuff, but it's working and it's on the road. And according to a US regulatory probe, there are 3.2 million Teslas on the road in America with FSD systems, basically all running Samsung chips inside. Now, to be clear, Tesla, just like any foundation model lab, has training and then also inference. They're a little bit different from many of the labs that you know and love because they do training in a data center using what's called the Dojo chip, and that is fabbed at TSMC. So they train the system, they take all the data in from every Tesla camera, every road, all the information that they have. Every time there's a disengagement, that's feedback to the reinforcement learning system. It says, hey, we were in FSD mode, but then someone grabbed the wheel or someone stepped on the brakes. You made a mistake; understand what happened to get you to the point where you made that mistake.
All that data gets collected in the Tesla data center, runs on these Dojo chips, they do the training, and then they deploy the model onto the Samsung chips in the actual cars. So with the backdrop of NVIDIA's massive GTC news cycle, they've done so much press around GTC and so many different launches, you know that NVIDIA's just going to suck a lot of the air out of the semiconductor discussion this week.
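The fleet-learning loop described above can be sketched at a very high level. This is purely an illustrative Python sketch, not Tesla's actual stack: every name here (`DisengagementEvent`, `FleetTrainer`, the event labels) is a hypothetical stand-in. The shape of the loop is what matters: disengagements come back from the fleet as labeled mistakes, training happens centrally, and a model artifact is deployed to the in-car inference chips.

```python
from dataclasses import dataclass, field

@dataclass
class DisengagementEvent:
    # Hypothetical record of a moment where the driver took over from FSD.
    camera_frames: list
    human_action: str  # e.g. "grabbed_wheel" or "pressed_brake"

@dataclass
class FleetTrainer:
    # Illustrative stand-in for the data-center training side.
    events: list = field(default_factory=list)

    def ingest(self, event: DisengagementEvent) -> None:
        # Each disengagement is a labeled mistake: negative feedback
        # for the reinforcement-learning-style training loop.
        self.events.append(event)

    def train_and_deploy(self) -> dict:
        # Train on the aggregated fleet data, then emit a model
        # artifact destined for the in-car inference chips.
        return {
            "model_version": 1,
            "target": "in-car inference chip",
            "training_examples": len(self.events),
        }

trainer = FleetTrainer()
trainer.ingest(DisengagementEvent(camera_frames=[], human_action="grabbed_wheel"))
artifact = trainer.train_and_deploy()
print(artifact["training_examples"])  # prints 1
```

The point of the split is that training is compute-heavy and centralized, while inference has to run on a cheap, power-constrained chip inside the car, which is why a lagging node like Samsung's 14 nanometer process is good enough for deployment.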