
AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

February 26, 2026

In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026.
Speakers: Sebastian Raschka, Sam Charrington
**Sebastian Raschka** (0:00)
The R&D focus of research teams, I think, is nowadays more on post-training, on getting more performance out of that, because it's the newer paradigm and there are still low-hanging fruits to be picked, whereas pre-training is already pretty sophisticated. You will still get better results if you use more data, optimize the data mix, maybe add multi-token prediction and these types of things, but most of the interesting things are happening now on the post-training front, in the reasoning realm. So I think we will see more there.

**Sam Charrington** (0:45)
All right, everyone, welcome to another episode of The TWIML AI Podcast. I am your host, Sam Charrington. Today, I'm joined by Sebastian Raschka. Sebastian is an independent LLM researcher. Before we get going, be sure to take a moment to hit that subscribe button wherever you're listening to today's show. Sebastian, welcome back to the podcast. It's been a little bit.

**Sebastian Raschka** (1:06)
Yeah, thank you for inviting me back, Sam. I'm happy to be back and to chat about LLMs, AI, and whatever you have in mind. I had a lot of fun last time, so I hope we can make it fun and interesting again.

**Sam Charrington** (1:19)
My joke around this time, it's getting a bit old, but it's like the last time we spoke was three years ago, not much has changed, right?

**Sebastian Raschka** (1:28)
Well, all good things come in threes, I think there's a saying, right?

**Sam Charrington** (1:33)
And in fact, a ton has changed, and we're going to be focusing on the most recent and most important of those changes, in particular, what's new with LLMs and what to expect with LLMs in 2026. This is an area that you spend a lot of time focusing on with your research and education work. Maybe we can start with just, kind of, top of mind: if you think, very big picture, about where we are now compared to where we were a year ago, what is your broad reflection on the evolution of the space?

**Sebastian Raschka** (2:11)
Looking at today compared to one year ago, it's almost the anniversary of DeepSeek, the big DeepSeek version 3 model accompanied by the R1 model, the, I would say, reasoning "revolution" in quotation marks. It's still LLMs, it's still the same base model, but we now have more techniques on top of that to make the models smarter in terms of solving more complex problems. So architecture-wise, LLM architectures still look very similar, but reasoning training is one of the new things if we compare today to last year.

And then also, I think there's a heavier focus on tool use. Back when ChatGPT was launched, and also with the first iteration of LLMs, the focus was mainly on general-purpose tasks, but also on having the LLM answer all the things we are curious about from memory. If we ask it a math question or a knowledge question, the LLM would basically draw from its memory and then write the answer. But that's not always, let's say, the most effective or accurate thing to do. It's similar for us humans. I mean, LLMs are different from how humans think, but if you asked me a complicated math question, or just to multiply two large numbers, I would pull out my calculator and do it there. I wouldn't do that in my head. Maybe I could, but it would take a long time, it's more error-prone, and so forth, and there's no need to do it. The same with LLMs: with more modern tooling, it becomes more and more popular to have the LLM use tools too. It requires training the LLM to use those tools, but with that, I think we can reduce hallucination rates, not completely getting rid of them, but reducing them, and also make answers more accurate. And then with reasoning capabilities, it's essentially giving the LLM more "time," in quotation marks, to think through a problem.

So these are, I think, the two main knobs that we could tune and make progress on in the last year, if we look particularly at the difference between last year and now.
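The tool-use loop Sebastian describes, where the model requests a calculator rather than multiplying "from memory," can be sketched roughly as below. This is a schematic, not any particular framework's API: `fake_model` is a hypothetical stand-in for an LLM that emits structured tool calls, and the `TOOLS`/`agent` names are illustrative.

```python
import ast
import operator

# Permitted arithmetic operators for the calculator tool.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

# Tool registry the harness can dispatch to.
TOOLS = {"calculator": _safe_eval}

def fake_model(prompt: str) -> dict:
    """Hypothetical stand-in for the LLM: for math questions it returns a
    structured tool call instead of an answer drawn from its weights."""
    if any(ch.isdigit() for ch in prompt):
        expr = prompt.rstrip("?").split("is")[-1].strip()
        return {"tool": "calculator", "input": expr}
    return {"answer": "(answered from model memory)"}

def agent(prompt: str) -> str:
    """One step of the agent loop: run the model, execute any tool it asks for."""
    msg = fake_model(prompt)
    if "tool" in msg:
        result = TOOLS[msg["tool"]](msg["input"])
        return f"{msg['input']} = {result}"
    return msg["answer"]

print(agent("What is 4821 * 9377?"))
```

In a real system the model is trained (or prompted) to emit such tool-call messages itself, and the harness feeds the tool's result back into the conversation; the point, as in the episode, is that the arithmetic is done by deterministic code, which is what reduces this class of hallucination.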

**Sam Charrington** (4:32)
We'll dig into the technical aspects of how we've evolved in reasoning and in tool use, among other things. But before we do that, I was thinking it might be interesting to talk a little bit, from a practical perspective, about how where we are today is different and has shifted. And it's super interesting: we're talking in the second week of February, and already this year, in 2026, there's been a ton of news, new models, Opus 4.6, OpenAI 5.3, the whole OpenClaw and Multbot story. Talk a little bit about what we've seen already this year, but in the context of where you see LLMs from a practical perspective.
