**Sam Charrington** (0:00)
I'd like to thank our friends at Capital One for sponsoring today's episode. Capital One's tech team isn't just talking about multi-agentic AI, they've already deployed one. It's called Chat Concierge, and it's simplifying car shopping. Using self-reflection and layered reasoning with live API checks, it doesn't just help buyers find a car they love, it helps schedule a test drive, get pre-approved for financing, and estimate trade-in value. Advanced, intuitive, and deployed. That's how they stack. That's technology at Capital One.
**Nikita Rudin** (0:33)
My hot take on that, and I'll be happy to be proven wrong, is that there is not a single humanoid robot today that actually generates value. Meaning, there might be a robot that does something fairly close to what it's supposed to do in a factory or in a warehouse, but it's not the exact task. So in the end, it's not generating value because it's not doing the actual thing, period.
**Sam Charrington** (1:07)
All right, everyone, welcome to another episode of The TWIML AI Podcast. I am your host, Sam Charrington. Today, I'm joined by Nikita Rudin. Nikita is co-founder and CEO of Flexion Robotics. Before we get going, be sure to take a moment to hit that subscribe button wherever you're listening to today's show. Nikita, welcome to the podcast.
**Nikita Rudin** (1:26)
Thank you, I'm excited to be here.
**Sam Charrington** (1:29)
I'm excited to have you on the show, and I'm looking forward to digging into our topic for the conversation, which is really the gap between where we are today with robotics and where we need to be to fulfill the vision of the technology. You've been working in this space for quite a while. You did your PhD at ETH Zurich and spent some time at NVIDIA. Why don't you share a little bit about your PhD and the focus of your research?
**Nikita Rudin** (2:00)
When I started, we were trying to use simulation with reinforcement learning to teach a legged robot very simple things, like just walking on flat ground. And when the robot could take a few steps, that was already a big success. And the core focus was to reduce the training time needed to achieve that.
**Sam Charrington** (2:18)
And when you say legged, like a quadruped?
**Nikita Rudin** (2:20)
Exactly, like a quadruped, a big legged robot dog. We're not using Boston Dynamics' Spot, we're using ANYbotics' ANYmal. ANYbotics is a Swiss startup that was a spin-off from our lab. Very similar to Spot, but it's red and made in Switzerland. Yeah, we were really trying to reduce the training time needed to achieve that. So before I started, there were some results of reinforcement learning for such quadrupeds, but it would take weeks of computation to achieve anything. And using GPUs and massively parallel simulators, we managed to reduce that to just a few minutes. Actually, we had a demo on stage at some conference where we were running training live on a laptop while I was holding the robot. And every 15 seconds, the laptop would send the latest policy to the robot. And you could literally see how it went from just falling over to taking a first step. And then after three or four minutes, it would be able to walk around the stage. That was a pretty cool visual demo for everyone to see exactly how the learning process happens.
And from there, my PhD was pushing the agility of that robot, using similar techniques. It was still training neural networks in simulation and then transferring them to the real world. But the inputs got more complicated, the tasks got more complicated. So by the end, we could go to a search and rescue facility here in Switzerland. So you have to imagine collapsed buildings, a lot of mud, moss, gravel, big rocks, terrain that is very hard to navigate even for a human. And we would just tell the robot to go from point A to point B, and it would use its whole body. So it would use its knees to climb on top of big rocks and then jump over gaps. And again, all autonomously, all end-to-end, using images and the state of the robot to plan its next actions.
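[Editor's note: the massively parallel training Nikita describes can be sketched in miniature. This is a toy stand-in, not the actual ANYmal training stack: the real work uses a GPU physics simulator and full reinforcement learning, while here plain NumPy batches a toy point-mass task across thousands of environments, and a crude random-search update stands in for the RL algorithm. All names, dynamics, and constants are illustrative.]

```python
import numpy as np

NUM_ENVS = 4096   # thousands of environments stepped together per call
OBS_DIM = 4
HORIZON = 50

def rollout(policy_w):
    """Evaluate a linear policy on a toy point-mass task in every env at once.

    The key idea: one matrix operation advances ALL environments, which is
    what makes GPU-parallel simulation so much faster than stepping robots
    one at a time.
    """
    rng = np.random.default_rng(1)                    # deterministic resets
    obs = rng.standard_normal((NUM_ENVS, OBS_DIM))    # batched reset
    total = np.zeros(NUM_ENVS)
    for _ in range(HORIZON):
        actions = obs @ policy_w                      # one matmul = all policies
        obs = obs + 0.1 * actions                     # one op = all dynamics
        total += -np.sum(obs ** 2, axis=1)            # reward: stay near origin
    return total.mean()

# Crude stand-in for the RL update: random search that keeps whichever
# candidate policy scores best across all parallel environments.
search = np.random.default_rng(0)
best_w = np.zeros((OBS_DIM, OBS_DIM))
best_r = rollout(best_w)
for _ in range(30):
    cand = best_w + 0.1 * search.standard_normal((OBS_DIM, OBS_DIM))
    r = rollout(cand)
    if r > best_r:
        best_w, best_r = cand, r
```

Because every candidate is scored on thousands of rollouts in a few batched array operations, the evaluate-update loop runs in seconds; the GPU simulators Nikita mentions apply the same batching to full rigid-body physics.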
**Sam Charrington** (4:10)
Hearing that story about deploying this robot in a search and rescue context, and envisioning the demo, I've seen similar things, you know, the robot dog maybe opening some doors, maybe that wasn't part of your demo, but I've seen similar demos of the robot dog climbing hills and crossing rubble. And I think those demos, in many cases, attempt to land the idea that, hey, flag in the ground, we're done here. Talk a little bit about the distance between what you were able to accomplish with that demo and what you think needs to be done to deploy one of these robot dogs in a real search and rescue scenario, for example.