**Chris Williamson** (0:00)
What is the journey of how you arrived thinking about the problems of AI?
**Tristan Harris** (0:08)
Well, most people know me or our work through the film The Social Dilemma, and I used to be a design ethicist at Google in 2012, 2013 So that basically meant, how do you ethically design technology that is going to reshape, especially the attention and information environment of humanity? So it's like, there I was at Google, it was 2012, 2013 This is in the heat of the kind of social media boom. I think Instagram had just been bought by Facebook. My friends in college started Instagram. So I was part of this cohort and milieu of people who really built this technology that the rest of the world just thought was natural. Like this is just drinking water. Like I just drink Instagram. I just live in this environment. So while I saw billions of people enter into this psychological habitat that I knew the handful of five or six people that were designing and tweaking it and making it work a certain way. Yeah, exactly. I think that that's just a fundamental thing I want people to get is, you know, you think of technology like it just lands and it's just inevitable and there's just nothing we can do and it just comes from above. And it's like there are human beings making choices. And, you know, as someone who grew up in the era of, you know, the Macintosh, like my co-founder, so I have a nonprofit called the Center for Humane Technology. My co-founder, Aza Raskin, his father invented the Macintosh project before Steve Jobs took it over. So this is the original Macintosh, you know, the thing that we now, the MacBook, the iMac, the MacBook Pro, all of that started with his father, Jeff Raskin, and the idea of creating humane technology where technology could be choicefully designed to be really easy to use, to be accessible, to be an empowering extension of our humanity, like a cello, like a piano, like a creative tool, like if you're a video person, you can make films and videos. 
And just so people understand, because we're probably going to be talking about some darker things on this podcast, the premise of all this is not to be a speaker of doom or something like that. It's to say: I want to live in a world where technology is in service of people and connection and all of the things that matter to us as humans, and where technology wraps ergonomically around us to create that. So that was kind of a side journey. There I was at Google in 2012, 2013, and I saw how essentially there is this arms race for human attention, and whichever company was willing to go lower on the brain stem to manipulate human psychology would win. This is exploiting a back door in the human mind. So I think of it just like software: software has back doors and zero-day vulnerabilities, you can hack software. The human mind has vulnerabilities too. And as a magician, as a kid, I understood some of those. Studying at a lab at Stanford called the Persuasive Technology Lab, where some of the Instagram co-founders had studied, I understood the psychological influence dynamics. And so it wasn't just that we were making technology in this beautiful and empowering, Macintosh kind of way. It's that more and more of my friends were being sucked into developing technology to hack human psychology. And so I saw that problem, I became concerned about it, and I made a presentation at Google. And I feel like I repeat this story everywhere, but it's just important for my history, I guess. I made a presentation saying: never before in history have 50 designers in San Francisco, basically, through their choices, rewired the entire psychological habitat of humanity.
And we need to get this right; we have a moral responsibility to get this right. And I sent it to 50 people at Google, and when I clicked on the presentation the next day, on the top right of Google Slides it shows you the number of simultaneous viewers, you know how that works? And it had like 150 simultaneous viewers, and then 500 simultaneous viewers. And so it's like, oh, this is spreading throughout the whole company. And that's what led to me becoming a design ethicist, where I had to research and ask the question: what does it mean to ethically design around, and persuade through, people's psychological vulnerabilities? You can't not make choices about the psychological habitat. You have to make a choice about whether you're going to do infinite scroll or not, or autoplay or not, or notifications or not, or "these 10 people followed you" or not. Like, what does it mean to ethically make those choices?