Google’s Original X-Man: A Conversation with Sebastian Thrun

Anonymous. Foreign Affairs. Volume 92, Issue 6. November/December 2013.

Sebastian Thrun is one of the world’s leading experts on robotics and artificial intelligence. Born in Solingen, Germany, in 1967, he received his undergraduate education at the University of Hildesheim and his graduate education at the University of Bonn. He joined the computer science department at Carnegie Mellon University in 1995 and moved to Stanford University in 2003. Thrun led the team that won the 2005 DARPA Grand Challenge, a driverless car competition sponsored by the U.S. Defense Department, and in 2007, he joined the staff of Google, eventually becoming the first head of Google X, the company’s secretive big-think research lab. He co-founded the online-education start-up Udacity in 2012. In late August, he spoke to Foreign Affairs editor Gideon Rose in the Udacity offices.

Why robotics?

I ultimately got into robotics because for me, it was the best way to study intelligence. When you program a robot to be intelligent, you learn a number of things. You become very humble and develop enormous respect for natural intelligence, because even if you work day and night for several years, your robot isn’t that smart after all. But since every element of its behavior is something that you created, you can actually understand it.

How did you get involved with driverless cars?

In 2004, my CMU colleague Red Whittaker entered an epic race called the DARPA Grand Challenge. The U.S. government had put up a million bucks as prize money for whoever could build a car that could drive itself. The original mission was to go from Los Angeles to Las Vegas, but that was quickly found to be unsafe, so the race was moved to a 140-mile premarked desert route from Barstow, California, to Primm, Nevada. In the first race, which I did not participate in, Red had the best-performing team, but his robot went less than eight miles. DARPA scheduled a second race for the following year, and having just arrived at Stanford, with nothing to do because it was a new job, I decided, why not give it a try?

So we put together a team to build a robot car, Stanley, that could drive by itself in desert terrain. We started with a class of about 20 students. Some of them stayed on, some of them went as far away as they could when they realized what a consuming experience it is to build a robot of that proportion. And over the next several months, I spent most of my time in the Mojave Desert, behind the steering wheel, writing computer code on my laptop together with my graduate students.

What was the result?

Well, we were lucky. Five teams finished that year, and in my book, they all won equally. But we happened to be the fastest by 11 minutes, so we got the $2 million check. [DARPA had doubled the prize for the second race.]

Why did your project end up working so well?

Many of the people who participated in the race had a strong hardware focus, so a lot of teams ended up building their own robots. Our calculus was that this was not about the strength of the robot or the design of the chassis. Humans could drive those trails perfectly; it was not complicated off-road terrain. It was really just desert trails. So we decided it was purely a matter of artificial intelligence. All we had to do was put a computer inside the car, give it the appropriate eyes and ears, and make it smart.

In trying to make it smart, we found that driving is really governed not by two or three rules but by tens of thousands of rules. There are so many different contingencies. We had a day when birds were sitting on the road and flew up as our vehicle approached. And we learned that to a robot eye, a bird looks exactly the same as a rock. So we had to make the machine smart enough to distinguish birds from rocks.

In the end, we started relying on what we call machine learning, or big data. That is, instead of trying to program all these rules by hand, we taught our robot the same way we would teach a human driver. We would go into the desert, and I would drive, and the robot would watch me and try to emulate the behaviors involved. Or we would let the robot drive, and it would make a mistake, and we would go back to the data and explain to the robot why this was a mistake and give the robot a chance to adjust.

So you developed a robot that could learn?

Yes. Our robot was learning. It was learning before the race, and it was learning in the race.

It was at that event that you met Larry Page?

Yes. Larry had a long-standing interest in many things and chose to come to the DARPA Grand Challenges. He came unnoticed, wearing sunglasses, but we hooked up during the morning. In most races that I’ve participated in, during the race you sweat a lot. In this race, there was nothing to do. We were just sitting on the sidelines and letting our creations compete on our behalf. So we started talking about robotics.

Why driverless cars?

It’s a no-brainer. If you look at the twentieth century, the car has transformed society more than pretty much any other invention. But cars today are vastly unsafe. It’s estimated that more than a million people die every year because of traffic accidents. And driving cars consumes immense amounts of time. For the average American worker, it’s about 52 minutes a day. And they tie up resources. Most cars are parked at any point in time; my estimate is that I use my car about three percent of the time.

But if the car could drive itself, you could be much safer, and you could achieve something during your commute. You can also envision a futuristic society in which we share cars much better. Cars could come to you when you need them; you wouldn’t have to have private car ownership, which means no need for a garage, no need for a driveway, no need for your workplace to have as many parking spots.

Is this personal for you?

Absolutely. When I was 18, my best friend lost his life when his friend made a split-second poor decision to speed on ice and lost control of the vehicle and crashed into a truck. And one morning, when I myself was working on driverless cars, when we were expecting a government delegation to be briefed on my progress, my head administrator at Stanford went out to get breakfast for us and never came back. She was hit by a speeding car at a traffic light, and she went into a coma, never to wake up. This is extremely personal for me.

These moments make clear to me that while the car is a beautiful invention of society, there’s so much space for improvement. It’s really hard to find meaning in the loss of a life in a traffic accident, but I carry this with me every day. I feel that any single life saved in traffic is worth my work.

We are now at a point where the car drives about 50,000 miles between what I would call critical incidents, moments when a human driver has to take over, otherwise something bad might happen. At this point, most of us believe the car drives better than the best human drivers. It keeps the lane better, it keeps its distance better, it drives more smoothly, it drives more defensively. My wife tells me, “When you are in the self-driving car, can you please let the car take over?”

Another big project at Google X, where you were working on the driverless car, was Google Glass. How did that come about, and how does it relate to the lab’s other projects?

One of the things that has excited me in working at Google and with Google leadership is thinking about big, audacious problems. We often call them “moonshot” problems.

The self-driving car was a first instance of this, where we set ourselves a target that we believed could not be met. When the project started, we decided to carve out a thousand miles of specific streets in California that were really hard for humans to drive, including Lombard Street in San Francisco and Highway 1, the coastal route from San Francisco to Los Angeles. Even I believed this was hard to do.

So we set this audacious goal, and it took less than two years to achieve it. And what it took to get there was a committed team of the world’s best people basically left alone to do whatever it took to reach the goal.

I wanted to test that recipe in other areas. So Google entrusted me with the founding of a new group called Google X. (The “X” was originally a placeholder until a correct name could be found.) We looked at a number of other audacious projects, and one of them was, can we bring computation closer to our own perception?

We hired an ingenious professor from the University of Washington, Babak Parviz, who became the project leader. And under his leadership, we developed early prototypes of Google Glass and shaped up the concept into something that people know today—that is, a very lightweight computer that is equipped with a camera, display, trackpad, speaker, Bluetooth, WiFi, and a head-tracking unit. It’s a full computer, not dissimilar to the PCs I was playing with when I was a teenager, but it weighs only 45 grams.

How did you get from there into online education?

I went into education because I learned from my friends at Google how important it is to aim high. Ever since I started working at Google, I have felt I should spend my time on things that really matter when they are successful. I believe online education can make a difference in the world, more so than almost anything else I’ve done in my life.

Access to high-quality education is way too limited. The United States has the world’s most admirable higher education system, and yet it is very restrictive. It’s so hard to get into. I never got into it as a student. There are also fascinating opportunities that exist today that did not exist even 20 years ago.

The conventional paradigm in education is based on synchronicity. We know for a fact that students learn best if they’re paired one-on-one with a mentor, a tutor. Unfortunately, we can’t afford a tutor for every student. Therefore, we put students into groups. And in these groups, we force students, by and large, to progress at the same speed. Progression at the same speed can cause some students, like me when I was young, to feel a bit underwhelmed. But it can also cause a lot of students to drop out.

A lot of students, when they can’t quite keep up with the pace they’ve been given, get a grade like a C. But instead of being given more time to reach the mastery it would take to get an A, they get put into the next cohort, where they start with a disadvantage, with low self-esteem. And they often end up at that level for the rest of their student career.

Salman Khan, whom I admire, has made this point very clearly by showing that he can bring C-level math students to an A+ level if he lets them go at their own pace. So what digital media allow us to do is to invent a medium where students can learn at their own pace, and that is a very powerful idea. When you go at your own pace, we can move instruction toward exploration and play-based learning.

When I enter a video game, I learn something about a fictitious world. And in that video game, I’m allowed to go at my own pace. I’m constantly assessed; assessment becomes my friend. I feel good when I master the next level. If we could only bring that video-game experience into student learning, we could make learning addictive. My deep, deep desire is to find a magic formula for learning in the online age that would make it as addictive as playing video games.

Your projects are extraordinarily radical. Is that what attracts you to them?

I aspire to work on subjects where a number of things have to be the case. One is they have to really change the world if they succeed. I need to be able to tell myself a story that, no matter how slim the chances of success, if it succeeds, it is going to massively change society for the better. I think that’s the case for safety in driving and transportation. It’s the case for bringing the Internet to everybody. And it’s the case for education.

I love to work on problems that are hard, because I love to learn. And all these problems have their own dimension of hardness. Some of them are more technological, some are more societal. When these things come together, I get very excited.

What drives or generates innovation? What creates a Sebastian Thrun?

I feel like I’m overrated. Most of what I do is just listen carefully to people. But truly great innovators, like Larry Page and Sergey Brin, or Elon Musk, or Mir Imran, bring to bear really great visions of where society should be, often fearless visions. And then just a good chunk of logical thinking, as Elon Musk puts it, “thinking by first principles.” Not thinking by analogy, whereby we end up confining our thought to what’s the case today, but thinking about what should be the case, and how we should get there, and whether it is feasible to do it.

Once you have the vision and the clear thought together, what’s missing is just really good execution. And execution to me is all about the way you would climb a mountain you’ve never climbed before. If you waver along the way, if you debate, if you become uncertain about the objective, then you’re not going to make it. It’s important that you keep climbing. And it’s important that you acknowledge that you don’t have all the answers. So you will make mistakes, and you will have to back up, learn, and improve. That is a normal component of the innovative process. But you should not change your goal.

Are there drivers of innovation at the societal and national level? You’ve said that you moved from Germany to the United States because the more open, less hierarchical system here was one in which you felt more able to thrive.

Yes. I think there’s a genuine innovative element in America that you find in almost no other culture. And I believe it.