- Country: Germany
- Title: First Head of Google X, Co-founder of Udacity
- Education: University of Hildesheim, University of Bonn
- Awards: CAREER Award from the National Science Foundation (1999-2003); Olympus Award, German Society for Pattern Recognition (2001)
- Website: Sebastian Thrun
Sebastian Thrun is one of the world’s leading experts on robotics and artificial intelligence. Born in Solingen, Germany, in 1967, he received his undergraduate education at the University of Hildesheim and his graduate education at the University of Bonn. He joined the computer science department at Carnegie Mellon University in 1995 and moved to Stanford University in 2003. Thrun led the team that won the 2005 DARPA Grand Challenge, a driverless car competition sponsored by the U.S. Defense Department, and in 2007, he joined the staff of Google, eventually becoming the first head of Google X, the company’s secretive big-think research lab. He co-founded the online-education start-up Udacity in 2012. In late August, he spoke to Foreign Affairs editor Gideon Rose in the Udacity offices.
How and why did you first get into science and technology?
As a child, I spent a lot of time with things like Lego, building trains, cars, complex structures, and I really liked that. When I was about 11, I got a TI-57 programmable calculator. This let you write programs of up to 50 steps, which would be erased when you switched it off. I got very enthusiastic about seeing just what you could do with that. Could you program a game, could you program complex geometry, could you solve financial equations? (The answer for all of those is yes.) I had a little booklet in which I kept my 50-step programs, of which I was very proud. A few years later, I got a NorthStar Horizon computer, which I used to program my own video games, which was extremely fun.
As a college student, what really interested me was the human brain and human intelligence. I dabbled in philosophy and medicine and psychology and eventually found that the most constructive way to approach those problems was through computer science and artificial intelligence: you could actually build something from the ground up that would then manifest intelligence, even if only a little bit of it, and that fascinated me.
I ultimately got into robotics because for me, it was the best way to study intelligence. When you program a robot to be intelligent, you learn a number of things. You become very humble and develop enormous respect for natural intelligence, because even if you work day and night for several years, your robot isn’t that smart after all. But since every element of its behavior is something that you created, you can actually understand it.
I started out in 1994 programming a robot called RHINO that we shipped to the United States to a big robotic competition. The goal was to build a robot that could clean up a kitchen. It wasn't a real kitchen; it was sort of a researchers’ version, where all the objects that had to be picked up were clearly marked. But it represented the state of the art at the time. We came home with second prize, which was wonderful because we were the only non-U.S. team in that competition.
Then, we came up with this idea of building robotic tour guides for museums. In 1997, we got a chance to install one in the Deutsches Museum in Bonn, and the following year, we got our big chance at the Smithsonian’s National Museum of American History, where we had a two-week exhibition at which you would be greeted by a robotic tour guide. We built the tour guide, a robot we named Minerva, from scratch. It was completely autonomous; it made all its own decisions. It was programmed to find visitors and interact with them, directing them to and explaining specific exhibits that we had pre-chosen. It had a face, it could smile, it could frown, and it was great fun.
One of my favorite moments was actually when the robot was switched off in the middle of the night, and we were sitting in a corner programming. A human tour guide came by, and not realizing I was watching, she looked the robot up and down and said to it, "You are not going to replace me."
My next big project, after I had moved to Carnegie Mellon University, was Nursebot. I started a number of interdisciplinary courses with the University of Pittsburgh and became an adjunct professor of nursing there, and we built robots for elderly care. Then, we built a robotic system for mapping abandoned mines. Around Pittsburgh, there are an enormous number of abandoned coal mines. There are mine fires that have been burning for decades. And many of these mines lack accurate maps -- either because the mining was done illegally or because the maps were lost over the decades. So we decided to look into what it would take to make robots that could explore abandoned mines. In this period, I also did a lot of work on autonomous helicopters and helicopter mapping. But eventually, I decided to move from Carnegie Mellon to Stanford.
How did you get involved with driverless cars?
In 2004, my CMU colleague Red Whittaker engaged in an epic race called the DARPA Grand Challenge. The U.S. government had put up a million bucks as prize money for whoever could build a car that could drive itself. The original mission was to go from Los Angeles to Las Vegas, but that was quickly found not to be safe, so the race moved to going from Barstow, California, to Primm, Nevada, along a 140-mile premarked desert route. In the first race, which I did not participate in, Red had the best-performing team, but his robot went less than eight miles. DARPA scheduled a second race for the following year, and having come freshly to Stanford and having nothing to do because it was a new job, I decided, why not give it a try?
So we put together a team to build a robot car, Stanley, that could drive by itself in desert terrain. We started with a class of about 20 students. Some of them stayed on, some of them went as far away as they could when they realized what a consuming experience it is to build a robot of that proportion. And over the next several months, I spent most of my time in the Mojave Desert, behind the steering wheel, writing computer code on my laptop together with my graduate students.
What was the result? Well, we were lucky. Five teams finished that year, and in my book, they all won equally. But we happened to be the fastest by 11 minutes, so we got the $2 million check. [DARPA had doubled the prize for the second race.]
Had you expected to actually complete the race?
I always love to be careful with my expectations, so that life has pleasant surprises for me. But I was very proud -- not just proud of myself but proud of the community. There were about a thousand people from various countries and various grad schools and companies that jointly tried to solve this problem, and I think we achieved something big that day together.
Why did your project end up working so well?
Many of the people who participated in the race had a strong hardware focus, so a lot of teams ended up building their own robots. Our calculus was that this was not about the strength of the robot or the design of the chassis. Humans could drive those trails perfectly; it was not complicated off-road terrain. It was really just desert trails. So we decided it was purely a matter of artificial intelligence. All we had to do was put a computer inside the car, give it the appropriate eyes and ears, and make it smart.
In trying to make it smart, we found that driving is really governed not by two or three rules but by tens of thousands of rules. There are so many different contingencies. We had a day when birds were sitting on the road and flew up as our vehicle approached. And we learned that to a robot eye, a bird looks exactly the same as a rock. So we had to make the machine smart enough to distinguish birds from rocks.
In the end, we started relying on what we call machine learning, or big data. That is, instead of trying to program all these rules by hand, we taught our robot the same way we would teach a human driver. We would go into the desert, and I would drive, and the robot would watch me and try to emulate the behaviors involved. Or we would let the robot drive, and it would make a mistake, and we would go back to the data and explain to the robot why this was a mistake and give the robot a chance to adjust.
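What Thrun describes here -- recording a human's driving and fitting a model that imitates it -- is the essence of what is now called learning from demonstration, or behavioral cloning. A minimal sketch in Python; the two sensor features, the linear policy, and the synthetic data are illustrative assumptions, not Stanley's actual design:

```python
import numpy as np

# Learning from demonstration ("behavioral cloning"), in miniature:
# log (sensor reading, human control) pairs while a person drives,
# then fit a model that maps sensor readings to control outputs.

rng = np.random.default_rng(0)

# Hypothetical logged data: each row is [lateral offset from trail
# center, heading error]; the target is the human's steering command.
X = rng.uniform(-1, 1, size=(200, 2))
true_w = np.array([-0.8, -1.5])            # stand-in for the human's "policy"
y = X @ true_w + rng.normal(0, 0.05, 200)  # demonstrations are noisy

# Fit a linear policy by least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned policy now steers on its own: sensor reading in,
# steering command out.
steer = np.array([0.3, -0.1]) @ w
```

The real system used far richer sensor input and models, but the shape of the idea is the same: the human supplies examples, and the machine fits a policy to them rather than being programmed rule by rule.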
So you developed a robot that could learn?

Yes. Our robot was learning. It was learning before the race, and it was learning in the race.
Was this connected to the probabilistic learning that you had worked on?
Yes. That's the core of what this was all about.
How would you describe that to a layman?
When you raise a child, you don’t sit down and take all the rules of life, write them into a big catalog, and start reading the child all these individual rules from A to Z. When we raise a child, a lot of what we do is let the child experiment and guide the experimentation. The child basically has to process his own data and learn from experience.
We did exactly the same thing with the robot. We said, "Look, we could write down all the rules, but there are so many of them, it would take us so long. It'll be much better if we just let the robot grow up like a child." And when the robot made a mistake, we sat there as the parents, observed the mistake, and said, "This was a mistake; don't do it again." And the robot would then reason about what things to do differently to avoid making the same mistake in the future.
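The loop described above -- act, observe a mistake, adjust, try again -- is the heart of error-driven machine learning. As a toy sketch (the terrain features and labels here are invented for illustration, not from the project), a perceptron updates its weights only when its current prediction is wrong, just as the "parents" intervened only when the robot erred:

```python
import numpy as np

# Error-driven learning in miniature: a perceptron that adjusts its
# weights only when it makes a mistake, mirroring the
# "observe the error, correct, try again" loop described above.

# Hypothetical labeled examples: [roughness, slope] -> drivable (+1) or not (-1).
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.90, 0.80],
              [0.80, 0.90], [0.15, 0.10], [0.85, 0.95]])
y = np.array([1, 1, -1, -1, 1, -1])

w = np.zeros(2)
b = 0.0
for _ in range(20):                  # a few passes over the experience
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # a mistake: prediction disagrees
            w += yi * xi             # nudge the decision boundary toward
            b += yi                  # the correct answer

predictions = np.sign(X @ w + b)     # after training, the mistakes are gone
```

No rule catalog is ever written down; the boundary between "drivable" and "not drivable" emerges entirely from corrected mistakes.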
It was at that event that you met Larry Page?
Yes. Larry had a long-standing interest in many things and chose to come to the DARPA Grand Challenges. He came unnoticed, wearing sunglasses, but we hooked up during the morning. In most races that I’ve participated in, during the race you sweat a lot. In this race, there was nothing to do. We were just sitting on the sidelines and letting our creations compete on our behalf. So we started talking about robotics.
In the middle of the race, there was a point where I was absolutely certain that our car had failed. I'd been watching the progress of the little dot on the map very carefully, and our robot hadn't moved in six minutes. I knew the robot was never programmed to stop, so the fact that it had stopped had to mean that it was broken. It turned out, in hindsight, that the car had been paused by the organizers to give more space to another car that was ahead of us. The time didn't count against us, and the robot was perfectly fine. But for a moment, I was conceding defeat and trying to explain [to Larry] why we lost.
Did this lead to a connection with Google?
The connection with Google came a little later. Larry and I remained friends. There was another race two years later called the DARPA Urban Challenge, in which we came in second, after a team from Carnegie Mellon. Then, I got involved in Google Street View. I had a brilliant master’s student who effectively built a small version of Street View. And when I showed it to Larry, it became clear that the scope of photographing the world was beyond what a single master’s student at Stanford could accomplish, so we decided to join forces. The decision entailed my taking a sabbatical and joining Google as a full-time manager, and four of my students switched over, too.
Why driverless cars?

It’s a no-brainer. If you look at the twentieth century, the car has transformed society more than pretty much any other invention. But cars today are vastly unsafe. It’s estimated that more than a million people die every year because of traffic accidents. And driving cars consumes immense amounts of time. For the average American worker, it’s about 52 minutes a day. And they tie up resources. Most cars are parked at any point in time; my estimate is that I use my car about three percent of the time.
But if the car could drive itself, you could be much safer, and you could achieve something during your commute. You can also envision a futuristic society in which we share cars much better. Cars could come to you when you need them; you wouldn’t have to have private car ownership, which means no need for a garage, no need for a driveway, no need for your workplace to have as many parking spots.
Like Zipcars on a grand scale?
Yes, think car sharing on a grand scale. One of the difficulties in car sharing today is that you have to pick up the car being shared. If the car came to you, it'd be much, much easier.
Is this personal for you?

Absolutely. When I was 18, my best friend lost his life when his friend made a split-second poor decision to speed on ice and lost control of the vehicle and crashed into a truck. And one morning, when I myself was working on driverless cars, when we were expecting a government delegation to be briefed on my progress, my head administrator at Stanford went out to get breakfast for us and never came back. She was hit by a speeding car at a traffic light, and she went into a coma, never to wake up. This is extremely personal for me.
These moments make clear to me that while the car is a beautiful invention of society, there’s so much space for improvement. It’s really hard to find meaning in the loss of a life in a traffic accident, but I carry this with me every day. I feel that any single life saved in traffic is worth my work.
We are now at a point where the car drives about 50,000 miles between what I would call critical incidents -- moments when a human driver has to take over because otherwise something bad might happen. At this point, most of us believe the car drives better than the best human drivers. It keeps the lane better, it keeps its distance better, it drives more smoothly, it drives more defensively. My wife tells me, “When you are in the self-driving car, can you please let the car take over?”
Another big project at Google X, where you were working on the driverless car, was Google Glass. How did that come about, and how does it relate to the lab’s other projects?
One of the things that has excited me in working at Google and with Google leadership is thinking about big, audacious problems. We often call them “moonshot” problems.
The self-driving car was a first instance of this, where we set ourselves a target that we believed could not be met. When the project started, we decided to carve out a thousand miles of specific streets in California that were really hard for humans to drive, including Lombard Street in San Francisco and Highway 1, the coastal route from San Francisco to Los Angeles. Even I believed this was hard to do.
So we set this audacious goal, and it took less than two years to achieve it. And what it took to get there was a committed team of the world’s best people basically left alone to do whatever it took to reach the goal.
I wanted to test that recipe in other areas. So Google entrusted me with the founding of a new group called Google X. (The “X” was originally a placeholder until a correct name could be found.) We looked at a number of other audacious projects, and one of them was, can we bring computation closer to our own perception?
We hired an ingenious professor from the University of Washington, Babak Parviz, who became the project leader. And under his leadership, we developed early prototypes of Google Glass and shaped up the concept into something that people know today -- that is, a very lightweight computer equipped with a camera, a display, a trackpad, a speaker, Bluetooth, Wi-Fi, and a head-tracking unit. It’s a full computer, not dissimilar to the PCs I was playing with when I was a teenager, but it weighs only 45 grams.
How did you get from there into online education?
I went into education because I learned from my friends at Google how important it is to aim high. Ever since I started working at Google, I have felt I should spend my time on things that really matter when they are successful. I believe online education can make a difference in the world, more so than almost anything else I’ve done in my life.
Access to high-quality education is way too limited. The United States has the world’s most admirable higher education system, and yet it is very restrictive. It’s so hard to get into. I never got into it as a student. There are also fascinating opportunities that exist today that did not exist even 20 years ago.
The conventional paradigm in education is based on synchronicity. We know for a fact that students learn best if they’re paired one-on-one with a mentor, a tutor. Unfortunately, we can’t afford a tutor for every student. Therefore, we put students into groups. And in these groups, we force students, by and large, to progress at the same speed. Progression at the same speed can cause some students -- like me, when I was young -- to feel a bit underwhelmed. But it can also cause a lot of students to drop out.
A lot of students, when they aren’t quite up to the speed that’s been given to them, get a grade like a C. But instead of giving them more time to get to the mastery it would take to get an A, they get put into the next cohort, where they start with a disadvantage, with low self-esteem. And they often end up at that level for the rest of their student career.
Salman Khan, whom I admire, has made this point very clearly by showing that he can bring C-level math students to an A+ level if he lets them go at their own pace. So what digital media allow us to do is to invent a medium where students can learn at their own pace, and that is a very powerful idea. When you go at your own pace, we can move instruction toward exploration and play-based learning.
When I enter a video game, I learn something about a fictitious world. And in that video game, I’m allowed to go at my own pace. I’m constantly assessed -- assessment becomes my friend. I feel good when I master the next level. If you could only take that experience of a video game back into student learning, we could make learning addictive. My deep, deep desire is to find a magic formula for learning in the online age that would make it as addictive as playing video games.
So the “gamification” of education is a good thing?

I'm hesitant to say that gamification is a good thing, because it comes with many superficial things. And I don't wish to replace a master's degree in physics with mastery in Angry Birds. That's obviously not good enough. But on the other hand, when you play Angry Birds, there is no lecture, there are no office hours, there is no final exam. You get in, and many of us get addicted. So you could take the addiction and excitement and personalization of Angry Birds back into mainstream learning and marry the best of both worlds -- go after very deep academic topics but do it with playfulness, with student choice, with student empowerment, and with active exploration. Then, I think we can change everything.
I’ve read that you feel the high points of your life are when you feel stupid, because you're confronted with something that you don't understand and you have an opportunity to learn. Is that true?
Yes. It's true that for me the biggest moments are when I have a new insight. And one of the reasons why I love to venture into new territories is because I don't know what the solution is, so it affords me a chance to explore and to learn something new. With the desire to learn comes the acknowledgement that I don't know, otherwise no learning would take place. And in the presence of ignorance, it follows logically that I will make poor choices, make mistakes that in hindsight could have been easily avoided. Those are called failures. So failures are an essential component of the process of innovation. If there are no failures, I'm not really innovating.
Therefore, failures make me very proud. I'm actually happy to fail, because it gives me a chance to learn and iterate and avoid the same mistake in the future. I honestly believe that if we were to embrace failure as much as success, and celebrate failure as much as success, then we could shed the fear of failure. And if you shed the fear of failure, then you'd be much more able to make the right choices.
With your educational transformation, are you trying to create a system that will inculcate that kind of attitude?

I would hope so. We have a very strong emphasis on experiential learning, in which the student is asked to solve a problem. We don't give students the solution in advance; we only give them the solution after they have had a chance to solve it first. The reason we do this is that we believe the mind grows much faster by trying to find a solution itself. And the mind is open for input after having tried it.
Now, I have to admit that we have students that don't like this. They say, "I'm used to the teacher telling me the solution, and then I just learn that solution and practice it." And a number of students have left Udacity for that reason, because it feels kind of stressful to be asked a question without knowing the answer.
But the students that are actively engaged have all shown enormous growth in their ability to solve problems. And the growth doesn't come from listening to a famous professor. It comes almost exclusively from working on actual problems. The role of the professor then becomes to curate those challenges and make them gradually more difficult, so you can unfold the student’s full potential.
Your wife is a professor of comparative literature. Does this kind of approach work as well for the humanities as it does for the sciences?
I would argue that the humanities people have been a step ahead of most of the engineering professors in that they already employ what's called the “flipped classroom” model, in which the students read the literature at home and come to class to discuss it. That's different from what most engineering classes look like, where the professor tends to lecture. In both cases, I would argue, the students are forced to learn at the same pace at the same time. The magic of the online world will be that we can give people their own paths and their own pace and thereby really change everything.
Are you using the concepts and tools of artificial intelligence to develop this kind of personalized tutorial approach?
More and more so. One of the great advantages of teaching online is that we have enormous amounts of data about student behavior. And just as we were able to teach Stanley to maximize its chances to navigate a desert floor, we are using data to maximize the chances of educating a student. That might sound a little uninspirational, but to me, it's an amazing way to turn education into a truly data-driven science.
Your projects are extraordinarily radical. Is that what attracts you to them?

I aspire to work on subjects where a number of things have to be the case. One is they have to really change the world if they succeed. I need to be able to tell myself a story that, no matter how slim the chances of success, if it succeeds, it is going to massively change society for the better. I think that’s the case for safety in driving and transportation. It’s the case for bringing the Internet to everybody. And it’s the case for education.
I love to work on problems that are hard, because I love to learn. And all these problems have their own dimension of hardness. Some of them are more technological, some are more societal. When these things come together, I get very excited.
What drives or generates innovation? What creates a Sebastian Thrun?
I feel like I’m overrated. Most of what I do is just listen carefully to people. But truly great innovators, like Larry Page and Sergey Brin, or Elon Musk, or Mir Imran, bring to bear really great visions of where society should be, often fearless visions. And then just a good chunk of logical thinking -- as Elon Musk puts it, “thinking by first principles.” Not thinking by analogy, whereby we end up confining our thought to what’s the case today, but thinking about what should be the case, and how we should get there, and whether it is feasible to do it.
Once you have the vision and the clear thought together, what’s missing is just really good execution. And execution to me is all about the way you would climb a mountain you’ve never climbed before. If you waver along the way, if you debate, if you become uncertain about the objective, then you’re not going to make it. It’s important that you keep climbing. And it’s important that you acknowledge that you don’t have all the answers. So you will make mistakes, and you will have to back up, learn, and improve. That is a normal component of the innovative process. But you should not change your goal.
Are there drivers of innovation at the societal and national level? You’ve said that you moved from Germany to the United States because the more open, less hierarchical system here was one in which you felt more able to thrive.
Yes. I think there’s a genuine innovative element in America that you find in almost no other culture. And I believe it goes back to the founding of this wonderful country, where the people who came over had to be innovative to make their own rules and clear the land and build society up from scratch. And I think that gene, that genuinely American gene that is behind the American dream, remains here today, more than in any other place I know of. And it’s a wonderful thing, it makes me very happy to be part of such an amazing group of smart and driven entrepreneurs in Silicon Valley.
Does the government have a role to play in fostering innovation?
I'm hesitant to give governments a strong role. And the reason is that innovation in Silicon Valley happens so much faster than government intervention can keep up with. And there's always a danger, in all our legislative and regulatory efforts, of overspecifying a status quo after the world has moved on.
What about funding for basic public goods of innovation? Aside from Google, isn't private-sector R & D diminishing rather than expanding?
I would love to see more companies push basic innovation. For one thing, it's their responsibility, but even more so, any company that wishes to survive over the next 30 years needs to focus on basic innovation. Google has been criticized for investing in areas such as self-driving cars or mobile technologies such as Android. But I believe it's a big gamble that will pay off in the long term as the needs of society change. So if there's any point in the history of this country when basic research and basic innovation should really be funded, it's today, when societies are moving at a faster pace than ever before.
As robots get more autonomous, are we going to get to a point where we enter Isaac Asimov territory, where we need his “three laws of robotics” or something similar?
We have already created life forms that can't go extinct and that will be with us for a long time to come -- they're called viruses and computer worms. But I am a strong believer that at the end of the day, the technologies that we build, we also tame. In many dimensions of human skills, technology has long taken over. Your pocket calculator can calculate numbers better than most people. Computers can play chess better. Soon, computers will be able to drive better. Does this mean that robotic cars or self-driving cars are taking over the world? No. We will use them to make our lives better and make ourselves more effective.
Are we going to see emerging consciousness in robots or computers in the next generation or two?
There's emerging consciousness in my spell checker, which tells me what word is spelled wrong. There's emerging consciousness in my fuel-injection car. My elevator is conscious because it knows what floor to go to. None of this is consciousness on the level of human consciousness. But I believe we have reached a level where the line between human intelligence and machine intelligence is clearly blurred. In some cases, the intelligence or ability of machines is superior. Wikipedia knows more about the world than I do. I don't see a danger of machines becoming hostile in that context. People can be hostile and can use technology against other people. But I think machines will continue to be subordinate to humans, and that makes me happy.
There are people who feel that the prospects of life are diminishing and that the next generation is not going to have a better life than the previous one. Do you think your child’s life will be more interesting and exciting and filled with larger prospects than yours?
If you look at history, the fear that the next generation would be worse off than the previous one has been around for many centuries. It’s not a new fear. And it’s often due to the lack of imagination of people in understanding how innovation is moving forward. But if you graph progress and quality of life over time, and you zoom out a little and look at the centuries, it’s gotten better and better and better and better.
Our ability to be at peace with each other has grown. Our ability to have cultural interchanges has improved. We have more global languages, we have faster travel, we have better communication, we have better health. I think these trends will be sustained going forward, absolutely no question. If you look at the type of things that are happening right now in leading research labs, I see so many great new technologies coming out in the next ten to 20 years. It ought to be great.
So you disagree with the notion that innovation is dead, or that we’re in a great stagnation, or a period of decline?

I think anybody who believes that we are in a period of decline or stagnation probably hasn’t been paying attention. If you look at the way society has transformed itself in the last 20 years, it’s more fundamental than the 50 years before and maybe even bigger than the 200 years before.