MM: There are many complexities. At the level the Urban Challenge demonstrated, cars have to stay in their lanes, merge safely and not run into other cars. This gives you a certain level of reliability – but in order to really drive well you have to be able to drive with other humans, including aggressive human drivers.
You have to be able to understand and predict the behaviour of pedestrians and bicycles that the [robotic] cars can't see right now. And you have to be capable of handling all the weird events that happen during regular driving – like the mattress that falls off the truck in front of you as you burn down the highway.
Instead of seeing the world as black and white, where there's one right answer and the rest are wrong answers, you treat the world as noisy and maintain a probability distribution over the possible answers.
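The probabilistic approach described here can be sketched as a simple Bayes update: rather than committing to a single answer, the robot keeps a probability for each hypothesis and reweights those probabilities as noisy evidence arrives. The hypotheses, likelihood values and function names below are invented for illustration, not taken from any real system.

```python
def bayes_update(prior, likelihood):
    """Reweight hypothesis probabilities by how well each one explains
    a noisy measurement, then renormalise so they sum to 1."""
    posterior = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two competing hypotheses about a roadside object (illustrative numbers).
belief = {"pedestrian": 0.5, "mailbox": 0.5}

# A laser return suggests the object moved slightly between scans:
# movement is likely for a pedestrian, unlikely for a mailbox.
moved_likelihood = {"pedestrian": 0.9, "mailbox": 0.1}
belief = bayes_update(belief, moved_likelihood)
print(belief)  # belief shifts strongly toward "pedestrian"
```

Instead of a single hard answer, the robot carries both possibilities forward and lets later evidence settle the question.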
TR: How safe do the robotic cars have to be?
MM: There are over 40,000 people who die in traffic accidents every year, but if you look at how many miles we drive, we're actually very safe drivers. We go 70 million miles between fatalities. So if we really want to build a robot that's safer than a person, it has to be able to drive more than 70 million miles on average without a serious accident.
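The figures quoted allow a quick sanity check: 40,000 fatalities a year at 70 million miles per fatality implies roughly 2.8 trillion miles driven annually, which is in the right ballpark for total US road travel (commonly cited as around 3 trillion miles – that comparison is my addition, not MM's).

```python
# Back-of-the-envelope check of the interview's safety figures.
fatalities_per_year = 40_000
miles_per_fatality = 70_000_000

total_miles = fatalities_per_year * miles_per_fatality
print(f"{total_miles:.1e}")  # 2.8e+12 — about 2.8 trillion miles per year
```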
TR: The UK is starting a similar robotic car competition. What advice do you have for them?
MM: I think one of the strengths of the DARPA Grand Challenges was that no human interaction was allowed. You pressed the start button on your robot and it either finished the race or it didn't. In that sense, it's a true test of robot capability.
If we are serious about autonomous robots doing real work in the world, we need to be confident that they can do their job without humans looking over their shoulders. Some robot competitions mix autonomy and human interaction, but I think this makes the resulting systems harder to evaluate.
TR: What are some of the technical challenges involved when a car needs to be able to sense the road?
MM: We take for granted how easy it is for us as humans to look at a street scene and understand what we are seeing. We can instantly pick out all of the cars, pedestrians and street signs. We can also make very good guesses about the future behaviour of these objects, like knowing that a pedestrian is about to move into a crosswalk.
While robots can detect the presence of pedestrians using lasers and cameras, deciding if what they are seeing is a pedestrian or just a mailbox on the side of the road is a big challenge. Predicting what that pedestrian will do in the future is even harder.
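The simplest form of the prediction MM mentions is a constant-velocity model: assume the pedestrian keeps moving the way they are moving now. Real systems are far more sophisticated; the positions, speeds and function name below are invented purely for illustration.

```python
def predict_position(pos, vel, dt):
    """Constant-velocity prediction: assume the pedestrian keeps
    walking in the same direction at the same speed for dt seconds."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

# Pedestrian observed at the kerb, walking toward the road
# (coordinates in metres in the car's frame; numbers invented).
pos = (10.0, 2.0)
vel = (0.0, 1.4)  # a typical walking speed, heading straight across

where_in_2s = predict_position(pos, vel, 2.0)
print(where_in_2s)  # predicted to be well into the crosswalk within two seconds
```

Even this crude model lets a planner ask "will our paths cross?" a couple of seconds ahead, which is the decision that actually matters.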
TR: Which is more important: the quality of the sensor or the quality of the software?
MM: Good sensing is important. For example, the availability of 3D laser range finders has allowed us to detect kerbs at long range and track other cars even if the road is not flat. However, you still need good algorithms to interpret that data. Robots need to understand their surroundings at a higher level, not just perceive them.
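As a toy example of the kind of interpretation MM describes, a kerb can show up in 3D laser data as a sharp step in ground height. The sketch below assumes an already-extracted line of height samples scanned outward from the car; the threshold, spacing and sample values are all invented.

```python
def find_kerb(heights, spacing, min_step=0.08):
    """Scan a line of ground-height samples outward from the car and
    flag the first jump big enough to be a kerb rather than road
    roughness (an 8 cm threshold is a guess, not a real calibration)."""
    for i in range(1, len(heights)):
        if abs(heights[i] - heights[i - 1]) >= min_step:
            return i * spacing  # lateral distance to the kerb, in metres
    return None  # no kerb found in this scan line

# Simulated heights every 0.25 m: flat road, then a 10 cm kerb.
scan = [0.0, 0.01, 0.0, 0.02, 0.12, 0.13]
print(find_kerb(scan, 0.25))  # 1.0 — kerb about a metre to the side
```

The point is the one MM makes: the raw ranges are easy to get; turning them into a statement like "the kerb is a metre away" is where the algorithmic work lives.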
TR: What did you learn about robotics at the last DARPA Challenge?
MM: Both the desert and urban robot races have shown us how much the robotics field has progressed in recent years. Five teams finished the desert race, and six teams finished the much more complex urban race just two years later. This makes us hopeful that autonomous cars are something that we could see in the near future, and not just in science fiction.
TR: How will robot cars develop over the next 10 years?
MM: They won't become autonomous overnight. Instead, they will become a little more autonomous every year through the introduction of high-end features. Anti-lock brakes and adaptive cruise control are two examples of features that take a little control away from the driver in exchange for extra safety or comfort.
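Adaptive cruise control, one of the features mentioned, can be caricatured as a feedback law on the gap to the car ahead. This is a toy sketch under invented gains and distances, not any production controller.

```python
def acc_accel(gap, closing_speed, desired_gap=30.0, kp=0.2, kd=0.5):
    """Toy adaptive-cruise-control law: command acceleration when the
    gap to the lead car is bigger than desired, braking when the gap
    is too small or closing quickly. Gains and gap are invented."""
    return kp * (gap - desired_gap) - kd * closing_speed

# Too close (20 m vs a desired 30 m) and closing at 2 m/s:
print(acc_accel(gap=20.0, closing_speed=2.0))  # -3.0 => brake
```

Each such feature hands one small control decision to the car, which is exactly the incremental path to autonomy described above.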
At some point, our cars may be completely autonomous, but only in certain environments like a special robot lane on the highway or in stop-and-go traffic. Maybe you can hop out of your car and have it park itself.
TR: What advice do you have for a computing enthusiast who wants to become a professional roboticist?
MM: One lesson that the Grand Challenges emphasised for us is the importance of experimentation. We would often argue about the 'right' way to solve a problem. However, when we took the robot out into the real world, some of the problems we thought would be easy were quite difficult – and vice versa.
We learned a great deal about robotic driving by collecting real sensor data in the desert, trying out different approaches to processing it, and throwing most of them away. I think this is true for students or hobbyists interested in robotics.
TR: Given the scale of the challenge, how can computers hope to process everything they need?
MM: I think it's important to remember that robots and people are different, that there are certain things that robots are really good at that people aren't. And there are certain things that people are really good at and robots aren't.
First published in PC Plus, Issue 275