When programming a robotic car, even finding a parking space can be a Herculean effort. Mike Montemerlo knows all about the effort required for complex AI routines, having programmed the driving decisions for two autonomous driving vehicles at the DARPA Challenges. We asked him to pop the hood and let us take a look inside his creations.
TechRadar: Would you describe some of the technical functions in the Junior and Stanley robotic cars and some of the challenges you had when writing the software code?
Mike Montemerlo: Software for robotic cars can be roughly broken up into two parts: perception and decision-making (sometimes called planning). The perception software takes in raw sensor data and builds a model of the world around the robot.
In the case of autonomous driving, we are most interested in the hazards around the vehicle, such as kerbs, other cars, pedestrians, cyclists and signposts. The decision-making software, or 'planner', combines this world model with its goal, and decides on an action to take that is safe, rule-abiding and moves the car towards the goal.
Some of the specific tasks that Junior needs to handle while driving include obstacle detection and avoidance, localisation and lane centring, detecting and tracking other vehicles, and planning routes to far-off checkpoints. Robotic perception and decision-making are very difficult in the real world because the real world is uncertain.
Our sensors are noisy, and our actions don't always work out in the way we would expect. For this reason, we take a probabilistic approach to robotics, modelling the noise on our sensors and actions.
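The probabilistic approach Montemerlo describes can be illustrated with a minimal one-dimensional Kalman filter – a toy sketch of the idea, not Junior's actual code, with all the numbers invented:

```python
# Minimal 1D Kalman filter: fuse noisy range measurements of an obstacle
# into a Gaussian belief (mean, variance). Purely illustrative.

def kalman_update(mean, var, measurement, meas_var):
    """Fold one noisy measurement into the belief."""
    k = var / (var + meas_var)           # Kalman gain: trust in measurement vs prior
    new_mean = mean + k * (measurement - mean)
    new_var = (1 - k) * var
    return new_mean, new_var

def kalman_predict(mean, var, motion, motion_var):
    """Account for the robot's own (noisy) motion."""
    return mean + motion, var + motion_var

# Belief about distance to an obstacle, initially very uncertain.
mean, var = 0.0, 1000.0
for z in [9.8, 10.3, 9.9, 10.1]:         # noisy laser readings (metres)
    mean, var = kalman_update(mean, var, z, meas_var=0.5)
print(round(mean, 2), round(var, 3))     # belief tightens around ~10 m
```

Each noisy reading narrows the variance, so the robot's confidence grows even though no single sensor reading is trusted outright.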
TR: How are developments in robot cars helping the normal models being produced for sale now?
MM: Cars are understanding the world better and in some cases are actually taking small actions to help make you safer. Anti-lock brakes are a very simple example. They measure the speed of your wheels and modulate the brakes so the wheels don't lock, giving you steering control in a skid.
Even when you're braking very hard, you keep control of the steering. Now there are things like adaptive cruise control, where the car maintains its distance to the car in front and adjusts your speed so you don't have to constantly fiddle with your controls. You can think of that as the car taking a little bit of control away from you, being a little bit more autonomous. It's kind of like a back-seat driver, where the robot is saying, 'You're going to merge into traffic, but there's a car you haven't seen'.
The car can shake your seat or apply the brakes or otherwise do something to hopefully avoid an accident before it happens.
TR: Describe some of the programming that is required for something like pushing a button and saying, 'Take me to London'. What are the differences in programming for the various tasks needed?
MM: Junior thinks about the problem at several different levels. First, he thinks about it on the global level, like your GPS device, guiding you from A to B. That's an easy problem to solve. The next level is Junior thinking of the world in terms of trajectories.
He has a short – maybe 100 feet long – trajectory that he's planning in order to stay centred in the lane and avoid the kerbs, and he has to make decisions like, 'What lane should I be in to make the fastest progress?' and 'Will I have enough time to get back in the lane I want to be in to make my turn?'
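That kind of lane decision can be sketched as a cost function over candidate lanes – the lane data, weights and the `lane_cost` helper below are all invented for illustration, not taken from Junior's planner:

```python
# Toy lane choice: penalise slow lanes and lanes the car can't leave
# in time to make the upcoming turn. All numbers are made up.

def lane_cost(lane_speed, target_speed, lanes_from_turn, dist_to_turn_m,
              lane_change_m=80.0):
    """Lower cost = better lane."""
    slowness = max(0.0, target_speed - lane_speed)     # m/s of progress lost
    # distance needed to merge back across the required number of lanes
    merge_needed = lanes_from_turn * lane_change_m
    infeasible = 1000.0 if merge_needed > dist_to_turn_m else 0.0
    return slowness + infeasible

# Right turn in 150 m; the rightmost lane is the turn lane.
lanes = [
    {"name": "right",  "speed": 8.0,  "from_turn": 0},
    {"name": "middle", "speed": 13.0, "from_turn": 1},
    {"name": "left",   "speed": 15.0, "from_turn": 2},
]
best = min(lanes, key=lambda l: lane_cost(l["speed"], 14.0,
                                          l["from_turn"], 150.0))
print(best["name"])
```

Here the fast left lane is ruled out because there isn't enough road left to merge back for the turn, so the planner trades a little speed for feasibility.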
TR: What are some of the complexities associated with autonomous driving?
MM: There are many complexities. At the level the Urban Challenge demonstrated, cars have to stay in their lanes, merge safely and not run into other cars... That gets you a certain level of reliability – but in order to really drive well you have to be able to drive with other humans, aggressive human drivers.
You have to be able to understand and predict the behaviour of pedestrians and bicycles that the [robotic] cars can't see right now. And you have to be capable of handling all the weird events that happen during regular driving – like the mattress that falls off the truck in front of you as you burn down the highway.
Instead of seeing the world as black and white, where there's one right answer and the rest are wrong answers, you treat the world as noisy and maintain a probability distribution over the possible answers.
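That idea – keeping a distribution over the answers rather than committing to a single label – can be sketched as a simple Bayes update; the classes and likelihood numbers here are invented for illustration:

```python
# Bayesian update over what an observed object might be, instead of
# forcing an immediate hard decision. All probabilities are made up.

def bayes_update(prior, likelihood):
    """prior: {label: P(label)}; likelihood: {label: P(observation | label)}."""
    post = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

belief = {"pedestrian": 0.5, "mailbox": 0.5}
# Two observations in which the object appears to move slightly.
for obs_likelihood in [{"pedestrian": 0.7, "mailbox": 0.2},
                       {"pedestrian": 0.8, "mailbox": 0.1}]:
    belief = bayes_update(belief, obs_likelihood)
print(belief)   # probability mass shifts towards "pedestrian"
```

Rather than flip-flopping between hard labels, the belief accumulates evidence and only gradually commits.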
TR: How safe do the robotic cars have to be?
MM: There are over 40,000 people who die in traffic accidents every year, but if you look at how many miles we drive, we're actually very safe drivers. We go 70 million miles between fatalities. So if we really want to build a robot that's safer than a person, it has to be able to drive more than 70 million miles on average without a serious accident.
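Those figures roughly check out as back-of-the-envelope arithmetic, if you assume around three trillion vehicle-miles driven per year in the US – an assumed figure, not one given in the interview:

```python
# Sanity check of the interview's numbers: ~40,000 road deaths per year
# against roughly 3 trillion vehicle-miles (the annual-miles figure is
# an assumption for illustration).

annual_fatalities = 40_000
annual_miles = 3_000_000_000_000
miles_per_fatality = annual_miles / annual_fatalities
print(f"{miles_per_fatality:,.0f}")   # ~75 million miles per fatality
```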
TR: The UK is starting a similar robotic car competition. What advice do you have for them?
MM: I think one of the strengths of the DARPA Grand Challenges was that no human interaction was allowed. You pressed the start button on your robot and it either finished the race or it didn't. In that sense, it's a true test of robot capability.
If we are serious about autonomous robots doing real work in the world, we need to be confident that they can do their job without humans looking over their shoulders. Some robot competitions mix autonomy and human interaction, but I think this makes the resulting systems harder to evaluate.
TR: What are some of the technical challenges involved when a car needs to be able to sense the road?
MM: We take for granted how easy it is for us as humans to look at a street scene and understand what we are seeing. We can instantly pick out all of the cars, pedestrians and street signs. We can also make very good guesses about the future behaviour of these objects, like knowing that a pedestrian is about to move into a crosswalk.
While robots can detect the presence of pedestrians using lasers and cameras, deciding if what they are seeing is a pedestrian or just a mailbox on the side of the road is a big challenge. Predicting what that pedestrian will do in the future is even harder.
TR: Which is more important: the quality of the sensor or the quality of the software?
MM: Good sensing is important. For example, the availability of 3D laser range finders has allowed us to detect kerbs at long range and track other cars even if the road is not flat. However, you still need good algorithms to interpret that data. Robots need to understand their surroundings at a higher level, not just perceive them.
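As a toy illustration of interpreting laser data at a higher level, a kerb can show up as a height step between neighbouring range samples – a deliberately simplified sketch with invented scan data, nothing like a production detector:

```python
# Toy kerb detector: flag a kerb where consecutive height samples from a
# laser scan jump by more than a threshold. Scan values are invented.

def find_kerb(heights, step_threshold=0.08):
    """Return the index of the first height step exceeding the
    threshold, or None if the surface looks flat."""
    for i in range(1, len(heights)):
        if abs(heights[i] - heights[i - 1]) > step_threshold:
            return i
    return None

scan = [0.00, 0.01, 0.00, 0.02, 0.14, 0.15, 0.15]   # metres above ground
print(find_kerb(scan))   # kerb edge at index 4
```

The point is the one Montemerlo makes: the raw numbers alone mean nothing until software turns them into a statement like 'there is a kerb here'.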
TR: What did you learn about robotics at the last DARPA Challenge?
MM: Both the desert and urban robot races have shown us how much the robotics field has progressed in recent years. Five teams finished the desert race, and six teams finished the much more complex urban race just two years later. This makes us hopeful that autonomous cars are something that we could see in the near future, and not just in science fiction.
TR: How will robot cars develop over the next 10 years?
MM: They won't become autonomous overnight. Instead, they will become a little more autonomous every year through the introduction of high-end features. Anti-lock brakes and adaptive cruise control are two examples of features that take a little control away from the driver in exchange for extra safety or comfort.
At some point, our cars may be completely autonomous, but only in certain environments like a special robot lane on the highway or in stop-and-go traffic. Maybe you can hop out of your car and have it park itself.
TR: What advice do you have for a computing enthusiast who wants to become a professional roboticist?
MM: One lesson that the Grand Challenges emphasised for us is the importance of experimentation. We would often argue about the 'right' way to solve a problem. However, when we took the robot out into the real world, some of the problems we thought would be easy were quite difficult – and vice versa.
We learned a great deal about robotic driving by collecting real sensor data in the desert, trying out different approaches to processing it, and throwing most of them away. I think the same lesson applies to students or hobbyists interested in robotics.
TR: Given the scale of the challenge, how can computers hope to process everything they need?
MM: I think it's important to remember that robots and people are different, that there are certain things that robots are really good at that people aren't. And there are certain things that people are really good at and robots aren't.
First published in PC Plus, Issue 275