When programming a robotic car, even finding a parking space can be a Herculean effort. Mike Montemerlo knows all about the effort required for complex AI routines, having programmed the driving decisions for two autonomous vehicles at the DARPA Challenges. We asked him to pop the hood and let us take a look inside his creations.
TechRadar: Would you describe some of the technical functions in the Junior and Stanley robotic cars and some of the challenges you had when writing the software code?
Mike Montemerlo: Software for robotic cars can be roughly broken up into two parts: perception and decision-making (sometimes called planning). The perception software takes in raw sensor data and builds a model of the world around the robot.
In the case of autonomous driving, we are most interested in the hazards around the vehicle, such as kerbs, other cars, pedestrians, cyclists and signposts. The decision-making software, or 'planner', combines this world model with its goal, and decides on an action to take that is safe, rule-abiding and moves the car towards the goal.
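To make that split concrete, here is a minimal sketch of one sense-think-act cycle. It is not Junior's actual code; every name, data format and threshold in it is an illustrative assumption.

```python
# A toy perception/planning split, assuming "raw sensor data" arrives as a
# list of range readings. Not Junior's code; all names and numbers are made up.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)  # hazards near the car

def perceive(raw_readings):
    """Perception: turn raw sensor readings into a model of the world."""
    model = WorldModel()
    for reading in raw_readings:
        if reading["range_m"] < 30.0:   # crude "this is close" threshold
            model.obstacles.append(reading)
    return model

def plan(model, goal):
    """Planner: choose a safe action that moves the car toward the goal."""
    if model.obstacles:
        return {"action": "brake"}      # safety overrides progress
    return {"action": "drive", "toward": goal}

# One cycle of the loop: sense, build a world model, decide.
command = plan(perceive([{"range_m": 12.4}]), goal=(52.1, -0.3))
print(command)                          # {'action': 'brake'}
```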
Some of the specific tasks that Junior needs to handle while driving include obstacle detection and avoidance, localisation and lane centring, detecting and tracking other vehicles, and planning routes to far-off checkpoints. Robotic perception and decision-making are very difficult in the real world because the real world is uncertain.
Our sensors are noisy, and our actions don't always work out in the way we would expect. For this reason, we take a probabilistic approach to robotics, modelling the noise on our sensors and actions.
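As a toy illustration of that probabilistic approach (our sketch, not Stanford's code), here is a single Bayes update that fuses repeated noisy readings into a belief about whether an obstacle is present. The two sensor-model probabilities are assumed values standing in for a measured noise model.

```python
# Instead of trusting one noisy reading, keep a belief P(obstacle) and
# update it with Bayes' rule using an explicit model of the sensor's errors.
P_HIT_GIVEN_OBSTACLE = 0.9   # assumed detection rate
P_HIT_GIVEN_FREE = 0.2       # assumed false-positive rate

def bayes_update(prior, sensor_says_hit):
    """Update P(obstacle) after one noisy binary sensor reading."""
    if sensor_says_hit:
        like_obs, like_free = P_HIT_GIVEN_OBSTACLE, P_HIT_GIVEN_FREE
    else:
        like_obs, like_free = 1 - P_HIT_GIVEN_OBSTACLE, 1 - P_HIT_GIVEN_FREE
    numerator = like_obs * prior
    return numerator / (numerator + like_free * (1 - prior))

belief = 0.5                           # start maximally uncertain
for reading in [True, True, False, True]:
    belief = bayes_update(belief, reading)
print(f"P(obstacle) = {belief:.3f}")   # noisy readings, converging belief
```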
TR: How are developments in robot cars helping the normal models being produced for sale now?
MM: Cars are understanding the world better, and in some cases are actually taking small actions to help make you safer. Anti-lock brakes are a very simple example: they measure the speed of your wheels and modulate the brakes to keep you in control during a skid.
Even when you're braking very hard, you keep control of the steering. Now there are things like adaptive cruise control, where the car maintains the distance to the car in front and adjusts your speed so that you don't have to constantly fiddle with your controls. You can think of that as the car taking a little bit of control away from you, being a little bit more autonomous. It's kind of like a back-seat driver, where the robot is saying, 'You're about to merge into traffic, but there's a car you haven't seen'.
The car can shake your seat, apply the brakes or otherwise intervene to avoid an accident before it happens.
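The adaptive-cruise-control loop he describes can be sketched as a simple proportional controller. The gain and headway below are made-up assumptions, not any manufacturer's values; the point is only the shape of the loop: measure the gap, compare it with a desired headway, nudge your own speed.

```python
# A simplified adaptive-cruise-control sketch: hold roughly `headway_s`
# seconds of gap behind the lead car, never exceeding the driver's set speed.
def acc_speed_command(own_speed_mps, gap_m, set_speed_mps,
                      headway_s=2.0, gain=0.5):
    """Return a target speed that trims toward the desired following gap."""
    desired_gap_m = headway_s * own_speed_mps
    gap_error_m = gap_m - desired_gap_m        # positive: too far, speed up
    target = own_speed_mps + gain * gap_error_m
    return min(target, set_speed_mps)          # respect the driver's setting

# Closing on a slower car 30 m ahead while doing 25 m/s (~56 mph):
print(acc_speed_command(own_speed_mps=25.0, gap_m=30.0, set_speed_mps=30.0))
# -> 15.0, i.e. the controller commands a slowdown to open the gap
```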
TR: Describe some of the programming that is required for something like pushing a button and saying, 'Take me to London'. What are the differences in programming for the various tasks needed?
MM: Junior thinks about the problem at several different levels. First, he thinks about it on the global level, like your GPS device guiding you from A to B. That's an easy problem to solve. The next level is Junior thinking of the world in terms of trajectories.
He has a short trajectory – maybe 100 feet long – that he's planning in order to stay centred in the lane and avoid the kerbs, and he has to make decisions like, 'What lane should I be in to make the fastest progress?' and 'Will I have enough time to get back into the lane I want to be in to make my turn?'
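The global level he mentions is essentially shortest-path search over a road graph. A minimal sketch follows, using a hand-made graph and Dijkstra's algorithm for brevity; Junior's actual route planner is more sophisticated than this.

```python
# Global route planning as graph search over a (made-up) road network.
import heapq

ROADS = {  # node -> [(neighbour, distance_km)]
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.5)],
    "D": [],
}

def shortest_route(start, goal):
    """Dijkstra search: returns (total_km, path) or None if unreachable."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist in ROADS[node]:
            if nxt not in visited:
                heapq.heappush(frontier, (cost + dist, nxt, path + [nxt]))
    return None

print(shortest_route("A", "D"))   # (4.5, ['A', 'B', 'C', 'D'])
```

The trajectory level then works beneath this route, continuously replanning that short local path against the world model.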
TR: What are some of the complexities associated with autonomous driving?