The company uses multiple approaches to detection: comparing an image to a reference, analysing the image itself and comparing it against other search results. A 'classifier' gathers statistics describing an image, such as skin tones, shadows, facial hair and other attributes. These classifier signals are fed into what Baluja calls 'visual rank', which determines the accuracy of the search: they decide which images form the basis for what a person normally looks like. That's why, in the Lindsay Lohan search, a cartoon of her might appear much lower in the results.
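The idea Baluja describes can be sketched roughly in code. The following is a minimal, hypothetical Python model, not Google's actual system: the feature names, the vectors and the consensus-similarity scoring are all illustrative assumptions. Each image is reduced to a classifier feature vector, and images are ranked by how closely they match the consensus of the result set, so an outlier such as a cartoon sinks to the bottom.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def visual_rank(images):
    """Rank images by similarity to the 'consensus' feature vector.

    `images` maps an image id to a (hypothetical) classifier feature
    vector, e.g. skin-tone, shadow and facial-detail scores. Images far
    from the consensus -- a cartoon among photographs -- rank lower.
    """
    n = len(next(iter(images.values())))
    consensus = [sum(vec[i] for vec in images.values()) / len(images)
                 for i in range(n)]
    return sorted(images,
                  key=lambda img: cosine_similarity(images[img], consensus),
                  reverse=True)

# Hypothetical feature vectors: [skin_tone, shadow, facial_detail]
results = {
    "photo_1": [0.9, 0.4, 0.8],
    "photo_2": [0.85, 0.5, 0.75],
    "cartoon": [0.1, 0.0, 0.2],
}
print(visual_rank(results))  # the cartoon ranks last
```

The real system doubtless uses far richer features and learned weights; the point of the sketch is only the shape of the pipeline: classifiers produce per-image statistics, and a ranking step rewards agreement with the bulk of the results.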
Eventually, Google will apply its computer vision research to more than just faces. For example, Baluja explained that Google plans to use a visual search system for products. When you type the search term 'Apple iPhone', you might want a computer vision search that shows the device in use in the field, the iconic image from Apple or the cartoons in which people make fun of the 'Jesus phone'. Baluja says Google is agnostic about the kinds of images it searches; the main goal is to provide the results it thinks its customers want, which could in turn raise advertising revenues.
Google Android update
More and more users are searching the Internet from their phones, and the phone itself is evolving into a computer platform. In the future, there may be no desktop or laptop computers; instead, the only computer you use could be your phone, especially once computer scientists figure out complex issues such as speech-to-text recognition, cell phone video projection and virtual keyboards.
Google knows that this platform shift will happen and has chosen to be much more involved in the core architecture. It's interesting to note that Google fans were disappointed when they realised Google would not be releasing an Apple iPhone competitor. Yet it turns out Google has much loftier ambitions: to deliver the OS for future computers.
In another interesting twist to the story, Android itself is not a fully functional OS with all the applets and features you would ever need. Instead, Google has tapped a much more extensive resource: third-party developers. The model is similar to the one Nokia uses with the Symbian OS (which it just acquired), where the most innovative applications are all user created. Google's recent developer challenge led to some amazingly innovative apps that can determine your location and help you find a taxi, feed weather information to your location-aware device or let you search for movie showtimes.
The Android platform is a departure from the hub-and-spoke model of the iPhone. There is no home screen. Instead, all apps can run concurrently and are 'application aware'. You can tap a contact and choose an option to see a map of where that person lives, dial their mobile phone, copy the data to a clipboard or search for them on the web. Applications can share information with each other as well: a contact program can pull information from a spreadsheet.
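The 'application aware' model amounts to a dispatch mechanism: applications publish handlers for actions, and any other application can invoke them with its own data. The following is a hypothetical Python sketch of that concept only; it is not the actual Android API, and the action names and handlers are invented for illustration.

```python
# Hypothetical sketch of 'application-aware' dispatch: apps register
# handlers for named actions, so one app's data (a contact record)
# can be handed to another app (maps, dialler) without either app
# knowing about the other in advance.
handlers = {}

def register(action, handler):
    """An application publishes a handler for a named action."""
    handlers[action] = handler

def dispatch(action, data):
    """Any application can invoke a registered handler on its data."""
    return handlers[action](data)

# A maps app and a phone app register what they can do.
register("map", lambda contact: f"Mapping {contact['address']}")
register("dial", lambda contact: f"Dialling {contact['mobile']}")

# A contacts app hands its record to whichever app claimed the action.
contact = {"name": "Alice", "address": "1 Main St", "mobile": "555-0101"}
print(dispatch("map", contact))   # Mapping 1 Main St
print(dispatch("dial", contact))  # Dialling 555-0101
```

The design choice this models is loose coupling: the contacts app never imports the maps app, so new applications can add actions to existing data without anyone's code changing.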
Oddly, while every other Google project involves hard factual data and some interesting implications for real-world use, most of the Android project seems to depend on users to tap the power of the OS. For example, when we spoke with Erick Tseng, the Android product manager, he could not confirm for us whether Android would support multitouch (where you can zoom in or pan across an image using your fingers) or haptics (where the device provides a tactile response).