Singapore police testing out public surveillance robot raises privacy concerns

An Xavier Robot On Patrol In Singapore
(Image credit: HTX Singapore)

Authorities in Singapore are testing out their latest AI-powered tool to improve public safety while raising some serious privacy concerns in the process.

A pair of Xaviers, wheeled, AI-powered robots equipped with 360-degree cameras, will be deployed in the heavily foot-trafficked Toa Payoh Central to monitor the area for "undesirable social behavior" like smoking and gathering in groups larger than five people, in order to help prevent the spread of the novel coronavirus, Business Insider reports.

"The deployment of ground robots will help to augment our surveillance and enforcement resources," said Lily Ling, Singapore Food Agency's director of East Regional Office, said in a statement. "For instance, the surveillance of illegal hawkers can be manpower intensive as officers need to be deployed at various areas across the island. The adoption of robotics technology can be used to enhance such operations, and reduce the need for our officers to do physical patrols."

The use of AI and robotics in law enforcement is nothing new; companies like Amazon have partnered with police to use facial recognition software such as Rekognition to identify criminal suspects or "persons of interest." But it's no less disconcerting for privacy advocates, who worry about the data collection capabilities of government agencies operating with few, if any, restraints on their use.


Analysis: Are snitch bots inevitable? That's up to us

As Gizmodo points out, Singapore has a notoriously strict legal culture, so something like a "snitch bot" patrolling public spaces might not be too out of place there. But law enforcement in the US has tried to deploy such technologies as well, most recently in New York City, where the NYPD debuted its own version of Boston Dynamics' "Spot" robot.

The swift and ferocious public outcry over a police robot patrolling predominantly poor neighborhoods led the NYPD to cancel its contract with Boston Dynamics. But the issues around these technologies persist as tech companies continue to partner with law enforcement and other government agencies, bringing new AI-powered tools into domains where civil liberties concerns aren't just valid but have often been borne out by those same authorities.

There's nothing we can do about the technology itself, which already exists, but how it is used will ultimately depend on how much we care to make it an issue.

John Loeffler
Components Editor
