The White House is launching an AI 'bill of rights'


The White House has released a blueprint for an “AI bill of rights” that aims to increase the privacy and safety of American citizens who encounter automated systems.

The blueprint, developed by the government’s Office of Science and Technology Policy (OSTP), promotes five key principles for AI safety: “Safe and Effective Systems”, “Algorithmic Discrimination Protections”, “Data Privacy”, “Notice and Explanation”, and “Human Alternatives, Consideration, and Fallback”.

The blueprint will apply to any automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” the White House wrote. 

Regulating artificial intelligence

At a glance, the ideas in the blueprint are exactly the sort of thing the federal government should be addressing as businesses and governments across the world move toward automating their processes.

The trouble is that these are just ideas. The blueprint sets out what the federal government believes should become legislation, but none of it is legally binding, and, fundamentally, nothing has changed.

The blueprint also takes the rise of artificially intelligent automation systems as an inevitability, rather than a threat to be opposed. 

The OSTP’s heart is in the right place in seeking to protect marginalised Americans from predictive policing (whereby an automated system flags a person as likely to commit a crime before any crime has occurred, often on the basis of ethnicity or gender), but it can do better than simply trusting that businesses will make the changes it proposes to their automated systems.


Notably, the OSTP wants human oversight to be the “fallback” when automation fails, never the primary implementation of a system, even in scenarios such as healthcare and insurance where human-led decision-making might make for a safer system.

Speaking to Wired, Annette Zimmermann, who researches AI, justice and moral philosophy at the University of Wisconsin-Madison, argued that the blueprint’s failure to consider simply not deploying automation at all is the biggest threat to Americans’ right to justice.

“We can’t articulate a bill of rights without considering non-deployment, the most rights-protecting option,” she claimed.

Elsewhere in the world, legislation taking a hard line against AI’s role in people’s lives could be on the way.

This year, the European Parliament has deliberated on redrafting the European Union's AI Act, with some MEPs supporting a ban on predictive policing. A vote is expected to take place by the end of 2022, and those leading the amendment process have stated that predictive policing “violates the presumption of innocence as well as human dignity.”

The White House’s proposals could be interesting to watch develop, but, compared with efforts in the EU, they may not go far enough, and could ultimately lead to nothing.

Luke Hughes
Staff Writer

Luke Hughes holds the role of Staff Writer at TechRadar Pro, producing news, features and deals content across topics ranging from computing to cloud services, cybersecurity, data privacy and business software.