Google creates its own AI rulebook - promises it won't go all Terminator on us

Google may have quietly dropped its 'Don't Be Evil' tagline, but it is still firmly in the 'we're not planning to end the world' camp when it comes to AI.

To prove it, it has released a comprehensive list of guidelines that it will adhere to in everything it does with AI.

The first batch of rules reads like something out of the Scouts' handbook and goes under the banner: Objectives for AI Applications. It comprises seven rules that cover everything from privacy to accountability to upholding high standards of scientific excellence.

Making the rulebook

Where it gets really interesting, though, is the section titled: AI Applications We Will Not Pursue. This is where Google outlines what it won't do. Choice keywords here include: weapons, harm, surveillance, violating... that sort of thing.

"We will not design or deploy AI in technologies that cause or are likely to cause overall harm," explains Google. 

"Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints."

Google concludes: "We believe these principles are the right foundation for our company and our future development of AI. 

"We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time."

This is all great, but it's also exactly what the T-800 would say if it were trying to blend in with the real world.
