‘Human lives are already being lost’: Open letter signed by hundreds of Google employees requests CEO reject ‘unethical and dangerous’ US military AI use

Google Gemini AI logo on a smartphone with Google background
(Image credit: Shutterstock/Poetra.RH)

  • Google employees sign open letter to CEO over concerns of military AI use
  • AI developers do not want their technology used for 'classified purposes'
  • Google is currently negotiating a contract with the Pentagon

Over 600 Google employees have signed a letter calling on CEO Sundar Pichai to reject any uses of its AI technology for military purposes.

The open letter highlights the serious ethical concerns the staff have, stating, “Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we are playing a key role in building.”

“As people working on AI, we know that these systems can centralize power and that they do make mistakes,” the letter said. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”


Another ‘supply chain risk’ designation?

In March, Anthropic CEO Dario Amodei refused to allow the Pentagon to use the Claude model, fearing it could be used for “mass domestic surveillance” and “fully autonomous weapons,” leading Defense Secretary Pete Hegseth to declare the company a “supply chain risk.”

Shortly after, OpenAI stepped up to fill the gap left by Anthropic, with CEO Sam Altman facing both internal and external criticism over his seeming willingness to allow military use of ChatGPT.

The new OpenAI contract with the Pentagon was full of holes that would have allowed the same use of ChatGPT that Anthropic feared for Claude. The contract was amended to state that OpenAI’s models would not be used for “deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Soon afterwards, Sam Altman told his employees that the Pentagon had said OpenAI does not “get to make operational decisions” on how the military uses AI technologies.

Now, Google employees are joining the growing number of AI company employees and members of the public opposed to the military use of AI tools. “Making the wrong call right now would cause irreparable damage to Google’s reputation, business and role in the world,” the letter states.

Following protests involving Google staff in 2018, the company amended its AI Principles to state that it would not deploy its AI tools where they were “likely to cause harm,” and would not “design or deploy” AI tools for surveillance or weapons. These clauses were quietly removed from its AI Principles on 4 February 2025.


Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds.


Benedict Collins
Senior Writer, Security

Benedict is a Senior Security Writer at TechRadar Pro, where he has specialized in covering the intersection of geopolitics, cyber-warfare, and business security.

Benedict provides detailed analysis on state-sponsored threat actors, APT groups, and the protection of critical national infrastructure, with his reporting bridging the gap between technical threat intelligence and B2B security strategy.

Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the University of Buckingham Centre for Security and Intelligence Studies (BUCSIS). His specialization provides him with a robust academic framework for deconstructing complex international conflicts and intelligence operations, and the ability to translate intricate security data into actionable insights.
