Hundreds of Google and OpenAI employees sign open letter urging limits on military AI
AI workers call for limits on surveillance and autonomous weapons
- Almost a thousand employees from Google and OpenAI signed an open letter calling for clear limits on military uses of AI
- The letter urges tech companies to push back against government plans for AI surveillance and autonomous weapons
- The move reflects growing tension inside the AI industry over government contracts and defense partnerships
Nearly a thousand employees of Google and OpenAI have signed an open letter urging their companies to resist pressure from the U.S. military to loosen restrictions on how AI systems can be used. The letter declares “We Will Not Be Divided” over the subject, even after the Pentagon designated Anthropic a “supply chain risk” after the company refused to allow its technology to be used for domestic mass surveillance or fully autonomous weapons.
That move shocked many observers in Silicon Valley and sparked a wave of concern among the engineers building today’s frontier AI models, especially as OpenAI and Google are reportedly negotiating to take up the arrangement Anthropic rejected.
The signatories frame their message in unusually blunt language for an industry known for cautious corporate communication. The letter alleges that government officials are attempting to pressure AI companies to abandon certain ethical boundaries.
"They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand," the letter states. "This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War."
The open letter is notable for including people from rival companies that normally compete fiercely. Their shared argument is that AI is now powerful enough that decisions about its use cannot be treated as routine business agreements.
These concerns are not purely theoretical. Governments around the world are exploring how AI might be integrated into defense planning and intelligence analysis. Military agencies have long used software tools for surveillance and targeting, and advanced generative models could accelerate those capabilities dramatically. And with studies starting to show that AI models favor the nuclear option in simulated war games, letting them control weapons and surveillance systems seems like an even worse idea.
AI war
It's a bit of a throwback for Google workers, thousands of whom protested the company’s involvement in Project Maven, a 2018 Pentagon plan to use machine learning to analyze drone footage. After widespread internal backlash, Google ultimately allowed that contract to expire and published a set of ethical guidelines known as its AI Principles.
Those principles were meant to define how Google would approach sensitive uses of artificial intelligence. At the time, the company said it would not develop technologies designed to cause harm or enable surveillance that violated international norms. The latest open letter suggests that similar tensions are resurfacing as governments become more interested in deploying powerful language models.
The letter may or may not change corporate decisions, but at least the workers can point to it as a message that can't be misconstrued.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.