Google’s new Pentagon deal widens AI’s role in war to 'any lawful government purpose'
AI companies are wading into ethical uncertainty as they deepen their ties to the military
- Google is reportedly in talks with the U.S. Department of Defense to deploy its AI models in classified environments
- This is a major shift in Google's stance on working with the military
- AI companies like OpenAI and Anthropic are already navigating military partnerships for their AI models
Google and the U.S. Department of Defense are exploring ways to deploy the company's most advanced AI models inside classified military environments, according to a report from The Information. The arrangement marks a milestone in Google's relationship with the Pentagon and thawing relations between AI developers and national security organizations.
That it's happening as AI models evolve toward something closer to strategic infrastructure than ordinary software is probably not a coincidence. That would also explain the sheer scope of the conversations between the DoD and Google. The agreement wouldn't limit Google's AI tools to specific tasks, but would make them available for “any lawful government purpose,” according to one person involved.
Bland language can't hide the sweeping implications of the phrase when applied to AI. Those models can analyze intelligence, shape strategic planning, and influence military decisions on a global scale. It sets the stage for a deeper shift in how AI companies define their role in national security. That's raising plenty of hackles, even before confronting studies showing how AI models can become worryingly fond of nuclear threats.
Google’s second act with the Pentagon
Google’s relationship with military AI has always been uneasy. Its withdrawal from Project Maven in 2018 was driven by employee protests and produced a set of AI principles meant to guide future decisions and reassure both employees and the public.
The current negotiations suggest those principles are being reinterpreted rather than abandoned. Allowing classified use for “any lawful government purpose” gives Google room to maintain that it is operating within legal and ethical boundaries while still opening the door to a wide range of applications.
That hasn't stopped sharp pushback from within Google. Hundreds of employees have already signed a letter urging leadership to reject what they describe as dangerous military applications of AI.
Google’s leadership appears to be betting that participation offers more control than distance. By working with the Pentagon, the company can at least attempt to shape how its models are deployed. The risk is that once the door is open, it is difficult to close.
The pitfalls of OpenAI and Anthropic
OpenAI has already moved into similar territory, agreeing to arrangements that allow government use of its models under broad legal guidelines while maintaining internal safety frameworks. The company presents this as a pragmatic compromise, a stance that has earned it some support along with plenty of skepticism from consumers and the resignation of its head of robotics.
Anthropic has taken a more cautious path, at least in public. It has emphasized stricter limits on surveillance and weapons-related uses. That led to very public fights with the Pentagon and calls for calm from OpenAI CEO Sam Altman.
There's little room for a clean ethical stance that doesn't involve walking away entirely. Refuse too much and risk being sidelined. Accept too much, and companies risk losing control over how their technology is used.
The phrase “any lawful government purpose” becomes a kind of compromise language in this environment. It satisfies government requirements for flexibility while allowing companies to anchor their decisions in existing legal frameworks. What it does not do is resolve the deeper question of how the military should and will use AI.
Battle of military AI
Supporters of military AI often point to how improved intelligence and faster processing can reduce uncertainty and, in some cases, prevent unnecessary harm. In a competitive global environment, they also argue that failing to adopt these tools would create its own risks.
The difficulty is that AI isn't just speeding up existing tools. The models can generate plausible but incorrect answers, and they reflect biases embedded in their training data while sounding confident when they should be cautious.
That's bad enough in consumer apps, where an AI's flawed recommendation or slightly inaccurate summary won't get anyone killed. The same can't be said when weapons of war come into play. It's also harder to track responsibility when AI is part of the decision-making process: the model provides analysis, the operator interprets it, and the institution acts on it. Each step is connected, but none of them fully owns the outcome.
That ambiguity is not new, but AI amplifies it. The systems are powerful enough to influence decisions while remaining opaque enough to complicate explanations after the fact.
The emerging pattern across Google, OpenAI, and Anthropic suggests that the next phase of AI development will be defined as much by contracts as by algorithms. Agreements with governments determine where the technology can go, how it can be used, and who gets access to its most advanced capabilities.
The industry appears to have reached a point where opting out is no longer a simple option. Once one major company agrees to broad terms like “any lawful government purpose,” others face pressure to follow or risk losing relevance in a critical market. The result is a gradual normalization of military AI partnerships, even among companies that once positioned themselves as reluctant participants.
There is no single outcome that resolves all of these tensions. That little phrase signals where AI development is going, and how far it's already come.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.