‘The biggest losers in all of this are everyday people and civilians in conflict zones’: OpenAI is filling the gap left by Anthropic — but almost left open the same loopholes for mass domestic surveillance


  • OpenAI has signed a new contract with the Pentagon
  • The contract wording left room for AI to be used for mass domestic surveillance
  • Sam Altman is being criticized for his stance on the matter

Following Anthropic’s designation as a supply chain risk by Defense Secretary Pete Hegseth and the loss of its $200 million Pentagon contract, OpenAI is now in the firing line for its own agreement with the Pentagon.

Despite a 2023 clause in OpenAI's usage policies forbidding its AI models from being used by the US military, several OpenAI employees have revealed that the Pentagon previously used its models.

At the time, the Pentagon had a contract with Microsoft, which was licensed to use OpenAI's technology, giving the Pentagon access through Azure OpenAI, which was not subject to the same policies.

OpenAI contract with Pentagon questioned

With Anthropic out of the picture over its refusal to allow the Pentagon to use its models for autonomous weapons systems and mass domestic surveillance, OpenAI CEO Sam Altman is now being questioned over the company's latest contract with the US military.

In 2024, OpenAI removed the blanket ban on the military use of its models, and later went on to sign a contract with Anduril allowing the deployment of its models for national security purposes.

Altman has made clear his support for Anthropic’s position on preventing Claude from being used for nefarious purposes, but the company’s new agreement with the US military left room for exactly those uses, sources familiar with the matter told Wired.

Current regulations have fallen behind advances in AI, allowing government agencies to purchase personal information on US citizens from data brokers and then use AI models to categorize and sort that information into highly accurate and detailed profiles of citizens.

Commenting on the latest agreement signed between OpenAI and the US military, Noam Brown, an OpenAI researcher, stated, “Over the weekend it became clear that the original language in the OpenAI/DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance.”

Brown continued, “The language is now updated to address this, but I also strongly believe that the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security.”

Sarah Shoker, the former head of OpenAI’s geopolitics team, said, “The biggest losers in all of this are everyday people and civilians in conflict zones. Our ability to understand the effects of military AI in war is and will be severely hindered due to layers of opacity caused by technical design and policy. It’s black boxes all the way down.”



Benedict Collins
Senior Writer, Security

Benedict has been with TechRadar Pro for over two years, and has specialized in writing about cybersecurity, threat intelligence, and B2B security solutions. His coverage explores the critical areas of national security, including state-sponsored threat actors, APT groups, critical infrastructure, and social engineering.

Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the Centre for Security and Intelligence Studies at the University of Buckingham, providing him with a strong academic foundation for his reporting on geopolitics, threat intelligence, and cyber-warfare.

Prior to his postgraduate studies, Benedict earned a BA in Politics with Journalism, providing him with the skills to translate complex political and security issues into comprehensible copy.
