Pentagon may sever Anthropic relationship over AI safeguards - Claude maker expresses concerns over 'hard limits around fully autonomous weapons and mass domestic surveillance'

Claude by Anthropic
(Image credit: Shutterstock)

  • The Pentagon and Anthropic are in a standoff over usage of Claude
  • Claude was reportedly used in the US operation to capture Nicolás Maduro
  • Anthropic refuses to let its models be used in "fully autonomous weapons and mass domestic surveillance"

A rift between the Pentagon and several AI companies has emerged over how their models can be used as part of operations.

The Pentagon has asked AI providers Anthropic, OpenAI, Google, and xAI to allow the use of their models for “all lawful purposes”.

Anthropic has voiced concerns that its Claude models could be used in autonomous weapons systems and mass domestic surveillance; in response, the Pentagon has threatened to terminate its $200 million contract with the AI provider.


$200 million standoff over AI weapons

Speaking to Axios, an anonymous Trump administration advisor said one of the four companies has agreed to allow the Pentagon full use of its model, with two others showing flexibility in how their AI models can be used.

The Pentagon’s relationship with Anthropic has been strained since January over the use of its Claude models, with the Wall Street Journal reporting that Claude was used in the US military operation to capture Venezuela’s then-president, Nicolás Maduro.

An Anthropic spokesperson told Axios that the company has “not discussed the use of Claude for specific operations with the Department of War”. The company did state that its Usage Policy with the Pentagon was under review, with specific reference to “our hard limits around fully autonomous weapons and mass domestic surveillance.”

Chief Pentagon spokesman Sean Parnell stated that “Our nation requires that our partners be willing to help our warfighters win in any fight.”

Security experts, policymakers, and Anthropic Chief Executive Dario Amodei have called for greater regulation of AI development and stronger safeguarding requirements, with specific reference to the use of AI in weapons systems and military technology.



Benedict Collins
Senior Writer, Security

