‘We cannot in good conscience accede to their request’: Anthropic CEO Dario Amodei draws a line in the sand in standoff with US government

The homepage of the Department of War (DOW) seen on a computer screen, after President Trump renamed the Department of Defense (DOD) the Department of War.
(Image credit: Shutterstock)

  • Anthropic CEO Dario Amodei does not want Claude used by the Pentagon for mass domestic surveillance and autonomous weapons
  • A statement has laid bare Anthropic's reasons for retaining Claude's safety rails
  • Pete Hegseth gave Anthropic until Friday to provide the DoD with full access

Anthropic CEO Dario Amodei has released a statement concerning the company's ongoing disagreement with the US Department of Defense.

Amodei declared Anthropic “cannot in good conscience accede” to the DoD's request to provide full access to its AI models, over fears they could be used for “mass domestic surveillance” and “fully autonomous weapons”.

US Defense Secretary Pete Hegseth has threatened to label Anthropic as a “supply chain risk” and invoke the Defense Production Act to force the company to comply.

Unprecedented threats against Anthropic

In his statement, Amodei said Anthropic has historically had a very good relationship with the US government: it was the first AI company to deploy its models within US government networks and the National Laboratories, and the first to deploy models for national security use.

Amodei also noted the company has complied with US regulations on the use and sale of AI models to China, to the extent that it chose to “forgo several hundred million dollars in revenue” by preventing the use of Claude by the Chinese Communist Party.

“Anthropic understands that the Department of War, not private companies, makes military decisions,” Amodei continued. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”

But Anthropic's hesitation to provide the DoD with full access to Claude centers on the model's potential misuse for two purposes.

Amodei says regulation has not caught up with the capabilities of AI models such as Claude, which would leave the US government free to deploy Claude as a tool for mass domestic surveillance.

Theoretically, the government could purchase highly detailed records and use AI models to organize them into a highly accurate picture of US citizens' lives, at a scale never seen before.

As for AI use in weapons systems, Amodei says they “may prove critical for our national defense,” but he argues that current AI models are “simply not reliable enough to power fully autonomous weapons.” If an AI model in charge of an autonomous weapon system were to suffer a hallucination, the responsibility would likely fall on the model developer.

Amodei also addresses the threats made by Hegseth, stating that they “are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

The statement concludes that Anthropic’s “strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.”

“Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”


Benedict Collins
Senior Writer, Security

Benedict has been with TechRadar Pro for over two years, and has specialized in writing about cybersecurity, threat intelligence, and B2B security solutions. His coverage explores the critical areas of national security, including state-sponsored threat actors, APT groups, critical infrastructure, and social engineering.

Benedict holds an MA (Distinction) in Security, Intelligence, and Diplomacy from the Centre for Security and Intelligence Studies at the University of Buckingham, providing him with a strong academic foundation for his reporting on geopolitics, threat intelligence, and cyber-warfare.

Prior to his postgraduate studies, Benedict earned a BA in Politics with Journalism, providing him with the skills to translate complex political and security issues into comprehensible copy.
