March 06, 2026
Pentagon Labels Anthropic a Supply-Chain Risk in Escalating AI Safety Feud
In an unprecedented move, the U.S. Defense Department formally designated AI firm Anthropic as a supply-chain risk on Thursday, March 5, 2026, restricting the Pentagon's use of its Claude AI models. This marks one of the first times such a label—typically reserved for foreign adversaries—has been applied to a domestic company. The decision stems from an ongoing dispute over Anthropic's safety guardrails, which limit military applications such as autonomous weapons and mass surveillance, clashing with the Pentagon's demand for unrestricted access to the technology for lawful uses.
Anthropic CEO Dario Amodei apologized for a leaked internal memo that criticized the move as politically motivated and questioned rival OpenAI's Pentagon deals, but he called the designation "not legally sound" and vowed to challenge it in court. Pentagon officials, including Defense Secretary Pete Hegseth, argued that Anthropic's restrictions undermine military readiness and the chain of command. Senior officials emphasized that vendors cannot impose limits on lawful AI deployment, escalating a weeks-long feud that began with Hegseth's announcement last week.
The conflict highlights deepening tensions over AI safety and alignment, and it could chill Anthropic's partnerships with investors and collaborators such as Amazon, Google, and Lockheed Martin. Critics, including Sen. Kirsten Gillibrand, decried the action as "reckless" and self-defeating, warning it could benefit adversaries like China. As Anthropic continues talks while preparing legal action, the saga underscores the difficulty of balancing ethical safeguards on AI with national security needs.