March 10, 2026

Anthropic Sues US Government Claiming Retaliation for Upholding AI Safety Guardrails

In a landmark escalation of tensions between AI developers and national security interests, Anthropic filed two lawsuits on March 9, 2026, against the US Department of Defense and the Trump administration. The company alleges unlawful retaliation after it refused to remove safety guardrails from its Claude AI models during Pentagon contract negotiations. These guardrails prohibit uses such as mass domestic surveillance of Americans and deployment in fully autonomous lethal weapons systems. Anthropic, led by CEO Dario Amodei, became the first frontier AI firm to deploy models on classified US networks in June 2024, supporting intelligence analysis, simulations, and cybersecurity work.

The dispute traces back to a February 24, 2026, meeting between Amodei and Defense Secretary Pete Hegseth. Negotiations collapsed when Anthropic insisted on maintaining restrictions, citing AI unreliability and constitutional concerns. On February 27, President Trump directed federal agencies via Truth Social to "immediately cease" using Anthropic technology, labeling it a "disastrous mistake." By March 4, the Pentagon designated Anthropic a "supply chain risk to national security" under 10 U.S.C. § 3252—a label typically reserved for foreign adversaries—triggering the lawsuits.

Anthropic claims the government's actions violate the First Amendment by punishing its advocacy for AI limits and fail to employ the "least restrictive means" required by law. The company seeks to block enforcement of the supply-chain designation, reverse directives halting its government business, and prevent future reprisals. A spokesperson emphasized: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security. But this is a necessary step to protect our business, our customers, and our partners." Meanwhile, in a March 9 statement organized by No Tech For Apartheid, more than 700,000 US tech workers urged Amazon, Google, and Microsoft to reject similar Pentagon pressures.

The clash highlights broader industry divides: rivals like OpenAI have secured Pentagon deals that incorporate similar safety principles, such as prohibitions on mass surveillance and requirements for human oversight of lethal force. Trump warned of "major civil and criminal consequences" for non-compliance, while critics like Sen. Mark Warner suggest political motivations. Retired Gen. Jack Shanahan defended Anthropic's stance as "reasonable," noting AI's unreadiness for high-stakes military roles.

This legal battle could set precedents for how AI firms balance ethical guardrails against government demands, potentially reshaping military AI deployment and reinforcing safety priorities amid rapid advancements. As AI capabilities grow, the outcome may influence global standards for alignment and risk mitigation in defense applications.