Pentagon Pressures Anthropic Over Military AI Access
Pentagon gives Anthropic a Friday deadline to expand military AI access, warning of supply chain risk or Defense Production Act action.
A standoff between the US Department of Defense and Anthropic has intensified after Pentagon leadership issued a deadline demanding expanded access to the company’s artificial intelligence systems.
The confrontation centers on whether private AI usage policies can limit military deployment of advanced models already integrated into classified environments.
Pentagon Signals Legal Leverage as Deadline Nears
According to reporting by TechCrunch, citing Axios, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that the department could formally designate Anthropic a supply chain risk or move to invoke the Defense Production Act if broader access is not granted. The Act authorizes the executive branch to compel companies to prioritize national defense needs.
Analysts noted that applying supply chain risk language to a domestic frontier AI developer would be highly unusual.
The Pentagon’s position is that lawful military use should be determined by US legal standards rather than by contractor-defined guardrails. The department’s reliance on Anthropic for classified AI capabilities, with limited immediate alternatives, adds operational weight to the warning.
Red Lines, Classified Access, and Competing Narratives
Reporting from BBC News explains what limits Anthropic has set on its technology. The company says it will not allow its AI to be used in autonomous military strikes where the system makes final targeting decisions without human control, and it also rejects use for mass domestic surveillance.
In a statement, Anthropic said talks with defense officials were held in good faith and focused on matching its usage rules with national security responsibilities.
Pentagon officials told the BBC that the dispute is not mainly about autonomous weapons or surveillance, but about making sure the department can use AI systems for all lawful government purposes.
They said missing the deadline could trigger action under the Defense Production Act and a formal supply chain risk designation. The standoff follows earlier Pentagon contracts awarded to Anthropic and other major AI companies for work inside classified defense networks.
Why This Matters
The dispute highlights tensions between national security priorities and private AI governance. Using the Defense Production Act to override a vendor's model usage restrictions could set a precedent for federal control over how AI is deployed.
At the same time, a breakdown in the relationship could create vulnerabilities in classified AI systems due to the department’s dependence on a few approved vendors.
The episode also underscores broader procurement questions. National security guidance has previously emphasized avoiding single-vendor dependency for advanced systems. With frontier AI now embedded in defense workflows, supply chain concentration and policy misalignment may carry strategic implications beyond this immediate standoff.
What Comes Next for Defense AI Governance
The Friday deadline sets a near-term decision point, but the structural issues extend further. Whether resolved through compromise, legal escalation, or contract restructuring, the outcome will influence how AI companies negotiate usage policies with federal agencies.
It may also shape how future defense contracts balance operational flexibility, legal authority, and corporate safety commitments, with consequences for secure and accountable defense AI use for years to come.
Source: Hegseth gives Anthropic until Friday to back down on AI safeguards