- Pentagon official Emil Michael threatened to revoke contracts from AI firms that restrict lawful military use of their tools.
- The Department of Defense labeled Anthropic a risk, effectively barring its technology from major national security projects.
- President Trump ordered a six-month phase-out of Anthropic technology across all federal agencies following the dispute.
(UNITED STATES) — Pentagon Chief Technical Officer Emil Michael warned AI companies that restricting lawful military uses of their tools could cost them defense contracts, sharpening a public clash with Anthropic over the company’s limits on its Claude chatbot.
Michael criticized Anthropic for restricting military uses of Claude and signaled that the Defense Department wants technology partners willing to support more autonomous defense capabilities, as the U.S. government races to deploy artificial intelligence across national security work.
The warning landed as the Pentagon moved to tighten procurement around AI tools. Reuters reported on March 5 that the Pentagon formally designated Anthropic a “supply-chain risk,” a label that immediately limits the use of its technology in U.S. military work and bars government contractors from using Anthropic tools in Pentagon-related projects.
The designation followed a deadline set by the Defense Department in a dispute over contract language and usage policies. On March 4, 2026, Pentagon spokesman Sean Parnell set a deadline of 5:01 p.m. ET on Friday, March 6, after which the department threatened to terminate the partnership, designate Anthropic a “supply-chain risk,” and bar its technology from military projects.
Defense Secretary Pete Hegseth met Anthropic CEO Dario Amodei on March 3, warning of contract cancellation or invocation of the Defense Production Act, the source content said. The same account described the conflict as centered on whether Anthropic would provide broader access to Claude for “all lawful purposes.”
Anthropic refused Pentagon demands for unrestricted access to Claude for “all lawful purposes,” citing risks of mass surveillance of Americans and fully autonomous weapons without human involvement, the source content said.
Anthropic rejected the proposed contract language, saying it “made virtually no progress” on protections and would allow safeguards to be disregarded. Amodei said the company “cannot in good conscience accede,” while committing to good-faith talks consistent with its usage policy.
The dispute involves a roughly $200 million contract, with Anthropic already supplying Claude in some government settings, the source content said. The Pentagon’s position, as described, treated Anthropic’s software like ordinary enterprise software, with military use governed by U.S. law rather than by a company’s own usage restrictions.
The standoff also became a test of how government procurement offices treat risk labels that can reshape a vendor’s role overnight. A “supply-chain risk” designation changes what government buyers can approve and what contractors can deploy, pushing teams to limit deployments, rework compliance reviews, and prepare alternatives when a product becomes barred from Pentagon-related projects.
For procurement and compliance teams, the label can trigger substitution planning inside ongoing work, including contractor efforts to maintain continuity while meeting restrictions. The designation can also raise the prospect of re-competition and termination decisions when programs depend on a tool that can no longer be used in military projects.
The Pentagon’s hard line with Anthropic arrived alongside broader policy language that the department said should govern military AI procurement. On January 9, 2026, the department released its Artificial Intelligence Strategy for the Department of War, which mandates AI models free from “ideological ‘tuning’” or usage policies limiting lawful military applications and introduces new contract language covering “any lawful use,” the source content said.
The strategy’s framing underscored a central argument in the standoff: whether an AI vendor’s acceptable-use policy can restrict uses once the government buys and deploys a system. In the department’s telling, contracts should reflect a broad ability to employ AI systems for “any lawful use,” while vendors like Anthropic have sought to preserve limits they view as necessary to prevent dangerous deployments.
The same policy push aligned with FY26 NDAA Section 1513, which the source content said requires risk-based cybersecurity frameworks for AI. Those compliance expectations can add another layer to vendor selection, as agencies weigh security architecture, documentation, and monitoring obligations alongside model performance and cost.
Organizational change inside the Pentagon also featured in the policy picture described in the source content. It said the Chief Technology Officer role was consolidated under the Under Secretary of War for Research & Engineering, a restructuring that can matter in acquisition decisions by centralizing oversight of technical standards and contracting expectations.
While the confrontation focused on Anthropic and its Claude chatbot, the implications extended beyond one company’s relationship with the Pentagon. Defense AI contracting can reshape demand for cloud deployment, model evaluation, cybersecurity work, and compliance roles as agencies seek systems that meet both operational needs and procurement rules.
Those pressures can ripple through subcontractors and integrators that build tools on top of foundation models or connect models into broader defense systems. When a major vendor becomes restricted for Pentagon work, contractors can face sudden redesign decisions and staffing shifts tied to migration plans and new review requirements.
The episode also highlighted how acceptable-use policies can become a deciding factor as the Defense Department weighs vendors for sensitive work. The source content described the government’s message as a warning that companies with restrictive use policies may find themselves excluded from defense contracts, a posture that could influence which tools program offices select and which companies win long-term defense AI work.
After the March 6 deadline, the source content said President Donald Trump directed federal agencies to phase out Anthropic tech over six months, with Hegseth confirming the supply-chain risk label on March 6. A phase-out directive can force agencies and contractors to plan replacements, manage migration risk, and maintain continuity requirements during transitions as they remove a restricted vendor’s technology from Pentagon-related projects.
Agencies often manage such transitions by maintaining service while shifting workloads to other tools, sometimes relying on bridging arrangements, parallel deployments, or additional oversight as they complete a migration. The source content did not lay out the precise mechanics agencies will use in this case, but it described the government’s move as a directive to phase out Anthropic technology over a defined period.
At the same time, the Pentagon continued to pursue alternative suppliers and contract structures to keep projects moving. The source content said OpenAI secured a March 2026 Pentagon deal for classified deployment, including safeguards against mass surveillance, autonomous weapons, and high-stakes automated decisions like social credit systems, via cloud API, cleared personnel oversight, and contractual protections.
OpenAI CEO Sam Altman called the initial terms “opportunistic and sloppy,” the source content said. OpenAI’s head of national security partnerships, Katrina Mulligan, stressed deployment architecture over contract language alone.
The contrasting approaches underscored a second fault line in the industry: whether safeguards rest primarily in written terms or in technical controls around deployment. The OpenAI deal described in the source content emphasized a combination of contractual protections and oversight by cleared personnel, along with architecture choices delivered via cloud API for classified deployment.
Industry groups also warned that aggressive exclusions could narrow the military’s access to advanced products and services. Reuters reported on March 4 that a major Big Tech industry group warned Hegseth that labeling Anthropic a supply-chain risk could create uncertainty for other companies and potentially reduce the military’s access to top products and services.
Some AI leaders also expressed sympathy for vendor red lines. Former Google AI leader Kevin Shanahan sympathized with Anthropic, citing Claude’s current unreliability for battlefield use and describing the company’s red lines as reasonable, the source content said.
The dispute unfolded against a backdrop of employee activism and executive moves tied to surveillance and weapons concerns. Reuters also reported that OpenAI’s head of robotics and consumer hardware, Caitlin Kalinowski, resigned after the company’s Pentagon deal, citing concerns about surveillance and autonomous weapons safeguards.
Pentagon AI contracting has expanded across multiple firms, with contracts now in place with Google, OpenAI, and xAI, the source content said.
For immigration-focused readers, the dispute also intersected with the labor market for highly skilled technical workers who build and deploy government-facing systems. The source content said many of the engineers and specialists building advanced AI systems in the United States are immigrants or former international students.
As defense-linked AI work grows, the same account said hiring could increasingly favor workers in cloud infrastructure, cybersecurity, model deployment, compliance, and government-facing enterprise software. It also said internal employee resistance and ethics disputes could make some roles more politically and legally sensitive, especially for foreign-born professionals navigating visa-sponsored careers.
The Pentagon’s stance, as framed by Michael’s warning, placed vendor policies at the center of contracting decisions. That can shape which projects get funded, which tools agencies standardize on, and how companies structure internal governance when their products face pressure to support “any lawful use.”
Anthropic has said it plans to challenge the “supply-chain risk” designation in court, the source content said. The timeline for that challenge and its impact on procurement decisions will matter for contractors deciding which AI systems they can safely build into defense programs.
For now, the Pentagon’s message to AI suppliers put contract eligibility on the line. Michael’s warning, the designation of Anthropic as a “supply-chain risk,” and Trump’s directive to phase out Anthropic technology over six months showed how quickly acceptable-use disputes can turn into procurement actions that reshape the defense AI market and the jobs tied to it.