- Microsoft will continue offering Anthropic models to commercial customers despite a U.S. Department of War restriction.
- The Pentagon recently labeled the AI startup a supply-chain risk following failed negotiations over military use.
- Anthropic plans to challenge the designation in court to clarify its impact on commercial and defense markets.
(UNITED STATES) — Microsoft said it will keep offering Anthropic’s AI models to most customers even after the U.S. Department of War labeled the startup a supply-chain risk, carving out an exception for Pentagon use and leaving contractors and enterprise compliance teams to sort through a widening split between defense-linked and commercial AI markets.
Microsoft’s legal team concluded that Anthropic products, including Claude, can remain available through Microsoft 365, GitHub and Microsoft AI Foundry, except for Department of War use, according to a March 6, 2026, report from The Times of India citing CNBC.
“Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry,” Microsoft told CNBC, according to The Times of India.
The stance made Microsoft one of the first major technology companies to publicly signal it is not fully distancing itself from Anthropic despite the Pentagon-related restriction, the report said.
For enterprise buyers and developers using Microsoft ecosystems, the immediate practical meaning is that Anthropic models remain available across mainstream corporate and developer tooling, while Defense Department-linked use faces tighter limits that can trigger contract-by-contract decisions.
The split also forces companies to define, sometimes narrowly, what counts as Department of War use versus broader commercial availability, and to decide which business units, subsidiaries and projects fall into which category.
The designation followed a set of government moves that changed the compliance environment for federal agencies and Department of War-linked work, with a phase-out window described in the report.
In late February, President Donald Trump directed federal agencies to stop using Anthropic’s AI products with a six-month phase-out, the report said, describing the action as part of an effort by Trump and Defense Secretary Pete Hegseth to push agencies away from Anthropic’s technology.
Days later in early March, the Pentagon formally notified Anthropic’s leadership that it classified the company and its products as a supply-chain risk effective immediately, the report said.
Hegseth framed the dispute around the military’s ability to use technology “for all lawful purposes” without vendor restrictions, the report said, while also describing a clash over what Anthropic would allow Claude to be used for.
The report linked the dispute to failed negotiations over issues including mass domestic surveillance and autonomous weapons.
The government actions, as described, drew a bright line between direct Department of War use of Anthropic products and the broader commercial availability that Microsoft says it can continue to support.
That distinction matters because federal contracting rules and Pentagon-linked procurement decisions can ripple outward, reaching not only prime contractors but also subcontractors, suppliers and mixed-use corporate environments where the same tools might be used across commercial and government-adjacent work.
Microsoft’s approach also reflects how deeply Anthropic models are embedded in its widely used products, where a sudden removal could force costly changes for customers.
The Times of India report said Anthropic’s Claude models are integrated into GitHub Copilot and other Microsoft products, and it tied that to Microsoft CEO Satya Nadella’s strategy of giving customers “model choice” rather than forcing them into a single AI ecosystem.
The business relationship between the two companies also sits in the background of the decision to maintain access outside Pentagon use.
Microsoft agreed to invest up to $5 billion in Anthropic, the report said, while Anthropic committed to large Azure spending.
Microsoft counsel concluded the designation permits continued use of Anthropic products outside the Department of War, preserving those integrations while attempting to contain the restrictions to a single customer segment.
For corporate compliance teams, the practical problem becomes scoping and segmentation: identifying which internal users, environments and customer engagements count as Department of War-linked, and ensuring the right AI tools are available only where allowed.
Large organizations often run both commercial and government-adjacent portfolios at the same time, and developers may move between projects while using a shared set of collaboration tools, code repositories and AI-assisted coding systems.
That shared infrastructure can turn a targeted restriction into a broader operational issue if companies respond by blocking tools across the board rather than segmenting access.
In the contractor market, the report said the restrictions had already begun to cascade.
Some defense contractors have instructed employees to stop using Claude models, The Times of India report said, even as Microsoft took a narrower view of the designation.
The report also raised the possibility that prohibitions could be invoked under the Federal Acquisition Supply Chain Security Act (FASCSA) of 2018, which can shape how federal buyers treat products viewed as supply-chain risks.
Such restrictions can force operational changes that are mundane but consequential: disabling access to certain models in specific systems, revising approved vendor lists, auditing AI tool usage, and documenting how teams comply when they work on Department of War-linked tasks.
Even where a company believes it can legally continue using a tool for commercial work, contractor compliance teams may still choose a stricter approach to avoid breach risks in mixed-use environments.
That can affect subcontractors and suppliers as well, particularly when primes flow down tool restrictions to reduce the chance that restricted technology touches Department of War work.
The reported standoff also opened the door to litigation.
Anthropic CEO Dario Amodei plans to challenge the supply-chain-risk designation in court, the report said, arguing that the action applies only to direct Department of War contract use, not to all commercial use of Claude by customers with defense ties.
That argument, as described, goes to the scope of the government action: whether it targets Pentagon systems and direct contracting relationships, or whether it reaches more broadly into the commercial market because of downstream defense connections.
The dispute comes as competitors jockey for advantage in defense-linked AI procurement, with the Times of India report pointing to moves by OpenAI.
Rival OpenAI was strengthening its Pentagon relationship, the report said, and it secured a new classified Pentagon deal shortly after, a development that can shift procurement perceptions in government-adjacent markets.
Such competitive dynamics can influence enterprise confidence, especially for companies that sell into regulated sectors and prefer vendors with clearances or stronger government relationships.
For Microsoft, maintaining Anthropic’s availability to non-Department of War customers can reduce disruption for commercial clients that have embedded Claude-enabled tools in everyday workflows, while attempting to comply with defense-linked restrictions.
That posture still leaves customers to decide whether internal policy should follow Microsoft’s narrower segmentation or the stricter approach some contractors have adopted.
The Times of India report connected the dispute to questions about AI jobs and U.S. tech policy, particularly in sectors that straddle commercial software and national-security work.
For the immigration and education audience, the report said the immediate significance is not a direct change to visa rules, but it pointed to possible uncertainty in the AI labor market tied to shifting defense policy and procurement decisions.
Companies operating near sensitive sectors often manage staffing plans alongside contract eligibility requirements, internal tool approvals and customer assurances about what technology enters restricted environments.
Defense-linked restrictions can also influence what work gets assigned to which teams, how roles are scoped, and what projects companies feel comfortable staffing aggressively when tool access could change.
Microsoft’s decision to keep Anthropic available outside Department of War use, as described in the report, can preserve continuity for commercial AI development and deployment on Microsoft platforms even as defense-adjacent roles face tighter procurement scrutiny.
In practice, companies responding to the designation often start with procurement controls and contract scoping, as compliance groups try to define whether restrictions apply to specific statements of work, particular customer programs, or a broader set of environments.
Some organizations tighten vendor-management rules in defense-adjacent settings first, then expand inward to shared tools like code repositories and productivity suites when they discover developers and analysts use the same AI features across multiple projects.
Access-control changes can follow quickly, including disabling certain models within designated systems, blocking specific integrations through single sign-on controls, and updating internal policy documents that govern which AI tools teams can use for Department of War-linked work.
Internal audits can become part of the response as well, as companies inventory where Claude models appear in their workflows and track whether any usage touches restricted environments.
At the same time, companies may monitor the court challenge described by Anthropic and seek clarity on whether the designation reaches beyond direct Department of War contracts, particularly for customers that have defense ties but also operate large commercial businesses.
For firms with mixed portfolios, segmentation becomes an organizing principle: separating commercial and government-adjacent environments, separating teams that work on restricted contracts from those that do not, and separating the AI tools allowed in each zone.
The report’s account also underscored how disputes rooted in defense procurement can reshape technology markets that otherwise function as mainstream commercial platforms.
Microsoft’s message, as presented in the Times of India report, is that Anthropic may be restricted in Pentagon-linked use but remains present in enterprise and developer platforms used far beyond government work.
That bifurcation can shape where products get built and sold, and it can influence how employers assess risk when they decide which teams receive funding, which product lines ship features tied to particular models, and which roles they staff.
What comes next, as described in the report, depends on how broadly government restrictions get applied, how Anthropic’s court challenge proceeds, and whether other vendors face similar Department of War designations that force customers to draw new lines between defense-linked and commercial AI use.