(UNITED STATES) — President Trump on Friday directed all federal agencies to immediately stop using artificial intelligence technology developed by Anthropic, ordering a governmentwide halt that reaches across defense, national security and civilian operations.
The directive also sets a phase-out period for systems already embedded inside government workflows, a distinction that acknowledges some uses cannot be shut off instantly without disrupting sensitive missions and day-to-day functions.
Trump announced the move publicly, casting the decision as a rebuke of what he described as corporate constraints on the military and a warning to other technology firms that the U.S. government would dictate terms for national security work.
“I am directing every agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again,” Trump said.
The action marks a rare attempt by the federal government to cut off a major AI company at scale, turning what had been contract-by-contract procurement decisions into a centralized directive affecting agencies, contractors and vendors that built Anthropic’s systems into larger tools.
A dispute pitting the Trump administration and the Pentagon against Anthropic over military deployment restrictions set the stage for the order, after negotiations broke down over AI safeguards and limits on how the models could be used.
At the center of the conflict is Anthropic’s refusal to remove safeguards that prevent its Claude AI system from being used for mass surveillance of U.S. citizens or to guide fully autonomous weapons.
Anthropic CEO Dario Amodei rejected Pentagon demands in a Thursday statement, saying he would not allow such uses despite a Friday 4 p.m. deadline set by Defense Secretary Pete Hegseth.
Trump described Anthropic’s stance as an effort to “strong arm” the Defense Department and make it “obey their terms of service,” sharpening the political framing of what had been a technical and contractual fight over permitted uses.
Pentagon officials argued broader access to advanced AI is necessary for national security, while AI researchers warned that rapid militarization without safeguards could introduce serious risks.
Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” a label that carries compliance consequences far beyond the Pentagon because it signals to agencies and contractors that the company sits in a higher-risk category.
Hegseth ordered Pentagon contractors and suppliers to cease doing business with the company, widening the impact from federal employees using Anthropic tools directly to the broader ecosystem of integrators and subcontractors that support defense and intelligence systems.
“America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final,” Hegseth wrote.
The General Services Administration removed Anthropic from its Multiple Award Schedule and terminated the company’s OneGov deal, steps that end contract availability across the executive, legislative and judicial branches and push agencies to find alternatives even for routine procurement.
That procurement move also affects the mechanics of how federal buyers acquire software and services, because schedule access can serve as a default pathway for agencies that need to adopt tools quickly under established contracting terms.
Federal agencies that relied on embedded AI tools now face a practical replacement challenge, because Claude had already been deployed in intelligence and defense environments and is woven into workflows, vendor offerings and integrations.
Replacing systems of that kind can require more than swapping out a model, since agencies must validate outputs, adjust data handling, and rebuild interfaces that link AI functions to other software used for mission and administrative tasks.
Trump coupled the order with a warning about the phase-out, signaling he expects company cooperation even after the directive shuts off new use.
“Anthropic had better get their act together and be helpful during this phase out period, or I will use the full power of my Presidency to make them comply, with major civil and criminal consequences to follow,” Trump said.
The dispute intensified after a recent U.S. military operation used Claude technology and showcased its military applications, a development that hardened Pentagon demands for fewer restrictions on how the system could be deployed.
That use also heightened questions about oversight, rules of engagement and accountability, because the more directly AI systems touch operational decisions, the higher the stakes for controls that govern what the tools can and cannot do.
Reports of the operation’s outcome became a catalyst for tougher positioning, shifting the negotiations from debates over guardrails in principle to immediate arguments over mission needs and deployment flexibility.
A competitive scramble followed the ban, with rival firms moving quickly to position their models as replacements for agencies and contractors forced to unwind Anthropic deployments.
OpenAI announced a new agreement with the U.S. Defense Department shortly after Trump’s announcement, placing a leading competitor in front of agencies looking for continuity and signaling that defense-linked AI procurement will remain a major market.
The quick pivot also underscores how procurement shocks can realign federal buying, pushing officials to reassess vendor risk, contract clauses, and continuity planning when a single decision can cut off a widely used tool.
Agency buyers can face bottlenecks when replacing AI systems tied to sensitive environments, since switching providers can trigger re-certifications, re-integrations and review of how models handle data used in government operations.
The political dimension of the order sits alongside the procurement mechanics, with the Trump administration promoting AI development “free from ideological bias” and rolling back earlier federal AI safety frameworks introduced by previous administrations.
That framing puts corporate governance and safety standards into direct conflict with defense operational demands, turning questions about safeguards into a larger dispute about who sets the rules for advanced AI: governments or technology companies.
Four senators overseeing defense policy sent a letter to Hegseth and Amodei urging both sides to extend negotiations, warning that the “hasty” deadline could negatively impact national security and the tech industry’s willingness to contract with Washington.
The senators’ intervention highlighted how the confrontation could ripple beyond one company, because defense and intelligence agencies rely on stable partnerships with private firms to develop and maintain advanced systems.
The supply-chain risk designation also raises procedural questions because, in practice, such labels can function as de facto exclusion even without a traditional debarment process, affecting not only new awards but also existing relationships that depend on vendor approvals.
National security and procurement authorities often give agencies broad discretion to restrict vendors, but disputes can arise over process, evidence, proportionality and contract rights when restrictions effectively close a major segment of the federal market.
Any legal challenge could focus on how the designation was applied and how quickly agencies and contractors were forced to unwind deployments, particularly where existing contracts or embedded systems create obligations that cannot be met overnight.
While the order targets Anthropic, the disruption extends into the labor market that supports federal technology projects, where staffing often follows contract scopes and vendor toolchains.
Vendor swaps can change staffing needs across integrators and subcontractors, because each AI platform can require different engineering, security and compliance expertise to deploy, maintain and audit in government environments.
Contract changes can also reshape team composition and clearance requirements, and they can constrain remote work when projects shift deeper into defense and intelligence settings with tighter access controls.
Those shifts matter for immigration-linked hiring pipelines, since thousands of engineers, researchers, contractors, and foreign tech workers — including H-1B professionals — depend on government AI projects and can face disruption when a contractor loses a tool it relied on.
Immigration timelines can be sensitive to project continuity and an employer’s ability to place workers on billable assignments, especially when contract modifications alter start dates, job duties, or where work can be performed.
Near-term attention now turns to how quickly agencies can replace embedded AI systems, how they will validate and govern the next set of tools, and whether the scrutiny that landed on Anthropic spreads to other AI firms depending on their own defense-use negotiations.
A replacement effort across federal agencies can require coordinated contract modifications, integration work and testing, along with decisions about how AI systems will be governed when they are inserted into sensitive workflows.
Anthropic’s next steps remain a central question, including whether it contests the supply-chain risk designation and how that would play out alongside the federal phase-out already in motion.
Policy analysts described the order as one of the most aggressive federal technology interventions in decades, a sign that AI governance is increasingly treated as a geopolitical issue rather than only a business debate.
The episode also shows how national security contracting can steer private AI labs, since access to government work can shape investment, product design and the pace at which companies pursue certain capabilities.
Tension between safety governance and military demand now sits at the center of U.S. AI policy, with the federal government asserting leverage over how advanced systems can be used and companies weighing whether safeguards remain enforceable under defense pressure.
Shifts in federal partnerships can also influence where global AI talent chooses to work, as engineers and researchers follow the projects, funding and legal frameworks that determine which applications are encouraged, restricted or abruptly cut off.