Trump orders U.S. federal agencies to cut ties with Anthropic AI over security fears

Trump has ordered every U.S. federal agency to cut ties with Anthropic’s artificial intelligence systems, intensifying a fast-moving confrontation between the White House, the Pentagon, and one of the leading AI developers.

In a post on Truth Social on Friday, the president instructed departments to “immediately cease” relying on Anthropic tools. Agencies that already have the company’s models integrated into their systems were granted a six‑month window to fully phase them out.

Trump framed the move as a matter of national security and civilian control over the armed forces. He accused Anthropic of pushing a “radical left, woke” agenda and insisted that decisions about how the U.S. military operates must remain the sole responsibility of the commander-in-chief and senior defense leadership he appoints.

The order came on the heels of a clash between Anthropic and the Pentagon over how the company’s flagship AI, Claude, can be deployed in defense contexts. Anthropic has built in technical and policy safeguards designed to stop its systems from being used for large‑scale domestic surveillance, offensive military targeting, or other high‑risk applications that might escalate conflict or undermine civil liberties.

When the company declined to weaken or remove those protections, tensions escalated. Trump’s directive makes clear that, from the administration’s perspective, any contractor that limits how the military can apply its technology risks being cut off from federal business altogether.

For Anthropic, the dispute goes to the core of how it presents itself: as an AI lab that places safety, ethics, and long‑term risk mitigation at the center of its work. The company has repeatedly said it wants its models to be useful while still constrained from enabling human rights abuses, unlawful surveillance, or indiscriminate weapons systems. That philosophy now appears to be directly at odds with the administration’s expectations for defense‑related AI.

The practical impact of Trump’s decision will depend on how widely Anthropic’s tools have already been adopted across government. Some civilian agencies have reportedly used large language models for tasks like summarizing documents, assisting with coding, and streamlining internal workflows. Replacing those systems will require new procurement processes, technical migrations, and security reviews, likely within a tight political timeline.

Politically, the move fits into a broader pattern in which AI companies are pulled into partisan battles over “wokeness,” censorship, and perceived ideological bias in automated systems. Right‑leaning critics argue that mainstream AI models over‑police certain topics, refuse to generate content they deem controversial, or frame issues using what they see as progressive assumptions. AI labs counter that content restrictions are necessary to reduce harm, limit misinformation, and comply with law and policy.

The Pentagon, meanwhile, is racing to modernize its technological capabilities, with AI playing a central role across logistics, intelligence analysis, cyber defense, and battlefield decision‑support. Defense officials have publicly emphasized the importance of “responsible” AI, but they also want systems that are flexible enough to be integrated into military operations at scale. The clash with Anthropic highlights how difficult it is to square those ambitions with the much stricter safety constraints favored by some AI researchers.

The order also sends a clear message to other technology providers: if a company’s internal guardrails conflict with what the administration considers operational needs, access to the federal market can be revoked. That may push some contractors to quietly relax their restrictions in defense contexts, while prompting others to double down on tighter controls and focus on purely commercial or non‑defense government work.

For AI policy more broadly, the dispute raises unresolved questions:

– Who ultimately decides how general‑purpose AI systems can and cannot be used: the developer, the customer, or the state?
– Should companies be permitted, or even encouraged, to refuse certain categories of government use on ethical grounds?
– How far can an administration go in pressuring private firms to align their safety policies with military or intelligence priorities?

Civil liberties advocates are likely to focus on Anthropic’s concern about “mass domestic surveillance.” AI drastically lowers the cost of monitoring communications, analyzing video feeds, and profiling large populations. Whether such capabilities are deployed and how tightly they are controlled will shape the balance between security and privacy for years to come.

For the AI industry, Trump’s directive underscores the strategic risk of building products that sit close to the fault line between commercial utility and state power. Anthropic’s stance may bolster its reputation among customers and researchers who prioritize safety and human rights, even as it loses existing or potential government contracts. Competitors will be watching closely to see whether aligning more tightly with defense priorities brings them new opportunities, or their own public controversies.

Federal agencies now face immediate, concrete tasks: inventorying where Anthropic technology is in use, assessing what it does, and selecting replacement vendors or in‑house solutions. That transition won’t be purely technical; legal, security, and compliance teams will all have to sign off on new systems, particularly if they touch sensitive data or national security functions.

Longer term, the episode is likely to accelerate efforts inside government to define clearer standards for “trustworthy” AI. If agencies want both powerful tools and predictability from their suppliers, they may have to publish more detailed requirements about permissible constraints, auditability, and human oversight, rather than reacting case by case when conflicts with vendors become public.

The standoff also illustrates a broader shift: AI is no longer a neutral backend technology. It has become a political object in itself, bound up with ideological battles, defense strategy, economic competition, and public fears about automation and control. As that happens, confrontations like the one between Trump’s administration and Anthropic are likely to become more common, not less.

In the immediate future, the practical outcomes will revolve around three fronts: whether Anthropic adjusts its policies to regain federal trust, whether rival AI providers move to fill the gap in government demand, and whether Congress steps in with legislation that more clearly delineates the boundaries between corporate AI safety measures and military or law‑enforcement prerogatives.

Whichever way those questions are answered, the directive to purge Anthropic’s tools from federal use marks a turning point. It shows that choices about AI safeguards are no longer just internal design decisions; they can carry direct geopolitical consequences, reshaping who builds the systems that governments rely on, and on what terms.