IT & Telecommunications

Trump directs US agencies to stop using Anthropic AI

WASHINGTON

US President Donald Trump has ordered all federal agencies to immediately cease their use of artificial intelligence products developed by the startup Anthropic, in the latest escalation of tensions between the White House and Silicon Valley over the role of AI in national security.

In a post on his social media platform, Trump framed the move as a matter of national security and presidential authority, declaring that the government does not need or want the technology and "will not do business with them again," and setting a six-month phase-out period for departments currently using it.

At the centre of the dispute is Anthropic’s flagship AI model, Claude, which has been deployed across various federal agencies, including in classified US military networks.

Trump said in his social media post: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” 

“Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump said.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”

Shortly after Trump’s announcement, Defence Secretary Pete Hegseth declared that the Pentagon — which the administration has re-branded the “Department of War” — will designate Anthropic a “supply-chain risk to national security,” a label normally reserved for adversarial foreign companies.

Under the designation, contractors, suppliers, and partners that work with the US military would be barred from conducting commercial activity with Anthropic, intensifying the potential fallout for the AI firm.

Roots of the Dispute

The clash stems from months of negotiations between Anthropic and Pentagon officials over how the company’s AI can be used in defence settings.

Anthropic CEO Dario Amodei has publicly rejected Pentagon demands to remove safeguards limiting the use of Claude for controversial applications — especially mass domestic surveillance and fully autonomous weapons — saying the company “cannot in good conscience accede” to such conditions.

Officials argue that the Department of Defense must have unfettered access to AI technologies for all lawful uses, without restrictions imposed by private companies.

The dispute came to a head this week when the Pentagon set a deadline for Anthropic to comply with its demands, threatening consequences if the company failed to agree. The White House directive effectively ended the negotiations.

Anthropic’s Response

Anthropic has responded strongly, vowing to challenge the government’s actions in court. The company said that labelling it a supply-chain risk would be legally unsound and could set a dangerous precedent for US tech firms that negotiate terms with federal agencies.

In statements, the startup emphasised that it had attempted to reach a compromise and that its objections are rooted in ethical and safety concerns about how advanced AI systems are deployed.

What Comes Next

With the directive now issued, agencies are expected to begin unwinding their reliance on Anthropic’s systems over the next six months — a complex process given how deeply embedded the technology has become in some government workflows.

Whether Anthropic’s legal challenge succeeds, and what long-term impact the standoff will have on US AI policy, remains to be seen. But the episode highlights the escalating tensions at the intersection of national security, AI ethics, and executive authority.