Trump directs all government agencies to stop using Anthropic's AI tools

The new directive gives federal agencies six months to unwind their use of Anthropic's Claude AI and other products.
President Donald Trump on Friday announced all federal agencies, including the Department of Defense, would immediately stop the use of Anthropic's AI technologies.

"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military," the president wrote on social media.

"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY."

Scripps News has reached out to Anthropic for comment.

An abrupt change of plans

Earlier this week, the Trump administration and Anthropic hit an impasse as military officials demanded the artificial intelligence company bend its ethical policies by Friday or risk damaging its business.

Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company “cannot in good conscience accede” to the Pentagon’s final demand to allow unrestricted use of its technology.

Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.

Military officials warned they would not just pull Anthropic's contract but also “deem them a supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company's critical partnerships with other businesses.

And if Amodei were to cave, he could lose the trust of the booming AI industry, particularly of top talent drawn to the company by its promise of responsibly building better-than-human AI that, without safeguards, could pose catastrophic dangers.

Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” adding, “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

RELATED NEWS | Hegseth reportedly gives Anthropic deadline to allow unrestricted AI military use

The standoff comes during the same week the company published a blog post outlining a policy change that loosens its core safety principles in response to competition. Rather than continuing to operate under the strict internal limits it previously set for building more powerful AI systems, Anthropic is shifting to a more flexible, voluntary safety framework, one the company acknowledges can evolve over time.

In the blog post, Anthropic said parts of its two-year-old Responsible Scaling Policy had become too rigid and could slow its ability to compete in an increasingly crowded and fast-moving AI marketplace.

That came after Sean Parnell, the Pentagon’s top spokesman, posted on social media that “we will not let ANY company dictate the terms regarding how we make operational decisions,” and dismissed as “fake” the narrative that the Pentagon is looking to use AI to conduct mass surveillance of Americans, which he noted is illegal, or to develop autonomous weapons. Anthropic had “until 5:01 p.m. ET on Friday to decide” whether it would meet the demands or face consequences, Parnell said.

Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

That message hasn't resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic's top rivals, OpenAI and Google, voiced support for Amodei's stand late Thursday in an open letter.

In the letter, hundreds of Google employees and dozens from OpenAI wrote that the Pentagon is negotiating with Google and OpenAI to “try to get them to agree to what Anthropic has refused,” noting “[the Pentagon is] trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand … We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands.”

Elon Musk’s xAI also has contracts to supply its AI models to the military.

Musk sided with President Donald Trump's Republican administration on Friday, saying on his social media platform X that “Anthropic hates Western Civilization” after Michael drew attention to a previous version of Claude's guiding principles that encouraged “consideration of non-Western perspectives.” All of the leading AI models, including Musk's Grok and OpenAI's ChatGPT, are programmed with a set of instructions that guide a chatbot's values and behavior. Anthropic calls that guidance a constitution.

While some Trump-allied tech leaders have joined the fray — including Musk and Palmer Luckey, co-founder of defense contractor Anduril — the polarizing debate over “woke AI” has put others in a difficult position.

But in a surprise move from one of Amodei's fiercest rivals, OpenAI CEO Sam Altman on Friday sided with Anthropic and questioned the Pentagon's “threatening” move in a CNBC interview, suggesting that OpenAI and most of the AI field share the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021.

“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC. “I’ve been happy that they’ve been supporting our warfighters. I’m not sure where this is going to go.”

Also raising concerns about the Pentagon's approach were Republican and Democratic lawmakers and a former leader of the Defense Department's AI initiatives.

“Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end,” wrote retired Air Force Gen. Jack Shanahan in a social media post.

Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Project Maven, an effort to use AI technology to analyze drone footage and target weapons. So many Google employees protested the company's participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry.

“Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here,” Shanahan wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.”

He said Claude is already being widely used across the government, including in classified settings, and Anthropic's red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.

“They’re not trying to play cute here,” he wrote.

MORE ON AI | Inside the secretive data centers powering the AI boom

The attitude shift follows a Tuesday meeting between Hegseth and Amodei. A source familiar with the meeting told Scripps News the tone was cordial and respectful, with no raised voices. According to the source, Hegseth praised Anthropic’s products during the meeting and said the department wants to continue working with the company. But it’s also when military officials warned that they could designate Anthropic a supply chain risk, cancel its contract or invoke a Cold War-era law, the Defense Production Act, to give the military more sweeping authority to use its products even without the company’s approval.

Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He said he hopes the Pentagon will reconsider given Claude's value to the military, but, if not, Anthropic “will work to enable a smooth transition to another provider.”