– The Washington Times
Artificial intelligence firm Anthropic sued the Trump administration Monday over its move to designate the company a “supply chain risk” to national security, setting up a high-stakes legal battle that could shape the future of AI use in the military.
The San Francisco-based company filed two lawsuits: one in federal court in California and a second in the U.S. Court of Appeals for the Federal Circuit in Washington. The lawsuits challenge the Pentagon’s extraordinary supply chain risk designation, announced last week.
The designation, the first ever applied to an American company, cuts the firm off from federal contracts and prohibits other companies from doing business with Anthropic if they want their own lucrative deals with the government.
The lawsuits are the latest developments in a long-running saga centering on how Anthropic’s popular Claude AI tool can be used in the military.
Anthropic wants ironclad assurances that the tool would never be used to develop fully autonomous weapons or for mass domestic surveillance. The Pentagon said it would not use Claude for either purpose but appeared reluctant to put any promises to the company in writing. The department insists that Claude must be available for “all lawful uses.”
In its court filing, Anthropic said the administration has overstepped.
“These actions are unprecedented and unlawful,” Anthropic’s lawsuit says. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the executive’s unlawful campaign of retaliation.”
The Defense Department declined to comment, citing a policy of not commenting on matters in litigation.
Defense officials tell The Washington Times that Claude is being used in the U.S. military campaign against Iran.
President Trump last week expressed frustration with the company because it refused to give the military unlimited access to its technology.
“Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that,” Mr. Trump told Politico in an interview.
Anthropic says its models are the only ones approved for use in classified settings, but rival firm OpenAI recently announced its own deal with the Pentagon. Notably, OpenAI’s statement says its deal addresses concerns about domestic surveillance and autonomous weapons.
“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” the company said.
OpenAI’s deal with the Pentagon prompted the resignation of the company’s head of robotics, Caitlin Kalinowski.
“This wasn’t an easy call,” Ms. Kalinowski wrote on social media this weekend. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
The Pentagon said it intended to wind down Claude AI’s use for military applications over the next six months.
Even amid the supply chain risk designation and federal lawsuits, the two sides are still reportedly in talks and seeking a path forward for Claude in the military.
Anthropic also has sought to convince businesses and other government agencies that the Trump administration’s supply chain risk designation is narrow and affects military contractors only when they are using Claude in work for the Department of Defense.
Making that distinction clear is crucial for the privately held Anthropic because most of its projected $14 billion in revenue this year comes from businesses and government agencies that are using Claude for computer coding and other tasks. More than 500 customers are paying Anthropic at least $1 million annually for Claude, according to a recent investment announcement that valued the company at $380 billion.
• Defense and National Security Correspondent John T. Seward contributed to this article, which is based in part on wire service reports.
• Ben Wolfgang can be reached at bwolfgang@washingtontimes.com.