Artificial intelligence company Anthropic filed a lawsuit against the Trump administration Monday challenging the Pentagon’s decision to label the company and its products as a “supply chain risk” after negotiations over safety guardrails fell apart earlier this month.
The suit, filed in federal court in California on Monday, argues the designation and President Trump’s order for all federal agencies to cease the use of Anthropic are “unprecedented and unlawful.”
The AI firm asked the court to reverse the Pentagon’s decision, warning the “consequences of this case are enormous.” The supply chain risk designation has typically been reserved for foreign adversaries and restricts defense contractors from using the company’s products.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the suit states. “No federal statute authorizes the actions taken here.”
Anthropic has turned to the courts as a “last resort,” the company’s lawyers said, alleging the federal government “retaliated” against the firm for “adhering to its protected viewpoint on a subject of great public significance — AI safety and the limitations of its own AI models.” The lawyers also accused the Trump administration of trying to “destroy” Anthropic’s economic value.
The company filed a separate suit in the federal appeals court in Washington, D.C. as well, requesting a review of the Pentagon’s determination that Anthropic poses a supply chain risk to national security.
Anthropic, which was founded with a particular focus on safety, has provided the Pentagon and intelligence agencies with its technology since late 2024 through a partnership with Palantir.
The company has sought to set itself apart from AI competitors, calling for transparency and basic guardrails on the technology’s development.
During negotiations with the Department of Defense (DOD), Anthropic set two red lines — on domestic mass surveillance and on autonomous weapons — arguing that AI is not reliable enough to make life-or-death decisions and that it is changing what is possible with government surveillance.
The Pentagon rejected Anthropic’s argument and insisted it should be able to use AI for whatever it deems to fall under “all lawful purposes.”
Negotiations fell apart earlier this month, and Anthropic CEO Dario Amodei said the company could not “in good conscience” agree to the Pentagon’s terms. Defense Secretary Pete Hegseth notified the company of the designation last week, while Trump ordered all federal agencies to immediately stop using Anthropic products, including its flagship Claude models.
The DOD is named in the suit, along with numerous other federal agencies and the executive office of the president.
When reached for comment, the DOD and the General Services Administration (GSA) each said they do not comment on litigation as a matter of policy.
White House spokesperson Liz Huston said in a statement to The Hill that Trump will “never allow a radical left, woke company to jeopardize” the country’s national security.
“The President and Secretary of War are ensuring America’s courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders,” Huston wrote.
Despite the Pentagon’s position, Anthropic has argued the restrictions do not bar companies outside of defense work from doing business with the AI firm.
Technology giants seem to agree. Google, Amazon and Apple said last week that Anthropic’s AI tools will stay available on their platforms for work that does not involve the Pentagon.
Amid the fallout, Amodei said last Thursday that the company was having “productive” conversations with the Pentagon. Meanwhile, Emil Michael, undersecretary of Defense for research and engineering, said late Thursday there was no “active” negotiation with Anthropic.
Amodei also circulated an internal memo last week criticizing the Trump administration, though he later apologized for the comments.