

A federal court in San Francisco has issued a preliminary injunction barring the Trump administration from designating the tech giant Anthropic as a “supply chain risk.” Judge Rita F. Lin, a Biden appointee to the Northern District of California court that covers Silicon Valley, issued the order in a 43-page ruling on Thursday.
I’ve outlined the dispute between Anthropic, whose AI system is known as “Claude,” and the administration a few times (including in this piece in the last issue of NR Magazine). The Defense Department (referred to by the administration’s preferred moniker, the “Department of War,” in the litigation) has incorporated Claude into its classified systems, as have U.S. spy agencies. According to subject-matter analysts, Anthropic’s product is better than other AI for the purposes involved, so the collaboration has been a boon for national security. Consequently, notwithstanding the standoff between the company and the government, the Defense Department continues to use Claude in its warfighting operations against Iran.
Nevertheless, Defense Secretary Pete Hegseth and President Trump chafed at two restrictions the company contractually imposed on its AI. First, because the technology is advancing more rapidly than even its expert producer can confidently control it, Anthropic prohibits the use of Claude for fully autonomous control of weapons systems — i.e., there must be human operators in the loop. Second, because of AI’s unprecedented capacity to process commercially available bulk data into detailed profiles of people, Anthropic prohibits the use of Claude for mass domestic surveillance. (The Defense Department and U.S. intelligence agencies, which exist to counter hostile foreign powers, are not supposed to do that, anyway.)
Ostensibly, the administration objects because, in its view, government officials rather than tech executives should decide what uses are made of weapons systems; hence, it argues that the only restriction should be that government uses pass muster under current law. Yet, as I’ve explained, AI is evolving faster than the government’s ability to regulate it sensibly. In any event, if the administration doesn’t like Anthropic’s terms, it can find another AI vendor. Indeed, when Anthropic refused to buckle under Secretary Hegseth’s mau-mauing, the administration made a deal with OpenAI (to supplant Claude with its ChatGPT AI system).
In reality, however, what irks the Trump administration is the notion that someone other than Trump gets to decide anything and that there are limits to the government’s authority to dictate what private citizens and businesses must do. So, instead of contenting itself with finding another AI service, the administration used its familiar bill-of-attainder-style extortion tactics: first, it threatened Anthropic; then, when the company didn’t back down, it tried to put it out of business.
As Judge Lin observed, the case is not about the policy question of what lawful uses the government should be able to make of the AI it buys from a private company. It is about the unlawfulness of the punitive measures the administration took against Anthropic.
Lin concluded that one of the two relevant supply chain risk (SCR) statutes (Section 3252 of Title 10, U.S. Code) most likely does not apply to Anthropic. SCR is a designation that Congress intended to apply to adversaries of the U.S. government who may sabotage its technology systems. Prior to Hegseth’s invocation of it here, the SCR tag had “never been applied to a domestic company” — as one would expect, since it “is directed principally at foreign intelligence agencies, terrorists, and other hostile actors.”
Moreover, even if the SCR designation could theoretically apply, the administration flouted its procedural safeguards, such as Congress’s requirement that the government consider less intrusive measures than an SCR designation. Consequently, in Administrative Procedure Act terms, there are additional reasons to find that the administration’s actions were contrary to law, as well as arbitrary and capricious.
Finally, Lin found that the government had likely violated the constitutional free speech rights of the company. In zeroing in on its CEO, Dario Amodei, during and after the negotiations, it punished the company for bringing public scrutiny to the government’s stance, which the court concluded was “classic illegal First Amendment retaliation.” As Lin put it: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
So far, the administration’s response is what we’ve come to expect. The Pentagon’s chief technology officer, Emil Michael, took to social media to brand the judge’s ruling “a disgrace” filled with “dozens of factual errors” that seeks “DURING A TIME OF CONFLICT” to “upend the [president’s] role as Commander in Chief and disrupt [the Defense Secretary’s] full ability to conduct military operations with the partners it chooses.”
Anthropic has brought two lawsuits against the administration. Lin’s decision primarily addresses Anthropic’s First Amendment contention that the government retaliated against it for its public speech in refusing to comply with the Pentagon’s demands.
A second suit challenges the legality of the supply chain risk designation itself, under Section 4173 of Title 41, U.S. Code. That provision is part of a 2018 set of enactments, the Federal Acquisition Supply Chain Security Act; under one of them, Section 1327(b), such challenges are brought directly in the D.C. Circuit federal appeals court, bypassing the district courts.
The government has seven days to appeal Lin’s injunction. According to Michael, the government continues to deem Anthropic a supply-chain risk while the D.C. Circuit case remains pending.