Hundreds of AI Tech Workers Are Demanding the Pentagon Drop Its Anthropic “Supply-Chain Risk” Label

Anthropic was on the cusp of landing a lucrative $200 million deal with the US Department of Defense, but the company ultimately rejected the deal, citing concerns that its AI could be used to surveil American citizens and to power fully autonomous weapons. In response, US President Trump called on government agencies to stop using Anthropic’s AI, and the Department of Defense went as far as designating Anthropic a “supply-chain risk.” That designation is what tech workers are now urging the department to withdraw.

Tech workers want Anthropic’s “supply-chain risk” designation withdrawn

Hundreds of tech workers have signed a letter calling on the Department of Defense and Congress to withdraw the “supply-chain risk” designation given to Anthropic.

According to the letter, “This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation. The United States is winning the AI competition because of its commitment to free enterprise and the rule of law; undermining that commitment to punish one company is short-sighted and antithetical to our national security interests.”

It adds, “We urge the Department of War to withdraw its supply chain risk designation and resolve this dispute through normal commercial channels. Further, we urge Congress to examine whether the use of these extraordinary authorities against an American technology company is appropriate.”

What does this mean?

The US government typically reserves the “supply-chain risk” designation for foreign entities. The legal definition refers to “the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system.”

However, Anthropic is in no immediate danger of being blacklisted. First, the government must complete a risk assessment and then notify Congress before ties can be cut. Second, Anthropic calls the designation “legally unsound” and is prepared to challenge it in court.

This designation could be a knee-jerk reaction to Anthropic rejecting the deal; for all we know, it will eventually blow over. In the meantime, OpenAI has taken Anthropic’s place.