Last July, Anthropic inked a $200 million contract with the Pentagon, allowing the department broad-based use of its Claude model as the two prospective partners gradually worked out the final terms of engagement. Those terms were supposed to be finalized last week, only for Anthropic to face a decisive test of its oft-professed ethical boundaries.
Defense Secretary Pete Hegseth demanded that his team be allowed to deploy Claude in whatever manner it deemed fit, including applications for domestic surveillance and fully autonomous weaponry, both of which crossed “red lines” drawn by Anthropic CEO Dario Amodei. By Friday afternoon, President Donald Trump had ordered a six-month phaseout of all uses of Claude at the federal level. Hegseth then designated Anthropic a supply-chain risk to national security, all but forbidding military contractors from doing future business with the company. A federal contract was subsequently bestowed upon Anthropic rival OpenAI, which unconvincingly claimed that it would try to safeguard tools like ChatGPT from use in population surveillance and autonomous weapons.
The fallout for Anthropic has been remarkable. It’s the first-ever American company to be deemed a supply-chain risk, which means it’s already lost several users across the federal government. But something even stranger emerged in the aftermath: a lotta liberal goodwill. Social media campaigners encouraged their followers, even the A.I. skeptics, to download Claude en masse. Extremely online observers came up with bizarre metaphors to characterize Anthropic’s heroism and pushed Claude to the top of the app-store charts over the weekend. By Monday morning, there was a Claude service outage that Anthropic attributed to “unprecedented demand” for its products. Even Sen. Brian Schatz and Katy Perry got in on the whole thing. (The fact that American commandos had amply used Claude to plan the Saturday strikes on Iran did not appear to faze many of these folks.) Meanwhile, the corresponding backlash against OpenAI has been so pitched that it’s pushed CEO Sam Altman into claiming the company will amend the surveillance terms of its Pentagon partnership.
It’s understandable that the manic, first-term energy from the libs who embrace any Trump opponent has manifested yet again. But those who’ve chosen Anthropic as a pro-democracy signifier should reconsider their choice of mascot. As anyone who’s paid close attention to Anthropic over the past half-decade will tell you, the company is not only far from ethical; it embodies the very worst, most corrosive aspects of A.I.’s impact on modern society, from creative exploitation to political opportunism to, yes, military lethality.
The hullabaloo around Anthropic’s fight overshadowed another major development last week: The company was ditching its “responsible scaling policy,” a safeguard, unique within the sector, meant to prevent it from developing risky A.I. tools too quickly. It’s not the first time Anthropic has been so flexible with its self-imposed rules. In 2024, it scrapped its blanket ban on selling Claude products to government spy agencies; just after Trump’s reelection, it also partnered with Palantir and Amazon to sell its tools to U.S. military customers. This year, the Pentagon used the Palantir-Anthropic suite in planning the kidnapping of Venezuelan President Nicolás Maduro, a campaign that killed dozens of locals. Even after the capture, Anthropic participated in a Pentagon bidding contest, proposing a system whereby Claude would interpret voice commands to guide offensive, autonomous drone swarms that would employ some human backup.
In the most technical sense, none of this violates the red lines that Amodei outlined around surveilling Americans or allowing his tech to power fully autonomous killing machines. But those lines appear all the thinner when you consider that Anthropic willingly outsourced Claude deployment to two corporations, Palantir and Amazon, that are actively enthusiastic about both applications, especially in partnership with this administration.
That kind of convenient ethical punt has been a constant of Anthropic’s brief history. Long before it reneged on its promise of “responsible” and careful A.I. development, Anthropic used the same unethical shortcuts that have invited so much opprobrium upon competitors like Meta and OpenAI: mass-pirating copyrighted books and songs to speed up model training, circumventing Reddit’s anti-A.I.-crawler protections, and extending its timeline for retaining users’ private chats and Claude sessions. For a company founded by ex-OpenAI executives disaffected with Sam Altman’s business practices, Anthropic seemingly has little compunction about the aggressive tacks it’s already taken to shore up its $380 billion valuation.
To be fair, Anthropic does deserve credit for holding to its red lines with the Trump administration, fending off Hegseth’s explicit threats to force it into compliance by invoking the Defense Production Act (threats that, thankfully for Anthropic, never came to fruition). That’s no small thing when so many other tech companies and CEOs have discarded their professed Trump 1.0 principles, such as defending immigrant workers, decrying Trump’s racist statements, and resigning from White House advisory positions, for the sake of government cash and business-friendly deregulation.
But to celebrate Anthropic’s move through a mass virtuous-capitalism campaign is to give it too much credit; the company did, after all, willingly lend itself to this administration and its most openly craven partners until the final minute. And considering Anthropic’s consistent track record of forsaking the principles that supposedly animate its existence (including the “responsible development” ethos it cast off last week), no one paying attention should expect this conscientious objection to last either. Enjoy Claude if you want; it’s a remarkable chatbot. Just don’t expect it to do anything more to save our democracy, or anyone’s life, or to aid your efforts to prevent A.I. from ruining everything.