President Trump announced his intention to place Anthropic on a U.S. government blacklist—a move already being described as the most consequential and controversial decision yet at the intersection of artificial intelligence and national security. The trigger was the company’s refusal to comply with a Pentagon demand to fully lift restrictions on the military use of its Claude model. Anthropic said it was unwilling to do so because of the risks of mass domestic surveillance and the development of weapons systems capable of using force without human involvement.
The Pentagon is now following through on an earlier threat to designate Anthropic a “supply-chain risk”—a measure typically reserved for companies from adversarial states, such as the Chinese technology giant Huawei. The Defense Department plans to terminate contracts with Anthropic worth up to $200 million and to require all contractors to certify that Claude is not used in their workflows. The company will also be barred from any other cooperation with government bodies. A six-month transition period has been set for switching to alternative solutions.
“I am directing EVERY federal agency of the U.S. government to IMMEDIATELY CEASE any use of Anthropic technology,” Trump wrote on Truth Social. What makes the situation especially unusual is that Claude is currently the only AI model deployed in classified military systems. According to sources, it was used in an operation to capture Nicolas Maduro and could, in theory, be applied in a potential military operation against Iran.
Defense officials had previously praised Claude’s capabilities, acknowledging that abandoning it would be “a major headache.” The decision also creates complications for Palantir, which relies on Claude in some of its most sensitive military projects and will now likely be forced to negotiate with Anthropic’s competitors. Trump, for his part, said the United States would “NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS.” He also accused Anthropic’s leadership of trying to impose its own terms on the military rather than adhering to the Constitution.
Roughly a day earlier, Anthropic chief executive Dario Amodei publicly rejected what the Pentagon described as its “best and final offer,” saying the company “cannot, in good conscience, agree to these requirements.” In response, a senior Pentagon official, Emil Michael, called Amodei a “liar” with a “god complex,” accusing him of being willing to put the country’s security at risk. Neither Anthropic nor the Pentagon offered an immediate comment in response to journalists’ inquiries.
It remains unclear whether Anthropic will challenge the decision in court. Hundreds of millions of dollars are at stake, as many companies working with—or planning to work with—the government may abandon the use of Claude. In recent years, Anthropic has posted rapid growth and secured footholds in key corporate segments of the AI market. Over the past day, Amodei and his team have received broad support for their principled stance, though the financial cost of that decision remains uncertain.
Pentagon officials insist that the boundaries between mass surveillance and legitimate security measures, as well as between autonomous and semi-autonomous weapons systems, are often blurred. In the military’s view, it is impractical to scrutinize every individual use case with a private company. Their position is that once a technology is procured, the armed forces must be able to set their own standards and procedures for its deployment, and that all AI providers should therefore grant access to their models “for all lawful purposes.”
Defense Secretary Pete Hegseth has repeatedly spoken out against “woke AI,” and the Trump administration has grown increasingly hostile toward Anthropic despite the military’s rising reliance on its model. “The only reason we are still talking to them is that we need them—and we need them right now. Their problem is that they really are that good,” one defense official acknowledged ahead of a tense meeting between Hegseth and Amodei.
At the same time, Elon Musk’s xAI recently signed an agreement allowing the Grok model to be used in classified military systems, though sources doubt it can fully replace Claude. Google’s Gemini and OpenAI’s ChatGPT are already deployed in unclassified systems, and the Pentagon is accelerating talks to clear them for work with classified data. Over the past day, hundreds of employees at Google and OpenAI have signed a petition urging their companies to take the same stance as Anthropic. OpenAI chief executive Sam Altman told staff that the company would maintain the same “red lines” on surveillance and autonomous weapons, while still seeking to reach an agreement with the Pentagon.