The White House is increasingly treating Anthropic as both a threat and a necessary part of artificial intelligence development. That contradiction is already shaping policy: the administration is gradually concluding that it cannot do without a company with which it was openly in conflict only recently.
After months of tension and legal battles with the Pentagon, Washington has begun cautiously moving closer to Anthropic, because its most advanced models have proved impossible to ignore. At first, Donald Trump's administration sought to keep its approach to AI as hands-off and pro-innovation as possible. But as the capabilities of such systems have grown, that approach has begun to break down.
The government is intervening more and more actively, deciding who gets access to frontier technologies and how they are used, driven by a growing recognition of their potential. The conflict with Anthropic intensified at the start of the year, when negotiations over the use of its AI in classified military projects reached a dead end. That led to public disputes, lawsuits, and attempts to strike alternative deals with other developers. In an unprecedented move, the company was even designated a supply-chain risk, a status usually reserved for foreign adversaries.
At one stage, the White House considered issuing an executive order that would in effect exclude Anthropic from government systems. Over time, however, it became clear that the company could not be fully shut out. That conclusion hardened after the launch of the powerful Mythos model: despite the dispute with the Pentagon, government bodies began testing it alongside other frontier cybersecurity tools.
Alongside the continuing legal battles, the administration has gradually begun trying to lower the temperature. As Jessica Tillipman of George Washington University notes, regulation through contract mechanisms concentrates significant power in the hands of the agency that signs the agreement, effectively turning its decisions into de facto policy for the whole administration. That, in turn, provokes resistance from other agencies, which want to avoid being bound by the outcome of the Pentagon's failed negotiations.
Anthropic itself says it is working with the US government in areas such as cybersecurity and preserving the country’s leadership in the AI race. A company representative stressed that computing resources are not the constraint, and that talks over broader access to the Mythos model are continuing.
The White House is considering a new executive action that could both define rules for the use of advanced AI systems in the public sector and create a framework for settling the conflict with the company. The discussions remain at an early stage, and no final decisions have yet been made; they involve technology and cybersecurity companies as well as industry associations.
It remains unclear, however, whether any such decision would lead to a settlement of the conflict with the Pentagon, where attitudes towards the company remain deeply hostile. Defense Secretary Pete Hegseth told a congressional hearing on Thursday that Anthropic is "run by an ideological fanatic who should not be given sole authority to decide what we do".