Sam Altman has circulated an internal memorandum to OpenAI staff, outlining the same foundational restrictions that have already placed Anthropic at the center of a bitter standoff with the Pentagon: no artificial intelligence for mass surveillance, and no autonomous lethal weapons systems. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans must maintain meaningful control over high-stakes automated decisions. These are our core red lines," he wrote.
Should other industry leaders—Google chief among them—adopt a similar stance, it would considerably complicate the Defense Department's efforts to find a replacement for Anthropic's Claude, which became the first AI tool embedded in the most classified operations of the American military. What's more, such a move would mark the first time the country's leading technology companies have collectively drawn boundaries around the permissible uses of their products by the state.
At the same time, Altman made clear he has no intention of severing ties with the Pentagon—on the contrary, he is seeking an agreement that would allow ChatGPT to be deployed within classified military systems. "We will try to reach an arrangement with [the Pentagon] that allows our models to be used in classified environments in a manner consistent with our principles. We would ask that the contract cover any use case except those that are illegal or incompatible with cloud deployments—such as domestic surveillance and autonomous offensive weapons," the memorandum states. The Wall Street Journal first reported on the memo.
Notably, ChatGPT already operates within unclassified military systems, and negotiations over its transition into classified environments have accelerated markedly amid the Pentagon's standoff with Anthropic. The military, however, had insisted that OpenAI and Google agree to have their models used "for any lawful purpose"—the very standard Anthropic rejected, as it made no accommodation for the company's own restrictions. Elon Musk's xAI recently accepted those terms, though Grok is not considered a viable substitute for Claude.
Among the mechanisms for enforcing its stated restrictions, OpenAI is, according to Axios sources, considering the continuous refinement of safety and monitoring systems as real-world deployment experience accumulates. The company is also pressing for the presence of researchers with security clearances who could track the technology's use and advise the government on risk. Finally, OpenAI is seeking technical safeguards—among them, constraining its models to cloud environments with no reach to edge devices such as autonomous weapons systems. Judging by the way Pentagon officials have described their position to Axios, these proposals may encounter the same resistance that derailed Anthropic's negotiations: the military has no intention of tolerating undue private-sector influence over critical government decisions.
Meanwhile, the confrontation surrounding Anthropic continues to intensify. After the company's CEO Dario Amodei publicly defended its foundational restrictions, employees at OpenAI and Google signed a collective letter urging their leadership to resist "pressure" from the Pentagon. Emil Michael—the official leading negotiations with the country's top AI companies—called Amodei a "liar" with a "God complex" who is "endangering national security." Many in Washington and Silicon Valley, by contrast, have come to regard Anthropic's stance as a principled one worthy of respect, even as it carries considerable financial risk. Altman and Amodei are former colleagues from OpenAI who became fierce rivals after the latter departed to found Anthropic.
"This is a case where it's important to me to do the right thing rather than merely perform toughness," Altman wrote. "But I understand this may not look great in the short term, and there is a lot of nuance and context here."