When leadership suspects a decision was probably right but knows the optics were clearly poor, a particular kind of corporate unease takes hold. “To try so hard to do the right thing and get so absolutely, like, personally crushed for it — is really painful,” said Sam Altman, OpenAI’s CEO, during an all-hands meeting in early March. The statement felt surprisingly raw; people running billion-dollar businesses rarely say things like that. Whether it was calculated, sincere, or somewhere in between is hard to tell. Either way, it landed in a room full of people who had already made up their minds.
At the heart of all of this is the agreement between OpenAI and the U.S. Department of Defense, which permits the company’s AI models—the same technology that powers ChatGPT—to be deployed on classified military networks. Among the real-world applications under discussion: analyzing possible targets and setting strike priorities based on intelligence from multiple sources. This is a major shift for a company that began with a declared commitment to developing AI safely and for the benefit of humanity. Altman may genuinely believe it is the right choice. Many people, though, are struggling to reconcile OpenAI’s origins with where it stands now.
Key Facts & Context
| Item | Detail |
|---|---|
| Company | OpenAI — creator of ChatGPT; founded 2015; headquartered in San Francisco, California |
| CEO | Sam Altman — defended the Pentagon deal at the Morgan Stanley Tech, Media & Communications Conference, San Francisco, March 5, 2026 |
| The Deal | OpenAI signed a contract with the U.S. Department of Defense allowing its AI models to be deployed on classified military networks |
| Why Anthropic Stepped Back | Anthropic’s CEO Dario Amodei rejected Pentagon demands including autonomous weapons development and mass surveillance of American citizens |
| Internal Backlash | Caitlin Kalinowski, OpenAI’s head of robotics, resigned on March 6, 2026, citing insufficient deliberation on lethal autonomy and domestic surveillance |
| Public Backlash | Users canceled ChatGPT subscriptions; coordinated “rating attacks” launched on app stores following news of the Pentagon agreement |
| NATO Ambitions | OpenAI disclosed it was exploring a contract with NATO; later clarified the opportunity was for unclassified, not classified, networks |
| Key Partner | Anduril — drone and counter-drone technology manufacturer; partnered with OpenAI to analyze drone attacks and assist in their neutralization |
| U.S. Policy Shift | Trump administration preparing new guidelines requiring AI companies to permit “all lawful uses” of their models in government contracts, prioritizing state discretion over corporate ethics policies |
The story of how OpenAI obtained the contract is almost as fascinating as the contract itself. The Pentagon had first approached Anthropic, a company founded by former OpenAI researchers, including CEO Dario Amodei. Those discussions broke down: Amodei reportedly rejected Pentagon demands that included the development of autonomous weapons and, perhaps more concerning, widespread surveillance of American citizens. Anthropic walked away; OpenAI stepped in. The sequence raises an unanswered question: did OpenAI negotiate a genuinely different agreement, or did it accept terms a competitor deemed unacceptable? The agreement, according to Altman, has “more guardrails than any previous agreement for classified work.” Whether that framing holds up under scrutiny remains to be seen.

Even by the turbulent standards of the tech industry, OpenAI’s internal response has been noteworthy. On March 6, the day after Altman publicly defended the agreement in San Francisco, Caitlin Kalinowski, head of OpenAI’s robotics division, resigned. Her stated reason cut to the core of the dispute: she believed the company had not given enough deliberation to domestic surveillance and “lethal autonomy without human approval.” She was not the only one uneasy. Researchers and employees had already voiced concerns, and the criticism had spread beyond the company’s walls into the public app stores, where organized users attacked ChatGPT’s ratings and canceled subscriptions in protest. It is hard to ignore how swiftly this evolved from an internal ethics debate into a story about consumer behavior.
Altman has framed the military relationship in almost civic terms. Speaking at the Morgan Stanley conference, he argued that elected officials, not business executives, should make the ultimate decision about how far AI can be used in defense. That sounds reasonable at first glance. It might even be true. Still, there is something awkward about a CEO who has just signed a classified military contract shifting the moral burden of that choice onto an abstract notion of democratic oversight. The companies that built the tools still chose to make them available.
OpenAI has also said it is interested in a contract with NATO, though the company later clarified that the opportunity would apply only to unclassified networks, not classified ones. The adjustment came so quickly that it felt less like a clarification than a recalibration. Meanwhile, the Trump administration is reportedly drafting new procurement rules that would require AI firms to permit “all lawful uses” of their models in government contracts, giving federal discretion precedence over any ethical boundaries individual companies may have set. If those rules take effect, the controversy OpenAI now faces could become the default condition for every major AI firm with government ambitions.
As this develops, the deeper tension isn’t really about Sam Altman or any particular contract. It’s about what happens when a technology people use to plan vacations, write emails, and ask about their health becomes a system that helps prioritize airstrikes. Those two things now belong to the same product family, and the boundary between them is not as easy to see as people would like. Years ago, Google faced a similar situation with Project Maven and ultimately withdrew under employee protest. OpenAI appears to be making a different wager: that the backlash will subside and that the defense partnership is viable, even essential. It may be right. But resignation letters have a way of outlasting press releases.
