OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway

Why it matters: This saga reveals the complex, often opaque, intersection of AI development, corporate policy, and national security.
- OpenAI CEO Sam Altman is facing internal criticism over a new deal with the US military, a rollout he admitted on social media looked "sloppy."
- OpenAI's 2023 usage policy explicitly banned military use, yet employees discovered the Pentagon was already experimenting with Azure OpenAI, a Microsoft-hosted version of OpenAI's models, according to anonymous sources.
- Microsoft and OpenAI spokespeople clarified that Azure OpenAI products were not subject to OpenAI's usage policies; Microsoft said the service became available to the US government in 2023.
- OpenAI updated its policies in January 2024 to remove the blanket ban on military use, a change many employees learned about through news articles.
- OpenAI announced a partnership with Anduril in December 2024 to develop AI systems for national security, signaling a clear shift in its stance.

OpenAI CEO Sam Altman is under fire after the company signed a deal with the US military, a move he himself conceded looked "sloppy" and that employees saw as a reversal of prior policy. This follows revelations that the Pentagon had already been testing OpenAI's models through Microsoft's Azure OpenAI service, despite OpenAI's explicit 2023 ban on military use, leaving employees confused about the true scope of the company's policies.
