OpenAI and Google employees back Anthropic’s lawsuit against the Pentagon

Why it matters: It pits ethical AI limits against national‑security demand, reshaping future defense‑tech partnerships.
- Anthropic filed a lawsuit challenging the Pentagon’s supply‑chain‑risk label, arguing it’s retaliatory and blocks legitimate contracts.
- OpenAI and Google engineers (including Jeff Dean) filed an amicus brief supporting Anthropic, emphasizing risks of AI‑driven domestic surveillance and autonomous lethal weapons.
- The U.S. Department of Defense maintains the designation, effectively blacklisting any contractor that uses Anthropic’s Claude, even though the model is already embedded in classified intelligence work.
- Other AI firms have signed “any lawful use” contracts with the Pentagon, contrasting with Anthropic’s red‑line stance and underscoring industry division.
Anthropic sued the Pentagon after the Trump administration labeled it a supply‑chain risk for refusing to enable mass surveillance and fully autonomous weapons. In a striking show of solidarity, nearly 40 engineers from OpenAI and Google filed an amicus brief warning that the designation is retaliatory, undermines national security, and dismisses genuine ethical concerns. The dispute highlights a widening rift between AI labs that want to set firm use‑case limits and a defense establishment pushing for unrestricted access.
