Inside Anthropic’s existential negotiations with the Pentagon

Why it matters: The standoff marks a critical juncture for the AI industry, forcing companies to weigh lucrative government contracts against ethical limits on AI's use in surveillance and autonomous weapons.
- Anthropic is resisting Pentagon pressure to adopt the "any lawful use" clause, risking its $200 million contract and potentially broader business relationships.
- The Pentagon, led by CTO Emil Michael, is threatening to designate Anthropic as a "supply chain risk," a move usually reserved for national security threats, to force compliance.
- Geoffrey Gertz (CNAS) notes the Pentagon's public threat is unusual: it could have made the designation without public disclosure, suggesting a deliberate attempt to deter other companies from working with Anthropic.
The bottom line: Anthropic is locked in high-stakes negotiations with the Pentagon over its AI usage policy, specifically the "any lawful use" clause, which could open the door to AI-driven mass surveillance and lethal autonomous weapons. The Pentagon's unusually public threat to label Anthropic a "supply chain risk" underscores the tension between national security demands and AI ethics, putting Anthropic's $200 million contract and broader market access at risk.
