The US military is still using Claude — but defense-tech clients are fleeing

Why it matters: This case highlights the volatile intersection of AI, defense, and policy, setting a precedent for future tech-military partnerships.
- Anthropic's Claude AI is being used for targeting decisions in U.S. aerial strikes on Iran, often in conjunction with Palantir's Maven system, suggesting hundreds of targets and prioritizing them in real time (The Washington Post).
- President Trump's directive ordered civilian agencies to stop using Anthropic products, while giving the Department of Defense six months to wind down operations, a timeline complicated by the immediate outbreak of conflict.
- Secretary of Defense Pete Hegseth has pledged to designate Anthropic a supply-chain risk, but no official steps have been taken — so no legal barrier currently prevents the military's use of Claude.
- Lockheed Martin and other defense contractors have already begun replacing Anthropic models with competitors this week, with many subcontractors also actively seeking alternatives (Reuters, CNBC).
Anthropic finds itself in a paradoxical position: its Claude AI is actively deployed for targeting decisions in the ongoing U.S.-Iran conflict, even as the company faces a mass exodus of its defense-tech clients. The split stems from conflicting government directives and the looming supply-chain risk designation, leaving a leading AI lab simultaneously crucial to military operations and systematically phased out of the defense industry.
