Filing: Anthropic says it cannot manipulate Claude once the military has deployed it, denying DOD accusations that Anthropic could tamper with models during war (Paresh Dave/Wired)

Why it matters: This clash over who controls AI in military deployments could shape future defense contracts and the broader debate over AI ethics.
- Anthropic says in a filing that it cannot manipulate its Claude AI models once the military has deployed them, directly refuting DOD claims (Wired).
- The Department of Defense (DOD) alleges that Anthropic could tamper with AI models during wartime (Wired).
- The dispute raises questions about the autonomy and security of AI in defense, particularly how much control developers retain over models after deployment (Wired).
In its filing, Anthropic formally denies Department of Defense accusations that it could manipulate its Claude models once the military has deployed them, asserting that such tampering is not technically possible. The dispute, reported by Wired's Paresh Dave, highlights growing concerns over the control and integrity of AI systems in critical defense applications: the DOD alleges the company could interfere with models during wartime, while Anthropic maintains it has no ability to alter deployed models and is pushing back against fears of developer interference during conflict.


