Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troubling, Judge Says

Why it matters: This case could set a precedent for AI developers' control over military use of their tech.
- US District Judge Rita Lin stated the Pentagon's designation of Anthropic as a supply-chain risk looks like an "attempt to cripple" the company and punish it for seeking public scrutiny, potentially violating the First Amendment.
- Anthropic has filed two federal lawsuits, seeking a temporary order to pause the security risk designation, which it alleges is illegal retaliation for its efforts to limit military use of its AI tools.
- The Department of Defense (now the Department of War) argues it followed proper procedures in determining Anthropic's AI tools could be unreliable and that Anthropic might "manipulate the software" if it disagrees with military applications, urging the judge not to second-guess its national security assessment.
- Trump administration attorney Eric Hamilton acknowledged that Defense Secretary Pete Hegseth lacked legal authority to bar all commercial activity with Anthropic, despite Hegseth's public statements on X.
- Judge Lin found it "troubling" that the security designation and broader directives limiting the use of Anthropic's AI tool Claude by government contractors "don't seem to be tailored to stated national security concerns," suggesting the actions went beyond simply canceling contracts.
A US judge has expressed serious concerns that the Pentagon may be illegally retaliating against AI company Anthropic for attempting to restrict military use of its technology, potentially violating the First Amendment. The dispute highlights a growing tension between Silicon Valley's ethical commitments and the government's deployment of advanced AI: the Department of Defense cites national security risks, while Anthropic seeks to pause a damaging "supply-chain risk" designation.

