Pentagon Threatens to Cut Anthropic Deal Over AI Use in Autonomous Weapons and Surveillance

Key Points

  • The Pentagon has asked its AI partners to allow model use for all lawful purposes.
  • Anthropic warns its Claude models could be used in autonomous weapons and mass surveillance.
  • The department has threatened to end a $200 million contract with Anthropic over the dispute.
  • Anthropic says it has not discussed Claude’s use in any specific operation.
  • One AI firm has already agreed to full Pentagon access; others are showing flexibility.
  • Security experts and Anthropic's CEO have called for stronger AI regulation in military contexts.

Background of the Pentagon‑Anthropic Conflict

The United States Department of Defense recently issued a request to its AI partners—Anthropic, OpenAI, Google, and xAI—asking that their models be available for “all lawful purposes.” This broad request has sparked a sharp disagreement with Anthropic, the creator of the Claude family of language models.

Anthropic’s Stated Concerns

Anthropic has publicly expressed apprehension that its Claude models could be used in “fully autonomous weapons and mass domestic surveillance.” The company is reviewing its usage policy with the Pentagon specifically to address these hard limits. An Anthropic spokesperson clarified that the firm has not discussed the use of Claude for any particular military operation, although the Wall Street Journal reported that Claude was employed in a U.S. operation to capture former Venezuelan President Nicolás Maduro.

Pentagon’s Response

Chief Pentagon spokesman Sean Parnell emphasized the need for partners to help warfighters succeed in any conflict, framing the request as essential to national security. In response to Anthropic’s reservations, the Pentagon signaled that it could terminate the existing $200 million contract if the AI provider does not agree to the unrestricted use clause.

Industry Context

According to an anonymous advisor from the Trump administration, one of the four AI companies has already agreed to grant the Pentagon full access to its model, while two others are showing flexibility. The disagreement with Anthropic highlights a growing rift between the defense establishment and several leading AI firms over the permissible scope of AI applications in warfare.

Calls for Regulation

Security experts, policymakers, and Anthropic CEO Dario Amodei have advocated for stricter regulation of AI development, especially concerning weapons systems and other military technologies. The current standoff underscores the tension between rapid AI innovation and the need for ethical safeguards.

Source: techradar.com