Key Points
- Anthropic’s Claude model is tied to a $200 million Pentagon contract that embeds the company’s acceptable‑use policy.
- The Pentagon seeks an “any lawful use” clause, expanding potential military applications.
- Anthropic refuses to support autonomous lethal weapons and mass domestic surveillance.
- Designation as a supply‑chain risk could end the contract and force contractors to drop Claude.
- Other AI firms have already aligned their contracts with the Pentagon’s broader usage demands.
- The dispute highlights tension between rapid AI adoption for defense and corporate ethical policies.
Background
Anthropic, known for its Claude AI model, signed a $200 million contract with the Department of Defense last year. The agreement embeds the company’s “acceptable use policy,” which bars the use of its technology for autonomous kinetic operations and mass domestic surveillance.
Pentagon’s Position
The Pentagon, led by Undersecretary of Defense for Research and Engineering Emil Michael, is pushing for an “any lawful use” clause that would give the military carte blanche to employ Anthropic’s services in any capacity, including mass surveillance and lethal autonomous weapons.
Anthropic’s Red Lines
Anthropic has made clear it will not comply with requests that conflict with its policy. The company cites existing DoD directives that require human judgment in the use of force and prohibit intelligence collection on U.S. persons without specific legal authority.
Potential Consequences
If the Pentagon classifies Anthropic as a supply‑chain risk, the $200 million contract could be terminated and defense contractors that rely on Claude may be forced to remove the technology from their systems. This would have a ripple effect across the defense industry, which currently uses Anthropic’s model for classified work.
Industry Reaction
Other AI firms such as OpenAI, xAI, and Google have already renegotiated their own Pentagon contracts to align with the “any lawful use” language. Anthropic’s stance, by contrast, has drawn both criticism and support from tech workers and policy experts, underscoring the broader debate over responsible AI deployment in national security contexts.
Source: theverge.com