Key Points
- Defense Secretary Pete Hegseth summons Anthropic CEO Dario Amodei to discuss Claude’s military use.
- Pentagon threatens to label Anthropic a supply‑chain risk after the company refused certain defense applications.
- Anthropic’s Claude was reportedly used in a raid that captured Venezuelan President Nicolás Maduro.
- Secretary Hegseth reportedly gave Amodei an ultimatum: cooperate or face exclusion.
- A supply‑chain risk designation could void Anthropic’s defense contract and force other partners to drop Claude.
Pentagon Raises Alarm Over Claude
U.S. Defense Secretary Pete Hegseth has called Anthropic CEO Dario Amodei to the Pentagon to discuss the department’s unease about the military deployment of the company’s artificial‑intelligence system Claude. The meeting follows a Pentagon warning that Anthropic could be designated a “supply‑chain risk,” a label normally reserved for foreign adversaries, because the firm declined to permit the Department of Defense to use its technology for large‑scale surveillance of U.S. citizens or for weapons that operate without human oversight.
Contractual Background and Recent Operations
Anthropic previously signed a contract with the Department of Defense, and its Claude model was reportedly employed during a special‑operations raid that resulted in the capture of Venezuelan President Nicolás Maduro. That operation highlighted the model’s role in sensitive missions and brought underlying tensions between the company and the Pentagon into sharper focus.
Ultimatum and Potential Fallout
According to a source, Secretary Hegseth presented Amodei with an ultimatum: cooperate with the department’s requirements or face exclusion from future defense work. Replacing Anthropic’s technology would be a significant undertaking for the Pentagon, but the stakes are high: a supply‑chain risk designation would void Anthropic’s existing contract and compel other defense contractors to stop using Claude altogether.
Implications for AI Governance
The confrontation underscores broader debates about the role of private AI firms in national security and the ethical limits of advanced machine‑learning tools. While the defense establishment seeks to harness cutting‑edge AI for strategic advantage, companies like Anthropic are pushing back against uses that could infringe on civil liberties or enable fully autonomous weaponry. The outcome of the upcoming meeting could set a precedent for how the U.S. government manages partnerships with AI innovators and how it balances security imperatives with ethical considerations.
Source: techcrunch.com