Pentagon taps Boeing, Lockheed Martin in first step to blacklist Anthropic: Report


The Pentagon on Wednesday asked Boeing and Lockheed Martin to assess their reliance on Anthropic's AI model, Claude, an early move that could pave the way for formally designating the firm a "supply chain risk", Axios reported.

Such a designation in the US is typically reserved for companies linked to adversarial states. Applying it to a leading American technology company — particularly one whose software is embedded in classified military systems — would represent an extraordinary departure from precedent.

Pentagon Probes Contractors’ Exposure to Claude AI

The Pentagon contacted Boeing and Lockheed Martin to request an analysis of their exposure to Anthropic and its AI model, Claude, according to individuals familiar with the discussions.

A spokesperson for Lockheed Martin confirmed that the company had been approached by the Defense Department regarding an examination of its exposure to and reliance on Anthropic ahead of "a potential supply chain risk declaration". Boeing did not immediately respond to requests for comment.

The Pentagon intends to expand the inquiry to other major defence contractors — the so-called “traditional primes” responsible for supplying fighter aircraft, missile systems and other core military hardware — to determine whether and how they are integrating Claude into their workflows.

While such outreach does not in itself sever contractual ties, it signals that the department is laying the groundwork for a more severe measure should negotiations with Anthropic collapse.

Classified Systems and Strategic Operations at Stake

Claude currently holds a unique position within the US military’s AI architecture: it is the only AI model operating inside classified systems. Through Anthropic’s partnership with Palantir, the system was deployed during the operation to capture Venezuela’s Nicolás Maduro and is viewed internally as capable of supporting future contingencies, including a possible military campaign involving Iran.

Officials are said to be impressed with Claude’s performance across a range of military use cases. Yet frustration has mounted over Anthropic’s refusal to relax its safeguards to permit use of the model for what the Pentagon describes as “all lawful purposes”.

Anthropic has maintained firm restrictions, particularly prohibiting the use of Claude for mass surveillance of Americans or for developing weapons that operate without human involvement. Defence officials argue that seeking approval for discrete use cases is operationally impractical.

Tense Meeting and a Friday Deadline

The standoff intensified during a meeting on Tuesday between Defense Secretary Pete Hegseth and Anthropic's chief executive, Dario Amodei, at which Hegseth set a deadline of 5:01 pm on Friday.

Should Anthropic decline to amend its policies, the administration has warned it could invoke the Defense Production Act (DPA) to compel the company to tailor its model to military requirements, or alternatively declare Anthropic a supply chain risk.

Invoking the DPA could allow the military to retain access to Claude while forcing compliance, though such a move would almost certainly invite legal challenge.

The Pentagon stated it was “preparing to execute on any decision that the secretary might make on Friday regarding Anthropic.”

Referring to the possible supply chain risk designation earlier this week, a senior Defense official told Axios: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”

Supply Chain Risk: A Rare and Severe Measure

The “supply chain risk” label is generally associated with companies perceived to pose national security threats because of foreign influence or adversarial state ties. Chinese telecommunications giant Huawei is among the most prominent examples.

Applying such a label to a domestic AI company would be unprecedented and could have sweeping commercial implications. Contractors working with the federal government might be compelled to remove Claude from sensitive systems, potentially disrupting projects already reliant on the model.

At present, the Pentagon's request for exposure assessments represents a preliminary step rather than an immediate directive to sever ties. Some observers view the manoeuvre as strategic brinkmanship intended to pressure Anthropic into making concessions.

Anthropic’s Position: Safeguards and National Security

Anthropic has publicly framed the discussions as constructive, albeit firm.

A company spokesperson described the meeting between Amodei and Hegseth as a continuation of “good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”

The spokesperson declined to comment on the prospect of a supply chain risk designation.

Anthropic’s leadership has repeatedly articulated concerns about the societal dangers of advanced AI, including autonomous weapons and domestic surveillance. Those principles now sit at the heart of a confrontation with the Pentagon at a moment when military adoption of AI systems is accelerating globally.

Competitive Landscape: Google, OpenAI and xAI Enter the Frame

The dispute unfolds against a rapidly shifting competitive backdrop.

Elon Musk’s xAI recently secured an agreement to move its systems into classified military environments under an “all lawful use” standard — precisely the framework Anthropic has resisted.

Google and OpenAI, whose AI models are already deployed in unclassified government systems, are in negotiations to extend their presence into classified domains. One individual familiar with those discussions characterised Claude as the most capable model in several military applications but identified Google’s Gemini as a credible alternative.

The Pentagon has indicated that Google and OpenAI would similarly be expected to loosen safeguards if they are to secure classified contracts.