
NSA Reportedly Using Anthropic's Mythos Despite Pentagon Feud

AI · Anthropic · Security · Government · NSA
April 19, 2026

TL;DR

  • The NSA is reportedly using Anthropic’s new AI model, Mythos Preview, despite ongoing disputes with the Pentagon.
  • This comes after a Trump administration order to halt Anthropic services and a legal battle over a “supply chain risk” designation.
  • Anthropic’s CEO recently met with White House officials, and the use of Mythos is expanding within the NSA.

NSA Uses Anthropic's Mythos Amidst Government Conflict

Despite a recent and public feud between Anthropic and elements within the US government, the National Security Agency (NSA) is reportedly utilizing Anthropic’s new AI model, Mythos Preview, according to a report from Axios citing two sources with direct knowledge of the matter.

Anthropic unveiled Mythos Preview in early April, positioning it as a general-purpose language model particularly adept at computer security tasks. This capability is likely the primary driver for the NSA’s interest.

However, the situation is complex. In February, President Trump directed all federal agencies to cease using Anthropic’s services. The directive stemmed from Anthropic’s refusal to compromise on certain safeguards during contract negotiations over military applications of its AI; the details of those safeguards remain unclear.

Recent developments include a meeting between Anthropic CEO Dario Amodei and White House Chief of Staff Susie Wiles, alongside other officials. The White House described the meeting as “productive and constructive”, though President Trump claimed to be unaware of it.

Axios’ sources indicate that the NSA is one of approximately 40 organizations granted access to Mythos Preview, with usage reportedly “expanding within the department”.

Legal Battles and Supply Chain Concerns

Anthropic is also locked in a legal dispute with the US government. In March, the company sued the Department of Defense after the Trump administration designated it a “supply chain risk,” and the Pentagon quickly responded with a filing of its own. One court granted Anthropic a preliminary injunction temporarily blocking the designation, while another denied a similar motion. The litigation underscores the tensions surrounding government use of AI and concerns about potential security vulnerabilities.

Why It Matters

This situation presents a fascinating paradox. The executive branch, through a presidential order, attempted to sever ties with Anthropic, yet a key intelligence agency is actively leveraging its technology. This suggests a divergence in priorities and risk assessment between different parts of the government.

For developers, this highlights the increasing scrutiny placed on AI providers working with the government. The “supply chain risk” designation and the demands for specific safeguards demonstrate a growing awareness of the potential security implications of AI models. It may encourage more robust security practices and transparency in AI development.

For enterprises, the case underscores the need to understand the geopolitical landscape and potential regulatory hurdles when adopting AI technologies, especially those with national security implications. The legal uncertainty surrounding Anthropic could serve as a cautionary tale.

It remains uncertain how the legal battles and political tensions will ultimately resolve. The expanding use of Mythos within the NSA suggests a continued need for Anthropic’s capabilities, even amidst the ongoing conflict. It will be worth watching how the court cases progress and whether the administration revisits its directives regarding Anthropic.

Source:

Engadget