The US Pentagon has advised its senior leaders that it may continue to use Anthropic's AI tools beyond the six-month phase-out period, according to a report by Reuters. The report comes at a time when tensions between Anthropic and the US government are peaking, with the Claude maker filing a lawsuit against the Trump administration over its designation as a supply chain risk.
According to an internal March 6 memo signed by Pentagon Chief Information Officer Kirsten Davies and seen by Reuters, exemptions to the Anthropic ban may be granted "in rare and extraordinary circumstances" and "will only be considered for mission-critical activities directly supporting national security operations where no viable alternative exists."
Any Pentagon unit seeking to bypass the Anthropic ban must submit a comprehensive risk mitigation plan for official approval.
The memo reportedly directed officials to remove Anthropic's products from systems supporting critical missions such as nuclear weapons and ballistic missile defence.
It is also said to have reaffirmed that the ban on Anthropic extends to defence contractors. The memo gives Pentagon contracting officers 30 days to notify contractors and then certify full compliance by the 180-day deadline.
Anthropic's battle against the Pentagon:
Meanwhile, Anthropic has sought a stay from a US appeals court following the Pentagon's designation of it as a 'supply chain risk'. The company claimed in court that the designation could cost it billions of dollars.
"By Anthropic's best estimate, for 2026, the government's adverse actions risk hundreds of millions, and even several billions, of dollars in lost revenue," Anthropic's lawyer said in court.
"The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public importance — AI safety and the limitations of its own AI model — in violation of the Constitution and laws of the United States," the company added.
The AI startup also received backing from more than 30 OpenAI and Google employees, who filed a motion in the case in support of the company.
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," the employees said in a brief.
Tech giant Microsoft also filed its own motion in support of Anthropic. The company stated that the Pentagon's designation of Anthropic would have "negative ramifications for the entire technology sector and American business community."
"This is not the time to put at risk the very AI ecosystem that the Administration has helped to champion," it added.