Pentagon to drop Anthropic by Friday unless Claude AI safety guardrails are removed
The Pentagon is preparing to remove Anthropic from its supply chain by Friday unless the company rolls back safety guardrails on its Claude AI model, an ultimatum Defense Secretary Pete Hegseth delivered this week, as reported by Politico. The dispute centers on whether a defense vendor can limit military use of a model the government procures for lawful missions.
At issue are Anthropic’s usage restrictions, which are designed to prevent certain applications and which collide with a Defense Department push for tools that function across “all lawful purposes.” The outcome could influence how future AI contracts specify permissible use, risk controls, and who gets the final say on mission scope.
Legal levers: Defense Production Act and supply chain risk designation
Officials have flagged two levers if negotiations fail: invoking the Defense Production Act (DPA) and assigning a supply chain risk designation. Using the DPA to compel a private AI company to alter or remove product safeguards would likely face legal scrutiny, as reported by The Washington Post, with questions about statutory fit and potential judicial pushback.
Anthropic’s leadership has publicly drawn ethical “red lines” for Claude: no use in fully autonomous weapons and no mass surveillance of U.S. citizens, according to the Associated Press. Those limits are framed as safety constraints rather than political positions, but they cut directly against defense buyers’ preference to rely on law, policy, and program-level rules of engagement rather than vendor-imposed prohibitions.
Senior Pentagon technologists argue that usage limits should be set by elected law and DoD policy, not by private vendors’ terms of service. “AI companies shouldn’t be able to sell models to the Department of War and then refuse to let them do Department of War things,” said Emil Michael, Under Secretary of Defense for Research and Engineering.
What a supply chain risk designation means for DoD contractors
A Pentagon “supply chain risk” designation would function as a red flag across acquisition pathways: primes and subs could be required to certify non-use of Anthropic technology in deliverables or tooling, and the label is typically applied to foreign adversaries, as reported by Axios. For contractors, that could constrain procurement choices, chill experimentation, and add diligence steps even where Claude is used only for low-risk internal tasks.
In practice, contractors would likely update supplier attestations, inventory AI dependencies, and document migrations away from affected services where necessary. Transition costs could include revalidating models, retraining staff on new systems, and schedule impacts if government program offices require evidence of removal or segregation.
At the time of this writing, broader tech equities provided a mixed backdrop; Amazon.com, Inc. (AMZN) traded at 208.78, up 1.71% intraday as of 3:21:27 PM EST, based on data from Yahoo Scout. Market conditions do not determine procurement outcomes, but they frame the operating environment for large defense and AI vendors.
