US Treasury Halts Use of Anthropic’s AI Technology Amid National Security Concerns
The US Treasury Department has decided to discontinue the use of all artificial intelligence tools developed by Anthropic, including the popular Claude platform. The move follows a directive from President Donald Trump, who cited concerns about national security and the operational use of AI technologies within government agencies.
The announcement was made public by Treasury Secretary Scott Bessent on social media, where he stated, “Under President Trump, no private company will ever dictate the terms of our national security.” The department’s halt on the use of Anthropic products reflects a broader federal review of AI tools and their alignment with the government’s national security priorities and operational policies.
Anthropic, an AI startup known for its Claude language model, has been under scrutiny after it reportedly declined the Pentagon’s request to allow unconditional military use of its AI technology. The Department of Defense was interested in deploying Claude models in fully autonomous weapons systems and mass surveillance programs, but Anthropic maintained contractual restrictions that prevent such applications. The disagreement fueled growing tensions between the federal government and the AI company.
The Treasury’s move is part of a larger governmental effort to regulate and monitor AI technologies, especially as they relate to sensitive uses and national security. Other major US government agencies, including the Federal Housing Finance Agency (FHFA), the State Department, and the Department of Health and Human Services, have also moved to end their relationships with Anthropic in the wake of Trump’s directive.
This decision highlights the complex balance between embracing cutting-edge AI technology and ensuring that its deployment does not compromise national security or governmental control. By ceasing the use of Anthropic’s tools, the Treasury and other government bodies aim to maintain strong oversight over AI applications, preserving autonomy and safety in critical operations.
Meanwhile, alternative AI technologies remain available and authorized for mission-related use within various federal agencies. Competitors like OpenAI’s ChatGPT Enterprise and Google’s Gemini continue to be utilized under strict policy and federal information security requirements.
The broader fallout affects not only Anthropic but also the US government’s approach to AI integration. It signals a cautious, security-conscious stance that constrains how private AI companies can engage with governmental clients, especially in defense-related areas.
For investors and technology watchers, this move underlines the increasing interplay between government policy, national security considerations, and the evolving AI industry. As AI companies navigate these regulatory challenges, the landscape could see significant shifts in partnerships, innovations, and market opportunities.
In short, the US Treasury’s decision to halt all use of Anthropic’s AI products is a clear message on prioritizing national security over rapid technological adoption. It marks a pivotal moment in how AI applications are governed within critical governmental sectors and may influence future collaborations between the public sector and AI developers.
