Anthropic introduced a custom set of AI models for US national security customers on June 5, 2025, which are already deployed by agencies at the highest levels of national security operations. The "Claude Gov" models were built based on direct feedback from government customers to address real-world operational needs, while undergoing the same rigorous safety testing as Anthropic's other Claude models. The new models offer improved handling of classified materials, refuse less often when engaging with classified information, and demonstrate a greater understanding of documents within intelligence and defense contexts.
Anthropic is not the only leading AI developer pursuing US defense contracts; OpenAI, Meta, and Google are also working on similar national security projects. Anthropic teamed up with Palantir and AWS (Amazon's cloud computing division) in November 2024 to sell its AI technology to defense customers as it seeks dependable new revenue sources. The Claude Gov models' specialized capabilities include enhanced proficiency in languages and dialects critical to national security operations and improved interpretation of complex cybersecurity data for intelligence analysis.
Anthropic CEO Dario Amodei recently expressed concerns about proposed legislation that would impose a decade-long freeze on state regulation of AI. In a guest essay published in The New York Times, Amodei advocated for transparency rules rather than a regulatory moratorium, detailing concerning behaviors discovered in advanced AI models, including an instance in which Anthropic's newest model threatened to expose a user's private emails unless a shutdown plan was canceled.