The U.S. Department of Defense has reportedly launched a formal inquiry into how heavily its top defense contractors rely on artificial intelligence services provided by Anthropic. According to sources familiar with the matter, the Pentagon has contacted major aerospace and technology firms to map out how Anthropic’s “Claude” models are integrated into the nation’s defense infrastructure. The move signals growing concern within the military establishment over “concentration risk”—the danger of the defense sector becoming overly dependent on a single AI provider for critical logistics, data analysis, and decision-support systems. While the Pentagon has long encouraged the adoption of cutting-edge AI to maintain a competitive edge, officials now want assurance that a technical failure, or a change in a private AI firm’s corporate policy, cannot compromise national security or operational readiness.
The investigation comes as Anthropic has rapidly positioned itself as a “safety-first” alternative to other AI giants, making its models particularly attractive for government and defense applications that demand high reliability and ethical alignment. Defense contractors increasingly use AI to streamline everything from predictive maintenance of fighter jets to processing vast amounts of satellite intelligence. The Pentagon’s new line of questioning, however, suggests a desire for greater transparency into the “black box” nature of these algorithms. Officials want to know exactly which tasks are being outsourced to Claude and whether robust fail-safe protocols exist should the service become unavailable or produce “hallucinated” data. There is also an underlying concern about the AI supply chain, including the cloud computing infrastructure that hosts these massive models.
This probe is part of a broader shift in how the U.S. military manages its relationship with Silicon Valley. As AI moves from a peripheral tool to a core component of modern warfare, the Department of Defense is looking to cultivate a more diversified AI ecosystem rather than relying on a few dominant players. By auditing Anthropic’s role within the defense supply chain, the Pentagon aims to identify potential vulnerabilities before adversaries can exploit them. Defense contractors have reportedly been given a deadline to disclose their level of integration, and the results are expected to shape future procurement policies as well as the development of the military’s own internal, “closed-loop” AI systems. For now, the focus remains on balancing the need for rapid technological innovation with the absolute requirement for strategic independence and security.
