Tip Sheets
Anthropic v. Pentagon reveals enduring rift between tech, national security
February 26, 2026
The Pentagon told Anthropic this week to open its AI technology for unrestricted military use by Friday or risk losing its government contract. The following Cornell University experts are available to discuss rising tensions.
Sarah Kreps, director of the Tech Policy Institute, focuses on the intersection of international politics, technology, and national security.
Kreps says:
“It’s striking that Anthropic appears caught off guard by how its model is being used. We’ve seen this pattern repeatedly with dual-use technologies. Engineers build tools to solve technical problems. Once those tools scale, governments and societies deploy them in ways the creators did not fully anticipate. Social media, encryption, nuclear research — each followed that trajectory. AI companies have spent years discussing risk and misuse, so there is some irony in seeing the same dynamic reappear here.
“The deeper issue is dual use. AI models are designed for broad civilian markets, but military and national security applications operate under a very different logic. Governments often develop bespoke systems for defense precisely because requirements around control, reliability, and authorization differ from commercial norms. But when civilian platforms are integrated into classified environments, they stop being ordinary software products. They become strategic assets. That shift changes expectations around access, safeguards, and control.
“It’s not surprising these two logics collided. What we’re seeing is less an anomaly than a recurring tension between commercial innovation and national security imperatives.”
Ayham Boucher is a lecturer in information science and the executive director of Cornell’s AI Innovation Hub.
Boucher says:
“What does Anthropic need to do to satisfy the Pentagon’s request? If it’s simply a matter of contractual agreement, that’s straightforward. But if the issue concerns how the model behaves, it becomes more complicated. Did the Pentagon encounter problems with Claude during Maduro’s arrest operation – such as Claude responding, ‘Sorry, I can’t help with that request, as it violates my policy,’ at a critical moment?
“Some guardrails are relatively easy to remove because they’re added as a system prompt. Others, however, are embedded in the model’s core behavior. Addressing those would be costly and could require Anthropic to develop a specialized version of the model specifically for Department of Defense use. Current reports indicate that the DoD is not asking for a change in model behavior, but for approval to use the model for any lawful military purpose.
“Anthropic doesn’t believe its models can or should make warfare decisions responsibly on their own. The company doesn’t consider them mature enough to run a vending machine business autonomously, let alone critical Defense Department systems. The OpenClaw, or ClawdBot, project underscores how easily those models could be forced to hallucinate or make poor decisions, such as sharing sensitive information with an untrusted party.
“In warfare, how fast you can execute the OODA loop (observe, orient, decide, act) confers a clear advantage, and if adversaries use AI to make decisions much faster than humans can, then you are at a serious disadvantage. Is Claude ready to control sensitive defense systems on its own? Certainly not. Is the Pentagon under pressure to build autonomous systems leveraging AI? Certainly yes.”