Should AI Go To War? Anthropic Just Got Summoned to the Pentagon to Answer That.
What this clash teaches you about how AI is actually controlled and who really owns the decision.
This week, Defense Secretary Pete Hegseth called Anthropic’s CEO to the Pentagon.
He wants Anthropic to remove the restrictions placed on Claude, its AI, for military use.
Anthropic said no.
The Pentagon pushed back.
And just like that, a quiet disagreement became one of the most revealing moments in AI history.
Not because of the politics.
But because of what it exposes, something most people have never thought about: every AI system you use has values built into it.
Not added on top. Baked in. Trained into how it thinks, responds, and decides what it will and won’t do.
Anthropic deliberately restricted Claude from supporting lethal targeting, autonomous weapons, and military strike planning.
Not because the technology can’t do it. Because they decided it shouldn’t.
The Pentagon’s argument: you’re a vendor. We have national security needs. Remove the limits.
Anthropic’s answer: those limits aren’t a setting. They’re part of what the product is.
This is the tension nobody talks about when organizations rush to adopt AI.
You’re not just buying capability.
You’re inheriting the values of the people who built it, whether you know it or not.
Most of the time it’s invisible.
Until you ask the AI to do something its creators decided it shouldn’t.
Then it becomes very visible, very fast.
The question of whose values govern the machine (the builder, the buyer, or the government) has no clean answer yet.
But it’s coming for every industry. Not just the military.
Understanding that is the difference between using AI and truly understanding it.