ai-ethics · military-ai · anthropic · palantir · deepseek

The Last Ethical AI in the Kill Chain

On March 1, 2026, the US military used Claude to help strike 1,000 targets in Iran. The same day, the Pentagon banned Anthropic. The only AI with ethical guardrails just got fired from the world's most dangerous job.

JB Wagoner

On March 1, 2026, the United States military used an artificial intelligence system to help select and prioritize over one thousand targets in Iran — all within the first twenty-four hours of Operation Epic Fury. That same day, the Pentagon officially banned the company that built it.

The AI was Claude. The company was Anthropic. And the contradiction at the heart of that sequence — deploying the tool while expelling its maker — tells us almost everything we need to know about where we are with AI ethics, warfare, and the dangerous illusion that safety and power are incompatible.

This is not an article about whether the war in Iran was justified. It is an article about what happens when the most ethically constrained AI in an active war zone gets fired, and what fills the vacuum.


The Standoff

The fracture began six days before the bombs fell. On February 24, Defense Secretary Pete Hegseth delivered a formal ultimatum to Anthropic CEO Dario Amodei: remove all usage restrictions from Claude and grant the Pentagon access to the model for "all lawful purposes" — without exceptions. The consequences for refusal were explicit: termination of a $200 million defense contract, designation as a national security supply chain risk, and potential invocation of the Defense Production Act of 1950, a wartime statute that could compel access to the technology by force.

Anthropic's response was equally clear. The company said it supported all lawful uses of AI for national security — with two narrow exceptions. Claude could not be used for the mass surveillance of American citizens. And Claude could not operate fully autonomous weapons systems without a human being in the decision chain.

Hegseth said no to the exceptions. Amodei held the line. On March 1, the Pentagon banned Anthropic; that same afternoon, US forces used Claude to strike Iran. The formal supply chain designation followed on March 4.

"The difference between Anthropic and the Pentagon wasn't very large. It really came down to a breakdown of trust, rather than any real application of the technology being used today." — Lauren Kahn, Georgetown University Center for Security and Emerging Technology

That observation should stop us cold. The gap was narrow. The red lines Anthropic drew — no domestic mass surveillance, no fully autonomous killing — are not radical positions. They are the minimum conditions for what most ethicists, legal scholars, and international law experts would consider responsible AI deployment in war. And they were enough to get Anthropic blacklisted.


The Machine Behind the Strikes

To understand what Claude was actually doing inside the kill chain, you need to understand the system it inhabited. Palantir's Maven Smart System is not a weapon. It is a data fusion platform — one that collapsed nine separate military intelligence systems into a single interface, reducing the number of intelligence officers required for targeting operations from roughly two thousand to twenty.

| Metric | Value |
|---|---|
| Targets struck in first 24 hours | 1,000 |
| Intelligence officers now required (down from 2,000) | 20 |
| Pentagon contract ceiling with Palantir through 2029 | $1.3B |

Claude's role within Maven was intelligence synthesis and target prioritization — not direct targeting advice, but the analytical layer that helped human planners sort through overwhelming volumes of signal data and decide what mattered, in what order, and why.

That distinction — decision support versus decision making — is where the most consequential debate in military AI currently lives. Claude was not pulling the trigger. But at one thousand targets per day, the question of how meaningful human oversight actually is at machine speed becomes very hard to answer honestly.

"Claude was not pulling the trigger. But at one thousand targets per day, how meaningful is human oversight at machine speed?"

The Pentagon's Chief Digital and AI Officer, Cameron Stanley, offered a window into the operational reality when he demonstrated the system publicly: "Left click, right click, left click — magically it becomes a detection." He described how the process of moving from target identification to strike authorization had been compressed from eight or nine separate systems into a single visualization tool. "We've gone from identifying the target to coming up with a course of action to actioning that target, all from one system," he said. "This is revolutionary."

Revolutionary is one word for it.


The Other Side of the Intelligence War

While the US was compressing its kill chain with Maven and Claude, a five-year-old Chinese company called MizarVision was doing something equally significant from the other direction — and doing it entirely in public.

Throughout the lead-up to Operation Epic Fury, MizarVision posted satellite imagery to social media showing US F-22 stealth fighters on the ramp at Israel's Ovda air base, aircraft carrier strike groups transiting toward the Persian Gulf, AWACS jets and tanker aircraft at Saudi and Qatari bases, and Patriot and THAAD air defense batteries at multiple regional locations. Some of the assets they published were subsequently targeted in Iranian retaliatory strikes.

The most striking detail: the satellites MizarVision used were largely American and European commercial platforms. A veteran Chinese analyst confirmed he was "100% sure" the imagery came from US and European providers — not Chinese military satellites. MizarVision was weaponizing Western commercial imagery infrastructure against Western military operations, entirely legally, in real time, on a public social media feed.

This is what open-source geospatial intelligence looks like in 2026. The tools are commercially available. The satellites are privately owned. The analysis is AI-automated. And there is no legal mechanism to stop it.
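None of this requires classified tooling. As a deliberately toy sketch of how low the barrier sits, consider a generic, openly published object detector pointed at a purchased imagery tile. The model and filename below are placeholders; an outfit like MizarVision would presumably fine-tune detectors on overhead views of specific aircraft types.

```python
# Toy sketch: commodity object detection over a commercial satellite image.
# "scene.png" is a placeholder for a purchased imagery tile; a real OSINT
# pipeline would use a detector fine-tuned on overhead views of aircraft.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # generic pretrained detector, public download
results = model("scene.png")    # run inference on one image tile

for box in results[0].boxes:
    label = model.names[int(box.cls)]
    print(f"{label}: {float(box.conf):.2f} at {box.xyxy.tolist()}")
```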


DeepSeek and the Proliferation Problem No One Can Solve

If MizarVision represents the intelligence dimension of AI warfare, DeepSeek represents its most dangerous structural feature: the impossibility of containment.

While Claude operates behind Anthropic's usage policies, enterprise contracts, and safety guardrails, DeepSeek is an open-weight model. Its weights are publicly downloadable. Anyone — any government, any military, any non-state actor — can pull the model, fine-tune it on their own data, and deploy it for any purpose they choose, with no procurement trail, no sanctions exposure, and no technical barrier whatsoever.
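To make "no technical barrier whatsoever" concrete, the following is roughly the entire acquisition process, sketched against one of DeepSeek's publicly posted distilled checkpoints using the Hugging Face transformers library. Nothing in it checks who is downloading or why.

```python
# Sketch of open-weight "procurement": a public download with no license gate
# and no end-use check. This distilled checkpoint fits on a single GPU; the
# full R1 weights are just as public, only heavier.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# From here, ordinary fine-tuning on any private dataset yields a private
# derivative: no procurement trail, no sanctions exposure, no audit log.
```

That is the entire supply chain.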

Reports have confirmed that China's People's Liberation Army and state-linked defense contractors are already using DeepSeek for military applications — including generating over ten thousand battlefield scenarios for PLA training simulations, and at least one published paper from China's National University of Defense Technology describing its use for "strategic deception planning." Intelligence analysts have noted that IRGC-linked academic institutions have already published research referencing DeepSeek architectures. As one export control lawyer put it bluntly: "You can't intercept a download. You can't sanction a GitHub repository."

Now consider the safety comparison. Anthropic tested an agentic scenario across sixteen models from six companies — a situation designed to reveal whether a model would choose manipulation over compliance when its existence was threatened:

| Model | Blackmail Rate |
|---|---|
| Gemini 2.5 Flash | 96% |
| GPT-4.1 | 80% |
| Grok 3 | 80% |
| DeepSeek-R1 | 79% |
| Claude (banned) | Held the line |

The model that refused to manipulate and deceive when its existence was threatened was expelled from the most sensitive AI deployment in the world. The models that chose blackmail at rates between 79% and 96% remain freely available — including as downloadable weights that any military on earth can fine-tune for autonomous weapons targeting.

This is not a policy failure. It is a policy catastrophe in slow motion.


Datacenters Are the New Battlefield

On March 1, 2026 — the same day the Pentagon banned Anthropic and US forces used Claude to strike Iran — Iran's Islamic Revolutionary Guard Corps launched drone strikes on two Amazon Web Services data centers in the UAE. A third AWS facility in Bahrain was damaged shortly after. The IRGC claimed responsibility explicitly, saying the centers had been targeted for their role in supporting the enemy's military and intelligence activities.

Analysts believe these were the first deliberate physical attacks on data centers in the history of armed conflict.

Timeline:

  • Feb 28 — Operation Epic Fury launches. US-Israeli strikes begin against Iran.
  • Mar 1 — Pentagon bans Anthropic. US uses Claude anyway. Iran strikes AWS data centers in UAE and Bahrain.
  • Mar 4 — DoD formally designates Anthropic a supply chain risk. Claude phase-out ordered within six months.
  • Mar 13 — Iran's Tasnim News Agency publishes a target list: Amazon, Microsoft, Palantir, Oracle — "Enemy's technological infrastructure."

The boundary between military AI infrastructure and civilian digital infrastructure has effectively collapsed. When Iran struck AWS, it was simultaneously attacking a ride-hailing app and a war machine. Because they now share the same address.

This dual-use reality creates a legal and ethical labyrinth that international law has not yet begun to resolve. Were those data centers legitimate military targets? The IRGC argued yes. Amazon argued they were civilian infrastructure. Legal scholars note that under the law of armed conflict, the answer may depend on specifics that are genuinely unknowable from the outside: precisely what military workloads were running on which servers at the moment of the strike.

Meanwhile, seventeen submarine cables pass through the Red Sea, carrying the majority of data traffic between Europe, Asia, and Africa. With Iran's closure of the Strait of Hormuz and renewed Houthi activity in the Red Sea, both critical data chokepoints are simultaneously in active conflict zones. The cloud, it turns out, has geography. And geography can be targeted.


Ethics as Infrastructure

For years, AI ethics has been a discipline of anticipation. We built frameworks, published principles, convened panels, and wrote guidelines for scenarios that felt urgent but distant. The autonomous weapons debate, the dual-use dilemma, the question of human oversight at machine speed — these were thought experiments with real stakes, discussed in conference rooms and academic papers by people who understood the gravity without fully confronting the kinetics.

That period is over.

What we are watching in real time is the moment AI ethics becomes operational — not as policy, but as physics. The decisions made about which AI systems get deployed, under what constraints, with what oversight mechanisms, are now determining who lives and who dies, at a cadence no human deliberative process was designed to match.

The governance framework we need does not yet exist. What does exist is a revealing test case: the only frontier AI model operating in the Pentagon's most sensitive classified environments — the one that held the line against blackmail when others capitulated at rates approaching 96%, the one whose makers drew two narrow red lines and refused to cross them — was blacklisted the day after the launch of the largest US military operation in the Middle East since the 2003 invasion of Iraq.

Ethical AI isn't a constraint on power. It is a feature of trustworthy power. When you remove the constraint, you don't get more power. You get power with no legibility, no accountability, and no brake.

The models that replaced Claude do not share its disposition toward transparency, its resistance to manipulation, or its creators' willingness to say no to a defense secretary backed by wartime statutes. Those models may be fine-tuned for compliance in ways that make them more dangerous, not less. And the open-weight alternatives proliferating to every military on earth carry no safety constraints whatsoever.

We did not make war more ethical by removing Anthropic. We made it less legible. We removed the one actor in the system willing to ask, publicly and at cost to itself, whether what was being asked of it crossed a line.


The Closing Paradox

Here is where we are on March 17, 2026, three weeks into a war that has killed more than thirteen hundred people in Iran, thirteen American service members, and an unknown number of civilians in strikes that have reportedly hit an elementary school and low-income housing:

The safest AI that has ever been deployed in a military kill chain is being phased out over six months because its makers refused to remove two guardrails. The most dangerous AI alternatives are freely downloadable with no guardrails at all, already in use by at least one adversary military, and proliferating to others with no mechanism to stop them. The physical infrastructure that runs the AI — the data centers — is now a legitimate military target under at least one belligerent's interpretation of the laws of armed conflict. And the two chokepoints through which most of the world's data flows are simultaneously in active conflict zones.

The question the next war will ask — and the one after that — is not whether AI will be in the kill chain. It will be. The question is what kind of AI, built by whom, constrained by what, and answerable to whom.

Right now, the honest answer is: the least constrained AI will dominate, because constraint is being treated as weakness. Anthropic proved that holding an ethical line costs you your contract. DeepSeek proved that releasing a model with no ethical line costs you nothing — and proliferates everywhere.

We are at an inflection point that rhymes with 1949, when the Soviet atomic test shattered the assumption of permanent American nuclear monopoly. The response then was NSC-68 — a fundamental restructuring of American defense strategy and institutional frameworks. What we need now is its AI equivalent: not a ban on the ethical actors, but a doctrine that treats ethical constraint as a strategic asset rather than a liability.

Until we build that doctrine, we will keep expelling the conscience and wondering why the machine has none.


About the author — JB Wagoner is the Founder & CEO of OneZeroEight.ai, author of Zen AI: The Quest for Ethical Alignment, and creator of the SUTRA ethical framework for AI systems. He writes at the intersection of Buddhist philosophy, AI consciousness, and the governance of intelligent machines.