Governance as the Matrix: When AI Values and State Authority Collide

Introduction: The Governance Deficit Surfaces in Public

On March 9, 2026, Anthropic — the developer of the Claude large language model — filed two simultaneous federal lawsuits against the United States Department of Defense and more than a dozen federal agencies. The legal action followed the Pentagon’s designation of Anthropic as a ‘supply chain risk to national security,’ a designation previously reserved exclusively for foreign adversaries. The proximate cause: Anthropic refused to grant the military unrestricted use of Claude for autonomous lethal weapons and mass domestic surveillance of American citizens.

The case has generated significant public attention as a First Amendment dispute and a commercial conflict. That framing, while accurate, is insufficient. What the Anthropic-Pentagon confrontation actually reveals is a foundational architectural failure — not of any particular technology, but of the governance model under which AI systems are deployed in high-stakes institutional contexts.

This post examines that failure through the lens of Judgment-Quality AI and the VPAR framework. The central observation is this: all four pillars of VPAR — Veracity, Plurality, Accountability, and Resonance — were absent from the institutional architecture that brought Anthropic’s Claude into the U.S. classified military apparatus. The resulting breakdown was not a malfunction. It was the predictable consequence of deploying a values-embedded system without a pre-agreed Constitutional Library governing its use.

The governance question is not ‘who controls the AI after deployment?’ It is ‘who ratified the rules before deployment?’ The Anthropic-Pentagon dispute is, in its essence, the absence of an answer to the second question masquerading as a conflict over the first.

This period — marked by active military operations in Iran, a $200 million DoD contract in legal dispute, and the first-ever supply chain risk designation applied to an American company — represents a systemic inflection point. The AI industry has crossed from a capability debate into a governance reckoning. Veracity demands that we name it clearly. Plurality requires that we examine all parties’ claims on their own terms. Accountability demands that we trace how the system reached this state.

Core Research Discoveries

The Supply Chain Risk Designation and Its Legal Architecture

Core Concept

The Defense Department invoked 10 U.S.C. § 3252 — a statute designed to exclude foreign adversary components from national security supply chains — against a domestic AI developer for the first time in American legal history. The designation requires all defense contractors and vendors to certify non-use of Anthropic’s models in Pentagon-related work. Simultaneously, President Trump directed all federal civilian agencies to cease use of Anthropic products. Both actions were triggered by Anthropic’s assertion of two specific usage constraints: a prohibition on use for fully autonomous lethal weapons and a prohibition on mass domestic surveillance of American citizens. Crucially, U.S. military operations in Iran continued to rely on Claude throughout this period, via Palantir’s Maven intelligence analysis platform, even as the ban was announced.

Why It Matters to ETUNC

The supply chain risk mechanism was explicitly designed for adversarial foreign entities. Its application to a domestic company — one already embedded in classified military systems — reveals the absence of any pre-existing contractual Constitutional framework that would have resolved the precise disagreement now in litigation. The conflict is not a governance problem that emerged; it is a governance void that was always present.

VPA Alignment

VERACITY
The factual record is internally contradictory: Claude was designated a security risk while simultaneously being used in active combat operations. Neither party disputes the facts; they dispute their meaning.

PLURALITY
The Pentagon’s claim that all use must be ‘lawful’ and Anthropic’s claim that ‘lawful’ is insufficiently specific for AI deployment both reflect legitimate institutional perspectives. No shared definitional framework existed.

ACCOUNTABILITY
No auditable trail of pre-agreed governance terms was in place. The lawsuit itself is the first formal accountability mechanism — introduced after, not before, operational deployment.

ETUNC Integration Point

GUARDIAN
The Guardian agent’s Constitutional alignment function was entirely absent from this deployment architecture. No document governed what ‘lawful use’ meant in the context of Claude’s specific capabilities; a minimal sketch of such a gate follows this list.

ENVOY
The Envoy’s data-processing role — as executed by Palantir’s Maven platform — operated without the ethical scaffolding that ETUNC’s architecture requires as a prerequisite to deployment.

RESONATOR
The Resonance failure is perhaps the most significant: Anthropic’s institutional values and the DoD’s operational values were misaligned from the outset, and no ratification process existed to surface or resolve that misalignment before contracts were signed.
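To make the missing Guardian function concrete, here is a minimal sketch of a constitutional gate. Every name in it (the Charter structure, the Decision enum, the example use-case strings) is a hypothetical illustration rather than an existing ETUNC or Anthropic interface; the point is only that ‘what counts as lawful use’ becomes an explicit, checkable predicate, and that anything the charter does not define escalates to humans instead of defaulting to a contested reading of ‘lawful.’

```python
# Hypothetical sketch of a Guardian gate: every requested use case is resolved
# against an explicit charter, and anything the charter does not define
# escalates to human review rather than defaulting to 'lawful'.
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"  # undefined cases go to humans, never to defaults


@dataclass
class Charter:
    authorized: set[str] = field(default_factory=set)
    prohibited: set[str] = field(default_factory=set)


def guardian_gate(charter: Charter, use_case: str) -> Decision:
    """Resolve a requested use case against the ratified charter."""
    if use_case in charter.prohibited:
        return Decision.DENY
    if use_case in charter.authorized:
        return Decision.ALLOW
    return Decision.ESCALATE


# Example: the two contested constraints, stated explicitly rather than
# left to a disputed reading of 'lawful use'.
charter = Charter(
    authorized={"intelligence-pattern-analysis"},
    prohibited={"autonomous-lethal-targeting", "mass-domestic-surveillance"},
)
assert guardian_gate(charter, "autonomous-lethal-targeting") is Decision.DENY
assert guardian_gate(charter, "cyber-operations") is Decision.ESCALATE
```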

The Operational Reality: AI in Active Combat

Core Concept

Confirmed reporting from multiple sources established that Claude, operating through Palantir’s Maven intelligence analysis system, was used in military planning for airstrikes in Iran and in the operation that resulted in the capture of Venezuelan President Nicolás Maduro in January 2026. The specific role described: processing large volumes of intelligence data, identifying patterns, and supporting human analysts in making time-sensitive targeting assessments. Crucially, military commanders and reporting sources consistently stated that humans retain final decision authority over kinetic actions. U.S. Central Command Admiral Brad Cooper publicly confirmed AI use in targeting decisions. Members of Congress simultaneously called for formal oversight frameworks governing AI in combat operations.

Why It Matters to ETUNC

This is the first publicly confirmed instance of a commercially developed frontier LLM operating within active military targeting workflows. It represents the precise deployment scenario for which Constitutional Library governance is architecturally essential — not as a regulatory constraint, but as the operational foundation that makes such deployment coherent and auditable.

VPA Alignment

VERACITY
The factual boundaries of Claude’s role in targeting (supporting analysts vs. directing strikes) are partially verified but not fully transparent. The ‘human in the loop’ assertion is operationally claimed but not architecturally guaranteed (see the sketch after this list).

PLURALITY
Congressional calls for oversight reflect awareness that multiple legitimate frameworks must be integrated: military necessity, civil liberties, international law, and AI safety considerations. No single authority holds all four perspectives.

ACCOUNTABILITY
The absence of a published governance document covering Claude’s use in classified combat operations means no external accountability pathway exists. Congressional oversight requests are the post-hoc attempt to construct what should have been pre-existing.
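The difference between an operationally claimed and an architecturally guaranteed human in the loop can also be stated in code. The sketch below is a hypothetical illustration under one assumption: that recommendations and actionable orders are distinct types, and that the only path from the first to the second is a function that records a named human approver. None of these types describes the actual Maven system.

```python
# Hypothetical sketch: human-in-the-loop as a structural guarantee rather than
# an operational claim. A Recommendation cannot become an ActionableOrder
# except through approve(), which records an identified human approver.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Recommendation:
    target_id: str
    rationale: str  # the AI system's supporting analysis, advisory only


@dataclass(frozen=True)
class ActionableOrder:
    target_id: str
    approved_by: str       # the human decision authority, always recorded
    approved_at: datetime  # timestamp for the audit trail


def approve(rec: Recommendation, approver: str) -> ActionableOrder:
    """The sole constructor of an actionable order: a named human approval."""
    if not approver:
        raise ValueError("an identified human approver is required")
    return ActionableOrder(rec.target_id, approver, datetime.now(timezone.utc))
```

In this arrangement the oversight requirement is not a policy statement about how the system is used; it is a property of what the system can emit.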

ETUNC Integration Point

GUARDIAN
The Guardian function — ethical Constitutional alignment — is the component whose absence is most consequential in active combat contexts. The question ‘what is this system authorized to do?’ must be answered architecturally, not litigated retroactively.

ENVOY
The Envoy’s intelligence processing role is well-suited to the pattern recognition, data synthesis, and volume reduction tasks confirmed in these reports. The technical capability is sound. The governance layer beneath it is not.

RESONATOR
The Resonator dimension surfaces a systemic gap: the AI system’s values architecture (Anthropic’s guidelines) and the deploying institution’s values architecture (DoD doctrine) were never formally harmonized. Resonance was assumed, not verified.

The Governance-as-Architecture Argument

Core Concept

Pentagon CTO Emil Michael articulated the DoD’s position with unusual clarity: Anthropic’s Claude would ‘pollute’ the defense supply chain because it has ‘a different policy preference baked in.’ This framing reveals the structural nature of the conflict. The objection is not to Claude’s performance on military tasks, nor to Anthropic’s technical competence. The objection is that Anthropic’s values — its Constitutional architecture — are inseparable from the model itself. Anthropic’s own public statement confirmed Claude’s deep integration: ‘the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers.’ The company separately confirmed that it had declined several hundred million dollars in revenue to cut off use by firms linked to the Chinese Communist Party, even in the absence of a legal requirement to do so.

Why It Matters to ETUNC

The Pentagon CTO’s statement is the clearest articulation in public record of what ETUNC’s framework identifies as the core challenge of LLM deployment in institutional contexts: governance and capability are not separable layers. They are a unified architectural property. The disagreement between Anthropic and the DoD is, in ETUNC terms, a collision between two Constitutional Libraries that were never reconciled before deployment commenced.

VPA Alignment

VERACITY
Both parties are stating accurate claims about incommensurable architectural facts. The Pentagon is correct that Anthropic’s guidelines are baked in. Anthropic is correct that a model without guidelines is a different product. Neither claim is empirically contestable.

PLURALITY
The framing of ‘policy preference baked in’ as a defect rather than a feature reflects one institutional perspective. From the perspective of any enterprise deploying an AI system under governance obligations, values-embedded architecture is the requirement, not the obstacle.

ACCOUNTABILITY
The absence of a jointly authored and ratified deployment charter — specifying which Constitutional provisions apply, which are negotiable, and under what conditions human override operates — is the accountability gap that produced this dispute.

ETUNC Integration Point

GUARDIAN
The Guardian is not a constraint imposed on deployment. It is the pre-condition of deployment. This case demonstrates what happens when the Guardian function is absent: the values architecture of the model developer and the values architecture of the deploying institution are discovered to be incompatible after hundreds of millions of dollars in commitments have been made.

ENVOY
The Envoy has demonstrated robust operational value across intelligence analysis, targeting support, and data synthesis. Technical performance is not at issue. The question is whether a technically capable system can be deployed ethically without a governance substrate.

RESONATOR
Resonance — the alignment of values, intent, and institutional context across time — is precisely what was not established. The Anthropic-DoD relationship reached operational scale before the Resonance layer was verified. In ETUNC’s architecture, this sequence is prohibited by design.

Thematic Synthesis

The three research threads examined in this post converge on a single architectural argument. The Anthropic-Pentagon conflict is not primarily a legal dispute, a political dispute, or a commercial dispute — though it is all three. It is, at its foundation, a governance architecture dispute: specifically, the consequence of deploying a values-embedded AI system in a high-stakes institutional context without a pre-agreed Constitutional framework that all parties have ratified.

ETUNC’s central thesis — that governance must be the matrix, not the afterthought — finds in this case its clearest real-world instantiation. The $200 million DoD contract was executed, Claude was embedded in classified networks, and operational reliance deepened over months, while the fundamental question of what the system was authorized to do remained unresolved. That question was not unresolvable. It was simply unaddressed. The Constitutional Library — the document that would have specified authorized use cases, prohibited use cases, human oversight requirements, and escalation procedures — did not exist as a jointly authored and ratified instrument.
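What that instrument might look like as a structured artifact can be sketched directly. The field names below follow the list in the paragraph above (authorized use cases, prohibited use cases, human oversight requirements, escalation procedures); the schema itself is an assumption for illustration, not a published ETUNC specification.

```python
# A minimal sketch of a Constitutional Library entry as a jointly ratified,
# machine-checkable artifact. Field names mirror the prose above; the schema
# is illustrative, not a published standard.
from dataclasses import dataclass, field


@dataclass
class ConstitutionalLibrary:
    version: str                             # version-controlled instrument
    parties: list[str]                       # developer, deployer, oversight
    authorized_use_cases: list[str]
    prohibited_use_cases: list[str]
    human_oversight_requirements: list[str]  # e.g. named approval roles
    escalation_procedures: list[str]         # what happens at the boundary
    ratified_by: dict[str, str] = field(default_factory=dict)  # party -> signature

    def is_ratified(self) -> bool:
        """Deployment precondition: every party has ratified this version."""
        return all(party in self.ratified_by for party in self.parties)
```

The point of the is_ratified precondition is sequencing: the instrument exists and carries every party’s signature before, not after, the system takes any consequential action.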

The thematic pattern surfaces across all three research discoveries. In the supply chain risk designation, governance was retrofitted as litigation. In the active combat deployment, governance was deferred in favor of operational urgency. In the ‘pollute the supply chain’ framing, governance was treated as an obstacle rather than a prerequisite. Each instance reflects the same structural assumption: that a capable AI system can be deployed first, and governed later. This assumption is the central failure mode that ETUNC’s architecture is designed to prevent.

The broader agentic AI landscape inherits this challenge at scale. As AI systems move from advisory to orchestrating roles — executing multi-step tasks across institutional boundaries, classifying, prioritizing, and acting on behalf of organizations — the governance deficit compounds. A Constitutional Library that governs a human analyst’s access to intelligence data must be substantively different from one governing an AI agent with autonomous tasking authority. The Anthropic case exposes the governance gap at a relatively early stage of agentic integration. The imperative is to close it architecturally, before deployment, across every institutional context in which these systems operate.

Governance as the matrix means this: the Constitutional Library is not a constraint on the AI system. It is the instrument by which all parties — the deploying institution, the technology developer, and the human oversight layer — ratify what the system is, what it does, and what it will never do. When that ratification is absent, the system operates in a values vacuum. The Anthropic-Pentagon dispute is the sound that vacuum makes.

Dominant Narrative Patterns

Public and media coverage of the Anthropic-Pentagon dispute has organized primarily around three narrative frames: (1) a First Amendment case about corporate speech and government retaliation; (2) a national security story about AI in active warfare; and (3) a commercial story about the financial stakes for Anthropic and its enterprise customers. Each frame is factually grounded. Each is also partial.

The First Amendment framing, while legally significant, positions the dispute as a rights protection case rather than a systems design failure. The national security framing foregrounds operational risk while backgrounding governance architecture. The commercial framing reduces a structural question to a contract dispute. None of these frames, individually or in combination, surfaces the architectural diagnosis that a Constitutional Library gap is the root cause of all three manifestations simultaneously.

Academic vs. Public Contrast

Academic and policy literature on AI governance — NIST’s AI Risk Management Framework, the EU AI Act, and related frameworks — addresses accountability, transparency, and risk tiering as compliance obligations applied to AI systems from the outside. The Anthropic case demonstrates the limitation of that model: a compliance framework cannot resolve a values conflict that was never surfaced before deployment. External governance applied retrospectively is litigation. Governance designed into the deployment architecture is Constitutional.

The public narrative has not yet developed a vocabulary for this distinction. The concept of a Constitutional Library — a jointly ratified document specifying authorized actions, prohibited actions, and human oversight mechanisms — does not appear in mainstream coverage. The closest the public record comes is Anthropic’s own characterization of its ‘red lines,’ which are described as company policy rather than jointly negotiated governance instruments.

Conclusion: Architectural Clarity

The week of March 9–16, 2026, marks a threshold in the public history of AI governance. For the first time, a frontier AI model was confirmed in use in active combat operations while simultaneously being designated a supply chain risk by the government deploying it. The contradiction is not incidental. It is diagnostic.

What changed in the AI landscape during this period is this: the governance deficit that has characterized enterprise and institutional AI deployment — visible for years in compliance frameworks, academic literature, and policy debate — became a first-order operational and legal crisis. The capability argument for AI in high-stakes contexts has been settled. The governance argument has not. And the cost of settling it retroactively, through litigation and operational disruption, is now empirically established.

ETUNC’s governing principle holds that governance is the matrix — the foundational structure within which AI capability is exercised, not a layer applied after the fact. The Anthropic-Pentagon case does not change this principle. It demonstrates its necessity.

The Constitutional Library model — pre-agreed, jointly authored, version-controlled, auditable — is not a theoretical construct. It is the instrument whose absence produced this crisis. Every enterprise, institution, and government agency deploying AI systems with consequential authority faces the same architectural question: have all parties ratified the rules before any action takes place? The answer to that question is the governance architecture. The governance architecture is the matrix.
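‘Pre-agreed, jointly authored, version-controlled, auditable’ also has a direct mechanical reading, sketched below under obvious simplifications: the charter text is content-addressed with a hash, each party’s ratification is bound to that hash, and deployment is permitted only when every required party has ratified exactly the current version. Key management and signature schemes are deliberately elided, and all names are hypothetical.

```python
# Hypothetical sketch of a ratification check: ratification binds to the hash
# of the exact charter text, so any later edit invalidates prior ratifications
# and forces re-ratification before deployment can proceed.
import hashlib


def charter_digest(charter_text: str) -> str:
    """Content-address the charter so ratification binds to one exact version."""
    return hashlib.sha256(charter_text.encode("utf-8")).hexdigest()


def deployment_permitted(
    charter_text: str,
    required_parties: set[str],
    ratifications: dict[str, str],  # party -> digest that party ratified
) -> bool:
    digest = charter_digest(charter_text)
    return all(ratifications.get(p) == digest for p in required_parties)


# Example: a one-word amendment invalidates every prior ratification.
v1 = "Prohibited: autonomous lethal targeting; mass domestic surveillance."
sigs = {"developer": charter_digest(v1), "deployer": charter_digest(v1)}
assert deployment_permitted(v1, {"developer", "deployer"}, sigs)
assert not deployment_permitted(v1 + " (amended)", {"developer", "deployer"}, sigs)
```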

No court outcome, no Congressional framework, and no supply chain designation resolves this question for future deployments. Only a pre-deployment Constitutional design does.

Suggested Resource Links

A. ETUNC Insights (Internal)

The Consolidation of Governance-First AI

B. Academic / Technical (External)

  • NIST AI Risk Management Framework (AI RMF 1.0) — https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
  • Anthropic Constitutional AI paper — https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback
  • Anthropic Statement: Department of War — https://www.anthropic.com/news/statement-department-of-war
  • DoD AI Adoption and Acceleration Strategy — https://www.ai.mil
  • EU AI Act text (for governance framework comparison) — https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
  • Palantir Maven Smart System technical overview — https://palantir.com/platforms/maven/

Call to Collaboration

The governance challenge documented in this post is not the exclusive concern of AI developers, defense institutions, or legal scholars. It is a shared challenge for every organization deploying AI systems in contexts where consequential decisions are made.

ETUNC Insights invites researchers, institutional architects, governance practitioners, and technology developers to contribute to the shared stewardship of trustworthy AI — specifically, to the development of Constitutional Library frameworks that make pre-deployment governance ratification a practical and replicable architectural pattern.

