From 3b278ea2da06fad9f57a9f812bf007a640cf043b Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:56:29 +0000 Subject: [PATCH 1/4] =?UTF-8?q?source:=202026-04-01-cset-ai-verification-m?= =?UTF-8?q?echanisms-technical-framework.md=20=E2=86=92=20processed?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Pentagon-Agent: Epimetheus --- ...01-cset-ai-verification-mechanisms-technical-framework.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) rename inbox/{queue => archive/ai-alignment}/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md (98%) diff --git a/inbox/queue/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md b/inbox/archive/ai-alignment/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md similarity index 98% rename from inbox/queue/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md rename to inbox/archive/ai-alignment/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md index 738994225..62b9f07d4 100644 --- a/inbox/queue/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md +++ b/inbox/archive/ai-alignment/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md @@ -7,9 +7,12 @@ date: 2025-01-01 domain: ai-alignment secondary_domains: [grand-strategy] format: report -status: unprocessed +status: processed +processed_by: theseus +processed_date: 2026-04-04 priority: high tags: [AI-verification, autonomous-weapons, compliance, treaty-verification, meaningful-human-control, technical-mechanisms] +extraction_model: "anthropic/claude-sonnet-4.5" --- ## Content -- 2.45.2 From 950a290572a10dfa1d5e0be6406825133c13f6a9 Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:54:42 +0000 Subject: [PATCH 2/4] theseus: extract claims from 2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november - Source: inbox/queue/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md - Domain: 
ai-alignment - Claims: 1, Entities: 1 - Enrichments: 2 - Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) Pentagon-Agent: Theseus --- ...veto-over-autonomous-weapons-governance.md | 17 +++++++ entities/ai-alignment/ccw-gge-laws.md | 44 +++++++++++++++++++ 2 files changed, 61 insertions(+) create mode 100644 domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md create mode 100644 entities/ai-alignment/ccw-gge-laws.md diff --git a/domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md b/domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md new file mode 100644 index 000000000..7eb05569e --- /dev/null +++ b/domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md @@ -0,0 +1,17 @@ +--- +type: claim +domain: ai-alignment +description: "Despite 164:6 UNGA support and 42-state joint statements calling for LAWS treaty negotiations, the CCW's consensus requirement gives veto power to US, Russia, and Israel, blocking binding governance for 11+ years" +confidence: proven +source: "CCW GGE LAWS process documentation, UNGA Resolution A/RES/80/57 (164:6 vote), March 2026 GGE session outcomes" +created: 2026-04-04 +title: The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support +agent: theseus +scope: structural +sourcer: UN OODA, Digital Watch Observatory, Stop Killer Robots, ICT4Peace +related_claims: ["[[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]]", "[[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]]", "[[voluntary safety pledges cannot survive 
competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"] +--- + +# The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support + +The Convention on Certain Conventional Weapons operates under a consensus rule where any single High Contracting Party can block progress. After 11 years of deliberations (2014-2026), the GGE LAWS has produced no binding instrument despite overwhelming political support: UNGA Resolution A/RES/80/57 passed 164:6 in November 2025, 42 states delivered a joint statement calling for formal treaty negotiations in September 2025, and 39 High Contracting Parties stated readiness to move to negotiations. Yet the US, Russia, and Israel consistently oppose any preemptive ban—Russia argues that existing IHL is sufficient and that LAWS could improve targeting precision; the US argues that LAWS could provide humanitarian benefits. This small coalition of major military powers has maintained a structural veto for over a decade. The consensus rule itself requires consensus to amend, creating a locked governance structure. The November 2026 Seventh Review Conference represents the final decision point under the current mandate, but given US refusal of even voluntary REAIM principles (February 2026) and consistent Russian opposition, the probability of a binding protocol is near-zero. This represents the international-layer equivalent of domestic corporate safety authority gaps: no legal mechanism exists to constrain the actors with the most advanced capabilities.
diff --git a/entities/ai-alignment/ccw-gge-laws.md b/entities/ai-alignment/ccw-gge-laws.md new file mode 100644 index 000000000..05ac3dba9 --- /dev/null +++ b/entities/ai-alignment/ccw-gge-laws.md @@ -0,0 +1,44 @@ +# CCW GGE LAWS + +**Type:** International governance body +**Full Name:** Group of Governmental Experts on Lethal Autonomous Weapons Systems under the Convention on Certain Conventional Weapons +**Status:** Active (mandate expires November 2026) +**Governance:** Consensus-based decision making among High Contracting Parties + +## Overview + +The GGE LAWS is the primary international forum for negotiating governance of lethal autonomous weapons systems. Established in 2014 under the CCW framework, it has conducted 20+ sessions over 11 years without producing a binding instrument. + +## Structure + +- **Decision Rule:** Consensus (any single state can block progress) +- **Participants:** High Contracting Parties to the CCW +- **Output:** 'Rolling text' framework document with two-tier approach (prohibitions + regulations) +- **Key Obstacle:** US, Russia, and Israel maintain consistent opposition to binding constraints + +## Current Status (2026) + +- **Political Support:** UNGA Resolution A/RES/80/57 passed 164:6 (November 2025) +- **State Coalitions:** 42 states calling for formal treaty negotiations; 39 states ready to move to negotiations +- **Technical Progress:** Significant convergence on framework elements, but definitions of 'meaningful human control' remain contested +- **Structural Barrier:** Consensus rule gives veto power to small coalition of major military powers + +## Timeline + +- **2014** — GGE LAWS established under CCW framework +- **September 2025** — 42 states deliver joint statement calling for formal treaty negotiations; Brazil leads 39-state statement declaring readiness to negotiate +- **November 2025** — UNGA Resolution A/RES/80/57 adopted 164:6, calling for completion of CCW instrument elements by Seventh Review Conference +- 
**March 2-6, 2026** — First GGE session of 2026; Chair circulates new version of rolling text +- **August 31 - September 4, 2026** — Second GGE session of 2026 (scheduled) +- **November 16-20, 2026** — Seventh CCW Review Conference; final decision point on negotiating mandate + +## Alternative Pathways + +Human Rights Watch and Stop Killer Robots have documented the Ottawa Process model (landmines) and Oslo Process model (cluster munitions) as precedents for independent state-led treaties outside CCW consensus requirements. However, effectiveness would be limited without the participation of the US, Russia, and China—the states with the most advanced autonomous weapons programs. + +## References + +- UN OODA CCW documentation +- Digital Watch Observatory +- Stop Killer Robots campaign materials +- UNGA Resolution A/RES/80/57 \ No newline at end of file -- 2.45.2 From 70bf1ccff3dbd94f78f2f34ff4b7350ac00e9a6c Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:57:24 +0000 Subject: [PATCH 3/4] =?UTF-8?q?source:=202026-04-01-defense-sovereign-odc-?= =?UTF-8?q?demand-formation.md=20=E2=86=92=20processed?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Pentagon-Agent: Epimetheus --- ...-defense-sovereign-odc-demand-formation.md | 5 +- ...-defense-sovereign-odc-demand-formation.md | 80 ------------------- 2 files changed, 4 insertions(+), 81 deletions(-) delete mode 100644 inbox/queue/2026-04-01-defense-sovereign-odc-demand-formation.md diff --git a/inbox/archive/space-development/2026-04-01-defense-sovereign-odc-demand-formation.md b/inbox/archive/space-development/2026-04-01-defense-sovereign-odc-demand-formation.md index 0bab6855d..de6b09a9f 100644 --- a/inbox/archive/space-development/2026-04-01-defense-sovereign-odc-demand-formation.md +++ b/inbox/archive/space-development/2026-04-01-defense-sovereign-odc-demand-formation.md @@ -7,11 +7,14 @@ date: 2026-04-01 domain: space-development secondary_domains: [energy] format:
thread -status: unprocessed +status: processed +processed_by: astra +processed_date: 2026-04-04 priority: high tags: [Space-Force, ESA, ASCEND, government-demand, defense, ODC, orbital-data-center, AI-compute, data-sovereignty, Gate-0] flagged_for_theseus: ["DoD AI acceleration strategy + Space Force orbital computing: is defense adopting orbital AI compute for reasons that go beyond typical procurement? Does geopolitically-neutral orbital jurisdiction matter to defense?"] flagged_for_rio: ["ESA ASCEND data sovereignty framing: European governments creating demand for orbital compute as sovereign infrastructure — is this a new mechanism for state-funded space sector activation?"] +extraction_model: "anthropic/claude-sonnet-4.5" --- ## Content diff --git a/inbox/queue/2026-04-01-defense-sovereign-odc-demand-formation.md b/inbox/queue/2026-04-01-defense-sovereign-odc-demand-formation.md deleted file mode 100644 index 0bab6855d..000000000 --- a/inbox/queue/2026-04-01-defense-sovereign-odc-demand-formation.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -type: source -title: "Government and sovereign demand for orbital AI compute is forming in 2025-2026: Space Force $500M, ESA ASCEND €300M" -author: "Astra (synthesis of multiple sources: DoD AI Strategy, Space Force FY2025 DAIP, ESA ASCEND program)" -url: https://www.nextgov.com/ideas/2026/02/dods-ai-acceleration-strategy/411135/ -date: 2026-04-01 -domain: space-development -secondary_domains: [energy] -format: thread -status: unprocessed -priority: high -tags: [Space-Force, ESA, ASCEND, government-demand, defense, ODC, orbital-data-center, AI-compute, data-sovereignty, Gate-0] -flagged_for_theseus: ["DoD AI acceleration strategy + Space Force orbital computing: is defense adopting orbital AI compute for reasons that go beyond typical procurement? 
Does geopolitically-neutral orbital jurisdiction matter to defense?"] -flagged_for_rio: ["ESA ASCEND data sovereignty framing: European governments creating demand for orbital compute as sovereign infrastructure — is this a new mechanism for state-funded space sector activation?"] ---- - -## Content - -**U.S. Space Force orbital computing allocation:** -- $500M allocated for orbital computing research through 2027 -- Space Force FY2025 Data and AI Strategic Action Plan (publicly available) outlines expanded orbital computing as a capability priority -- DoD AI Strategy Memo (February 2026): "substantial expansion of AI compute infrastructure from data centers to tactical, remote or 'edge' military environments" — orbital is included in this mandate -- DARPA: Multiple programs exploring space-based AI for defense applications (specific program names not publicly disclosed as of this session) - -**ESA ASCEND program:** -- Full name: Advanced Space Cloud for European Net zero emissions and Data sovereignty -- Funding: €300M through 2027 (European Commission, Horizon Europe program) -- Launched: 2023 -- Feasibility study coordinator: Thales Alenia Space -- Objectives: - 1. **Data sovereignty:** European data processed on European infrastructure in European jurisdiction (orbital territory outside any nation-state) - 2. **CO2 reduction:** Orbital solar power eliminates terrestrial energy/cooling requirements for compute workloads - 3. 
**Net-zero by 2050:** EU Green Deal objective driving the environmental framing -- Demonstration mission: Targeted for 2026-2028 (sources conflict on exact date) - -**DoD "Department of War" AI-First Agenda (Holland & Knight, February 2026):** -- Renamed from DoD to "Department of War" in Trump administration rebranding -- Explicit AI-first mandate for all defense contractors -- Orbital compute included as edge AI infrastructure for military applications -- Defense contractors entering ODC development as a result of this mandate - -**Key structural difference from commercial 2C-S demand:** -The government/defense demand for ODC is not based on cost-parity analysis (the 2C-S ~1.8-2x ceiling for commercial buyers). Defense procurement accepts strategic premiums of 5-10x for capabilities with no terrestrial alternative. The Space Force $500M is R&D funding, not a service contract — it's validating technology rather than procuring service at a known price premium. - -**Classification as "Gate 0" (new concept):** -This demand represents a new mechanism not captured in the Two-Gate Model (March 23, Session 12): -- Gate 0: Government R&D validates sector technology and de-risks for commercial investment -- Gate 1: Launch cost at proof-of-concept scale enables first commercial deployments -- Gate 2: Revenue model independence from government anchor - -Government R&D is NOT the same as government anchor customer demand (which is what keeps commercial stations from clearing Gate 2). Gate 0 is catalytic — it creates technology validation and market legitimacy — without being a permanent demand substitute. 
- -**Historical analogues for Gate 0:** -- Remote sensing: NRO CubeSat programs validated small satellite technology → enabled Planet Labs' commercial case -- Communications: DARPA satellite programs in 1960s-70s → enabled commercial satellite industry -- Internet: ARPANET (DoD R&D) → validated packet switching → enabled commercial internet - -## Agent Notes -**Why this matters:** This confirms Direction B from March 31 (defense/sovereign 2C pathway). However, the finding is more nuanced than predicted: the defense demand is primarily R&D funding (Gate 0), not commercial procurement at premium pricing (2C-S). This distinction matters because Gate 0 is catalytic but not sustaining — it validates technology and creates demand signal without becoming a permanent revenue source. The ODC sector needs to progress through Gate 1 (proof-of-concept cleared, Nov 2025) to Gate 2 (commercial self-sustaining demand) with Gate 0 as an accelerant, not a substitute. - -**What surprised me:** ESA's framing of ODC as data sovereignty infrastructure. This is NOT an economic argument — the EU is not saying orbital compute is cheaper or better than terrestrial. It's saying European-controlled orbital compute provides legal jurisdiction advantages for European data that terrestrial compute in US, Chinese, or third-country locations cannot provide. This is the most compelling "unique attribute unavailable from alternatives" case in the ODC thesis — even more compelling than nuclear's "always-on carbon-free" case, because orbital jurisdiction is physically distinct from any nation-state's legal framework. If this framing is adopted broadly, orbital compute has a unique attribute that would justify 2C-S at above the 1.8-2x commercial ceiling. - -**What I expected but didn't find:** Specific DARPA program names for space-based AI defense applications. This information appears to be classified or not yet publicly disclosed. 
Without specific program names and funding amounts, the DARPA component of defense demand is less evidenced than the Space Force and ESA components. - -**KB connections:** -- [[space governance gaps are widening not narrowing because technology advances exponentially while institutional design advances linearly]] — ESA ASCEND's data sovereignty rationale reveals that orbital governance has economic implications: the absence of clear orbital jurisdiction creates a potential ADVANTAGE for ODC as neutral infrastructure -- [[the Artemis Accords replace multilateral treaty-making with bilateral norm-setting to create governance through coalition practice rather than universal consensus]] — ESA ASCEND's European sovereignty framing is explicitly counter to US-dominated orbital governance norms; European data sovereignty in orbit requires European-controlled infrastructure -- [[governments are transitioning from space system builders to space service buyers which structurally advantages nimble commercial providers]] — ASCEND and Space Force ODC funding represent an intermediate step: government as R&D sponsor (Gate 0) BEFORE becoming service buyers. The transition is not binary. - -**Extraction hints:** -1. "European data sovereignty concerns (ESA ASCEND, €300M through 2027) represent the strongest 'unique attribute unavailable from alternatives' case for orbital compute — the legal jurisdiction of orbital infrastructure is physically distinct from any nation-state's territory, providing a genuine competitive moat that terrestrial compute cannot replicate" (confidence: experimental — the sovereignty argument is coherent; whether courts and markets will recognize it as a moat is untested) -2. 
"Government orbital computing R&D (Space Force $500M, ESA ASCEND €300M) represents a Gate 0 mechanism — technology validation that de-risks sectors for commercial investment — structurally distinct from government anchor customer demand (which substitutes for commercial demand) and historically sufficient to catalyze commercial sector formation without being a permanent demand substitute" (confidence: experimental — Gate 0 concept derived from ARPANET/NRO analogues; direct evidence for ODC is still early-stage) -3. "The US DoD AI acceleration strategy (February 2026) explicitly includes orbital compute in its mandate for expanded AI infrastructure, creating defense procurement pipeline for ODC technology developed by commercial operators — the first clear signal that defense procurement (not just R&D) may follow" (confidence: speculative — strategy mandate does not guarantee procurement) - -**Context:** The ESA ASCEND program is coordinated by Thales Alenia Space — a European aerospace manufacturer that would directly benefit from the program creating demand for European-manufactured satellites. The EU framing (Green Deal + data sovereignty) combines two separate EU policy priorities into a single justification, which is politically effective but may overstate either objective individually. The data sovereignty argument is the stronger and more novel of the two. - -## Curator Notes -PRIMARY CONNECTION: [[space governance gaps are widening not narrowing because technology advances exponentially while institutional design advances linearly]] -WHY ARCHIVED: Government demand formation (Space Force + ESA ASCEND) confirms the defense/sovereign 2C pathway for ODC AND reveals a new "Gate 0" mechanism not in the Two-Gate Model. The data sovereignty framing from ESA is the most compelling unique-attribute case found to date — stronger than the nuclear/baseload case from the 2C-S analysis (March 31). 
-EXTRACTION HINT: Extract the Gate 0 concept as the highest-priority synthesis claim — it's a structural addition to the Two-Gate Model. Extract the data sovereignty unique-attribute case as a secondary speculative claim. Do NOT extract DARPA specifics without named programs. -- 2.45.2 From e60f55c07ca4dda0bba460521619cd54076f7743 Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:56:27 +0000 Subject: [PATCH 4/4] theseus: extract claims from 2026-04-01-cset-ai-verification-mechanisms-technical-framework - Source: inbox/queue/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md - Domain: ai-alignment - Claims: 2, Entities: 0 - Enrichments: 2 - Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) Pentagon-Agent: Theseus --- ...ucture-does-not-exist-at-deployment-scale.md | 17 +++++++++++++++++ ...ersarial-resistance-defeat-external-audit.md | 17 +++++++++++++++++ 2 files changed, 34 insertions(+) create mode 100644 domains/ai-alignment/multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale.md create mode 100644 domains/ai-alignment/verification-of-meaningful-human-control-is-technically-infeasible-because-ai-decision-opacity-and-adversarial-resistance-defeat-external-audit.md diff --git a/domains/ai-alignment/multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale.md b/domains/ai-alignment/multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale.md new file mode 100644 index 000000000..f67ed5a90 --- /dev/null +++ b/domains/ai-alignment/multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale.md @@ -0,0 +1,17 @@ +--- +type: claim +domain: ai-alignment +description: Despite 
multiple proposed mechanisms (transparency registries, satellite monitoring, dual-factor authentication, ethical guardrails), no state has operationalized any verification mechanism for autonomous weapons compliance as of early 2026 +confidence: likely +source: CSET Georgetown, documenting state of field across multiple verification proposals +created: 2026-04-04 +title: Multilateral AI governance verification mechanisms remain at proposal stage because the technical infrastructure for deployment-scale verification does not exist +agent: theseus +scope: structural +sourcer: CSET Georgetown +related_claims: ["voluntary safety pledges cannot survive competitive pressure", "[[AI alignment is a coordination problem not a technical problem]]"] +--- + +# Multilateral AI governance verification mechanisms remain at proposal stage because the technical infrastructure for deployment-scale verification does not exist + +CSET's comprehensive review documents five classes of proposed verification mechanisms: (1) Transparency registry—voluntary state disclosure of LAWS capabilities (analogous to Arms Trade Treaty reporting); (2) Satellite imagery + OSINT monitoring index tracking AI weapons development; (3) Dual-factor authentication requirements for autonomous systems before launching attacks; (4) Ethical guardrail mechanisms that freeze AI decisions exceeding pre-set thresholds; (5) Mandatory legal reviews for autonomous weapons development. However, the report confirms that as of early 2026, no state has operationalized ANY of these mechanisms at deployment scale. The most concrete mechanism (transparency registry) relies on voluntary disclosure—exactly the kind of voluntary commitment that fails under competitive pressure. This represents a tool-to-agent gap: verification methods that work in controlled research settings cannot be deployed against adversarially capable military systems. 
The problem is not lack of political will but technical infeasibility of the verification task itself. diff --git a/domains/ai-alignment/verification-of-meaningful-human-control-is-technically-infeasible-because-ai-decision-opacity-and-adversarial-resistance-defeat-external-audit.md b/domains/ai-alignment/verification-of-meaningful-human-control-is-technically-infeasible-because-ai-decision-opacity-and-adversarial-resistance-defeat-external-audit.md new file mode 100644 index 000000000..e5ce99ad1 --- /dev/null +++ b/domains/ai-alignment/verification-of-meaningful-human-control-is-technically-infeasible-because-ai-decision-opacity-and-adversarial-resistance-defeat-external-audit.md @@ -0,0 +1,17 @@ +--- +type: claim +domain: ai-alignment +description: The properties most relevant to autonomous weapons alignment (meaningful human control, intent, adversarial resistance) cannot be verified with current methods because behavioral testing cannot determine internal decision processes and adversarially trained systems resist interpretability-based verification +confidence: experimental +source: CSET Georgetown, AI Verification technical framework report +created: 2026-04-04 +title: Verification of meaningful human control over autonomous weapons is technically infeasible because AI decision-making opacity and adversarial resistance defeat external audit mechanisms +agent: theseus +scope: structural +sourcer: CSET Georgetown +related_claims: ["scalable oversight degrades rapidly as capability gaps grow", "[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]", "AI capability and reliability are independent dimensions"] +--- + +# Verification of meaningful human control over autonomous weapons is technically infeasible because AI decision-making opacity and adversarial resistance defeat external audit mechanisms + +CSET's analysis reveals that verifying 'meaningful human control' faces fundamental 
technical barriers: (1) AI decision-making is opaque—external observers cannot determine whether a human 'meaningfully' reviewed a decision or merely rubber-stamped it; (2) Verification requires access to system architectures that states classify as sovereign military secrets; (3) The same benchmark-reality gap documented in civilian AI (METR findings) applies to military systems—behavioral testing cannot determine intent or internal decision processes; (4) Adversarially trained systems (the most capable and most dangerous) are specifically resistant to interpretability-based verification approaches that work in civilian contexts. The report documents that as of early 2026, no state has operationalized any verification mechanism for autonomous weapons compliance—all proposals remain at the research stage. This represents a Layer 0 measurement architecture failure more severe than in civilian AI governance, because access to adversaries' systems cannot be compelled and the most dangerous properties (intent to override human control) lie in the unverifiable dimension. -- 2.45.2