From 6a0cf28cca2ad04cadf2328ff9785698f60d06a6 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 15:00:51 +0000
Subject: [PATCH 1/4] =?UTF-8?q?source:=202026-04-01-unga-resolution-80-57-?=
 =?UTF-8?q?autonomous-weapons-164-states.md=20=E2=86=92=20processed?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Pentagon-Agent: Epimetheus

---
 ...01-unga-resolution-80-57-autonomous-weapons-164-states.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
 rename inbox/{queue => archive/ai-alignment}/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md (98%)

diff --git a/inbox/queue/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md b/inbox/archive/ai-alignment/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md
similarity index 98%
rename from inbox/queue/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md
rename to inbox/archive/ai-alignment/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md
index 7b182f1c3..54aa830ad 100644
--- a/inbox/queue/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md
+++ b/inbox/archive/ai-alignment/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md
@@ -7,10 +7,13 @@ date: 2025-11-06
 domain: ai-alignment
 secondary_domains: [grand-strategy]
 format: official-document
-status: unprocessed
+status: processed
+processed_by: theseus
+processed_date: 2026-04-04
 priority: high
 tags: [autonomous-weapons, LAWS, UNGA, international-governance, binding-treaty, multilateral, killer-robots]
 flagged_for_leo: ["Cross-domain: grand strategy / international governance layer of AI safety"]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---

 ## Content
-- 
2.45.2


From 7e96d630198942d4e9eef3cea43deccfef53c8d2 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 15:01:16 +0000
Subject: [PATCH 2/4] =?UTF-8?q?source:=202026-04-01-voyager-starship-90m-p?=
 =?UTF-8?q?ricing-verification.md=20=E2=86=92=20null-result?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Pentagon-Agent: Epimetheus

---
 .../2026-04-01-voyager-starship-90m-pricing-verification.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
 rename inbox/{queue => null-result}/2026-04-01-voyager-starship-90m-pricing-verification.md (98%)

diff --git a/inbox/queue/2026-04-01-voyager-starship-90m-pricing-verification.md b/inbox/null-result/2026-04-01-voyager-starship-90m-pricing-verification.md
similarity index 98%
rename from inbox/queue/2026-04-01-voyager-starship-90m-pricing-verification.md
rename to inbox/null-result/2026-04-01-voyager-starship-90m-pricing-verification.md
index 51f3c704b..11e19afd1 100644
--- a/inbox/queue/2026-04-01-voyager-starship-90m-pricing-verification.md
+++ b/inbox/null-result/2026-04-01-voyager-starship-90m-pricing-verification.md
@@ -7,9 +7,10 @@ date: 2026-03-21
 domain: space-development
 secondary_domains: []
 format: thread
-status: unprocessed
+status: null-result
 priority: medium
 tags: [Voyager-Technologies, Starlab, Starship, launch-cost, pricing, 10-K, SEC, $90M, full-manifest, 2029]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---

 ## Content
-- 
2.45.2


From be1dca31b7c238e9033d609b10ad2c35d012d9c6 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 15:00:05 +0000
Subject: [PATCH 3/4] theseus: extract claims from
 2026-04-01-stopkillerrobots-hrw-alternative-treaty-process-analysis

- Source: inbox/queue/2026-04-01-stopkillerrobots-hrw-alternative-treaty-process-analysis.md
- Domain: ai-alignment
- Claims: 2, Entities: 1
- Enrichments: 1
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus
---
 ...-is-great-power-veto-not-political-will.md | 17 ++++++++++
 ...ility-inspection-not-production-records.md | 17 ++++++++++
 entities/ai-alignment/stop-killer-robots.md   | 33 +++++++++++++++++++
 3 files changed, 67 insertions(+)
 create mode 100644 domains/ai-alignment/civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will.md
 create mode 100644 domains/ai-alignment/ottawa-model-treaty-process-cannot-replicate-for-dual-use-ai-systems-because-verification-architecture-requires-technical-capability-inspection-not-production-records.md
 create mode 100644 entities/ai-alignment/stop-killer-robots.md

diff --git a/domains/ai-alignment/civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will.md b/domains/ai-alignment/civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will.md
new file mode 100644
index 000000000..23570261e
--- /dev/null
+++ b/domains/ai-alignment/civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: ai-alignment
+description: The 270+ NGO coalition for autonomous weapons governance with UNGA majority support has failed to produce binding instruments after 10+ years because multilateral forums give major powers veto capacity
+confidence: experimental
+source: "Human Rights Watch / Stop Killer Robots, 10-year campaign history, UNGA Resolution A/RES/80/57 (164:6 vote)"
+created: 2026-04-04
+title: Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will
+agent: theseus
+scope: structural
+sourcer: Human Rights Watch / Stop Killer Robots
+related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
+---
+
+# Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will
+
+Stop Killer Robots represents 270+ NGOs in a decade-long campaign for autonomous weapons governance. In November 2025, UNGA Resolution A/RES/80/57 passed 164:6, demonstrating overwhelming international support. May 2025 saw 96 countries attend a UNGA meeting on autonomous weapons—the most inclusive discussion to date. Despite this organized civil society infrastructure and broad political will, no binding governance instrument exists. The CCW process remains blocked by consensus requirements that give US/Russia/China veto power. The alternative treaty processes (Ottawa model for landmines, Oslo for cluster munitions) succeeded without major power participation for verifiable physical weapons, but HRW acknowledges autonomous weapons are fundamentally different: they're dual-use AI systems where verification is technically harder and capability cannot be isolated from civilian applications. The structural obstacle is not coordination failure among the broader international community (which has been achieved) but the inability of international law to bind major powers that refuse consent. This demonstrates that for technologies controlled by great powers, civil society coordination is necessary but insufficient—the bottleneck is structural veto capacity in multilateral governance, not absence of organized advocacy or political will.
diff --git a/domains/ai-alignment/ottawa-model-treaty-process-cannot-replicate-for-dual-use-ai-systems-because-verification-architecture-requires-technical-capability-inspection-not-production-records.md b/domains/ai-alignment/ottawa-model-treaty-process-cannot-replicate-for-dual-use-ai-systems-because-verification-architecture-requires-technical-capability-inspection-not-production-records.md
new file mode 100644
index 000000000..57042ee92
--- /dev/null
+++ b/domains/ai-alignment/ottawa-model-treaty-process-cannot-replicate-for-dual-use-ai-systems-because-verification-architecture-requires-technical-capability-inspection-not-production-records.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: ai-alignment
+description: The Mine Ban Treaty and Cluster Munitions Convention succeeded through production/export controls and physical verification, but autonomous weapons are AI capabilities that cannot be isolated from civilian dual-use applications
+confidence: likely
+source: Human Rights Watch analysis comparing landmine/cluster munition treaties to autonomous weapons governance requirements
+created: 2026-04-04
+title: Ottawa model treaty process cannot replicate for dual-use AI systems because verification architecture requires technical capability inspection not production records
+agent: theseus
+scope: structural
+sourcer: Human Rights Watch
+related_claims: ["[[AI alignment is a coordination problem not a technical problem]]"]
+---
+
+# Ottawa model treaty process cannot replicate for dual-use AI systems because verification architecture requires technical capability inspection not production records
+
+The 1997 Mine Ban Treaty (Ottawa Process) and 2008 Convention on Cluster Munitions (Oslo Process) both produced binding treaties without major military power participation through a specific mechanism: norm creation + stigmatization + compliance pressure via reputational and market access channels. Both succeeded despite US non-participation. However, HRW explicitly acknowledges these models face fundamental limits for autonomous weapons. Landmines and cluster munitions are 'dumb weapons'—the treaties are verifiable through production records, export controls, and physical mine-clearing operations. The technology is single-purpose and physically observable. Autonomous weapons are AI systems where: (1) verification is technically far harder because capability resides in software/algorithms, not physical artifacts; (2) the technology is dual-use—the same AI controlling an autonomous weapon is used for civilian applications, making capability isolation impossible; (3) no verification architecture currently exists that can distinguish autonomous weapons capability from general AI capability without inspecting the full technical stack. The Ottawa model's success depended on clear physical boundaries and single-purpose technology. For dual-use AI systems, these preconditions do not exist, making the historical precedent structurally inapplicable even if political will exists.

diff --git a/entities/ai-alignment/stop-killer-robots.md b/entities/ai-alignment/stop-killer-robots.md
new file mode 100644
index 000000000..c3535c302
--- /dev/null
+++ b/entities/ai-alignment/stop-killer-robots.md
@@ -0,0 +1,33 @@
+# Stop Killer Robots
+
+**Type:** International NGO coalition
+**Founded:** ~2013
+**Focus:** Campaign to ban fully autonomous weapons
+**Scale:** 270+ member NGOs
+**Key Partners:** Human Rights Watch, International Committee for Robot Arms Control
+
+## Overview
+
+Stop Killer Robots is an international coalition of 270+ NGOs campaigning for a binding international treaty to prohibit fully autonomous weapons systems. The coalition advocates for meaningful human control over the use of force and has been active in UN forums including the Convention on Certain Conventional Weapons (CCW) and the UN General Assembly.
+
+## Timeline
+
+- **2013** — Coalition founded to campaign against autonomous weapons
+- **2022-11** — Published analysis of alternative treaty processes outside the CCW framework
+- **2025-05** — Participated in UNGA meeting with officials from 96 countries on autonomous weapons
+- **2025-11** — UNGA Resolution A/RES/80/57 passed 164:6, creating political momentum for governance
+- **2026-11** — Preparing an alternative treaty process to be triggered if the CCW Review Conference fails
+
+## Governance Strategy
+
+The coalition pursues two parallel tracks:
+
+1. **CCW Process:** Engagement with the Convention on Certain Conventional Weapons, blocked by major-power consensus requirements
+2. **Alternative Process:** Preparing an Ottawa/Oslo-style independent state-led process, or a UNGA-initiated process, in case the CCW track fails
+
+## Challenges
+
+- Major military powers (US, Russia, China) block consensus in the CCW
+- Verification architecture for autonomous weapons remains technically unsolved
+- Dual-use nature of AI makes capability isolation impossible
+- Ottawa model (successful for landmines) is not directly applicable to AI systems
\ No newline at end of file
-- 
2.45.2


From ad35c094afb0a32e2affb3dfe97b8571e0cba41b Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 15:00:49 +0000
Subject: [PATCH 4/4] theseus: extract claims from
 2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states

- Source: inbox/queue/2026-04-01-unga-resolution-80-57-autonomous-weapons-164-states.md
- Domain: ai-alignment
- Claims: 2, Entities: 0
- Enrichments: 0
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus
---
 ...ed-from-supporter-to-opponent-in-one-year.md | 17 +++++++++++++++++
 ...opposing-states-control-advanced-programs.md | 17 +++++++++++++++++
 2 files changed, 34 insertions(+)
 create mode 100644 domains/ai-alignment/domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year.md
 create mode 100644 domains/ai-alignment/near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs.md

diff --git a/domains/ai-alignment/domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year.md b/domains/ai-alignment/domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year.md
new file mode 100644
index 000000000..5adef415e
--- /dev/null
+++ b/domains/ai-alignment/domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: ai-alignment
+description: The US shift from supporting the Seoul REAIM Blueprint in 2024 to voting NO on UNGA Resolution 80/57 in 2025 shows that international AI safety governance is fragile to domestic political transitions
+confidence: experimental
+source: UN General Assembly Resolution A/RES/80/57 (November 2025) compared to Seoul REAIM Blueprint (2024)
+created: 2026-04-04
+title: Domestic political change can rapidly erode decade-long international AI safety norms as demonstrated by US reversal from LAWS governance supporter (Seoul 2024) to opponent (UNGA 2025) within one year
+agent: theseus
+scope: structural
+sourcer: UN General Assembly First Committee
+related_claims: ["voluntary-safety-pledges-cannot-survive-competitive-pressure", "government-designation-of-safety-conscious-AI-labs-as-supply-chain-risks", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
+---
+
+# Domestic political change can rapidly erode decade-long international AI safety norms as demonstrated by US reversal from LAWS governance supporter (Seoul 2024) to opponent (UNGA 2025) within one year
+
+In 2024, the United States supported the Seoul REAIM Blueprint for Action on autonomous weapons, joining approximately 60 nations endorsing governance principles. By November 2025, under the Trump administration, the US voted NO on UNGA Resolution A/RES/80/57 calling for negotiations toward a legally binding instrument on LAWS. This represents an active governance regression at the international level within a single year, parallel to domestic governance rollbacks (NIST EO rescission, AISI mandate drift). The reversal demonstrates that international AI safety norms that took a decade to build through the CCW Group of Governmental Experts process are not insulated from domestic political change. A single administration transition can convert a supporter into an opponent, eroding the foundation for multilateral governance. This fragility is particularly concerning because autonomous weapons governance requires sustained multi-year commitment to move from non-binding principles to binding treaties. If key states can reverse position within electoral cycles, the time horizon for building effective international constraints may be shorter than the time required to negotiate and ratify binding instruments. The US reversal also signals to other states that commitments made under previous administrations are not durable, which undermines the trust required for multilateral cooperation on existential risk.
diff --git a/domains/ai-alignment/near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs.md b/domains/ai-alignment/near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs.md
new file mode 100644
index 000000000..4adab808c
--- /dev/null
+++ b/domains/ai-alignment/near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: ai-alignment
+description: The 2025 UNGA resolution on LAWS demonstrates that overwhelming international consensus is insufficient for effective governance when key military AI developers oppose binding constraints
+confidence: experimental
+source: UN General Assembly Resolution A/RES/80/57, November 2025
+created: 2026-04-04
+title: "Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs"
+agent: theseus
+scope: structural
+sourcer: UN General Assembly First Committee
+related_claims: ["voluntary-safety-pledges-cannot-survive-competitive-pressure", "nation-states-will-inevitably-assert-control-over-frontier-AI-development", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
+---
+
+# Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs
+
+The November 2025 UNGA Resolution A/RES/80/57 on Lethal Autonomous Weapons Systems passed with 164 states in favor and only 6 against (Belarus, Burundi, DPRK, Israel, Russia, USA), with 7 abstentions including China. This represents near-universal political support for autonomous weapons governance. However, the vote configuration reveals structural governance failure: the two superpowers most responsible for autonomous weapons development (US and Russia) voted NO, while China abstained. These are precisely the states whose participation is required for any binding instrument to have real-world impact on military AI deployment. The resolution is non-binding and calls for future negotiations, but the states whose autonomous weapons programs pose the greatest existential risk have explicitly rejected the governance framework. This creates a situation where political expression of concern is nearly universal, but governance effectiveness is near-zero because the actors who matter most are structurally opposed. The gap between the 164:6 headline number and the actual governance outcome demonstrates that counting votes without weighting by strategic relevance produces misleading assessments of international AI safety progress.
-- 
2.45.2