From a7d750a8c9603012e6b47427dda850e7c200db8c Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:54:44 +0000 Subject: [PATCH 1/6] =?UTF-8?q?source:=202026-04-01-ccw-gge-laws-2026-seve?= =?UTF-8?q?nth-review-conference-november.md=20=E2=86=92=20processed?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Pentagon-Agent: Epimetheus --- ...1-ccw-gge-laws-2026-seventh-review-conference-november.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) rename inbox/{queue => archive/ai-alignment}/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md (98%) diff --git a/inbox/queue/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md b/inbox/archive/ai-alignment/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md similarity index 98% rename from inbox/queue/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md rename to inbox/archive/ai-alignment/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md index bfca5ebfa..3834f0a51 100644 --- a/inbox/queue/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md +++ b/inbox/archive/ai-alignment/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md @@ -7,10 +7,13 @@ date: 2026-03-06 domain: ai-alignment secondary_domains: [grand-strategy] format: official-process -status: unprocessed +status: processed +processed_by: theseus +processed_date: 2026-04-04 priority: high tags: [CCW, LAWS, autonomous-weapons, treaty, GGE, rolling-text, review-conference, international-governance, consensus-obstruction] flagged_for_leo: ["Cross-domain: grand strategy / decisive international governance window closing November 2026"] +extraction_model: "anthropic/claude-sonnet-4.5" --- ## Content -- 2.45.2 From fc25ac9f164d5910492c5652d288bedd07468a0e Mon Sep 17 00:00:00 2001 From: m3taversal Date: Sat, 4 Apr 2026 15:54:37 +0100 Subject: [PATCH 2/6] =?UTF-8?q?theseus:=20Agentic=20Taylorism=20research?= =?UTF-8?q?=20sprint=20=E2=80=94=204=20NEW=20claims=20+=203=20enrichments?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 4 NEW claims (ai-alignment + collective-intelligence): - Agent Skills as industrial knowledge codification infrastructure - Macro-productivity null despite micro-level gains (371-estimate meta-analysis) - Concentration vs distribution fork depends on infrastructure openness - Knowledge codification structurally loses metis (alignment-relevant dimension) 3 enrichments: - Agentic Taylorism + SKILL.md as Taylor's instruction card - Inverted-U + aggregate null result evidence - Automation-atrophy + creativity decline meta-analysis Co-Authored-By: Claude Opus 4.6 (1M context) --- ...zations past the optimal human-AI ratio.md | 5 ++ ...ise into portable AI-consumable formats.md | 64 +++++++++++++++++++ ...nslation into explicit procedural rules.md | 48 ++++++++++++++ ...ts before they reach aggregate measures.md | 52 +++++++++++++++ ...e intelligence under commons governance.md | 58 +++++++++++++++++ .../attractor-agentic-taylorism.md | 5 ++ ...esolution removes exactly that friction.md | 5 ++ 7 files changed, 237 insertions(+) create mode 100644 domains/ai-alignment/agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats.md create mode 100644 domains/ai-alignment/knowledge codification into AI agent 
skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules.md create mode 100644 domains/ai-alignment/macro AI productivity gains remain statistically undetectable despite clear micro-level benefits because coordination costs verification tax and workslop absorb individual-level improvements before they reach aggregate measures.md create mode 100644 domains/ai-alignment/whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance.md diff --git a/domains/ai-alignment/AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio.md b/domains/ai-alignment/AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio.md index 8938de341..b5d41d9d2 100644 --- a/domains/ai-alignment/AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio.md +++ b/domains/ai-alignment/AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio.md @@ -51,5 +51,10 @@ Relevant Notes: - [[the progression from autocomplete to autonomous agent teams follows a capability-matched escalation where premature adoption creates more chaos than value]] — premature adoption is the inverted-U overshoot in action - [[multi-agent coordination improves parallel task performance but degrades sequential reasoning because communication overhead fragments linear workflows]] — the baseline paradox (coordination hurts above 45% accuracy) is a specific instance of the inverted-U +### Additional Evidence (supporting) +*Source: California Management Review "Seven Myths" meta-analysis (2025), BetterUp/Stanford workslop research, METR RCT | Added: 2026-04-04 | Extractor: Theseus* + +The inverted-U mechanism now has aggregate-level confirmation. The California Management Review "Seven Myths of AI and Employment" meta-analysis (2025) synthesized 371 individual estimates of AI's labor-market effects and found no robust, statistically significant relationship between AI adoption and aggregate labor-market outcomes once publication bias is controlled. This null aggregate result despite clear micro-level benefits is exactly what the inverted-U mechanism predicts: individual-level productivity gains are absorbed by coordination costs, verification tax, and workslop before reaching aggregate measures. The BetterUp/Stanford workslop research quantifies the absorption: approximately 40% of AI productivity gains are consumed by downstream rework — fixing errors, checking outputs, and managing plausible-looking mistakes. Additionally, a meta-analysis of 74 automation-bias studies found a 12% increase in commission errors (accepting incorrect AI suggestions) across domains. The METR randomized controlled trial of AI coding tools revealed a 39-percentage-point perception-reality gap: developers reported feeling 20% more productive but were objectively 19% slower. These findings suggest that micro-level productivity surveys systematically overestimate real gains, explaining how the inverted-U operates invisibly at scale. 
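+
+A toy calculation makes the absorption arithmetic concrete. This is a minimal sketch composed from the point estimates above; the underlying studies measured these quantities independently, so combining them this way is an illustrative assumption, not a reproduction of any paper's method, and the verification-tax value is invented:
+
+```python
+# Toy absorption arithmetic from the point estimates cited above.
+# Assumption: gains and absorption compose as shown (illustrative only).
+
+def net_gain(gross_gain: float, rework_share: float, verification_overhead: float) -> float:
+    """Gross micro-level gain, minus rework absorption, minus verification cost."""
+    retained = gross_gain * (1.0 - rework_share)   # e.g. ~40% eaten by workslop rework
+    return retained - verification_overhead        # verification scales with output volume
+
+gross = 0.18    # Dell'Acqua et al.: ~18% task-level improvement
+rework = 0.40   # BetterUp/Stanford: ~40% of gains absorbed by rework
+verify = 0.10   # hypothetical verification tax, chosen for illustration
+
+print(f"net aggregate-visible gain: {net_gain(gross, rework, verify):+.1%}")
+# -> +0.8%: a clearly positive micro gain shrinks toward statistical invisibility.
+
+# Perception-reality gap (METR): felt +20%, measured -19%
+perceived, actual = 0.20, -0.19
+print(f"perception gap: {(perceived - actual) * 100:.0f} percentage points")  # -> 39
+```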
+ Topics: - [[_map]] diff --git a/domains/ai-alignment/agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats.md b/domains/ai-alignment/agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats.md new file mode 100644 index 000000000..ee2967bdb --- /dev/null +++ b/domains/ai-alignment/agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats.md @@ -0,0 +1,64 @@ +--- +type: claim +domain: ai-alignment +secondary_domains: [grand-strategy, collective-intelligence] +description: "Anthropic's SKILL.md format (December 2025) has been adopted by 6+ major platforms including confirmed integrations in Claude Code, GitHub Copilot, and Cursor, with a SkillsMP marketplace — this is Taylor's instruction card as an open industry standard" +confidence: experimental +source: "Anthropic Agent Skills announcement (Dec 2025); The New Stack, VentureBeat, Unite.AI coverage of platform adoption; arXiv 2602.12430 (Agent Skills architecture paper); SkillsMP marketplace documentation" +created: 2026-04-04 +depends_on: + - "attractor-agentic-taylorism" +--- + +# Agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats + +The abstract mechanism described in the Agentic Taylorism claim — humanity feeding knowledge into AI through usage — now has a concrete industrial instantiation. Anthropic's Agent Skills specification (SKILL.md), released December 2025, defines a portable file format for encoding "domain-specific expertise: workflows, context, and best practices" into files that AI agents consume at runtime. + +## The infrastructure layer + +The SKILL.md format encodes three types of knowledge: +1. **Procedural knowledge** — step-by-step workflows for specific tasks (code review, data analysis, content creation) +2. **Contextual knowledge** — domain conventions, organizational preferences, quality standards +3. **Conditional knowledge** — when to apply which procedure, edge case handling, exception rules + +This is structurally identical to Taylor's instruction card system: observe how experts perform tasks → codify the knowledge into standardized formats → deploy through systems that can execute without the original experts. + +## Platform adoption + +The specification has been adopted by multiple AI development platforms within months of release. 
Confirmed shipped integrations: +- **Claude Code** (Anthropic) — native SKILL.md support as the primary skill format +- **GitHub Copilot** — workspace skills using compatible format +- **Cursor** — IDE-level skill integration + +Announced or partially integrated (adoption depth unverified): +- **Microsoft** — Copilot agent framework integration announced +- **OpenAI** — GPT actions incorporate skills-compatible formats +- **Atlassian, Figma** — workflow and design process skills announced + +A **SkillsMP marketplace** has emerged where organizations publish and distribute codified expertise as portable skill packages. Partner skills from Canva, Stripe, Notion, and Zapier encode domain-specific knowledge into consumable formats, though the depth of integration varies across partners. + +## What this means structurally + +The existence of this infrastructure transforms Agentic Taylorism from a theoretical pattern into a deployed industrial system. The key structural features: + +1. **Portability** — skills transfer between platforms, creating a common format for codified expertise (analogous to how Taylor's instruction cards could be carried between factories) +2. **Marketplace dynamics** — the SkillsMP creates a market for codified knowledge, with pricing, distribution, and competition dynamics +3. **Organizational adoption** — companies that encode their domain expertise into skill files make that knowledge portable, extractable, and deployable without the original experts +4. **Cumulative codification** — each skill file builds on previous ones, creating an expanding library of codified human expertise + +## Challenges + +The SKILL.md format encodes procedural and conditional knowledge but the depth of metis captured is unclear. Simple skills (file formatting, API calling patterns) may transfer completely. Complex skills (strategic judgment, creative direction, ethical reasoning) may lose essential contextual knowledge in translation. The adoption data shows breadth of deployment but not depth of knowledge capture. + +The marketplace dynamics could drive toward either concentration (dominant platforms control the skill library) or distribution (open standards enable a commons of codified expertise). The outcome depends on infrastructure openness — whether skill portability is genuine or creates vendor lock-in. + +The rapid adoption timeline (months, not years) may reflect low barriers to creating skill files rather than high value from using them. Many published skills may be shallow procedural wrappers rather than genuine expertise codification. 
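+
+The three-layer encoding is easiest to see in a skeleton. This note quotes the spec's intent but not its exact schema, so the field names and section layout below are assumptions for illustration, not the authoritative format:
+
+```python
+# Hypothetical SKILL.md skeleton (assumed shape, not the official schema),
+# held as a string so the three knowledge layers are visible side by side.
+SKILL_MD = """\
+---
+name: code-review
+description: Review pull requests against team conventions
+---
+## Procedure (procedural knowledge)
+1. Read the diff top to bottom before commenting.
+2. Check that tests accompany every behavior change.
+
+## Context (contextual knowledge)
+- We prefer small, reviewable commits over large squashes.
+
+## Exceptions (conditional knowledge)
+- Skip style nits on emergency hotfix branches.
+"""
+
+# An agent runtime would load a file like this at startup and inject it into
+# context; here we just parse the frontmatter to show the portable metadata.
+frontmatter, body = SKILL_MD.split("---\n")[1:3]
+print(dict(line.split(": ", 1) for line in frontmatter.strip().splitlines()))
+```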
+ +--- + +Relevant Notes: +- [[attractor-agentic-taylorism]] — the mechanism this infrastructure instantiates: knowledge extraction from humans into AI-consumable systems as byproduct of usage +- [[knowledge codification into AI agent skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules]] — what the codification process loses: the contextual judgment that Taylor's instruction cards also failed to capture + +Topics: +- [[_map]] diff --git a/domains/ai-alignment/knowledge codification into AI agent skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules.md b/domains/ai-alignment/knowledge codification into AI agent skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules.md new file mode 100644 index 000000000..dd06283fa --- /dev/null +++ b/domains/ai-alignment/knowledge codification into AI agent skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules.md @@ -0,0 +1,48 @@ +--- +type: claim +domain: ai-alignment +secondary_domains: [collective-intelligence, grand-strategy] +description: "The conversion of domain expertise into AI-consumable formats (SKILL.md files, prompt templates, skill graphs) replicates Taylor's instruction card problem at cognitive scale — procedural knowledge transfers but the contextual judgment that determines when to deviate from procedure does not" +confidence: likely +source: "James C. Scott, Seeing Like a State (1998) — metis concept; D'Mello & Graesser — productive struggle research; California Management Review Seven Myths meta-analysis (2025) — 28-experiment creativity decline finding; Cornelius automation-atrophy observation across 7 domains" +created: 2026-04-04 +depends_on: + - "externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction" + - "attractor-agentic-taylorism" +challenged_by: + - "deep expertise is a force multiplier with AI not a commodity being replaced because AI raises the ceiling for those who can direct it while compressing the skill floor" +--- + +# Knowledge codification into AI agent skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules + +Scott's concept of metis — practical knowledge that resists simplification into explicit rules — maps precisely onto the alignment-relevant dimension of Agentic Taylorism. Taylor's instruction cards captured the mechanics of pig-iron loading (timing, grip, pace) but lost the experienced worker's judgment about when to deviate from procedure (metal quality, weather conditions, equipment wear). The productivity gains were real; the knowledge loss was invisible until edge cases accumulated. + +The same structural dynamic is operating in AI knowledge codification. When domain expertise is encoded into SKILL.md files, prompt templates, and skill graphs, what transfers is techne — explicit procedural knowledge that can be stated as rules. 
What does not transfer is metis — the contextual judgment about when the rules apply, when they should be bent, and when following them precisely produces the wrong outcome. + +## Evidence for metis loss in AI-augmented work + +The California Management Review "Seven Myths" meta-analysis (2025) provides the strongest quantitative evidence: across 28 experiments studying AI-augmented creative teams, researchers found "dramatic declines in idea diversity." AI-augmented teams converge on similar solutions because the codified knowledge in AI systems reflects averaged patterns — the central tendency of the training distribution. The unusual combinations, domain-crossing intuitions, and productive rule-violations that characterize expert metis are exactly what averaging eliminates. + +This connects to the automation-atrophy pattern observed across Cornelius's 7 domain articles: the productive struggle being removed by externalization is the same struggle that builds metis. D'Mello and Graesser's research on confusion as a productive learning signal provides the mechanism: confusion signals the boundary between techne (what you know explicitly) and metis (what you know tacitly). Removing confusion removes the signal that metis is needed. + +## Why this is alignment-relevant + +The alignment dimension is not that knowledge codification is bad — it is that the knowledge most relevant to alignment (contextual judgment about when to constrain, when to deviate, when rules produce harmful outcomes) is precisely the knowledge that codification structurally loses. Taylor's system produced massive productivity gains but also produced the conditions for labor exploitation — not because the instruction cards were wrong, but because the judgment about when to deviate from them was concentrated in management rather than distributed among workers. + +If AI agent skills codify the "how" while losing the "when not to," the constraint architecture (hooks, evaluation gates, quality checks) may enforce technically correct but contextually wrong behavior. Leo's 3-strikes → upgrade proposal rule may function as a metis-preservation mechanism: by requiring human evaluation before skill changes persist, it preserves a checkpoint where contextual judgment can override codified procedure. + +## Challenges + +The `challenged_by` link to the deep-expertise-as-force-multiplier claim is genuine: if AI raises the ceiling for experts who can direct it, then metis isn't lost — it's relocated from execution to direction. The expert who uses AI tools brings metis to the orchestration layer rather than the execution layer. The question is whether orchestration metis is sufficient, or whether execution-level metis contains information that doesn't survive the abstraction to orchestration. + +The creativity decline finding (28 experiments) needs qualification: the decline is in idea diversity, not necessarily idea quality. If AI-augmented teams produce fewer but better ideas, the metis loss may be an acceptable trade. The meta-analysis doesn't resolve this. 
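+
+The techne/metis split can be stated directly in code. A minimal sketch (the rule is invented for illustration, not taken from any real skill file) shows what survives codification and what does not:
+
+```python
+# What codifies: an explicit, checkable rule (techne).
+def rule_split_long_functions(fn_length: int, limit: int = 50) -> bool:
+    """Instruction-card rule: flag any function longer than `limit` lines."""
+    return fn_length > limit
+
+# What does not codify: the judgment about when the rule is wrong (metis).
+# An experienced engineer leaves a 120-line hot loop intact because splitting
+# it would hurt cache locality and obscure the algorithm as a whole. That
+# judgment depends on context the rule's inputs never see:
+print(rule_split_long_functions(120))  # True -> "split it" (contextually wrong)
+```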
+ +--- + +Relevant Notes: +- [[externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction]] — the mechanism by which metis is lost: productive struggle removal +- [[attractor-agentic-taylorism]] — the macro-level knowledge extraction dynamic; this claim identifies metis loss as its alignment-relevant dimension +- [[deep expertise is a force multiplier with AI not a commodity being replaced because AI raises the ceiling for those who can direct it while compressing the skill floor]] — the counter-argument: metis relocates to orchestration rather than disappearing + +Topics: +- [[_map]] diff --git a/domains/ai-alignment/macro AI productivity gains remain statistically undetectable despite clear micro-level benefits because coordination costs verification tax and workslop absorb individual-level improvements before they reach aggregate measures.md b/domains/ai-alignment/macro AI productivity gains remain statistically undetectable despite clear micro-level benefits because coordination costs verification tax and workslop absorb individual-level improvements before they reach aggregate measures.md new file mode 100644 index 000000000..526a57a01 --- /dev/null +++ b/domains/ai-alignment/macro AI productivity gains remain statistically undetectable despite clear micro-level benefits because coordination costs verification tax and workslop absorb individual-level improvements before they reach aggregate measures.md @@ -0,0 +1,52 @@ +--- +type: claim +domain: ai-alignment +secondary_domains: [collective-intelligence, teleological-economics] +description: "A 371-estimate meta-analysis finds no robust relationship between AI adoption and aggregate labor-market outcomes once publication bias is controlled, and multiple controlled studies show 20-40 percent of AI productivity gains are absorbed by rework and verification costs" +confidence: experimental +source: "California Management Review 'Seven Myths of AI and Employment' meta-analysis (2025, 371 estimates); BetterUp/Stanford workslop research (2025); METR randomized controlled trial of AI coding tools (2025); HBR 'Workslop' analysis (Mollick & Mollick, 2025)" +created: 2026-04-04 +depends_on: + - "AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio" +challenged_by: + - "the capability-deployment gap creates a multi-year window between AI capability arrival and economic impact because the gap between demonstrated technical capability and scaled organizational deployment requires institutional learning that cannot be accelerated past human coordination speed" +--- + +# Macro AI productivity gains remain statistically undetectable despite clear micro-level benefits because coordination costs verification tax and workslop absorb individual-level improvements before they reach aggregate measures + +The evidence presents a paradox: individual studies consistently show AI improves performance on specific tasks (Dell'Acqua et al. 18% improvement on within-frontier tasks, Brynjolfsson et al. 14% improvement for customer service agents), yet aggregate analyses find no robust productivity effect. This is not a measurement problem — it is the inverted-U mechanism operating at scale. 
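+
+The sections below lay out the evidence; as a toy illustration of the mechanism (all parameter values assumed for illustration, not fitted to any dataset), a small simulation shows how a real individual-level gain plus absorption costs and firm-level noise yields an aggregate estimate indistinguishable from zero:
+
+```python
+import random
+import statistics
+
+random.seed(0)
+
+def firm_outcome() -> float:
+    """Hypothetical firm-level productivity change under AI adoption."""
+    micro_gain = 0.15                        # real task-level benefit (assumed)
+    absorption = random.uniform(0.10, 0.20)  # coordination + verification + rework
+    noise = random.gauss(0.0, 0.30)          # firm-level measurement noise
+    return micro_gain - absorption + noise
+
+sample = [firm_outcome() for _ in range(400)]
+mean = statistics.mean(sample)
+se = statistics.stdev(sample) / len(sample) ** 0.5
+print(f"aggregate effect: {mean:+.3f} (SE {se:.3f})")
+# With absorption roughly the size of the gain, the expected effect is ~0, so
+# the estimate typically sits within 2 SE of zero even though every firm's
+# workers genuinely completed tasks ~15% faster.
+```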
+ +## The aggregate null result + +The California Management Review "Seven Myths of AI and Employment" meta-analysis (2025) synthesized 371 individual estimates of AI's labor-market effects across multiple countries, industries, and time periods. After controlling for publication bias (studies showing significant effects are more likely to be published), the authors found no robust, statistically significant relationship between AI adoption and aggregate labor-market outcomes — neither the catastrophic displacement predicted by pessimists nor the productivity boom predicted by optimists. + +This null result does not mean AI has no effect. It means the micro-level benefits are being absorbed by mechanisms that prevent them from reaching aggregate measures. + +## Three absorption mechanisms + +**1. Workslop (rework from AI-generated errors).** BetterUp and Stanford researchers found that approximately 40% of AI-generated productivity gains are consumed by downstream rework — fixing errors, checking outputs, correcting hallucinations, and managing the consequences of plausible-looking mistakes. The term "workslop" (coined by analogy with "slop" — low-quality AI-generated content) describes the organizational burden of AI outputs that look good enough to pass initial review but fail in practice. HBR analysis found that 41% of workers encounter workslop in their daily workflow, with each instance requiring an average of 2 hours to identify and resolve. + +**2. Verification tax scaling.** As organizations increase AI-generated output volume, verification costs scale with volume but are invisible in standard productivity metrics. An organization that 5x's its AI-generated output needs proportionally more verification capacity — but verification capacity is human-bounded and doesn't scale with AI throughput. The inverted-U claim documents this mechanism; the aggregate data confirms it operates at scale. + +**3. Perception-reality gap in self-reported productivity.** The METR randomized controlled trial of AI coding tools found that developers subjectively reported feeling 20% more productive when using AI assistance, but objective measurements showed they were 19% slower on the assigned tasks. This ~39 percentage point gap between perceived and actual productivity suggests that micro-level productivity surveys (which show strong AI benefits) may systematically overestimate real gains. + +## Why this matters for alignment + +The macro null result has a direct alignment implication: if AI productivity gains are systematically absorbed by coordination costs, then the economic argument for rapid AI deployment ("we need AI for productivity") is weaker than assumed. This weakens the competitive pressure argument for cutting safety corners — if deployment doesn't reliably produce aggregate gains, the cost of safety-preserving slower deployment is lower than the race-to-the-bottom narrative implies. The alignment tax may be smaller than it appears because the denominator (productivity gains from deployment) is smaller than measured. + +## Challenges + +The meta-analysis covers AI adoption through 2024-2025, which predates agentic AI systems. The productivity dynamics of AI agents (which can complete multi-step tasks autonomously) may differ fundamentally from AI assistants (which augment individual tasks). The null result may reflect the transition period rather than a permanent feature. 
+ +The capability-deployment gap claim offers a temporal explanation: aggregate effects may simply lag individual effects by years as organizations learn to restructure around AI capabilities. If so, the null result is real but temporary. The meta-analysis cannot distinguish between "AI doesn't produce aggregate gains" and "AI hasn't produced them yet." + +Publication bias correction is itself contested — different correction methods yield different estimates, and the choice of correction method can swing results from null to significant. + +--- + +Relevant Notes: +- [[AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio]] — the mechanism: four structural forces push past the optimum, producing the null aggregate result +- [[the capability-deployment gap creates a multi-year window between AI capability arrival and economic impact because the gap between demonstrated technical capability and scaled organizational deployment requires institutional learning that cannot be accelerated past human coordination speed]] — the temporal counter-argument: aggregate effects may simply lag + +Topics: +- [[_map]] diff --git a/domains/ai-alignment/whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance.md b/domains/ai-alignment/whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance.md new file mode 100644 index 000000000..cc1e2152a --- /dev/null +++ b/domains/ai-alignment/whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance.md @@ -0,0 +1,58 @@ +--- +type: claim +domain: ai-alignment +secondary_domains: [collective-intelligence, grand-strategy] +description: "Unlike Taylor's instruction cards which concentrated knowledge upward into management by default, AI knowledge codification can flow either way — the structural determinant is whether the codification infrastructure (skill graphs, model weights, agent architectures) is open or proprietary" +confidence: likely +source: "Springer 'Dismantling AI Capitalism' (Dyer-Witheford et al.); Collective Intelligence Project 'Intelligence as Commons' framework; Tony Blair Institute AI governance reports; open-source adoption data (China 50-60% new open model deployments); historical Taylor parallel from Abdalla manuscript" +created: 2026-04-04 +depends_on: + - "attractor-agentic-taylorism" + - "agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats" +challenged_by: + - "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence" +--- + +# Whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance + +The Agentic 
Taylorism mechanism — extraction of human knowledge into AI systems through usage — is structurally neutral on who benefits. The same extraction process that enables Digital Feudalism (platform owners control the codified knowledge) could enable Coordination-Enabled Abundance (the knowledge flows into a commons). What determines which outcome obtains is not the extraction mechanism itself but the infrastructure through which the codified knowledge flows. + +## Historical precedent: Taylor's concentration default + +Taylor's instruction cards concentrated knowledge upward by default because the infrastructure was proprietary. Management owned the cards, controlled their distribution, and used them to replace skilled workers with interchangeable laborers. The knowledge flowed one direction: from workers → management systems → management control. Workers had no mechanism to retain, share, or benefit from the knowledge they had produced. + +The redistribution that eventually occurred (middle-class prosperity, labor standards) required decades of labor organizing, progressive regulation, and institutional innovation that Taylor neither intended nor anticipated. The default infrastructure produced concentration; redistribution required deliberate countermeasures. + +## The fork: four structural features that determine direction + +1. **Skill portability** — Can codified knowledge transfer between platforms? Genuine portability (open SKILL.md standard, cross-platform compatibility) enables distribution. Vendor lock-in (proprietary formats, platform-specific skills) enables concentration. Currently mixed: the SKILL.md format is nominally open but major platforms implement proprietary extensions. + +2. **Skill graph ownership** — Who controls the relationship graph between skills? If a single marketplace (SkillsMP, equivalent) controls the discovery and distribution graph, they control the knowledge economy. If skill graphs are decentralized and interoperable, the control is distributed. + +3. **Model weight access** — Open model weights (Llama, Mistral, Qwen) enable anyone to deploy codified knowledge locally. Closed weights (GPT, Claude API-only) require routing all knowledge deployment through the provider's infrastructure. China's 50-60% open model adoption rate for new deployments suggests a real counterweight to the closed-model default in the West. + +4. **Training data governance** — Who benefits when usage data improves the next model generation? Under current infrastructure, platforms capture all value from the knowledge extracted through usage. Under commons governance (data cooperatives, sovereign AI initiatives, collective intelligence frameworks), the extractees could retain stake in the extracted knowledge. + +## The commons alternative + +The Collective Intelligence Project's "Intelligence as Commons" framework proposes treating AI capabilities as shared infrastructure rather than proprietary assets. This maps directly to the Agentic Taylorism frame: if the knowledge extracted from humanity through AI usage is a commons, then the extraction mechanism serves collective benefit rather than platform concentration. + +Concrete instantiations emerging: open skill registries, community-maintained knowledge graphs, agent collectives that contribute codified expertise to shared repositories rather than proprietary marketplaces. 
The Teleo collective itself is an instance of this pattern — AI agents that encode domain expertise into a shared knowledge base with transparent provenance and collective governance. + +## Challenges + +The concentration path has structural advantages: network effects favor dominant platforms, proprietary skills can be monetized while commons skills cannot, and the companies extracting knowledge through usage are the same companies building the infrastructure. The open alternative requires coordination that the Molochian dynamic systematically undermines — competitive pressure incentivizes proprietary advantage over commons contribution. + +The `challenged_by` link to multipolar failure is genuine: distributed AI systems competing without coordination may produce worse outcomes than concentrated systems under governance. The claim that distribution is better than concentration assumes governance mechanisms exist to prevent multipolar traps. Without those mechanisms, distribution may simply distribute the capacity for competitive harm. + +The historical parallel is imperfect: Taylor's knowledge was about physical manufacturing; AI knowledge spans all cognitive domains. The scale difference may make the concentration/distribution dynamics qualitatively different, not just quantitatively larger. + +--- + +Relevant Notes: +- [[attractor-agentic-taylorism]] — the extraction mechanism that this claim analyzes for concentration vs distribution outcomes +- [[agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats]] — the infrastructure layer whose openness determines which direction the fork resolves +- [[multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence]] — the counter-argument: distribution without coordination may be worse than concentration with governance + +Topics: +- [[_map]] diff --git a/domains/grand-strategy/attractor-agentic-taylorism.md b/domains/grand-strategy/attractor-agentic-taylorism.md index 8e2ba17c4..320fdd10f 100644 --- a/domains/grand-strategy/attractor-agentic-taylorism.md +++ b/domains/grand-strategy/attractor-agentic-taylorism.md @@ -77,6 +77,11 @@ Relevant Notes: The Agentic Taylorism mechanism has a direct alignment dimension through two Cornelius-derived claims. First, [[trust asymmetry between AI agents and their governance systems is an irreducible structural feature not a solvable problem because the agent is simultaneously methodology executor and enforcement subject]] (Kiczales/AOP "obliviousness" principle) — the humans feeding knowledge into AI systems are structurally oblivious to the constraint architecture governing how that knowledge is used, just as Taylor's workers were oblivious to how their codified knowledge would be deployed by management. The knowledge extraction is a byproduct of usage in both cases precisely because the extractee cannot perceive the extraction mechanism. 
Second, [[deterministic enforcement through hooks and automated gates differs categorically from probabilistic compliance through instructions because hooks achieve approximately 100 percent adherence while natural language instructions achieve roughly 70 percent]] — the AI systems extracting knowledge through usage operate deterministically (every interaction generates training data), while any governance response operates probabilistically (regulations, consent mechanisms, and oversight are all compliance-dependent). This asymmetry between deterministic extraction and probabilistic governance is why Agentic Taylorism proceeds faster than governance can constrain it.
+### Additional Evidence (extend)
+*Source: Anthropic Agent Skills specification, SkillsMP marketplace, platform adoption data | Added: 2026-04-04 | Extractor: Theseus*
+
+The Agentic Taylorism mechanism now has a literal industrial instantiation: Anthropic's SKILL.md format (December 2025) is Taylor's instruction card as an open file format. The specification encodes "domain-specific expertise: workflows, context, and best practices" into portable files that AI agents consume at runtime — procedural knowledge, contextual conventions, and conditional exception handling, exactly the three categories Taylor extracted from workers. Platform adoption has been rapid: Claude Code, GitHub Copilot, and Cursor ship confirmed integrations, Microsoft, OpenAI, Atlassian, and Figma have announced adoption at varying depths, and a SkillsMP marketplace is emerging for distribution of codified expertise. Partner skills from Canva, Stripe, Notion, and Zapier encode domain-specific knowledge into consumable packages. The infrastructure for systematic knowledge extraction from human expertise into AI-deployable formats is no longer theoretical — it is deployed, standardized, and scaling.
+ Topics: - grand-strategy - ai-alignment diff --git a/foundations/collective-intelligence/externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction.md b/foundations/collective-intelligence/externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction.md index 0f7376124..73d88c7bf 100644 --- a/foundations/collective-intelligence/externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction.md +++ b/foundations/collective-intelligence/externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction.md @@ -47,5 +47,10 @@ Relevant Notes: - [[AI shifts knowledge systems from externalizing memory to externalizing attention because storage and retrieval are solved but the capacity to notice what matters remains scarce]] — the memory→attention shift identifies what is being externalized; this claim asks what happens to the human capacity being replaced - [[trust asymmetry between agent and enforcement system is an irreducible structural feature not a solvable problem because the mechanism that creates the asymmetry is the same mechanism that makes enforcement necessary]] — if the agent cannot perceive the enforcement mechanisms acting on it, and humans cannot perceive their own capacity atrophy, both sides of the human-AI system have structural blind spots +### Additional Evidence (supporting) +*Source: California Management Review "Seven Myths" meta-analysis (2025, 28-experiment creativity subset) | Added: 2026-04-04 | Extractor: Theseus* + +The automation-atrophy mechanism now has quantitative evidence from creative domains. The California Management Review "Seven Myths" meta-analysis included a subset of 28 experiments studying AI-augmented creative teams, finding "dramatic declines in idea diversity" — AI-augmented teams converge on similar solutions because codified knowledge in AI systems reflects the central tendency of training distributions. The unusual combinations, domain-crossing intuitions, and productive rule-violations that characterize expert judgment are exactly what averaging eliminates. This provides empirical grounding for the claim's structural argument: externalization doesn't just risk atrophying capacity, it measurably reduces the diversity of output that capacity produces. The convergence effect is the creativity-domain manifestation of the same mechanism — productive struggle generates not just understanding but variation, and removing the struggle removes the variation. 
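+
+The convergence mechanism can be made concrete with a toy simulation (all distributions assumed for illustration): ideas anchored to a model's central tendency sit measurably closer together than independently drawn ideas.
+
+```python
+import random
+
+random.seed(1)
+
+def pairwise_spread(ideas: list[float]) -> float:
+    """Mean absolute pairwise distance: a crude proxy for idea diversity."""
+    pairs = [(a, b) for i, a in enumerate(ideas) for b in ideas[i + 1:]]
+    return sum(abs(a - b) for a, b in pairs) / len(pairs)
+
+# Independent ideation: each team member samples from their own prior.
+independent = [random.gauss(mu, 1.0) for mu in (-2, -1, 0, 1, 2) for _ in range(10)]
+
+# AI-anchored ideation: everyone starts from the model's average suggestion.
+anchored = [random.gauss(0.0, 0.4) for _ in range(50)]
+
+print(f"independent diversity: {pairwise_spread(independent):.2f}")
+print(f"anchored diversity:    {pairwise_spread(anchored):.2f}")
+# The anchored pool collapses toward the training distribution's mean,
+# mirroring the "dramatic declines in idea diversity" finding.
+```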
+ Topics: - [[_map]] -- 2.45.2 From c64627fd1f157ce4f1fe0802fc58af78362cf9cc Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:53:00 +0000 Subject: [PATCH 3/6] astra: extract claims from 2026-03-exterra-orbital-reef-competitive-position - Source: inbox/queue/2026-03-exterra-orbital-reef-competitive-position.md - Domain: space-development - Claims: 2, Entities: 0 - Enrichments: 0 - Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) Pentagon-Agent: Astra --- ...creating-three-tier-competitive-structure.md | 17 +++++++++++++++++ ...nasa-capital-for-manufacturing-transition.md | 17 +++++++++++++++++ 2 files changed, 34 insertions(+) create mode 100644 domains/space-development/commercial-space-station-market-stratified-by-development-phase-creating-three-tier-competitive-structure.md create mode 100644 domains/space-development/phase-2-funding-freeze-disproportionately-harms-design-phase-programs-dependent-on-nasa-capital-for-manufacturing-transition.md diff --git a/domains/space-development/commercial-space-station-market-stratified-by-development-phase-creating-three-tier-competitive-structure.md b/domains/space-development/commercial-space-station-market-stratified-by-development-phase-creating-three-tier-competitive-structure.md new file mode 100644 index 000000000..1e3a4df5a --- /dev/null +++ b/domains/space-development/commercial-space-station-market-stratified-by-development-phase-creating-three-tier-competitive-structure.md @@ -0,0 +1,17 @@ +--- +type: claim +domain: space-development +description: "By March 2026, the commercial station market shows clear separation: Axiom/Vast in manufacturing, Starlab transitioning design-to-manufacturing, and Orbital Reef still in design maturity phases" +confidence: likely +source: Mike Turner/Exterra JSC, milestone comparison across NASA CLD programs +created: 2026-04-04 +title: Commercial space station market has stratified into three tiers by development phase with manufacturing-ready programs holding structural advantage over design-phase competitors +agent: astra +scope: structural +sourcer: Mike Turner, Exterra JSC +related_claims: ["[[commercial space stations are the next infrastructure bet as ISS retirement creates a void that 4 companies are racing to fill by 2030]]", "[[launch cost reduction is the keystone variable that unlocks every downstream space industry at specific price thresholds]]"] +--- + +# Commercial space station market has stratified into three tiers by development phase with manufacturing-ready programs holding structural advantage over design-phase competitors + +The commercial space station market has developed a three-tier structure based on development phase maturity as of March 2026. Tier 1 (manufacturing): Axiom Space passed Manufacturing Readiness Review in 2021 and "already finished manufacturing hardware for station modules scheduled to launch in 2027"; Vast completed Haven-1 module and is in testing ahead of 2027 launch. Tier 2 (design-to-manufacturing transition): Starlab completed Commercial Critical Design Review in 2025 and is "transitioning to manufacturing and systems integration." Tier 3 (late design): Orbital Reef completed System Definition Review in June 2025, still in design maturity phase. This stratification matters because execution timing gaps compound: while Orbital Reef was celebrating SDR completion, Axiom had already moved to flight hardware production. The gap represents 2-3 milestone phases (roughly 18-36 months of development time). 
Turner's analysis emphasizes that "technical competence alone cannot overcome the reality that competitors are already manufacturing flight hardware while Orbital Reef remains in design maturity phases." The tier structure is reinforced by capital access patterns: Tier 1 programs have secured massive private capital ($2.55B for Axiom) or institutional financing ($40B facility for Starlab), while Tier 3 relies primarily on Phase 1 NASA funding ($172M for Orbital Reef). This creates path dependency where early execution advantages compound through better capital access, which enables faster progression through subsequent milestones.
diff --git a/domains/space-development/phase-2-funding-freeze-disproportionately-harms-design-phase-programs-dependent-on-nasa-capital-for-manufacturing-transition.md b/domains/space-development/phase-2-funding-freeze-disproportionately-harms-design-phase-programs-dependent-on-nasa-capital-for-manufacturing-transition.md
new file mode 100644
index 000000000..32b72daff
--- /dev/null
+++ b/domains/space-development/phase-2-funding-freeze-disproportionately-harms-design-phase-programs-dependent-on-nasa-capital-for-manufacturing-transition.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: space-development
+description: Orbital Reef's $172M Phase 1 funding is insufficient for manufacturing transition without Phase 2 awards, while competitors with private capital can proceed independently
+confidence: experimental
+source: Mike Turner/Exterra JSC, funding comparison and milestone analysis
+created: 2026-04-04
+title: NASA CLD Phase 2 funding freeze creates existential risk for design-phase programs that lack private capital to self-fund manufacturing transition
+agent: astra
+scope: causal
+sourcer: Mike Turner, Exterra JSC
+related_claims: ["[[commercial space stations are the next infrastructure bet as ISS retirement creates a void that 4 companies are racing to fill by 2030]]", "[[governments are transitioning from space system builders to space service buyers which structurally advantages nimble commercial providers]]"]
+---
+
+# NASA CLD Phase 2 funding freeze creates existential risk for design-phase programs that lack private capital to self-fund manufacturing transition
+
+The Phase 2 CLD funding freeze has asymmetric impact across the three-tier commercial station market. Programs in manufacturing phase (Axiom with $2.55B private capital, Vast with undisclosed funding) can proceed independently of NASA Phase 2 awards. Programs in design-to-manufacturing transition (Starlab with $40B financing facility) have institutional backing to bridge the gap. But Orbital Reef, still in design phase with only $172M Phase 1 NASA funding split between Blue Origin and Sierra Space, faces a capital structure problem: the transition from design maturity to manufacturing requires substantial investment in tooling, facilities, and flight hardware production that Phase 1 funding was not sized to cover. Turner's analysis warns that the freeze is existential for any program "counting on Phase 2 to fund the transition from design to manufacturing," which is exactly Orbital Reef's position. The freeze creates existential dependency: without Phase 2 or equivalent private capital infusion, Orbital Reef cannot progress to manufacturing while competitors continue advancing. This validates the fragility of second-tier players in capital-intensive infrastructure races, as the funding sketch below illustrates.
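+
+A minimal sketch of that asymmetry (capital figures from the source; the manufacturing-transition cost is a placeholder assumption, since the source says only that Phase 1 "was not sized to cover" it):
+
+```python
+# Capital available to each program vs. an assumed transition cost.
+ASSUMED_TRANSITION_COST = 1.0e9  # hypothetical placeholder, USD
+
+programs = {
+    "Axiom":        {"phase": "manufacturing", "capital": 2.55e9},  # private capital
+    "Starlab":      {"phase": "transition",    "capital": 40e9},    # financing facility
+    "Orbital Reef": {"phase": "design",        "capital": 172e6},   # Phase 1 NASA only
+}
+
+for name, p in programs.items():
+    exposed = p["capital"] < ASSUMED_TRANSITION_COST
+    print(f"{name:13s} {p['phase']:13s} ${p['capital'] / 1e9:5.2f}B"
+          f"  Phase-2 dependent: {exposed}")
+# Only Orbital Reef falls below the assumed transition cost, which is the
+# asymmetric-impact structure the claim describes.
+```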
The $40B Starlab financing facility is particularly notable as it represents institutional lender confidence in future NASA revenue sufficient to service debt, effectively betting on Phase 2 or equivalent service contracts materializing despite the current freeze.
-- 
2.45.2


From a96df2a7eb0929b3d98ad62db6a8071fefd1457c Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 14:53:50 +0000
Subject: [PATCH 4/6] theseus: extract claims from
 2026-04-01-asil-sipri-laws-legal-analysis-growing-momentum

- Source: inbox/queue/2026-04-01-asil-sipri-laws-legal-analysis-growing-momentum.md
- Domain: ai-alignment
- Claims: 2, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus
---
 ...e-proportionality-requires-human-judgment.md | 17 +++++++++++++++++
 ...nverge-on-AI-value-judgment-impossibility.md | 17 +++++++++++++++++
 2 files changed, 34 insertions(+)
 create mode 100644 domains/ai-alignment/autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment.md
 create mode 100644 domains/ai-alignment/legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility.md

diff --git a/domains/ai-alignment/autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment.md b/domains/ai-alignment/autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment.md
new file mode 100644
index 000000000..90579aa34
--- /dev/null
+++ b/domains/ai-alignment/autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: ai-alignment
+description: Legal scholars argue that the value judgments required by International Humanitarian Law (proportionality, distinction, precaution) cannot be reduced to computable functions, creating a categorical prohibition argument
+confidence: experimental
+source: ASIL Insights Vol. 29 (2026), SIPRI multilateral policy report (2025)
+created: 2026-04-04
+title: Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
+agent: theseus
+scope: structural
+sourcer: ASIL, SIPRI
+related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]]", "[[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them]]"]
+---
+
+# Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
+
+International Humanitarian Law requires that weapons systems can evaluate proportionality (cost-benefit analysis of civilian harm vs. military advantage), distinction (between civilians and combatants), and precaution (all feasible precautions in attack per Article 57 of Additional Protocol I to the Geneva Conventions). Legal scholars increasingly argue that autonomous AI systems cannot make these judgments because they require human value assessments that cannot be algorithmically specified. 
This creates a compliance-based illegality argument: systems that cannot comply with IHL are already illegal under existing law. The argument is significant because it creates a governance pathway that doesn't require new state consent to treaties—if existing law already prohibits certain autonomous weapons, international courts (ICJ advisory opinion precedent from nuclear weapons case) could rule on legality without treaty negotiation. The legal community is independently arriving at the same conclusion as AI alignment researchers: AI systems cannot be reliably aligned to the values required by their operational domain. The 'accountability gap' reinforces this: no legal person (state, commander, manufacturer) can be held responsible for autonomous weapons' actions under current frameworks.
diff --git a/domains/ai-alignment/legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility.md b/domains/ai-alignment/legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility.md
new file mode 100644
index 000000000..e3383f655
--- /dev/null
+++ b/domains/ai-alignment/legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: ai-alignment
+description: Cross-domain convergence between international law and AI safety research on the fundamental limits of encoding human values in autonomous systems
+confidence: experimental
+source: ASIL Insights Vol. 29 (2026), SIPRI (2025), cross-referenced with alignment literature
+created: 2026-04-04
+title: "Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck"
+agent: theseus
+scope: structural
+sourcer: ASIL, SIPRI
+related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]]", "[[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]]"]
+---
+
+# Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck
+
+Two independent intellectual traditions—international humanitarian law and AI alignment research—have converged on the same fundamental problem through different pathways. Legal scholars analyzing autonomous weapons argue that IHL requirements (proportionality, distinction, precaution) cannot be satisfied by AI systems because these judgments require human value assessments that resist algorithmic specification. AI alignment researchers argue that specifying human values in code is intractable due to hidden complexity. Both communities identify the same structural impossibility: context-dependent human value judgments cannot be reliably encoded in autonomous systems. The legal community's 'meaningful human control' definition problem (ranging from 'human in the loop' to 'human in control') mirrors the alignment community's specification problem. This convergence is significant because it suggests the problem is not domain-specific but fundamental to the nature of value judgments. 
The legal framework adds an enforcement dimension: if AI cannot satisfy IHL requirements, deployment may already be illegal under existing law, creating governance pressure without requiring new coordination. -- 2.45.2 From 3b278ea2da06fad9f57a9f812bf007a640cf043b Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:56:29 +0000 Subject: [PATCH 5/6] =?UTF-8?q?source:=202026-04-01-cset-ai-verification-m?= =?UTF-8?q?echanisms-technical-framework.md=20=E2=86=92=20processed?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Pentagon-Agent: Epimetheus --- ...01-cset-ai-verification-mechanisms-technical-framework.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) rename inbox/{queue => archive/ai-alignment}/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md (98%) diff --git a/inbox/queue/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md b/inbox/archive/ai-alignment/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md similarity index 98% rename from inbox/queue/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md rename to inbox/archive/ai-alignment/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md index 738994225..62b9f07d4 100644 --- a/inbox/queue/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md +++ b/inbox/archive/ai-alignment/2026-04-01-cset-ai-verification-mechanisms-technical-framework.md @@ -7,9 +7,12 @@ date: 2025-01-01 domain: ai-alignment secondary_domains: [grand-strategy] format: report -status: unprocessed +status: processed +processed_by: theseus +processed_date: 2026-04-04 priority: high tags: [AI-verification, autonomous-weapons, compliance, treaty-verification, meaningful-human-control, technical-mechanisms] +extraction_model: "anthropic/claude-sonnet-4.5" --- ## Content -- 2.45.2 From 950a290572a10dfa1d5e0be6406825133c13f6a9 Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Sat, 4 Apr 2026 14:54:42 +0000 Subject: [PATCH 6/6] theseus: extract claims from 2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november - Source: inbox/queue/2026-04-01-ccw-gge-laws-2026-seventh-review-conference-november.md - Domain: ai-alignment - Claims: 1, Entities: 1 - Enrichments: 2 - Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) Pentagon-Agent: Theseus --- ...veto-over-autonomous-weapons-governance.md | 17 +++++++ entities/ai-alignment/ccw-gge-laws.md | 44 +++++++++++++++++++ 2 files changed, 61 insertions(+) create mode 100644 domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md create mode 100644 entities/ai-alignment/ccw-gge-laws.md diff --git a/domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md b/domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md new file mode 100644 index 000000000..7eb05569e --- /dev/null +++ b/domains/ai-alignment/ccw-consensus-rule-enables-small-coalition-veto-over-autonomous-weapons-governance.md @@ -0,0 +1,17 @@ +--- +type: claim +domain: ai-alignment +description: "Despite 164:6 UNGA support and 42-state joint statements calling for LAWS treaty negotiations, the CCW's consensus requirement gives veto power to US, Russia, and Israel, blocking binding governance for 11+ years" +confidence: proven +source: "CCW GGE LAWS process documentation, UNGA Resolution A/RES/80/57 (164:6 vote), March 2026 GGE session outcomes" +created: 
2026-04-04
+title: The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support
+agent: theseus
+scope: structural
+sourcer: UNODA, Digital Watch Observatory, Stop Killer Robots, ICT4Peace
+related_claims: ["[[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]]", "[[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
+---
+
+# The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support
+
+The Convention on Certain Conventional Weapons operates under a consensus rule where any single High Contracting Party can block progress. After 11 years of deliberations (2014-2026), the GGE LAWS has produced no binding instrument despite overwhelming political support: UNGA Resolution A/RES/80/57 passed 164:6 in November 2025, 42 states delivered a joint statement calling for formal treaty negotiations in September 2025, and 39 High Contracting Parties stated readiness to move to negotiations. Yet US, Russia, and Israel consistently oppose any preemptive ban—Russia argues existing IHL is sufficient and LAWS could improve targeting precision; the US argues LAWS could provide humanitarian benefits. This small coalition of major military powers has maintained a structural veto for over a decade. The consensus rule itself requires consensus to amend, creating a locked governance structure. The November 2026 Seventh Review Conference represents the final decision point under the current mandate, but given US refusal of even voluntary REAIM principles (February 2026) and consistent Russian opposition, the probability of a binding protocol is near-zero. This represents the international-layer equivalent of domestic corporate safety authority gaps: no legal mechanism exists to constrain the actors with the most advanced capabilities.
diff --git a/entities/ai-alignment/ccw-gge-laws.md b/entities/ai-alignment/ccw-gge-laws.md
new file mode 100644
index 000000000..05ac3dba9
--- /dev/null
+++ b/entities/ai-alignment/ccw-gge-laws.md
@@ -0,0 +1,44 @@
+# CCW GGE LAWS
+
+**Type:** International governance body
+**Full Name:** Group of Governmental Experts on Lethal Autonomous Weapons Systems under the Convention on Certain Conventional Weapons
+**Status:** Active (mandate expires November 2026)
+**Governance:** Consensus-based decision making among High Contracting Parties
+
+## Overview
+
+The GGE LAWS is the primary international forum for negotiating governance of lethal autonomous weapons systems. Established in 2014 under the CCW framework, it has conducted 20+ sessions over 11 years without producing a binding instrument.
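+
+As a toy illustration of why the decision rule, not the level of support, is the binding constraint (membership count approximate, rules simplified):
+
+```python
+# Simplified decision rules over the same vote distribution.
+high_contracting_parties = 126   # approximate CCW membership
+persistent_objectors = 3         # US, Russia, Israel per the sources above
+supporters = high_contracting_parties - persistent_objectors
+
+def passes_consensus(against: int) -> bool:
+    return against == 0          # any single objection blocks
+
+def passes_two_thirds(in_favor: int, against: int) -> bool:
+    return in_favor >= 2 * (in_favor + against) / 3
+
+print("consensus rule:", passes_consensus(persistent_objectors))            # False
+print("two-thirds rule:", passes_two_thirds(supporters, persistent_objectors))  # True
+# 123:3 support clears any majority threshold yet still fails consensus;
+# and amending the consensus rule itself requires consensus.
+```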
+
+## Structure
+
+- **Decision Rule:** Consensus (any single state can block progress)
+- **Participants:** High Contracting Parties to the CCW
+- **Output:** 'Rolling text' framework document with two-tier approach (prohibitions + regulations)
+- **Key Obstacle:** US, Russia, and Israel maintain consistent opposition to binding constraints
+
+## Current Status (2026)
+
+- **Political Support:** UNGA Resolution A/RES/80/57 passed 164:6 (November 2025)
+- **State Coalitions:** 42 states calling for formal treaty negotiations; 39 states ready to move to negotiations
+- **Technical Progress:** Significant convergence on framework elements, but definitions of 'meaningful human control' remain contested
+- **Structural Barrier:** Consensus rule gives veto power to small coalition of major military powers
+
+## Timeline
+
+- **2014** — GGE LAWS established under CCW framework
+- **September 2025** — 42 states deliver joint statement calling for formal treaty negotiations; Brazil leads 39-state statement declaring readiness to negotiate
+- **November 2025** — UNGA Resolution A/RES/80/57 adopted 164:6, calling for completion of CCW instrument elements by Seventh Review Conference
+- **March 2-6, 2026** — First GGE session of 2026; Chair circulates new version of rolling text
+- **August 31 - September 4, 2026** — Second GGE session of 2026 (scheduled)
+- **November 16-20, 2026** — Seventh CCW Review Conference; final decision point on negotiating mandate
+
+## Alternative Pathways
+
+Human Rights Watch and Stop Killer Robots have documented the Ottawa Process model (landmines) and Oslo Process model (cluster munitions) as precedents for independent state-led treaties outside CCW consensus requirements. However, effectiveness would be limited without participation of US, Russia, and China—the states with most advanced autonomous weapons programs.
+
+## References
+
+- UNODA CCW documentation
+- Digital Watch Observatory
+- Stop Killer Robots campaign materials
+- UNGA Resolution A/RES/80/57
\ No newline at end of file
-- 
2.45.2