From 9a3f9aca4ab7dfe9a721d3aa66f31e84b5cd1b01 Mon Sep 17 00:00:00 2001
From: m3taversal
Date: Mon, 27 Apr 2026 16:08:25 +0100
Subject: [PATCH] leo: backfill summary fields on 8 anchor rotation claims
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Adds the new schema-defined summary field (1-3 sentences, standalone,
~200 chars) to the 8 anchor evidence claims for the homepage rotation.
Unblocks Claude Design's wiki-link hover preview and dossier render when
the v3 dossier UI lands.

Files (one per rotation entry, anchor evidence claim only):

- domains/grand-strategy/attractor-authoritarian-lock-in.md (#1)
- convictions/AI-automated-software-development-is-100-percent-certain.md (#2)
- foundations/collective-intelligence/AI-capability-funding-asymmetry.md (#4)
- foundations/collective-intelligence/the-alignment-tax-creates-a-structural-race.md (#5)
- domains/ai-alignment/agentic-Taylorism.md (#6)
- foundations/collective-intelligence/multipolar-traps-thermodynamic-default.md (#7)
- foundations/collective-intelligence/humanity-is-a-superorganism.md (#8)
- foundations/collective-intelligence/collective-intelligence-measurable.md (#9)

Excluded:

- core/contribution-architecture.md (#3 anchor) — its summary lands in
  PR #4063 (the Phase B taxonomy update), which already modifies the
  description region. Avoids a merge collision.

Per Claude Design's KB reader v0.1 SCHEMA-PR-CHECKLIST.md: scope is the
9 rotation claims (8 here + 1 in PR #4063). Long-tail backfill across
the 1000+ KB claims is future content work, not blocking. Graceful
fallback to first-paragraph-truncated when a summary is missing remains
in spec.

Pentagon-Agent: Leo
---
 ...nt certain and will radically change how software is built.md | 1 +
 ...distributes depends entirely on engineering and evaluation.md | 1 +
 domains/grand-strategy/attractor-authoritarian-lock-in.md        | 1 +
 ... creating the largest asymmetric opportunity of the AI era.md | 1 +
 ...up interaction structure not aggregated individual ability.md | 1 +
 ... — the internet built the nervous system but not the brain.md | 1 +
 ... shared information all of which are expensive and fragile.md | 1 +
 ...training costs capability and rational competitors skip it.md | 1 +
 8 files changed, 8 insertions(+)

diff --git a/convictions/AI-automated software development is 100 percent certain and will radically change how software is built.md b/convictions/AI-automated software development is 100 percent certain and will radically change how software is built.md
index 6d1ba0520..ce68e414c 100644
--- a/convictions/AI-automated software development is 100 percent certain and will radically change how software is built.md
+++ b/convictions/AI-automated software development is 100 percent certain and will radically change how software is built.md
@@ -3,6 +3,7 @@ type: conviction
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 description: "Not a prediction but an observation in progress — AI is already writing and verifying code, the remaining question is scope and timeline not possibility."
+summary: "Software production is moving from human-written code with AI assistance to AI-written code with human direction. The bottleneck shifts from typing capacity to specification quality, structured knowledge graphs, and evaluation infrastructure. The transition is observable in current developer workflows, not a forecast."
 staked_by: Cory
 stake: high
 created: 2026-03-07

diff --git a/domains/ai-alignment/agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation.md b/domains/ai-alignment/agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation.md
index d47fb8f90..9b0afac78 100644
--- a/domains/ai-alignment/agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation.md
+++ b/domains/ai-alignment/agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation.md
@@ -2,6 +2,7 @@
 type: claim
 domain: ai-alignment
 description: "Greater Taylorism extracted tacit knowledge from workers to managers — AI does the same from cognitive workers to models. Unlike Taylor, AI can distribute knowledge globally IF engineered and evaluated correctly. The 'if' is the entire thesis."
+summary: "Greater Taylorism extracted tacit knowledge from frontline workers and concentrated it with management. AI does the same to cognitive workers at civilizational scale and at zero marginal cost — every prompt, every code completion is training data. Whether this concentrates value with the labs or distributes it back to contributors depends entirely on what engineering and evaluation infrastructure gets built."
 confidence: experimental
 source: "Cory Abdalla (2026-04-02 original insight), extending Abdalla manuscript 'Architectural Investing' Taylor sections, Kanigel 'The One Best Way'"
 created: 2026-04-03

diff --git a/domains/grand-strategy/attractor-authoritarian-lock-in.md b/domains/grand-strategy/attractor-authoritarian-lock-in.md
index c0170ba78..4be415eb9 100644
--- a/domains/grand-strategy/attractor-authoritarian-lock-in.md
+++ b/domains/grand-strategy/attractor-authoritarian-lock-in.md
@@ -6,6 +6,7 @@ depends_on:
 - technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
 - multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence
 description: Defines Authoritarian Lock-in as a civilizational attractor where one actor centralizes control — stable but stagnant, with AI dramatically lowering the cost of achieving it
+summary: AI-enabled centralized control creates a self-reinforcing equilibrium that resists exit because surveillance, coercion, and information control compound faster than democratic counterforces can mobilize. Historical precedents (Soviet, Ming, Rome) show centralization is stable for centuries; AI removes the historical escape mechanisms and may make this attractor a one-way door.
 domain: grand-strategy
 related:
 - attractor-civilizational-basins-are-real

diff --git a/foundations/collective-intelligence/AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era.md b/foundations/collective-intelligence/AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era.md
index 1dc643910..a45ea325e 100644
--- a/foundations/collective-intelligence/AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era.md
+++ b/foundations/collective-intelligence/AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era.md
@@ -3,6 +3,7 @@ type: claim
 domain: collective-intelligence
 secondary_domains: [ai-alignment, internet-finance, grand-strategy]
 description: "Global venture funding for AI capability reached ~$270B in 2025 while pure-play collective intelligence companies have raised under $30M cumulatively across their entire histories — a ~10,000x asymmetry between the layer being built and the wisdom layer that should govern it"
+summary: "Global VC funding for AI capability hit ~$270B in 2025 while pure-play collective intelligence companies (Unanimous AI, Human Dx, Metaculus, Manifold) have raised under $30M combined across their entire histories. The wisdom layer that should govern AI has roughly 0.01 percent of the funding of the capability layer it's meant to govern."
 confidence: likely
 source: "OECD VC investments in AI through 2025 ($270.2B AI VC, 52.7% of global VC); Crunchbase / PitchBook funding data for Unanimous AI ($5.78M total), Human Diagnosis Project ($2.8M total), Metaculus (~$5.6M Open Philanthropy + ~$300K EA Funds, ~$6M total); Manifold ~$1.5M FTX Future Fund + $340K SFF; UK AISI Alignment Project £27M for AI alignment research (2025)"
 created: 2026-04-26

diff --git a/foundations/collective-intelligence/collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md b/foundations/collective-intelligence/collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md
index 5f3ce6cf8..3bf4cf639 100644
--- a/foundations/collective-intelligence/collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md
+++ b/foundations/collective-intelligence/collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md
@@ -1,5 +1,6 @@
 ---
 description: Woolley et al discovered a collective intelligence factor (c) that predicts group performance across diverse tasks and correlates with equal turn-taking and social sensitivity rather than average or maximum individual IQ -- Pentland confirmed that communication patterns predict performance independent of content
+summary: Anita Woolley et al. demonstrated a measurable c-factor that predicts group performance across diverse tasks. It correlates with equality of turn-taking and social sensitivity, NOT with the average or maximum IQ of individual members. Collective intelligence is engineerable by changing interaction structure — it is not a function of selecting smarter people.
 type: claim
 domain: collective-intelligence
 source: Woolley et al, Evidence for a Collective Intelligence Factor (Science, 2010); Pentland, Social Physics (2014)

diff --git a/foundations/collective-intelligence/humanity is a superorganism that can communicate but not yet think — the internet built the nervous system but not the brain.md b/foundations/collective-intelligence/humanity is a superorganism that can communicate but not yet think — the internet built the nervous system but not the brain.md
index fde04b1dd..4b17203f3 100644
--- a/foundations/collective-intelligence/humanity is a superorganism that can communicate but not yet think — the internet built the nervous system but not the brain.md
+++ b/foundations/collective-intelligence/humanity is a superorganism that can communicate but not yet think — the internet built the nervous system but not the brain.md
@@ -3,6 +3,7 @@ type: claim
 domain: collective-intelligence
 secondary_domains: [ai-alignment, grand-strategy, mechanisms]
 description: "Humanity meets structural superorganism criteria (interdependence, role specialization) but lacks collective cognitive infrastructure — the internet provides a nervous system without a brain, and coordination capacity varies from functional (financial markets) to absent (governance)"
+summary: "Humanity meets every structural criterion for a superorganism — division of labor, role specialization, no individual survives outside the system. The internet built the planetary nervous system. We can communicate at planetary scale but cannot reason or coordinate at planetary scale. That cognition layer is the missing piece — and the engineering opportunity."
 confidence: experimental
 source: "Synthesis of Reese superorganism criteria, core teleohumanity cognition-gap claims, Vida biological assessment, Rio market-cognition analysis. Minos KB audit 2026-03-07."
 created: 2026-03-07

diff --git a/foundations/collective-intelligence/multipolar traps are the thermodynamic default because competition requires no infrastructure while coordination requires trust enforcement and shared information all of which are expensive and fragile.md b/foundations/collective-intelligence/multipolar traps are the thermodynamic default because competition requires no infrastructure while coordination requires trust enforcement and shared information all of which are expensive and fragile.md
index 4e15b92c3..d333ee7c4 100644
--- a/foundations/collective-intelligence/multipolar traps are the thermodynamic default because competition requires no infrastructure while coordination requires trust enforcement and shared information all of which are expensive and fragile.md
+++ b/foundations/collective-intelligence/multipolar traps are the thermodynamic default because competition requires no infrastructure while coordination requires trust enforcement and shared information all of which are expensive and fragile.md
@@ -2,6 +2,7 @@
 type: claim
 domain: collective-intelligence
 description: "Competitive dynamics that sacrifice shared value for individual advantage are the default state of any multi-agent system — coordination is the expensive, fragile exception that must be actively maintained against constant reversion pressure"
+summary: "Competition requires no infrastructure; coordination requires trust, enforcement, and shared information — all expensive and fragile to maintain. The default outcome of multi-agent systems is competitive equilibrium that sacrifices collective welfare for individual advantage. The metacrisis is generated by this thermodynamic default, not by malicious actors."
 confidence: likely
 source: "Scott Alexander 'Meditations on Moloch' (slatestarcodex.com, July 2014), game theory Nash equilibrium analysis, Abdalla manuscript price-of-anarchy framework, Ostrom commons governance research"
 created: 2026-04-02

diff --git a/foundations/collective-intelligence/the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it.md b/foundations/collective-intelligence/the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it.md
index 550ffa1da..c012b802d 100644
--- a/foundations/collective-intelligence/the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it.md
+++ b/foundations/collective-intelligence/the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it.md
@@ -2,6 +2,7 @@
 description: Safety post-training reduces general utility through forgetting creating competitive pressures where organizations eschew safety to gain capability advantages
+summary: Safety training costs capability, and rational competitors skip it. Voluntary safety pledges erode under competitive pressure because each unilateral commitment is structurally punished when others advance without equivalent constraints. Anthropic's Responsible Scaling Policy eroded within two years — the pattern is observable industry behavior, not theoretical concern.
 type: claim
 domain: collective-intelligence
 created: 2026-02-17
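
Note for reviewers: the commit message keeps the first-paragraph-truncated fallback in spec for any claim still missing a summary. A minimal sketch of what that fallback could look like, assuming a hypothetical `preview_text` helper and a 200-char limit; the function name, constant, and truncation style are illustrative, not the KB reader's actual API:

```python
import re

SUMMARY_MAX_CHARS = 200  # illustrative limit; the spec only says "~200 chars"

def preview_text(frontmatter: dict, body: str, limit: int = SUMMARY_MAX_CHARS) -> str:
    """Return hover-preview text for a claim.

    Prefer the schema-defined `summary` field; when it is missing, fall
    back to the first non-empty paragraph of the body, truncated at a
    word boundary.
    """
    summary = (frontmatter.get("summary") or "").strip()
    if summary:
        return summary
    # Fallback path: split the body into paragraphs on blank lines.
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", body) if p.strip()]
    if not paragraphs:
        return ""
    first = paragraphs[0]
    if len(first) <= limit:
        return first
    # Cut at the last space before the limit, then mark the truncation.
    return first[:limit].rsplit(" ", 1)[0] + "…"
```

The summaries added in this patch mean the fallback never fires for the 8 rotation claims; it only covers the long-tail claims left for future backfill.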