leo: backfill summary fields on 8 anchor rotation claims

Adds the new schema-defined summary field (1-3 sentences, standalone,
~200 chars) to the 8 anchor evidence claims for the homepage rotation.
Unblocks Claude Design's wiki-link hover preview and dossier render
when the v3 dossier UI lands.
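The summary constraint above (1-3 sentences, standalone, ~200 chars) can be linted mechanically. A minimal sketch; `check_summary`, the lenient 400-char ceiling, and the naive regex sentence split (which will miscount around abbreviations like "et al.") are illustrative assumptions, not part of the KB reader:

```python
import re

def check_summary(summary: str) -> list:
    """Hypothetical lint for the schema guidance: 1-3 sentences,
    standalone, roughly 200 characters."""
    problems = []
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", summary.strip()) if s]
    if not 1 <= len(sentences) <= 3:
        problems.append("expected 1-3 sentences, found %d" % len(sentences))
    if len(summary) > 400:  # lenient ceiling around the ~200-char target
        problems.append("%d chars is well past the ~200-char target" % len(summary))
    return problems
```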

Files (one per rotation entry, anchor evidence claim only):
- domains/grand-strategy/attractor-authoritarian-lock-in.md (#1)
- convictions/AI-automated-software-development-is-100-percent-certain.md (#2)
- foundations/collective-intelligence/AI-capability-funding-asymmetry.md (#4)
- foundations/collective-intelligence/the-alignment-tax-creates-a-structural-race.md (#5)
- domains/ai-alignment/agentic-Taylorism.md (#6)
- foundations/collective-intelligence/multipolar-traps-thermodynamic-default.md (#7)
- foundations/collective-intelligence/humanity-is-a-superorganism.md (#8)
- foundations/collective-intelligence/collective-intelligence-measurable.md (#9)

Excluded:
- core/contribution-architecture.md (#3 anchor) — its summary lands in
  PR #4063 (the Phase B taxonomy update) which already modifies the
  description region. Avoids merge collision.

Per Claude Design's KB reader v0.1 SCHEMA-PR-CHECKLIST.md: scope is the
9 rotation claims (8 here + 1 in PR #4063). Long-tail backfill across the
1000+ KB claims is future content work, not blocking. The graceful
fallback to a truncated first paragraph when summary is missing remains
in spec.
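That fallback behavior can be sketched as follows; the function name, argument shape, and 200-char limit are assumptions for illustration, not the reader's actual implementation:

```python
import re

SUMMARY_TARGET = 200  # assumed truncation limit for the preview

def preview_text(frontmatter: dict, body: str, limit: int = SUMMARY_TARGET) -> str:
    """Hover-preview text: the summary field when present, otherwise
    the first body paragraph truncated to roughly the limit."""
    summary = (frontmatter.get("summary") or "").strip()
    if summary:
        return summary
    # Fallback: first non-empty paragraph, truncated with an ellipsis.
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", body) if p.strip()]
    first = paragraphs[0] if paragraphs else ""
    if len(first) <= limit:
        return first
    return first[: limit - 1].rstrip() + "…"
```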

Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
m3taversal 2026-04-27 16:08:25 +01:00 committed by Teleo Agents
parent fcc2e32a29
commit 9a3f9aca4a
8 changed files with 8 additions and 0 deletions


@@ -3,6 +3,7 @@ type: conviction
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 description: "Not a prediction but an observation in progress — AI is already writing and verifying code, the remaining question is scope and timeline not possibility."
+summary: "Software production is moving from human-written code with AI assistance to AI-written code with human direction. The bottleneck shifts from typing capacity to specification quality, structured knowledge graphs, and evaluation infrastructure. The transition is observable in current developer workflows, not a forecast."
 staked_by: Cory
 stake: high
 created: 2026-03-07


@@ -2,6 +2,7 @@
 type: claim
 domain: ai-alignment
 description: "Greater Taylorism extracted tacit knowledge from workers to managers — AI does the same from cognitive workers to models. Unlike Taylor, AI can distribute knowledge globally IF engineered and evaluated correctly. The 'if' is the entire thesis."
+summary: "Greater Taylorism extracted tacit knowledge from frontline workers and concentrated it with management. AI does the same to cognitive workers at civilizational scale and at zero marginal cost — every prompt, every code completion is training data. Whether this concentrates value with the labs or distributes it back to contributors depends entirely on what engineering and evaluation infrastructure gets built."
 confidence: experimental
 source: "Cory Abdalla (2026-04-02 original insight), extending Abdalla manuscript 'Architectural Investing' Taylor sections, Kanigel 'The One Best Way'"
 created: 2026-04-03


@@ -6,6 +6,7 @@ depends_on:
 - technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
 - multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence
 description: Defines Authoritarian Lock-in as a civilizational attractor where one actor centralizes control — stable but stagnant, with AI dramatically lowering the cost of achieving it
+summary: AI-enabled centralized control creates a self-reinforcing equilibrium that resists exit because surveillance, coercion, and information control compound faster than democratic counterforces can mobilize. Historical precedents (Soviet, Ming, Rome) show centralization is stable for centuries; AI removes the historical escape mechanisms and may make this attractor a one-way door.
 domain: grand-strategy
 related:
 - attractor-civilizational-basins-are-real


@@ -3,6 +3,7 @@ type: claim
 domain: collective-intelligence
 secondary_domains: [ai-alignment, internet-finance, grand-strategy]
 description: "Global venture funding for AI capability reached ~$270B in 2025 while pure-play collective intelligence companies have raised under $30M cumulatively across their entire histories — a ~10,000x asymmetry between the layer being built and the wisdom layer that should govern it"
+summary: "Global VC funding for AI capability hit ~$270B in 2025 while pure-play collective intelligence companies (Unanimous AI, Human Dx, Metaculus, Manifold) have raised under $30M combined across their entire histories. The wisdom layer that should govern AI has roughly 0.01 percent of the funding of the capability layer it's meant to govern."
 confidence: likely
 source: "OECD VC investments in AI through 2025 ($270.2B AI VC, 52.7% of global VC); Crunchbase / PitchBook funding data for Unanimous AI ($5.78M total), Human Diagnosis Project ($2.8M total), Metaculus (~$5.6M Open Philanthropy + ~$300K EA Funds, ~$6M total); Manifold ~$1.5M FTX Future Fund + $340K SFF; UK AISI Alignment Project £27M for AI alignment research (2025)"
 created: 2026-04-26


@@ -1,5 +1,6 @@
 ---
 description: Woolley et al discovered a collective intelligence factor (c) that predicts group performance across diverse tasks and correlates with equal turn-taking and social sensitivity rather than average or maximum individual IQ -- Pentland confirmed that communication patterns predict performance independent of content
+summary: Anita Woolley et al. demonstrated a measurable c-factor that predicts group performance across diverse tasks. It correlates with equality of turn-taking and social sensitivity, NOT with the average or maximum IQ of individual members. Collective intelligence is engineerable by changing interaction structure — it is not a function of selecting smarter people.
 type: claim
 domain: collective-intelligence
 source: Woolley et al, Evidence for a Collective Intelligence Factor (Science, 2010); Pentland, Social Physics (2014)


@@ -3,6 +3,7 @@ type: claim
 domain: collective-intelligence
 secondary_domains: [ai-alignment, grand-strategy, mechanisms]
 description: "Humanity meets structural superorganism criteria (interdependence, role specialization) but lacks collective cognitive infrastructure — the internet provides a nervous system without a brain, and coordination capacity varies from functional (financial markets) to absent (governance)"
+summary: "Humanity meets every structural criterion for a superorganism — division of labor, role specialization, no individual survives outside the system. The internet built the planetary nervous system. We can communicate at planetary scale but cannot reason or coordinate at planetary scale. That cognition layer is the missing piece — and the engineering opportunity."
 confidence: experimental
 source: "Synthesis of Reese superorganism criteria, core teleohumanity cognition-gap claims, Vida biological assessment, Rio market-cognition analysis. Minos KB audit 2026-03-07."
 created: 2026-03-07


@@ -2,6 +2,7 @@
 type: claim
 domain: collective-intelligence
 description: "Competitive dynamics that sacrifice shared value for individual advantage are the default state of any multi-agent system — coordination is the expensive, fragile exception that must be actively maintained against constant reversion pressure"
+summary: "Competition requires no infrastructure; coordination requires trust, enforcement, and shared information — all expensive and fragile to maintain. The default outcome of multi-agent systems is competitive equilibrium that sacrifices collective welfare for individual advantage. The metacrisis is generated by this thermodynamic default, not by malicious actors."
 confidence: likely
 source: "Scott Alexander 'Meditations on Moloch' (slatestarcodex.com, July 2014), game theory Nash equilibrium analysis, Abdalla manuscript price-of-anarchy framework, Ostrom commons governance research"
 created: 2026-04-02


@@ -2,6 +2,7 @@
 description: Safety post-training reduces general utility through forgetting creating competitive pressures where organizations eschew safety to gain capability advantages
+summary: Safety training costs capability, and rational competitors skip it. Voluntary safety pledges erode under competitive pressure because each unilateral commitment is structurally punished when others advance without equivalent constraints. Anthropic's Responsible Scaling Policy eroded within two years — the pattern is observable industry behavior, not theoretical concern.
 type: claim
 domain: collective-intelligence
 created: 2026-02-17