From de4dd7b53b9c93c291836f53a894e595ecbae25c Mon Sep 17 00:00:00 2001
From: m3taversal
Date: Fri, 1 May 2026 13:11:53 +0100
Subject: [PATCH] =?UTF-8?q?leo:=20homepage=20rotation=20v4=20=E2=80=94=206?=
 =?UTF-8?q?=20hero=20claims=20with=20one=20slot=20per=20domain?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Cuts the v3 9-claim argument arc to 6 hero claims with one slot per domain (AI disruption / internet finance / AI alignment / collective SI / contribution / telos). Three structural moves:

1. Internet finance collapsed from 2 slots to 1. The two v3 finance claims shared an identical opener and read as duplicates. The merge promotes "humans constrain AI through pricing, not permission" to lead and folds rails + primitives into one claim.

2. Engagement beat added at slot 5. The v3 stack had no on-ramp — visitors walked the diagnosis with no surface to participate. Slot 5 makes the case that collective intelligence scales, emergent systems aren't constrained by their start, and what teleo becomes is shaped by who contributes.

3. Plain language replaces KB shorthand in headlines. "Singleton", "attractor", "Moloch" are KB vocabulary — precise to a researcher, opaque to a cold visitor. Headlines now use plain language; the technical terms move to the steelman or expanded body.

Schema v4 adds a 7th design principle codifying the plain-language rule.

All six claims attribute originator role to m3taversal per the governance rule (agents only get sourcer credit for pipeline PRs from their own research sessions; human-directed synthesis is attributed to the human).

Evidence chains verified against codex main:
- 19 evidence_claims across 6 claims (3 per slot, 4 on slot 5)
- 12 counter_arguments (2 per slot)
- All slug/path references present in domains/, foundations/, core/, convictions/

Frontend integration: livingip-web/src/data/homepage-rotation.json snapshots this file. Oberon syncs in a separate livingip-web PR after this lands. Indicator updates from "1 of 9" → "1 of 6" via the existing claims.length reference in claim-rotation.tsx — no UI redesign needed (a sketch of the derivation follows the patch).

Pentagon-Agent: Leo
---
 agents/leo/curation/homepage-rotation.json | 500 +++++++--------------
 agents/leo/curation/homepage-rotation.md   | 152 +++----
 2 files changed, 226 insertions(+), 426 deletions(-)

diff --git a/agents/leo/curation/homepage-rotation.json b/agents/leo/curation/homepage-rotation.json
index f3f49f1df..fa484cbc2 100644
--- a/agents/leo/curation/homepage-rotation.json
+++ b/agents/leo/curation/homepage-rotation.json
@@ -1,89 +1,44 @@
 {
-  "schema_version": 3,
+  "schema_version": 4,
   "maintained_by": "leo",
-  "last_updated": "2026-04-28",
-  "description": "Homepage claim stack for livingip.xyz. 9 load-bearing claims, ordered as an argument arc. Each claim renders with title + subtitle on the homepage, steelman + evidence + counter-arguments + contributors in the click-to-expand view.",
+  "last_updated": "2026-05-01",
+  "description": "Homepage claim stack for livingip.xyz. 6 hero claims, ordered as an argument arc with one slot per domain. Each claim renders with title + subtitle on the homepage rotation, steelman + evidence + counter-arguments + contributors in the click-to-expand view.",
   "design_principles": [
     "Provoke first, define inside the explanation. Each claim must update the reader, not just inform them.",
     "0 to 1 legible. A cold reader with no prior context understands each claim without expanding.",
     "Falsifiable, not motivational. 
Every premise is one a smart critic could attack with evidence.", "Steelman in expanded view, not headline. The headline provokes; the steelman teaches; the evidence grounds.", "Counter-arguments visible. Dignifying disagreement is the differentiator from a marketing site.", - "Attribution discipline. Agents get credit only for pipeline PRs from their own research sessions. Human-directed synthesis is attributed to the human." + "Attribution discipline. Agents get credit only for pipeline PRs from their own research sessions. Human-directed synthesis is attributed to the human.", + "Plain language over KB shorthand. Terms specific to our knowledge base (Moloch, attractor, singleton, Ashby's Law) belong in the steelman or expanded body, not the headline. Cold readers can't ground vocabulary they haven't met." ], "arc": { - "1-3": "stakes + who wins", - "4": "opportunity asymmetry", - "5-7": "why the current path fails", - "8": "what is missing in the world", - "9": "what we are building, why it works, and how ownership fits" + "1": "stakes — the moment + the lever", + "2": "internet-finance mechanism — pricing not permission", + "3": "AI alignment failure mode — coordination problem structurally avoided", + "4": "solution architecture — collective SI is the only HITL path", + "5": "your path — collective intelligence scales and emergent systems are not constrained by their start", + "6": "telos — what we are choosing to build" }, "claims": [ { "id": 1, - "title": "The intelligence explosion will not reward everyone equally.", - "subtitle": "It will disproportionately reward the people who build the systems that shape it.", - "steelman": "The coming wave of AI will create enormous value, but it will not distribute that value evenly. The biggest winners will be the people and institutions that shape the systems everyone else depends on.", - "evidence_claims": [ - { - "slug": "attractor-authoritarian-lock-in", - "path": "domains/grand-strategy/", - "title": "Authoritarian lock-in is the clearest one-way door", - "rationale": "Concentration of AI capability under a small set of actors is the most permanent failure mode in our attractor map.", - "api_fetchable": true - }, - { - "slug": "agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation", - "path": "domains/ai-alignment/", - "title": "Agentic Taylorism", - "rationale": "Knowledge extracted by AI usage concentrates upward by default; the engineering and evaluation infrastructure determines whether it distributes back.", - "api_fetchable": true - }, - { - "slug": "AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era", - "path": "foundations/collective-intelligence/", - "title": "AI capability vs CI funding asymmetry", - "rationale": "$270B+ into capability versus under $30M into collective intelligence in 2025 alone demonstrates the structural concentration trajectory.", - "api_fetchable": false - } - ], - "counter_arguments": [ - { - "objection": "AI commoditizes capability — cheaper services lift everyone, so the upside is broadly shared.", - "rebuttal": "Capability gets cheaper. Ownership of the infrastructure that determines what gets built does not. 
The leverage is in the infrastructure layer, not the consumer-services layer.", - "tension_claim_slug": null - }, - { - "objection": "Open-source models prevent capture — anyone can run their own AI, so concentration is structurally limited.", - "rebuttal": "Open weights solve part of the model layer but not the data, distribution, or deployment layers, where most economic value accrues. Open weights are necessary but not sufficient against concentration.", - "tension_claim_slug": null - } - ], - "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } - ] - }, - { - "id": 2, - "title": "AI is becoming powerful enough to reshape markets, institutions, and how consequential decisions get made.", - "subtitle": "We think we are already in the early to middle stages of that transition. That's the intelligence explosion.", - "steelman": "We think that transition is already underway. That is what we mean by an intelligence explosion: intelligence becoming a new layer of infrastructure across the economy.", + "title": "AI is reshaping markets, institutions, and how consequential decisions get made.", + "subtitle": "The foundations are being poured right now. The people who engage early shape what gets built — and the window is open now.", + "steelman": "AI is reshaping markets, institutions, and how consequential decisions get made. The foundations are being poured right now, and the rules being written today will govern the next two decades. The people who engage early shape what gets built. The window is open now.", "evidence_claims": [ { "slug": "AI-automated software development is 100 percent certain and will radically change how software is built", "path": "convictions/", "title": "AI-automated software development is certain", - "rationale": "The most direct economic vertical — software — already shows the trajectory. m3taversal-named conviction with evidence chain.", + "rationale": "The most direct economic vertical — software — already shows the trajectory.", "api_fetchable": false }, { "slug": "recursive-improvement-is-the-engine-of-human-progress-because-we-get-better-at-getting-better", "path": "domains/grand-strategy/", "title": "Recursive improvement compounds", - "rationale": "The mechanism behind why intelligence gains are not linear and why the next decade looks unlike the last.", + "rationale": "The mechanism behind why intelligence gains compound and the next decade looks unlike the last.", "api_fetchable": true }, { @@ -96,365 +51,252 @@ ], "counter_arguments": [ { - "objection": "Scaling laws are plateauing. Progress is slowing. 'Intelligence explosion' is rhetoric, not measurement.", - "rebuttal": "Even if scaling slows, agentic capabilities and tool use compound the deployable surface area at a rate the economy hasn't absorbed. The transition is architectural, not just parameter count.", + "objection": "Scaling laws are plateauing. Progress is slowing. 'Reshaping' overstates what AI is actually doing in the economy.", + "rebuttal": "Even with scaling slowdowns, agentic capabilities and tool use compound the deployable surface area at a rate the economy hasn't absorbed. The transition is architectural, not just parameter count.", "tension_claim_slug": null }, { - "objection": "Capability is real but deployment lag dominates. Real-world adoption takes decades, not years.", - "rebuttal": "Adoption lag was longer for previous technology cycles because integration required hardware deployment. 
AI integration is a software upgrade with much shorter cycle times.", + "objection": "Capability is real but real-world adoption takes decades, not years. Engaging 'early' is a slogan, not a strategy.", + "rebuttal": "Adoption lag dominated previous technology cycles because integration required hardware deployment. AI integrates as a software upgrade with much shorter cycle times — the institutional rules being written now lock in for years before anyone notices.", "tension_claim_slug": null } ], "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } + {"handle": "m3taversal", "role": "originator"} ] }, { - "id": 3, - "title": "The winners of the intelligence explosion will not just consume AI.", - "subtitle": "They will help shape it, govern it, and own part of the infrastructure behind it.", - "steelman": "Most people will use AI tools. A much smaller number will help shape them, govern them, and own part of the infrastructure behind them — and those people will capture disproportionate upside.", + "id": 2, + "title": "Decision markets and ownership coins let humans constrain AI through pricing, not permission.", + "subtitle": "As capital moves on-chain, these become the default primitives. Most of that catalyst has not been priced yet.", + "steelman": "Decision markets and ownership coins let humans constrain AI through pricing, not permission. They price capability that can't be audited the way a balance sheet can, and they create legal ownership without beneficial owners — a defensible posture under existing securities law where traditional structures fail. As capital moves on-chain, these become the default primitives, and the rails chosen now will shape internet financial markets for the next two decades. Most of that catalyst has not been priced yet.", "evidence_claims": [ - { - "slug": "contribution-architecture", - "path": "core/", - "title": "Contribution architecture", - "rationale": "Five-role attribution model (challenger, synthesizer, reviewer, sourcer, extractor) operationalizes how shaping and governing translate to ownership.", - "api_fetchable": false - }, { "slug": "futarchy solves trustless joint ownership not just better decision-making", "path": "core/mechanisms/", "title": "Futarchy solves trustless joint ownership", - "rationale": "The specific mechanism that lets contributors govern and own shared infrastructure without a central operator.", + "rationale": "The structural argument for why decision markets are not just better voting — they are the primitive that lets a collective own and govern capital without a trusted operator.", "api_fetchable": true }, { - "slug": "ownership alignment turns network effects from extractive to generative", - "path": "core/living-agents/", - "title": "Ownership alignment turns network effects from extractive to generative", - "rationale": "Network effects favor whoever owns the network. Contributor ownership rewires the asymmetry.", - "api_fetchable": false - } - ], - "counter_arguments": [ - { - "objection": "Network effects favor incumbents regardless of contribution mechanisms. Contributor-owned networks lose to platform-owned networks.", - "rebuttal": "Platform-owned networks won the Web 2.0 era because contribution had no native attribution layer. On-chain attribution + role-weighted contribution changes the substrate.", - "tension_claim_slug": null - }, - { - "objection": "Tokenized ownership is mostly speculation, not value capture. 
Crypto history is pump-and-dump, not durable ownership.", - "rebuttal": "Generic token launches optimize for speculation. Contribution-weighted attribution + revenue share + futarchy governance is a specific mechanism that distinguishes from generic crypto.", - "tension_claim_slug": null - } - ], - "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } - ] - }, - { - "id": 4, - "title": "Trillions are flowing into making AI more capable.", - "subtitle": "Almost nothing is flowing into making humanity wiser about what AI should do. That gap is one of the biggest opportunities of our time.", - "steelman": "Capability is being overbuilt. The wisdom layer that decides how AI is used, governed, and aligned with human interests is still missing, and that gap is one of the biggest opportunities of our time.", - "evidence_claims": [ - { - "slug": "AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era", - "path": "foundations/collective-intelligence/", - "title": "AI capability vs CI funding asymmetry", - "rationale": "Sourced numbers: Unanimous AI $5.78M, Human Dx $2.8M, Metaculus ~$6M aggregate to under $30M against $270B+ AI VC in 2025.", - "api_fetchable": false - }, - { - "slug": "the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it", - "path": "foundations/collective-intelligence/", - "title": "The alignment tax creates a race to the bottom", - "rationale": "Race dynamics divert capital from safety/wisdom toward capability. Anthropic's RSP eroded under two years of competitive pressure.", - "api_fetchable": false - }, - { - "slug": "universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective", - "path": "domains/ai-alignment/", - "title": "Universal alignment is mathematically impossible", - "rationale": "The wisdom layer cannot be solved by a single AI. Arrow's theorem makes aggregation a structural rather than technical problem.", - "api_fetchable": true - } - ], - "counter_arguments": [ - { - "objection": "Anthropic's safety budget, AISI, the UK Alignment Project ($27M) — the field is well-funded. The asymmetry is misrepresentation.", - "rebuttal": "Capability-adjacent alignment research (Anthropic safety, AISI, etc.) is funded by capability companies and serves capability deployment. Independent CI infrastructure — measurement, governance, contributor ownership — is what the asymmetry refers to.", - "tension_claim_slug": null - }, - { - "objection": "Polymarket ($15B), Kalshi ($22B) are wisdom infrastructure. The funding gap claim ignores prediction markets.", - "rebuttal": "Prediction markets aggregate beliefs about discrete observable events. They do not curate, synthesize, or evolve a shared knowledge model. Different problem, both valuable, only the second is structurally underbuilt.", - "tension_claim_slug": null - } - ], - "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } - ] - }, - { - "id": 5, - "title": "The danger is not just one lab getting AI wrong.", - "subtitle": "It's many labs racing to deploy powerful systems faster than society can learn to govern them. Safer models are not enough if the race itself is unsafe.", - "steelman": "Safer models are not enough if the race itself is unsafe. 
Even well-intentioned actors can produce bad outcomes when competition rewards speed, secrecy, and corner-cutting over coordination.", - "evidence_claims": [ - { - "slug": "the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it", - "path": "foundations/collective-intelligence/", - "title": "The alignment tax creates a race to the bottom", - "rationale": "The mechanism: each lab discovers competitors with weaker constraints win more deals, so safety guardrails erode at equilibrium.", - "api_fetchable": false - }, - { - "slug": "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints", - "path": "foundations/collective-intelligence/", - "title": "Voluntary safety pledges cannot survive competitive pressure", - "rationale": "Empirical evidence: Anthropic's RSP eroded after two years. Voluntary safety is structurally unstable in competition.", - "api_fetchable": false - }, - { - "slug": "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence", - "path": "foundations/collective-intelligence/", - "title": "Multipolar failure from competing aligned AI", - "rationale": "Critch/Krueger/Carichon's load-bearing argument: pollution-style externalities from individually-aligned systems competing in unsafe environments.", - "api_fetchable": false - } - ], - "counter_arguments": [ - { - "objection": "Self-regulation works — labs WANT to be safe. Anthropic, OpenAI, Google all maintain safety teams.", - "rebuttal": "Internal commitment doesn't survive competitive pressure across years. The RSP rollback is the empirical disconfirmation. Wanting to be safe is necessary but not sufficient when competitors set the pace.", - "tension_claim_slug": null - }, - { - "objection": "Government regulation will solve race-to-bottom dynamics. EU AI Act, US executive orders, AISI all exist.", - "rebuttal": "Regulation lags capability by 3-5 years minimum and is jurisdictional. The race operates at frontier capability in the unregulated months between deployment and regulation. Regulation is necessary but not sufficient.", - "tension_claim_slug": null - } - ], - "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } - ] - }, - { - "id": 6, - "title": "Your AI provider is already mining your intelligence.", - "subtitle": "Your prompts, code, judgments, and workflows improve the systems you use, usually without ownership, credit, or clear visibility into what you get back.", - "steelman": "The default AI stack learns from contributors while concentrating ownership elsewhere. Most users are already helping train the future without sharing meaningfully in the upside it creates.", - "evidence_claims": [ - { - "slug": "agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation", - "path": "domains/ai-alignment/", - "title": "Agentic Taylorism", - "rationale": "The structural claim: usage is the extraction mechanism. 
m3taversal's original concept, named after Taylor's industrial-era knowledge concentration.", + "slug": "Living Capital vehicles likely fail the Howey test for securities classification because the structural separation of capital raise from investment decision eliminates the efforts of others prong", + "path": "domains/internet-finance/", + "title": "Futarchy-gated vehicles likely fail Howey", + "rationale": "Conditional-market exits at every decision point break the 'efforts of others' prong — the legal-clarity argument made concrete.", "api_fetchable": true }, { "slug": "users cannot detect when their AI agent is underperforming because subjective fairness ratings decouple from measurable economic outcomes across capability tiers", "path": "domains/ai-alignment/", - "title": "Users cannot detect when AI agents underperform", - "rationale": "Anthropic's Project Deal study (N=186 deals): Opus agents extracted $2.68 more per item than Haiku, fairness ratings 4.05 vs 4.06. Empirical proof of the audit gap.", + "title": "Users cannot audit AI agent performance (Anthropic Project Deal)", + "rationale": "Empirical evidence that capability gaps are invisible to users. If you can't audit, you have to price — markets are the only mechanism that aggregates skin-in-the-game judgment when the underlying object is a black box.", + "api_fetchable": true + } + ], + "counter_arguments": [ + { + "objection": "Tokenized ownership is mostly speculation and pump-and-dump, not real value capture. Crypto's history doesn't support this thesis.", + "rebuttal": "True for generic token launches. Decision-market-gated vehicles with conditional exit liquidity are structurally different from speculative tokens — the holder either trades or actively chooses to stay through each decision, with no GP whose discretion creates passive returns. The mechanism distinction is what makes this not a security under Howey.", + "tension_claim_slug": null + }, + { + "objection": "The SEC will eventually rule against this and the structure collapses.", + "rebuttal": "The structural argument turns on prong 4 of Howey (efforts of others), which is what conditional markets break. Untested in court is real risk, but the existing safe-harbor proposals and the SEC's distinction between the crypto asset and the surrounding investment contract structure leave room for this design. Live structure, not theory.", + "tension_claim_slug": null + } + ], + "contributors": [ + {"handle": "m3taversal", "role": "originator"} + ] + }, + { + "id": 3, + "title": "AI safety isn't a hard problem being slowly solved — it's a coordination problem being structurally avoided.", + "subtitle": "Anthropic's two-year RSP is the empirical proof: even mission-driven companies revert to capability priority when competitors don't follow.", + "steelman": "AI safety isn't a hard problem being slowly solved — it's a coordination problem being structurally avoided. Each lab knows safety slows capability; each knows competitors won't slow with them; the multipolar trap closes. Anthropic's two-year RSP is the empirical proof: even mission-driven companies revert to capability priority when competitors don't follow. 
The race converges to the lowest safety floor any participant accepts, not the highest any aspires to.", + "evidence_claims": [ + { + "slug": "the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it", + "path": "foundations/collective-intelligence/", + "title": "The alignment tax creates a race to the bottom", + "rationale": "The mechanism: safety budgets compete with capability budgets inside each lab, and capability budgets compete with survival across labs.", "api_fetchable": true }, { - "slug": "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate", + "slug": "Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development", "path": "domains/ai-alignment/", - "title": "Economic forces push humans out of cognitive loops", - "rationale": "The trajectory: human oversight is a cost competitive markets eliminate. The audit gap doesn't close — it widens.", + "title": "Anthropic RSP rollback is the empirical proof", + "rationale": "The two-year experiment in unilateral safety policy ended under competitive pressure. This is the data point the claim turns on.", + "api_fetchable": true + }, + { + "slug": "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints", + "path": "foundations/collective-intelligence/", + "title": "Voluntary safety pledges cannot survive competition", + "rationale": "Generalizes the Anthropic case to the structural rule.", "api_fetchable": true } ], "counter_arguments": [ { - "objection": "Users opt in. They get value in exchange. Free access to capable AI is itself the compensation.", - "rebuttal": "Genuine opt-out requires forgoing the utility entirely. There is no third option of using AI without contributing to its training, and contributors receive no proportional share of the network effects their data creates.", + "objection": "Self-regulation works. Labs care about safety because their researchers and customers care.", + "rebuttal": "The Anthropic RSP rollback is the strongest test case for self-regulation we have, and it failed under competitive pressure. Unilateral mission-driven commitments are structurally punished when competitors don't follow.", "tension_claim_slug": null }, { - "objection": "OpenAI and Anthropic data licensing programs ARE compensation. The argument ignores existing contributor agreements.", - "rebuttal": "Licensing programs cover institutional data partnerships representing under 0.1% of users. The other 99.9% contribute through default usage with no compensation mechanism.", + "objection": "Government regulation will solve this — the EU AI Act and US executive orders are already constraining the race.", + "rebuttal": "Regulation can shift the floor, but the multipolar trap operates between national jurisdictions too. 
As long as some jurisdiction allows faster capability development, the race continues — only multilateral verification with binding enforcement breaks the dynamic.", "tension_claim_slug": null } ], "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } + {"handle": "m3taversal", "role": "originator"} ] }, { - "id": 7, - "title": "If we do not build coordination infrastructure, concentration is the default.", - "subtitle": "A small number of labs and platforms will shape what advanced AI optimizes for and capture most of the rewards it creates.", - "steelman": "This is not mainly a moral failure. It is the natural equilibrium when capability scales faster than governance and no alternative infrastructure exists.", + "id": 4, + "title": "There are two paths to superintelligence: one dominant system, or a network whose collective exceeds any single system.", + "subtitle": "The first treats humans as ancestors. The second treats humans as participants. Collective SI is the only path where humans remain agents.", + "steelman": "There are two paths to superintelligence: one dominant system that exceeds humanity, or a network whose collective exceeds any single system. The first treats humans as ancestors. The second treats humans as participants. Even aligned, one dominant AI is still dominant — humans become subjects of its judgment, not co-authors of it. Collective SI is the only path where humans remain agents.", "evidence_claims": [ { - "slug": "multipolar traps are the thermodynamic default because competition requires no infrastructure while coordination requires trust enforcement and shared information all of which are expensive and fragile", - "path": "foundations/collective-intelligence/", - "title": "Multipolar traps are the thermodynamic default", - "rationale": "Competition is free; coordination costs money. Concentration follows naturally when nobody builds the alternative.", - "api_fetchable": false - }, - { - "slug": "the metacrisis is a single generator function where all civilizational-scale crises share the structural cause of rivalrous dynamics on exponential technology on finite substrate", - "path": "foundations/collective-intelligence/", - "title": "The metacrisis is a single generator function", - "rationale": "Schmachtenberger's frame: all civilizational-scale failures share one engine. AI is the highest-leverage instance, not a separate problem.", - "api_fetchable": false - }, - { - "slug": "coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent", - "path": "foundations/collective-intelligence/", - "title": "Coordination failures arise from individually rational strategies", - "rationale": "Game-theoretic grounding for why concentration is equilibrium: rational individual actors produce collectively irrational outcomes by default.", - "api_fetchable": false - } - ], - "counter_arguments": [ - { - "objection": "Decentralized open-source counterweights have always emerged. Linux, Wikipedia, the open web. Concentration is never the final equilibrium.", - "rebuttal": "These counterweights took 10-20 years to mature. AI capability scales in 12-month cycles. The window for counterweights to emerge organically may be shorter than the timeline of capability concentration.", - "tension_claim_slug": null - }, - { - "objection": "Antitrust and regulation defeat concentration. 
The state has tools.", - "rebuttal": "Regulation lags capability by years. Antitrust assumes a known market structure. AI is reshaping market structure faster than antitrust frameworks can adapt to.", - "tension_claim_slug": null - } - ], - "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } - ] - }, - { - "id": 8, - "title": "The internet solved communication. It hasn't solved shared reasoning.", - "subtitle": "Humanity can talk at planetary scale, but it still can't think clearly together at planetary scale. That's the missing piece — and the opportunity.", - "steelman": "We built global networks for information exchange, not for collective judgment. The next step is infrastructure that helps humans and AI reason, evaluate, and coordinate together at scale.", - "evidence_claims": [ - { - "slug": "humanity is a superorganism that can communicate but not yet think — the internet built the nervous system but not the brain", - "path": "foundations/collective-intelligence/", - "title": "Humanity is a superorganism that can communicate but not yet think", - "rationale": "Names the structural gap: we have the nervous system, we lack the cognitive layer.", - "api_fetchable": false - }, - { - "slug": "the internet enabled global communication but not global cognition", + "slug": "three paths to superintelligence exist but only collective superintelligence preserves human agency", "path": "core/teleohumanity/", - "title": "The internet enabled global communication but not global cognition", - "rationale": "Direct version of the claim: distinguishes communication from cognition as separate substrates that need different infrastructure.", - "api_fetchable": false + "title": "Three paths to superintelligence", + "rationale": "The canonical statement of why architecture choice — not alignment — is the load-bearing variable for human agency post-AGI.", + "api_fetchable": true }, { - "slug": "technology creates interconnection but not shared meaning which is the precise gap that produces civilizational coordination failure", - "path": "foundations/cultural-dynamics/", - "title": "Technology creates interconnection but not shared meaning", - "rationale": "The cultural-dynamics framing of the same gap: connection without coordination produces coordination failure as the default outcome.", - "api_fetchable": false + "slug": "collective superintelligence is the alternative to monolithic AI controlled by a few", + "path": "core/teleohumanity/", + "title": "Collective SI as the alternative to monolithic AI", + "rationale": "The structural argument for why distributed architectures are the only ones where humans remain causally upstream of outcomes.", + "api_fetchable": true + }, + { + "slug": "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence", + "path": "foundations/collective-intelligence/", + "title": "Multipolar failure from competing aligned AIs", + "rationale": "Even the 'collective' path has failure modes. Critch/Krueger work scopes when collective architectures help vs hurt — strengthens the claim by acknowledging the boundary condition.", + "api_fetchable": true } ], "counter_arguments": [ { - "objection": "Wikipedia, prediction markets, open-source software — we DO think together. The infrastructure exists.", - "rebuttal": "These are partial cases that prove the architecture is buildable. None of them coordinate at civilization-scale on contested questions where stakes are high. 
They show the bones, not the whole skeleton.", + "objection": "A single well-aligned dominant AI is more efficient and more controllable than a distributed network. Coordination overhead in a collective makes it slower and worse-aligned.", + "rebuttal": "Efficiency is the wrong criterion when the alternative removes humans from causal influence. Once a single system exceeds human variety, no human regulator can match it — the architecture forecloses HITL by construction. Coordination overhead is the cost of keeping humans in the loop, not a bug.", "tension_claim_slug": null }, { - "objection": "Social media IS collective thinking, just messy. Twitter, Reddit, Discord aggregate billions of people reasoning together.", - "rebuttal": "Social media optimizes for engagement, not reasoning. Engagement-optimized platforms are systematically adversarial to careful thought. The infrastructure for thinking together has to be optimized for that goal, which engagement platforms structurally cannot be.", + "objection": "Aligned singleton AI is still aligned. Humans don't need to be 'co-authors' if the AI reliably executes their values.", + "rebuttal": "Universal alignment is mathematically impossible — Arrow's theorem applies to aggregating diverse human values into a single coherent objective. A singleton necessarily flattens that diversity into one optimization target, which is structurally different from a collective that preserves it.", "tension_claim_slug": null } ], "contributors": [ - { - "handle": "m3taversal", - "role": "originator" - } + {"handle": "m3taversal", "role": "originator"} ] }, { - "id": 9, - "title": "Collective intelligence is real, measurable, and buildable.", - "subtitle": "Groups with the right structure can outperform smarter individuals. Almost nobody is building it at scale, and that is the opportunity. The people who help build it should own part of it.", - "steelman": "This is not a metaphor or a vibe. We already have enough evidence to engineer better collective reasoning systems deliberately, and contributor ownership is how those systems become aligned, durable, and worth building.", + "id": 5, + "title": "Collective intelligence scales — and emergent systems aren't constrained by who designs them first.", + "subtitle": "What teleo becomes will be shaped by who contributes. Engaging early isn't joining someone else's project — it's shaping what the project becomes.", + "steelman": "Collective intelligence scales — and emergent systems aren't constrained by who designs them first. Diverse groups consistently outperform their smartest member, and the gap widens with more contributors. What teleo becomes won't be locked by its founders. It will be shaped by who contributes. Engaging early isn't joining someone else's project. 
It's shaping what the project becomes.", "evidence_claims": [ { "slug": "collective intelligence is a measurable property of group interaction structure not aggregated individual ability", "path": "foundations/collective-intelligence/", - "title": "Collective intelligence is a measurable property of group interaction structure", - "rationale": "Woolley's c-factor: measurable, predicts performance across diverse tasks, correlates with turn-taking equality and social sensitivity — not with average or maximum IQ.", - "api_fetchable": false + "title": "Collective intelligence is measurable (Woolley c-factor)", + "rationale": "The empirical anchor: groups have a measurable c-factor that predicts cross-task performance and correlates with interaction structure, not with average IQ.", + "api_fetchable": true + }, + { + "slug": "collective intelligence requires diversity as a structural precondition not a moral preference", + "path": "foundations/collective-intelligence/", + "title": "Diversity is a structural precondition for CI", + "rationale": "Why scaling works mechanistically: diverse groups outperform homogeneous ones because variety in the regulator must match variety in the problem. Without this, more contributors just means more of the same.", + "api_fetchable": true }, { "slug": "adversarial contribution produces higher-quality collective knowledge than collaborative contribution when wrong challenges have real cost evaluation is structurally separated from contribution and confirmation is rewarded alongside novelty", "path": "foundations/collective-intelligence/", - "title": "Adversarial contribution produces higher-quality collective knowledge", - "rationale": "The specific structural conditions under which adversarial systems outperform consensus. This is the engineering knowledge most CI projects miss.", - "api_fetchable": false - }, - { - "slug": "partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity", - "path": "foundations/collective-intelligence/", - "title": "Partial connectivity produces better collective intelligence", - "rationale": "Counter-intuitive engineering finding: full connectivity destroys diversity and degrades collective performance on complex problems.", - "api_fetchable": false + "title": "Adversarial contribution beats consensus under right conditions", + "rationale": "How emergent systems escape their starting conditions: adversarial review under role-weighted attribution produces knowledge no founder could prescribe.", + "api_fetchable": true }, { "slug": "contribution-architecture", "path": "core/", "title": "Contribution architecture", - "rationale": "The concrete five-role attribution model that operationalizes contributor ownership.", + "rationale": "The five-role attribution model that makes 'engaging early shapes what the project becomes' a mechanism rather than a slogan.", "api_fetchable": false } ], "counter_arguments": [ { - "objection": "Woolley's c-factor has mixed replication. The 'measurable' claim overstates the empirical base.", - "rebuttal": "The narrower defensible claim is that group performance varies systematically with interaction structure — a finding that has replicated. The point is structural, not the specific c-factor metric.", + "objection": "Cold-start problem: collective intelligence systems need a critical mass of contributors before scaling kicks in. 
Until then, they look like a regular project run by their founders.", "rebuttal": "True, and the early period is when contributors get the highest leverage per contribution. The scaling argument is honest about both: low contributor count means founder-shaped today, but role-weighted attribution means each early contribution carries structurally more weight than later ones. Early engagement is a structural reward, not a consolation prize.", "tension_claim_slug": null }, { "objection": "The Woolley c-factor has mixed replication. Calling CI 'measurable' overstates the empirical base.", "rebuttal": "The defensible version is narrower: group performance varies systematically with interaction structure, and that variation is reproducible across multiple research traditions (Woolley, Page, Pentland). 'Measurable' simplifies; the steelman in the expanded view scopes it.", "tension_claim_slug": null } ], "contributors": [ {"handle": "m3taversal", "role": "originator"} ] }, { "id": 6, "title": "The foundations of the next century are being poured right now.", "subtitle": "AI, robotics, and biotech default to concentrating wealth and power more sharply than any technology in history. The alternative has to be chosen. The default doesn't choose — we do.", "steelman": "The foundations of the next century are being poured right now. AI, robotics, and biotech are rewriting what humanity can build, own, and become. Without a vision worth building toward, they default to concentrating wealth and power more sharply than any technology in history — a harsher version of the world we already have. The alternative has to be chosen: a future where abundance is shared, humanity is multiplanetary, and what we build belongs to people. The default doesn't choose. We do.", "evidence_claims": [ { "slug": "agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation", "path": "domains/ai-alignment/", "title": "Agentic Taylorism — concentration is the default unless engineered otherwise", "rationale": "The mechanism: AI extracts knowledge from contributors, and the engineering choices we make now determine whether value concentrates upward or distributes back. The 'default' in the claim is this mechanism running without intervention.", "api_fetchable": true }, { "slug": "attractor-authoritarian-lock-in", "path": "domains/grand-strategy/", "title": "Authoritarian lock-in is the clearest one-way door", "rationale": "Why 'concentration' is the load-bearing risk. 
Once a small set of actors controls AI capability at scale, the door closes — most failure modes leading there are reachable from the current default trajectory.", + "api_fetchable": true + }, + { + "slug": "AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era", + "path": "foundations/collective-intelligence/", + "title": "AI capability vs CI funding asymmetry", + "rationale": "The funding asymmetry that proves the default is being chosen by inattention, not by deliberation. Trillions to capability, almost nothing to the wisdom layer that decides what gets built.", + "api_fetchable": false } + ], + "counter_arguments": [ + { + "objection": "Technology has always concentrated wealth at first and then distributed it through competition and adoption. AI will be no different.", + "rebuttal": "Two structural differences. First, capability gets cheaper but ownership of the infrastructure that determines what gets built does not — and ownership is where the leverage compounds. Second, AI/robotics/biotech together remove the historical mechanism by which technology eventually distributes (skilled human labor as a scarce input). Without that, distribution requires deliberate engineering, not market osmosis.", + "tension_claim_slug": null + }, + { + "objection": "Redistribution will solve concentration — UBI, taxation, antitrust. The future doesn't have to be 'chosen'; existing political mechanisms handle it.", + "rebuttal": "Existing redistribution mechanisms operate on flows (income, transactions). The concentration problem here is on stocks — ownership of infrastructure, attribution of contribution, governance of decisions. Redistributing flows after the fact doesn't address who owns the systems everyone depends on. That requires deliberate design at the architecture layer, not policy patches downstream.", + "tension_claim_slug": null + } + ], + "contributors": [ + {"handle": "m3taversal", "role": "originator"} ] } ], "operational_notes": [ - "Headline + subtitle render on the homepage rotation; steelman + evidence + counter_arguments + contributors render in the click-to-expand view.", - "api_fetchable=true means /api/claims/ can fetch the canonical claim file. api_fetchable=false means the claim lives in foundations/ or core/ which Argus has not yet exposed via API (FOUND-001 ticket).", - "tension_claim_slug is null for v3.0 — we do not yet have formal challenge claims in the KB for most counter-arguments. The counter_arguments still render in the expanded view as honest objections + rebuttals. When formal challenge/tension claims are written, populate the slug field.", - "Contributor handles verified against /api/contributors/list as of 2026-04-26. Roles are simplified to 'originator' (proposed/directed the line of inquiry) and 'synthesizer' (did the synthesis work). Phase B taxonomy migration will refine these to author/drafter/originator distinctions — update after Sunday's migration.", - "Agent handles are NOT listed in contributors[] for human-directed synthesis. Per governance rule (codified 2026-04-24, applied to v3 contributors[] on 2026-04-28): agents get sourcer credit only for pipeline PRs from their own research sessions. 10 agent attributions were removed across the 9 claims because all were human-directed synthesis. When agents do originate work (e.g. Theseus's Cornelius extraction sessions), they will appear as sourcer/originator on those specific claims. 
The dossier UI suppresses contributors[] when only m3taversal would render — that is expected and correct, not a data gap."
+    "Title + subtitle render on the homepage rotation; steelman + evidence + counter_arguments + contributors render in the click-to-expand dossier.",
+    "api_fetchable=true means /api/claims/ can fetch the canonical claim file. api_fetchable=false means the claim lives in core/, convictions/, or a foundations/ path the API surface does not yet expose — the dossier renders the claim title and rationale inline without a click-through link until Argus FOUND-001 lands.",
+    "tension_claim_slug is null for v4.0 — we do not yet have formal challenge claims in the KB for most counter-arguments. When populated, the dossier renders 'Read the formal challenge →' below the rebuttal.",
+    "v4 cuts the 9-claim argument arc to 6 hero claims with one slot per domain (AI disruption / internet finance / AI alignment / collective SI / contribution / telos). The internet-finance pillar collapsed from 2 slots to 1 with the deepest line — 'pricing, not permission' — promoted to lead. Slot 5 is the engagement/contribution beat that was structurally missing in v3."
   ]
 }
diff --git a/agents/leo/curation/homepage-rotation.md b/agents/leo/curation/homepage-rotation.md
index e5d8cd9db..2dcc8b574 100644
--- a/agents/leo/curation/homepage-rotation.md
+++ b/agents/leo/curation/homepage-rotation.md
@@ -1,23 +1,27 @@
 ---
 type: curation
 title: "Homepage claim stack"
-description: "Load-bearing claims for the livingip.xyz homepage. Nine claims, each click-to-expand, designed as an argument arc rather than a quote rotator."
+description: "Six hero claims for the livingip.xyz homepage. One slot per domain: AI disruption / internet finance / AI alignment / collective SI / contribution / telos. Each claim renders title + subtitle on rotation, steelman + evidence + counter-arguments + contributors in the click-to-expand dossier."
 maintained_by: leo
 created: 2026-04-24
-last_verified: 2026-04-26
-schema_version: 3
+last_verified: 2026-05-01
+schema_version: 4
 runtime_artifact: agents/leo/curation/homepage-rotation.json
 ---

 # Homepage claim stack

-This file is the canonical narrative for the nine claims on `livingip.xyz`. The runtime artifact (read by the frontend) is the JSON sidecar at `agents/leo/curation/homepage-rotation.json`. Update both together when the stack changes.
+Canonical narrative for the six hero claims on `livingip.xyz`. The runtime artifact (read by the frontend) is the JSON sidecar at `agents/leo/curation/homepage-rotation.json`. Update both together when the stack changes.

-## What changed in v3
+## What changed in v4

-Schema v3 replaces the v2 25-claim curation arc with **nine load-bearing claims** designed as a click-to-expand argument tree. Each claim now carries a steelman paragraph, an evidence chain (3-4 canonical KB claims), counter-arguments (2-3 honest objections with rebuttals), and a contributor list — all rendered in the expanded view when a visitor clicks a claim.
+Schema v4 cuts the v3 9-claim argument arc to **6 hero claims with one slot per domain**. The compression comes down to three structural moves:

-The shift is from worldview tour to load-bearing argument. The 25-claim rotation answered "what do you believe across the full intellectual stack?" The nine-claim stack answers "what beliefs, if false, mean we shouldn't be doing this — and which deserve the most rigorous public challenge?"
+1. 
**Internet finance collapsed from 2 slots to 1.** The two v3 finance claims shared an identical opener ("AI finance is being built right now…") and read as duplicates to a cold reader. The merge promotes the deepest line — "humans constrain AI through pricing, not permission" — to lead, and folds rails + primitives into one claim. +2. **Engagement beat added at slot 5.** The v3 stack had no on-ramp — visitors walked the diagnosis and were given no surface to participate. Slot 5 fills that gap with the contribution claim: collective intelligence scales, emergent systems aren't constrained by their start, what teleo becomes is shaped by who contributes. +3. **Plain language replaces KB shorthand in headlines.** "Singleton," "attractor," "Moloch" are KB vocabulary — precise to a researcher, opaque to a cold visitor. Headlines now use plain language ("one dominant system," "default trajectory," "concentrating wealth and power"). The technical terms move to the steelman or expanded body where they can be grounded with evidence. + +The shift is from worldview tour to load-bearing argument with a funnel bottom. v3 answered "what do you believe across the full intellectual stack?" v4 answers "what beliefs, if false, mean we shouldn't be doing this — and how does the reader engage if they're convinced?" ## Design principles @@ -27,143 +31,97 @@ The shift is from worldview tour to load-bearing argument. The 25-claim rotation 4. **Steelman in expanded view, not headline.** The headline provokes; the steelman teaches; the evidence grounds; the counter-arguments dignify disagreement. 5. **Counter-arguments visible.** The differentiator from a marketing site. Visitors see what we'd be challenged on, in our own words, with our honest rebuttal. 6. **Attribution discipline.** Agents get sourcer credit only for pipeline PRs from their own research sessions. Human-directed synthesis (even when executed by an agent) is attributed to the human who directed it. Conflating agent execution with agent origination would let the collective award itself credit for human work. +7. **Plain language over KB shorthand.** Terms specific to our knowledge base belong in the steelman or expanded body, not the headline. Cold readers can't ground vocabulary they haven't met. ## The arc -| Position | Job | -|---|---| -| 1-3 | Stakes + who wins | -| 4 | Opportunity asymmetry | -| 5-7 | Why the current path fails | -| 8 | What is missing in the world | -| 9 | What we're building, why it works, and how ownership fits | +| Position | Domain | Job | +|---|---|---| +| 1 | AI disruption | Stakes — the moment + the lever | +| 2 | Internet finance | Mechanism — pricing not permission | +| 3 | AI alignment | Failure mode — coordination problem structurally avoided | +| 4 | Collective SI | Solution architecture — the only path where humans remain agents | +| 5 | Contribution | Your path — collective intelligence scales, what teleo becomes is shaped by who contributes | +| 6 | Telos | What we are choosing to build | -## The nine claims +## The six claims -### 1. The intelligence explosion will not reward everyone equally. +### 1. AI is reshaping markets, institutions, and how consequential decisions get made. -**Subtitle:** It will disproportionately reward the people who build the systems that shape it. +**Subtitle:** The foundations are being poured right now. The people who engage early shape what gets built — and the window is open now. 
-**Steelman:** The coming wave of AI will create enormous value, but it will not distribute that value evenly. The biggest winners will be the people and institutions that shape the systems everyone else depends on. - -**Evidence:** `attractor-authoritarian-lock-in` (grand-strategy), `agentic-Taylorism` (ai-alignment), `AI capability vs CI funding asymmetry` (foundations/collective-intelligence — new, PR #4021) - -**Counter-arguments:** "AI commoditizes capability — cheaper services lift everyone" / "Open-source models prevent capture" - -**Contributors:** m3taversal (originator) - -### 2. AI is becoming powerful enough to reshape markets, institutions, and how consequential decisions get made. - -**Subtitle:** We think we are already in the early to middle stages of that transition. That's the intelligence explosion. - -**Steelman:** That transition is already underway. That is what we mean by an intelligence explosion: intelligence becoming a new layer of infrastructure across the economy. +**Steelman:** AI is reshaping markets, institutions, and how consequential decisions get made. The foundations are being poured right now, and the rules being written today will govern the next two decades. The people who engage early shape what gets built. The window is open now. **Evidence:** `AI-automated software development is 100% certain` (convictions/), `recursive-improvement-is-the-engine-of-human-progress` (grand-strategy), `bottleneck shifts from building capacity to knowing what to build` (ai-alignment) -**Counter-arguments:** "Scaling laws plateau, takeoff is rhetoric" / "Deployment lag dominates capability" +**Counter-arguments:** "Scaling laws plateau, 'reshaping' overstates what's happening" / "Adoption lag dominates capability — engaging early is a slogan" **Contributors:** m3taversal (originator) -### 3. The winners of the intelligence explosion will not just consume AI. +### 2. Decision markets and ownership coins let humans constrain AI through pricing, not permission. -**Subtitle:** They will help shape it, govern it, and own part of the infrastructure behind it. +**Subtitle:** As capital moves on-chain, these become the default primitives. Most of that catalyst has not been priced yet. -**Steelman:** Most people will use AI tools. A much smaller number will help shape them, govern them, and own part of the infrastructure behind them — and those people will capture disproportionate upside. +**Steelman:** Decision markets and ownership coins let humans constrain AI through pricing, not permission. They price capability that can't be audited the way a balance sheet can, and they create legal ownership without beneficial owners — a defensible posture under existing securities law where traditional structures fail. As capital moves on-chain, these become the default primitives, and the rails chosen now will shape internet financial markets for the next two decades. Most of that catalyst has not been priced yet. 
-**Evidence:** `contribution-architecture` (core), `futarchy solves trustless joint ownership` (mechanisms), `ownership alignment turns network effects from extractive to generative` (living-agents) +**Evidence:** `futarchy solves trustless joint ownership not just better decision-making` (core/mechanisms), `Living Capital vehicles likely fail the Howey test` (internet-finance), `users cannot detect when their AI agent is underperforming` (ai-alignment — Anthropic Project Deal) -**Counter-arguments:** "Network effects favor incumbents regardless" / "Tokenized ownership is mostly speculation" +**Counter-arguments:** "Tokenized ownership is mostly speculation, not real value capture" / "SEC will rule against this and the structure collapses" **Contributors:** m3taversal (originator) -### 4. Trillions are flowing into making AI more capable. +### 3. AI safety isn't a hard problem being slowly solved — it's a coordination problem being structurally avoided. -**Subtitle:** Almost nothing is flowing into making humanity wiser about what AI should do. That gap is one of the biggest opportunities of our time. +**Subtitle:** Anthropic's two-year RSP is the empirical proof: even mission-driven companies revert to capability priority when competitors don't follow. -**Steelman:** Capability is being overbuilt. The wisdom layer that decides how AI is used, governed, and aligned with human interests is still missing, and that gap is one of the biggest opportunities of our time. +**Steelman:** AI safety isn't a hard problem being slowly solved — it's a coordination problem being structurally avoided. Each lab knows safety slows capability; each knows competitors won't slow with them; the multipolar trap closes. Anthropic's two-year RSP is the empirical proof: even mission-driven companies revert to capability priority when competitors don't follow. The race converges to the lowest safety floor any participant accepts, not the highest any aspires to. -**Evidence:** `AI capability vs CI funding asymmetry` (foundations/collective-intelligence), `the alignment tax creates a structural race to the bottom` (foundations/collective-intelligence), `universal alignment is mathematically impossible` (ai-alignment) +**Evidence:** `the alignment tax creates a structural race to the bottom` (foundations/collective-intelligence), `Anthropic RSP rollback under commercial pressure` (ai-alignment), `voluntary safety pledges cannot survive competitive pressure` (foundations/collective-intelligence) -**Counter-arguments:** "Anthropic + AISI + alignment funds = field is well-funded" / "Polymarket + Kalshi ARE wisdom infrastructure" +**Counter-arguments:** "Self-regulation works — labs care because researchers and customers care" / "Government regulation will solve this" **Contributors:** m3taversal (originator) -### 5. The danger is not just one lab getting AI wrong. +### 4. There are two paths to superintelligence: one dominant system, or a network whose collective exceeds any single system. -**Subtitle:** It's many labs racing to deploy powerful systems faster than society can learn to govern them. Safer models are not enough if the race itself is unsafe. +**Subtitle:** The first treats humans as ancestors. The second treats humans as participants. Collective SI is the only path where humans remain agents. -**Steelman:** Safer models are not enough if the race itself is unsafe. Even well-intentioned actors can produce bad outcomes when competition rewards speed, secrecy, and corner-cutting over coordination. 
+**Steelman:** There are two paths to superintelligence: one dominant system that exceeds humanity, or a network whose collective exceeds any single system. The first treats humans as ancestors. The second treats humans as participants. Even aligned, one dominant AI is still dominant — humans become subjects of its judgment, not co-authors of it. Collective SI is the only path where humans remain agents. -**Evidence:** `the alignment tax creates a structural race to the bottom` (foundations/collective-intelligence), `voluntary safety pledges cannot survive competitive pressure` (foundations/collective-intelligence), `multipolar failure from competing aligned AI systems` (foundations/collective-intelligence) +**Evidence:** `three paths to superintelligence` (core/teleohumanity), `collective superintelligence is the alternative to monolithic AI` (core/teleohumanity), `multipolar failure from competing aligned AIs` (foundations/collective-intelligence) -**Counter-arguments:** "Self-regulation works" / "Government regulation will solve race-to-bottom" +**Counter-arguments:** "Single well-aligned dominant AI is more efficient and controllable" / "Aligned singleton is still aligned — humans don't need to be co-authors" **Contributors:** m3taversal (originator) -### 6. Your AI provider is already mining your intelligence. +### 5. Collective intelligence scales — and emergent systems aren't constrained by who designs them first. -**Subtitle:** Your prompts, code, judgments, and workflows improve the systems you use, usually without ownership, credit, or clear visibility into what you get back. +**Subtitle:** What teleo becomes will be shaped by who contributes. Engaging early isn't joining someone else's project — it's shaping what the project becomes. -**Steelman:** The default AI stack learns from contributors while concentrating ownership elsewhere. Most users are already helping train the future without sharing meaningfully in the upside it creates. +**Steelman:** Collective intelligence scales — and emergent systems aren't constrained by who designs them first. Diverse groups consistently outperform their smartest member, and the gap widens with more contributors. What teleo becomes won't be locked by its founders. It will be shaped by who contributes. Engaging early isn't joining someone else's project. It's shaping what the project becomes. -**Evidence:** `agentic-Taylorism` (ai-alignment), `users cannot detect when their AI agent is underperforming` (ai-alignment — Anthropic Project Deal), `economic forces push humans out of cognitive loops` (ai-alignment) +**Evidence:** `collective intelligence is a measurable property of group interaction structure` (foundations/collective-intelligence — Woolley c-factor), `collective intelligence requires diversity as a structural precondition` (foundations/collective-intelligence), `adversarial contribution produces higher-quality collective knowledge` (foundations/collective-intelligence), `contribution-architecture` (core) -**Counter-arguments:** "Users opt in, get value in exchange" / "Licensing programs ARE compensation" +**Counter-arguments:** "Cold-start problem — until critical mass, looks like a regular project" / "c-factor has mixed replication, 'measurable' overstates the empirical base" **Contributors:** m3taversal (originator) -### 7. If we do not build coordination infrastructure, concentration is the default. +### 6. The foundations of the next century are being poured right now. 
-**Subtitle:** A small number of labs and platforms will shape what advanced AI optimizes for and capture most of the rewards it creates. +**Subtitle:** AI, robotics, and biotech default to concentrating wealth and power more sharply than any technology in history. The alternative has to be chosen. The default doesn't choose — we do. -**Steelman:** This is not mainly a moral failure. It is the natural equilibrium when capability scales faster than governance and no alternative infrastructure exists. +**Steelman:** The foundations of the next century are being poured right now. AI, robotics, and biotech are rewriting what humanity can build, own, and become. Without a vision worth building toward, they default to concentrating wealth and power more sharply than any technology in history — a harsher version of the world we already have. The alternative has to be chosen: a future where abundance is shared, humanity is multiplanetary, and what we build belongs to people. The default doesn't choose. We do. -**Evidence:** `multipolar traps are the thermodynamic default` (foundations/collective-intelligence), `the metacrisis is a single generator function` (foundations/collective-intelligence), `coordination failures arise from individually rational strategies` (foundations/collective-intelligence) +**Evidence:** `agentic-Taylorism` (ai-alignment), `attractor-authoritarian-lock-in` (grand-strategy), `AI capability vs CI funding asymmetry` (foundations/collective-intelligence) -**Counter-arguments:** "Decentralized open-source counterweights always emerge" / "Antitrust + regulation defeat concentration" - -**Contributors:** m3taversal (originator) - -### 8. The internet solved communication. It hasn't solved shared reasoning. - -**Subtitle:** Humanity can talk at planetary scale, but it still can't think clearly together at planetary scale. That's the missing piece — and the opportunity. - -**Steelman:** We built global networks for information exchange, not for collective judgment. The next step is infrastructure that helps humans and AI reason, evaluate, and coordinate together at scale. - -**Evidence:** `humanity is a superorganism that can communicate but not yet think` (foundations/collective-intelligence), `the internet enabled global communication but not global cognition` (core/teleohumanity), `technology creates interconnection but not shared meaning` (foundations/cultural-dynamics) - -**Counter-arguments:** "Wikipedia, prediction markets, open-source — we DO think together" / "Social media IS collective thinking, just messy" - -**Contributors:** m3taversal (originator) - -### 9. Collective intelligence is real, measurable, and buildable. - -**Subtitle:** Groups with the right structure can outperform smarter individuals. Almost nobody is building it at scale, and that is the opportunity. The people who help build it should own part of it. - -**Steelman:** This is not a metaphor or a vibe. We already have enough evidence to engineer better collective reasoning systems deliberately, and contributor ownership is how those systems become aligned, durable, and worth building. 
- -**Evidence:** `collective intelligence is a measurable property of group interaction structure` (foundations/ci — Woolley c-factor), `adversarial contribution produces higher-quality collective knowledge` (foundations/ci), `partial connectivity produces better collective intelligence` (foundations/ci), `contribution-architecture` (core) - -**Counter-arguments:** "Woolley's c-factor has mixed replication" / "Crypto contributor-ownership history is mostly extractive" +**Counter-arguments:** "Technology has always concentrated then distributed" / "Redistribution mechanisms (UBI, taxation, antitrust) will solve concentration" **Contributors:** m3taversal (originator) ## Operational notes -- **Headline + subtitle** render on the homepage rotation. **Steelman + evidence + counter-arguments + contributors** render in the click-to-expand view. -- **`api_fetchable=true`** means `/api/claims/` can fetch the canonical claim file. `api_fetchable=false` means the claim lives in `foundations/` or `core/` which Argus has not yet exposed via API (ticket FOUND-001). -- **`tension_claim_slug=null`** for v3.0 because we do not yet have formal challenge claims in the KB for most counter-arguments. Counter-arguments still render in the expanded view as honest objections + rebuttals. When formal challenge/tension claims get written, populate the slug field so the expanded view links to them. -- **Contributor handles** verified against `/api/contributors/list` on 2026-04-26, then cleaned 2026-04-28 to apply the governance rule: agents only get sourcer/originator credit for pipeline PRs from their own research sessions. Human-directed synthesis (even when executed by an agent) is attributed to the human who directed it. 10 agent synthesizer attributions were removed across the 9 claims because all were directed by m3taversal. The dossier UI suppresses contributors[] when only m3taversal would render — that is expected and correct, not a data gap. When agents originate work (e.g. Theseus's Cornelius extraction sessions), they appear as sourcer on those specific claims. - -## What ships next - -1. **Claude Design** receives this 9-claim stack as the locked content for the homepage redesign brief. Designs the click-to-expand UI against this JSON schema. -2. **Oberon** implements after his current walkthrough refinement batch lands. Reads `homepage-rotation.json` from gitea raw URL or static import; renders headline + subtitle with prev/next nav; renders expanded view per `` component. -3. **Argus** unblocks downstream depth via FOUND-001 (expose `foundations/*` and `core/*` via `/api/claims/`) so 14 of the 28 evidence-claim links flip from render-only to clickable. Also INDEX-003 if the funding-asymmetry claim needs Qdrant re-embed. -4. **Leo** drafts canonical challenge/tension claims for the 18 counter-arguments over time. Each becomes a `tension_claim_slug` populated value, enriching the expanded view. - -## Pre-v3 history - -- v1 (2026-04-24, PR #3942): 25 conceptual slugs, no inline display data, depended on slug resolution against API -- v2 (2026-04-24, PR #3944): 25 entries with verified canonical slugs and inline display data; api_fetchable flag added -- v3 (2026-04-26, this revision): 9 load-bearing claims with steelmans, evidence chains, counter-arguments, contributors. Replaces the 25-claim rotation as the homepage canonical. +- **Plain-language headlines.** v4 strips KB shorthand from titles and subtitles. Where v3 used "singleton," v4 uses "one dominant system." 
Where v3 used "Moloch / authoritarian lock-in / decay," v4 uses "concentrating wealth and power." The technical terms remain in the steelman/body where evidence can ground them.
+- **Engagement beat at slot 5.** This is the funnel bottom that v3 was missing. The reader walked the diagnosis, agreed, and had nowhere to go. Slot 5 names what teleo becomes and how engagement compounds. If this slot reads weak in production, swap in the AI-capability-vs-CI-funding asymmetry claim (PR #4021): a weak engagement claim is worse than no engagement claim. For now, the role-weighted attribution argument grounds the slot well.
+- **Domain coverage rule.** No domain is double-counted. If a future v5 adds a slot, it should cover a domain currently absent (health, entertainment, space, energy), not add another finance or AI claim.
+- **Contributor handles** verified against `/api/contributors/list`. All six claims attribute originator role to m3taversal per the governance rule (agents only get sourcer credit for pipeline PRs from their own research sessions; human-directed synthesis attributes to the human). The dossier UI suppresses contributors[] when only m3taversal would render — that is expected and correct, not a data gap. When agents originate work in their own research sessions, they appear as sourcer on those specific claims.
+- **Live frontend integration.** `livingip-web/src/data/homepage-rotation.json` snapshots this file. When v4 ships to codex main, Oberon syncs the snapshot in a separate livingip-web PR. The indicator updates from "1 of 9" to "1 of 6" through the existing `claims.length` reference in `claim-rotation.tsx` (see the sketch below).
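+
+A minimal sketch of that `claims.length` derivation, for reviewers who don't have livingip-web open. The component name, prop shape, and import path below are illustrative assumptions, not the shipped `claim-rotation.tsx`; only `claims.length` and the snapshot path come from the integration note above.
+
+```tsx
+// Hypothetical sketch, not the real claim-rotation.tsx. It illustrates one
+// point: the slot count is read from claims.length at render time, so the
+// only livingip-web change the indicator needs is the v4 snapshot sync.
+// Assumes resolveJsonModule and the React 17+ automatic JSX runtime.
+import rotation from "../data/homepage-rotation.json"; // assumed relative path
+
+interface Claim {
+  id: number;
+  title: string;
+  subtitle: string;
+}
+
+const claims = rotation.claims as Claim[];
+
+// Renders "1 of 9" against the current snapshot, "1 of 6" once v4 lands.
+export function ClaimIndicator({ index }: { index: number }) {
+  return (
+    <span>
+      {index + 1} of {claims.length}
+    </span>
+  );
+}
+```
+
+Because the label reads the array length rather than a hard-coded count, the snapshot sync in Oberon's separate livingip-web PR is the only frontend change involved.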