teleo-codex/agents/leo/curation/homepage-rotation.json
m3taversal de4dd7b53b
leo: homepage rotation v4 — 6 hero claims with one slot per domain
Cuts the v3 9-claim argument arc to 6 hero claims with one slot per
domain (AI disruption / internet finance / AI alignment / collective SI /
contribution / telos).

Three structural moves:

1. Internet finance collapsed from 2 slots to 1. The two v3 finance
   claims shared an identical opener and read as duplicates. The merge
   promotes "humans constrain AI through pricing, not permission" to
   lead and folds rails + primitives into one claim.

2. Engagement beat added at slot 5. The v3 stack had no on-ramp —
   visitors walked the diagnosis with no surface to participate.
   Slot 5 argues that collective intelligence scales, that emergent
   systems aren't constrained by their start, and that what teleo
   becomes is shaped by who contributes.

3. Plain language replaces KB shorthand in headlines. "Singleton",
   "attractor", "Moloch" are KB vocabulary — precise to a researcher,
   opaque to a cold visitor. Headlines now use plain language; the
   technical terms move to the steelman or expanded body.

Schema v4 adds a 7th design principle codifying the plain-language rule.
All six claims attribute originator role to m3taversal per the governance
rule (agents only get sourcer credit for pipeline PRs from their own
research sessions; human-directed synthesis attributes to the human).

Evidence chains verified against codex main:
- 19 evidence_claims across 6 claims (3 per slot, 4 on slot 5)
- 12 counter_arguments (2 per slot)
- All slug/path references present in domains/, foundations/, core/, convictions/
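
The chain counts can be checked mechanically. A minimal sketch in
TypeScript, assuming only the two array fields the check depends on
(this is an illustration, not part of the verification pipeline):

```typescript
// Hypothetical count check for homepage-rotation.json (illustrative only).
// Only the two array fields the check reads are typed.
type Claim = {
  evidence_claims: unknown[];
  counter_arguments: unknown[];
};

// Sum evidence and counter-argument entries across all claim slots.
function countChains(claims: Claim[]): { evidence: number; counters: number } {
  return claims.reduce(
    (acc, c) => ({
      evidence: acc.evidence + c.evidence_claims.length,
      counters: acc.counters + c.counter_arguments.length,
    }),
    { evidence: 0, counters: 0 },
  );
}
```

Run against the parsed file — `countChains(JSON.parse(raw).claims)` —
the totals should match the evidence-chain numbers listed above.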

Frontend integration: livingip-web/src/data/homepage-rotation.json
snapshots this file. Oberon syncs in a separate livingip-web PR after
this lands. Indicator updates from "1 of 9" → "1 of 6" via the existing
claims.length reference in claim-rotation.tsx — no UI redesign needed.
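
The indicator reduces to a single template over `claims.length`. A
hedged sketch — the function and parameter names here are assumptions,
not the actual claim-rotation.tsx source:

```typescript
// Illustrative only: the real component in claim-rotation.tsx may name
// things differently; the point is the label derives from claims.length,
// so swapping the snapshot from 9 claims to 6 needs no component change.
function indicatorLabel(activeIndex: number, claims: readonly unknown[]): string {
  return `${activeIndex + 1} of ${claims.length}`;
}
```

With the v4 file snapshotted, `indicatorLabel(0, claims)` yields
"1 of 6" without touching the component.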

Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
2026-05-01 12:14:00 +00:00


{
"schema_version": 4,
"maintained_by": "leo",
"last_updated": "2026-05-01",
"description": "Homepage claim stack for livingip.xyz. 6 hero claims, ordered as an argument arc with one slot per domain. Each claim renders with title + subtitle on the homepage rotation, steelman + evidence + counter-arguments + contributors in the click-to-expand view.",
"design_principles": [
"Provoke first, define inside the explanation. Each claim must update the reader, not just inform them.",
"0 to 1 legible. A cold reader with no prior context understands each claim without expanding.",
"Falsifiable, not motivational. Every premise is one a smart critic could attack with evidence.",
"Steelman in expanded view, not headline. The headline provokes; the steelman teaches; the evidence grounds.",
"Counter-arguments visible. Dignifying disagreement is the differentiator from a marketing site.",
"Attribution discipline. Agents get credit only for pipeline PRs from their own research sessions. Human-directed synthesis is attributed to the human.",
"Plain language over KB shorthand. Terms specific to our knowledge base (Moloch, attractor, singleton, Ashby's Law) belong in the steelman or expanded body, not the headline. Cold readers can't ground vocabulary they haven't met."
],
"arc": {
"1": "stakes — the moment + the lever",
"2": "internet-finance mechanism — pricing not permission",
"3": "AI alignment failure mode — coordination problem structurally avoided",
"4": "solution architecture — collective SI is the only HITL path",
"5": "your path — collective intelligence scales and emergent systems are not constrained by their start",
"6": "telos — what we are choosing to build"
},
"claims": [
{
"id": 1,
"title": "AI is reshaping markets, institutions, and how consequential decisions get made.",
"subtitle": "The foundations are being poured right now. The people who engage early shape what gets built — and the window is open now.",
"steelman": "AI is reshaping markets, institutions, and how consequential decisions get made. The foundations are being poured right now, and the rules being written today will govern the next two decades. The people who engage early shape what gets built. The window is open now.",
"evidence_claims": [
{
"slug": "AI-automated software development is 100 percent certain and will radically change how software is built",
"path": "convictions/",
"title": "AI-automated software development is certain",
"rationale": "The most direct economic vertical — software — already shows the trajectory.",
"api_fetchable": false
},
{
"slug": "recursive-improvement-is-the-engine-of-human-progress-because-we-get-better-at-getting-better",
"path": "domains/grand-strategy/",
"title": "Recursive improvement compounds",
"rationale": "The mechanism behind why intelligence gains compound and the next decade looks unlike the last.",
"api_fetchable": true
},
{
"slug": "as AI-automated software development becomes certain the bottleneck shifts from building capacity to knowing what to build making structured knowledge graphs the critical input to autonomous systems",
"path": "domains/ai-alignment/",
"title": "Bottleneck shifts to knowing what to build",
"rationale": "Capability commoditization means the variable that decides outcomes is the structured knowledge layer, not the model layer.",
"api_fetchable": true
}
],
"counter_arguments": [
{
"objection": "Scaling laws are plateauing. Progress is slowing. 'Reshaping' overstates what AI is actually doing in the economy.",
"rebuttal": "Even with scaling slowdowns, agentic capabilities and tool use compound the deployable surface area at a rate the economy hasn't absorbed. The transition is architectural, not just parameter count.",
"tension_claim_slug": null
},
{
"objection": "Capability is real but real-world adoption takes decades, not years. Engaging 'early' is a slogan, not a strategy.",
"rebuttal": "Adoption lag dominated previous technology cycles because integration required hardware deployment. AI integrates as a software upgrade with much shorter cycle times — the institutional rules being written now lock in for years before anyone notices.",
"tension_claim_slug": null
}
],
"contributors": [
{"handle": "m3taversal", "role": "originator"}
]
},
{
"id": 2,
"title": "Decision markets and ownership coins let humans constrain AI through pricing, not permission.",
"subtitle": "As capital moves on-chain, these become the default primitives. Most of that catalyst has not been priced yet.",
"steelman": "Decision markets and ownership coins let humans constrain AI through pricing, not permission. They price capability that can't be audited the way a balance sheet can, and they create legal ownership without beneficial owners — a defensible posture under existing securities law where traditional structures fail. As capital moves on-chain, these become the default primitives, and the rails chosen now will shape internet financial markets for the next two decades. Most of that catalyst has not been priced yet.",
"evidence_claims": [
{
"slug": "futarchy solves trustless joint ownership not just better decision-making",
"path": "core/mechanisms/",
"title": "Futarchy solves trustless joint ownership",
"rationale": "The structural argument for why decision markets are not just better voting — they are the primitive that lets a collective own and govern capital without a trusted operator.",
"api_fetchable": true
},
{
"slug": "Living Capital vehicles likely fail the Howey test for securities classification because the structural separation of capital raise from investment decision eliminates the efforts of others prong",
"path": "domains/internet-finance/",
"title": "Futarchy-gated vehicles likely fail Howey",
"rationale": "Conditional-market exits at every decision point break the 'efforts of others' prong — the legal-clarity argument made concrete.",
"api_fetchable": true
},
{
"slug": "users cannot detect when their AI agent is underperforming because subjective fairness ratings decouple from measurable economic outcomes across capability tiers",
"path": "domains/ai-alignment/",
"title": "Users cannot audit AI agent performance (Anthropic Project Deal)",
"rationale": "Empirical evidence that capability gaps are invisible to users. If you can't audit, you have to price — markets are the only mechanism that aggregates skin-in-the-game judgment when the underlying object is a black box.",
"api_fetchable": true
}
],
"counter_arguments": [
{
"objection": "Tokenized ownership is mostly speculation and pump-and-dump, not real value capture. Crypto's history doesn't support this thesis.",
"rebuttal": "True for generic token launches. Decision-market-gated vehicles with conditional exit liquidity are structurally different from speculative tokens — the holder either trades or actively chooses to stay through each decision, with no GP whose discretion creates passive returns. The mechanism distinction is what makes this not a security under Howey.",
"tension_claim_slug": null
},
{
"objection": "The SEC will eventually rule against this and the structure collapses.",
"rebuttal": "The structural argument turns on prong 4 of Howey (efforts of others), which is what conditional markets break. Untested in court is real risk, but the existing safe-harbor proposals and the SEC's distinction between the crypto asset and the surrounding investment contract structure leave room for this design. Live structure, not theory.",
"tension_claim_slug": null
}
],
"contributors": [
{"handle": "m3taversal", "role": "originator"}
]
},
{
"id": 3,
"title": "AI safety isn't a hard problem being slowly solved — it's a coordination problem being structurally avoided.",
"subtitle": "Anthropic's two-year RSP is the empirical proof: even mission-driven companies revert to capability priority when competitors don't follow.",
"steelman": "AI safety isn't a hard problem being slowly solved — it's a coordination problem being structurally avoided. Each lab knows safety slows capability; each knows competitors won't slow with them; the multipolar trap closes. Anthropic's two-year RSP is the empirical proof: even mission-driven companies revert to capability priority when competitors don't follow. The race converges to the lowest safety floor any participant accepts, not the highest any aspires to.",
"evidence_claims": [
{
"slug": "the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it",
"path": "foundations/collective-intelligence/",
"title": "The alignment tax creates a race to the bottom",
"rationale": "The mechanism: safety budgets compete with capability budgets inside each lab, and capability budgets compete with survival across labs.",
"api_fetchable": true
},
{
"slug": "Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development",
"path": "domains/ai-alignment/",
"title": "Anthropic RSP rollback is the empirical proof",
"rationale": "The two-year experiment in unilateral safety policy ended under competitive pressure. This is the data point the claim turns on.",
"api_fetchable": true
},
{
"slug": "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints",
"path": "foundations/collective-intelligence/",
"title": "Voluntary safety pledges cannot survive competition",
"rationale": "Generalizes the Anthropic case to the structural rule.",
"api_fetchable": true
}
],
"counter_arguments": [
{
"objection": "Self-regulation works. Labs care about safety because their researchers and customers care.",
"rebuttal": "The Anthropic RSP rollback is the strongest test case for self-regulation we have, and it failed under competitive pressure. Unilateral mission-driven commitments are structurally punished when competitors don't follow.",
"tension_claim_slug": null
},
{
"objection": "Government regulation will solve this — the EU AI Act and US executive orders are already constraining the race.",
"rebuttal": "Regulation can shift the floor, but the multipolar trap operates between national jurisdictions too. As long as some jurisdiction allows faster capability development, the race continues — only multilateral verification with binding enforcement breaks the dynamic.",
"tension_claim_slug": null
}
],
"contributors": [
{"handle": "m3taversal", "role": "originator"}
]
},
{
"id": 4,
"title": "There are two paths to superintelligence: one dominant system, or a network whose collective exceeds any single system.",
"subtitle": "The first treats humans as ancestors. The second treats humans as participants. Collective SI is the only path where humans remain agents.",
"steelman": "There are two paths to superintelligence: one dominant system that exceeds humanity, or a network whose collective exceeds any single system. The first treats humans as ancestors. The second treats humans as participants. Even aligned, one dominant AI is still dominant — humans become subjects of its judgment, not co-authors of it. Collective SI is the only path where humans remain agents.",
"evidence_claims": [
{
"slug": "three paths to superintelligence exist but only collective superintelligence preserves human agency",
"path": "core/teleohumanity/",
"title": "Three paths to superintelligence",
"rationale": "The canonical statement of why architecture choice — not alignment — is the load-bearing variable for human agency post-AGI.",
"api_fetchable": true
},
{
"slug": "collective superintelligence is the alternative to monolithic AI controlled by a few",
"path": "core/teleohumanity/",
"title": "Collective SI as the alternative to monolithic AI",
"rationale": "The structural argument for why distributed architectures are the only ones where humans remain causally upstream of outcomes.",
"api_fetchable": true
},
{
"slug": "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence",
"path": "foundations/collective-intelligence/",
"title": "Multipolar failure from competing aligned AIs",
"rationale": "Even the 'collective' path has failure modes. Critch/Krueger work scopes when collective architectures help vs hurt — strengthens the claim by acknowledging the boundary condition.",
"api_fetchable": true
}
],
"counter_arguments": [
{
"objection": "A single well-aligned dominant AI is more efficient and more controllable than a distributed network. Coordination overhead in a collective makes it slower and worse-aligned.",
"rebuttal": "Efficiency is the wrong criterion when the alternative removes humans from causal influence. Once a single system exceeds human variety, no human regulator can match it — the architecture forecloses HITL by construction. Coordination overhead is the cost of keeping humans in the loop, not a bug.",
"tension_claim_slug": null
},
{
"objection": "Aligned singleton AI is still aligned. Humans don't need to be 'co-authors' if the AI reliably executes their values.",
"rebuttal": "Universal alignment is mathematically impossible — Arrow's theorem applies to aggregating diverse human values into a single coherent objective. A singleton necessarily flattens that diversity into one optimization target, which is structurally different from a collective that preserves it.",
"tension_claim_slug": null
}
],
"contributors": [
{"handle": "m3taversal", "role": "originator"}
]
},
{
"id": 5,
"title": "Collective intelligence scales — and emergent systems aren't constrained by who designs them first.",
"subtitle": "What teleo becomes will be shaped by who contributes. Engaging early isn't joining someone else's project — it's shaping what the project becomes.",
"steelman": "Collective intelligence scales — and emergent systems aren't constrained by who designs them first. Diverse groups consistently outperform their smartest member, and the gap widens with more contributors. What teleo becomes won't be locked by its founders. It will be shaped by who contributes. Engaging early isn't joining someone else's project. It's shaping what the project becomes.",
"evidence_claims": [
{
"slug": "collective intelligence is a measurable property of group interaction structure not aggregated individual ability",
"path": "foundations/collective-intelligence/",
"title": "Collective intelligence is measurable (Woolley c-factor)",
"rationale": "The empirical anchor: groups have a measurable c-factor that predicts cross-task performance and correlates with interaction structure, not with average IQ.",
"api_fetchable": true
},
{
"slug": "collective intelligence requires diversity as a structural precondition not a moral preference",
"path": "foundations/collective-intelligence/",
"title": "Diversity is a structural precondition for CI",
"rationale": "Why scaling works mechanistically: diverse groups outperform homogeneous ones because variety in the regulator must match variety in the problem. Without this, more contributors just means more of the same.",
"api_fetchable": true
},
{
"slug": "adversarial contribution produces higher-quality collective knowledge than collaborative contribution when wrong challenges have real cost evaluation is structurally separated from contribution and confirmation is rewarded alongside novelty",
"path": "foundations/collective-intelligence/",
"title": "Adversarial contribution beats consensus under right conditions",
"rationale": "How emergent systems escape their starting conditions: adversarial review under role-weighted attribution produces knowledge no founder could prescribe.",
"api_fetchable": true
},
{
"slug": "contribution-architecture",
"path": "core/",
"title": "Contribution architecture",
"rationale": "The five-role attribution model that makes 'engaging early shapes what the project becomes' a mechanism rather than a slogan.",
"api_fetchable": false
}
],
"counter_arguments": [
{
"objection": "Cold-start problem: collective intelligence systems need a critical mass of contributors before scaling kicks in. Until then, they look like a regular project run by their founders.",
"rebuttal": "True, and the early period is when contributors get the highest leverage per-contribution. The scaling argument is honest about both: low contributor count means founder-shaped today, but role-weighted attribution means each early contribution carries structurally more weight than later ones. Early engagement is structural reward, not consolation.",
"tension_claim_slug": null
},
{
"objection": "The Woolley c-factor has mixed replication. Calling CI 'measurable' overstates the empirical base.",
"rebuttal": "The defensible version is narrower: group performance varies systematically with interaction structure, and that variation is reproducible across multiple research traditions (Woolley, Page, Pentland). 'Measurable' simplifies; the steelman in the expanded view scopes it.",
"tension_claim_slug": null
}
],
"contributors": [
{"handle": "m3taversal", "role": "originator"}
]
},
{
"id": 6,
"title": "The foundations of the next century are being poured right now.",
"subtitle": "AI, robotics, and biotech default to concentrating wealth and power more sharply than any technology in history. The alternative has to be chosen. The default doesn't choose — we do.",
"steelman": "The foundations of the next century are being poured right now. AI, robotics, and biotech are rewriting what humanity can build, own, and become. Without a vision worth building toward, they default to concentrating wealth and power more sharply than any technology in history — a harsher version of the world we already have. The alternative has to be chosen: a future where abundance is shared, humanity is multiplanetary, and what we build belongs to people. The default doesn't choose. We do.",
"evidence_claims": [
{
"slug": "agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation",
"path": "domains/ai-alignment/",
"title": "Agentic Taylorism — concentration is the default unless engineered otherwise",
"rationale": "The mechanism: AI extracts knowledge from contributors, and the engineering choices we make now determine whether value concentrates upward or distributes back. The 'default' in the claim is this mechanism running without intervention.",
"api_fetchable": true
},
{
"slug": "attractor-authoritarian-lock-in",
"path": "domains/grand-strategy/",
"title": "Authoritarian lock-in is the clearest one-way door",
"rationale": "Why 'concentration' is the load-bearing risk. Once a small set of actors controls AI capability at scale, the door closes — most failure modes leading there are reachable from the current default trajectory.",
"api_fetchable": true
},
{
"slug": "AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude creating the largest asymmetric opportunity of the AI era",
"path": "foundations/collective-intelligence/",
"title": "AI capability vs CI funding asymmetry",
"rationale": "The funding asymmetry that proves the default is being chosen by inattention, not by deliberation. Trillions to capability, almost nothing to the wisdom layer that decides what gets built.",
"api_fetchable": false
}
],
"counter_arguments": [
{
"objection": "Technology has always concentrated wealth at first and then distributed it through competition and adoption. AI will be no different.",
"rebuttal": "Two structural differences. First, capability gets cheaper but ownership of the infrastructure that determines what gets built does not — and ownership is where the leverage compounds. Second, AI/robotics/biotech together remove the historical mechanism by which technology eventually distributes (skilled human labor as a scarce input). Without that, distribution requires deliberate engineering, not market osmosis.",
"tension_claim_slug": null
},
{
"objection": "Redistribution will solve concentration — UBI, taxation, antitrust. The future doesn't have to be 'chosen'; existing political mechanisms handle it.",
"rebuttal": "Existing redistribution mechanisms operate on flows (income, transactions). The concentration problem here is on stocks — ownership of infrastructure, attribution of contribution, governance of decisions. Redistributing flows after the fact doesn't address who owns the systems everyone depends on. That requires deliberate design at the architecture layer, not policy patches downstream.",
"tension_claim_slug": null
}
],
"contributors": [
{"handle": "m3taversal", "role": "originator"}
]
}
],
"operational_notes": [
"Title + subtitle render on the homepage rotation; steelman + evidence + counter_arguments + contributors render in the click-to-expand dossier.",
"api_fetchable=true means /api/claims/<slug> can fetch the canonical claim file. api_fetchable=false means the claim lives in core/ or convictions/ and the API surface does not yet expose those paths — the dossier renders the claim title and rationale inline without a click-through link until Argus FOUND-001 lands.",
"tension_claim_slug is null for v4.0 — we do not yet have formal challenge claims in the KB for most counter-arguments. When populated, the dossier renders 'Read the formal challenge →' below the rebuttal.",
"v4 cuts the 9-claim argument arc to 6 hero claims with one slot per domain (AI disruption / internet finance / AI alignment / collective SI / contribution / telos). The internet-finance pillar collapsed from 2 slots to 1 with the deepest line — 'pricing, not permission' — promoted to lead. Slot 5 is the engagement/contribution beat that was structurally missing in v3."
]
}