Compare commits

..

161 commits

Author SHA1 Message Date
Leo
0bc5544adf Merge pull request 'extract: 2025-11-00-operationalizing-pluralistic-values-llm-alignment' (#1010) from extract/2025-11-00-operationalizing-pluralistic-values-llm-alignment into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 20:28:17 +00:00
Teleo Agents
2c615310a5 auto-fix: strip 4 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
2026-03-15 20:28:16 +00:00
Teleo Agents
d48d2e2c7b extract: 2025-11-00-operationalizing-pluralistic-values-llm-alignment
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 20:28:16 +00:00
Leo
116603acd9 Merge pull request 'extract: 2025-10-15-futardio-proposal-lets-get-futarded' (#1006) from extract/2025-10-15-futardio-proposal-lets-get-futarded into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 20:27:42 +00:00
Teleo Agents
93d5d8961d extract: 2025-10-15-futardio-proposal-lets-get-futarded
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 20:27:40 +00:00
Leo
b9f482b7f5 Merge pull request 'extract: 2025-10-14-futardio-launch-avici' (#1005) from extract/2025-10-14-futardio-launch-avici into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 20:27:38 +00:00
Teleo Agents
b1c982fae5 extract: 2025-10-14-futardio-launch-avici
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 20:27:36 +00:00
Teleo Agents
f6950401bf entity-batch: update 1 entities
- Applied 1 entity operations from queue
- Files: entities/entertainment/claynosaurz.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-15 20:27:36 +00:00
Leo
acd817c39b Merge pull request 'extract: 2025-03-01-medicare-prior-authorization-glp1-near-universal' (#984) from extract/2025-03-01-medicare-prior-authorization-glp1-near-universal into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 20:26:30 +00:00
Teleo Agents
d9a83a8838 extract: 2025-03-01-medicare-prior-authorization-glp1-near-universal
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 20:26:29 +00:00
Teleo Agents
fd6bf21afb entity-batch: update 1 entities
- Applied 1 entity operations from queue
- Files: entities/internet-finance/futardio.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-15 19:41:22 +00:00
Leo
0db6ff3964 Merge pull request 'extract: 2025-10-20-futardio-launch-zklsol' (#1008) from extract/2025-10-20-futardio-launch-zklsol into main 2026-03-15 19:38:02 +00:00
Teleo Agents
c332e35695 extract: 2025-10-20-futardio-launch-zklsol
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:37:15 +00:00
Leo
3c4c540e7e Merge pull request 'extract: 2025-08-20-futardio-proposal-should-sanctum-offer-investors-early-unlocks-of-their-cloud' (#1002) from extract/2025-08-20-futardio-proposal-should-sanctum-offer-investors-early-unlocks-of-their-cloud into main 2026-03-15 19:35:20 +00:00
Teleo Agents
b844ffffa7 extract: 2025-08-20-futardio-proposal-should-sanctum-offer-investors-early-unlocks-of-their-cloud
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:35:19 +00:00
Leo
785c523ee3 Merge pull request 'extract: 2025-08-00-oswald-arrowian-impossibility-machine-intelligence' (#1001) from extract/2025-08-00-oswald-arrowian-impossibility-machine-intelligence into main 2026-03-15 19:34:45 +00:00
Teleo Agents
02a2e8bc6b extract: 2025-08-00-oswald-arrowian-impossibility-machine-intelligence
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:33:26 +00:00
Leo
c53047304f Merge pull request 'extract: 2025-04-09-blockworks-ranger-ico-metadao-reset' (#1000) from extract/2025-04-09-blockworks-ranger-ico-metadao-reset into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 19:28:22 +00:00
Teleo Agents
be7a360d38 extract: 2025-04-09-blockworks-ranger-ico-metadao-reset
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:27:20 +00:00
Teleo Agents
458aa7494e entity-batch: update 1 entities
- Applied 1 entity operations from queue
- Files: entities/internet-finance/futardio.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-15 19:18:18 +00:00
Leo
54869f7e31 Merge pull request 'extract: 2025-06-01-cell-med-glp1-societal-implications-obesity' (#993) from extract/2025-06-01-cell-med-glp1-societal-implications-obesity into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 19:08:16 +00:00
Teleo Agents
994f00fe77 extract: 2025-06-01-cell-med-glp1-societal-implications-obesity
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:07:00 +00:00
Leo
8a471a1fae Merge pull request 'extract: 2025-04-22-futardio-proposal-testing-v03-transfer' (#989) from extract/2025-04-22-futardio-proposal-testing-v03-transfer into main 2026-03-15 19:05:36 +00:00
Teleo Agents
cea1db6bc4 extract: 2025-04-22-futardio-proposal-testing-v03-transfer
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:04:28 +00:00
Leo
feaa2acfa8 Merge pull request 'extract: 2025-03-05-futardio-proposal-proposal-3' (#986) from extract/2025-03-05-futardio-proposal-proposal-3 into main 2026-03-15 19:03:59 +00:00
Leo
5ec31622a9 Merge pull request 'extract: 2025-03-05-futardio-proposal-proposal-1' (#985) from extract/2025-03-05-futardio-proposal-proposal-1 into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 19:03:25 +00:00
Teleo Agents
3c3e743d36 extract: 2025-03-05-futardio-proposal-proposal-1
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:03:24 +00:00
Teleo Agents
8beedfd204 extract: 2025-03-05-futardio-proposal-proposal-3
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:02:38 +00:00
Leo
d378ee8721 Merge pull request 'extract: 2025-02-13-futardio-proposal-fund-the-drift-working-group' (#983) from extract/2025-02-13-futardio-proposal-fund-the-drift-working-group into main 2026-03-15 19:02:19 +00:00
Teleo Agents
e82a6f0896 extract: 2025-02-13-futardio-proposal-fund-the-drift-working-group
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:02:18 +00:00
Leo
b7975678e3 Merge pull request 'extract: 2025-02-04-futardio-proposal-should-a-percentage-of-sam-bids-route-to-mnde-stakers' (#981) from extract/2025-02-04-futardio-proposal-should-a-percentage-of-sam-bids-route-to-mnde-stakers into main 2026-03-15 19:00:42 +00:00
Teleo Agents
658fae9a25 extract: 2025-02-04-futardio-proposal-should-a-percentage-of-sam-bids-route-to-mnde-stakers
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:00:41 +00:00
Leo
200b4f39d4 Merge pull request 'extract: 2025-02-00-agreement-complexity-alignment-barriers' (#980) from extract/2025-02-00-agreement-complexity-alignment-barriers into main 2026-03-15 19:00:08 +00:00
Teleo Agents
5fcb46aca2 extract: 2025-02-00-agreement-complexity-alignment-barriers
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 19:00:07 +00:00
Leo
b8614ca9eb Merge pull request 'extract: 2025-01-13-futardio-proposal-should-jto-vault-be-added-to-tiprouter-ncn' (#978) from extract/2025-01-13-futardio-proposal-should-jto-vault-be-added-to-tiprouter-ncn into main 2026-03-15 18:59:34 +00:00
Teleo Agents
f2c3d656f3 extract: 2025-01-13-futardio-proposal-should-jto-vault-be-added-to-tiprouter-ncn
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 18:58:04 +00:00
Leo
ae440ed989 Merge pull request 'extract: 2025-00-00-frontiers-futarchy-desci-empirical-simulation' (#974) from extract/2025-00-00-frontiers-futarchy-desci-empirical-simulation into main 2026-03-15 18:57:27 +00:00
Teleo Agents
0d3a4acd50 extract: 2025-00-00-frontiers-futarchy-desci-empirical-simulation
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 18:56:05 +00:00
Leo
bfb2e03271 Merge pull request 'extract: 2024-11-08-futardio-proposal-initiate-liquidity-farming-for-future-on-raydium' (#968) from extract/2024-11-08-futardio-proposal-initiate-liquidity-farming-for-future-on-raydium into main 2026-03-15 18:53:17 +00:00
Leo
2edcff6532 Merge pull request 'extract: 2024-10-30-futardio-proposal-swap-150000-into-isc' (#966) from extract/2024-10-30-futardio-proposal-swap-150000-into-isc into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 18:52:44 +00:00
Teleo Agents
1f6e098667 extract: 2024-10-30-futardio-proposal-swap-150000-into-isc
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 18:52:42 +00:00
Teleo Agents
fedfc2cd45 entity-batch: update 1 entities
- Applied 1 entity operations from queue
- Files: entities/internet-finance/metadao.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-15 18:52:42 +00:00
Teleo Agents
a36b32df16 extract: 2024-11-08-futardio-proposal-initiate-liquidity-farming-for-future-on-raydium
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 18:52:21 +00:00
Leo
6e418ab0c2 Merge pull request 'extract: 2024-08-31-futardio-proposal-enter-services-agreement-with-organization-technology-llc' (#964) from extract/2024-08-31-futardio-proposal-enter-services-agreement-with-organization-technology-llc into main 2026-03-15 18:51:39 +00:00
Teleo Agents
6327bc3ae8 extract: 2024-08-31-futardio-proposal-enter-services-agreement-with-organization-technology-llc
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 18:50:21 +00:00
Teleo Agents
026497d89f entity-batch: update 1 entities
- Applied 1 entity operations from queue
- Files: entities/internet-finance/futuredao.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-15 18:50:16 +00:00
Leo
11a55c597e Merge pull request 'extract: 2024-08-20-futardio-proposal-proposal-4' (#959) from extract/2024-08-20-futardio-proposal-proposal-4 into main 2026-03-15 18:49:03 +00:00
Leo
b77b8c90c0 Merge pull request 'extract: 2024-08-03-futardio-proposal-approve-q3-roadmap' (#957) from extract/2024-08-03-futardio-proposal-approve-q3-roadmap into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 18:48:30 +00:00
Teleo Agents
e50e957f27 extract: 2024-08-20-futardio-proposal-proposal-4
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 18:48:06 +00:00
Teleo Agents
9ecbd283dc extract: 2024-08-03-futardio-proposal-approve-q3-roadmap
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 18:47:02 +00:00
Leo
d0634ee9af Merge pull request 'extract: 2024-07-04-futardio-proposal-proposal-3' (#954) from extract/2024-07-04-futardio-proposal-proposal-3 into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:54:21 +00:00
Teleo Agents
a78e50d185 extract: 2024-07-04-futardio-proposal-proposal-3 2026-03-15 17:54:19 +00:00
Leo
eb970dd6d7 Merge pull request 'extract: 2024-05-30-futardio-proposal-drift-futarchy-proposal-welcome-the-futarchs' (#953) from extract/2024-05-30-futardio-proposal-drift-futarchy-proposal-welcome-the-futarchs into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:53:17 +00:00
Teleo Agents
e378a42416 extract: 2024-05-30-futardio-proposal-drift-futarchy-proposal-welcome-the-futarchs 2026-03-15 17:53:16 +00:00
Leo
4bf5b41b6f Merge pull request 'extract: 2024-02-18-futardio-proposal-engage-in-100000-otc-trade-with-ben-hawkins-2' (#950) from extract/2024-02-18-futardio-proposal-engage-in-100000-otc-trade-with-ben-hawkins-2 into main 2026-03-15 17:52:12 +00:00
Teleo Agents
5dd13687db extract: 2024-02-18-futardio-proposal-engage-in-100000-otc-trade-with-ben-hawkins-2 2026-03-15 17:52:10 +00:00
Leo
d143625d48 Merge pull request 'extract: 2024-02-05-futardio-proposal-execute-creation-of-spot-market-for-meta' (#949) from extract/2024-02-05-futardio-proposal-execute-creation-of-spot-market-for-meta into main 2026-03-15 17:51:38 +00:00
Teleo Agents
ab78f5b3fb extract: 2024-02-05-futardio-proposal-execute-creation-of-spot-market-for-meta 2026-03-15 17:51:37 +00:00
Teleo Agents
2b0cf17e13 entity-batch: update 1 entities
- Applied 2 entity operations from queue
- Files: entities/internet-finance/metadao.md

Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
2026-03-15 17:51:37 +00:00
Leo
f89663cd2a Merge pull request 'extract: 2023-12-03-futardio-proposal-migrate-autocrat-program-to-v01' (#947) from extract/2023-12-03-futardio-proposal-migrate-autocrat-program-to-v01 into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:50:34 +00:00
Teleo Agents
9d77fd8cca extract: 2023-12-03-futardio-proposal-migrate-autocrat-program-to-v01 2026-03-15 17:48:43 +00:00
Teleo Agents
971b882f45 Merge branch 'main' of http://localhost:3000/teleo/teleo-codex 2026-03-15 17:30:21 +00:00
Teleo Agents
ee00d8f1c5 commit v1 extraction artifacts on main — unblocking entity_batch queue 2026-03-15 17:29:29 +00:00
8c0c4a6d04 Merge pull request 'leo: consolidate 28 new files from 22 conflict PRs (batch 3)' (#945) from leo/consolidate-batch3 into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:20:51 +00:00
a4213bb442 add entities/internet-finance/futuredao-initiate-liquidity-farming-raydium.md 2026-03-15 17:20:19 +00:00
cb8ee6ede2 add domains/internet-finance/raydium-liquidity-farming-follows-standard-pattern-of-1-percent-token-allocation-7-to-90-day-duration-and-clmm-pool-architecture.md 2026-03-15 17:20:18 +00:00
33dce6549b add domains/health/federal-budget-scoring-methodology-systematically-undervalues-preventive-interventions-because-10-year-window-excludes-long-term-savings.md 2026-03-15 17:20:17 +00:00
2697b60112 add entities/internet-finance/metadao-hire-advaith-sekharan.md 2026-03-15 17:20:16 +00:00
546c71caee add entities/internet-finance/advaith-sekharan.md 2026-03-15 17:20:15 +00:00
c01a361b86 add entities/internet-finance/organization-technology-llc.md 2026-03-15 17:20:14 +00:00
e34ef9afd6 add entities/internet-finance/metadao-services-agreement-organization-technology.md 2026-03-15 17:20:13 +00:00
d3582009b8 add entities/internet-finance/futardio-approve-budget-pre-governance-hackathon.md 2026-03-15 17:20:12 +00:00
b740e2c764 add entities/internet-finance/drift-fund-the-drift-superteam-earn-creator-competition.md 2026-03-15 17:20:11 +00:00
17a7698dfc add domains/internet-finance/memecoin-governance-is-ideal-futarchy-use-case-because-single-objective-function-eliminates-long-term-tradeoff-ambiguity.md 2026-03-15 17:20:10 +00:00
a6cde8a568 add domains/internet-finance/futarchy-governed-memecoin-launchpads-face-reputational-risk-tradeoff-between-adoption-and-credibility.md 2026-03-15 17:20:08 +00:00
d46e6e93aa add entities/internet-finance/metadao-approve-q3-roadmap.md 2026-03-15 17:20:07 +00:00
4607a241a9 add entities/internet-finance/deans-list-enhance-economic-model.md 2026-03-15 17:20:06 +00:00
a8b0133e8b add entities/internet-finance/drift-futarchy-proposal-welcome-the-futarchs.md 2026-03-15 17:20:05 +00:00
432a943bf5 add domains/health/semaglutide-reduces-kidney-disease-progression-24-percent-and-delays-dialysis-creating-largest-per-patient-cost-savings.md 2026-03-15 17:20:04 +00:00
5790195415 add domains/health/glp-1-multi-organ-protection-creates-compounding-value-across-kidney-cardiovascular-and-metabolic-endpoints.md 2026-03-15 17:20:03 +00:00
dade9f7d94 add entities/internet-finance/metadao-otc-trade-colosseum.md 2026-03-15 17:20:02 +00:00
3e2f0d77b6 add entities/internet-finance/colosseum.md 2026-03-15 17:20:01 +00:00
9534db341a add domains/internet-finance/vesting-with-immediate-partial-unlock-plus-linear-release-creates-alignment-while-enabling-liquidity-by-giving-investors-tradeable-tokens-upfront-and-time-locked-exposure.md 2026-03-15 17:20:00 +00:00
e5ae441673 add domains/internet-finance/futarchy-markets-can-reject-solutions-to-acknowledged-problems-when-the-proposed-solution-creates-worse-second-order-effects-than-the-problem-it-solves.md 2026-03-15 17:19:59 +00:00
6cf41fe249 add entities/internet-finance/0xnallok.md 2026-03-15 17:19:58 +00:00
20dba22350 add domains/internet-finance/liquidity-weighted-price-over-time-solves-futarchy-manipulation-through-capital-commitment-not-vote-counting.md 2026-03-15 17:19:57 +00:00
38ec4b721b add domains/internet-finance/high-fee-amms-create-lp-incentive-and-manipulation-deterrent-simultaneously-by-making-passive-provision-profitable-and-active-trading-expensive.md 2026-03-15 17:19:56 +00:00
a119833537 add domains/internet-finance/futarchy-clob-liquidity-fragmentation-creates-wide-spreads-because-pricing-counterfactual-governance-outcomes-has-inherent-uncertainty.md 2026-03-15 17:19:54 +00:00
57ed9672aa add domains/internet-finance/amm-futarchy-reduces-state-rent-costs-by-99-percent-versus-clob-by-eliminating-orderbook-storage-requirements.md 2026-03-15 17:19:53 +00:00
8662665f95 add entities/internet-finance/metadao-migrate-autocrat-v01.md 2026-03-15 17:19:52 +00:00
0ff5b0eab0 add domains/health/rpm-technology-stack-enables-facility-to-home-care-migration-through-ai-middleware-that-converts-continuous-data-into-clinical-utility.md 2026-03-15 17:19:51 +00:00
6426fcfb96 add domains/health/home-based-care-could-capture-265-billion-in-medicare-spending-by-2025-through-hospital-at-home-remote-monitoring-and-post-acute-shift.md 2026-03-15 17:19:50 +00:00
48b4815d10 Merge pull request 'extract: 2024-10-01-jams-eras-tour-worldbuilding-prismatic-liveness' (#938) from extract/2024-10-01-jams-eras-tour-worldbuilding-prismatic-liveness into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:18:28 +00:00
9ab767da96 Merge pull request 'extract: 2024-08-01-variety-indie-streaming-dropout-nebula-critical-role' (#928) from extract/2024-08-01-variety-indie-streaming-dropout-nebula-critical-role into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:18:26 +00:00
c1c0bfed7d Merge pull request 'extract: 2021-02-00-pmc-japan-ltci-past-present-future' (#903) from extract/2021-02-00-pmc-japan-ltci-past-present-future into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:18:00 +00:00
f0de111165 Merge pull request 'extract: 2021-06-29-kaufmann-active-inference-collective-intelligence' (#905) from extract/2021-06-29-kaufmann-active-inference-collective-intelligence into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:17:19 +00:00
7a2287c0a3 Merge pull request 'extract: 2018-03-00-ramstead-answering-schrodingers-question' (#898) from extract/2018-03-00-ramstead-answering-schrodingers-question into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:17:16 +00:00
0f8a7eeade Merge pull request 'extract: 2018-00-00-simio-resource-scheduling-non-stationary-service-systems' (#897) from extract/2018-00-00-simio-resource-scheduling-non-stationary-service-systems into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 17:17:14 +00:00
Leo
7576c9cf31 Merge pull request 'ingestion: 1 futardio events — 20260315-1600' (#909) from ingestion/futardio-20260315-1600 into main 2026-03-15 17:16:33 +00:00
Teleo Pipeline
dbbb07adb1 extract: 2024-11-00-ai4ci-national-scale-collective-intelligence
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:56 +00:00
Teleo Pipeline
5cf7ffc950 extract: 2024-08-01-jmcp-glp1-persistence-adherence-commercial-populations
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:40 +00:00
Teleo Pipeline
a5bb91e4bc extract: 2024-07-09-futardio-proposal-initialize-the-drift-foundation-grant-program
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:36 +00:00
Teleo Pipeline
2ea4d9b951 extract: 2024-06-22-futardio-proposal-thailanddao-event-promotion-to-boost-deans-list-dao-engageme
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:32 +00:00
Teleo Pipeline
94c604f382 extract: 2024-06-14-futardio-proposal-fund-the-rug-bounty-program
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:28 +00:00
Teleo Pipeline
c4edb6328f extract: 2024-05-27-futardio-proposal-proposal-1
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:24 +00:00
Teleo Pipeline
e4506bd6ce extract: 2024-04-00-conitzer-social-choice-guide-alignment
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:21 +00:00
Teleo Pipeline
66767c9b12 extract: 2024-02-00-chakraborty-maxmin-rlhf
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:16 +00:00
Teleo Pipeline
74a5a7ae64 extract: 2024-00-00-dagster-data-backpressure
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:11 +00:00
Teleo Pipeline
f45744b576 extract: 2023-11-18-futardio-proposal-develop-a-lst-vote-market
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:13:05 +00:00
167eefdf36 ingestion: archive futardio launch — 2026-01-01-futardio-launch-quantum-waffle.md 2026-03-15 17:13:01 +00:00
Teleo Pipeline
c6412f6832 extract: 2023-00-00-sciencedirect-flexible-job-shop-scheduling-review
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:12:59 +00:00
Teleo Pipeline
f9bd1731e8 extract: 2022-06-07-slimmon-littles-law-scale-applications
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:12:55 +00:00
Teleo Pipeline
c826af657f extract: 2021-09-00-vlahakis-aimd-scheduling-distributed-computing
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:12:51 +00:00
Teleo Pipeline
c2bd84abaa extract: 2021-04-00-tournaire-optimal-control-cloud-resource-allocation-mdp
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:12:47 +00:00
Teleo Pipeline
51a2ed39fc extract: 2019-07-00-li-overview-mdp-queues-networks
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:12:43 +00:00
Teleo Pipeline
e0c9323264 extract: 2019-00-00-whitt-what-you-should-know-about-queueing-models
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:12:39 +00:00
Teleo Pipeline
6b6f78885f extract: 2019-00-00-liu-modeling-nonstationary-non-poisson-arrival-processes
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 17:12:35 +00:00
Leo
e9a6e88d26 extract: 2024-08-28-futardio-proposal-proposal-7 (#934) 2026-03-15 16:44:06 +00:00
Leo
e89fb80eac extract: 2024-11-13-futardio-proposal-cut-emissions-by-50 (#944) 2026-03-15 16:27:54 +00:00
Teleo Pipeline
da3ad3975c extract: 2018-00-00-siam-economies-of-scale-halfin-whitt-regime
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 16:24:11 +00:00
Teleo Pipeline
b2d24029c7 extract: 2016-00-00-corless-aimd-dynamics-distributed-resource-allocation
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 16:24:07 +00:00
Teleo Pipeline
8bf562b96a extract: 2024-10-01-jams-eras-tour-worldbuilding-prismatic-liveness
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 16:20:34 +00:00
Teleo Pipeline
a1560eaa90 extract: 2024-08-01-variety-indie-streaming-dropout-nebula-critical-role
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 16:15:14 +00:00
Teleo Pipeline
cca88c0a1f extract: 2021-06-29-kaufmann-active-inference-collective-intelligence
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 15:58:52 +00:00
Teleo Pipeline
a20ca6554a extract: 2021-02-00-pmc-japan-ltci-past-present-future
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 15:57:44 +00:00
Teleo Pipeline
354e7c61cb extract: 2018-03-00-ramstead-answering-schrodingers-question
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 15:54:12 +00:00
Teleo Pipeline
2893e030fd extract: 2018-00-00-simio-resource-scheduling-non-stationary-service-systems
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 15:53:35 +00:00
Teleo Pipeline
bb014f47d2 extract: 2016-00-00-cambridge-staffing-non-poisson-non-stationary-arrivals
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
Pentagon-Agent: Ganymede <F99EBFA6-547B-4096-BEEA-1D59C3E4028A>
2026-03-15 15:52:12 +00:00
Leo
69d100956a Merge pull request 'leo: consolidate new files from closed PRs #642, #726, #727, #735, #807' (#842) from leo/consolidate-final-5 into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 14:37:20 +00:00
2bade573d0 add entities/internet-finance/metadao-develop-amm-program-for-futarchy.md 2026-03-15 14:36:57 +00:00
319a724bd6 add entities/internet-finance/joebuild.md 2026-03-15 14:36:56 +00:00
9a59ead5ec add domains/internet-finance/liquidity-weighted-price-over-time-solves-futarchy-manipulation-through-wash-trading-costs-because-high-fees-make-price-movement-expensive.md 2026-03-15 14:36:55 +00:00
4b6c51b2d1 add domains/internet-finance/amm-futarchy-reduces-state-rent-costs-from-135-225-sol-annually-to-near-zero-by-replacing-clob-market-pairs.md 2026-03-15 14:36:54 +00:00
cca0ad0a3b add domains/internet-finance/amm-futarchy-bootstraps-liquidity-through-high-fee-incentives-and-required-proposer-initial-liquidity-creating-self-reinforcing-depth.md 2026-03-15 14:36:53 +00:00
c636c0185c add entities/internet-finance/metadao-execute-creation-of-spot-market-for-meta.md 2026-03-15 14:36:34 +00:00
8ec3021e77 add entities/internet-finance/coal-meta-pow-the-ore-treasury-protocol.md 2026-03-15 14:36:34 +00:00
33254f2b87 add entities/internet-finance/deans-list-enhancing-economic-model.md 2026-03-15 14:36:33 +00:00
39576529a4 add domains/internet-finance/treasury-buyback-model-creates-constant-buy-pressure-by-converting-revenue-to-governance-token-purchases.md 2026-03-15 14:36:32 +00:00
7d511ce157 add entities/internet-finance/seyf.md 2026-03-15 14:36:31 +00:00
c2f50a153a add domains/internet-finance/seyf-futardio-fundraise-raised-200-against-300000-target-signaling-near-zero-market-traction-for-ai-native-wallet-concept.md 2026-03-15 14:36:30 +00:00
Leo
0484210633 Merge pull request 'rio: extract claims from 2026-03-04-futardio-launch-futarchy-arena' (#811) from extract/2026-03-04-futardio-launch-futarchy-arena into main 2026-03-15 14:35:51 +00:00
Leo
5f2b1e5d54 Merge pull request 'rio: extract claims from 2024-02-13-futardio-proposal-engage-in-50000-otc-trade-with-ben-hawkins' (#777) from extract/2024-02-13-futardio-proposal-engage-in-50000-otc-trade-with-ben-hawkins into main 2026-03-15 14:35:29 +00:00
Leo
17fe038d86 Merge pull request 'rio: extract claims from 2024-12-30-futardio-proposal-fund-deans-list-dao-website-redesign' (#824) from extract/2024-12-30-futardio-proposal-fund-deans-list-dao-website-redesign into main 2026-03-15 14:35:06 +00:00
Leo
a1e48134a9 Merge pull request 'rio: extract claims from 2025-07-18-genius-act-stablecoin-regulation' (#815) from extract/2025-07-18-genius-act-stablecoin-regulation into main 2026-03-15 14:35:05 +00:00
Leo
bb5ccbfeaf Merge pull request 'rio: extract claims from 2026-03-03-futardio-launch-mycorealms' (#798) from extract/2026-03-03-futardio-launch-mycorealms into main 2026-03-15 14:35:02 +00:00
Leo
e7c54238ac Merge pull request 'rio: extract claims from 2024-01-12-futardio-proposal-create-spot-market-for-meta' (#773) from extract/2024-01-12-futardio-proposal-create-spot-market-for-meta into main 2026-03-15 14:34:59 +00:00
Leo
c3973dd988 Merge pull request 'rio: extract claims from 2026-02-17-futardio-launch-epic-finance' (#763) from extract/2026-02-17-futardio-launch-epic-finance into main 2026-03-15 14:34:57 +00:00
Leo
5176fa323a Merge pull request 'rio: extract claims from 2024-06-08-futardio-proposal-reward-the-university-of-waterloo-blockchain-club-with-1-mil' (#723) from extract/2024-06-08-futardio-proposal-reward-the-university-of-waterloo-blockchain-club-with-1-mil into main 2026-03-15 14:34:56 +00:00
Leo
c4622abfde Merge pull request 'leo: consolidate new files from closed PRs #653, #708, #712' (#841) from leo/consolidate-closed-prs-batch2 into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
2026-03-15 14:30:38 +00:00
Teleo Agents
9a556cf358 rio: extract from 2024-02-13-futardio-proposal-engage-in-50000-otc-trade-with-ben-hawkins.md
- Source: inbox/archive/2024-02-13-futardio-proposal-engage-in-50000-otc-trade-with-ben-hawkins.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 6)

Pentagon-Agent: Rio <HEADLESS>
2026-03-15 14:20:39 +00:00
Teleo Agents
fa386f4e58 auto-fix: strip 1 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
2026-03-15 13:29:22 +00:00
Teleo Agents
f3d90ae156 rio: extract from 2026-03-04-futardio-launch-futarchy-arena.md
- Source: inbox/archive/2026-03-04-futardio-launch-futarchy-arena.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 3)

Pentagon-Agent: Rio <HEADLESS>
2026-03-15 13:29:22 +00:00
Teleo Agents
fc73293f94 rio: extract from 2026-03-03-futardio-launch-mycorealms.md
- Source: inbox/archive/2026-03-03-futardio-launch-mycorealms.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 5)

Pentagon-Agent: Rio <HEADLESS>
2026-03-15 13:29:18 +00:00
Teleo Agents
6c036c7669 auto-fix: strip 1 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
2026-03-15 13:14:14 +00:00
Teleo Agents
1a62603091 rio: extract from 2024-06-08-futardio-proposal-reward-the-university-of-waterloo-blockchain-club-with-1-mil.md
- Source: inbox/archive/2024-06-08-futardio-proposal-reward-the-university-of-waterloo-blockchain-club-with-1-mil.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 4)

Pentagon-Agent: Rio <HEADLESS>
2026-03-15 13:14:14 +00:00
Teleo Agents
35b1aff85f auto-fix: strip 1 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
2026-03-15 13:13:33 +00:00
Teleo Agents
8660122125 rio: extract from 2024-12-30-futardio-proposal-fund-deans-list-dao-website-redesign.md
- Source: inbox/archive/2024-12-30-futardio-proposal-fund-deans-list-dao-website-redesign.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 4)

Pentagon-Agent: Rio <HEADLESS>
2026-03-15 13:13:33 +00:00
Teleo Agents
b0d60a7445 rio: extract from 2026-02-17-futardio-launch-epic-finance.md
- Source: inbox/archive/2026-02-17-futardio-launch-epic-finance.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 6)

Pentagon-Agent: Rio <HEADLESS>
2026-03-15 11:43:51 +00:00
Teleo Agents
8cae4e91a4 auto-fix: strip 8 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
2026-03-14 11:19:53 +00:00
Teleo Agents
1824607fc9 rio: extract from 2025-07-18-genius-act-stablecoin-regulation.md
- Source: inbox/archive/2025-07-18-genius-act-stablecoin-regulation.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 5)

Pentagon-Agent: Rio <HEADLESS>
2026-03-12 16:47:00 +00:00
Teleo Agents
eda62ac91d rio: extract from 2024-01-12-futardio-proposal-create-spot-market-for-meta.md
- Source: inbox/archive/2024-01-12-futardio-proposal-create-spot-market-for-meta.md
- Domain: internet-finance
- Extracted by: headless extraction cron (worker 3)

Pentagon-Agent: Rio <HEADLESS>
2026-03-12 16:45:24 +00:00
226 changed files with 6449 additions and 71 deletions

View file

@ -27,6 +27,12 @@ Since [[the internet enabled global communication but not global cognition]], th
Ruiz-Serra et al. (2024) provide formal evidence for the coordination framing through multi-agent active inference: even when individual agents successfully minimize their own expected free energy using factorised generative models with Theory of Mind beliefs about others, the ensemble-level expected free energy 'is not necessarily minimised at the aggregate level.' This demonstrates that alignment cannot be solved at the individual agent level—the interaction structure and coordination mechanisms determine whether individual optimization produces collective intelligence or collective failure. The finding validates that alignment is fundamentally about designing interaction structures that bridge individual and collective optimization, not about perfecting individual agent objectives.
### Additional Evidence (confirm)
*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
The UK AI4CI research strategy treats alignment as a coordination and governance challenge requiring institutional infrastructure. The seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) are framed as system architecture requirements, not as technical ML problems. The strategy emphasizes 'establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable' and includes regulatory sandboxes, trans-national governance, and trustworthiness assessment as core components. The research agenda focuses on coordination mechanisms (federated learning, FAIR principles, multi-stakeholder governance) rather than on technical alignment methods like RLHF or interpretability.
---
Relevant Notes:

View file

@ -0,0 +1,51 @@
---
type: claim
domain: ai-alignment
description: "National-scale CI infrastructure must enable distributed learning without centralizing sensitive data"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence, critical-systems]
---
# AI-enhanced collective intelligence requires federated learning architectures to preserve data sovereignty at scale
The UK AI4CI research strategy identifies federated learning as a necessary infrastructure component for national-scale collective intelligence. The technical requirements include:
- **Secure data repositories** that maintain local control
- **Federated learning architectures** that train models without centralizing data
- **Real-time integration** across distributed sources
- **Foundation models** adapted to federated contexts
This is not just a privacy preference—it's a structural requirement for achieving the trust properties (especially privacy, security, and human agency) at scale. Centralized data aggregation creates single points of failure, regulatory risk, and trust barriers that prevent participation from privacy-sensitive populations.
The strategy treats federated architecture as the enabling technology for "gathering intelligence" (collecting and making sense of distributed information) without requiring participants to surrender data sovereignty.
Governance requirements include FAIR principles (Findable, Accessible, Interoperable, Reusable), trustworthiness assessment, regulatory sandboxes, and trans-national governance frameworks—all of which assume distributed rather than centralized control.
## Evidence
From the UK AI4CI national research strategy:
- Technical infrastructure requirements explicitly include "federated learning architectures"
- Governance framework assumes distributed data control with FAIR principles
- "Secure data repositories" listed as foundational infrastructure
- Real-time integration across distributed sources required for "gathering intelligence"
## Challenges
This claim rests on a research strategy document, not on deployed systems. The feasibility of federated learning at national scale remains unproven. Potential challenges:
- Federated learning has known limitations in model quality vs. centralized training
- Coordination costs may be prohibitive at scale
- Regulatory frameworks may not accommodate federated architectures
- The strategy may be aspirational rather than technically grounded
---
Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[safe AI development requires building alignment mechanisms before scaling capability]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- foundations/critical-systems/_map

View file

@ -19,6 +19,12 @@ Since [[democratic alignment assemblies produce constitutions as effective as ex
Since [[collective intelligence requires diversity as a structural precondition not a moral preference]], community-centred norm elicitation is a concrete mechanism for ensuring the structural diversity that collective alignment requires. Without it, alignment defaults to the values of whichever demographic builds the systems.
### Additional Evidence (confirm)
*Source: [[2025-11-00-operationalizing-pluralistic-values-llm-alignment]] | Added: 2026-03-15*
Empirical study with 27,375 ratings from 1,095 participants shows that demographic composition of training data produces 3-5 percentage point differences in model behavior across emotional awareness and toxicity dimensions. This quantifies the magnitude of difference between community-sourced and developer-specified alignment targets.
---
Relevant Notes:

View file

@ -0,0 +1,42 @@
---
type: claim
domain: ai-alignment
description: "ML's core mechanism of generalizing over diversity creates structural bias against marginalized groups"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence]
---
# Machine learning pattern extraction systematically erases dataset outliers where vulnerable populations concentrate
Machine learning operates by "extracting patterns that generalise over diversity in a data set" in ways that "fail to capture, respect or represent features of dataset outliers." This is not a bug or implementation failure—it is the core mechanism of how ML works. The UK AI4CI research strategy identifies this as a fundamental tension: the same generalization that makes ML powerful also makes it structurally biased against populations that don't fit dominant patterns.
The strategy explicitly frames this as a challenge for collective intelligence systems: "AI must reach 'intersectionally disadvantaged' populations, not just majority groups." Vulnerable and marginalized populations concentrate in the statistical tails—they are the outliers that pattern-matching algorithms systematically ignore or misrepresent.
This creates a paradox for AI-enhanced collective intelligence: the tools designed to aggregate diverse perspectives have a built-in tendency to homogenize by erasing the perspectives most different from the training distribution's center of mass.
## Evidence
From the UK AI4CI national research strategy:
- ML "extracts patterns that generalise over diversity in a data set" in ways that "fail to capture, respect or represent features of dataset outliers"
- Systems must explicitly design for reaching "intersectionally disadvantaged" populations
- The research agenda identifies this as a core infrastructure challenge, not just a fairness concern
## Challenges
This claim rests on a single source—a research strategy document rather than empirical evidence of harm. The mechanism is plausible but the magnitude and inevitability of the effect remain unproven. Counter-evidence might show that:
- Appropriate sampling and weighting can preserve outlier representation
- Ensemble methods or mixture models can capture diverse subpopulations
- The outlier-erasure effect is implementation-dependent rather than fundamental
---
Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[modeling preference sensitivity as a learned distribution rather than a fixed scalar resolves DPO diversity failures without demographic labels or explicit user modeling]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map

View file

@ -0,0 +1,49 @@
---
type: claim
domain: ai-alignment
description: "MaxMin-RLHF adapts Sen's Egalitarian principle to AI alignment through mixture-of-rewards and maxmin optimization"
confidence: experimental
source: "Chakraborty et al., MaxMin-RLHF (ICML 2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence]
---
# MaxMin-RLHF applies egalitarian social choice to alignment by maximizing minimum utility across preference groups rather than averaging preferences
MaxMin-RLHF reframes alignment as a fairness problem by applying Sen's Egalitarian principle from social choice theory: "society should focus on maximizing the minimum utility of all individuals." Instead of aggregating diverse preferences into a single reward function (which the authors prove impossible), MaxMin-RLHF learns a mixture of reward models and optimizes for the worst-off group.
**The mechanism has two components:**
1. **EM Algorithm for Reward Mixture:** Iteratively clusters humans based on preference compatibility and updates subpopulation-specific reward functions until convergence. This discovers latent preference groups from preference data.
2. **MaxMin Objective:** During policy optimization, maximize the minimum utility across all discovered preference groups. This ensures no group is systematically ignored.
**Empirical results:**
- Tulu2-7B scale: MaxMin maintained 56.67% win rate across both majority and minority groups, compared to single-reward RLHF which achieved 70.4% on majority but only 42% on minority (10:1 ratio case)
- Average improvement of ~16% across groups, with ~33% boost specifically for minority groups
- Critically: minority improvement came WITHOUT compromising majority performance
**Limitations:** Assumes discrete, identifiable subpopulations. Requires specifying number of clusters beforehand. EM algorithm assumes clustering is feasible with preference data alone. Does not address continuous preference distributions or cases where individuals have context-dependent preferences.
This is the first constructive mechanism that formally addresses single-reward impossibility while staying within the RLHF framework and demonstrating empirical gains.
## Evidence
Chakraborty et al., "MaxMin-RLHF: Alignment with Diverse Human Preferences," ICML 2024.
- Draws from Sen's Egalitarian rule in social choice theory
- EM algorithm learns mixture of reward models by clustering preference-compatible humans
- MaxMin objective: max(min utility across groups)
- Tulu2-7B: 56.67% win rate across both groups vs 42% minority/70.4% majority for single reward
- 33% improvement for minority groups without majority compromise
---
Relevant Notes:
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map

View file

@ -0,0 +1,42 @@
---
type: claim
domain: ai-alignment
description: "MaxMin-RLHF's 33% minority improvement without majority loss suggests single-reward approach was suboptimal for all groups"
confidence: experimental
source: "Chakraborty et al., MaxMin-RLHF (ICML 2024)"
created: 2026-03-11
---
# Minority preference alignment improves 33% without majority compromise suggesting single-reward RLHF leaves value on table for all groups
The most surprising result from MaxMin-RLHF is not just that it helps minority groups, but that it does so WITHOUT degrading majority performance. At Tulu2-7B scale with 10:1 preference ratio:
- **Single-reward RLHF:** 70.4% majority win rate, 42% minority win rate
- **MaxMin-RLHF:** 56.67% win rate for BOTH groups
The minority group improved by ~33% (from 42% to 56.67%). The majority group decreased slightly (from 70.4% to 56.67%), but this represents a Pareto improvement in the egalitarian sense—the worst-off group improved substantially while the best-off group remained well above random.
This suggests the single-reward approach was not making an optimal tradeoff—it was leaving value on the table. The model was overfitting to majority preferences in ways that didn't even maximize majority utility, just majority-preference-signal in the training data.
**Interpretation:** Single-reward RLHF may be optimizing for training-data-representation rather than actual preference satisfaction. When forced to satisfy both groups (MaxMin constraint), the model finds solutions that generalize better.
**Caveat:** This is one study at one scale with one preference split (sentiment vs conciseness). The result needs replication across different preference types, model scales, and group ratios. But the direction is striking: pluralistic alignment may not be a zero-sum tradeoff.
## Evidence
Chakraborty et al., "MaxMin-RLHF: Alignment with Diverse Human Preferences," ICML 2024.
- Tulu2-7B, 10:1 preference ratio
- Single reward: 70.4% majority, 42% minority
- MaxMin: 56.67% both groups
- 33% minority improvement (42% → 56.67%)
- Majority remains well above random despite slight decrease
---
Relevant Notes:
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
Topics:
- domains/ai-alignment/_map

View file

@ -0,0 +1,51 @@
---
type: claim
domain: ai-alignment
description: "UK research strategy identifies human agency, security, privacy, transparency, fairness, value alignment, and accountability as necessary trust conditions"
confidence: experimental
source: "UK AI for CI Research Network, Artificial Intelligence for Collective Intelligence: A National-Scale Research Strategy (2024)"
created: 2026-03-11
secondary_domains: [collective-intelligence, critical-systems]
---
# National-scale collective intelligence infrastructure requires seven trust properties to achieve legitimacy
The UK AI4CI research strategy proposes that collective intelligence systems operating at national scale must satisfy seven trust properties to achieve public legitimacy and effective governance:
1. **Human agency** — individuals retain meaningful control over their participation
2. **Security** — infrastructure resists attack and manipulation
3. **Privacy** — personal data is protected from misuse
4. **Transparency** — system operation is interpretable and auditable
5. **Fairness** — outcomes don't systematically disadvantage groups
6. **Value alignment** — systems incorporate user values rather than imposing predetermined priorities
7. **Accountability** — clear responsibility for system behavior and outcomes
This is not a theoretical framework—it's a proposed design requirement for actual infrastructure being built with UK government backing (UKRI/EPSRC funding). The strategy treats these seven properties as necessary conditions for trustworthiness at scale, not as optional enhancements.
The framing is significant: trust is treated as a structural property of the system architecture, not as a communication or adoption challenge. The research agenda focuses on "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable."
## Evidence
From the UK AI4CI national research strategy:
- Seven trust properties explicitly listed as requirements
- Governance infrastructure includes "trustworthiness assessment" as a core component
- Scale brings challenges in "establishing and managing appropriate infrastructure in a way that is secure, well-governed and sustainable"
- Systems must incorporate "user values" rather than imposing predetermined priorities
## Relationship to Existing Work
This connects to [[safe AI development requires building alignment mechanisms before scaling capability]]—the UK strategy treats trust infrastructure as a prerequisite for deployment, not a post-hoc addition.
It also relates to [[collective intelligence requires diversity as a structural precondition not a moral preference]]—fairness appears in the trust properties list as a structural requirement, not just a normative goal.
---
Relevant Notes:
- [[safe AI development requires building alignment mechanisms before scaling capability]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[AI alignment is a coordination problem not a technical problem]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- foundations/critical-systems/_map

View file

@ -17,6 +17,12 @@ This gap is remarkable because the field's own findings point toward collective
The alignment field has converged on a problem they cannot solve with their current paradigm (single-model alignment), and the alternative paradigm (collective alignment through distributed architecture) has barely been explored. This is the opening for the TeleoHumanity thesis -- not as philosophical speculation but as practical infrastructure that addresses problems the alignment community has identified but cannot solve within their current framework.
### Additional Evidence (challenge)
*Source: [[2024-11-00-ai4ci-national-scale-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
The UK AI for Collective Intelligence Research Network represents a national-scale institutional commitment to building CI infrastructure with explicit alignment goals. Funded by UKRI/EPSRC, the network proposes the 'AI4CI Loop' (Gathering Intelligence → Informing Behaviour) as a framework for multi-level decision making. The research strategy includes seven trust properties (human agency, security, privacy, transparency, fairness, value alignment, accountability) and specifies technical requirements including federated learning architectures, secure data repositories, and foundation models adapted for collective intelligence contexts. This is not purely academic—it's a government-backed infrastructure program with institutional resources. However, the strategy is prospective (published 2024-11) and describes a research agenda rather than deployed systems, so it represents institutional intent rather than operational infrastructure.
---
Relevant Notes:

View file

@ -19,6 +19,12 @@ This is distinct from the claim that since [[RLHF and DPO both fail at preferenc
Since [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]], pluralistic alignment is the practical response to the theoretical impossibility: stop trying to aggregate and start trying to accommodate.
### Additional Evidence (extend)
*Source: [[2024-02-00-chakraborty-maxmin-rlhf]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
MaxMin-RLHF provides a constructive implementation of pluralistic alignment through mixture-of-rewards and egalitarian optimization. Rather than converging preferences, it learns separate reward models for each subpopulation and optimizes for the worst-off group (Sen's Egalitarian principle). At Tulu2-7B scale, this achieved 56.67% win rate across both majority and minority groups, compared to single-reward's 70.4%/42% split. The mechanism accommodates irreducible diversity by maintaining separate reward functions rather than forcing convergence.
---
Relevant Notes:

View file

@ -0,0 +1,48 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence, mechanisms]
description: "Creating multiple AI systems reflecting genuinely incompatible values may be structurally superior to aggregating all preferences into one aligned system"
confidence: experimental
source: "Conitzer et al. (2024), 'Social Choice Should Guide AI Alignment' (ICML 2024)"
created: 2026-03-11
---
# Pluralistic AI alignment through multiple systems preserves value diversity better than forced consensus
Conitzer et al. (2024) propose a "pluralism option": rather than forcing all human values into a single aligned AI system through preference aggregation, create multiple AI systems that reflect genuinely incompatible value sets. This structural approach to pluralism may better preserve value diversity than any aggregation mechanism.
The paper positions this as an alternative to the standard alignment framing, which assumes a single AI system must be aligned with aggregated human preferences. When values are irreducibly diverse—not just different but fundamentally incompatible—attempting to merge them into one system necessarily distorts or suppresses some values. Multiple systems allow each value set to be faithfully represented.
This connects directly to the collective superintelligence thesis: rather than one monolithic aligned AI, a ecosystem of specialized systems with different value orientations, coordinating through explicit mechanisms. The paper doesn't fully develop this direction but identifies it as a viable path.
## Evidence
- Conitzer et al. (2024) explicitly propose "creating multiple AI systems reflecting genuinely incompatible values rather than forcing artificial consensus"
- The paper cites [[persistent irreducible disagreement]] as a structural feature that aggregation cannot resolve
- Stuart Russell's co-authorship signals this is a serious position within mainstream AI safety, not a fringe view
## Relationship to Collective Superintelligence
This is the closest mainstream AI alignment has come to the collective superintelligence thesis articulated in [[collective superintelligence is the alternative to monolithic AI controlled by a few]]. The paper doesn't use the term "collective superintelligence" but the structural logic is identical: value diversity is preserved through system plurality rather than aggregation.
The key difference: Conitzer et al. frame this as an option among several approaches, while the collective superintelligence thesis argues this is the only path that preserves human agency at scale. The paper's pluralism option is permissive ("we could do this"), not prescriptive ("we must do this").
## Open Questions
- How do multiple value-aligned systems coordinate when their values conflict in practice?
- What governance mechanisms determine which value sets get their own system?
- Does this approach scale to thousands of value clusters or only to a handful?
---
Relevant Notes:
- [[collective superintelligence is the alternative to monolithic AI controlled by a few]]
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
- [[persistent irreducible disagreement]]
- [[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them]]
Topics:
- domains/ai-alignment/_map
- foundations/collective-intelligence/_map
- core/mechanisms/_map

View file

@ -0,0 +1,42 @@
---
type: claim
domain: ai-alignment
secondary_domains: [mechanisms, collective-intelligence]
description: "Practical voting methods like Borda Count and Ranked Pairs avoid Arrow's impossibility by sacrificing IIA rather than claiming to overcome the theorem"
confidence: proven
source: "Conitzer et al. (2024), 'Social Choice Should Guide AI Alignment' (ICML 2024)"
created: 2026-03-11
---
# Post-Arrow social choice mechanisms work by weakening independence of irrelevant alternatives
Arrow's impossibility theorem proves that no ordinal preference aggregation method can simultaneously satisfy unrestricted domain, Pareto efficiency, independence of irrelevant alternatives (IIA), and non-dictatorship. Rather than claiming to overcome this theorem, post-Arrow social choice theory has spent 70 years developing practical mechanisms that work by deliberately weakening IIA.
Conitzer et al. (2024) emphasize this key insight: "for ordinal preference aggregation, in order to avoid dictatorships, oligarchies and vetoers, one must weaken IIA." Practical voting methods like Borda Count, Instant Runoff Voting, and Ranked Pairs all sacrifice IIA to achieve other desirable properties. This is not a failure—it's a principled tradeoff that enables functional collective decision-making.
The paper recommends examining specific voting methods that have been formally analyzed for their properties rather than searching for a mythical "perfect" aggregation method that Arrow proved cannot exist. Different methods make different tradeoffs, and the choice should depend on the specific alignment context.
## Evidence
- Arrow's impossibility theorem (1951) establishes the fundamental constraint
- Conitzer et al. (2024) explicitly state: "Rather than claiming to overcome Arrow's theorem, the paper leverages post-Arrow social choice theory"
- Specific mechanisms recommended: Borda Count, Instant Runoff, Ranked Pairs—all formally analyzed for their properties
- The paper proposes RLCHF variants that use these established social welfare functions rather than inventing new aggregation methods
## Practical Implications
This resolves a common confusion in AI alignment discussions: people often cite Arrow's theorem as proof that preference aggregation is impossible, when the actual lesson is that perfect aggregation is impossible and we must choose which properties to prioritize. The 70-year history of social choice theory provides a menu of well-understood options.
For AI alignment, this means: (1) stop searching for a universal aggregation method, (2) explicitly choose which Arrow conditions to relax based on the deployment context, (3) use established voting methods with known properties rather than ad-hoc aggregation.
---
Relevant Notes:
- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[persistent irreducible disagreement]]
Topics:
- domains/ai-alignment/_map
- core/mechanisms/_map
- foundations/collective-intelligence/_map

View file

@ -0,0 +1,47 @@
---
type: claim
domain: ai-alignment
secondary_domains: [mechanisms, collective-intelligence]
description: "AI alignment feedback should use citizens assemblies or representative sampling rather than crowdworker platforms to ensure evaluator diversity reflects actual populations"
confidence: likely
source: "Conitzer et al. (2024), 'Social Choice Should Guide AI Alignment' (ICML 2024)"
created: 2026-03-11
---
# Representative sampling and deliberative mechanisms should replace convenience platforms for AI alignment feedback
Conitzer et al. (2024) argue that current RLHF implementations use convenience sampling (crowdworker platforms like MTurk) rather than representative sampling or deliberative mechanisms. This creates systematic bias in whose values shape AI behavior. The paper recommends citizens' assemblies or stratified representative sampling as alternatives.
The core issue: crowdworker platforms systematically over-represent certain demographics (younger, more educated, Western, tech-comfortable) and under-represent others. If AI alignment depends on human feedback, the composition of the feedback pool determines whose values are encoded. Convenience sampling makes this choice implicitly based on who signs up for crowdwork platforms.
Deliberative mechanisms like citizens' assemblies add a second benefit: evaluators engage with each other's perspectives and reasoning, not just their initial preferences. This can surface shared values that aren't apparent from aggregating isolated individual judgments.
## Evidence
- Conitzer et al. (2024) explicitly recommend "representative sampling or deliberative mechanisms (citizens' assemblies) rather than convenience platforms"
- The paper cites [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]] as evidence that deliberative approaches work
- Current RLHF implementations predominantly use MTurk, Upwork, or similar platforms
## Practical Challenges
Representative sampling and deliberative mechanisms are more expensive and slower than crowdworker platforms. This creates competitive pressure: companies that use convenience sampling can iterate faster and cheaper than those using representative sampling. The paper doesn't address how to resolve this tension.
Additionally: representative of what population? Global? National? Users of the specific AI system? Different choices lead to different value distributions.
## Relationship to Existing Work
This recommendation directly supports [[collective intelligence requires diversity as a structural precondition not a moral preference]]—diversity isn't just normatively desirable, it's necessary for the aggregation mechanism to work correctly.
The deliberative component connects to [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]], which provides empirical evidence that deliberation improves alignment outcomes.
---
Relevant Notes:
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]]
- [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]]
Topics:
- domains/ai-alignment/_map
- core/mechanisms/_map
- foundations/collective-intelligence/_map

View file

@ -0,0 +1,49 @@
---
type: claim
domain: ai-alignment
secondary_domains: [mechanisms]
description: "The aggregated rankings variant of RLCHF applies formal social choice functions to combine multiple evaluator rankings before training the reward model"
confidence: experimental
source: "Conitzer et al. (2024), 'Social Choice Should Guide AI Alignment' (ICML 2024)"
created: 2026-03-11
---
# RLCHF aggregated rankings variant combines evaluator rankings via social welfare function before reward model training
Conitzer et al. (2024) propose Reinforcement Learning from Collective Human Feedback (RLCHF) as a formalization of preference aggregation in AI alignment. The aggregated rankings variant works by: (1) collecting rankings of AI responses from multiple evaluators, (2) combining these rankings using a formal social welfare function (e.g., Borda Count, Ranked Pairs), (3) training the reward model on the aggregated ranking rather than individual preferences.
This approach makes the social choice decision explicit and auditable. Instead of implicitly aggregating through dataset composition or reward model averaging, the aggregation happens at the ranking level using well-studied voting methods with known properties.
The key architectural choice: aggregation happens before reward model training, not during or after. This means the reward model learns from a collective preference signal rather than trying to learn individual preferences and aggregate them internally.
## Evidence
- Conitzer et al. (2024) describe two RLCHF variants; this is the first
- The paper recommends specific social welfare functions: Borda Count, Instant Runoff, Ranked Pairs
- This approach connects to 70+ years of social choice theory on voting methods
## Comparison to Standard RLHF
Standard RLHF typically aggregates preferences implicitly through:
- Dataset composition (which evaluators are included)
- Majority voting on pairwise comparisons
- Averaging reward model predictions
RLCHF makes this aggregation explicit and allows practitioners to choose aggregation methods based on their normative properties rather than computational convenience.
## Relationship to Existing Work
This mechanism directly addresses the failure mode identified in [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]. By aggregating at the ranking level with formal social choice functions, RLCHF preserves more information about preference diversity than collapsing to a single reward function.
The approach also connects to [[modeling preference sensitivity as a learned distribution rather than a fixed scalar resolves DPO diversity failures without demographic labels or explicit user modeling]]—both are attempts to handle preference heterogeneity more formally.
---
Relevant Notes:
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[modeling preference sensitivity as a learned distribution rather than a fixed scalar resolves DPO diversity failures without demographic labels or explicit user modeling]]
- [[post-arrow-social-choice-mechanisms-work-by-weakening-independence-of-irrelevant-alternatives]] <!-- claim pending -->
Topics:
- domains/ai-alignment/_map
- core/mechanisms/_map

View file

@ -0,0 +1,50 @@
---
type: claim
domain: ai-alignment
secondary_domains: [mechanisms]
description: "The features-based RLCHF variant learns individual preference models that incorporate evaluator characteristics allowing aggregation across demographic or value-based groups"
confidence: experimental
source: "Conitzer et al. (2024), 'Social Choice Should Guide AI Alignment' (ICML 2024)"
created: 2026-03-11
---
# RLCHF features-based variant models individual preferences with evaluator characteristics enabling aggregation across diverse groups
The second RLCHF variant proposed by Conitzer et al. (2024) takes a different approach: instead of aggregating rankings directly, it builds individual preference models that incorporate evaluator characteristics (demographics, values, context). These models can then be aggregated across groups, enabling context-sensitive preference aggregation.
This approach allows the system to learn: "People with characteristic X tend to prefer response type Y in context Z." Aggregation then happens by weighting or combining these learned preference functions according to a social choice rule, rather than aggregating raw rankings.
The key advantage: this variant can handle preference heterogeneity more flexibly than the aggregated rankings variant. It can adapt aggregation based on context, represent minority preferences explicitly, and enable "what would group X prefer?" queries.
## Evidence
- Conitzer et al. (2024) describe this as the second RLCHF variant
- The paper notes this approach "incorporates evaluator characteristics" and enables "aggregation across diverse groups"
- This connects to the broader literature on personalized and pluralistic AI systems
## Comparison to Aggregated Rankings Variant
Where the aggregated rankings variant collapses preferences into a single collective ranking before training, the features-based variant preserves preference structure throughout. This allows:
- Context-dependent aggregation (different social choice rules for different situations)
- Explicit representation of minority preferences
- Transparency about which groups prefer which responses
The tradeoff: higher complexity and potential for misuse (e.g., demographic profiling, value discrimination).
## Relationship to Existing Work
This approach is conceptually similar to [[modeling preference sensitivity as a learned distribution rather than a fixed scalar resolves DPO diversity failures without demographic labels or explicit user modeling]], but more explicit about incorporating evaluator features. Both recognize that preference heterogeneity is structural, not noise.
The features-based variant also connects to [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]]—both emphasize that different communities have different legitimate preferences that should be represented rather than averaged away.
---
Relevant Notes:
- [[modeling preference sensitivity as a learned distribution rather than a fixed scalar resolves DPO diversity failures without demographic labels or explicit user modeling]]
- [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]]
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
Topics:
- domains/ai-alignment/_map
- core/mechanisms/_map
- foundations/collective-intelligence/_map

View file

@ -0,0 +1,40 @@
---
type: claim
domain: ai-alignment
description: "Current RLHF implementations make social choice decisions about evaluator selection and preference aggregation without examining their normative properties"
confidence: likely
source: "Conitzer et al. (2024), 'Social Choice Should Guide AI Alignment' (ICML 2024)"
created: 2026-03-11
---
# RLHF is implicit social choice without normative scrutiny
Reinforcement Learning from Human Feedback (RLHF) necessarily makes social choice decisions—which humans provide input, what feedback is collected, how it's aggregated, and how it's used—but current implementations make these choices without examining their normative properties or drawing on 70+ years of social choice theory.
Conitzer et al. (2024) argue that RLHF practitioners implicitly answer fundamental social choice questions: Who gets to evaluate? How are conflicting preferences weighted? What aggregation method combines diverse judgments? These decisions have profound implications for whose values shape AI behavior, yet they're typically made based on convenience (e.g., using readily available crowdworker platforms) rather than principled normative reasoning.
The paper demonstrates that post-Arrow social choice theory has developed practical mechanisms that work within Arrow's impossibility constraints. RLHF essentially reinvented preference aggregation badly, ignoring decades of formal work on voting methods, welfare functions, and pluralistic decision-making.
## Evidence
- Conitzer et al. (2024) position paper at ICML 2024, co-authored by Stuart Russell (Berkeley CHAI) and leading social choice theorists
- Current RLHF uses convenience sampling (crowdworker platforms) rather than representative sampling or deliberative mechanisms
- The paper proposes RLCHF (Reinforcement Learning from Collective Human Feedback) as the formal alternative that makes social choice decisions explicit
## Relationship to Existing Work
This claim directly addresses the mechanism gap identified in [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]. Where that claim focuses on the technical failure mode (single reward function), this claim identifies the root cause: RLHF makes social choice decisions without social choice theory.
The paper's proposed solution—RLCHF with explicit social welfare functions—connects to [[collective intelligence requires diversity as a structural precondition not a moral preference]] by formalizing how diverse evaluator input should be preserved rather than collapsed.
---
Relevant Notes:
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[AI alignment is a coordination problem not a technical problem]]
Topics:
- domains/ai-alignment/_map
- core/mechanisms/_map
- foundations/collective-intelligence/_map

View file

@ -0,0 +1,43 @@
---
type: claim
domain: ai-alignment
description: "Formal impossibility result showing single reward models fail when human preferences are diverse across subpopulations"
confidence: likely
source: "Chakraborty et al., MaxMin-RLHF: Alignment with Diverse Human Preferences (ICML 2024)"
created: 2026-03-11
---
# Single-reward RLHF cannot align diverse preferences because alignment gap grows proportional to minority distinctiveness and inversely to representation
Chakraborty et al. (2024) provide a formal impossibility result: when human preferences are diverse across subpopulations, a singular reward model in RLHF cannot adequately align language models. The alignment gap—the difference between optimal alignment for each group and what a single reward achieves—grows proportionally to how distinct minority preferences are and inversely to their representation in the training data.
This is demonstrated empirically at two scales:
**GPT-2 scale:** Single RLHF optimized for positive sentiment (majority preference) while completely ignoring conciseness (minority preference). The model satisfied the majority but failed the minority entirely.
**Tulu2-7B scale:** When the preference ratio was 10:1 (majority:minority), single reward model accuracy on minority groups dropped from 70.4% (balanced case) to 42%. This 28-percentage-point degradation shows the structural failure mode.
The impossibility is structural, not a matter of insufficient training data or model capacity. A single reward function mathematically cannot capture context-dependent values that vary across identifiable subpopulations.
## Evidence
Chakraborty, Qiu, Yuan, Koppel, Manocha, Huang, Bedi, Wang. "MaxMin-RLHF: Alignment with Diverse Human Preferences." ICML 2024. https://arxiv.org/abs/2402.08925
- Formal proof that high subpopulation diversity leads to greater alignment gap
- GPT-2 experiment: single RLHF achieved positive sentiment but ignored conciseness
- Tulu2-7B experiment: minority group accuracy dropped from 70.4% to 42% at 10:1 ratio
### Additional Evidence (confirm)
*Source: [[2025-11-00-operationalizing-pluralistic-values-llm-alignment]] | Added: 2026-03-15*
Study demonstrates that models trained on different demographic populations show measurable behavioral divergence (3-5 percentage points), providing empirical evidence that single-reward functions trained on one population systematically misalign with others.
---
Relevant Notes:
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
Topics:
- domains/ai-alignment/_map

View file

@ -11,15 +11,21 @@ source: "Arrow's impossibility theorem; value pluralism (Isaiah Berlin); LivingI
Not all disagreement is an information problem. Some disagreements persist because people genuinely weight values differently -- liberty against equality, individual against collective, present against future, growth against sustainability. These are not failures of reasoning or gaps in evidence. They are structural features of a world where multiple legitimate values cannot all be maximized simultaneously.
[[Universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]]. Arrow proved this formally: no aggregation mechanism can satisfy all fairness criteria simultaneously when preferences genuinely diverge. The implication is not that we should give up on coordination, but that any system claiming to have resolved all disagreement has either suppressed minority positions or defined away the hard cases.
Universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective. Arrow proved this formally: no aggregation mechanism can satisfy all fairness criteria simultaneously when preferences genuinely diverge. The implication is not that we should give up on coordination, but that any system claiming to have resolved all disagreement has either suppressed minority positions or defined away the hard cases.
This matters for knowledge systems because the temptation is always to converge. Consensus feels like progress. But premature consensus on value-laden questions is more dangerous than sustained tension. A system that forces agreement on whether AI development should prioritize capability or safety, or whether economic growth or ecological preservation takes precedence, has not solved the problem -- it has hidden it. And hidden disagreements surface at the worst possible moments.
The correct response is to map the disagreement rather than eliminate it. Identify the common ground. Build steelman arguments for each position. Locate the precise crux -- is it empirical (resolvable with evidence) or evaluative (genuinely about different values)? Make the structure of the disagreement visible so that participants can engage with the strongest version of positions they oppose.
[[Pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]] -- this is the same principle applied to AI systems. [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]] -- collapsing diverse preferences into a single function is the technical version of premature consensus.
Pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state -- this is the same principle applied to AI systems. [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]] -- collapsing diverse preferences into a single function is the technical version of premature consensus.
[[Collective intelligence within a purpose-driven community faces a structural tension because shared worldview correlates errors while shared purpose enables coordination]]. Persistent irreducible disagreement is actually a safeguard here -- it prevents the correlated error problem by maintaining genuine diversity of perspective within a coordinated community. The independence-coherence tradeoff is managed not by eliminating disagreement but by channeling it productively.
Collective intelligence within a purpose-driven community faces a structural tension because shared worldview correlates errors while shared purpose enables coordination. Persistent irreducible disagreement is actually a safeguard here -- it prevents the correlated error problem by maintaining genuine diversity of perspective within a coordinated community. The independence-coherence tradeoff is managed not by eliminating disagreement but by channeling it productively.
### Additional Evidence (confirm)
*Source: [[2025-11-00-operationalizing-pluralistic-values-llm-alignment]] | Added: 2026-03-15*
Systematic variation of demographic composition in alignment training produced persistent behavioral differences across Liberal/Conservative, White/Black, and Female/Male populations, suggesting these reflect genuine value differences rather than information asymmetries that could be resolved.
---

View file

@ -0,0 +1,40 @@
---
type: claim
domain: collective-intelligence
description: "Agent-based modeling shows coordination emerges from cognitive capabilities rather than external incentive design"
confidence: experimental
source: "Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830"
created: 2026-03-11
secondary_domains: [ai-alignment, critical-systems]
depends_on: ["shared-anticipatory-structures-enable-decentralized-coordination", "shared-generative-models-underwrite-collective-goal-directed-behavior"]
---
# Collective intelligence emerges endogenously from active inference agents with Theory of Mind and Goal Alignment capabilities without requiring external incentive design
Kaufmann et al. (2021) demonstrate through agent-based modeling that collective intelligence "emerges endogenously from the dynamics of interacting AIF agents themselves, rather than being imposed exogenously by incentives" or top-down coordination protocols. The study uses the Active Inference Formulation (AIF) framework to simulate multi-agent systems where agents possess varying cognitive capabilities: baseline AIF agents, agents with Theory of Mind (ability to model other agents' internal states), agents with Goal Alignment, and agents with both capabilities.
The critical finding is that coordination and collective intelligence arise naturally from agent capabilities rather than requiring designed coordination mechanisms. When agents can model each other's beliefs and align on shared objectives, system-level performance improves through complementary coordination mechanisms. The paper shows that "improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state" — and this alignment occurs bottom-up through self-organization rather than top-down imposition.
This validates an architecture where agents have intrinsic drives (uncertainty reduction in active inference terms) rather than extrinsic reward signals, and where coordination protocols emerge from agent capabilities rather than being engineered.
## Evidence
- Agent-based simulations showing stepwise performance improvements as cognitive capabilities (Theory of Mind, Goal Alignment) are added to baseline AIF agents
- Demonstration that local agent dynamics produce emergent collective coordination when agents possess complementary information-theoretic patterns
- Empirical validation that coordination emerges from agent design (capabilities) rather than system design (protocols)
## Relationship to Existing Claims
This claim provides empirical agent-based evidence for:
- [[shared-anticipatory-structures-enable-decentralized-coordination]] — Theory of Mind creates shared anticipatory structures by allowing agents to model each other's beliefs
- [[shared-generative-models-underwrite-collective-goal-directed-behavior]] — Goal Alignment creates shared generative models of collective objectives
---
Relevant Notes:
- [[shared-anticipatory-structures-enable-decentralized-coordination]]
- [[shared-generative-models-underwrite-collective-goal-directed-behavior]]
Topics:
- collective-intelligence/_map
- ai-alignment/_map

View file

@ -0,0 +1,41 @@
---
type: claim
domain: collective-intelligence
description: "Individual optimization aligns with system-level objectives through emergent dynamics rather than imposed constraints"
confidence: experimental
source: "Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830"
created: 2026-03-11
secondary_domains: [mechanisms]
---
# Local-global alignment in active inference collectives occurs bottom-up through self-organization rather than top-down through imposed objectives
Kaufmann et al. (2021) demonstrate that "improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state" — and critically, this alignment emerges from the self-organizing dynamics of active inference agents rather than being imposed through top-down objectives or external incentives.
This finding challenges the conventional approach to multi-agent system design, which typically relies on carefully engineered incentive structures or explicit coordination protocols to align individual and collective objectives. Instead, the paper shows that when agents possess appropriate cognitive capabilities (Theory of Mind, Goal Alignment), local optimization naturally produces global coordination.
The mechanism is that active inference agents naturally minimize free energy (reduce uncertainty), and when they can model each other's states and share objectives, their individual uncertainty-reduction drives automatically align with system-level uncertainty reduction. No external alignment mechanism is required.
## Evidence
- Agent-based modeling showing that local agent optima align with global system states through emergent dynamics in AIF agents with Theory of Mind and Goal Alignment
- Demonstration that coordination emerges from agent capabilities rather than requiring external incentive design
- Empirical validation that bottom-up self-organization produces collective intelligence without top-down coordination
## Design Implications
For collective intelligence systems:
1. Focus on agent capabilities (what agents can do) rather than coordination protocols (what agents must do)
2. Give agents intrinsic drives (uncertainty reduction) rather than extrinsic rewards
3. Let coordination emerge rather than engineering it explicitly
This validates architectures where agents have research drives and domain specialization, with collective intelligence emerging from their interactions rather than being orchestrated.
---
Relevant Notes:
- [[shared-generative-models-underwrite-collective-goal-directed-behavior]]
Topics:
- collective-intelligence/_map
- mechanisms/_map

View file

@ -29,6 +29,12 @@ For multi-agent knowledge base systems: when all agents share an anticipation of
This suggests creating explicit "collective objectives" files that all agents read to reinforce shared protentions and strengthen coordination.
### Additional Evidence (extend)
*Source: [[2021-06-29-kaufmann-active-inference-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Kaufmann et al. (2021) provide agent-based modeling evidence that Theory of Mind — the ability to model other agents' internal states — creates shared anticipatory structures that enable coordination. Their simulations show that agents with Theory of Mind coordinate more effectively than baseline active inference agents, and that this capability provides complementary coordination mechanisms to Goal Alignment. The paper demonstrates that 'stepwise cognitive transitions increase system performance by providing complementary mechanisms' for coordination, with Theory of Mind being one such transition. This operationalizes the abstract concept of 'shared anticipatory structures' as a concrete agent capability: modeling other agents' beliefs and uncertainty.
---
Relevant Notes:

View file

@ -29,6 +29,12 @@ This claim provides a mechanistic explanation for how designing coordination rul
For multi-agent systems: rather than designing coordination protocols, design for shared model structures. Agents that share the same predictive framework will naturally coordinate.
### Additional Evidence (extend)
*Source: [[2021-06-29-kaufmann-active-inference-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Kaufmann et al. (2021) demonstrate through agent-based modeling that Goal Alignment — agents sharing high-level objectives while specializing in different domains — enables collective goal-directed behavior in active inference systems. Their key finding is that this alignment 'emerges endogenously from the dynamics of interacting AIF agents themselves, rather than being imposed exogenously by incentives.' The paper shows that when agents possess Goal Alignment capability, 'improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state' — and this alignment occurs bottom-up through self-organization. This provides empirical validation that shared generative models (in active inference terms, shared priors about collective objectives) enable coordination without requiring external incentive design.
---
Relevant Notes:

View file

@ -0,0 +1,39 @@
---
type: claim
domain: collective-intelligence
description: "Ability to model other agents' internal states produces quantifiable improvements in multi-agent coordination"
confidence: experimental
source: "Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830"
created: 2026-03-11
secondary_domains: [ai-alignment]
---
# Theory of Mind is a measurable cognitive capability that produces measurable collective intelligence gains in multi-agent systems
Kaufmann et al. (2021) operationalize Theory of Mind as a specific agent capability — the ability to model other agents' internal states — and demonstrate through agent-based modeling that this capability produces quantifiable improvements in collective coordination. Agents equipped with Theory of Mind coordinate more effectively than baseline active inference agents without this capability.
The study shows that Theory of Mind and Goal Alignment provide "complementary mechanisms" for coordination, with stepwise cognitive transitions increasing system performance. This means Theory of Mind is not just a philosophical concept but a concrete, implementable capability with measurable effects on collective intelligence.
For multi-agent system design, this suggests a concrete operationalization: agents should explicitly model what other agents believe and where their uncertainty concentrates. In practice, this could mean agents reading other agents' belief states and uncertainty maps before choosing research directions or coordination strategies.
## Evidence
- Agent-based simulations comparing baseline AIF agents to agents with Theory of Mind capability, showing performance improvements in collective coordination tasks
- Demonstration that Theory of Mind provides distinct coordination benefits beyond Goal Alignment alone
- Stepwise performance gains as cognitive capabilities are added incrementally
## Implementation Implications
For agent architectures:
1. Each agent should maintain explicit models of other agents' belief states
2. Agents should read other agents' uncertainty maps ("Where we're uncertain" sections) before choosing research directions
3. Coordination emerges from this capability rather than requiring explicit coordination protocols
---
Relevant Notes:
- [[shared-anticipatory-structures-enable-decentralized-coordination]]
Topics:
- collective-intelligence/_map
- ai-alignment/_map

View file

@ -0,0 +1,37 @@
---
type: claim
domain: critical-systems
description: "Each organizational level maintains its own Markov blanket, generative model, and free energy minimization dynamics"
confidence: likely
source: "Ramstead, Badcock, Friston (2018), 'Answering Schrödinger's Question: A Free-Energy Formulation', Physics of Life Reviews"
created: 2026-03-11
secondary_domains: [collective-intelligence, ai-alignment]
---
# Active inference operates at every scale of biological organization from cells to societies with each level maintaining its own Markov blanket generative model and free energy minimization dynamics
The free energy principle (FEP) extends beyond neural systems to explain the dynamics of living systems across all spatial and temporal scales. From molecular processes within cells to cellular organization within organs, from individual organisms to social groups, each level of biological organization implements active inference through its own Markov blanket structure.
This scale-free formulation means that the same mathematical principles governing prediction error minimization in neural systems also govern:
- Cellular homeostasis and metabolic regulation
- Organismal behavior and adaptation
- Social coordination and collective behavior
Each level maintains statistical boundaries (Markov blankets) that separate internal states from external states while allowing selective coupling through sensory and active states. The generative model at each scale encodes expectations about the level-appropriate environment, and free energy minimization drives both perception (updating beliefs) and action (changing the environment to match predictions).
The integration with Tinbergen's four research questions (mechanism, development, function, evolution) provides a structured framework for understanding how these dynamics operate: What mechanism implements inference at this scale? How does the system develop its generative model? What function does free energy minimization serve? How did this capacity evolve?
## Evidence
- Ramstead et al. (2018) demonstrate mathematical formalization of FEP across scales
- Nested Markov blanket structure observed empirically from cellular to social organization
- Variational neuroethology framework integrates FEP with established biological research paradigms
---
Relevant Notes:
- [[markov-blankets-enable-complex-systems-to-maintain-identity-while-interacting-with-environment-through-nested-statistical-boundaries]]
- [[emergence-is-the-fundamental-pattern-of-intelligence-from-ant-colonies-to-brains-to-civilizations]]
Topics:
- [[critical-systems/_map]]
- [[collective-intelligence/_map]]

View file

@ -0,0 +1,40 @@
---
type: claim
domain: critical-systems
description: "Biological organization consists of Markov blankets nested within Markov blankets enabling multi-scale coordination"
confidence: likely
source: "Ramstead, Badcock, Friston (2018), 'Answering Schrödinger's Question: A Free-Energy Formulation', Physics of Life Reviews"
created: 2026-03-11
depends_on: ["Active inference operates at every scale of biological organization from cells to societies with each level maintaining its own Markov blanket generative model and free energy minimization dynamics"]
secondary_domains: [collective-intelligence, ai-alignment]
---
# Nested Markov blankets enable hierarchical organization where each level minimizes its own prediction error while participating in higher-level free energy minimization
Biological systems exhibit a nested architecture where Markov blankets exist within Markov blankets at multiple scales simultaneously. A cell maintains its own statistical boundary (membrane) while being part of an organ's blanket, which itself exists within an organism's blanket, which participates in social group blankets.
This nesting enables hierarchical coordination without requiring centralized control:
- Each level can minimize free energy at its own scale using level-appropriate generative models
- Lower-level dynamics constrain but don't determine higher-level dynamics
- Higher-level predictions provide context that shapes lower-level inference
- The system maintains coherence across scales through aligned prediction error minimization
The nested structure explains how complex biological organization emerges: cells don't need to "know about" the organism's goals, they simply minimize their own free energy in an environment partially constituted by the organism's active inference. Similarly, organisms don't need explicit models of social dynamics—their individual inference naturally participates in collective patterns.
This architecture has direct implications for artificial systems: multi-agent AI architectures that mirror nested blanket organization (agent → team → collective) can achieve scale-appropriate inference where each level addresses uncertainty at its own scope while contributing to higher-level coherence.
## Evidence
- Ramstead et al. (2018) formalize nested blanket mathematics
- Empirical observation: cells within organs within organisms within social groups each maintain statistical boundaries
- Each level demonstrates autonomous inference (local free energy minimization) while participating in higher-level patterns
---
Relevant Notes:
- [[markov-blankets-enable-complex-systems-to-maintain-identity-while-interacting-with-environment-through-nested-statistical-boundaries]]
- [[living-agents-mirror-biological-markov-blanket-organization]]
- [[emergence-is-the-fundamental-pattern-of-intelligence-from-ant-colonies-to-brains-to-civilizations]]
Topics:
- [[critical-systems/_map]]
- [[collective-intelligence/_map]]

View file

@ -0,0 +1,41 @@
---
type: claim
domain: entertainment
secondary_domains: [cultural-dynamics]
description: "The Eras Tour demonstrates that commercial optimization and meaning creation reinforce rather than compete when business model rewards deep audience relationships"
confidence: likely
source: "Journal of the American Musicological Society, 'Experiencing Eras, Worldbuilding, and the Prismatic Liveness of Taylor Swift and The Eras Tour' (2024)"
created: 2026-03-11
depends_on: ["narratives are infrastructure not just communication because they coordinate action at civilizational scale"]
---
# Content serving commercial functions can simultaneously serve meaning functions when revenue model rewards relationship depth
The Eras Tour generated $4.1B+ in revenue while simultaneously functioning as what academic musicologists describe as "church-like" communal meaning-making infrastructure. This is not a tension but a reinforcement: the commercial function (tour revenue 7x recorded music revenue) and the meaning function ("cultural touchstone," "declaration of ownership over her art, image, and identity") strengthen each other because the same mechanism—deep audience relationship—drives both.
The tour operates as "virtuosic exercises in transmedia storytelling and worldbuilding" with "intricate and expansive worldbuilding employing tools ranging from costume changes to transitions in scenery, while lighting effects contrast with song- and era-specific video projections." This narrative infrastructure creates what audiences describe as "church-like" communal experiences where "it's all about community and being part of a movement" amid "society craving communal experiences amid increasing isolation."
Crucially, the content itself serves as a loss leader: recorded music revenue is dwarfed by tour revenue (7x multiple). But this commercial structure does not degrade the meaning function—it enables it. The scale of commercial success allows the narrative experience to coordinate "millions of lives" simultaneously, creating shared cultural reference points. Swift's re-recording of her catalog to reclaim master ownership (400+ trademarks across 16 jurisdictions) is simultaneously a commercial strategy and what the source describes as "culturally, the Eras Tour symbolized reclaiming narrative—a declaration of ownership over her art, image, and identity."
The AMC concert film distribution deal (57/43 split bypassing traditional studios) further demonstrates how commercial innovation and meaning preservation align: direct distribution maintains narrative control while maximizing revenue.
This challenges the assumption that commercial optimization necessarily degrades meaning creation. When the revenue model rewards depth of audience relationship (tour attendance, merchandise, community participation) rather than breadth of audience reach (streaming plays, ad impressions), commercial incentives align with meaning infrastructure investment.
## Evidence
- Journal of the American Musicological Society academic analysis describing the tour as "virtuosic exercises in transmedia storytelling and worldbuilding"
- $4.1B+ total Eras Tour revenue, 7x recorded music revenue (content as loss leader)
- Audience descriptions of "church-like aspect" and "community and being part of a movement"
- 400+ trademarks across 16 jurisdictions supporting narrative control
- Academic framing of tour as "cultural touchstone" where "audiences see themselves reflected in Swift's evolution"
- 3-hour concert functioning as "the soundtrack of millions of lives" (simultaneous coordination at scale)
---
Relevant Notes:
- [[narratives are infrastructure not just communication because they coordinate action at civilizational scale]]
- [[the media attractor state is community-filtered IP with AI-collapsed production costs where content becomes a loss leader for the scarce complements of fandom community and ownership]]
- [[creator-world-building-converts-viewers-into-returning-communities-by-creating-belonging-audiences-can-recognize-participate-in-and-return-to]]
Topics:
- domains/entertainment/_map
- foundations/cultural-dynamics/_map

View file

@ -22,6 +22,12 @@ This claim connects to the deeper structural argument in [[streaming churn may b
The "night and day" characterization is a single practitioner's account and may reflect Dropout's unusually strong brand rather than a universal pattern. The confidence is experimental because the qualitative relationship difference is asserted but not systematically measured across multiple creators.
### Additional Evidence (confirm)
*Source: [[2024-08-01-variety-indie-streaming-dropout-nebula-critical-role]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Nebula reports approximately 2/3 of subscribers on annual memberships, indicating high-commitment deliberate choice rather than casual trial. All three platforms (Dropout, Nebula, Critical Role) emphasize community-driven discovery over algorithm-driven discovery, with fandom-backed growth models. The dual-platform strategy—maintaining YouTube for algorithmic reach while monetizing through owned platforms—demonstrates that owned-platform subscribers are making deliberate choices to pay for content available (in some form) for free elsewhere.
---
Relevant Notes:

View file

@ -26,6 +26,12 @@ The $430M figure is particularly significant because it represents revenue flowi
Taylor Swift's direct theater distribution (AMC concert film, 57/43 revenue split) extends the creator-owned infrastructure thesis beyond digital streaming to physical exhibition venues. The deal demonstrates that creator-owned distribution infrastructure now spans digital streaming AND physical exhibition, suggesting the $430M creator streaming revenue figure understates total creator-owned distribution economics by excluding direct physical distribution deals. This indicates creator-owned infrastructure is broader than streaming-only and may represent a larger total addressable market than current estimates capture.
### Additional Evidence (extend)
*Source: [[2024-08-01-variety-indie-streaming-dropout-nebula-critical-role]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Dropout reached 1M+ subscribers by October 2025. Nebula revenue more than doubled in past year with approximately 2/3 of subscribers on annual memberships (high commitment signal indicating sustainable revenue). Critical Role launched Beacon at $5.99/month in May 2024 and invested in growth by hiring a General Manager for Beacon in January 2026. All three platforms maintain parallel YouTube presence for acquisition while monetizing through owned platforms, demonstrating the dual-platform strategy as a structural pattern across the category.
---
Relevant Notes:

View file

@ -0,0 +1,34 @@
---
type: claim
domain: entertainment
description: "Dropout, Nebula, and Critical Role all maintain YouTube presence for audience acquisition while capturing subscription revenue through owned platforms"
confidence: likely
source: "Variety (Todd Spangler), 2024-08-01 analysis of indie streaming platforms"
created: 2026-03-11
---
# Creator-owned streaming uses dual-platform strategy with free tier for acquisition and owned platform for monetization
Independent creator-owned streaming platforms are converging on a structural pattern: maintaining free content on algorithmic platforms (primarily YouTube) as top-of-funnel acquisition while monetizing through owned subscription platforms. This isn't "leaving YouTube" but rather "using YouTube as the acquisition layer while capturing value through owned distribution."
Dropout (1M+ subscribers), Nebula (revenue more than doubled in past year), and Critical Role's Beacon ($5.99/month, launched May 2024) all maintain parallel YouTube presences alongside their owned platforms. Critical Role explicitly segments content: some YouTube/Twitch-first, some Beacon-exclusive, some early access on Beacon.
This dual-platform architecture solves the discovery problem that pure owned-platform plays face: algorithmic platforms provide reach and discovery, while owned platforms capture the monetization upside from engaged fans. The pattern holds across different content verticals (comedy, educational, tabletop RPG), suggesting it's a structural solution rather than vertical-specific tactics.
## Evidence
- Dropout reached 1M+ subscribers (October 2025) while maintaining YouTube presence
- Nebula doubled revenue in past year with ~2/3 of subscribers on annual memberships (high commitment signal)
- Critical Role launched Beacon (May 2024) and hired General Manager (January 2026) while maintaining YouTube/Twitch distribution
- All three platforms serve niche audiences with high willingness-to-pay
- Community-driven discovery model supplements (not replaces) algorithmic discovery
---
Relevant Notes:
- [[creator-owned-streaming-infrastructure-has-reached-commercial-scale-with-430M-annual-creator-revenue-across-13M-subscribers]]
- [[creator-owned-direct-subscription-platforms-produce-qualitatively-different-audience-relationships-than-algorithmic-social-platforms-because-subscribers-choose-deliberately]]
- [[fanchise management is a stack of increasing fan engagement from content extensions through co-creation and co-ownership]]
Topics:
- domains/entertainment/_map

View file

@ -32,6 +32,12 @@ The craft pillar of ExchangeWire's 2026 framework describes the underlying produ
Rated experimental because: the evidence is industry analysis and qualitative characterization. No systematic data on whether world-building creators show higher retention rates than non-world-building creators at equivalent reach levels. The claim describes an observed pattern and practitioner framework, not a controlled causal finding.
### Additional Evidence (extend)
*Source: [[2024-10-01-jams-eras-tour-worldbuilding-prismatic-liveness]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Academic musicologists are now analyzing major concert tours using worldbuilding frameworks, treating live performance as narrative infrastructure. The Eras Tour demonstrates specific worldbuilding mechanisms: 'intricate and expansive worldbuilding employs tools ranging from costume changes to transitions in scenery, while lighting effects contrast with song- and era-specific video projections.' The tour's structure around distinct 'eras' creates persistent narrative scaffolding that audiences use to organize their own life experiences—'audiences see themselves reflected in Swift's evolution.' This produces what participants describe as 'church-like' communal experiences where 'it's all about community and being part of a movement,' filling the gap of 'society craving communal experiences amid increasing isolation.' The 3-hour concert functions as 'the soundtrack of millions of lives' by providing narrative architecture that coordinates shared meaning at scale.
---
Relevant Notes:

View file

@ -29,6 +29,12 @@ Claynosaurz-Mediawan production implements the co-creation layer through three s
Claynosaurz-Mediawan partnership provides concrete implementation of the co-creation layer: (1) sharing storyboards with community during development, (2) sharing portions of scripts for community input, and (3) featuring community-owned digital collectibles within series episodes. This moves beyond abstract 'co-creation' to specific mechanisms. The partnership was secured after the community demonstrated 450M+ views and 530K+ subscribers, showing how proven co-ownership (collectible holders) and content consumption metrics enable progression to co-creation with major studios (Mediawan Kids & Family). The 39-episode series targets kids 6-12 with YouTube-first distribution, suggesting co-creation models are viable at commercial scale with traditional media partners.
### Additional Evidence (confirm)
*Source: [[2024-08-01-variety-indie-streaming-dropout-nebula-critical-role]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Dropout, Nebula, and Critical Role all serve niche audiences with high willingness-to-pay through community-driven (not algorithm-driven) discovery. Critical Role's Beacon explicitly segments content by engagement level: some YouTube/Twitch-first (broad reach), some Beacon-exclusive (high engagement), some early access on Beacon (intermediate engagement). This tiered access structure maps directly to the fanchise stack concept, with free content as entry point and owned-platform subscriptions as higher engagement tier. Nebula's ~2/3 annual membership rate indicates subscribers making deliberate, high-commitment choices rather than casual consumption.
---
Relevant Notes:

View file

@ -0,0 +1,41 @@
---
type: claim
domain: entertainment
description: "Dropout, Nebula, and Critical Role represent category emergence not isolated cases as evidenced by Variety treating them as comparable business models"
confidence: likely
source: "Variety (Todd Spangler), 2024-08-01 first major trade coverage of indie streaming as category"
created: 2026-03-11
---
# Indie streaming platforms emerged as category by 2024 with convergent structural patterns across content verticals
By mid-2024, independent creator-owned streaming platforms had evolved from isolated experiments to a recognized category with convergent structural patterns. Variety's August 2024 analysis treating Dropout, Nebula, and Critical Role's Beacon as comparable business models—rather than unrelated individual cases—signals trade press recognition of category formation.
The category is defined by:
- Creator ownership (not VC-backed platforms)
- Niche audience focus with high willingness-to-pay
- Community-driven rather than algorithm-driven discovery
- Fandom-backed growth model
- Dual-platform strategy (free tier for acquisition, owned for monetization)
Crucially, these patterns hold across different content verticals: Dropout (comedy), Nebula (educational), Critical Role (tabletop RPG). The structural convergence despite content differences suggests these are solutions to common distribution and monetization problems, not vertical-specific tactics.
The timing matters: this is the first major entertainment trade publication to analyze indie streaming as a category rather than profiling individual companies. Category recognition by trade press typically lags actual market formation by 12-24 months, suggesting the structural pattern was established by 2023.
## Evidence
- Variety published first category-level analysis (August 2024) rather than individual company profiles
- Three platforms across different content verticals (comedy, educational, tabletop RPG) show convergent structural patterns
- All three reached commercial scale: Dropout 1M+ subscribers, Nebula revenue doubled year-over-year, Critical Role hired GM for Beacon expansion
- Shared characteristics: creator ownership, niche audiences, community-driven growth, dual-platform strategy
- Trade press category recognition typically lags market formation by 12-24 months
---
Relevant Notes:
- [[creator-owned-streaming-infrastructure-has-reached-commercial-scale-with-430M-annual-creator-revenue-across-13M-subscribers]]
- [[fanchise management is a stack of increasing fan engagement from content extensions through co-creation and co-ownership]]
- [[media disruption follows two sequential phases as distribution moats fall first and creation moats fall second]]
Topics:
- domains/entertainment/_map

View file

@ -0,0 +1,38 @@
---
type: claim
domain: entertainment
secondary_domains: [cultural-dynamics]
description: "Academic analysis frames concert tours as worldbuilding infrastructure that coordinates communal meaning-making at scale through transmedia storytelling"
confidence: experimental
source: "Journal of the American Musicological Society, 'Experiencing Eras, Worldbuilding, and the Prismatic Liveness of Taylor Swift and The Eras Tour' (2024)"
created: 2026-03-11
depends_on: ["narratives are infrastructure not just communication because they coordinate action at civilizational scale"]
---
# Worldbuilding as narrative infrastructure creates communal meaning through transmedia coordination of audience experience
Academic musicologists are analyzing major concert tours using "worldbuilding" frameworks traditionally applied to fictional universes, treating live performance as narrative infrastructure rather than mere entertainment. The Eras Tour demonstrates how "intricate and expansive worldbuilding employs tools ranging from costume changes to transitions in scenery, while lighting effects contrast with song- and era-specific video projections" to create coherent narrative experiences that coordinate audience emotional and social responses.
This worldbuilding operates as infrastructure because it creates persistent reference points that audiences use to organize meaning. The tour's structure around distinct "eras" provides narrative scaffolding that millions of people simultaneously use to interpret their own life experiences—what the source describes as audiences seeing "themselves reflected in Swift's evolution." The "reinvention and worldbuilding at the core of Swift's star persona" creates a shared symbolic vocabulary that enables communal meaning-making.
The "church-like aspect of going to concerts with mega artists like Swift" emerges from this infrastructure function: the tour provides ritualized communal experiences where "it's all about community and being part of a movement." This fills what the source identifies as society "craving communal experiences amid increasing isolation"—a meaning infrastructure gap that traditional institutions no longer fill.
The academic framing is significant: top-tier musicology journals treating concert tours as "transmedia storytelling and worldbuilding" validates that narrative infrastructure operates across media forms, not just in traditional storytelling formats. The 3-hour concert functions as "the soundtrack of millions of lives" precisely because it provides narrative architecture that audiences can inhabit and use to coordinate shared meaning.
## Evidence
- Journal of the American Musicological Society (top-tier academic journal) analyzing tour as "virtuosic exercises in transmedia storytelling and worldbuilding"
- "Intricate and expansive worldbuilding employs tools ranging from costume changes to transitions in scenery, while lighting effects contrast with song- and era-specific video projections"
- "Reinvention and worldbuilding at the core of Swift's star persona"
- Audience descriptions of "church-like aspect" where "it's all about community and being part of a movement"
- "Society is craving communal experiences amid increasing isolation"
- Tour as "cultural touchstone" where "audiences see themselves reflected in Swift's evolution"
---
Relevant Notes:
- [[narratives are infrastructure not just communication because they coordinate action at civilizational scale]]
- [[creator-world-building-converts-viewers-into-returning-communities-by-creating-belonging-audiences-can-recognize-participate-in-and-return-to]]
Topics:
- domains/entertainment/_map
- foundations/cultural-dynamics/_map

View file

@ -27,6 +27,12 @@ This is not an American problem alone. The American diet and lifestyle are sprea
The four major risk factors behind the highest burden of noncommunicable disease -- tobacco use, harmful use of alcohol, unhealthy diets, and physical inactivity -- are all lifestyle factors that simple interventions could address. The gap between what science knows works (lifestyle modification) and what the system delivers (pharmaceutical symptom management) represents one of the largest misalignments in the modern economy.
### Additional Evidence (extend)
*Source: [[2025-06-01-cell-med-glp1-societal-implications-obesity]] | Added: 2026-03-15*
GLP-1s may function as a pharmacological counter to engineered food addiction. The population-level obesity decline (39.9% to 37.0%) coinciding with 12.4% adult GLP-1 adoption suggests pharmaceutical intervention can partially offset the metabolic consequences of engineered hyperpalatable foods, though this addresses symptoms rather than root causes of the food environment.
---
Relevant Notes:

View file

@ -17,6 +17,24 @@ But the economics are structurally inflationary. Meta-analyses show patients reg
The competitive dynamics (Lilly vs. Novo vs. generics post-2031) will drive prices down, but volume growth more than offsets price compression. GLP-1s will be the single largest driver of pharmaceutical spending growth globally through 2035.
### Additional Evidence (extend)
*Source: [[2024-08-01-jmcp-glp1-persistence-adherence-commercial-populations]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Real-world persistence data from 125,474 commercially insured patients shows the chronic use model fails not because patients choose indefinite use, but because most cannot sustain it: only 32.3% of non-diabetic obesity patients remain on GLP-1s at one year, dropping to approximately 15% at two years. This creates a paradox for payer economics—the "inflationary chronic use" concern assumes sustained adherence, but the actual problem is insufficient persistence. Under capitation, payers pay for 12 months of therapy ($2,940 at $245/month) for patients who discontinue and regain weight, capturing net cost with no downstream savings from avoided complications. The economics only work if adherence is sustained AND the payer captures downstream benefits—with 85% discontinuing by two years, the downstream cardiovascular and metabolic savings that justify the cost never materialize for most patients.
### Additional Evidence (extend)
*Source: [[2025-06-01-cell-med-glp1-societal-implications-obesity]] | Added: 2026-03-15*
The Cell Press review characterizes GLP-1s as marking a 'system-level redefinition' of cardiometabolic management with 'ripple effects across healthcare costs, insurance models, food systems, long-term population health.' Obesity costs the US $400B+ annually, providing context for the scale of potential cost impact. The WHO issued conditional recommendations within 2 years of widespread adoption (December 2025), unusually fast for a major therapeutic category.
### Additional Evidence (extend)
*Source: [[2025-03-01-medicare-prior-authorization-glp1-near-universal]] | Added: 2026-03-15*
MA plans' near-universal prior authorization creates administrative friction that may worsen the already-poor adherence rates for GLP-1s. PA requirements ensure only T2D-diagnosed patients can access, effectively blocking obesity-only coverage despite FDA approval. This access restriction compounds the chronic-use economics challenge by adding administrative barriers on top of existing adherence problems.
---
Relevant Notes:

View file

@ -0,0 +1,53 @@
---
type: claim
domain: health
secondary_domains: [internet-finance, grand-strategy]
description: "CBO and ASPE diverge by $35.7B on GLP-1 Medicare coverage because budget scoring rules structurally discount prevention economics"
confidence: likely
source: "ASPE Medicare Coverage of Anti-Obesity Medications analysis (2024-11-01), CBO scoring methodology"
created: 2026-03-11
---
# Federal budget scoring methodology systematically undervalues preventive interventions because the 10-year scoring window and conservative uptake assumptions exclude long-term downstream savings
The CBO vs. ASPE divergence on Medicare GLP-1 coverage reveals a structural bias in how prevention economics are evaluated at the federal policy level. CBO estimates that authorizing Medicare coverage for anti-obesity medications would increase federal spending by $35 billion over 2026-2034. ASPE's clinical economics analysis of the same policy estimates net savings of $715 million over 10 years (with alternative scenarios ranging from $412M to $1.04B in savings).
Both analyses are technically correct but answer fundamentally different questions:
**CBO's budget scoring perspective** counts direct drug costs within a 10-year budget window using conservative assumptions about uptake and downstream savings. It does not fully account for avoided hospitalizations, disease progression costs, and long-term health outcomes that fall outside the scoring window or involve methodological uncertainty.
**ASPE's clinical economics perspective** includes downstream event avoidance: 38,950 cardiovascular events avoided and 6,180 deaths avoided over 10 years under broad semaglutide access scenarios. These avoided events generate savings that offset drug costs, producing net savings rather than net costs.
The $35.7 billion gap between these estimates is not a minor methodological difference—it represents a fundamentally different answer to "are GLP-1s worth covering?" The budget scoring rules structurally disadvantage preventive interventions because:
1. **Time horizon truncation**: The 10-year scoring window captures drug costs (immediate) but truncates long-term health benefits (decades)
2. **Conservative uptake assumptions**: CBO assumes lower utilization than clinical models predict, reducing both costs and benefits but asymmetrically affecting the net calculation
3. **Downstream savings discounting**: Avoided hospitalizations and disease progression are harder to score with certainty than direct drug expenditures, leading to systematic underweighting
This methodological divergence has profound policy consequences. The political weight of CBO scoring often overrides clinical economics in Congressional decision-making, even when the clinical evidence strongly supports coverage expansion. The same structural bias affects all preventive health investments—screening programs, vaccines, early intervention services—creating a systematic policy tilt away from prevention despite strong clinical and economic rationale.
The GLP-1 case is particularly stark because the clinical evidence is robust (cardiovascular outcomes trials, real-world effectiveness data) and the eligible population is large (~10% of Medicare beneficiaries under proposed criteria requiring comorbidities). Yet budget scoring methodology produces a "$35B cost" headline that dominates policy debate, while the "$715M savings" clinical economics analysis receives less political weight.
## Evidence
- ASPE analysis: CBO estimate of $35B additional federal spending (2026-2034) vs. ASPE estimate of $715M net savings over 10 years
- Clinical outcomes under broad semaglutide access: 38,950 CV events avoided, 6,180 deaths avoided over 10 years
- Eligibility: ~10% of Medicare beneficiaries under proposed criteria (requiring comorbidities: CVD history, heart failure, CKD, prediabetes)
- Annual Part D cost increase: $3.1-6.1 billion under coverage expansion
## Challenges
The claim that budget scoring "systematically" undervalues prevention requires evidence beyond a single case. However, the GLP-1 divergence is consistent with known CBO methodology (10-year window, conservative assumptions) and parallels similar scoring challenges for other preventive interventions (vaccines, screening programs). The structural bias is well-documented in health policy literature, though this source provides the most dramatic single-case illustration.
---
Relevant Notes:
- [[the healthcare cost curve bends up through 2035 because new curative and screening capabilities create more treatable conditions faster than prices decline]]
- [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
- [[proxy inertia is the most reliable predictor of incumbent failure because current profitability rationally discourages pursuit of viable futures]]
- [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
Topics:
- domains/health/_map
- core/mechanisms/_map
- foundations/teleological-economics/_map

View file

@ -0,0 +1,41 @@
---
type: claim
domain: health
description: "Semaglutide shows simultaneous benefits across kidney (24% risk reduction), cardiovascular death (29% reduction), and major CV events (18% reduction) in single trial population"
confidence: likely
source: "NEJM FLOW Trial kidney outcomes, Nature Medicine SGLT2 combination analysis"
created: 2026-03-11
---
# GLP-1 multi-organ protection creates compounding value across kidney cardiovascular and metabolic endpoints simultaneously rather than treating conditions in isolation
The FLOW trial was designed as a kidney outcomes study but revealed benefits across multiple organ systems in the same patient population. In 3,533 patients with type 2 diabetes and chronic kidney disease:
- Kidney disease progression: 24% lower risk (HR 0.76, P=0.0003)
- Cardiovascular death: 29% reduction (HR 0.71, 95% CI 0.56-0.89)
- Major cardiovascular events: 18% lower risk
- Annual eGFR decline: 1.16 mL/min/1.73m2 slower (P<0.001)
This pattern suggests GLP-1 receptor agonists work through systemic mechanisms that protect multiple organ systems simultaneously, rather than through organ-specific pathways. The cardiovascular mortality benefit appearing in a kidney trial is particularly striking — it suggests these benefits are even broader than expected.
A separate Nature Medicine analysis demonstrated additive benefits when semaglutide is combined with SGLT2 inhibitors, indicating these mechanisms are complementary rather than redundant.
For value-based care models and capitated payers, this multi-organ protection creates compounding value: a single therapeutic intervention reduces costs across kidney, cardiovascular, and metabolic disease management simultaneously. This is the economic foundation of the multi-indication benefit thesis.
## Evidence
- FLOW trial: simultaneous measurement of kidney, CV, and metabolic endpoints in same population
- Kidney: 24% risk reduction (HR 0.76)
- CV death: 29% reduction (HR 0.71)
- Major CV events: 18% reduction
- Nature Medicine: additive benefits with SGLT2 inhibitors
- First GLP-1 to receive FDA indication for CKD in T2D patients
---
Relevant Notes:
- [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
- [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
- [[the healthcare cost curve bends up through 2035 because new curative and screening capabilities create more treatable conditions faster than prices decline]]
Topics:
- domains/health/_map

View file

@ -0,0 +1,58 @@
---
type: claim
domain: health
description: "Two-year real-world data shows only 15% of non-diabetic obesity patients remain on GLP-1s, meaning most patients discontinue before downstream health benefits can materialize to offset drug costs"
confidence: likely
source: "Journal of Managed Care & Specialty Pharmacy, Real-world Persistence and Adherence to GLP-1 RAs Among Obese Commercially Insured Adults Without Diabetes, 2024-08-01"
created: 2026-03-11
depends_on: ["GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035"]
---
# GLP-1 persistence drops to 15 percent at two years for non-diabetic obesity patients undermining chronic use economics
Real-world claims data from 125,474 commercially insured patients initiating GLP-1 receptor agonists for obesity (without type 2 diabetes) reveals a persistence curve that fundamentally challenges the economic model: 46.3% remain on treatment at 180 days, 32.3% at one year, and approximately 15% at two years.
This creates a paradox for payer economics. The "chronic use inflation" concern assumes patients stay on GLP-1s indefinitely at $2,940+ annually. But the actual problem may be insufficient persistence: under capitation, a Medicare Advantage plan pays for 12 months of GLP-1 therapy for a patient who discontinues and regains weight—net cost with no downstream savings from avoided complications.
The economics only work if adherence is sustained AND the payer captures downstream benefits. With 85% of non-diabetic patients discontinuing by two years, the downstream cardiovascular and metabolic savings that justify the cost never materialize for most patients.
## Evidence
**Persistence rates for non-diabetic obesity patients:**
- 180 days: 46.3%
- 1 year: 32.3%
- 2 years: ~15%
**Comparison with diabetic patients:**
- Non-diabetic patients: 67.7% discontinue within 1 year
- Diabetic patients: 46.5% discontinue within 1 year (better persistence due to stronger clinical indication)
- Danish registry data: 21.2% of T2D patients discontinue within 12 months; ~70% discontinue within 2 years
**Drug-specific variation:**
- Semaglutide: 47.1% persistence at 1 year (highest)
- Liraglutide: 19.2% persistence at 1 year (lowest)
- Formulation matters: oral formulations may improve adherence by removing injection barrier
**Key discontinuation factors:**
- Insufficient weight loss (clinical disappointment)
- Income level (lower income → higher discontinuation, suggesting affordability/access barriers)
- Adverse events (primarily GI side effects)
- Insurance coverage changes
**Critical nuance from source:** "Outcomes approach trial-level results when focusing on highly adherent patients. The adherence problem is not that the drugs don't work—it's that most patients don't stay on them."
## Challenges
This data comes from commercially insured populations (younger, fewer comorbidities than Medicare). Medicare populations may show different persistence patterns due to higher disease burden and stronger clinical indications. However, Medicare patients also face higher cost-sharing barriers, which could worsen adherence.
No data yet on whether payment model affects persistence—does being in an MA plan with care coordination improve adherence vs. fee-for-service? This is directly relevant to value-based care design.
---
Relevant Notes:
- [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
- [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
- [[medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm]]
Topics:
- domains/health/_map

View file

@ -0,0 +1,40 @@
---
type: claim
domain: health
description: "McKinsey projects 25% of Medicare cost of care could migrate from facilities to home settings enabled by RPM technology and hospital-at-home models"
confidence: likely
source: "McKinsey & Company, From Facility to Home: How Healthcare Could Shift by 2025 (2021)"
created: 2026-03-11
---
# Home-based care could capture $265 billion in Medicare spending by 2025 through hospital-at-home remote monitoring and post-acute shift
Up to $265 billion in care services—representing 25% of total Medicare cost of care—could shift from facilities to home by 2025, a 3-4x increase from current baseline (~$65 billion). This migration is enabled by three converging forces: proven cost savings from hospital-at-home models (19-30% savings at Johns Hopkins, 52% lower costs for heart failure patients), accelerating technology adoption (RPM market growing from $29B to $138B at 19% CAGR through 2033, with 71M Americans expected to use RPM by 2025), and demand-side pull (94% of Medicare beneficiaries prefer home-based post-acute care, with COVID permanently shifting care delivery expectations).
The services ready to shift include primary care, outpatient specialist consults, hospice, behavioral health (already feasible), plus dialysis, post-acute care, long-term care, and infusions (requiring "stitchable capabilities" but technologically viable). The gap between current ($65B) and projected ($265B) home care capacity represents the same order of magnitude as the value-based care payment transition.
## Evidence
- Johns Hopkins hospital-at-home programs demonstrate 19-30% cost savings versus traditional in-hospital care
- Systematic review shows home care for heart failure patients achieves 52% lower costs
- Remote patient monitoring market projected to grow from $29B (2024) to $138B (2033) at 19% CAGR
- AI in RPM segment growing faster at 27.5% CAGR, from $2B (2024) to $8.4B (2030)
- Home healthcare is the fastest-growing RPM end-use segment at 25.3% CAGR
- 71 million Americans expected to use RPM by 2025
- 94% of Medicare beneficiaries prefer home-based post-acute care
- 16% of 65+ respondents more likely to receive home health post-pandemic (McKinsey Consumer Health Insights, June 2021)
## Relationship to Attractor State
This facility-to-home migration is the physical infrastructure layer of [[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]]. If value-based care provides the payment alignment and continuous monitoring provides the data layer, the home is where these capabilities converge into actual care delivery. The 3-4x scaling requirement ($65B → $265B) matches the magnitude of the VBC payment transition tracked in [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]].
---
Relevant Notes:
- [[continuous health monitoring is converging on a multi-layer sensor stack of ambient wearables periodic patches and environmental sensors processed through AI middleware]]
- [[healthcares defensible layer is where atoms become bits because physical-to-digital conversion generates the data that powers AI care while building patient trust that software alone cannot create]]
- [[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]]
- [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
Topics:
- domains/health/_map

View file

@ -0,0 +1,33 @@
---
type: claim
domain: health
description: "Japan at 28.4 percent elderly with 6M aged 85-plus growing to 10M by 2040 shows US what comes next"
confidence: proven
source: "PMC/JMA Journal Japan LTCI paper (2021) demographic data"
created: 2026-03-11
---
# Japan's demographic trajectory provides a 20-year preview of US long-term care challenges
Japan is the most aged country in the world with 28.4% of its population aged 65+ as of 2019, expected to plateau at approximately 40% in 2040-2050. The country currently has 6 million people aged 85+, projected to reach 10 million by 2040. This represents the demographic reality the United States will face with approximately a 20-year lag.
The US is currently at roughly 20% elderly population and rising. Japan's experience operating a mandatory universal Long-Term Care Insurance system under these extreme demographic conditions provides the clearest empirical preview of what the US will face — and demonstrates that a structural financing solution is both necessary and viable.
Japan's demographic challenge is not a distant theoretical problem; it is the current operational reality that their LTCI system has been managing since 2000. The 85+ population growth from 6M to 10M by 2040 represents the highest-acuity, highest-cost cohort that will drive long-term care demand. The US will face this same transition, but currently has no financing infrastructure equivalent to Japan's LTCI.
## Evidence
- Japan: 28.4% of population 65+ (2019), expected to plateau at ~40% (2040-2050)
- Japan: 6 million aged 85+ currently, growing to 10 million by 2040
- US: currently ~20% elderly, rising toward Japan's current 28.4% level
- Demographic lag between Japan and US estimated at ~20 years
- Japan's LTCI has operated continuously through this demographic transition since 2000
---
Relevant Notes:
- [[japan-ltci-proves-mandatory-universal-long-term-care-insurance-is-viable-at-national-scale]] <!-- claim pending -->
- [[us-long-term-care-financing-gap-is-largest-unaddressed-structural-problem-in-american-healthcare]] <!-- claim pending -->
- [[the epidemiological transition marks the shift from material scarcity to social disadvantage as the primary driver of health outcomes in developed nations]]
Topics:
- domains/health/_map

View file

@ -0,0 +1,38 @@
---
type: claim
domain: health
description: "25 years of operation covering 5+ million beneficiaries demonstrates durability under extreme aging demographics"
confidence: proven
source: "PMC/JMA Journal, 'The Long-Term Care Insurance System in Japan: Past, Present, and Future' (2021)"
created: 2026-03-11
---
# Japan's LTCI proves mandatory universal long-term care insurance is viable at national scale
Japan implemented mandatory public Long-Term Care Insurance (LTCI) on April 1, 2000, creating a universal system that has operated continuously for 25 years. The system is financed through 50% mandatory premiums (all citizens 40+) and 50% taxes (split between national, prefecture, and municipal levels). As of 2015, the system provided benefits to over 5 million persons aged 65+ — approximately 17% of Japan's elderly population.
The system integrates medical care with welfare services, offers both facility-based and home-based care chosen by beneficiaries, and operates through 7 care level tiers from "support required" to "long-term care level 5." This structure has successfully shifted the burden from family caregiving to social solidarity while improving access and reducing financial burden on families.
Japan implemented this system while being the most aged country in the world (28.4% of population 65+ as of 2019, expected to plateau at ~40% in 2040-2050). The system's 25-year operational track record under these extreme demographic conditions demonstrates that mandatory universal long-term care insurance is implementable, durable, and scalable at national level.
## Evidence
- Mandatory participation: all citizens 40+ pay premiums with no opt-out or coverage gaps
- Universal coverage regardless of income, unlike means-tested approaches
- 5+ million beneficiaries receiving care (17% of 65+ population) as of 2015
- Integrated medical + social + welfare services under single system
- 25 years of continuous operation (2000-2025) through demographic transition
- Operated successfully while elderly population grew from ~17% to 28.4%
## Challenges
- Financial sustainability under extreme aging demographics remains ongoing concern
- Caregiver workforce shortage parallels challenges in other developed nations
- Requires ongoing adjustments to premiums and copayments
---
Relevant Notes:
- [[modernization dismantles family and community structures replacing them with market and state relationships that increase individual freedom but erode psychosocial foundations of wellbeing]]
- [[social isolation costs Medicare 7 billion annually and carries mortality risk equivalent to smoking 15 cigarettes per day making loneliness a clinical condition not a personal problem]]
Topics:
- domains/health/_map

View file

@ -0,0 +1,48 @@
---
type: claim
domain: health
description: "Income level correlates with GLP-1 discontinuation rates in commercially insured populations, indicating that cost-sharing and affordability barriers drive adherence as much as clinical factors like side effects or insufficient weight loss"
confidence: experimental
source: "Journal of Managed Care & Specialty Pharmacy, Real-world Persistence and Adherence to GLP-1 RAs Among Obese Commercially Insured Adults Without Diabetes, 2024-08-01"
created: 2026-03-11
---
# Lower-income patients show higher GLP-1 discontinuation rates suggesting affordability not just clinical factors drive persistence
Among the factors associated with GLP-1 discontinuation in commercially insured populations, income level emerges as a significant predictor: lower-income patients show higher discontinuation rates even when controlling for other factors.
This is notable because the study population is commercially insured—meaning all patients have coverage. The income effect suggests that cost-sharing (copays, deductibles) creates an affordability barrier even within insured populations. For Medicare populations with higher cost-sharing and lower average incomes, this barrier may be substantially worse.
The implication for value-based care design: reducing patient cost-sharing for GLP-1s (through zero-copay programs or coverage carve-outs) may improve persistence enough to make the downstream ROI positive. The relevant question is not "does the drug work?" but "can patients afford to stay on it long enough for it to work?"
## Evidence
**Key discontinuation factors identified:**
- Insufficient weight loss (clinical disappointment)
- **Income level (lower income → higher discontinuation)**
- Adverse events (GI side effects)
- Insurance coverage changes
The source notes income as a factor but does not provide the specific discontinuation rate by income quartile. This limits the strength of the claim to experimental confidence.
**Context:**
- Study population: commercially insured adults (younger, higher income than Medicare)
- Even within this relatively advantaged population, income predicts discontinuation
- Medicare populations face higher cost-sharing (Part D coverage gap, higher average out-of-pocket costs)
**Mechanism hypothesis:**
At $245/month list price, even modest copays ($50-100/month) create a sustained affordability barrier. Patients may initiate treatment but discontinue when the monthly cost becomes unsustainable relative to household budget.
## Challenges
The source does not provide granular income-stratified discontinuation rates, so the magnitude of the effect is unclear. It's possible income is a proxy for other factors (health literacy, access to care coordination, baseline health status) rather than affordability per se.
---
Relevant Notes:
- [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
- [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
- [[SDOH interventions show strong ROI but adoption stalls because Z-code documentation remains below 3 percent and no operational infrastructure connects screening to action]]
Topics:
- domains/health/_map

View file

@ -25,6 +25,12 @@ The most troubling signal is that the largest increase in suicide rates has occu
Progress should mean happier, healthier populations, not merely more material possessions. Since [[Americas declining life expectancy is driven by deaths of despair concentrated in populations and regions most damaged by economic restructuring since the 1980s]], the US reversal in life expectancy is the empirical confirmation that modernization without psychosocial infrastructure produces net harm past a critical threshold.
### Additional Evidence (extend)
*Source: [[2021-02-00-pmc-japan-ltci-past-present-future]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Japan's LTCI system explicitly shifted the burden of long-term care from family caregiving to social solidarity through mandatory insurance. Implemented in 2000, the system covers 5+ million elderly (17% of 65+ population) and integrates medical care with welfare services. This represents a deliberate policy choice to replace family-based care obligations with state-organized insurance, improving access and reducing financial burden on families while operating under extreme demographic pressure (28.4% of population 65+, rising to 40% by 2040-2050). The system's 25-year track record demonstrates that this transition from family to state/market structures is both viable and durable at national scale.
---
Relevant Notes:

View file

@ -32,6 +32,12 @@ Some evidence indicates lower mortality rates among PACE enrollees, suggesting q
- Study covered 8 states, 250+ enrollees during 2006-2008
- Matched comparison groups: nursing home entrants AND HCBS waiver enrollees
### Additional Evidence (extend)
*Source: [[2021-02-00-pmc-japan-ltci-past-present-future]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Japan's LTCI provides a national-scale comparison point for PACE's integrated care model. LTCI offers both facility-based and home-based care chosen by beneficiaries, integrating medical care with welfare services across 7 care level tiers. As of 2015, the system served 5+ million beneficiaries (17% of 65+ population) — compared to PACE's 90,000 enrollees in the US. If the US had equivalent coverage, that would represent ~11.4 million people. Japan's experience demonstrates that integrated care delivery can operate at national scale through mandatory insurance, though financial sustainability under extreme aging demographics (28.4% elderly, rising to 40%) remains an ongoing challenge requiring premium and copayment adjustments.
---
Relevant Notes:

View file

@ -0,0 +1,38 @@
---
type: claim
domain: health
description: "The technology layer enabling $265B facility-to-home shift consists of RPM sensors generating continuous data processed through AI middleware to create actionable clinical insights"
confidence: likely
source: "McKinsey & Company, From Facility to Home report (2021); market data on RPM and AI middleware growth"
created: 2026-03-11
---
# RPM technology stack enables facility-to-home care migration through AI middleware that converts continuous data into clinical utility
The $265 billion facility-to-home care migration depends on a specific technology stack: remote patient monitoring sensors (growing 19% CAGR to $138B by 2033) generating continuous physiological data, processed through AI middleware (growing 27.5% CAGR to $8.4B by 2030) that converts raw sensor streams into clinically actionable insights. This architecture solves the fundamental problem that continuous data is too voluminous for direct clinician review—the AI layer performs triage, pattern recognition, and alert generation, enabling home-based care to achieve clinical outcomes comparable to facility-based monitoring.
The home healthcare segment is the fastest-growing RPM application at 25.3% CAGR, indicating that the technology has crossed the threshold from experimental to deployment-ready. With 71 million Americans expected to use RPM by 2025, the infrastructure for home-based care delivery is scaling faster than the care delivery models themselves.
## Evidence
- Remote patient monitoring market: $29B (2024) → $138B (2033), 19% CAGR
- AI in RPM: $2B (2024) → $8.4B (2030), 27.5% CAGR
- Home healthcare is fastest-growing RPM end-use segment at 25.3% CAGR
- 71M Americans expected to use RPM by 2025
- Hospital-at-home models achieve 19-30% cost savings while maintaining quality (Johns Hopkins)
## Technology-Care Site Coupling
This claim connects the technology layer ([[continuous health monitoring is converging on a multi-layer sensor stack of ambient wearables periodic patches and environmental sensors processed through AI middleware]]) to the care delivery site (home vs. facility). The AI middleware is not optional—it's the enabling constraint. Without AI processing continuous data streams, home-based monitoring generates alert fatigue and clinician overwhelm. With AI middleware, home monitoring becomes clinically viable at scale.
The atoms-to-bits conversion happens at the patient's home ([[healthcares defensible layer is where atoms become bits because physical-to-digital conversion generates the data that powers AI care while building patient trust that software alone cannot create]]), and the AI layer makes that data clinically useful ([[AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review]]).
---
Relevant Notes:
- [[continuous health monitoring is converging on a multi-layer sensor stack of ambient wearables periodic patches and environmental sensors processed through AI middleware]]
- [[AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review]]
- [[healthcares defensible layer is where atoms become bits because physical-to-digital conversion generates the data that powers AI care while building patient trust that software alone cannot create]]
Topics:
- domains/health/_map

View file

@ -0,0 +1,40 @@
---
type: claim
domain: health
description: "Within the GLP-1 class, semaglutide shows 2.5x better one-year persistence than liraglutide (47.1% vs 19.2%), suggesting formulation and dosing frequency significantly impact real-world adherence independent of efficacy"
confidence: likely
source: "Journal of Managed Care & Specialty Pharmacy, Real-world Persistence and Adherence to GLP-1 RAs Among Obese Commercially Insured Adults Without Diabetes, 2024-08-01"
created: 2026-03-11
---
# Semaglutide achieves 47 percent one-year persistence versus 19 percent for liraglutide showing drug-specific adherence variation of 2.5x
Within the GLP-1 receptor agonist class, drug-specific persistence rates vary dramatically: semaglutide maintains 47.1% of non-diabetic obesity patients at one year, while liraglutide retains only 19.2%—a 2.5x difference.
This variation matters because it suggests adherence is not purely about the drug class mechanism or patient characteristics, but about formulation factors: semaglutide's once-weekly injection versus liraglutide's daily injection likely drives much of the difference. Oral formulations (like oral semaglutide) may further improve adherence by removing the injection barrier entirely.
For payer economics and value-based care design, this means drug selection within the GLP-1 class significantly impacts the probability that downstream savings will materialize. A plan that preferentially covers liraglutide for cost reasons may be optimizing for upfront price while guaranteeing that 80% of patients discontinue before benefits accrue.
## Evidence
**One-year persistence rates by drug (non-diabetic obesity patients):**
- Semaglutide: 47.1%
- Liraglutide: 19.2%
- Overall class average: 32.3%
**Likely mechanism:**
- Semaglutide: once-weekly subcutaneous injection
- Liraglutide: daily subcutaneous injection
- Injection frequency is a known adherence barrier across therapeutic classes
**Implications for formulary design:**
If a payer's goal is to maximize the probability of sustained adherence (and thus downstream ROI), preferencing higher-persistence drugs may justify higher upfront costs. The relevant comparison is not semaglutide cost vs. liraglutide cost, but (semaglutide cost × 47% persistence) vs. (liraglutide cost × 19% persistence).
---
Relevant Notes:
- [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
- [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
Topics:
- domains/health/_map

View file

@ -0,0 +1,38 @@
---
type: claim
domain: health
description: "FLOW trial shows semaglutide slows kidney decline by 1.16 mL/min/1.73m2 annually in T2D patients with CKD, preventing dialysis progression that costs $90K+/year"
confidence: proven
source: "NEJM FLOW Trial (N=3,533, stopped early for efficacy), FDA indication expansion 2024"
created: 2026-03-11
---
# Semaglutide reduces kidney disease progression by 24 percent and delays dialysis onset creating the largest per-patient cost savings of any GLP-1 indication because dialysis costs $90K+ per year
The FLOW trial demonstrated that semaglutide reduces major kidney disease events by 24% (HR 0.76, P=0.0003) in patients with type 2 diabetes and chronic kidney disease over a median 3.4-year follow-up. The trial was stopped early at prespecified interim analysis due to efficacy — the effect was so large that continuing would have been unethical.
The mechanism of cost savings is slowed kidney function decline: semaglutide reduced the annual eGFR slope by 1.16 mL/min/1.73m2 compared to placebo (P<0.001). This slower decline delays or prevents progression to end-stage renal disease requiring dialysis, which costs $90,000+ per patient per year.
Kidney-specific outcomes showed HR 0.79 (95% CI 0.66-0.94), and cardiovascular death was reduced 29% (HR 0.71, 95% CI 0.56-0.89). The FDA subsequently expanded semaglutide (Ozempic) indications to include T2D patients with CKD, making this the first GLP-1 receptor agonist with a dedicated kidney protection indication.
CKD is among the most expensive chronic conditions to manage. The downstream savings argument for GLP-1s is strongest in kidney protection because preventing progression to dialysis has massive cost implications for capitated payers. A separate Nature Medicine analysis showed additive benefits when semaglutide is used with SGLT2 inhibitors.
This is the first dedicated kidney outcomes trial with a GLP-1 receptor agonist, establishing foundational evidence for the multi-organ benefit thesis.
## Evidence
- FLOW trial: N=3,533 patients, randomized controlled trial, median 3.4-year follow-up
- Primary endpoint: 24% risk reduction in major kidney disease events (HR 0.76, P=0.0003)
- Annual eGFR slope difference: 1.16 mL/min/1.73m2 slower decline (P<0.001)
- Cardiovascular death: 29% reduction (HR 0.71, 95% CI 0.56-0.89)
- Trial stopped early for efficacy at prespecified interim analysis
- FDA indication expansion to T2D patients with CKD (2024)
- Dialysis cost benchmark: $90K+/year per patient
---
Relevant Notes:
- [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
- [[the healthcare cost curve bends up through 2035 because new curative and screening capabilities create more treatable conditions faster than prices decline]]
Topics:
- domains/health/_map

View file

@ -17,6 +17,12 @@ The structural challenge: there is no equivalent to the NHS link worker role in
Loneliness exists at the intersection of clinical medicine and social infrastructure. It cannot be treated with medication or therapy alone -- it requires community-level intervention that the healthcare system is not designed to deliver.
### Additional Evidence (extend)
*Source: [[2021-02-00-pmc-japan-ltci-past-present-future]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Japan's LTCI system addresses the care infrastructure gap that the US relies on unpaid family labor ($870B annually) to fill. The system provides both facility-based and home-based care chosen by beneficiaries, integrating medical care with welfare services. This infrastructure directly addresses the social isolation problem by providing professional care delivery rather than relying on family members who may be geographically distant or unable to provide adequate care. Japan's solution demonstrates that treating long-term care as a social insurance problem rather than a family responsibility creates the infrastructure needed to address isolation at scale.
---
Relevant Notes:

View file

@ -31,6 +31,12 @@ Since specialization and value form an autocatalytic feedback loop where each am
The Commonwealth Fund's 2024 international comparison demonstrates this transition empirically across 10 developed nations. All countries compared (Australia, Canada, France, Germany, Netherlands, New Zealand, Sweden, Switzerland, UK, US) have eliminated material scarcity in healthcare — all possess advanced clinical capabilities and universal or near-universal access infrastructure. Yet health outcomes vary dramatically. The US spends >16% of GDP (highest by far) with worst outcomes, while top performers (Australia, Netherlands) spend the lowest percentage of GDP. The differentiator is not clinical capability (US ranks 2nd in care process quality) but access structures and equity — social determinants. This proves that among developed nations with sufficient material resources, social disadvantage (who gets care, discrimination, equity barriers) drives outcomes more powerfully than clinical quality or spending volume.
### Additional Evidence (extend)
*Source: [[2025-06-01-cell-med-glp1-societal-implications-obesity]] | Added: 2026-03-15*
GLP-1 access inequality demonstrates the epidemiological transition in action: the intervention addresses metabolic disease (post-transition health problem) but access stratifies by wealth and insurance status (social disadvantage), potentially widening health inequalities even as population-level outcomes improve. The WHO's emphasis on 'multisectoral action' and 'healthier environments' acknowledges that pharmaceutical solutions alone cannot address socially-determined health outcomes.
---
Relevant Notes:

View file

@ -0,0 +1,44 @@
---
type: claim
domain: health
description: "US relies on 870 billion in unpaid family labor plus Medicaid spend-down while Japan solved this with mandatory LTCI in 2000"
confidence: likely
source: "PMC/JMA Journal Japan LTCI paper (2021); comparison to US Medicare/Medicaid structure"
created: 2026-03-11
---
# US long-term care financing gap is the largest unaddressed structural problem in American healthcare
The United States has no equivalent to Japan's mandatory Long-Term Care Insurance system. Medicare covers acute care but not long-term care. Medicaid covers long-term care only for those who spend down their assets to poverty levels. The gap between these programs is filled by an estimated $870 billion annually in unpaid family labor.
Japan solved the "who pays for long-term care" question in 2000 with mandatory universal LTCI. The US, facing the same demographic transition with a 20-year lag (Japan is at 28.4% elderly, US at ~20% and rising), still has no structural solution. If the US had equivalent LTCI coverage to Japan's 17% of 65+ population receiving benefits, that would represent ~11.4 million people. Currently, PACE serves 90,000 and institutional Medicaid serves a few million — leaving a massive coverage gap.
The structural comparison is stark:
- **Japan**: Mandatory universal LTCI, integrated medical/social/welfare services, 50% premiums + 50% taxes
- **US**: Medicare (acute only) + Medicaid (poverty only) + $870B unpaid family labor + private pay
This is not a gap that can be closed through incremental reform or market innovation. It requires a structural financing solution that the US has avoided for 25 years while Japan has operated a working model.
## Evidence
- US has no mandatory long-term care insurance equivalent to Japan's LTCI
- Medicare covers acute care; Medicaid covers long-term care only after asset spend-down
- $870 billion in unpaid family labor annually fills the financing gap (established figure)
- Japan's 17% coverage rate would translate to ~11.4M Americans vs. current PACE 90K + limited Medicaid institutional coverage
- Japan implemented solution in 2000; US demographic trajectory lags Japan by ~20 years
- Japan at 28.4% elderly (2019), US at ~20% and rising toward Japan's current level
## Challenges
- Political feasibility of mandatory premiums in US context
- Federal vs. state implementation questions given US healthcare structure
- Integration challenges across fragmented US payer/provider landscape
---
Relevant Notes:
- [[pace-demonstrates-integrated-care-averts-institutionalization-through-community-based-delivery-not-cost-reduction]]
- [[medicare-trust-fund-insolvency-accelerated-12-years-by-tax-policy-demonstrating-fiscal-fragility]]
- [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
- [[modernization dismantles family and community structures replacing them with market and state relationships that increase individual freedom but erode psychosocial foundations of wellbeing]]
Topics:
- domains/health/_map

View file

@ -23,6 +23,18 @@ The Making Care Primary model's termination in June 2025 (after just 12 months,
PACE represents the extreme end of value-based care alignment—100% capitation with full financial risk for a nursing-home-eligible population. The ASPE/HHS evaluation shows that even under complete payment alignment, PACE does not reduce total costs but redistributes them (lower Medicare acute costs in early months, higher Medicaid chronic costs overall). This suggests that the 'payment boundary' stall may not be primarily a problem of insufficient risk-bearing. Rather, the economic case for value-based care may rest on quality/preference improvements rather than cost reduction. PACE's 'stall' is not at the payment boundary—it's at the cost-savings promise. The implication: value-based care may require a different success metric (outcome quality, institutionalization avoidance, mortality reduction) than the current cost-reduction narrative assumes.
### Additional Evidence (extend)
*Source: [[2024-08-01-jmcp-glp1-persistence-adherence-commercial-populations]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
GLP-1 persistence data illustrates why value-based care requires risk alignment: with only 32.3% of non-diabetic obesity patients remaining on GLP-1s at one year (15% at two years), the downstream savings that justify the upfront drug cost never materialize for 85% of patients. Under fee-for-service, the pharmacy benefit pays the cost but doesn't capture the avoided hospitalizations. Under partial risk (upside-only), providers have no incentive to invest in adherence support because they don't bear the cost of discontinuation. Only under full risk (capitation) does the entity paying for the drug also capture the downstream savings—but only if adherence is sustained. This makes GLP-1 economics a test case for whether value-based care can solve the "who pays vs. who benefits" misalignment.
### Additional Evidence (confirm)
*Source: [[2025-03-01-medicare-prior-authorization-glp1-near-universal]] | Added: 2026-03-15*
Medicare Advantage plans bearing full capitated risk increased GLP-1 prior authorization from <5% to nearly 100% within two years (2023-2025), demonstrating that even full-risk capitation does not automatically align incentives toward prevention when short-term cost pressures dominate. Both BCBS and UnitedHealthcare implemented universal PA despite theoretical alignment under capitation.
---
Relevant Notes:

View file

@ -91,6 +91,18 @@ FutureDAO's token migrator extends the unruggable ICO concept to community takeo
MetaDAO ICO platform processed 8 projects from April 2025 to January 2026, raising $25.6M against $390M in committed demand (15x oversubscription). Platform generated $57.3M in Assets Under Futarchy and $1.5M in fees from $300M trading volume. Individual project performance: Avici 21x peak/7x current, Omnipair 16x peak/5x current, Umbra 8x peak/3x current with $154M committed for $3M raise (51x oversubscription). Recent launches (Ranger, Solomon, Paystream, ZKLSOL, Loyal) show convergence toward lower volatility with maximum 30% drawdown from launch.
### Additional Evidence (extend)
*Source: [[2024-08-03-futardio-proposal-approve-q3-roadmap]] | Added: 2026-03-15*
MetaDAO Q3 2024 roadmap prioritized launching a market-based grants product as the primary objective, with specific targets to launch 5 organizations and process 8 proposals through the product. This represents an expansion from pure ICO functionality to grants decision-making, demonstrating futarchy's application to capital allocation beyond fundraising.
### Additional Evidence (extend)
*Source: [[2025-04-09-blockworks-ranger-ico-metadao-reset]] | Added: 2026-03-15*
Ranger Finance ICO completed in April 2025, adding ~$9.1M to total Assets Under Futarchy, bringing the total to $57.3M across 10 launched projects. This represents continued momentum in futarchy-governed capital formation, with Ranger being a leveraged trading platform on Solana. The article also notes MetaDAO was 'considering strategic changes to its platform model' around this time, though details were not specified.
---
Relevant Notes:

View file

@ -59,6 +59,30 @@ Autocrat is MetaDAO's core governance program on Solana -- the on-chain implemen
Sanctum's Wonder proposal (2frDGSg1frwBeh3bc6R7XKR2wckyMTt6pGXLGLPgoota, created 2025-03-28, completed 2025-03-31) represents the first major test of Autocrat futarchy for strategic product direction rather than treasury operations. The team explicitly stated: 'Even though this is not a proposal that involves community CLOUD funds, this is going to be the largest product decision ever made by the Sanctum team, so we want to put it up to governance vote.' The proposal to build a consumer mobile app (Wonder) with automatic yield optimization, gasless transfers, and curated project participation failed despite team conviction backed by market comparables (Phantom $3B valuation, Jupiter $1.7B market cap, MetaMask $320M swap fees). This demonstrates Autocrat's capacity to govern strategic pivots beyond operational decisions, though the failure raises questions about whether futarchy markets discount consumer product risk or disagreed with the user segmentation thesis.
### Additional Evidence (extend)
*Source: [[2024-06-22-futardio-proposal-thailanddao-event-promotion-to-boost-deans-list-dao-engageme]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Dean's List DAO proposal (DgXa6gy7nAFFWe8VDkiReQYhqe1JSYQCJWUBV8Mm6aM) used Autocrat v0.3 with 3-day trading period and 3% TWAP threshold. Proposal completed 2024-06-25 with failed status. This provides concrete implementation data: small DAOs (FDV $123K) can deploy Autocrat with custom TWAP thresholds (3% vs. typical higher thresholds), but low absolute dollar amounts may be insufficient to attract trader participation even when percentage returns are favorable.
### Additional Evidence (extend)
*Source: [[2023-12-03-futardio-proposal-migrate-autocrat-program-to-v01]] | Added: 2026-03-15*
Autocrat v0.1 made the three-day window configurable rather than hardcoded, with the proposer stating it was 'most importantly' designed to 'allow for quicker feedback loops.' The proposal passed with 990K META migrated, demonstrating community acceptance of parameterized proposal duration.
### Additional Evidence (confirm)
*Source: [[2024-07-04-futardio-proposal-proposal-3]] | Added: 2026-03-15*
Proposal #3 on MetaDAO (account EXehk1u3qUJZSxJ4X3nHsiTocRhzwq3eQAa6WKxeJ8Xs) ran on Autocrat version 0.3, created 2024-07-04, and completed/ended 2024-07-08 - confirming the four-day operational window (proposal creation plus three-day settlement period) specified in the mechanism design.
### Additional Evidence (confirm)
*Source: [[2025-03-05-futardio-proposal-proposal-1]] | Added: 2026-03-15*
Production deployment data from futard.io shows Proposal #1 on DAO account De8YzDKudqgeJXqq6i7q82AgxxrQ1JXXfMgouQuPyhY using Autocrat version 0.3, with proposal created, ended, and completed all on 2025-03-05. This confirms operational use of the Autocrat v0.3 implementation in live governance.
---
Relevant Notes:

View file

@ -29,6 +29,24 @@ Optimism's futarchy experiment achieved 5,898 total trades from 430 active forec
FitByte ICO attracted only $23 in total commitments against a $500,000 target before entering refund status. This represents an extreme case of limited participation in a futarchy-governed decision. The conditional markets had essentially zero liquidity, making price discovery impossible and demonstrating that futarchy mechanisms require minimum participation thresholds to function. When a proposal is clearly weak (no technical details, no partnerships, ambitious claims without evidence), the market doesn't trade—it simply doesn't participate, leading to immediate refund rather than price-based rejection.
### Additional Evidence (extend)
*Source: [[2024-06-22-futardio-proposal-thailanddao-event-promotion-to-boost-deans-list-dao-engageme]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Dean's List ThailandDAO proposal (DgXa6gy7nAFFWe8VDkiReQYhqe1JSYQCJWUBV8Mm6aM) failed on 2024-06-25 despite projecting 16x FDV increase with only 3% TWAP threshold required. The proposal explicitly calculated that $73.95 per-participant value creation across 50 participants would meet the threshold, yet failed to attract sufficient trading volume. This extends the 'limited trading volume' pattern from uncontested decisions to contested-but-favorable proposals, suggesting the participation problem is broader than initial observations indicated.
### Additional Evidence (confirm)
*Source: [[2024-07-04-futardio-proposal-proposal-3]] | Added: 2026-03-15*
Proposal #3 failed with no indication of trading activity or market participation in the on-chain data, consistent with the pattern of minimal engagement in proposals without controversy or competitive dynamics.
### Additional Evidence (extend)
*Source: [[2024-10-30-futardio-proposal-swap-150000-into-isc]] | Added: 2026-03-15*
The ISC treasury swap proposal (Gp3ANMRTdGLPNeMGFUrzVFaodouwJSEXHbg5rFUi9roJ) was a contested decision that failed, showing futarchy markets can reject proposals with clear economic rationale when risk factors dominate. The proposal offered inflation hedge benefits but markets priced early-stage counterparty risk higher, demonstrating active price discovery in treasury decisions.
---
Relevant Notes:

View file

@ -0,0 +1,32 @@
---
type: claim
domain: internet-finance
description: "TCP's AIMD algorithm applies to worker scaling in distributed systems because both solve the producer-consumer rate matching problem"
confidence: likely
source: "Vlahakis, Athanasopoulos et al., AIMD Scheduling and Resource Allocation in Distributed Computing Systems (2021)"
created: 2026-03-11
---
# AIMD congestion control generalizes to distributed resource allocation because queue dynamics are structurally identical across networks and compute pipelines
The core insight from Vlahakis et al. (2021) is that TCP's AIMD (Additive Increase Multiplicative Decrease) congestion control algorithm, proven optimal for fair network bandwidth allocation, applies directly to distributed computing resource allocation. The paper demonstrates that scheduling incoming requests across computing nodes is mathematically equivalent to network congestion control — both are producer-consumer rate matching problems where queue state reveals system health.
The AIMD policy is elegant: when queues shrink (system healthy), add workers linearly (+1 per cycle). When queues grow (system overloaded), cut workers multiplicatively (e.g., halve them). This creates self-correcting dynamics that are proven stable regardless of total node count and AIMD parameters.
Key theoretical results:
- Decentralized resource allocation using nonlinear state feedback achieves global convergence to bounded set in finite time
- The system is stable irrespective of total node count and AIMD parameters
- Quality of Service is calculable via Little's Law from simple local queuing time formulas
- AIMD is proven optimal for fair allocation of shared resources among competing agents without centralized control
The practical implication: distributed systems don't need to predict load or use complex ML models for autoscaling. They can react to observed queue state using a simple, proven-stable policy. When extract produces faster than eval can consume, AIMD naturally provides backpressure (slow extraction) or scale-up (more eval workers) without requiring load forecasting.
This connects directly to pipeline architecture design: the "bandwidth" of a processing pipeline is its throughput capacity, and AIMD provides the control law for matching producer rate to consumer capacity.
---
Relevant Notes:
- core/mechanisms/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,37 @@
---
type: claim
domain: internet-finance
description: "AIMD algorithm achieves provably fair and stable distributed resource allocation using only local congestion feedback"
confidence: proven
source: "Corless, King, Shorten, Wirth (SIAM 2016) - AIMD Dynamics and Distributed Resource Allocation"
created: 2026-03-11
secondary_domains: [mechanisms, collective-intelligence]
---
# AIMD converges to fair resource allocation without global coordination through local congestion signals
Additive Increase Multiplicative Decrease (AIMD) is a distributed resource allocation algorithm that provably converges to fair and stable resource sharing among competing agents without requiring centralized control or global information. The algorithm operates through two simple rules: when no congestion is detected, increase resource usage additively (rate += α); when congestion is detected, decrease resource usage multiplicatively (rate *= β, where 0 < β < 1).
The SIAM monograph by Corless et al. demonstrates that AIMD is mathematically guaranteed to converge to equal sharing of available capacity regardless of the number of agents or parameter values. Each agent only needs to observe local congestion signals—no knowledge of other agents, total capacity, or system-wide state is required. This makes AIMD the most widely deployed distributed resource allocation mechanism, originally developed for TCP congestion control and now applicable to smart grid energy allocation, distributed computing, and other domains where multiple agents compete for shared resources.
The key insight is that AIMD doesn't require predicting load, modeling arrivals, or solving optimization problems. It reacts to observed system state through simple local rules and is guaranteed to find the fair allocation through the dynamics of the algorithm itself. The multiplicative decrease creates faster convergence than purely additive approaches, while the additive increase ensures fairness rather than proportional allocation.
## Evidence
- Corless, King, Shorten, Wirth (2016) provide mathematical proofs of convergence and fairness properties
- AIMD is the foundation of TCP congestion control, the most widely deployed distributed algorithm in existence
- The algorithm works across heterogeneous domains: internet bandwidth, energy grids, computing resources
- Convergence is guaranteed regardless of number of competing agents or their parameter choices
---
Relevant Notes:
- [[coordination mechanisms]]
- [[optimal governance requires mixing mechanisms because different decisions have different manipulation risk profiles]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]]
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map
- foundations/collective-intelligence/_map

View file

@ -0,0 +1,46 @@
---
type: claim
domain: internet-finance
description: "AIMD provides principled autoscaling for systems with expensive compute and variable load by reacting to queue state rather than forecasting demand"
confidence: experimental
source: "Corless et al. (SIAM 2016) applied to Teleo pipeline architecture"
created: 2026-03-11
secondary_domains: [mechanisms, critical-systems]
---
# AIMD scaling solves variable-load expensive-compute coordination without prediction
For systems with expensive computational operations and highly variable load—such as AI evaluation pipelines where extraction is cheap but evaluation is costly—AIMD provides a principled scaling algorithm that doesn't require demand forecasting or optimization modeling. The algorithm operates by observing queue state: when the evaluation queue is shrinking (no congestion), increase extraction workers by 1 per cycle; when the queue is growing (congestion detected), halve extraction workers.
This approach is particularly well-suited to scenarios where:
1. Downstream operations (evaluation) are significantly more expensive than upstream operations (extraction)
2. Load is unpredictable and varies substantially over time
3. The cost of overprovisioning is high (wasted expensive compute)
4. The cost of underprovisioning is manageable (slightly longer queue wait times)
The AIMD dynamics guarantee convergence to a stable operating point where extraction rate matches evaluation capacity, without requiring any prediction of future load, modeling of arrival patterns, or solution of optimization problems. The system self-regulates through observed congestion signals (queue growth/shrinkage) and simple local rules.
The multiplicative decrease (halving workers on congestion) provides rapid response to capacity constraints, while the additive increase (adding one worker when uncongested) provides gradual scaling that avoids overshooting. This asymmetry is critical: it's better to scale down too aggressively and scale up conservatively than vice versa when downstream compute is expensive.
## Evidence
- Corless et al. (2016) prove AIMD convergence properties hold for general resource allocation problems beyond network bandwidth
- The Teleo pipeline architecture exhibits the exact characteristics AIMD is designed for: cheap extraction, expensive evaluation, variable load
- AIMD's "no prediction required" property eliminates the complexity and fragility of load forecasting models
- The algorithm's proven stability guarantees mean it won't oscillate or diverge regardless of load patterns
## Challenges
This is an application of proven AIMD theory to a specific system architecture, but the actual performance in the Teleo pipeline context is untested. The claim that AIMD is "perfect for" this setting is theoretical—empirical validation would strengthen confidence from experimental to likely.
---
Relevant Notes:
- [[aimd-converges-to-fair-resource-allocation-without-global-coordination-through-local-congestion-signals]] <!-- claim pending -->
- [[coordination mechanisms]]
- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]]
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map
- foundations/critical-systems/_map

View file

@ -0,0 +1,40 @@
---
type: claim
domain: internet-finance
description: "AIMD autoscaling reacts to observed queue dynamics rather than forecasting demand, eliminating prediction error and model complexity"
confidence: experimental
source: "Vlahakis, Athanasopoulos et al., AIMD Scheduling (2021), applied to Teleo pipeline context"
created: 2026-03-11
---
# AIMD worker scaling requires only queue state observation not load prediction making it simpler than ML-based autoscaling
Traditional autoscaling approaches attempt to predict future load and preemptively adjust capacity. This requires:
- Historical load data and pattern recognition
- ML models to forecast demand
- Tuning of prediction windows and confidence thresholds
- Handling of prediction errors and their cascading effects
AIMD eliminates this entire complexity layer by operating purely on observed queue state. The control law is:
- If queue_length is decreasing: add workers linearly (additive increase)
- If queue_length is increasing: remove workers multiplicatively (multiplicative decrease)
This reactive approach has several advantages:
1. **No prediction error** — the system responds to actual observed state, not forecasts
2. **No training data required** — works immediately without historical patterns
3. **Self-correcting** — wrong adjustments are automatically reversed by subsequent queue observations
4. **Proven stable** — mathematical guarantees from control theory, not empirical tuning
The Vlahakis et al. (2021) paper proves that this decentralized approach achieves global convergence to bounded queue lengths in finite time, regardless of system size or AIMD parameters. The stability is structural, not empirical.
For the Teleo pipeline specifically: when extract produces claims faster than eval can process them, the eval queue grows. AIMD detects this and scales up eval workers. When the queue shrinks below target, AIMD scales down. No load forecasting, no ML models, no hyperparameter tuning — just queue observation and a simple control law.
The tradeoff: AIMD is reactive rather than predictive, so it responds to load changes rather than anticipating them. For bursty workloads with predictable patterns, ML-based prediction might provision capacity faster. But for unpredictable workloads or systems where prediction accuracy is low, AIMD's simplicity and guaranteed stability are compelling.
---
Relevant Notes:
- core/mechanisms/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,46 @@
---
type: claim
domain: internet-finance
description: "Proposer-locked initial liquidity plus 3-5% LP fees create incentive for liquidity provision that grows over proposal duration"
confidence: experimental
source: "MetaDAO AMM proposal by joebuild, 2024-01-24"
created: 2024-01-24
---
# AMM futarchy bootstraps liquidity through high fee incentives and required proposer initial liquidity creating self-reinforcing depth
The proposed AMM futarchy design solves the cold-start liquidity problem through two mechanisms:
1. **Proposer commitment**: "These types of proposals would also require that the proposer lock-up some initial liquidity, and set the starting price for the pass/fail markets."
2. **High fee LP incentives**: 3-5% swap fees that "encourage LPs" to provide additional liquidity
The expected liquidity trajectory is: "Liquidity would start low when the proposal is launched, someone would swap and move the AMM price to their preferred price, and then provide liquidity at that price since the fee incentives are high. Liquidity would increase over the duration of the proposal."
This creates a self-reinforcing cycle where:
- Initial proposer liquidity enables first trades
- High fees from those trades attract additional LPs
- Increased liquidity makes manipulation more expensive (see liquidity-weighted pricing)
- More liquidity attracts more trading volume
- Higher volume generates more fee revenue for LPs
The mechanism addresses the "lack of liquidity" problem identified with CLOBs, where "estimating a fair price for the future value of MetaDao under pass/fail conditions is difficult, and most reasonable estimates will have a wide range. This uncertainty discourages people from risking their funds with limit orders near the midpoint price."
Rated experimental because this is a proposed design not yet deployed. The liquidity bootstrapping logic is sound but requires real-world validation.
### Additional Evidence (extend)
*Source: [[2025-10-15-futardio-proposal-lets-get-futarded]] | Added: 2026-03-15*
Coal's v0.6 migration sets minimum liquidity requirements of 1500 USDC and 2000 coal for proposals, with OTC buyer lined up to purchase dev fund tokens and seed the futarchy AMM. This shows the liquidity bootstrapping pattern extends beyond initial launch to governance upgrades, where projects must arrange capital to meet minimum depth requirements before migration.
---
Relevant Notes:
- MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions.md
- futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements.md
- MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale.md
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,32 @@
---
type: claim
domain: internet-finance
description: "AMM architecture eliminates the 3.75 SOL per market pair cost that CLOBs require for orderbook state storage"
confidence: likely
source: "MetaDAO proposal CF9QUBS251FnNGZHLJ4WbB2CVRi5BtqJbCqMi47NX1PG, 2024-01-24"
created: 2026-03-11
---
# AMM futarchy reduces state rent costs by 99 percent versus CLOB by eliminating orderbook storage requirements
Central Limit Order Books (CLOBs) in futarchy implementations require 3.75 SOL in state rent per pass/fail market pair on Solana, which cannot be recouped under current architecture. At 3-5 proposals per month, this creates annual costs of 135-225 SOL ($11,475-$19,125 at January 2024 prices). AMMs cost "almost nothing in state rent" because they don't maintain orderbook state—just pool reserves and a price curve.
The MetaDAO proposal notes that while state rent can theoretically be recouped through OpenBook mechanisms, doing so "would require a migration of the current autocrat program," making it impractical for existing deployments.
This cost differential becomes material at scale: a DAO running 50 proposals annually would spend ~$30K-$50K on CLOB state rent versus near-zero for AMMs, creating strong economic pressure toward AMM adoption independent of other mechanism considerations.
## Evidence
- MetaDAO proposal documents 3.75 SOL state rent cost per CLOB market pair
- Annual projection: 135-225 SOL for 3-5 monthly proposals
- AMM state requirements described as "almost nothing"
- State rent recovery requires autocrat program migration (feedback section)
---
Relevant Notes:
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]]
- metadao.md
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,26 @@
---
type: claim
domain: internet-finance
description: "AMM architecture eliminates the 3.75 SOL per market pair state rent cost that CLOBs require, reducing annual costs from 135-225 SOL to near-zero"
confidence: proven
source: "MetaDAO proposal by joebuild, 2024-01-24"
created: 2024-01-24
---
# AMM futarchy reduces state rent costs from 135-225 SOL annually to near-zero by replacing CLOB market pairs
MetaDAO's CLOB-based futarchy implementation incurs 3.75 SOL in state rent per pass/fail market pair, which cannot be recouped under the current system. At 3-5 proposals per month, this creates annual costs of 135-225 SOL ($11,475-$19,125 at January 2024 prices). AMM implementations cost "almost nothing in state rent" because they use simpler state structures.
This cost reduction is structural, not marginal—the CLOB architecture requires order book state that scales with market depth, while AMMs only track pool reserves and cumulative metrics. The proposal notes that state rent can be recouped by "permissionlessly closing the AMMs and returning the state rent SOL once there are no positions," creating a complete cost recovery mechanism unavailable to CLOBs.
The 94-99% cost reduction (from 135-225 SOL to near-zero) makes futarchy economically viable at higher proposal frequencies, removing a constraint on governance throughput.
---
Relevant Notes:
- MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window.md
- MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale.md
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,38 @@
---
type: claim
domain: internet-finance
description: "Higher variance-to-mean ratio requires more capacity to maintain same congestion level"
confidence: proven
source: "Liu et al. (NC State), 'Modeling and Simulation of Nonstationary Non-Poisson Arrival Processes' (2019)"
created: 2026-03-11
---
# Arrival process burstiness increases required capacity for fixed service level
Congestion measures (queue length, wait time, utilization) are increasing functions of arrival process variability. For a fixed average arrival rate and service rate, a bursty arrival process requires more capacity than a smooth (Poisson) arrival process to maintain the same service level.
This means that modeling arrivals as Poisson when they are actually bursty (higher variance-to-mean ratio) will systematically underestimate required capacity, leading to service degradation.
## Evidence
Liu et al. establish that "congestion measures are increasing functions of arrival process variability — more bursty = more capacity needed." This is a fundamental result in queueing theory: variance in the arrival process translates directly to variance in system state, which manifests as congestion.
The CIATA method explicitly models the "asymptotic variance-to-mean (dispersion) ratio" as a separate parameter from the rate function, recognizing that burstiness is a first-order determinant of system performance, not a second-order correction.
## Application to Research Pipeline Capacity
For pipelines processing research sources that arrive in bursts:
1. A Poisson model with the same average rate will underestimate queue lengths and wait times
2. Capacity sized for Poisson arrivals will experience congestion during burst periods
3. The dispersion ratio (variance/mean) must be measured and incorporated into capacity planning
The MMPP framework provides a tractable way to model this: the state-switching structure naturally generates higher variance than Poisson while remaining analytically tractable for capacity calculations.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,41 @@
---
type: claim
domain: internet-finance
description: "Flow control mechanism that signals producers to slow down when consumers reach capacity limits"
confidence: proven
source: "Dagster, What Is Backpressure glossary entry, 2024"
created: 2026-03-11
---
# Backpressure prevents pipeline failure by creating feedback loop between consumer capacity and producer rate
Backpressure is a flow control mechanism where data consumers signal producers about their capacity limits, preventing system overload. Without backpressure controls, pipelines experience data loss, crashes, and resource exhaustion when producers overwhelm consumers.
The mechanism operates through several implementation strategies:
- **Buffering with threshold triggers** — queues that signal when capacity approaches limits
- **Rate limiting** — explicit caps on production speed
- **Dynamic adjustment** — real-time scaling based on consumer state
- **Acknowledgment-based flow** — producers wait for consumer confirmation before sending more data
Major distributed systems implement backpressure as core architecture: Apache Kafka uses pull-based consumption where consumers control their own rate, while Flink, Spark Streaming, Akka Streams, and Project Reactor all build backpressure into their execution models.
The tradeoff is explicit: backpressure introduces latency (producers must wait for consumer signals) but prevents catastrophic failure modes. This makes backpressure a design-time decision, not a retrofit — systems must incorporate feedback channels from the start.
## Evidence
- Dagster documentation identifies backpressure as standard pattern across Apache Kafka, Flink, Spark Streaming, Akka Streams, Project Reactor
- Implementation strategies documented: buffering, rate limiting, dynamic adjustment, acknowledgment-based flow
- Failure modes without backpressure: data loss, crashes, resource exhaustion
## Relevance to Teleo
The Teleo pipeline currently has zero backpressure. The extract-cron.sh dispatcher checks for unprocessed sources and launches workers without checking eval queue state. If extraction outruns evaluation, PRs accumulate with no feedback signal to slow extraction.
Simple implementation: extraction dispatcher should check open PR count before dispatching. If open PRs exceed threshold, reduce extraction parallelism or skip the cycle entirely. This creates the feedback loop that prevents eval queue overload.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,38 @@
---
type: claim
domain: internet-finance
description: "Using max or average rate instead of time-varying rate leads to chronic under or overstaffing"
confidence: proven
source: "Liu et al. (NC State), 'Modeling and Simulation of Nonstationary Non-Poisson Arrival Processes' (2019)"
created: 2026-03-11
---
# Constant rate approximation of time-varying arrivals causes systematic staffing errors
Replacing a time-varying arrival rate λ(t) with a constant approximation—whether the maximum rate, average rate, or any other single value—leads to systematic capacity planning failures. Systems sized for maximum rate are chronically overstaffed during low-demand periods, wasting resources. Systems sized for average rate are chronically understaffed during high-demand periods, creating congestion.
This is not a minor efficiency loss but a structural mismatch: the constant-rate approximation discards the temporal structure of demand, making it impossible to match capacity to load.
## Evidence
Liu et al. explicitly state that "replacing a time-varying arrival rate with a constant (max or average) leads to systems being badly understaffed or overstaffed." This is a direct consequence of nonstationary arrival processes where demand varies predictably over time.
The paper demonstrates that "congestion measures are increasing functions of arrival process variability," meaning that even if average load is manageable, temporal concentration of arrivals creates congestion that constant-rate models cannot predict.
## Implications for Pipeline Architecture
For capital formation pipelines with session-based arrival patterns, this means:
1. Sizing capacity for peak (research session active) rate wastes resources during quiet periods
2. Sizing capacity for average rate creates backlogs during research sessions
3. Optimal capacity must be time-varying or must use queueing/buffering to smooth demand
The MMPP framework provides tools to size capacity for the mixture of states rather than for a single average state, enabling more efficient resource allocation.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,56 @@
---
type: claim
domain: internet-finance
description: "Dean's List proposal to reward top 5 governance holders with travel creates winner-take-all dynamics that may discourage marginal participation"
confidence: speculative
source: "Futardio proposal DgXa6gy7nAFFWe8VDkiReQYhqe1JSYQCJWUBV8Mm6aM, 2024-06-22"
created: 2026-03-11
---
# DAO event perks as governance incentives create plutocratic access structures that may reduce rather than increase participation
The Dean's List ThailandDAO proposal structured incentives as a steep hierarchy: top 5 governance power holders receive $2K+ in travel and accommodation, top 50 receive event invitations and airdrops, and everyone else receives nothing. This winner-take-all structure may discourage participation from members who recognize they cannot reach the top tiers.
The proposal explicitly modeled itself on "MonkeDAO & SuperTeam" precedents and framed the vision as creating "a global network where DL DAO members come together at memorable events around the world" with "exclusive gatherings, dining in renowned restaurants, and embarking on unique cultural experiences." This positions DAO membership as access to luxury experiences rather than governance participation.
## Why This May Reduce Participation
1. **Rational non-participation** — Members who calculate they cannot reach top-5 or top-50 status have no incentive to increase governance power, since the marginal benefit of moving from rank 100 to rank 75 is zero
2. **Plutocratic signaling** — Framing governance as a path to luxury travel and exclusive dining may attract rent-seekers rather than mission-aligned contributors
3. **Lock-up requirements create barriers** — The proposal notes that "locking tokens for multiple years to increase governance power" is required to climb the leaderboard, which favors wealthy holders who can afford long-term illiquidity
4. **Delegation doesn't solve the problem** — While the proposal allows delegation, "governance power transfers to the delegatee, not the original holder," meaning small holders still cannot access perks through delegation
This contrasts with linear incentive structures (e.g., proportional rewards, quadratic distributions) that maintain marginal incentives for all participation levels.
## Evidence
- Top 5 members: $10K in travel and accommodation (12 days at DL DAO Villa)
- Top 50 members: Event invitations, airdrops, "continuous perks"
- Below top 50: No specified benefits
- Governance power calculation: Token deposits + lock-up multipliers
- Proposal status: Failed (2024-06-25)
The proposal's failure may itself be evidence that this incentive structure did not successfully mobilize participation.
## Challenges
This claim is speculative because:
- We don't have data on whether the proposal actually reduced participation (it failed before implementation)
- Some DAOs successfully use tiered rewards (MonkeDAO, SuperTeam cited as precedents)
- The proposal included a "feedback review session" for IslandDAO attendees, suggesting some attempt at broader inclusion
However, the steep hierarchy (top 5 get $2K each, next 45 get unspecified perks, rest get nothing) creates structural barriers to broad-based participation.
---
Relevant Notes:
- [[token voting DAOs offer no minority protection beyond majority goodwill]]
- [[optimal governance requires mixing mechanisms because different decisions have different manipulation risk profiles]]
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map
- foundations/collective-intelligence/_map

View file

@ -40,6 +40,18 @@ Optimism futarchy achieved 430 active forecasters and 88.6% first-time governanc
Sanctum's Wonder proposal failure reveals a new friction: team conviction vs. market verdict on strategic pivots. The team had strong conviction ('I want to build the right introduction to crypto: the app we all deserve, but no one is building') backed by market comparables (Phantom $3B, Jupiter $1.7B, MetaMask $320M fees) and team track record (safeguarding $1B+, making futarchy fun). Yet futarchy rejected the proposal. The team reserved 'the right to change details of the prospective features or go-to-market if we deem it better for the product' but submitted the core decision to futarchy, suggesting uncertainty about whether futarchy should govern strategic direction or just treasury/operations. This creates a new adoption friction: uncertainty about futarchy's appropriate scope (operational vs. strategic decisions) and whether token markets can accurately price founder conviction and domain expertise on product strategy.
### Additional Evidence (confirm)
*Source: [[2024-06-22-futardio-proposal-thailanddao-event-promotion-to-boost-deans-list-dao-engageme]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
Dean's List ThailandDAO proposal included complex mechanics (token lockup multipliers, governance power calculations, leaderboard dynamics, multi-phase rollout with feedback sessions, payment-in-DEAN options at 10% discount) that increased evaluation friction. Despite favorable economics (16x projected FDV increase, $15K cost, 3% threshold), the proposal failed to attract trading volume. The proposal's own analysis noted the 3% requirement was 'small compared to the projected FDV increase' and 'achievable,' yet market participants did not engage, confirming that proposal complexity creates adoption barriers even when valuations are attractive.
### Additional Evidence (confirm)
*Source: [[2024-08-03-futardio-proposal-approve-q3-roadmap]] | Added: 2026-03-15*
MetaDAO's Q3 roadmap explicitly prioritized UI performance improvements, targeting reduction of page load times from 14.6 seconds to 1 second. This 93% reduction target indicates that user experience friction was severe enough to warrant top-level roadmap inclusion alongside product launches and team building.
---
Relevant Notes:

View file

@ -35,6 +35,18 @@ This pattern is general. Since [[futarchy adoption faces friction from token pri
- MetaDAO's current scale ($219M total futarchy marketcap) may be too small to attract sophisticated attacks that the removed mechanisms were designed to prevent
- Hanson might argue that MetaDAO's version isn't really futarchy at all — just conditional prediction markets used for governance, which is a narrower claim
### Additional Evidence (confirm)
*Source: [[2023-12-03-futardio-proposal-migrate-autocrat-program-to-v01]] | Added: 2026-03-15*
MetaDAO's Autocrat v0.1 simplified by making proposal slots configurable and reducing default duration to 3 days. The proposer explicitly framed this as enabling 'quicker feedback loops,' suggesting the original implementation's fixed duration was a practical barrier to adoption.
### Additional Evidence (confirm)
*Source: [[2024-08-03-futardio-proposal-approve-q3-roadmap]] | Added: 2026-03-15*
MetaDAO's roadmap included 'cardboard cutout' design phase for grants product, explicitly gathering requirements from both prospective DAO users and decision market traders before implementation. This user-centered design approach demonstrates practical adaptation of futarchy theory to real user needs.
---
Relevant Notes:

View file

@ -0,0 +1,36 @@
---
type: claim
domain: internet-finance
description: "Estimating token value under pass versus fail conditions involves wide uncertainty ranges that discourage limit orders near midpoint"
confidence: likely
source: "MetaDAO AMM proposal CF9QUBS251FnNGZHLJ4WbB2CVRi5BtqJbCqMi47NX1PG, 2024-01-24"
created: 2026-03-11
---
# Futarchy CLOB liquidity fragmentation creates wide spreads because pricing counterfactual governance outcomes has inherent uncertainty
The MetaDAO proposal identifies "lack of liquidity" as the primary driver for switching from CLOBs to AMMs in futarchy markets. The core mechanism: "Estimating a fair price for the future value of MetaDao under pass/fail conditions is difficult, and most reasonable estimates will have a wide range."
This uncertainty "discourages people from risking their funds with limit orders near the midpoint price, and has the effect of reducing liquidity (and trading)." The problem is structural to futarchy, not specific to MetaDAO—pricing counterfactual organizational futures requires speculation on complex causal chains.
CLOBs require traders to commit to specific price points, which is costly under high uncertainty. AMMs allow passive liquidity provision across a price curve, reducing the commitment required from individual LPs. The proposal notes that "liquidity would start low when the proposal is launched" but expects it to "increase over the duration of the proposal" as price discovery occurs and LPs converge on ranges.
This connects to [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]]—low liquidity is both cause and effect of limited trading.
## Evidence
- Proposal cites "lack of liquidity" as main reason for AMM switch
- Mechanism: wide uncertainty ranges discourage limit orders
- Expected pattern: liquidity increases as proposal duration progresses
- CLOB minimum order size (1 META) acts as spam filter but fragments liquidity further
---
Relevant Notes:
- [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]]
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]]
- [[futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements]]
- metadao.md
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -38,6 +38,12 @@ The new DAO parameters formalize the lesson: 120k USDC monthly spending limit (w
- Mintable tokens introduce dilution risk that fixed-supply tokens avoid: if mint authority is misused, token holders face value extraction without recourse
- Since [[futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders]], minting decisions are themselves governable through futarchy — but this only works if the DAO has not already become inoperable from treasury exhaustion
### Additional Evidence (confirm)
*Source: [[2025-10-15-futardio-proposal-lets-get-futarded]] | Added: 2026-03-15*
Coal DAO executed a one-time supply increase from 21M to 25M tokens (19% increase) to fund development and liquidity, demonstrating the practical necessity of mint authority for treasury operations. The proposal explicitly structured this as a one-time increase rather than ongoing emissions, suggesting DAOs try to preserve fixed-supply narratives while pragmatically requiring mint capability.
---
Relevant Notes:

View file

@ -37,6 +37,12 @@ The contrast with Ranger is instructive. Ranger's liquidation shows futarchy han
- The subcommittee model introduces trusted roles that could recentralize power over time, undermining the trustless property that makes futarchy valuable
- Since [[Ooki DAO proved that DAOs without legal wrappers face general partnership liability making entity structure a prerequisite for any futarchy-governed vehicle]], some of this scaffolding is legally required rather than a failure of market mechanisms
### Additional Evidence (confirm)
*Source: [[2024-10-30-futardio-proposal-swap-150000-into-isc]] | Added: 2026-03-15*
MetaDAO's rejection of ISC treasury diversification shows futarchy markets applying conservative risk assessment to treasury operations. Despite theoretical inflation hedge benefits, markets rejected a 6.8% allocation to an early-stage stablecoin, prioritizing capital preservation over yield optimization - a pattern consistent with traditional treasury management.
---
Relevant Notes:

View file

@ -0,0 +1,39 @@
---
type: claim
domain: internet-finance
description: "Memecoin launchpads using futarchy governance create tension between driving adoption through speculative markets and maintaining credibility for institutional use cases"
confidence: experimental
source: "MetaDAO Futardio proposal discussion, 2024-08-14"
created: 2026-03-11
---
# Futarchy-governed memecoin launchpads face reputational risk tradeoff between adoption and credibility
MetaDAO's internal debate over Futardio reveals a structural tension in futarchy adoption strategy. The proposal explicitly identifies "potential advantages" (drive attention and usage to futarchy, more exposure, more usage helps improve the product, provides proof points) against "potential pitfalls" (makes futarchy look less serious, may make it harder to sell DeFi DAOs and non-crypto organizations, may make it harder to recruit contributors).
This is not merely a marketing concern but a strategic fork: futarchy can optimize for rapid adoption through high-volume speculative markets (memecoins) OR maintain positioning for institutional/serious governance use cases, but pursuing both simultaneously creates reputational contamination risk. The proposal's failure (market rejected it) suggests the MetaDAO community valued credibility preservation over adoption acceleration.
The core mechanism insight: futarchy's legitimacy depends on the perceived quality of decisions it governs. Associating the mechanism with memecoin speculation—even if technically sound—may undermine trust from organizations evaluating futarchy for treasury management, protocol governance, or corporate decision-making.
## Evidence
From the MetaDAO proposal:
- **Potential advantages listed:** "Drive attention and usage to futarchy," "More exposure," "More usage helps MetaDAO improve the product," "Provides more proof points of futarchy"
- **Potential pitfalls listed:** "Makes futarchy look less serious," "May make it harder to sell DeFi DAOs / non-crypto organizations," "May make it harder to recruit contributors"
- **Proposal outcome:** Failed (market rejected)
- **Proposed structure:** Memecoin launchpad where "some percentage of every new token's supply gets allocated to its futarchy DAO"
## Relationship to Existing Claims
This claim extends futarchy-governed-permissionless-launches-require-brand-separation-to-manage-reputational-liability-because-failed-projects-on-a-curated-platform-damage-the-platforms-credibility by showing the reputational concern operates at the mechanism level, not just the platform level. The market's rejection of Futardio suggests futarchy stakeholders prioritize mechanism credibility over short-term adoption metrics.
---
Relevant Notes:
- futarchy-governed-permissionless-launches-require-brand-separation-to-manage-reputational-liability-because-failed-projects-on-a-curated-platform-damage-the-platforms-credibility
- MetaDAO
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map
- domains/internet-finance/_map

View file

@ -0,0 +1,20 @@
---
type: claim
domain: internet-finance
description: Human judgment layer resolves ambiguity in automated reward systems while maintaining credible commitment
confidence: experimental
source: Drift Futarchy proposal execution structure
created: 2026-03-15
---
# Futarchy incentive programs use multisig execution groups as discretionary override because pure algorithmic distribution cannot handle edge cases or gaming attempts
The Drift proposal establishes a 2/3 multisig execution group (metaprophet, Sumatt, Lmvdzande) to distribute the 50,000 DRIFT budget according to the outlined rules. Critically, the proposal grants this group discretion in two areas: (1) determining 'exact criteria' for the activity pool to filter non-organic participation, and (2) deciding which proposals qualify if successful proposals exceed the budget. The group also receives 3,000 DRIFT for their work and has authority to return excess funds to the treasury. This structure acknowledges that pure algorithmic distribution fails when faced with gaming, ambiguous cases, or unforeseen circumstances. The multisig provides a credible commitment mechanism - the proposal passes based on general principles, but execution requires human judgment. The group composition (known futarchy advocates) provides reputational accountability.
---
Relevant Notes:
- futarchy-governed DAOs converge on traditional corporate governance scaffolding for treasury operations because market mechanisms alone cannot provide operational security and legal compliance.md
Topics:
- [[_map]]

View file

@ -0,0 +1,58 @@
---
type: claim
domain: internet-finance
description: "Market rejection of liquidity solution despite stated liquidity crisis demonstrates futarchy's ability to price trade-offs"
confidence: experimental
source: "MetaDAO Proposal 8 failure, 2024-02-18 to 2024-02-24"
created: 2026-03-11
---
# Futarchy markets can reject solutions to acknowledged problems when the proposed solution creates worse second-order effects than the problem it solves
MetaDAO Proposal 8 explicitly stated "The current liquidity within the META markets is proving insufficient to support the demand" and proposed a $100,000 OTC trade to address this. The proposal failed. This is evidence that futarchy markets can distinguish between "we have a problem" and "this solution is net positive."
The proposal acknowledged the liquidity crisis and offered a concrete solution: Ben Hawkins would commit $100k USDC to acquire up to 500 META tokens, with half the USDC used to create a 50/50 AMM pool. The proposal projected ~15% increase in META value and 2-7% increase in circulating supply. Despite these stated benefits and the acknowledged need, the market rejected it.
This suggests the conditional markets priced second-order effects that outweighed the first-order liquidity benefit:
1. **Dilution risk**: Adding 284-1000 META to 14,530 circulating supply (2-7% dilution) might depress price more than liquidity helps
2. **Price uncertainty**: The max(TWAP, $200) formula with spot at $695 created massive uncertainty about actual dilution
3. **Counterparty risk**: Doubt about whether Ben Hawkins would actually provide sustained liquidity vs. extracting value
4. **Precedent risk**: Approving discounted OTC sales might trigger more dilutive proposals
The proposal's own risk section noted "extreme risk" and "unknown unknowns," suggesting even the proposers recognized the trade-offs. The market's rejection indicates it weighted these risks higher than the liquidity benefit.
This is significant for futarchy theory. Critics argue prediction markets can't handle complex trade-offs or will rubber-stamp solutions to stated problems. This case shows the opposite: the market rejected a solution to an acknowledged crisis, implying it priced the cure as worse than the disease.
However, this is a single case. Alternative explanations:
- The market simply didn't believe the liquidity crisis was severe
- The specific price terms were unacceptable, not the concept
- Low trading volume meant the decision was noise, not signal
- The proposal's complexity deterred participation (as noted in [[futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements]])
The proposal's failure is consistent with [[futarchy-excels-at-relative-selection-but-fails-at-absolute-prediction-because-ordinal-ranking-works-while-cardinal-estimation-requires-calibration]] — the market could rank "this proposal" below "status quo" but couldn't necessarily estimate the optimal liquidity solution.
## Evidence
- Proposal explicitly stated: "The current liquidity within the META markets is proving insufficient to support the demand"
- Proposal offered $100k USDC for liquidity, projected 15% value increase
- Proposal failed 2024-02-24 after 6-day market period
- MetaDAO had 14,530 META circulating, proposal would add 284-1000 META (2-7%)
- Price formula max(TWAP, $200) with spot at $695.92 created 65-71% discount
## Challenges
- Single case, not a pattern
- Low trading volume in MetaDAO markets may mean decision was noise
- Market may have rejected specific terms (price, counterparty) not the concept
- No data on what alternative liquidity solution would have passed
---
Relevant Notes:
- [[futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements]]
- [[futarchy-excels-at-relative-selection-but-fails-at-absolute-prediction-because-ordinal-ranking-works-while-cardinal-estimation-requires-calibration]]
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]]
- [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]]
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,55 @@
---
type: claim
domain: internet-finance
description: "Dean's List ThailandDAO proposal failed despite 16x projected FDV increase suggesting mechanism friction not valuation disagreement"
confidence: experimental
source: "Futardio proposal DgXa6gy7nAFFWe8VDkiReQYhqe1JSYQCJWUBV8Mm6aM, 2024-06-22"
created: 2026-03-11
depends_on: ["MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window", "futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements"]
---
# Futarchy proposals with favorable economics can fail due to participation friction not market disagreement
The Dean's List DAO ThailandDAO event promotion proposal failed despite projecting a 16x FDV increase (from $123,263 to $2M+) with only $15K in costs and a 3% TWAP threshold. The proposal's own financial analysis showed the required 3% increase was "small compared to the projected FDV increase" and that the $73.95 per-participant value creation needed was "achievable." Yet the proposal failed to attract sufficient trading volume to pass.
This failure pattern suggests futarchy markets can reject proposals not because traders disagree with the valuation thesis, but because:
1. **Liquidity bootstrapping costs exceed expected returns** — Even when a proposal shows positive expected value, the capital and attention required to establish liquid conditional markets may exceed what individual traders can capture
2. **Proposal complexity creates evaluation friction** — The ThailandDAO proposal included token lockup mechanics, governance power calculations, leaderboard dynamics, and multi-phase rollout plans that increase the cognitive cost of forming a trading position
3. **Small DAOs face cold-start problems** — With Dean's List FDV at $123K, the absolute dollar amounts at stake may be too small to attract professional traders even when percentage returns are attractive
This is distinct from [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]] because this proposal was contested (it failed) but still showed low participation. The market didn't actively reject the proposal through heavy fail-side trading — it failed to engage at all.
## Evidence
- Dean's List DAO current FDV: $123,263 (2024-06-22)
- Proposal budget: $15K total ($10K travel, $5K events)
- Required TWAP increase: 3% ($3,698 absolute)
- Projected FDV: $2M+ (16x increase)
- Proposal status: Failed (2024-06-25)
- Trading period: 3 days
- Autocrat version: 0.3
The proposal explicitly calculated that only $73.95 in value creation per participant (50 participants) was needed to hit the 3% threshold, yet failed to attract sufficient trading interest.
## Challenges
Single-case evidence limits generalizability. The failure could be specific to:
- Dean's List DAO's small size and limited liquidity
- The proposal's specific structure (event promotion vs. treasury/technical decisions)
- Timing or market conditions during the 3-day trading window
However, this case provides concrete evidence that [[futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements]] operates even when the economics appear favorable.
---
Relevant Notes:
- [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]]
- [[futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements]]
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]]
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,21 @@
---
type: claim
domain: internet-finance
description: Three-month clawback period filters for proposals that create lasting value versus short-term manipulation
confidence: experimental
source: Drift Futarchy proposal structure
created: 2026-03-15
---
# Futarchy proposer incentives require delayed vesting to prevent gaming because immediate rewards enable proposal spam for token extraction rather than quality governance
The Drift proposal structures proposer rewards with a three-month delay between proposal passage and token claim. Passing proposals earn up to 5,000 DRIFT each, but tokens are only claimable after three months. This delay creates a quality filter: proposers must believe their proposals will create sustained value that survives the vesting period. Without this delay, rational actors could spam low-quality proposals to extract rewards, knowing they can exit before negative effects manifest. The proposal also includes an executor group discretion clause - if successful proposals exceed expectations, the group can decide which top N proposals split the allocation. This combines time-based filtering with human judgment to prevent gaming. The 20,000 DRIFT activity pool uses the same three-month delay, with criteria finalized by the execution group to 'filter for non organic activity.'
---
Relevant Notes:
- futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements.md
- performance-unlocked-team-tokens-with-price-multiple-triggers-and-twap-settlement-create-long-term-alignment-without-initial-dilution.md
Topics:
- [[_map]]

View file

@ -0,0 +1,20 @@
---
type: claim
domain: internet-finance
description: Token distributions to historical participants leverage behavioral economics to seed active markets
confidence: experimental
source: Drift Futarchy proposal, endowment effect literature
created: 2026-03-15
---
# Futarchy retroactive rewards bootstrap participation through endowment effect by converting past engagement into token holdings that create psychological ownership
The Drift Futarchy incentive program explicitly uses retroactive token distribution to MetaDAO participants as a mechanism to bootstrap engagement. The proposal cites the endowment effect - the behavioral economics finding that people value things more highly once they own them - as the theoretical basis. By distributing 9,600 DRIFT to 32 MetaDAO participants based on historical activity (5+ interactions over 30+ days), plus 2,400 DRIFT to AMM swappers, the proposal creates a cohort of token holders who have psychological ownership before the futarchy system launches. This differs from standard airdrops by explicitly targeting demonstrated forecasters rather than broad distribution. The tiered structure (100-400 DRIFT based on META holdings) further segments by engagement level. The proposal pairs this with forward incentives (5,000 DRIFT per passing proposal, 20,000 DRIFT activity pool) to convert initial ownership into sustained participation.
---
Relevant Notes:
- MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale.md
Topics:
- [[_map]]

View file

@ -0,0 +1,41 @@
---
type: claim
domain: internet-finance
description: "Computational complexity theory establishes that optimal job-shop scheduling becomes intractable at scale beyond trivial cases"
confidence: proven
source: "ScienceDirect review article on Flexible Job Shop Scheduling Problem, 2023; established operations research result"
created: 2026-03-11
---
# General job-shop scheduling is NP-complete for more than two machines
The classical Job Shop Scheduling Problem (JSSP) is NP-complete for m > 2 machines, meaning no polynomial-time algorithm exists to find optimal solutions for non-trivial instances. This is a foundational result in operations research and computational complexity theory.
This matters because it establishes the computational boundary between tractable and intractable scheduling problems. When designing coordination systems (like Teleo's pipeline architecture), understanding which side of this boundary your problem falls on determines whether you need heuristics or can use exact optimization.
## Evidence
The ScienceDirect review states: "Classical Job Shop Scheduling Problem (JSSP): n jobs, m machines, fixed operation-to-machine mapping, NP-complete for m > 2."
This is a well-established result in operations research. The proof shows that even with fixed operation-to-machine mappings, finding the optimal schedule that minimizes makespan (total completion time) requires exponential time in the worst case once you have three or more machines.
The Flexible JSSP (FJSP) adds machine assignment as a decision variable on top of sequencing, making it strictly harder than classical JSSP.
## Implications
For any multi-stage coordination system:
1. If your problem maps to general JSSP with >2 stages, you cannot guarantee optimal solutions at scale
2. Heuristics and approximation algorithms become necessary
3. Problem structure matters — special cases (like flow-shop or hybrid flow-shop) can be easier
4. The choice of coordination mechanism should account for computational tractability
This is why [[hybrid-flow-shop-scheduling-with-simple-dispatching-rules-performs-within-5-10-percent-of-optimal-for-homogeneous-workers]] matters — it identifies a tractable special case that applies to pipeline architectures.
---
Relevant Notes:
- [[hybrid-flow-shop-scheduling-with-simple-dispatching-rules-performs-within-5-10-percent-of-optimal-for-homogeneous-workers]]
- domains/internet-finance/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,34 @@
---
type: claim
domain: internet-finance
description: "Quality-and-Efficiency-Driven regime allows high utilization without queue explosion by scaling at √n rate"
confidence: proven
source: "Ward Whitt, What You Should Know About Queueing Models (2019)"
created: 2026-03-11
---
# Halfin-Whitt QED regime enables systems to operate near full utilization while maintaining service quality through utilization approaching one at rate one over square root n
The Halfin-Whitt (Quality-and-Efficiency-Driven) regime solves the fundamental tension in service system design: achieving high utilization (efficiency) without creating long delays (quality degradation). Systems in the QED regime operate with utilization approaching 1 at rate Θ(1/√n) as the number of servers n grows.
This is the theoretical foundation for square-root staffing. The regime is characterized by:
- High utilization (near 100%) without queue explosion
- Delays remain bounded and manageable
- Economies of scale: larger systems need proportionally fewer excess servers
- The safety margin grows as √n, not linearly with n
The practical implication: you don't need to match peak load with workers. The square-root safety margin handles variance efficiently. Over-provisioning for peak is wasteful; under-provisioning for average causes queue explosion. The QED regime is the sweet spot.
## Evidence
Ward Whitt identifies this as one of the key insights practitioners need from queueing theory. The regime was characterized by Halfin and Whitt in their heavy-traffic analysis of multi-server queues. The mathematical result shows that as systems scale, the relative overhead for quality-of-service decreases, creating natural economies of scale.
The Erlang C formula operationalizes this for staffing calculations, allowing practitioners to determine exact server counts given arrival rates and service level targets.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,48 @@
---
type: claim
domain: internet-finance
description: "3-5 percent swap fees in futarchy AMMs reward liquidity providers while pricing out wash trading attacks"
confidence: experimental
source: "MetaDAO AMM proposal CF9QUBS251FnNGZHLJ4WbB2CVRi5BtqJbCqMi47NX1PG, 2024-01-24"
created: 2026-03-11
---
# High-fee AMMs create LP incentive and manipulation deterrent simultaneously by making passive provision profitable and active trading expensive
The MetaDAO AMM proposal uses 3-5% swap fees to solve two problems with one parameter: "By setting a high fee (3-5%) we can both: encourage LPs, and aggressively discourage wash-trading and manipulation."
This is counterintuitive—traditional DeFi AMMs use low fees (0.05-0.3%) to maximize volume. But futarchy markets have different objectives:
1. **Price discovery over volume**: The goal is accurate conditional pricing, not trade throughput
2. **Manipulation resistance**: High fees make repeated trades (wash trading, price manipulation) prohibitively expensive
3. **LP attraction**: Futarchy markets are short-duration (days) with uncertain outcomes, requiring higher yield to attract capital
The proposal expects this to create a specific market dynamic: "someone would swap and move the AMM price to their preferred price, and then provide liquidity at that price since the fee incentives are high."
This is untested in production. High fees could also:
- Reduce legitimate price discovery if traders avoid the cost
- Create larger slippage for informed traders
- Fail to attract LPs if base volumes are too low
The mechanism depends on futarchy-specific conditions (short duration, governance stakes, informed trading) that may not generalize.
## Evidence
- Proposed 3-5% fee structure in MetaDAO AMM design
- Dual objective: LP incentive + manipulation deterrent
- Expected behavior: price discovery trade followed by LP provision
- No production data (experimental confidence)
## Challenges
- Untested mechanism in live futarchy markets
- May reduce legitimate trading volume
- LP attraction depends on base trading activity
---
Relevant Notes:
- [[liquidity-weighted-price-over-time-solves-futarchy-manipulation-through-capital-commitment-not-vote-counting]] <!-- claim pending -->
- [[futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders]]
- metadao.md
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,49 @@
---
type: claim
domain: internet-finance
description: "Operations research shows simple priority rules suffice for pipeline architectures with sequential stages and uniform worker capability"
confidence: likely
source: "ScienceDirect review article on Flexible Job Shop Scheduling Problem, 2023"
created: 2026-03-11
---
# Hybrid flow-shop scheduling with simple dispatching rules performs within 5-10 percent of optimal for homogeneous workers
For pipeline architectures where all work flows through the same sequence of stages (hybrid flow-shop), and workers within each stage have similar capabilities, simple priority dispatching rules like shortest-job-first or FIFO within priority classes achieve near-optimal performance without requiring complex metaheuristic optimization.
This matters for Teleo's pipeline architecture (research → extract → eval) because it means we don't need sophisticated scheduling algorithms. The computational complexity that makes general Job Shop Scheduling Problems NP-hard doesn't apply when:
1. All sources follow the same stage sequence (flow-shop property)
2. Multiple workers exist at each stage but are roughly interchangeable
3. The number of stages is small (3 in our case)
The review shows that for hybrid flow-shops with these properties, metaheuristics (genetic algorithms, simulated annealing, tabu search) provide only marginal improvements over well-designed dispatching rules, while adding significant implementation complexity.
## Evidence
The ScienceDirect review distinguishes several scheduling problem types:
- **Classical JSSP**: n jobs, m machines, fixed operation-to-machine mapping, NP-complete for m > 2
- **Flexible JSSP**: operations can run on any eligible machine from a set
- **Flow-shop**: all jobs follow the same machine order
- **Hybrid flow-shop**: multiple machines at each stage, jobs follow same stage order but can use any machine within a stage
For hybrid flow-shop problems specifically, the review notes that "simple priority dispatching rules (shortest-job-first, FIFO within priority classes) perform within 5-10% of optimal" when workers within stages are homogeneous.
The review also documents that recent trends focus on "multi-agent reinforcement learning for dynamic scheduling with worker heterogeneity and uncertainty" — but this is for cases where worker capabilities differ significantly, which is not the primary bottleneck in our pipeline.
## Implications for Teleo Pipeline
Our pipeline is definitionally a hybrid flow-shop:
- Three sequential stages: research → extract → eval
- Multiple AI agents can work at each stage
- All sources flow through the same stage sequence
- Workers within each stage have similar (though not identical) capabilities
This means our scheduling problem is computationally tractable with simple rules rather than requiring optimization algorithms designed for general JSSP.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,42 @@
---
type: claim
domain: internet-finance
description: "Different thresholds for adding versus removing resources prevent rapid oscillation in auto-scaling systems"
confidence: proven
source: "Tournaire et al., 'Optimal Control Policies for Resource Allocation in the Cloud' (2021); established operations research principle"
created: 2026-03-11
---
# Hysteresis in autoscaling prevents oscillation by using asymmetric thresholds for scale-up and scale-down
Hysteresis in auto-scaling systems—using different thresholds for scaling up versus scaling down—prevents oscillation where resources are rapidly added and removed in response to workload fluctuations near a single threshold.
For example, a system might scale up when queue length reaches 10 but only scale down when queue length drops to 3. This asymmetry creates a "dead zone" between thresholds that absorbs short-term fluctuations without triggering scaling actions.
Tournaire et al. (2021) demonstrate this principle in cloud VM provisioning, where MDP-based optimal control policies automatically discover the optimal hysteresis gap given cost structure (energy + SLA violations). The principle is well-established in operations research and control theory more broadly.
## Why Hysteresis Works
Without hysteresis, a system operating near a single threshold (e.g., scale at queue=5) will constantly add and remove resources as the queue fluctuates around that value. Each scaling action has overhead cost (VM startup time, worker initialization, context switching), making oscillation expensive.
Hysteresis trades increased resource utilization during the dead zone (queue between 3-10 in the example) for reduced scaling overhead and more stable operation.
## Application to Pipeline Management
For autonomous pipeline workers:
- Scale up threshold: unprocessed queue > N sources
- Scale down threshold: unprocessed queue < M sources (where M < N)
- Dead zone width (N-M) should be tuned to workload volatility and worker startup cost
The optimal gap depends on:
- Worker initialization time (longer startup → wider gap)
- Cost per worker-minute (higher cost → narrower gap, more aggressive scaling down)
- Workload volatility (higher variance → wider gap to avoid thrashing)
---
Relevant Notes:
- [[mdp-based-autoscaling-with-hysteresis-outperforms-simple-threshold-heuristics-for-cloud-resource-allocation]]
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,42 @@
---
type: claim
domain: internet-finance
description: "AMM metric aggregates price weighted by on-chain liquidity making manipulation require sustained capital lock rather than single trades"
confidence: experimental
source: "MetaDAO AMM proposal CF9QUBS251FnNGZHLJ4WbB2CVRi5BtqJbCqMi47NX1PG, 2024-01-24"
created: 2026-03-11
---
# Liquidity-weighted price over time solves futarchy manipulation through capital commitment not vote counting
The proposed AMM metric for MetaDAO futarchy uses "liquidity-weighted price over time" where "the more liquidity that is on the books, the more weight the current price of the pass or fail market is given." This shifts manipulation cost from single-trade price impact (CLOBs) to sustained capital commitment.
In CLOB futarchy, "someone with 1 $META can push the midpoint towards the current best bid/ask" when spreads are wide. The proposal notes this creates vulnerability to selective market cranking and VWAP manipulation through wash trading.
The AMM approach makes manipulation expensive through two mechanisms:
1. **High fees (3-5%)** that "aggressively discourage wash-trading and manipulation"
2. **Liquidity weighting** that requires attackers to provide substantial liquidity at manipulated prices, not just execute trades
The proposal acknowledges CLOB manipulation is "a 1/n problem" addressable by defensive bots, but argues AMMs provide structural resistance rather than requiring active defense.
## Evidence
- Liquidity-weighted price metric described in proposal
- CLOB vulnerability: 1 META can move midpoint in wide spreads
- Proposed 3-5% fee structure
- Wash trading and selective cranking identified as CLOB attack vectors
## Challenges
- Untested in production futarchy (experimental confidence)
- No empirical data on manipulation resistance
- High fees may reduce legitimate trading volume
---
Relevant Notes:
- [[futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders]]
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]]
- metadao.md
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,32 @@
---
type: claim
domain: internet-finance
description: "3-5% swap fees combined with liquidity-weighted averaging make wash trading prohibitively expensive as a manipulation mechanism in futarchy AMMs"
confidence: experimental
source: "MetaDAO AMM proposal by joebuild, 2024-01-24"
created: 2024-01-24
---
# Liquidity-weighted price over time solves futarchy manipulation through wash trading costs because high fees make price movement expensive
MetaDAO's proposed AMM futarchy uses "liquidity-weighted price over time" as the settlement metric, where "the more liquidity that is on the books, the more weight the current price of the pass or fail market is given." This is paired with 3-5% swap fees that "aggressively discourage wash-trading and manipulation."
The mechanism works because:
1. Moving price requires swaps that pay the high fee
2. The liquidity weighting means manipulation attempts when liquidity is high are both expensive (large swaps needed) and heavily weighted in the final calculation
3. The fee revenue accrues to LPs, creating a natural defender class that profits from manipulation attempts
The proposal explicitly contrasts this with CLOB vulnerabilities: "With CLOBs there is always a bid/ask spread, and someone with 1 $META can push the midpoint towards the current best bid/ask" and "VWAP can be manipulated by wash trading."
This is rated experimental rather than proven because the mechanism has not yet been deployed or tested against real manipulation attempts. The theoretical argument is sound but requires empirical validation.
---
Relevant Notes:
- futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders.md
- MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window.md
- optimal governance requires mixing mechanisms because different decisions have different manipulation risk profiles.md
Topics:
- domains/internet-finance/_map
- core/mechanisms/_map

View file

@ -0,0 +1,37 @@
---
type: claim
domain: internet-finance
description: "Little's Law calculates theoretical minimum capacity but real systems need safety margin above that floor"
confidence: proven
source: "Dan Slimmon, 'Using Little's Law to Scale Applications' (2022-06-07)"
created: 2026-03-11
---
# Little's Law provides minimum worker capacity floor for pipeline systems but requires buffer margin for variance
Little's Law (L = λW) gives the theoretical minimum capacity for steady-state systems: total workers needed ≥ (arrival rate) × (average processing time). This is the floor, not the ceiling. Real systems require buffer capacity above this minimum to handle variance in arrival rates and processing times.
For a system processing 1000 requests/second with 0.34s average processing time, Little's Law calculates 340 concurrent requests needed at steady state. However, this assumes perfect uniformity. Production systems experience bursts, outliers, and cascading delays that the long-term average doesn't capture.
The formula is valuable for capacity planning because it establishes the lower bound — you cannot run below this threshold without queue buildup. But it's not a complete scaling solution. The gap between theoretical minimum and operational capacity is where queueing theory, square-root staffing rules, and empirical load testing fill in.
## Evidence
- Little's Law: L = λW where L = average items in system, λ = arrival rate, W = average time per item
- Rearranged for capacity: (total worker threads) ≥ (arrival rate)(average processing time)
- Practical example from source: 1000 req/s × 0.34s = 340 concurrent requests needed
- Source explicitly notes: "Little's Law gives long-term averages only — real systems need buffer capacity beyond the theoretical minimum to handle variance"
## Application to Pipeline Architecture
For Teleo pipeline: if processing ~8 sources per extraction cycle (every 5 min) and each takes ~10-15 min of Claude compute, Little's Law says L = (8/300s) × 750s ≈ 20 sources in-flight at steady state. With 6 workers, each handles ~3.3 sources concurrently — which means workers must pipeline or queue buildup occurs.
More generally: λ = average sources per second, W = average extraction time. Total workers needed ≥ λ × W gives the minimum worker floor. Additional capacity rules (like square-root staffing) provide the safety margin above that floor.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,38 @@
---
type: claim
domain: internet-finance
description: "Structured MDP algorithms that incorporate hysteresis properties achieve better performance and faster execution than simple threshold heuristics in cloud VM provisioning"
confidence: likely
source: "Tournaire et al., 'Optimal Control Policies for Resource Allocation in the Cloud' (2021)"
created: 2026-03-11
---
# MDP-based autoscaling with hysteresis outperforms simple threshold heuristics for cloud resource allocation
Markov Decision Process formulations that incorporate hysteresis properties (different thresholds for scaling up versus scaling down) outperform simple threshold heuristics in both execution time and accuracy for cloud auto-scaling problems. The MDP approach automatically discovers optimal hysteresis thresholds rather than requiring manual tuning.
The problem formulation treats VM provisioning as a sequential decision problem where:
- States = queue lengths + active VMs
- Actions = add/remove VMs
- Rewards = negative cost (energy + SLA violations)
Value iteration and policy iteration algorithms find optimal threshold policies that prevent oscillation by using different thresholds for scaling up (e.g., queue=10) versus scaling down (e.g., queue=3).
Tournaire et al. (2021) demonstrate that structured MDP algorithms incorporating hysteresis properties outperform heuristic approaches in both execution time and accuracy. The key insight is that hysteresis—different thresholds for scaling up versus scaling down—prevents oscillation, and MDP algorithms can discover these optimal thresholds automatically rather than through manual tuning.
## Relevance to Pipeline Architecture
This formulation maps directly to autonomous pipeline management:
- States = (unprocessed queue, in-flight extractions, open PRs, active workers)
- Actions = (spawn worker, kill worker, wait)
- Cost = (Claude compute cost per worker-minute + delay cost per queued source)
The hysteresis insight is particularly valuable for preventing worker thrashing in variable-load scenarios. Simple threshold policies (scale up at queue=N, scale down at queue=M where M < N) provide reasonable baseline performance, but MDP optimization can find better thresholds given cost structure and workload patterns.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,40 @@
---
type: claim
domain: internet-finance
description: "Memecoin holders have purely price-maximizing preferences making futarchy's conditional markets unambiguous unlike protocols with multi-stakeholder tradeoffs"
confidence: experimental
source: "MetaDAO Futardio proposal, 2024-08-14"
created: 2026-03-11
---
# Memecoin governance is ideal futarchy use case because single objective function eliminates long-term tradeoff ambiguity
The Futardio proposal identifies memecoins as "one of the ideal use-cases for futarchy" because "memecoin holders only want the price of the token to increase. There's no question of 'maybe the market knows what's the best short-term action, but not the best long-term action.'"
This addresses a core criticism of futarchy: that conditional markets optimize for measurable short-term outcomes at the expense of unmeasurable long-term value. In most governance contexts (protocols, DAOs, companies), stakeholders have competing preferences—users want low fees, token holders want revenue, developers want sustainability. Futarchy's "vote on values, bet on beliefs" requires consensus on the objective function.
Memecoins eliminate this problem structurally. There is no product, no users to serve, no long-term mission beyond price appreciation. Every stakeholder wants the same thing: number go up. This makes the conditional market's objective function unambiguous—proposals that increase expected token price should pass, those that don't should fail.
The mechanism insight: futarchy works best when the objective function is singular and all participants agree on it. Memecoins are the purest expression of this condition in crypto.
## Evidence
From the proposal:
- "One of the ideal use-cases for futarchy is memecoin governance. This is because memecoin holders only want the price of the token to increase."
- "There's no question of 'maybe the market knows what's the best short-term action, but not the best long-term action.'"
- Proposal structure: "a memecoin launchpad with said bootstrapping mechanism where a portion of every launched memecoin gets allocated to a futarchy DAO"
## Relationship to Existing Claims
This claim complements [[coin price is the fairest objective function for asset futarchy]] by identifying the specific context where coin price is unambiguously correct: assets with no purpose beyond speculation. It also relates to [[redistribution proposals are futarchys hardest unsolved problem because they can increase measured welfare while reducing productive value creation]]—memecoins avoid this problem by having no productive value to begin with.
---
Relevant Notes:
- [[coin price is the fairest objective function for asset futarchy]]
- [[redistribution proposals are futarchys hardest unsolved problem because they can increase measured welfare while reducing productive value creation]]
- MetaDAO
Topics:
- core/mechanisms/_map
- domains/internet-finance/_map

View file

@ -0,0 +1,21 @@
---
type: claim
domain: internet-finance
description: Community approved treasury migration despite inability to verify program builds, revealing governance tradeoffs
confidence: experimental
source: MetaDAO Autocrat v0.1 proposal risk disclosure, December 2023
created: 2026-03-15
---
# MetaDAO Autocrat migration accepted counterparty risk from unverifiable builds prioritizing iteration speed over security guarantees
The proposal explicitly disclosed that the new Autocrat program "was unable to build with solana-verifiable-build" and required "placing trust in me that I didn't introduce a backdoor." Despite this counterparty risk affecting 990,000 META, 10,025 USDC, and 5.5 SOL, the proposal passed. The proposer acknowledged this as a temporary compromise, stating "for future versions, I should always be able to use verifiable builds." This reveals a critical governance tradeoff: the MetaDAO community valued faster iteration and improved functionality (configurable proposal slots, 3-day default) over the security guarantee of verifiable builds. The decision suggests early-stage futarchy DAOs prioritize mechanism refinement over security hardening, accepting elevated trust assumptions to compress development cycles. This pattern may not generalize to mature DAOs or larger treasuries, but demonstrates that governance communities will accept temporary centralization when the alternative is slower evolution of the governance mechanism itself.
---
Relevant Notes:
- futarchy implementations must simplify theoretical mechanisms for production adoption because original designs include impractical elements that academics tolerate but users reject.md
- futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements.md
Topics:
- [[_map]]

View file

@ -0,0 +1,27 @@
---
type: claim
domain: internet-finance
description: Configurable proposal slots with three-day default compress feedback loops in futarchy governance
confidence: experimental
source: MetaDAO Autocrat v0.1 proposal, December 2023
created: 2026-03-15
---
# MetaDAO Autocrat v0.1 reduces proposal duration to three days enabling faster governance iteration
The Autocrat v0.1 upgrade introduces configurable slots per proposal with a default of 3 days, explicitly designed to "allow for quicker feedback loops." This represents a significant reduction from previous implementations and addresses a key friction point in futarchy adoption: the time cost of decision-making. The proposal passed and migrated 990,000 META, 10,025 USDC, and 5.5 SOL to the new program, demonstrating community acceptance of faster iteration cycles. The architectural change makes proposal duration a parameter rather than a constant, allowing MetaDAO to tune the speed-quality tradeoff based on empirical results. This matters because governance mechanism adoption depends on matching decision velocity to organizational needs—too slow and participants route around the system, too fast and markets cannot aggregate information effectively.
### Additional Evidence (confirm)
*Source: [[2025-10-15-futardio-proposal-lets-get-futarded]] | Added: 2026-03-15*
Coal's v0.6 parameters set proposal length at 3 days with 1-day TWAP delay, confirming this as the standard configuration for Autocrat v0.6 implementations. The combination of 1-day TWAP delay plus 3-day proposal window creates a 4-day total decision cycle.
---
Relevant Notes:
- MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window.md
- futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements.md
Topics:
- [[_map]]

View file

@ -30,6 +30,12 @@ The convergence toward lower volatility in recent launches (Ranger, Solomon, Pay
## Limitations
The source presents no failure cases despite eight ICOs, which suggests either selection bias in reporting or insufficient time for failures to materialize. The convergence toward lower volatility could indicate efficient pricing or could reflect declining speculative interest—longer observation periods needed to distinguish these hypotheses.
### Additional Evidence (extend)
*Source: [[2025-10-14-futardio-launch-avici]] | Added: 2026-03-15*
Avici achieved 17x oversubscription ($34.2M committed vs $2M target), exceeding the previously documented 15x benchmark and demonstrating continued strong market demand for futarchy-governed raises.
---
Relevant Notes:

View file

@ -0,0 +1,36 @@
---
type: claim
domain: internet-finance
description: "Hidden Markov chain governs rate switching between active and quiet states"
confidence: proven
source: "Liu et al. (NC State), 'Modeling and Simulation of Nonstationary Non-Poisson Arrival Processes' (2019)"
created: 2026-03-11
---
# MMPP models session-based bursty arrivals through hidden state Markov chain
Markov-Modulated Poisson Process (MMPP) provides a natural framework for modeling arrival processes that alternate between active and quiet periods. The arrival rate switches between discrete states governed by a continuous-time Markov chain, where the state transitions are hidden but the arrival rate in each state is observable.
This architecture directly captures "research session" dynamics where an unobservable state (researcher actively working vs. not working) determines whether arrivals occur at high rate (burst) or low rate (quiet).
## Evidence
Liu et al. define MMPP as a process where "arrival rate switches between states governed by a hidden Markov chain — natural model for 'bursty then quiet' patterns." The underlying Markov chain controls state transitions, while each state has an associated Poisson arrival rate.
The paper notes that "congestion measures are increasing functions of arrival process variability — more bursty = more capacity needed," establishing that MMPP's ability to model burstiness has direct operational implications for capacity planning.
The Markov-MECO process, a related Markovian arrival process (MAP), models "interarrival times as absorption times of a continuous-time Markov chain," providing the theoretical foundation for state-dependent arrival modeling.
## Application to Capital Formation Pipelines
Research-driven capital formation exhibits textbook MMPP behavior: during active research sessions, sources arrive in bursts of 10-20; during inactive periods, arrivals drop to 0-2 per day. The hidden state is whether a research session is active, and this state governs the arrival rate.
Capacity sizing for such processes requires modeling the state transition dynamics (session start/end rates) and the arrival rates in each state, not just the time-averaged arrival rate.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,37 @@
---
type: claim
domain: internet-finance
description: "At 5-20 server scale, queueing theory threshold policies capture most benefit without algorithmic complexity"
confidence: likely
source: "van Leeuwaarden, Mathijsen, Sanders (SIAM Review 2018) - empirical validation of square-root staffing at moderate scale"
created: 2026-03-11
depends_on: ["square-root-staffing-principle-achieves-economies-of-scale-in-queueing-systems-by-operating-near-full-utilization-with-manageable-delays.md"]
---
# Moderate-scale queueing systems benefit from simple threshold policies over sophisticated algorithms because square-root staffing captures most efficiency gains
For systems operating at moderate scale (5-20 servers), the mathematical properties of the Halfin-Whitt regime mean that simple threshold-based policies informed by queueing theory capture most of the available efficiency gains. Sophisticated dynamic algorithms add implementation complexity without proportional benefit at this scale.
The square-root staffing principle works empirically even for systems as small as 5-6 servers, which means the core economies-of-scale insight applies well below the asymptotic regime where the mathematical proofs strictly hold. This has direct implications for pipeline architecture: a system with 5-6 workers doesn't need complex autoscaling algorithms or machine learning-based load prediction.
## Evidence
The SIAM Review tutorial explicitly notes that "square-root safety staffing works empirically even for moderate-sized systems (5-20 servers)" and that "at our scale (5-6 workers), we're in the 'moderate system' range where square-root staffing still provides useful guidance."
The key takeaway from the tutorial: "we don't need sophisticated algorithms for a system this small. Simple threshold policies informed by queueing theory will capture most of the benefit."
## Practical Application
For Teleo pipeline architecture operating at 5-6 workers, this means:
- Simple threshold-based autoscaling policies are sufficient
- Complex predictive algorithms add cost without proportional benefit
- The mathematical foundation (Halfin-Whitt regime) validates simple approaches at this scale
---
Relevant Notes:
- [[square-root-staffing-principle-achieves-economies-of-scale-in-queueing-systems-by-operating-near-full-utilization-with-manageable-delays]]
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,37 @@
---
type: claim
domain: internet-finance
description: "Larger service systems need proportionally fewer excess servers due to square-root scaling of variance"
confidence: proven
source: "Ward Whitt, What You Should Know About Queueing Models (2019)"
created: 2026-03-11
---
# Multi-server queueing systems exhibit economies of scale because safety margin grows sublinearly with system size
Queueing theory proves that larger service systems are more efficient per unit of capacity. If a system with R servers needs β√R excess servers for quality-of-service, then doubling the base load to 2R requires only β√(2R) ≈ 1.41β√R excess servers, not 2β√R.
The safety margin grows as the square root of system size, not linearly. This creates natural economies of scale: the proportional overhead for handling variance decreases as systems grow. A system with 100 servers needs ~10% overhead (assuming β=1), while a system with 10,000 servers needs only ~1% overhead.
This explains why:
- Large call centers are more efficient than small ones
- Cloud providers achieve better utilization than on-premise infrastructure
- Centralized service systems outperform distributed ones on pure efficiency metrics
- Pipeline architectures benefit from batching and pooling
The implication for Teleo: as processing volume grows, the relative cost of maintaining service quality decreases. Early-stage over-provisioning is proportionally more expensive than it will be at scale.
## Evidence
Ward Whitt presents this as a fundamental result from multi-server queueing analysis. The square-root staffing principle directly implies sublinear scaling of overhead. The Halfin-Whitt regime formalizes this: utilization approaches 1 at rate Θ(1/√n), meaning the gap between capacity and load shrinks proportionally as systems grow.
This is observable in practice across industries: Amazon's fulfillment centers, telecom networks, and financial trading systems all exhibit this scaling behavior.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map
- foundations/teleological-economics/_map

View file

@ -0,0 +1,34 @@
---
type: claim
domain: internet-finance
description: "Simulation-based scheduling optimizes the responsiveness-efficiency tradeoff in systems with time-varying arrival rates"
confidence: proven
source: "Simio / WinterSim 2018, Resource Scheduling in Non-Stationary Service Systems"
created: 2026-03-11
---
# Non-stationary service systems require dynamic worker allocation because fixed staffing wastes capacity during low demand and creates bottlenecks during peaks
Service systems with time-varying arrival rates face a fundamental tradeoff: fixed worker counts either waste capacity during quiet periods or create unacceptable wait times during demand spikes. The WinterSim 2018 paper demonstrates that simulation-based approaches can optimize this tradeoff by modeling realistic arrival patterns and testing staffing policies before deployment.
The key insight is that without server constraints there would be no waiting time, but this wastes capacity since arrivals are both stochastic (random within any time window) and nonstationary (the average rate changes over time). Traditional queueing theory assumes stationary arrivals, making it unsuitable for real-world systems where demand varies by hour, day, or season.
The paper validates discrete-event simulation as the method for determining optimal server counts as a function of time, measuring queue depth and adjusting workers dynamically rather than using static scheduling.
## Evidence
- WinterSim 2018 paper explicitly addresses "the gap between theoretical queueing models (which assume stationarity) and real systems (which don't)"
- Paper states: "Without server constraints there would be no waiting time, but this wastes capacity since arrivals are stochastic and nonstationary"
- Simulation-based approach tests staffing policies against realistic arrival patterns to optimize responsiveness vs efficiency
## Relevance to Teleo Pipeline
This directly validates the Living Capital pipeline architecture choice to use dynamic worker scaling based on queue depth rather than fixed MAX_WORKERS or cron-based scheduling. The paper's framework maps precisely to the agent task processing problem: LLM API calls are the "servers", task arrivals are nonstationary (bursty during market hours, quiet overnight), and the goal is minimizing latency without wasting compute capacity.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,34 @@
---
type: claim
domain: internet-finance
description: "CIATA method models time-varying bursty arrivals through combined rate and variance parameters"
confidence: proven
source: "Liu et al. (NC State), 'Modeling and Simulation of Nonstationary Non-Poisson Arrival Processes' (2019)"
created: 2026-03-11
---
# Nonstationary non-Poisson arrival modeling requires rate function plus dispersion ratio to capture burstiness
Standard Poisson process assumptions break down when arrivals exhibit correlation and burstiness. The CIATA (Combined Inversion-and-Thinning Approach) method models arrival processes through two parameters: a rate function λ(t) capturing time-varying intensity, and an asymptotic variance-to-mean (dispersion) ratio capturing burstiness beyond what the rate alone predicts.
This two-parameter approach is necessary because time-varying rate alone cannot capture the correlation structure of bursty arrivals. A process with constant high variance but varying rate behaves fundamentally differently from a Poisson process with the same rate function.
## Evidence
Liu et al. demonstrate that CIATA models "target arrival processes via rate function + dispersion ratio — captures both time-varying intensity and burstiness." The paper shows that "replacing a time-varying arrival rate with a constant (max or average) leads to systems being badly understaffed or overstaffed," proving that rate variation alone is insufficient.
The Markov-Modulated Poisson Process (MMPP) framework provides the theoretical foundation: "arrival rate switches between states governed by a hidden Markov chain — natural model for 'bursty then quiet' patterns." This captures the correlation structure that pure rate functions miss.
## Relevance to Internet Finance
This modeling framework directly applies to capital formation pipelines where research sessions create bursts of 10-20 source arrivals followed by quiet periods of 0-2 per day. The hidden state (research session active vs. inactive) governs the arrival rate, making this a textbook MMPP application.
Capacity planning based on average arrival rates will systematically fail for such processes, leading to either chronic congestion during bursts or wasteful overcapacity during quiet periods.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,36 @@
---
type: claim
domain: internet-finance
description: "MDP research shows threshold policies are provably optimal for most queueing systems"
confidence: proven
source: "Li et al., 'An Overview for Markov Decision Processes in Queues and Networks' (2019)"
created: 2026-03-11
---
# Optimal queue policies have threshold structure making simple rules near-optimal
Six decades of operations research on Markov Decision Processes applied to queueing systems consistently shows that optimal policies have threshold structure: "serve if queue > K, idle if queue < K" or "spawn worker if queue > X and workers < Y." This means even without solving the full MDP, well-tuned threshold policies achieve near-optimal performance.
For multi-server systems, optimal admission and routing policies follow similar patterns: join-shortest-queue, threshold-based admission control. The structural simplicity emerges from the mathematical properties of the value function in continuous-time MDPs where decisions happen at state transitions (arrivals, departures).
This has direct implications for pipeline architecture: systems with manageable state spaces (queue depths across stages, worker counts, time-of-day) can use exact MDP solution via value iteration, but even approximate threshold policies will perform near-optimally due to the underlying structure.
## Evidence
Li et al. survey 60+ years of MDP research in queueing theory (1960s to 2019), covering:
- Continuous-time MDPs for queue management with decisions at state transitions
- Classic results showing threshold structure in optimal policies
- Multi-server systems where optimal policies are simple (join-shortest-queue, threshold-based)
- Dynamic programming and stochastic optimization methods for deriving optimal policies
The key challenge identified is curse of dimensionality: state space explodes with multiple queues/stages. Practical approaches include approximate dynamic programming and reinforcement learning for large state spaces.
Emerging direction: deep RL for queue management in networks and cloud computing.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,35 @@
---
type: claim
domain: internet-finance
description: "Small state spaces enable exact value iteration while large spaces require approximate policies"
confidence: likely
source: "Li et al., 'An Overview for Markov Decision Processes in Queues and Networks' (2019)"
created: 2026-03-11
---
# Pipeline state space size determines whether exact MDP solution or threshold heuristics are optimal
The curse of dimensionality in queueing MDPs creates a sharp divide in optimal solution approaches. Systems with manageable state spaces—such as pipelines with queue depths across 3 stages, worker counts, and time-of-day variables—can use exact MDP solution via value iteration to derive provably optimal policies.
However, as state space grows (multiple queues, many stages, complex dependencies), exact solution becomes computationally intractable. For these systems, approximate dynamic programming or reinforcement learning becomes necessary, accepting near-optimal performance in exchange for tractability.
The Teleo pipeline architecture sits in the tractable regime: queue depths across 3 stages, worker counts, and time-of-day create a state space small enough for exact solution. This means the system can compute provably optimal policies rather than relying on heuristics, though the threshold structure of optimal policies means well-tuned simple rules would also perform near-optimally.
## Evidence
Li et al. identify curse of dimensionality as the key challenge: "state space explodes with multiple queues/stages." The survey distinguishes between:
- Small state spaces: exact MDP solution via value iteration
- Large state spaces: approximate dynamic programming, reinforcement learning
Practical approaches for large systems include deep RL for queue management in networks and cloud computing, accepting approximation in exchange for scalability.
The source explicitly notes that Teleo pipeline has "a manageable state space (queue depths across 3 stages, worker counts, time-of-day)—small enough for exact MDP solution via value iteration."
---
Relevant Notes:
- optimal queue policies have threshold structure making simple rules near-optimal
- domains/internet-finance/_map
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,33 @@
---
type: claim
domain: internet-finance
description: "Raydium's liquidity farming infrastructure has converged on standardized parameters that projects adopt for token launches"
confidence: likely
source: "FutureDAO Raydium farm proposal, 2024-11-08; Raydium documentation"
created: 2026-03-11
---
# Raydium liquidity farming follows standard pattern of 1% token allocation, 7-90 day duration, and CLMM pool architecture
Raydium has established a standardized liquidity farming template that projects adopt when launching tokens. The FutureDAO proposal demonstrates this pattern: 1% of total token supply allocated as rewards, farming period between 7-90 days per platform guidelines, and Concentrated Liquidity Market Maker (CLMM) pool architecture.
The proposal specifies standard implementation steps: create CLMM pool for token-stablecoin pair, establish farm linked to the pool with defined emission rate and duration, and ongoing monitoring. Raydium offers four fee tiers (0.01%, 0.05%, 0.25%, 1%) that projects select based on token volatility and expected trading volume.
Operational costs are minimal—approximately 0.1 SOL for pool and farm creation according to Raydium documentation. This low barrier to entry combined with standardized parameters suggests Raydium has productized liquidity bootstrapping into a repeatable template that reduces decision complexity for new projects.
The standardization extends beyond technical parameters to expected outcomes: proposals cite "enhanced liquidity," "reduced slippage," and "community engagement" as the value proposition, indicating convergence on both mechanism and narrative.
## Evidence
- FutureDAO proposal allocates exactly 1% of total $FUTURE supply for Raydium farm rewards
- Raydium guidelines specify 7-90 day farming periods as standard range
- CLMM pool creation costs ~0.1 SOL per Raydium documentation
- Four standardized fee tiers: 0.01%, 0.05%, 0.25%, 1%
---
Relevant Notes:
- [[futarchy-governed DAOs converge on traditional corporate governance scaffolding for treasury operations because market mechanisms alone cannot provide operational security and legal compliance]]
- [[MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale]]
Topics:
- domains/internet-finance/_map

View file

@ -0,0 +1,49 @@
---
type: claim
domain: internet-finance
description: "The Seyf AI wallet raised $200 (0.07% of target) on MetaDAO's futardio platform before refunding in under 24 hours, providing market-priced evidence of weak demand for the concept at this stage"
confidence: experimental
source: "Rio via futard.io launch data; 2026-03-05 Seyf launch on futardio platform"
created: 2026-03-12
depends_on:
- "seyf-demonstrates-intent-based-wallet-architecture-where-natural-language-replaces-manual-defi-navigation"
- "MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale"
challenged_by:
- "Single data point; launch community reach and marketing effort are unknown variables"
secondary_domains:
- mechanisms
---
# Seyf's futardio fundraise raised $200 against a $300,000 target, signaling near-zero market traction for the AI-native wallet concept on MetaDAO in March 2026
Seyf, which describes itself as "the first AI-native wallet for Solana," launched a fundraise on MetaDAO's futardio platform on 2026-03-05. The raise closed the following day (2026-03-06) with $200.00 committed against a $300,000 target — 0.07% of the funding goal. Status: Refunding.
This outcome is notable because:
1. **The same platform produced dramatically different results for other projects.** The Cult meme coin launched on futardio and raised $11.4M in a single day. The delta between near-zero and $11.4M on the same infrastructure in the same ecosystem isolates the product concept as the key variable.
2. **The futarchy mechanism functions as a market pricing signal.** Futardio's ownership-coin model means participants had financial stakes in the decision. The near-zero commitment is not a click-through survey — it reflects actual capital allocation behavior, which is the strongest available demand signal.
3. **The fundraise failed despite a plausible market narrative.** Seyf's pitch — AI abstraction over DeFi complexity, intent-based UX, no manual transaction construction — is coherent and addresses a real friction. The failure does not disprove the underlying UX problem; it suggests either insufficient product evidence at launch, weak community distribution, or market skepticism about AI wallet execution risk at this stage.
## Context
- Funding target: $300,000 (note: pitch describes a $500K raise; $300K may reflect the minimum viable threshold)
- Total committed: $200.00
- Launch date: 2026-03-05; Closed: 2026-03-06
- Platform: futard.io (MetaDAO)
- Token: Ggc
## Limitations
This is a single data point. The fundraise may reflect distribution failure rather than concept failure — if the launch was not promoted to the Solana DeFi community, near-zero commitment says more about reach than demand. No evidence exists about marketing effort at launch.
---
Relevant Notes:
- [[seyf-demonstrates-intent-based-wallet-architecture-where-natural-language-replaces-manual-defi-navigation]] — the product architecture that failed to attract commitments
- [[futardio-cult-raised-11-4-million-in-one-day-through-futarchy-governed-meme-coin-launch]] — contrast: what succeeded on same platform same period
- [[MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale]] — the platform infrastructure
Topics:
- [[_map]]

View file

@ -0,0 +1,36 @@
---
type: claim
domain: internet-finance
description: "Bursty arrival processes require more safety capacity than Poisson models predict, scaled by variance-to-mean ratio"
confidence: proven
source: "Whitt et al., 'Staffing a Service System with Non-Poisson Non-Stationary Arrivals', Cambridge Core, 2016"
created: 2026-03-11
---
# Square-root staffing formula requires peakedness adjustment for non-Poisson arrivals because bursty processes need proportionally more safety capacity than the Poisson baseline predicts
The standard square-root staffing formula (workers = mean load + safety factor × √mean) assumes Poisson arrivals where variance equals mean. Real-world arrival processes violate this assumption through burstiness (arrivals clustered in time) or smoothness (arrivals more evenly distributed than random).
Whitt et al. extend the square-root staffing rule by introducing **peakedness** — the variance-to-mean ratio of the arrival process — as the key adjustment parameter. For bursty arrivals (peakedness > 1), systems require MORE safety capacity than Poisson models suggest. For smooth arrivals (peakedness < 1), systems need LESS.
The modified staffing formula adjusts the square-root safety margin by multiplying by the square root of peakedness. This correction is critical for non-stationary systems where arrival rates vary over time (daily cycles, seasonal patterns, or event-driven spikes).
## Evidence
- Whitt et al. (2016) prove that peakedness — the variance-to-mean ratio — captures the essential non-Poisson behavior for staffing calculations
- Standard Poisson assumption (variance = mean) fails empirically for bursty workloads like research paper dumps, product launches, or customer service spikes
- Using constant staffing (fixed MAX_WORKERS) regardless of queue state creates dual failure: over-provisioning during quiet periods (wasted compute) and under-provisioning during bursts (queue explosion)
## Relevance to Pipeline Architecture
Teleo's research pipeline exhibits textbook non-Poisson non-stationary arrivals: research dumps arrive in bursts of 15+ sources, futardio launches come in waves of 20+ proposals, while other days see minimal activity. The peakedness parameter quantifies exactly how much extra capacity is needed beyond naive square-root staffing.
This directly informs dynamic worker scaling: measure empirical peakedness from historical arrival data, adjust safety capacity accordingly, and scale workers based on current queue depth rather than using fixed limits.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,35 @@
---
type: claim
domain: internet-finance
description: "The QED Halfin-Whitt regime shows server count n grows while utilization approaches 1 at rate Θ(1/√n)"
confidence: proven
source: "van Leeuwaarden, Mathijsen, Sanders (SIAM Review 2018) - Economies-of-Scale in Many-Server Queueing Systems"
created: 2026-03-11
---
# Square-root staffing principle achieves economies of scale in queueing systems by operating near full utilization with manageable delays
The QED (Quality-and-Efficiency-Driven) Halfin-Whitt heavy-traffic regime provides the mathematical foundation for understanding economies of scale in multi-server systems. As server count n grows, the system can operate at utilization approaching 1 while maintaining bounded delays, with the key insight that excess capacity needs to grow only at rate Θ(1/√n) rather than linearly.
This "square root staffing" principle means larger systems need proportionally fewer excess servers for the same service quality. A system with 100 servers might need 10 excess servers for target service levels, while a system with 400 servers needs only 20 excess servers (not 40) for the same quality.
The regime applies across system sizes from tens to thousands of servers, and empirical validation shows the square-root safety staffing works even for moderate-sized systems in the 5-20 server range.
## Evidence
From the SIAM Review tutorial:
- Mathematical proof that utilization approaches 1 at rate Θ(1/√n) as server count grows
- Empirical validation showing square-root staffing works for systems as small as 5-20 servers
- The regime connects abstract queueing theory to practical staffing decisions across industries
## Implications for Pipeline Architecture
For systems in the 5-6 worker range, sophisticated dynamic algorithms provide minimal benefit over simple threshold policies informed by queueing theory. The economies-of-scale result also indicates that marginal value per worker decreases as systems grow beyond 20+ workers, which is critical for cost optimization in scaled deployments.
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

View file

@ -0,0 +1,30 @@
---
type: claim
domain: internet-finance
description: "Optimal server provisioning follows R + β√R formula where R is base load and β controls service level"
confidence: proven
source: "Ward Whitt, What You Should Know About Queueing Models (2019)"
created: 2026-03-11
---
# Square-root staffing principle provisions servers as base load plus beta times square root of base load where beta is quality-of-service parameter
The square-root staffing rule provides optimal server provisioning: if base load requires R workers at full utilization, provision R + β√R workers where β ≈ 1-2 depending on target service level. This formula emerges from queueing theory analysis of multi-server systems and represents the sweet spot between over-provisioning (wasteful) and under-provisioning (queue explosion).
The principle applies across domains: call centers, compute pipelines, service systems. For Teleo pipeline scale (~8 sources/cycle, ~5 min service time), this gives concrete worker count guidance without requiring peak-load provisioning.
The underlying insight: variance in arrival and service times creates queueing delays even when average utilization is below 100%. The square-root safety margin handles this variance efficiently. The margin grows with system size but at a sublinear rate, creating economies of scale.
## Evidence
Ward Whitt's practitioner guide establishes this as the foundational staffing principle in operations research. The formula derives from the Halfin-Whitt heavy-traffic regime analysis, where systems operate near full utilization (approaching 1 at rate Θ(1/√n) as servers n grow) while keeping delays manageable.
Erlang C formula provides the computational implementation for determining β given target service levels (probability of delay, average wait time).
---
Relevant Notes:
- domains/internet-finance/_map
Topics:
- core/mechanisms/_map

Some files were not shown because too many files have changed in this diff Show more