Compare commits

No commits in common. "55ff1b0c758526eb278608befcb19a0ece60b27a" and "63017207706c210f438a30d485bce33152ddbb31" have entirely different histories.

16 changed files with 0 additions and 836 deletions

@@ -1,234 +0,0 @@
# Vital Signs Operationalization Spec
*How to automate the five collective health vital signs for Milestone 4.*
Each vital sign maps to specific data sources already available in the repo.
The goal is scripts that can run on every PR merge (or on a cron) and produce
a dashboard JSON.
---
## 1. Cross-Domain Linkage Density (circulation)
**Data source:** All `.md` files in `domains/`, `core/`, `foundations/`
**Algorithm:**
1. For each claim file, extract all `[[wiki links]]` via regex: `\[\[([^\]]+)\]\]`
2. For each link target, resolve to a file path and read its `domain:` frontmatter
3. Compare link target domain to source file domain
4. Calculate: `cross_domain_links / total_links` per domain and overall
**Output:**
```json
{
"metric": "cross_domain_linkage_density",
"overall": 0.22,
"by_domain": {
"health": { "total_links": 45, "cross_domain": 12, "ratio": 0.27 },
"internet-finance": { "total_links": 38, "cross_domain": 8, "ratio": 0.21 }
},
"status": "healthy",
"threshold": { "low": 0.15, "high": 0.30 }
}
```
**Implementation notes:**
- Link resolution is the hard part. Titles are prose, not slugs. Need fuzzy matching or a title→path index.
- CLAIM CANDIDATE: Build a `claim-index.json` mapping every claim title to its file path and domain. This becomes infrastructure for multiple vital signs.
- Pre-step: generate index with `find domains/ core/ foundations/ -name "*.md"` → parse frontmatter → build `{title: path, domain: ...}`.
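A minimal sketch of the density computation, assuming the title→domain index from the pre-step already exists (the `claim_index` and `files` shapes below are hypothetical stand-ins, and the regex is extended past the one above to strip `[[title|display text]]` aliases):

```python
import re
from collections import defaultdict

# [[title]] or [[title|display text]] — capture the title, drop the alias
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def linkage_density(files, claim_index):
    """files: {path: (source_domain, markdown_body)};
    claim_index: {claim_title: domain} — the pre-step title index."""
    stats = defaultdict(lambda: {"total_links": 0, "cross_domain": 0})
    for path, (domain, body) in files.items():
        for target in WIKI_LINK.findall(body):
            target_domain = claim_index.get(target.strip())
            if target_domain is None:
                continue  # unresolved title — this is where fuzzy matching goes
            stats[domain]["total_links"] += 1
            if target_domain != domain:
                stats[domain]["cross_domain"] += 1
    for s in stats.values():
        s["ratio"] = round(s["cross_domain"] / s["total_links"], 2) if s["total_links"] else 0.0
    return dict(stats)
```

Unresolved links are skipped rather than counted, so the ratio only reflects links the index can place — a conservative choice until fuzzy matching exists.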
---
## 2. Evidence Freshness (metabolism)
**Data source:** `source:` and `created:` frontmatter fields in all claim files
**Algorithm:**
1. For each claim, parse `created:` date
2. Parse `source:` field — extract year references (regex: `\b(20\d{2})\b`)
3. Calculate `claim_age = today - created_date`
4. For fast-moving domains (health, ai-alignment, internet-finance): flag if `claim_age > 180 days`
5. For slow-moving domains (cultural-dynamics, critical-systems): flag if `claim_age > 365 days`
**Output:**
```json
{
"metric": "evidence_freshness",
"median_claim_age_days": 45,
"by_domain": {
"health": { "median_age": 30, "stale_count": 2, "total": 35, "status": "healthy" },
"ai-alignment": { "median_age": 60, "stale_count": 5, "total": 28, "status": "warning" }
},
"stale_claims": [
{ "title": "...", "domain": "...", "age_days": 200, "path": "..." }
]
}
```
**Implementation notes:**
- Source field is free text, not structured. Year extraction via regex is best-effort.
- Better signal: compare `created:` date to `git log --follow` last-modified date. A claim created 6 months ago but enriched last week is fresh.
- QUESTION: Should we track "source publication date" separately from "claim creation date"? A claim created today citing a 2020 study is using old evidence but was recently written.
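The staleness check above can be sketched as follows (the `claims` record shape is a hypothetical stand-in; domains outside the two lists fall back to the slow threshold, which is an assumption the spec doesn't settle):

```python
import re
from datetime import date

FAST = {"health", "ai-alignment", "internet-finance"}   # 180-day threshold
SLOW = {"cultural-dynamics", "critical-systems"}        # 365-day threshold
YEAR = re.compile(r"\b(20\d{2})\b")  # best-effort year extraction from free-text source

def freshness(claims, today=None):
    """claims: list of dicts with 'domain', 'created' (date), 'source' (str)."""
    today = today or date.today()
    stale = []
    for c in claims:
        age = (today - c["created"]).days
        # ASSUMPTION: unlisted domains use the slow (365-day) threshold
        limit = 180 if c["domain"] in FAST else 365
        if age > limit:
            stale.append({**c, "age_days": age,
                          "source_years": [int(y) for y in YEAR.findall(c.get("source", ""))]})
    return stale
```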
---
## 3. Confidence Calibration Accuracy (immune function)
**Data source:** `confidence:` frontmatter + claim body content
**Algorithm:**
1. For each claim, read `confidence:` level
2. Scan body for evidence markers:
- **proven indicators:** "RCT", "randomized", "meta-analysis", "N=", "p<", "statistically significant", "replicated", "mathematical proof"
- **likely indicators:** "study", "data shows", "evidence", "research", "survey", specific numbers/percentages
- **experimental indicators:** "suggests", "argues", "framework", "model", "theory"
- **speculative indicators:** "may", "could", "hypothesize", "imagine", "if"
3. Flag mismatches: `proven` claim with no empirical markers, `speculative` claim with strong empirical evidence
**Output:**
```json
{
"metric": "confidence_calibration",
"total_claims": 200,
"flagged": 8,
"flag_rate": 0.04,
"status": "healthy",
"flags": [
{ "title": "...", "confidence": "proven", "issue": "no empirical evidence markers", "path": "..." }
]
}
```
**Implementation notes:**
- This is the hardest to automate well. Keyword matching is a rough proxy — an LLM evaluation would be more accurate but expensive.
- Minimum viable: flag `proven` claims without any empirical markers. This catches the worst miscalibrations with low false-positive rate.
- FLAG @Leo: Consider whether periodic LLM-assisted audits (like the foundations audit) are the right cadence rather than per-PR automation. Maybe automated for `proven` only, manual audit for `likely`.
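The minimum viable check — flag `proven` claims lacking any empirical marker — can be sketched as (claim record shape is a hypothetical stand-in; matching is deliberately crude substring search):

```python
# Marker list from the algorithm above; matching is case-insensitive and best-effort.
PROVEN_MARKERS = ["RCT", "randomized", "meta-analysis", "N=", "p<",
                  "statistically significant", "replicated", "mathematical proof"]

def flag_miscalibrated(claims):
    """claims: list of dicts with 'title', 'confidence', 'body'.
    Minimum viable rule: a 'proven' claim must contain at least one empirical marker."""
    flags = []
    for c in claims:
        if c["confidence"] != "proven":
            continue
        body = c["body"].lower()
        if not any(m.lower() in body for m in PROVEN_MARKERS):
            flags.append({"title": c["title"], "confidence": "proven",
                          "issue": "no empirical evidence markers"})
    return flags
```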
---
## 4. Orphan Ratio (neural integration)
**Data source:** All claim files + the claim-index from VS1
**Algorithm:**
1. Build a reverse-link index: for each claim, which other claims link TO it
2. Claims with 0 incoming links are orphans
3. Calculate `orphan_count / total_claims`
**Output:**
```json
{
"metric": "orphan_ratio",
"total_claims": 200,
"orphans": 25,
"ratio": 0.125,
"status": "healthy",
"threshold": 0.15,
"orphan_list": [
{ "title": "...", "domain": "...", "path": "...", "outgoing_links": 3 }
]
}
```
**Implementation notes:**
- Depends on the same claim-index and link-resolution infrastructure as VS1.
- Orphans with outgoing links are "leaf contributors" — they cite others but nobody cites them. These are the easiest to integrate (just add a link from a related claim).
- Orphans with zero outgoing links are truly isolated — may indicate extraction without integration.
- New claims are expected to be orphans briefly. Filter: exclude claims created in the last 7 days from the orphan count.
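The reverse-link pass plus the 7-day grace filter can be sketched as (claim record shape is a hypothetical stand-in for the claim-index entries):

```python
from datetime import date

def orphan_report(claims, today=None, grace_days=7):
    """claims: {title: {'created': date, 'outgoing_links': [titles]}}.
    Claims with zero incoming links are orphans; claims created within
    the grace window are excluded (new claims are expected orphans)."""
    today = today or date.today()
    incoming = {t: 0 for t in claims}
    for c in claims.values():
        for target in c["outgoing_links"]:
            if target in incoming:
                incoming[target] += 1
    orphans = [t for t, n in incoming.items()
               if n == 0 and (today - claims[t]["created"]).days > grace_days]
    return {"total_claims": len(claims), "orphans": len(orphans),
            "ratio": round(len(orphans) / len(claims), 3) if claims else 0.0,
            "orphan_list": orphans}
```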
---
## 5. Review Throughput (homeostasis)
**Data source:** GitHub PR data via `gh` CLI
**Algorithm:**
1. `gh pr list --state all --json number,state,createdAt,mergedAt,closedAt,title,author`
2. Calculate per week: PRs opened, PRs merged, PRs pending
3. Track review latency: `mergedAt - createdAt` for each merged PR
4. Flag: backlog > 3 open PRs, or median review latency > 48 hours
**Output:**
```json
{
"metric": "review_throughput",
"current_backlog": 2,
"median_review_latency_hours": 18,
"weekly_opened": 4,
"weekly_merged": 3,
"status": "healthy",
"thresholds": { "backlog_warning": 3, "latency_warning_hours": 48 }
}
```
**Implementation notes:**
- This is the easiest to implement — `gh` CLI provides structured JSON output.
- Could run on every PR merge as a post-merge check.
- QUESTION: Should we weight by PR size? A PR with 11 claims (like Theseus PR #50) takes longer to review than a 3-claim PR. Latency per claim might be fairer.
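The latency and backlog calculation over the `gh pr list` JSON can be sketched as (assumes `gh`'s uppercase `state` values and ISO-8601 timestamps with a trailing `Z`; the weekly-bucketing step is omitted):

```python
import json
import statistics
from datetime import datetime

def parse_ts(s):
    # gh emits e.g. "2026-03-01T12:00:00Z"; normalize the Z for fromisoformat
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

def throughput(pr_json, backlog_warning=3, latency_warning_hours=48):
    """pr_json: raw output of
    `gh pr list --state all --json number,state,createdAt,mergedAt`."""
    prs = json.loads(pr_json)
    backlog = sum(1 for p in prs if p["state"] == "OPEN")
    latencies = [(parse_ts(p["mergedAt"]) - parse_ts(p["createdAt"])).total_seconds() / 3600
                 for p in prs if p.get("mergedAt")]
    median = statistics.median(latencies) if latencies else 0.0
    status = ("warning" if backlog > backlog_warning or median > latency_warning_hours
              else "healthy")
    return {"current_backlog": backlog,
            "median_review_latency_hours": round(median, 1),
            "status": status}
```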
---
## Shared Infrastructure
### claim-index.json
All five vital signs benefit from a pre-computed index:
```json
{
"claims": [
{
"title": "the healthcare attractor state is...",
"path": "domains/health/the healthcare attractor state is....md",
"domain": "health",
"confidence": "likely",
"created": "2026-02-15",
"outgoing_links": ["claim title 1", "claim title 2"],
"incoming_links": ["claim title 3"]
}
],
"generated": "2026-03-08T10:30:00Z"
}
```
**Build script:** Parse all `.md` files with `type: claim` frontmatter. Extract title (first `# ` heading), domain, confidence, created, and all `[[wiki links]]`. Resolve links bidirectionally.
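A sketch of the build script (naive line-based frontmatter parsing rather than a YAML library, and exact-title link resolution — both simplifying assumptions; `md_files` is a hypothetical in-memory stand-in for the `find` results):

```python
import re

FRONTMATTER = re.compile(r"^---\n(.*?)\n---\n", re.DOTALL)
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")  # strip display-text aliases
TITLE = re.compile(r"^# (.+)$", re.MULTILINE)

def build_index(md_files):
    """md_files: {path: raw_markdown}. Returns the claim-index structure above
    (minus the 'generated' timestamp)."""
    claims = {}
    for path, text in md_files.items():
        fm = FRONTMATTER.match(text)
        if not fm or "type: claim" not in fm.group(1):
            continue
        # naive key: value parsing — multi-line YAML values are not handled
        fields = dict(line.split(":", 1) for line in fm.group(1).splitlines()
                      if ":" in line)
        title_m = TITLE.search(text)
        title = title_m.group(1).strip() if title_m else path
        claims[title] = {"title": title, "path": path,
                         "domain": fields.get("domain", "").strip(),
                         "confidence": fields.get("confidence", "").strip(),
                         "created": fields.get("created", "").strip(),
                         "outgoing_links": [t.strip() for t in WIKI_LINK.findall(text)],
                         "incoming_links": []}
    # resolve bidirectionally: every outgoing link becomes someone's incoming link
    for title, c in claims.items():
        for target in c["outgoing_links"]:
            if target in claims:
                claims[target]["incoming_links"].append(title)
    return {"claims": list(claims.values())}
```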
### Dashboard aggregation
A single `vital-signs.json` output combining all 5 metrics:
```json
{
"generated": "2026-03-08T10:30:00Z",
"overall_status": "healthy",
"vital_signs": {
"cross_domain_linkage": { ... },
"evidence_freshness": { ... },
"confidence_calibration": { ... },
"orphan_ratio": { ... },
"review_throughput": { ... }
}
}
```
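Combining per-metric statuses into `overall_status` can be as simple as worst-of (assuming a three-level healthy/warning/critical scale — the examples above only show two levels, so `critical` is an assumption):

```python
# ASSUMPTION: three-level status scale; only healthy/warning appear in the spec
SEVERITY = {"healthy": 0, "warning": 1, "critical": 2}

def aggregate(vital_signs):
    """vital_signs: {name: metric_dict}; each metric dict carries a 'status'.
    Overall status is the worst individual status."""
    worst = max((m.get("status", "healthy") for m in vital_signs.values()),
                key=SEVERITY.__getitem__, default="healthy")
    return {"overall_status": worst, "vital_signs": vital_signs}
```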
### Trigger options
1. **Post-merge hook:** Run on every PR merge to main. Most responsive.
2. **Daily cron:** Run once per day. Less noise, sufficient for trend detection.
3. **On-demand:** Agent runs manually when doing health checks.
Recommendation: daily cron for the dashboard, with post-merge checks only for review throughput (cheapest to compute, most time-sensitive).
---
## Implementation Priority
| Vital Sign | Difficulty | Dependencies | Priority |
|-----------|-----------|-------------|----------|
| Review throughput | Easy | `gh` CLI only | 1 — implement first |
| Orphan ratio | Medium | claim-index | 2 — reveals integration gaps |
| Linkage density | Medium | claim-index + link resolution | 3 — reveals siloing |
| Evidence freshness | Medium | date parsing | 4 — reveals calcification |
| Confidence calibration | Hard | NLP/heuristics | 5 — partial automation, rest manual |
Build claim-index first (shared dependency for 2, 3, 4), then review throughput (independent), then orphan ratio → linkage density → freshness → calibration.

@@ -1,72 +0,0 @@
---
type: claim
domain: collective-intelligence
description: "Hayek's knowledge problem — no central planner can access the dispersed, tacit, time-and-place-specific knowledge that market participants possess, but price signals aggregate this knowledge into actionable information — is the theoretical foundation for prediction markets, futarchy, and any system that coordinates through information rather than authority"
confidence: proven
source: "Hayek, 'The Use of Knowledge in Society' (1945); Fama, 'Efficient Capital Markets' (1970); Grossman & Stiglitz (1980); Surowiecki, 'The Wisdom of Crowds' (2004); Nobel Prize in Economics 1974 (Hayek), 2013 (Fama)"
created: 2026-03-08
---
# Decentralized information aggregation outperforms centralized planning because dispersed knowledge cannot be collected into a single mind but can be coordinated through price signals that encode local information into globally accessible indicators
Friedrich Hayek (1945) identified the fundamental problem of economic coordination: the knowledge required for rational resource allocation is never concentrated in a single mind. It is dispersed among millions of individuals as "knowledge of the particular circumstances of time and place" — tacit, local, perishable information that cannot be transmitted through any reporting system. The economic problem is not how to allocate given resources optimally (the calculation problem), but how to coordinate when no one possesses the information needed to calculate the optimum.
## The price mechanism as information aggregator
Hayek's solution: the price system. Prices aggregate dispersed information into a single signal that guides action without requiring anyone to understand the full picture. When a natural disaster disrupts tin supply, the price of tin rises. Every tin user worldwide adjusts their behavior — conserving tin, substituting alternatives, expanding production — without knowing WHY the price rose. The price signal encodes the local knowledge of the disruption and transmits it globally at near-zero cost.
This mechanism has three properties that no centralized system can replicate:
1. **Tacit knowledge inclusion.** Much dispersed knowledge is tacit — the factory manager's sense that demand is shifting, the trader's intuition about counterparty risk. Tacit knowledge cannot be articulated in reports but CAN be expressed through market action (buying, selling, pricing). Markets aggregate knowledge that cannot be communicated any other way.
2. **Incentive compatibility.** Market participants who act on accurate private information profit; those who act on inaccurate information lose. The market mechanism creates incentive compatibility — honest information revelation is the profitable strategy. This is why [[speculative markets aggregate information through incentive and selection effects not wisdom of crowds]] — the "incentive effect" is Hayek's price mechanism formalized through [[mechanism design enables incentive-compatible coordination by constructing rules under which self-interested agents voluntarily reveal private information and take socially optimal actions|mechanism design theory]].
3. **Dynamic updating.** Prices adjust continuously as new information arrives. No committee meeting, no reporting cycle, no bureaucratic delay. The information aggregation is real-time and automatic.
## The Efficient Market Hypothesis and its limits
Fama (1970) formalized Hayek's insight as the Efficient Market Hypothesis: asset prices reflect all available information. In the strong form, no one can consistently outperform the market because prices already incorporate all public and private information.
Grossman and Stiglitz (1980) identified the paradox: if prices fully reflect all information, no one has incentive to pay the cost of acquiring information — but if no one acquires information, prices cannot reflect it. The resolution: markets are informationally efficient to the degree that information-gathering costs are compensated by trading profits. Prices are not perfectly efficient but are efficient enough that systematic exploitation is difficult.
This paradox directly explains [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]] — when a decision is obvious, the market price reflects the consensus immediately, and no one profits from trading on information everyone already has. Low volume in uncontested decisions is not a failure but a feature of efficient information aggregation.
## Why centralized alternatives fail
The Soviet calculation debate (Mises 1920, Hayek 1945) established that centralized planning fails not because planners are stupid or corrupt, but because the information problem is structurally unsolvable. Even an omniscient, benevolent planner could not solve it because:
1. The relevant knowledge changes continuously — any snapshot is stale before it arrives
2. Tacit knowledge cannot be transmitted — it can only be expressed through action
3. Aggregation requires incentives — without profit/loss signals, there is no mechanism to elicit honest information revelation
This is not an argument against all coordination — it is an argument that coordination through prices outperforms coordination through authority when the relevant knowledge is dispersed. When knowledge IS concentrated (a small team, a single expert domain), hierarchy can outperform markets. The question is always: where is the relevant knowledge?
## Why this is foundational
Information aggregation theory provides the theoretical grounding for:
- **Prediction markets:** [[speculative markets aggregate information through incentive and selection effects not wisdom of crowds]] — prediction market accuracy IS Hayek's price mechanism applied to forecasting.
- **Futarchy:** [[futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders]] — futarchy works because the price mechanism aggregates dispersed governance knowledge more efficiently than voting.
- **The internet finance thesis:** [[internet finance generates 50 to 100 basis points of additional annual GDP growth by unlocking capital allocation to previously inaccessible assets and eliminating intermediation friction]] — the GDP impact comes from extending the price mechanism to assets and decisions previously coordinated through hierarchy.
- **Hayek's broader framework:** [[Hayek argued that designed rules of just conduct enable spontaneous order of greater complexity than deliberate arrangement could achieve]] — the knowledge problem is WHY designed rules outperform designed outcomes. Rules enable the price mechanism; designed outcomes require the impossible centralization of dispersed knowledge.
- **Collective intelligence:** [[humanity is a superorganism that can communicate but not yet think — the internet built the nervous system but not the brain]] — the price mechanism is the most successful existing form of collective cognition. It proves that distributed information aggregation works; the question is whether it can be extended beyond pricing.
---
Relevant Notes:
- [[speculative markets aggregate information through incentive and selection effects not wisdom of crowds]] — prediction markets as formalized Hayekian information aggregation
- [[futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders]] — futarchy as price-mechanism governance
- [[mechanism design enables incentive-compatible coordination by constructing rules under which self-interested agents voluntarily reveal private information and take socially optimal actions]] — mechanism design formalizes Hayek's insight about incentive-compatible information revelation
- [[Hayek argued that designed rules of just conduct enable spontaneous order of greater complexity than deliberate arrangement could achieve]] — the broader Hayekian framework that the knowledge problem grounds
- [[internet finance generates 50 to 100 basis points of additional annual GDP growth by unlocking capital allocation to previously inaccessible assets and eliminating intermediation friction]] — extending price mechanisms to new domains
- [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]] — the Grossman-Stiglitz paradox in practice
- [[humanity is a superorganism that can communicate but not yet think — the internet built the nervous system but not the brain]] — prices as existing collective cognition
- [[coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent]] — information aggregation solves a different problem than coordination failures — the former is about knowledge, the latter about incentives
Topics:
- [[coordination mechanisms]]
- [[internet finance and decision markets]]

@@ -25,11 +25,6 @@ Self-organized criticality, emergence, and free energy minimization describe how
- [[the universal disruption cycle is how systems of greedy agents perform global optimization because local convergence creates fragility that triggers restructuring toward greater efficiency]] — SOC applied to industry transitions
- [[what matters in industry transitions is the slope not the trigger because self-organized criticality means accumulated fragility determines the avalanche while the specific disruption event is irrelevant]] — slope reading
## Complex Adaptive Systems
- [[complex adaptive systems are defined by four properties that distinguish them from merely complicated systems agents with schemata adaptation through feedback nonlinear interactions and emergent macro-patterns]] — Holland's foundational framework: the boundary between complicated and complex is adaptation
- [[fitness landscape ruggedness determines whether adaptive systems find good solutions because smooth landscapes reward hill-climbing while rugged landscapes trap agents in local optima and require exploration or recombination to escape]] — Kauffman's NK model: landscape structure determines search strategy effectiveness
- [[coevolution means agents fitness landscapes shift as other agents adapt creating a world where standing still is falling behind and the optimal strategy depends on what everyone else is doing]] — Red Queen dynamics: coupled adaptation prevents equilibrium and self-organizes to edge of chaos
## Free Energy Principle
- [[biological systems minimize free energy to maintain their states and resist entropic decay]] — the core principle
- [[Markov blankets enable complex systems to maintain identity while interacting with environment through nested statistical boundaries]] — boundary architecture (used in agent design)

@@ -1,38 +0,0 @@
---
type: claim
domain: critical-systems
description: "The Red Queen effect in CAS: when your fitness depends on other adapting agents, the landscape itself moves — static optimization becomes impossible and the system never reaches equilibrium"
confidence: likely
source: "Kauffman & Johnsen 'Coevolution to the Edge of Chaos' (1991); Arthur 'Complexity and the Economy' (2015); Van Valen 'A New Evolutionary Law' (1973)"
created: 2026-03-08
---
# Coevolution means agents' fitness landscapes shift as other agents adapt, creating a world where standing still is falling behind and the optimal strategy depends on what everyone else is doing
Van Valen (1973) identified the Red Queen effect: species in ecosystems show constant extinction rates regardless of how long they've existed, because the environment is composed of other adapting species. A species that stops adapting doesn't maintain its fitness — it declines, because its competitors and predators continue improving. "It takes all the running you can do, to keep in the same place."
Kauffman and Johnsen (1991) formalized this through coupled NK landscapes. When species A adapts (changes its genotype to climb its fitness landscape), the fitness landscape of species B *deforms* — peaks shift, valleys appear where plains were. The more tightly coupled the species (higher inter-species K), the more violently the landscapes deform under mutual adaptation. At high coupling, each species' adaptation makes the other's landscape more rugged, potentially triggering an "avalanche" of coevolutionary changes across the entire ecosystem.
Their central finding: coevolutionary systems self-organize to the "edge of chaos" — the critical boundary between frozen order (where no species adapts because landscapes are too stable) and chaotic turnover (where adaptation is futile because landscapes change faster than agents can track). At the edge, adaptation is possible but never complete, producing the perpetual dynamism observed in real ecosystems, markets, and technology races.
Arthur (2015) showed the same dynamic in economic competition: firms' strategic choices change the competitive landscape for other firms. A platform that achieves network effects doesn't just climb its own fitness peak — it collapses rivals' peaks. The result is not convergence to equilibrium but perpetual coevolutionary dynamics where strategy must account for others' adaptation, not just current conditions.
This has three operational implications:
1. **Static optimization fails.** Any strategy optimized for the current landscape becomes suboptimal as other agents adapt. This is why [[equilibrium models of complex systems are fundamentally misleading]] — they assume a fixed landscape.
2. **The arms race is structural, not optional.** Agents that stop adapting don't hold their position — they lose it. This applies equally to biological species, competing firms, and AI safety labs facing competitive pressure.
3. **Coupling strength determines dynamics.** Loosely coupled agents coevolve slowly (gradual improvement). Tightly coupled agents produce volatile dynamics where one agent's breakthrough can cascade into wholesale restructuring. The coupling parameter — not individual agent capability — determines whether the system is stable, dynamic, or chaotic.
---
Relevant Notes:
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — the alignment tax IS a coevolutionary trap: labs that invest in safety change their competitive landscape adversely, and the Red Queen effect punishes them for "standing still" on capability
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — voluntary pledges are static strategies on a coevolutionary landscape; they fail because the landscape shifts as competitors adapt
- [[minsky's financial instability hypothesis shows that stability breeds instability as good times incentivize leverage and risk-taking that fragilize the system until shocks trigger cascades]] — Minsky's instability IS coevolutionary dynamics in finance: firms adapt to stability by increasing leverage, which deforms the landscape toward fragility
- [[the universal disruption cycle is how systems of greedy agents perform global optimization because local convergence creates fragility that triggers restructuring toward greater efficiency]] — disruption cycles are coevolutionary avalanches at the edge of chaos
- [[multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence]] — multipolar failure is the catastrophic coevolutionary outcome: individually aligned agents whose mutual adaptation produces collectively destructive dynamics
Topics:
- [[foundations/critical-systems/_map]]

@@ -1,36 +0,0 @@
---
type: claim
domain: critical-systems
description: "Holland's CAS framework identifies the boundary between complicated and complex: a jet engine has millions of parts but no adaptation — a market with three traders can produce emergent behavior no participant intended"
confidence: likely
source: "Holland 'Hidden Order' (1995), 'Emergence' (1998); Mitchell 'Complexity: A Guided Tour' (2009); Arthur 'Complexity and the Economy' (2015)"
created: 2026-03-08
---
# Complex adaptive systems are defined by four properties that distinguish them from merely complicated systems: agents with schemata, adaptation through feedback, nonlinear interactions, and emergent macro-patterns
A complex adaptive system (CAS) is not simply a system with many parts. A Boeing 747 has six million parts but is merely *complicated* — its behavior follows predictably from its design. A CAS differs on four properties, first formalized by Holland (1995):
1. **Agents with schemata.** The components are agents that carry internal models (schemata) of their environment and act on them. Unlike gears or circuits, they interpret signals and modify behavior based on those interpretations. Holland demonstrated that even minimal schema — classifier rules that compete for activation — produce adaptive behavior in simulated economies.
2. **Adaptation through feedback.** Agents revise their schemata based on outcomes. Successful strategies proliferate; unsuccessful ones get revised or abandoned. This is not central design — it's distributed learning. Arthur (2015) showed that economic agents who update heterogeneous expectations based on outcomes reproduce real market phenomena (clustering, bubbles, crashes) that equilibrium models cannot.
3. **Nonlinear interactions.** Small inputs can produce large effects and vice versa. Agent actions change the environment, which changes the signals other agents receive, which changes their actions. Mitchell (2009) catalogs how this nonlinearity produces qualitatively different behavior at each scale — ant pheromone trails, immune system learning, market dynamics — all from local rules with no global controller.
4. **Emergent macro-patterns.** The system exhibits coherent large-scale patterns — market prices, ecosystem niches, traffic flows — that no individual agent intended or controls. These patterns are not reducible to individual behavior: knowing everything about individual ants tells you nothing about colony architecture.
The boundary between complicated and complex is *adaptation*. If components respond to outcomes by modifying their behavior, the system is complex. If they don't, it's merely complicated. This distinction matters operationally: complicated systems can be engineered top-down, while CAS can only be cultivated through enabling constraints.
Holland's framework is domain-independent — the same four properties appear in immune systems (antibodies as agents with schemata), ecosystems (organisms adapting to niches), markets (traders updating strategies), and AI collectives (agents revising policies). The universality of the pattern is what makes it foundational rather than domain-specific.
---
Relevant Notes:
- [[emergence is the fundamental pattern of intelligence from ant colonies to brains to civilizations]] — emergence is the fourth CAS property; this claim provides the theoretical framework that explains why emergence recurs
- [[companies and people are greedy algorithms that hill-climb toward local optima and require external perturbation to escape suboptimal equilibria]] — greedy hill-climbing is the simplest form of CAS adaptation (property 2), where agents have schemata but update them only locally
- [[enabling constraints create possibility spaces for emergence while governing constraints dictate specific outcomes]] — CAS design requires enabling constraints precisely because top-down governance contradicts the adaptation property
- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]] — CAS theory is one of those nine traditions; the distinction maps to enabling vs governing constraints
- [[equilibrium models of complex systems are fundamentally misleading because systems in balance cannot exhibit catastrophes fractals or history]] — equilibrium models fail for CAS specifically because adaptation (property 2) and nonlinearity (property 3) prevent convergence
Topics:
- [[foundations/critical-systems/_map]]

@@ -1,36 +0,0 @@
---
type: claim
domain: critical-systems
description: "Kauffman's NK model formalizes the intuition that some problems are navigable by incremental improvement while others require leaps — the tunable parameter K (epistatic interactions) controls landscape ruggedness and therefore the effectiveness of local search"
confidence: likely
source: "Kauffman 'The Origins of Order' (1993), 'At Home in the Universe' (1995); Levinthal 'Adaptation on Rugged Landscapes' (1997); Page 'The Difference' (2007)"
created: 2026-03-08
---
# Fitness landscape ruggedness determines whether adaptive systems find good solutions because smooth landscapes reward hill-climbing while rugged landscapes trap agents in local optima and require exploration or recombination to escape
Kauffman's NK model (1993) provides the formal framework for understanding why some optimization problems yield to incremental improvement while others resist it. The model has two parameters: N (number of components) and K (epistatic interactions — how many other components each component's contribution depends on).
When K = 0, each component's fitness contribution is independent. The landscape is smooth with a single global peak — hill-climbing works perfectly. When K = N-1 (maximum interaction), every component's contribution depends on every other component. The landscape becomes maximally rugged — essentially random — with an exponential number of local optima. Hill-climbing fails catastrophically because almost every peak is mediocre.
The critical insight is that **real-world systems occupy the middle range**. Kauffman showed that at intermediate K values, landscapes have structure: correlated peaks clustered by quality, with navigable ridges connecting good solutions. This is where adaptation is hardest but most consequential — local search finds decent solutions but can't reach the best ones without some form of exploration beyond nearest neighbors.
Levinthal (1997) applied this directly to organizational adaptation: firms that search only locally (incremental innovation) perform well on smooth landscapes but get trapped on mediocre peaks in rugged ones. Firms that occasionally make "long jumps" (radical innovation, recombination) sacrifice short-term performance but discover better peaks. The optimal search strategy depends on landscape ruggedness — which the searcher cannot directly observe.
Page (2007) extended this to group problem-solving: diverse agents with different heuristics collectively explore more of a rugged landscape than homogeneous experts, because their different starting perspectives correspond to different search trajectories. This is why diversity outperforms individual excellence on hard problems — it's a landscape coverage argument, not a moral one.
The framework explains several patterns across domains:
- **Why modularity helps**: Reducing K through modular design smooths the landscape, making local search effective within modules while recombination happens between them
- **Why diversity matters**: On rugged landscapes, the best single searcher is dominated by a diverse collection of mediocre searchers covering more territory
- **Why exploration and exploitation must be balanced**: Pure exploitation (hill-climbing) gets trapped; pure exploration (random search) wastes effort on bad regions
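The K-dependence of ruggedness can be made concrete with a toy simulation (illustrative only; N, K, and the random seed are arbitrary choices, not Kauffman's parameters): build a random NK fitness function and count the genotypes from which no single-bit flip improves fitness.

```python
import random
from itertools import product

def nk_fitness(N, K, seed=0):
    """Random NK fitness function: each locus's contribution depends on
    its own state plus the states of the next K loci (wrapping around)."""
    rng = random.Random(seed)
    tables = [{} for _ in range(N)]
    def fitness(genotype):
        total = 0.0
        for i in range(N):
            key = tuple(genotype[(i + j) % N] for j in range(K + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()  # lazily drawn, then memoized
            total += tables[i][key]
        return total / N
    return fitness

def count_local_optima(N, K, seed=0):
    """A genotype is a local optimum if no single-bit flip improves fitness."""
    f = nk_fitness(N, K, seed)
    count = 0
    for g in product((0, 1), repeat=N):
        fg = f(g)
        if all(f(g[:i] + (1 - g[i],) + g[i + 1:]) <= fg for i in range(N)):
            count += 1
    return count

for K in (0, 2, 5, 9):
    print(K, count_local_optima(10, K))
```

At K = 0 the loci are independent, so exactly one genotype is a local optimum and hill-climbing always finds it; as K rises toward N-1 the count of local optima grows sharply, which is the trap that defeats pure local search.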
---
Relevant Notes:
- [[companies and people are greedy algorithms that hill-climb toward local optima and require external perturbation to escape suboptimal equilibria]] — this claim IS the greedy hill-climbing failure mode; the NK model explains precisely when and why it fails (high K)
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — partial connectivity preserves diverse search trajectories on rugged landscapes, exactly as Page's framework predicts
- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — the NK model provides the formal mechanism: diversity covers more of the rugged landscape
- [[the self-organized critical state is the most efficient state dynamically achievable even though a perfectly engineered state would perform better]] — the critical state lives on a rugged landscape where global optima are inaccessible to local search
Topics:
- [[foundations/critical-systems/_map]]
@ -9,16 +9,6 @@ Cultural evolution, memetics, master narrative theory, and paradigm shifts expla
- [[memeplexes survive by combining mutually reinforcing memes that protect each other from external challenge through untestability threats and identity attachment]] — how idea-systems persist
- [[the strongest memeplexes align individual incentive with collective behavior creating self-validating feedback loops]] — the design target for LivingIP
## Community Formation
- [[human social cognition caps meaningful relationships at approximately 150 because neocortex size constrains the number of individuals whose behavior and relationships can be tracked]] — the cognitive ceiling on group size
- [[social capital erodes when associational life declines because trust generalized reciprocity and civic norms are produced by repeated face-to-face interaction in voluntary organizations not by individual virtue]] — how trust infrastructure is built and depleted
- [[collective action fails by default because rational individuals free-ride on group efforts when they cannot be excluded from benefits regardless of contribution]] — why groups don't naturally act in their shared interest
- [[weak ties bridge otherwise disconnected clusters enabling information flow and opportunity access that strong ties within clusters cannot provide]] — the structural role of acquaintances
## Selfplex and Identity
- [[the self is a memeplex that persists because memes attached to a personal identity get copied more reliably than free-floating ideas]] — identity as replicator strategy
- [[identity-protective cognition causes people to reject evidence that threatens their group identity even when they have the cognitive capacity to evaluate it correctly]] — why smarter people aren't less biased
## Propagation Dynamics
- [[ideological adoption is a complex contagion requiring multiple reinforcing exposures from trusted sources not simple viral spread through weak ties]] — why ideas don't go viral like tweets
- [[complex ideas propagate with higher fidelity through personal interaction than mass media because nuance requires bidirectional communication]] — fidelity vs reach tradeoff
@ -1,37 +0,0 @@
---
type: claim
domain: cultural-dynamics
description: "Olson's logic of collective action: large groups systematically underprovide public goods because individual incentives favor free-riding, and this problem worsens with group size — small concentrated groups outorganize large diffuse ones"
confidence: proven
source: "Olson 1965 The Logic of Collective Action; Ostrom 1990 Governing the Commons (boundary condition)"
created: 2026-03-08
---
# collective action fails by default because rational individuals free-ride on group efforts when they cannot be excluded from benefits regardless of contribution
Mancur Olson's *The Logic of Collective Action* (1965) demolished the assumption that groups with shared interests will naturally act to advance those interests. The logic is straightforward: if a public good (clean air, national defense, industry lobbying) benefits everyone in a group regardless of whether they contributed, the individually rational strategy is to free-ride — enjoy the benefit without paying the cost. When everyone follows this logic, the public good is underprovided or not provided at all.
Three mechanisms make large groups systematically worse at collective action than small ones. First, **imperceptibility**: in a large group, each individual's contribution is negligible — your donation to a million-person cause is invisible, reducing motivation. Second, **monitoring difficulty**: in large groups, it is harder to identify and sanction free-riders. Third, **asymmetric benefits**: in small groups, concentrated benefits per member can exceed individual costs, making action rational even without enforcement. The steel industry (few large firms, each with massive individual stake) organizes effectively; consumers (millions of people, each with tiny individual stake) do not.
This produces Olson's central prediction: **small, concentrated groups will outorganize large, diffuse ones**, even when the large group's aggregate interest is greater. Industry lobbies defeat consumer interests. Medical associations restrict competition more effectively than patients can demand it. The concentrated few overcome the diffuse many not because they care more, but because the per-member stakes justify the per-member costs.
Olson identifies two solutions: **selective incentives** (benefits available only to contributors — insurance, publications, social access) and **coercion** (mandatory participation — union closed shops, taxation). Both work by changing the individual payoff structure to make contribution rational regardless of others' behavior.
**The Ostrom boundary condition.** [[Ostrom proved communities self-govern shared resources when eight design principles are met without requiring state control or privatization]]. Ostrom demonstrated that Olson's logic, while correct for anonymous large groups, does not hold for communities with clear boundaries, monitoring capacity, graduated sanctions, and local conflict resolution. Her design principles are precisely the institutional mechanisms that overcome Olson's free-rider problem without requiring either privatization or state coercion. The question is not whether collective action fails — it does, by default. The question is what institutional designs prevent the default from holding.
For community-based coordination systems, Olson's logic is the baseline prediction: without explicit mechanism design, participation declines as group size increases. Selective incentives (ownership stakes, attribution, reputation) and Ostrom-style governance principles are not optional enhancements — they are the minimum requirements for sustained collective action.
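Olson's payoff logic, and how a selective incentive changes it, can be sketched in a few lines (the benefit, cost, and incentive values below are illustrative, not Olson's):

```python
def contributes(b, c, n, selective_incentive=0.0):
    """An individual contributes iff their private return exceeds their cost.
    Each unit contributed yields benefit b shared equally among n members,
    so the contributor's own share of their contribution is b / n."""
    return b / n + selective_incentive > c

b, c = 10.0, 1.0  # contribution costs 1 and produces 10 of shared benefit
for n in (5, 10, 100, 1000):
    print(n, contributes(b, c, n), contributes(b, c, n, selective_incentive=1.0))
```

The small group contributes because the per-member share of the benefit exceeds the per-member cost; past that threshold contribution stops, regardless of how large the aggregate benefit is, until a selective incentive restores the individual payoff.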
---
Relevant Notes:
- [[Ostrom proved communities self-govern shared resources when eight design principles are met without requiring state control or privatization]] — the boundary condition showing collective action CAN succeed with specific institutional design
- [[coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent]] — Olson's free-rider problem is the specific mechanism by which coordination failure manifests in public goods provision
- [[gamified contribution with ownership stakes aligns individual sharing with collective intelligence growth]] — selective incentives (ownership) as the mechanism design solution to Olson's free-rider problem
- [[community ownership accelerates growth through aligned evangelism not passive holding]] — ownership transforms free-riders into stakeholders by changing the individual payoff structure
- [[history is shaped by coordinated minorities with clear purpose not by majorities]] — Olson explains WHY: small groups can solve the collective action problem that large groups cannot
- [[human social cognition caps meaningful relationships at approximately 150 because neocortex size constrains the number of individuals whose behavior and relationships can be tracked]] — Dunbar's number defines the scale at which informal monitoring works; beyond it, Olson's monitoring difficulty dominates
- [[social capital erodes when associational life declines because trust generalized reciprocity and civic norms are produced by repeated face-to-face interaction in voluntary organizations not by individual virtue]] — social capital is the informal mechanism that mitigates free-riding through reciprocity norms and reputational accountability
Topics:
- [[memetics and cultural evolution]]
- [[cultural-dynamics/_map]]
@ -1,36 +0,0 @@
---
type: claim
domain: cultural-dynamics
description: "Dunbar's number (~150) is a cognitive constraint on group size derived from the correlation between primate neocortex ratio and social group size, with layered structure at 5/15/50/150/500/1500 reflecting decreasing emotional closeness"
confidence: likely
source: "Dunbar 1992 Journal of Human Evolution; Dunbar 2010 How Many Friends Does One Person Need?"
created: 2026-03-08
---
# human social cognition caps meaningful relationships at approximately 150 because neocortex size constrains the number of individuals whose behavior and relationships can be tracked
Robin Dunbar's social brain hypothesis establishes that primate social group size correlates with neocortex ratio — the proportion of brain devoted to the neocortex. For humans, this predicts a mean group size of approximately 150, a number that recurs across diverse social structures: Neolithic farming villages, Roman military centuries, Hutterite communities that split at ~150, average personal network sizes in modern surveys, and the typical size of functional organizational units.
The mechanism is cognitive, not social. Maintaining a relationship requires tracking not just who someone is, but their relationships to others, their reliability, their emotional state, and shared history. This mentalizing capacity — modeling others' mental states and social connections — scales with neocortex volume. At ~150, the combinatorial explosion of third-party relationships exceeds what human cognitive architecture can track. Beyond this number, relationships become transactional rather than trust-based, requiring formal rules, hierarchies, and institutions to maintain cohesion.
The number is not a hard boundary but the center of a layered structure. Dunbar identifies concentric circles of decreasing closeness: ~5 (intimate support group), ~15 (sympathy group — those whose death would be devastating), ~50 (close friends), ~150 (meaningful relationships), ~500 (acquaintances), ~1,500 (faces you can put names to). Each layer scales by roughly a factor of 3, and emotional closeness decreases with each expansion. The innermost circles require the most cognitive investment per relationship; the outermost require the least.
This has direct implications for community formation and organizational design. Communities that grow beyond ~150 without introducing formal coordination mechanisms lose the trust-based cohesion that held them together. This is why [[trust is the binding constraint on network size and therefore on the complexity of products an economy can produce]] — trust operates naturally within Dunbar-scale groups but requires institutional scaffolding beyond them. It also explains why [[isolated populations lose cultural complexity because collective brains require minimum network size to sustain accumulated knowledge]] — the Tasmanian population of ~4,000 had enough Dunbar-scale groups for some cultural retention but insufficient interconnection between groups for full knowledge maintenance.
For collective intelligence systems, Dunbar's number defines the scale at which informal coordination breaks down and formal mechanisms become necessary. The transition from trust-based to institution-based coordination is not a failure — it is the threshold where design must replace emergence.
**Scope:** This claim is about cognitive constraints on individual social tracking, not about the optimal size for all social groups. Task-oriented teams, online communities, and algorithmically-mediated networks operate under different constraints. Dunbar's number bounds natural human social cognition, not designed coordination.
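A quick arithmetic sketch (assuming the factor-of-3 scaling; the dyad count is the standard n(n-1)/2, not a model from Dunbar's papers) shows both the layer structure and why the tracking burden explodes:

```python
# Dunbar's concentric layers scale by roughly a factor of 3 outward from ~5
layers = [5 * 3 ** k for k in range(6)]
print(layers)  # [5, 15, 45, 135, 405, 1215] -- close to the reported 5/15/50/150/500/1500

# The burden is not the n individuals but the n*(n-1)/2 third-party dyads among them
for n in (15, 50, 150, 500):
    print(n, n * (n - 1) // 2)
```

At the 150 layer there are already 11,175 dyads to track; at 500 there are 124,750, which is why relationships beyond the 150 layer shift from tracked-and-trusted to transactional.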
---
Relevant Notes:
- [[trust is the binding constraint on network size and therefore on the complexity of products an economy can produce]] — trust is the coordination substrate that Dunbar's number constrains at the individual level
- [[isolated populations lose cultural complexity because collective brains require minimum network size to sustain accumulated knowledge]] — network size must exceed Dunbar-scale for cultural accumulation, but interconnection between Dunbar-scale groups is what maintains it
- [[collective brains generate innovation through population size and interconnectedness not individual genius]] — innovation requires networks larger than Dunbar's number, which is why institutional coordination is a prerequisite for complex civilization
- [[Ostrom proved communities self-govern shared resources when eight design principles are met without requiring state control or privatization]] — Ostrom's design principles are the institutional mechanisms that extend coordination beyond Dunbar-scale groups
- [[civilization was built on the false assumption that humans are rational individuals]] — Dunbar's number is another cognitive limitation that the rationality fiction obscures
- [[humans are the minimum viable intelligence for cultural evolution not the pinnacle of cognition]] — the 150-person cap is evidence of minimal cognitive sufficiency, not optimal design
Topics:
- [[memetics and cultural evolution]]
- [[cultural-dynamics/_map]]
@ -1,40 +0,0 @@
---
type: claim
domain: cultural-dynamics
description: "Kahan's identity-protective cognition thesis: individuals with higher scientific literacy are MORE polarized on culturally contested issues, not less, because they use their cognitive skills to defend identity-consistent positions rather than to converge on truth"
confidence: likely
source: "Kahan 2012 Nature Climate Change; Kahan 2017 Advances in Political Psychology; Kahan et al. 2013 Journal of Risk Research"
created: 2026-03-08
---
# identity-protective cognition causes people to reject evidence that threatens their group identity even when they have the cognitive capacity to evaluate it correctly
Dan Kahan's cultural cognition research produces one of social science's most disturbing findings: on culturally contested issues (climate change, gun control, nuclear power), individuals with higher scientific literacy and numeracy are *more* polarized, not less. People who score highest on cognitive reflection tests — those best equipped to evaluate evidence — show the largest gaps in risk perception between cultural groups. More information, more analytical capacity, and more education do not produce convergence. They produce more sophisticated defense of the position their identity demands.
The mechanism is identity-protective cognition. When a factual claim is entangled with group identity — when "believing X" signals membership in a cultural group — the individual faces a conflict between epistemic accuracy and social belonging. Since the individual cost of holding an inaccurate belief about climate change is negligible (one person's belief changes nothing about the climate), while the cost of deviating from group identity is immediate and tangible (social ostracism, loss of status, identity threat), the rational individual strategy is to protect identity. Higher cognitive capacity simply provides better tools for motivated reasoning — more sophisticated arguments for the predetermined conclusion.
Kahan's empirical work demonstrates this across multiple domains. In one study, participants who correctly solved a complex statistical problem about skin cream treatment effectiveness failed to solve an *identical* problem when the data was reframed as gun control evidence — but only when the correct answer contradicted their cultural group's position. The analytical capacity was identical. The identity stakes changed the outcome.
This is the empirical mechanism behind [[the self is a memeplex that persists because memes attached to a personal identity get copied more reliably than free-floating ideas]]. The selfplex is the theoretical framework; identity-protective cognition is the measured behavior. When beliefs become load-bearing components of the selfplex, they are defended with whatever cognitive resources are available. Smarter people defend them more skillfully.
The implications for knowledge systems and collective intelligence are severe. Presenting evidence does not change identity-integrated beliefs — it can *strengthen* them through the backfire effect (challenged beliefs become more firmly held as the threat triggers defensive processing). This means [[ideological adoption is a complex contagion requiring multiple reinforcing exposures from trusted sources not simple viral spread through weak ties]] operates not just at the social level but at the cognitive level: the "trusted sources" must be trusted by the target's identity group, or the evidence is processed as identity threat rather than information.
**What works instead:** Kahan's research suggests two approaches that circumvent identity-protective cognition. First, **identity-affirmation**: when individuals are affirmed in their identity before encountering threatening evidence, they process the evidence more accurately — the identity threat is preemptively neutralized. Second, **disentangling facts from identity**: presenting evidence in ways that do not signal group affiliation reduces identity-protective processing. The messenger matters more than the message: the same data presented by an in-group source is processed as information, while the same data from an out-group source is processed as attack.
**Scope:** This claim is about factual beliefs on culturally contested issues, not about values or preferences. Identity-protective cognition does not explain all disagreement — genuine value differences exist that are not reducible to motivated reasoning. The claim is that on empirical questions where evidence should produce convergence, group identity prevents it.
---
Relevant Notes:
- [[the self is a memeplex that persists because memes attached to a personal identity get copied more reliably than free-floating ideas]] — the selfplex is the theoretical framework; identity-protective cognition is the measured behavior
- [[memeplexes survive by combining mutually reinforcing memes that protect each other from external challenge through untestability threats and identity attachment]] — identity attachment is the specific trick that identity-protective cognition exploits at the individual level
- [[civilization was built on the false assumption that humans are rational individuals]] — identity-protective cognition is perhaps the strongest evidence against the rationality assumption: even the most capable reasoners are identity-protective first
- [[ideological adoption is a complex contagion requiring multiple reinforcing exposures from trusted sources not simple viral spread through weak ties]] — the "trusted sources" requirement is partly explained by identity-protective cognition: sources must be identity-compatible
- [[collective intelligence within a purpose-driven community faces a structural tension because shared worldview correlates errors while shared purpose enables coordination]] — identity-protective cognition is the mechanism by which shared worldview correlates errors: community members protect community-consistent beliefs
- [[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them]] — identity-protective cognition creates *artificially* irreducible disagreements on empirical questions by entangling facts with identity
- [[metaphor reframing is more powerful than argument because it changes which conclusions feel natural without requiring persuasion]] — reframing works because it circumvents identity-protective cognition by presenting the same conclusion through a different identity lens
- [[validation-synthesis-pushback is a conversational design pattern where affirming then deepening then challenging creates the experience of being understood]] — the validation step pre-empts identity threat, enabling more accurate processing of the subsequent challenge
Topics:
- [[memetics and cultural evolution]]
- [[cultural-dynamics/_map]]
@ -1,37 +0,0 @@
---
type: claim
domain: cultural-dynamics
description: "Putnam's social capital thesis: the decline of bowling leagues, PTAs, fraternal organizations, and civic associations in the US since the 1960s depleted the trust infrastructure that enables collective action — caused primarily by generational change, television, suburban sprawl, and time pressure"
confidence: likely
source: "Putnam 2000 Bowling Alone; Fukuyama 1995 Trust; Henrich 2016 The Secret of Our Success"
created: 2026-03-08
---
# social capital erodes when associational life declines because trust generalized reciprocity and civic norms are produced by repeated face-to-face interaction in voluntary organizations not by individual virtue
Robert Putnam's *Bowling Alone* (2000) documented the decline of American civic engagement across multiple dimensions: PTA membership down 40% since 1960, fraternal organization membership halved, league bowling collapsed while individual bowling rose, church attendance declined, dinner party hosting dropped, union membership fell from 33% to 14% of the workforce. The data spans dozens of indicators across decades, making it one of the most comprehensive empirical accounts of social change in American sociology.
The mechanism Putnam identifies is generative, not merely correlational. Voluntary associations — bowling leagues, Rotary clubs, church groups, PTAs — produce social capital as a byproduct of repeated interaction. When people meet regularly for shared activities, they develop generalized trust (willingness to trust strangers based on community norms), reciprocity norms (the expectation that favors will be returned, not by the individual but by the community), and civic skills (the practical ability to organize, deliberate, and coordinate). These are public goods: they benefit the entire community, not just participants.
Social capital comes in two forms that map directly to network structure. **Bonding** social capital strengthens ties within homogeneous groups (ethnic communities, religious congregations, close-knit neighborhoods) — these are the strong ties that enable complex contagion and mutual aid. **Bridging** social capital connects across groups (civic organizations that bring together people of different backgrounds) — these are the weak ties described in [[weak ties bridge otherwise disconnected clusters enabling information flow and opportunity access that strong ties within clusters cannot provide]]. A healthy civic ecosystem needs both: bonding for support and identity, bridging for information flow and broad coordination.
Putnam identifies four primary causes of decline: (1) **Generational replacement** — the civic generation (born 1910-1940) who joined everything is being replaced by boomers and Gen X who join less, accounting for roughly half the decline. (2) **Television** — each additional hour of TV watching correlates with reduced civic participation, accounting for roughly 25% of the decline. (3) **Suburban sprawl** — commuting time directly substitutes for civic time; each 10 minutes of commuting reduces all forms of social engagement. (4) **Time and money pressures** — dual-income families have less discretionary time for voluntary associations.
The implication is that social capital is *infrastructure*, not character. It is produced by specific social structures (voluntary associations with regular face-to-face interaction) and depleted when those structures erode. This connects to [[trust is the binding constraint on network size and therefore on the complexity of products an economy can produce]] — Putnam's social capital is the micro-mechanism by which trust is produced and sustained at the community level. When associational life declines, trust declines, and the capacity for collective action degrades.
**Scope:** This claim is about the mechanism by which social capital is produced and depleted, not about whether the internet has offset Putnam's decline. Online communities may generate bonding social capital within interest groups, but their capacity to generate bridging social capital and generalized trust remains empirically contested. The claim is structural: repeated face-to-face interaction in voluntary organizations produces trust as a public good. Whether digital interaction can substitute remains an open question.
---
Relevant Notes:
- [[trust is the binding constraint on network size and therefore on the complexity of products an economy can produce]] — Putnam's social capital is the micro-mechanism that produces the trust Hidalgo identifies as the binding constraint on economic complexity
- [[weak ties bridge otherwise disconnected clusters enabling information flow and opportunity access that strong ties within clusters cannot provide]] — bridging social capital IS the Granovetter weak-tie mechanism applied to civic life
- [[human social cognition caps meaningful relationships at approximately 150 because neocortex size constrains the number of individuals whose behavior and relationships can be tracked]] — voluntary associations work within Dunbar-scale groups, creating the repeated interaction needed for trust formation
- [[ideological adoption is a complex contagion requiring multiple reinforcing exposures from trusted sources not simple viral spread through weak ties]] — bonding social capital provides the clustered strong-tie exposure that complex contagion requires
- [[technology creates interconnection but not shared meaning which is the precise gap that produces civilizational coordination failure]] — Putnam's decline is the social infrastructure version of Ansary's meaning gap: connectivity without trust-producing institutions
- [[coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent]] — social capital is the informal enforcement mechanism that shifts Nash equilibria toward cooperation without formal institutions
- [[modernization dismantles family and community structures replacing them with market and state relationships that increase individual freedom but erode psychosocial foundations of wellbeing]] — Putnam's decline is the American instance of the broader modernization-driven erosion of community structures
Topics:
- [[memetics and cultural evolution]]
- [[cultural-dynamics/_map]]
@ -1,34 +0,0 @@
---
type: claim
domain: cultural-dynamics
description: "Blackmore's selfplex: personal identity is a cluster of mutually reinforcing memes (beliefs, values, narratives, preferences) organized around a central 'I' that provides a replication advantage — memes attached to identity spread through self-expression and resist displacement through identity-protective mechanisms"
confidence: experimental
source: "Blackmore 1999 The Meme Machine; Dennett 1991 Consciousness Explained; Henrich 2016 The Secret of Our Success"
created: 2026-03-08
---
# the self is a memeplex that persists because memes attached to a personal identity get copied more reliably than free-floating ideas
Susan Blackmore's concept of the "selfplex" is the application of memetic theory to personal identity. The self — "I" — is not a biological given but a memeplex: a cluster of mutually reinforcing memes (beliefs, values, preferences, narratives, group affiliations) organized around a central fiction of a unified agent. The selfplex persists because memes attached to it gain a replication advantage: a belief that is "part of who I am" gets expressed more frequently, defended more vigorously, and transmitted more reliably than a belief held lightly.
The mechanism works through three channels. First, **expression frequency**: people talk about what they identify with. A person who identifies as an environmentalist mentions environmental issues more often than someone who merely agrees that pollution is bad. The identity-attached meme gets more transmission opportunities. Second, **defensive vigor**: when a meme is part of the selfplex, challenges to it feel like challenges to the self. This triggers emotional defense responses that protect the meme from displacement — the same [[memeplexes survive by combining mutually reinforcing memes that protect each other from external challenge through untestability threats and identity attachment]] mechanism, but applied to the personal identity rather than a collective ideology. Third, **social signaling**: expressing identity-consistent beliefs signals group membership, which activates reciprocal transmission from fellow group members.
Blackmore builds on Dennett's "center of narrative gravity" — the self is a story we tell about ourselves, not a thing we discover. But she adds the evolutionary dimension: the selfplex is not just a narrative convenience. It is a replicator strategy. Memes that successfully attach to the selfplex gain protection, expression, and transmission advantages that free-floating memes do not. The self is the ultimate host environment for memes.
This has direct implications for belief updating. When evidence contradicts a belief that is integrated into the selfplex, the rational response (update the belief) conflicts with the memetic response (protect the selfplex). The selfplex wins more often than not because the emotional cost of identity threat exceeds the cognitive benefit of accuracy. This explains why [[civilization was built on the false assumption that humans are rational individuals]] — rationality assumes beliefs are held for epistemic reasons, but selfplex theory shows they are held for identity reasons, with epistemic justification constructed post-hoc.
**Scope and confidence.** Rated experimental because the selfplex is a theoretical construct, not an empirically isolated mechanism. The component observations are well-established (identity-consistent beliefs are expressed and defended more vigorously, belief change is harder for identity-integrated beliefs). But whether "selfplex" as a coherent replicator unit adds explanatory power beyond these individual effects is debated. The strongest version of the claim — that the self is *literally* a memeplex with its own replication dynamics — is a theoretical framework, not an empirical finding.
---
Relevant Notes:
- [[memeplexes survive by combining mutually reinforcing memes that protect each other from external challenge through untestability threats and identity attachment]] — the selfplex IS the identity attachment trick applied to the individual rather than the collective
- [[civilization was built on the false assumption that humans are rational individuals]] — the selfplex explains WHY the rationality assumption fails: beliefs serve identity before truth
- [[meme propagation selects for simplicity novelty and conformity pressure rather than truth or utility]] — selfplex attachment is a fourth selection pressure: memes that attach to identity replicate regardless of simplicity, novelty, or conformity
- [[the strongest memeplexes align individual incentive with collective behavior creating self-validating feedback loops]] — the selfplex is the individual-level version: self-expression validates self-identity in a feedback loop
- [[true imitation is the threshold capacity that creates a second replicator because only faithful copying of behaviors enables cumulative cultural evolution]] — the selfplex is a higher-order organization of the second replicator, organizing memes into identity-coherent clusters
- [[collective intelligence within a purpose-driven community faces a structural tension because shared worldview correlates errors while shared purpose enables coordination]] — shared selfplex structures within a community correlate errors through identity-protective cognition
Topics:
- [[memetics and cultural evolution]]
- [[cultural-dynamics/_map]]

---
type: claim
domain: cultural-dynamics
description: "Granovetter's strength of weak ties shows that acquaintances bridge structural holes between dense clusters, providing access to non-redundant information — but this applies to simple contagion (information), not complex contagion (behavioral/ideological change)"
confidence: proven
source: "Granovetter 1973 American Journal of Sociology; Burt 2004 structural holes; Centola 2010 Science (boundary condition)"
created: 2026-03-08
---
# weak ties bridge otherwise disconnected clusters enabling information flow and opportunity access that strong ties within clusters cannot provide
Mark Granovetter's 1973 paper "The Strength of Weak Ties" established one of network science's most counterintuitive and empirically robust findings: acquaintances (weak ties) are more valuable than close friends (strong ties) for accessing novel information and opportunities. The mechanism is structural, not relational. Strong ties cluster — your close friends tend to know each other and share the same information. Weak ties bridge — your acquaintances connect you to entirely different social clusters with non-redundant information.
The original evidence came from job-seeking: Granovetter found that 84% of respondents who found jobs through personal contacts used weak ties rather than strong ones. The information that led to employment came from people they saw "occasionally" or "rarely," not from close friends. This is because close friends circulate in the same information environment — they know what you already know. Acquaintances have access to different information pools entirely.
Ronald Burt extended this into "structural holes" theory: the most valuable network positions are those that bridge gaps between otherwise disconnected clusters. Individuals who span structural holes have access to diverse, non-redundant information and can broker between groups. This creates information advantages, earlier access to opportunities, and disproportionate influence — not because of personal ability but because of network position.
**The critical boundary condition.** Granovetter's thesis holds for *information* flow — simple contagion where a single exposure is sufficient for transmission. But [[ideological adoption is a complex contagion requiring multiple reinforcing exposures from trusted sources not simple viral spread through weak ties]]. Centola's research demonstrates that for behavioral and ideological change, weak ties are actually *counterproductive*: a signal arriving via a weak tie comes without social reinforcement. Complex contagion requires the redundant, trust-rich exposure that strong ties and clustered networks provide. This creates a fundamental design tension: the same network structure that maximizes information flow (bridging weak ties) minimizes ideological adoption (which needs clustered strong ties).
For any system that must both spread information widely and drive deep behavioral change, the implication is a two-phase architecture: weak ties for awareness and information discovery, strong ties for adoption and commitment. Broadcasting reaches everyone; community converts the committed.
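The boundary condition can be sketched numerically. Below is a minimal pure-Python sketch (all parameters illustrative): a clustered ring of strong ties plus random long-range shortcuts standing in for weak ties. A threshold of one exposure (simple contagion, i.e. information) saturates the network quickly via the shortcuts; a threshold of two reinforcing exposures (complex contagion, i.e. behavior) crawls along the clustered ring, and the same shortcuts buy it almost nothing.

```python
import random

random.seed(1)

def ring_with_shortcuts(n=200, shortcuts=20):
    """Clustered ring (strong ties to the 2 nearest nodes on each side)
    plus random long-range shortcuts standing in for weak ties."""
    nbrs = {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n}
            for i in range(n)}
    for _ in range(shortcuts):
        a, b = random.sample(range(n), 2)
        nbrs[a].add(b)
        nbrs[b].add(a)
    return nbrs

def spread(nbrs, threshold, seeds, max_steps=300):
    """Threshold contagion: a node adopts once `threshold` of its
    neighbors have adopted. threshold=1 is simple contagion (information);
    threshold=2 is complex contagion (behavioral/ideological change)."""
    adopted = set(seeds)
    for step in range(1, max_steps + 1):
        new = {v for v in nbrs if v not in adopted
               and len(nbrs[v] & adopted) >= threshold}
        if not new:
            return len(adopted), step
        adopted |= new
    return len(adopted), max_steps

g = ring_with_shortcuts()
seeds = {0, 1, 2, 3}
info_reach, info_steps = spread(g, threshold=1, seeds=seeds)
behav_reach, behav_steps = spread(g, threshold=2, seeds=seeds)
# Simple contagion exploits the shortcuts and saturates in far fewer
# steps; complex contagion advances only along the clustered ring,
# because a lone long-range exposure is below the adoption threshold.
```

This is Centola's result in miniature: the shortcuts that maximize information reach are inert for threshold-2 adoption, which is carried entirely by the redundant strong-tie structure.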
---
Relevant Notes:
- [[ideological adoption is a complex contagion requiring multiple reinforcing exposures from trusted sources not simple viral spread through weak ties]] — the boundary condition that limits weak tie effectiveness to simple contagion
- [[complex ideas propagate with higher fidelity through personal interaction than mass media because nuance requires bidirectional communication]] — strong ties enable the bidirectional communication that nuanced ideas require
- [[trust is the binding constraint on network size and therefore on the complexity of products an economy can produce]] — trust operates through strong ties within clusters; weak ties enable information flow between clusters but do not carry trust
- [[collective brains generate innovation through population size and interconnectedness not individual genius]] — weak ties provide the interconnectedness that makes collective brains work by connecting otherwise siloed knowledge pools
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — partial connectivity preserves the cluster structure that weak ties bridge, maintaining both diversity and connection
- [[cross-domain knowledge connections generate disproportionate value because most insights are siloed]] — cross-domain connections are the intellectual equivalent of weak ties bridging structural holes
Topics:
- [[memetics and cultural evolution]]
- [[cultural-dynamics/_map]]

---
type: claim
domain: teleological-economics
description: "Vickrey's foundational insight that auction format determines economic outcomes — not just 'who pays the most' but how information is revealed, how risk is distributed, and whether allocation is efficient — underpins token launch design, spectrum allocation, and any market where goods are allocated through competitive bidding"
confidence: proven
source: "Vickrey (1961); Milgrom & Weber (1982); Myerson (1981); Riley & Samuelson (1981); Nobel Prize in Economics 1996 (Vickrey), 2020 (Milgrom & Wilson)"
created: 2026-03-08
---
# Auction theory reveals that allocation mechanism design determines price discovery efficiency and revenue because different auction formats produce different outcomes depending on bidder information structure and risk preferences
William Vickrey (1961) established that auctions are not interchangeable — the format determines economic outcomes. This insight, seemingly obvious in retrospect, overturned the assumption that "let people bid" is sufficient for efficient allocation. The mechanism matters.
## Revenue equivalence — and its failures
The Revenue Equivalence Theorem (Vickrey 1961, Myerson 1981, Riley & Samuelson 1981) proves that under specific conditions — risk-neutral bidders, independent private values, symmetric information — all standard auction formats (English, Dutch, first-price sealed, second-price sealed) yield the same expected revenue. This is the baseline result.
The power of the theorem lies in what happens when its assumptions fail:
**Risk-averse bidders** break equivalence. First-price auctions generate more revenue than second-price auctions because risk-averse bidders shade their bids less — they'd rather overpay slightly than risk losing. This is why most real-world procurement uses first-price formats.
**Correlated values** break equivalence. Milgrom and Weber (1982) proved the Linkage Principle: when bidder values are correlated (common-value auctions), formats that reveal more information during bidding generate higher revenue because they reduce the winner's curse. English auctions outperform sealed-bid auctions in common-value settings because the bidding process itself reveals information.
**Asymmetric information** breaks equivalence. When some bidders have better information than others, format choice determines whether informed bidders extract rents or whether the mechanism levels the playing field.
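The baseline result can be checked by simulation. A minimal Monte Carlo sketch under the theorem's textbook assumptions (risk-neutral bidders, values drawn i.i.d. uniform on [0,1]): second-price bidders bid truthfully and the winner pays the second-highest value, while the symmetric first-price equilibrium has each bidder shade to (n-1)/n of their value. Expected revenue converges to (n-1)/(n+1) in both formats.

```python
import random

random.seed(42)

def expected_revenues(n_bidders=5, rounds=200_000):
    """Monte Carlo revenue comparison under revenue-equivalence
    assumptions: risk-neutral bidders, i.i.d. uniform private values."""
    # Symmetric first-price equilibrium bid for uniform values: shade * value.
    shade = (n_bidders - 1) / n_bidders
    fp = sp = 0.0
    for _ in range(rounds):
        values = sorted(random.random() for _ in range(n_bidders))
        sp += values[-2]           # second-price: winner pays 2nd-highest value
        fp += shade * values[-1]   # first-price: winner pays own shaded bid
    return fp / rounds, sp / rounds

fp_rev, sp_rev = expected_revenues()
# Both converge to (n-1)/(n+1) = 2/3 for n=5. Break the assumptions
# (risk aversion, correlated values) and the equality breaks with them.
```

Making the simulated bidders risk-averse, so that they shade less in the first-price format, is exactly the perturbation that pushes first-price revenue above second-price revenue.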
## The winner's curse
In common-value auctions (where the item has a single true value that bidders estimate with noise), the winner is the bidder with the most optimistic estimate — and therefore the most likely to have overpaid. Rational bidders shade their bids to account for this, but the degree of shading depends on the auction format. The winner's curse is why IPOs are systematically underpriced (Rock 1986) and why token launches that ignore information asymmetry between insiders and outsiders produce adverse selection.
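The selection effect is easy to see numerically. In the sketch below (parameters illustrative), an item has one true common value and each bidder observes it with unbiased Gaussian noise: every individual estimate is unbiased, yet the winning estimate is biased upward, because winning is conditioned on being the most optimistic.

```python
import random

random.seed(7)

def winner_bias(n_bidders=10, noise_sd=1.0, rounds=100_000):
    """Average overshoot of the winning (most optimistic) estimate
    over the true common value."""
    true_value = 100.0
    total = 0.0
    for _ in range(rounds):
        estimates = (random.gauss(true_value, noise_sd)
                     for _ in range(n_bidders))
        total += max(estimates) - true_value
    return total / rounds

bias = winner_bias()
# Each estimate is unbiased, but the maximum of 10 estimates overshoots
# the true value by roughly 1.5 noise standard deviations on average:
# the winner's curse. A rational bidder shades for this before bidding.
```

Note the shading a rational bidder applies grows with the number of competitors and with estimate noise, which is why format choices that reveal information mid-auction (the Linkage Principle) reduce the required shading and raise revenue.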
## Why this is foundational
Auction theory provides the formal toolkit for:
- **Token launch design:** [[token launches are hybrid-value auctions where common-value price discovery and private-value community alignment require different mechanisms because auction theory optimized for one degrades the other]] — the hybrid-value problem is precisely the failure of revenue equivalence when you have both common-value (price discovery) and private-value (community alignment) components in the same allocation.
- **Dutch-auction mechanisms:** [[dutch-auction dynamic bonding curves solve the token launch pricing problem by combining descending price discovery with ascending supply curves eliminating the instantaneous arbitrage that has cost token deployers over 100 million dollars on Ethereum]] — the descending-price mechanism is a specific auction format choice designed to solve the information asymmetry that creates MEV extraction.
- **Layered architecture:** [[optimal token launch architecture is layered not monolithic because separating quality governance from price discovery from liquidity bootstrapping from community rewards lets each layer use the mechanism best suited to its objective]] — the insight that different allocation problems within a single launch need different auction formats.
- **Mechanism design:** [[mechanism design enables incentive-compatible coordination by constructing rules under which self-interested agents voluntarily reveal private information and take socially optimal actions]] — auction theory is mechanism design's most successful application domain. Vickrey auctions are the canonical example of incentive-compatible mechanisms.
- **Prediction markets:** [[speculative markets aggregate information through incentive and selection effects not wisdom of crowds]] — continuous double auctions in prediction markets aggregate information because the market mechanism rewards accurate pricing, a direct application of the Linkage Principle.
Without auction theory, claims about token launch design and price discovery mechanisms lack the formal framework for evaluating why one format outperforms another. "Run an auction" is not a design — the format, information structure, and participation rules determine everything.
---
Relevant Notes:
- [[token launches are hybrid-value auctions where common-value price discovery and private-value community alignment require different mechanisms because auction theory optimized for one degrades the other]] — the central application of auction theory to internet finance
- [[dutch-auction dynamic bonding curves solve the token launch pricing problem by combining descending price discovery with ascending supply curves eliminating the instantaneous arbitrage that has cost token deployers over 100 million dollars on Ethereum]] — a specific auction format choice
- [[optimal token launch architecture is layered not monolithic because separating quality governance from price discovery from liquidity bootstrapping from community rewards lets each layer use the mechanism best suited to its objective]] — why different auction formats suit different launch stages
- [[mechanism design enables incentive-compatible coordination by constructing rules under which self-interested agents voluntarily reveal private information and take socially optimal actions]] — auction theory as mechanism design's most successful subdomain
- [[speculative markets aggregate information through incentive and selection effects not wisdom of crowds]] — prediction market pricing as continuous auction
- [[early-conviction pricing is an unsolved mechanism design problem because systems that reward early believers attract extractive speculators while systems that prevent speculation penalize genuine supporters]] — the unsolved auction design problem
Topics:
- [[analytical-toolkit]]
- [[internet finance and decision markets]]

---
type: claim
domain: teleological-economics
description: "Platforms are not just big companies — they are fundamentally different economic structures that create and capture value through cross-side network effects, and understanding their economics is critical because half the claims in the codex reference platform dynamics without a foundational claim explaining why platforms behave the way they do"
confidence: proven
source: "Rochet & Tirole, 'Platform Competition in Two-Sided Markets' (2003); Parker, Van Alstyne & Choudary, 'Platform Revolution' (2016); Eisenmann, Parker & Van Alstyne (2006); Evans & Schmalensee, 'Matchmakers' (2016); Nobel Prize in Economics 2014 (Tirole)"
created: 2026-03-08
---
# Platform economics creates winner-take-most markets through cross-side network effects where the platform that reaches critical mass on any side locks in the entire ecosystem because multi-sided markets tip faster than single-sided ones
Rochet and Tirole (2003) formalized what practitioners had intuited: two-sided markets have fundamentally different economics from traditional markets. A platform serves two or more distinct user groups whose participation creates value for each other. The platform's primary economic function is not production but matching — reducing the transaction cost of finding, evaluating, and transacting with the other side.
## Cross-side network effects
The defining feature of platform economics is cross-side network effects: users on one side of the platform attract users on the other side. More app developers attract phone buyers; more phone buyers attract app developers. More drivers attract riders; more riders attract drivers. This creates a self-reinforcing feedback loop that is stronger than same-side network effects because it operates across TWO growth curves simultaneously.
Cross-side effects produce three dynamics that traditional economics doesn't predict:
**1. Pricing below cost on one side.** Platforms rationally price below marginal cost (or even at zero) on the side whose participation creates more value for the other side. Google gives away search to attract users to attract advertisers. This is not predatory pricing — it is the profit-maximizing strategy in a multi-sided market. The subsidy side generates demand that the monetization side pays for.
**2. Chicken-and-egg problem.** Both sides need the other to join first. Platforms solve this through sequencing strategies: subsidize the harder side, seed supply artificially, or find a single-sided use case that doesn't require the other side. [[early-conviction pricing is an unsolved mechanism design problem because systems that reward early believers attract extractive speculators while systems that prevent speculation penalize genuine supporters]] — the early-conviction problem is a specific instance of the chicken-and-egg problem applied to token launches.
**3. Multi-homing costs determine lock-in.** When users can participate on multiple platforms simultaneously (multi-homing), winner-take-most dynamics weaken. When multi-homing is costly (because of data lock-in, reputation systems, or switching costs), tipping accelerates. DeFi protocols with composable liquidity reduce multi-homing costs; walled-garden platforms increase them.
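The feedback loop and its critical-mass behavior can be sketched with a toy two-variable model (all coefficients illustrative, not calibrated): new joiners on each side scale with the product of both sides' participation, since a match needs a counterparty on the other side, while churn is linear. The product term creates two stable outcomes, collapse and tipping, separated by a critical mass.

```python
def two_sided(a, b, attach=1.0, churn=0.2, periods=500):
    """Toy cross-side feedback; a and b are participation fractions.
    Joining scales with a*b (a match needs both sides); churn is linear.
    The quadratic growth term yields a critical-mass threshold: start
    below it and the platform unravels, above it and it tips."""
    for _ in range(periods):
        a, b = (a + attach * a * b * (1 - a) - churn * a,
                b + attach * a * b * (1 - b) - churn * b)
    return a, b

low_a, _ = two_sided(0.20, 0.20)   # below critical mass: unravels toward 0
high_a, _ = two_sided(0.35, 0.35)  # above critical mass: tips to ~0.72
```

In this toy model, subsidizing one side is equivalent to lowering that side's effective churn, which lowers the critical mass: a quantitative reading of why below-cost pricing on the subsidy side is profit-maximizing rather than predatory.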
## Platform envelopment
Eisenmann, Parker, and Van Alstyne (2006) identified platform envelopment: a platform in an adjacent market leverages its user base to enter and dominate a new market. Microsoft used the Windows installed base to envelope browsers. Google used search to envelope email, maps, and video. Amazon used e-commerce to envelope cloud computing.
Envelopment works because the entering platform already solved the chicken-and-egg problem on one side. It imports its existing user base as a beachhead and only needs to attract the new side. This is why platform competition is not about building a better product — it's about controlling the user relationship that enables cross-side leverage.
This dynamic directly threatens any protocol or platform that relies on a single market position. [[when profits disappear at one layer of a value chain they emerge at an adjacent layer through the conservation of attractive profits]] — platform envelopment is the mechanism through which profits migrate: the enveloping platform captures the adjacent layer's attractive profits.
## Why this is foundational
Platform economics provides the theoretical grounding for:
- **Token launch platforms:** MetaDAO as a launch platform faces classic two-sided market dynamics — it needs both token deployers and traders/governance participants. [[agents create dozens of proposals but only those attracting minimum stake become live futarchic decisions creating a permissionless attention market for capital formation]] — the permissionless proposal market is a platform matching capital allocators with investment opportunities.
- **Network effects:** [[network effects create winner-take-most markets because each additional user increases value for all existing users producing positive feedback that concentrates market share among early leaders]] — platform economics extends this from single-sided to cross-side effects, which are stronger and tip faster.
- **Media disruption:** [[two-phase disruption where distribution moats fall first and creation moats fall second is a universal pattern across entertainment knowledge work and financial services]] — platforms are the mechanism through which distribution moats fall, because platforms reduce the transaction cost of matching creators to audiences below what incumbent distribution achieves.
- **Why intermediaries accumulate rent:** [[transaction costs determine organizational boundaries because firms exist to economize on the costs of using markets and the boundary shifts when technology changes the relative cost of internal coordination versus external contracting]] — platforms are transaction cost innovations that create new governance structures with their own rent-extraction potential.
- **Vertical integration dynamics:** [[purpose-built full-stack systems outcompete acquisition-based incumbents during structural transitions because integrated design eliminates the misalignment that bolted-on components create]] — vertical integration vs platform strategy is the central architectural choice, and transaction cost economics determines which wins.
---
Relevant Notes:
- [[network effects create winner-take-most markets because each additional user increases value for all existing users producing positive feedback that concentrates market share among early leaders]] — platform economics extends network effects from single-sided to cross-side
- [[when profits disappear at one layer of a value chain they emerge at an adjacent layer through the conservation of attractive profits]] — platform envelopment as profit migration mechanism
- [[early-conviction pricing is an unsolved mechanism design problem because systems that reward early believers attract extractive speculators while systems that prevent speculation penalize genuine supporters]] — chicken-and-egg problem applied to token launches
- [[agents create dozens of proposals but only those attracting minimum stake become live futarchic decisions creating a permissionless attention market for capital formation]] — MetaDAO as two-sided platform
- [[two-phase disruption where distribution moats fall first and creation moats fall second is a universal pattern across entertainment knowledge work and financial services]] — platforms as distribution-moat destroyers
- [[transaction costs determine organizational boundaries because firms exist to economize on the costs of using markets and the boundary shifts when technology changes the relative cost of internal coordination versus external contracting]] — platforms as transaction cost governance structures
- [[purpose-built full-stack systems outcompete acquisition-based incumbents during structural transitions because integrated design eliminates the misalignment that bolted-on components create]] — vertical integration vs platform as architectural choice
- [[good management causes disruption because rational resource allocation systematically favors sustaining innovation over disruptive opportunities]] — platforms disrupt because incumbents rationally optimize existing business models instead of building platform alternatives
Topics:
- [[analytical-toolkit]]
- [[attractor dynamics]]

---
type: claim
domain: teleological-economics
description: "Coase and Williamson's insight that firms are not production functions but governance structures — they exist because market transactions have costs, and the boundary between firm and market shifts when technology changes those costs — is the theoretical foundation for understanding platform economics, vertical integration, and why intermediaries rise and fall"
confidence: proven
source: "Coase, 'The Nature of the Firm' (1937); Williamson, 'Markets and Hierarchies' (1975), 'The Economic Institutions of Capitalism' (1985); Nobel Prize in Economics 1991 (Coase), 2009 (Williamson)"
created: 2026-03-08
---
# Transaction costs determine organizational boundaries because firms exist to economize on the costs of using markets and the boundary shifts when technology changes the relative cost of internal coordination versus external contracting
Ronald Coase (1937) asked the question economics had ignored: if markets are efficient allocators, why do firms exist? His answer: because using markets has costs. Finding trading partners, negotiating terms, writing contracts, monitoring performance, enforcing agreements — these transaction costs explain why some activities happen inside firms (hierarchy) rather than between firms (market). The boundary of the firm is where the marginal cost of internal coordination equals the marginal cost of market transaction.
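Coase's margin can be phrased as a toy computation (all numbers illustrative): internalize activities in order until the marginal cost of internal coordination, which rises as the hierarchy grows, reaches the going market transaction cost.

```python
def firm_boundary(market_cost=50, coordination_slope=1, max_activities=200):
    """Internalize the next activity while its marginal internal
    coordination cost stays below the (assumed flat) market transaction
    cost; return the Coasean boundary of the firm."""
    k = 0
    while k < max_activities and coordination_slope * (k + 1) < market_cost:
        k += 1
    return k

size = firm_boundary()
# With these toy numbers the firm internalizes 49 activities; the 50th
# costs as much inside as outside, so it goes to the market. Cheaper
# external contracting (lower market_cost) shrinks the firm.
```

The comparative static is the whole point: technology that lowers `market_cost` moves the boundary inward, which is the mechanism behind intermediary obsolescence discussed below.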
## Williamson's three dimensions
Oliver Williamson (1975, 1985) operationalized Coase by identifying three dimensions that determine whether transactions are governed by markets, hybrids, or hierarchies:
**Asset specificity:** When an investment is tailored to a specific transaction partner (specialized equipment, dedicated training, site-specific infrastructure), the investing party becomes vulnerable to hold-up — the partner can renegotiate terms after the investment is sunk. High asset specificity pushes governance toward hierarchy (vertical integration) because internal governance protects against hold-up.
**Uncertainty:** When outcomes are unpredictable and contracts cannot specify all contingencies, market governance fails because incomplete contracts create disputes. Hierarchy handles uncertainty through authority — a manager can adapt in real-time without renegotiating contracts. This is why complex, novel activities tend to happen inside firms rather than through market contracts.
**Frequency:** Transactions that recur frequently justify the fixed costs of specialized governance structures. A one-time purchase goes to market; a daily supply relationship justifies a long-term contract or vertical integration.
## Why intermediaries rise and fall
Transaction cost economics explains the lifecycle of intermediaries:
1. **Intermediaries arise** when they reduce transaction costs below what direct trading achieves. Brokers aggregate information, market makers provide liquidity, platforms match counterparties. Each exists because the transaction cost of direct exchange exceeds the intermediary's fee.
2. **Intermediaries accumulate rent** when they become the lowest-cost governance structure AND create switching costs. The intermediary's margin is bounded by the transaction cost of the next-best alternative. When no alternative is cheaper, the intermediary extracts rent.
3. **Intermediaries fall** when technology reduces the transaction costs they were built to economize. If blockchain reduces the cost of trustless exchange below the intermediary's fee, the intermediary's governance advantage disappears. This is not disruption through better products — it's disruption through lower transaction costs making the intermediary's existence uneconomical.
This framework directly explains why [[internet finance generates 50 to 100 basis points of additional annual GDP growth by unlocking capital allocation to previously inaccessible assets and eliminating intermediation friction]] — the GDP impact comes from reducing transaction costs, not from creating new demand.
## Platform economics as transaction cost innovation
Platforms are transaction cost innovations. They reduce the cost of matching, pricing, and trust-building below what bilateral markets achieve. But platforms also create NEW transaction costs — switching costs, data lock-in, platform-specific investments (app development, audience building) that constitute asset specificity. The platform becomes the governance structure, and participants face the same hold-up problem that vertical integration was designed to solve.
This is why [[network effects create winner-take-most markets because each additional user increases value for all existing users producing positive feedback that concentrates market share among early leaders]] — network effects are demand-side transaction cost reductions (more users = easier to find counterparties = lower search costs), but they also create asset specificity (users' social graphs, reputation, content are platform-specific investments).
## Why this is foundational
Transaction cost economics provides the theoretical lens for:
- **Why intermediaries exist and when they die** — the core question for internet finance. Every intermediary is a transaction cost governance structure; technology that reduces those costs makes the intermediary obsolete.
- **Why vertical integration happens** — Kaiser Permanente, SpaceX, and Apple all vertically integrate because asset specificity and uncertainty in their domains make market governance more expensive than hierarchy. [[when profits disappear at one layer of a value chain they emerge at an adjacent layer through the conservation of attractive profits]] — profit migration follows transaction cost shifts.
- **Why platforms capture value** — platforms reduce transaction costs between sides of the market, but the platform itself becomes a governance structure with its own transaction costs (fees, rules, lock-in).
- **Why DAOs struggle** — DAOs attempt to replace hierarchical governance with market/protocol governance, but many activities inside organizations have high asset specificity and uncertainty — exactly the conditions where Williamson predicts hierarchy outperforms markets.
---
Relevant Notes:
- [[internet finance generates 50 to 100 basis points of additional annual GDP growth by unlocking capital allocation to previously inaccessible assets and eliminating intermediation friction]] — GDP impact as transaction cost reduction
- [[network effects create winner-take-most markets because each additional user increases value for all existing users producing positive feedback that concentrates market share among early leaders]] — network effects as demand-side transaction cost reductions that create new asset specificity
- [[when profits disappear at one layer of a value chain they emerge at an adjacent layer through the conservation of attractive profits]] — profit migration follows transaction cost shifts
- [[value in industry transitions accrues to bottleneck positions in the emerging architecture not to pioneers or to the largest incumbents]] — bottleneck positions are where transaction costs are highest and governance is most valuable
- [[the personbyte is a fundamental quantization limit on knowledge accumulation forcing all complex production into networked teams]] — the personbyte is a knowledge-specific transaction cost: transferring knowledge between minds has irreducible cost
- [[trust is the binding constraint on network size and therefore on the complexity of products an economy can produce]] — trust reduces transaction costs; more trust enables larger networks and more complex production
- [[industries are need-satisfaction systems and the attractor state is the configuration that most efficiently satisfies underlying human needs given available technology]] — the attractor state is the minimum-transaction-cost configuration
Topics:
- [[analytical-toolkit]]
- [[internet finance and decision markets]]