teleo-codex/foundations/critical-systems/positive feedback loops amplify deviations from equilibrium while negative feedback loops dampen them and the balance between the two determines whether systems stabilize self-correct or run away.md
m3taversal ddee7f4c42 theseus: foundations follow-up — _map.md fix + 4 gap claims
- What: Updated ai-alignment/_map.md to reflect PR #49 moves (3 claims
  now local, 3 in core/teleohumanity/, remainder in foundations/).
  Added 2 superorganism claims from PR #47 to map. Drafted 4 gap
  claims identified during foundations audit: game theory (CI),
  principal-agent theory (CI), feedback loops (critical-systems),
  network effects (teleological-economics).
- Why: Audit identified these as missing scaffolding for alignment
  claims. Game theory grounds coordination failure analysis.
  Principal-agent theory grounds oversight/deception claims.
  Feedback loops formalize dynamics referenced across all domains.
  Network effects explain AI capability concentration.
- Connections: New claims link to existing alignment claims they
  scaffold (alignment tax, voluntary safety, scalable oversight,
  treacherous turn, intelligence explosion, multipolar failure).

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
2026-03-07 19:03:38 +00:00


type: claim
domain: critical-systems
description: Control theory's foundational distinction — negative feedback creates stability and self-correction, while positive feedback creates exponential growth, lock-in, and cascading failure; most complex systems exhibit both simultaneously
confidence: proven
source: Wiener, Cybernetics (1948); Meadows, Thinking in Systems (2008); Arthur, Increasing Returns and Path Dependence (1994)
created: 2026-03-07

Positive feedback loops amplify deviations from equilibrium while negative feedback loops dampen them; the balance between the two determines whether systems stabilize and self-correct, or run away.

Wiener's cybernetics (1948) formalized what engineers had known for centuries: systems are governed by feedback. Negative feedback loops (thermostats, homeostasis, market price corrections) push systems toward equilibrium by counteracting deviations. Positive feedback loops (compound interest, viral spread, arms races) amplify deviations, driving systems away from their starting state.
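The two regimes can be sketched in a minimal linear model, assuming feedback of the form x ← x + g·x around an equilibrium at zero (the gain and step count here are illustrative, not drawn from any source above):

```python
def simulate(gain: float, x0: float = 1.0, steps: int = 20) -> list[float]:
    """Iterate x <- x + gain * x. gain < 0 counteracts the deviation
    (negative feedback); gain > 0 amplifies it (positive feedback)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + gain * xs[-1])
    return xs

negative = simulate(-0.3)  # perturbation decays toward equilibrium
positive = simulate(+0.3)  # the same perturbation compounds away from it
```

With a negative gain the initial deviation shrinks geometrically toward zero; with the same magnitude of positive gain it grows without bound.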

The interaction between the two determines system behavior:

Dominated by negative feedback: The system is self-correcting. Perturbations decay. Examples: body temperature regulation, competitive market pricing, ecosystem population dynamics. These systems are stable but can be slow to adapt.

Dominated by positive feedback: The system runs away. Small advantages compound into large ones. Examples: nuclear chain reactions, bank runs, network effects in technology adoption. Arthur (1994) demonstrated that positive feedback in technology markets produces lock-in — the winning technology need not be the best, only the first to cross a tipping point.
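Arthur's lock-in result can be illustrated with a toy adoption race (the model, names, and parameters are hypothetical: each new adopter is assumed to pick the product whose standalone quality plus network benefit is highest):

```python
def adoption_race(quality_a: float, quality_b: float,
                  head_start_a: int, steps: int = 50,
                  network_weight: float = 1.0) -> tuple[int, int]:
    """Each new adopter joins the product with higher utility:
    standalone quality + network_weight * installed base."""
    base_a, base_b = head_start_a, 0
    for _ in range(steps):
        if quality_a + network_weight * base_a >= quality_b + network_weight * base_b:
            base_a += 1
        else:
            base_b += 1
    return base_a, base_b

# B is the better technology, but A crossed the tipping point first.
locked_in = adoption_race(quality_a=1.0, quality_b=2.0, head_start_a=2)
level_start = adoption_race(quality_a=1.0, quality_b=2.0, head_start_a=0)
```

In the toy model, a small installed-base head start lets the inferior technology capture every subsequent adopter; starting level, the better technology wins instead.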

Both operating simultaneously: Most real complex systems. Meadows (2008) showed that the most dangerous systems are those where positive feedback loops operate on short timescales (quarterly profits, capability advances) while negative feedback loops operate on long timescales (regulation, social learning, institutional adaptation). The system appears stable until the positive loop overwhelms the negative one — then the transition is sudden and often irreversible.
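Meadows's timescale point can be made concrete in a toy iteration (gains and periods are illustrative assumptions, not calibrated to any real system): a weak positive loop that compounds every step overwhelms a stronger negative correction that only fires periodically.

```python
def run(pos_gain: float = 0.1, neg_gain: float = -0.3,
        neg_period: int = 1, steps: int = 60, x0: float = 1.0) -> float:
    """Positive feedback acts every step; the negative (corrective)
    loop only acts every neg_period steps."""
    x = x0
    for t in range(1, steps + 1):
        x += pos_gain * x          # fast loop: e.g. compounding capability
        if t % neg_period == 0:
            x += neg_gain * x      # slow loop: e.g. oversight, regulation
    return x

stable = run(neg_period=1)    # correction keeps pace: deviation dies out
runaway = run(neg_period=10)  # correction lags 10x: deviation explodes
```

The corrective gain is three times the amplifying gain in both runs; only its frequency changes, and that alone flips the system from stable to runaway.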

This framework applies directly to coordination design: designed systems need negative feedback (error correction, oversight, accountability) that operates at least as fast as the positive feedback (capability growth, competitive pressure, accumulation of power). When negative feedback is slower, the system is structurally unstable regardless of initial conditions.


Relevant Notes:

Topics: