---
type: source
title: "Coasean Bargaining at Scale: Decentralization, coordination, and co-existence with AGI"
author: "Seb Krier (Frontier Policy Development, Google DeepMind; personal capacity)"
url: https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale
date_published: 2025-09-26
date_archived: 2026-03-16
domain: ai-alignment
secondary_domains: [collective-intelligence, teleological-economics]
status: enrichment
tags: [coase-theorem, transaction-costs, agent-governance, decentralization, coordination]
sourced_via: "Alex Obadia (@ObadiaAlex) tweet, ARIA Research Scaling Trust programme"
twitter_id: "712705562191011841"
processed_by: theseus
processed_date: 2026-03-19
enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
claims_extracted:
  - "AI agents as personal advocates collapse Coasean transaction costs enabling bottom-up coordination at societal scale but catastrophic risks remain non-negotiable requiring state enforcement as outer boundary"
---

# Coasean Bargaining at Scale

Krier argues that AGI agents acting as personal advocates can dramatically reduce transaction costs, enabling Coasean bargaining at societal scale. This shifts governance from top-down central planning toward bottom-up market coordination.

Key arguments:

- Coasean private bargaining has been theoretically sound but practically impossible due to prohibitive transaction costs (discovery, negotiation, enforcement)
- AI agents solve this: instant communication of granular preferences, hyper-granular contracting, automatic verification/enforcement
- Three resulting governance principles: accountability (desires become priced offers), voluntary coalitions (diffuse interests band together at nanosecond speed), continuous self-calibration (rules flex based on live preference streams)
- "Matryoshkan alignment": nested governance with an outer layer (legal/state), a middle layer (competitive service providers), and an inner layer (individual customization)
- Critical limitations acknowledged: wealth inequality, rights allocation remains a constitutional/normative question, catastrophic risks need state enforcement
- Reframes alignment from engineering guarantees to institutional design

Directly relevant to [[coordination failures arise from individually rational strategies that produce collectively irrational outcomes]] and [[decentralized information aggregation outperforms centralized planning because dispersed knowledge cannot be collected into a single mind]].

## Key Facts

- Seb Krier works at Frontier Policy Development, Google DeepMind (writing in a personal capacity)
- Article published on the Cosmos Institute blog, 2025-09-26
- Sourced via an Alex Obadia tweet about the ARIA Research Scaling Trust programme