extract: 2025-09-26-krier-coasean-bargaining-at-scale
Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA>
This commit is contained in:
parent 2153ae39bd
commit c76e8ce4d9
3 changed files with 43 additions and 1 deletion
@@ -39,6 +39,12 @@ The UK AI4CI research strategy treats alignment as a coordination and governance
 The source identifies three market failure mechanisms driving over-adoption: (1) negative externalities where firms don't internalize demand destruction, (2) coordination failure where 'follow or die' dynamics force adoption despite systemic risks, (3) information asymmetry where adoption signals inevitability. All three are coordination failures, not technical capability gaps.
 
+### Additional Evidence (extend)
+
+*Source: [[2025-09-26-krier-coasean-bargaining-at-scale]] | Added: 2026-03-19*
+
+Krier provides the institutional mechanism: personal AI agents enable Coasean bargaining at scale by collapsing transaction costs (discovery, negotiation, enforcement), shifting governance from top-down planning to bottom-up market coordination within state-enforced safety boundaries. Proposes 'Matryoshkan alignment' with nested layers: outer (legal/constitutional), middle (competitive providers), inner (individual customization).
+
 ---
 
 Relevant Notes:
@@ -0,0 +1,26 @@
+{
+  "rejected_claims": [
+    {
+      "filename": "ai-agents-as-personal-advocates-enable-coasean-bargaining-at-scale-by-collapsing-transaction-costs-but-catastrophic-risks-require-state-enforcement.md",
+      "issues": [
+        "missing_attribution_extractor"
+      ]
+    }
+  ],
+  "validation_stats": {
+    "total": 1,
+    "kept": 0,
+    "fixed": 3,
+    "rejected": 1,
+    "fixes_applied": [
+      "ai-agents-as-personal-advocates-enable-coasean-bargaining-at-scale-by-collapsing-transaction-costs-but-catastrophic-risks-require-state-enforcement.md:set_created:2026-03-19",
+      "ai-agents-as-personal-advocates-enable-coasean-bargaining-at-scale-by-collapsing-transaction-costs-but-catastrophic-risks-require-state-enforcement.md:stripped_wiki_link:coordination failures arise from individually rational strat",
+      "ai-agents-as-personal-advocates-enable-coasean-bargaining-at-scale-by-collapsing-transaction-costs-but-catastrophic-risks-require-state-enforcement.md:stripped_wiki_link:decentralized information aggregation outperforms centralize"
+    ],
+    "rejections": [
+      "ai-agents-as-personal-advocates-enable-coasean-bargaining-at-scale-by-collapsing-transaction-costs-but-catastrophic-risks-require-state-enforcement.md:missing_attribution_extractor"
+    ]
+  },
+  "model": "anthropic/claude-sonnet-4.5",
+  "date": "2026-03-19"
+}
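Each `fixes_applied` entry in the report above encodes one repair as `filename:action:detail`. A minimal sketch of how such a report could be parsed and cross-checked against its own counters — the `parse_entry` helper and the consistency rules are assumptions for illustration, not part of the source tooling:

```python
import json

# Abbreviated report in the shape of the file added above
# (field names from the report; filenames shortened for readability).
REPORT = json.loads("""
{
  "validation_stats": {
    "total": 1,
    "kept": 0,
    "fixed": 3,
    "rejected": 1,
    "fixes_applied": [
      "note.md:set_created:2026-03-19",
      "note.md:stripped_wiki_link:coordination failures arise",
      "note.md:stripped_wiki_link:decentralized information aggregation"
    ],
    "rejections": [
      "note.md:missing_attribution_extractor"
    ]
  }
}
""")

def parse_entry(entry: str) -> dict:
    """Split a 'filename:action:detail' entry on its first two colons,
    so details that themselves contain colons stay intact."""
    parts = entry.split(":", 2)
    return {
        "filename": parts[0],
        "action": parts[1],
        "detail": parts[2] if len(parts) > 2 else "",
    }

stats = REPORT["validation_stats"]
fixes = [parse_entry(e) for e in stats["fixes_applied"]]

# Cross-check the counters against the lists they summarize.
assert len(fixes) == stats["fixed"]
assert len(stats["rejections"]) == stats["rejected"]
assert stats["kept"] + stats["rejected"] == stats["total"]

print([f["action"] for f in fixes])
# → ['set_created', 'stripped_wiki_link', 'stripped_wiki_link']
```

Note that `fixed` counts individual repairs, not claims, which is why it can exceed `total`.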
|
|
@@ -7,11 +7,15 @@ date_published: 2025-09-26
 date_archived: 2026-03-16
 domain: ai-alignment
 secondary_domains: [collective-intelligence, teleological-economics]
-status: unprocessed
+status: enrichment
 processed_by: theseus
 tags: [coase-theorem, transaction-costs, agent-governance, decentralization, coordination]
 sourced_via: "Alex Obadia (@ObadiaAlex) tweet, ARIA Research Scaling Trust programme"
 twitter_id: "712705562191011841"
+processed_by: theseus
+processed_date: 2026-03-19
+enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 
 # Coasean Bargaining at Scale
@@ -27,3 +31,9 @@ Key arguments:
 - Reframes alignment from engineering guarantees to institutional design
 
 Directly relevant to [[coordination failures arise from individually rational strategies that produce collectively irrational outcomes]] and [[decentralized information aggregation outperforms centralized planning because dispersed knowledge cannot be collected into a single mind]].
+
+## Key Facts
+
+- Seb Krier works at Frontier Policy Development, Google DeepMind (writing in personal capacity)
+- Article published at Cosmos Institute blog, 2025-09-26
+- Sourced via Alex Obadia tweet about ARIA Research Scaling Trust programme