
type: claim
domain: ai-alignment
secondary_domains: mechanisms, grand-strategy
description: Full-stack alignment requires concurrent alignment of AI systems and governing institutions with thick models of value, not just individual model alignment
confidence: speculative
source: Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value (arXiv 2512.03399, December 2025)
created: 2026-03-11
enrichments:
  - AI alignment is a coordination problem not a technical problem
  - AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation

Beneficial AI outcomes require institutional co-alignment not just model alignment

The full-stack alignment framework argues that "beneficial societal outcomes cannot be guaranteed by aligning individual AI systems" alone. Instead, comprehensive alignment requires concurrent alignment of *both* AI systems and the institutions that shape their development and deployment.

This extends the existing coordination-first thesis in a specific way: the existing "AI alignment is a coordination problem" claim treats institutions (governments, regulatory bodies, economic structures) as the environment within which coordination must occur. Full-stack alignment treats institutions themselves as alignment targets that must be redesigned and co-evolved alongside AI systems. The distinction is architectural: coordination-first asks "how do competing actors align around AI?" Full-stack alignment asks "how do we align the institutions that govern AI development?"

The framework proposes five implementation mechanisms:

  1. AI value stewardship — institutional structures for preserving and transmitting human values
  2. Normatively competent agents — AI systems that reason about values rather than optimize fixed objectives
  3. Win-win negotiation systems — mechanisms for resolving stakeholder conflicts without zero-sum extraction
  4. Meaning-preserving economic mechanisms — economic structures that preserve rather than flatten human meaning and purpose
  5. Democratic regulatory institutions — governance structures that represent affected populations, not just developers or governments
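The paper leaves these mechanisms unspecified, but the third one has a natural formal analogue. As a toy illustration only (not from the paper), a win-win negotiation system could be sketched as a Nash-bargaining selection rule: among candidate policies, admit only those that leave every stakeholder strictly better off than the status quo, then pick the one maximizing the product of gains. The policy names and utility numbers below are hypothetical.

```python
# Toy sketch (not from the paper): Nash bargaining as one possible
# formalization of a "win-win negotiation system". Each candidate policy
# assigns a utility to each stakeholder; the disagreement point is the
# status quo. Policies that extract value from any stakeholder (leave them
# at or below the status quo) are excluded, ruling out zero-sum outcomes.

from math import prod

def nash_bargain(policies, status_quo):
    """policies: {name: (u_1, ..., u_n)}; status_quo: (d_1, ..., d_n)."""
    feasible = {
        name: prod(u - d for u, d in zip(utils, status_quo))
        for name, utils in policies.items()
        if all(u > d for u, d in zip(utils, status_quo))  # strict Pareto gain
    }
    if not feasible:
        return None  # no win-win option exists; remain at the status quo
    return max(feasible, key=feasible.get)  # maximize product of gains

# Hypothetical utilities for (developers, regulators, public):
policies = {
    "unrestricted deployment": (9, 1, 2),  # extracts from regulators/public
    "licensed deployment":     (6, 5, 5),
    "moratorium":              (1, 6, 3),
}
print(nash_bargain(policies, status_quo=(2, 2, 2)))  # prints: licensed deployment
```

The strict-improvement filter is what encodes "without zero-sum extraction": "unrestricted deployment" is excluded despite its high total utility because it leaves regulators worse off than the status quo.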

The key claim: these five institutional mechanisms must be built concurrently with AI capability development, not sequentially after. This creates a timing problem: institutional redesign operates on decades-long timescales (Acemoglu's critical junctures are measured in decades); AI capability development operates on months-to-years timescales. The simultaneous co-alignment requirement may be structurally incoherent if the two processes cannot be synchronized.

Evidence

The paper presents this as a theoretical framework rather than an empirically validated approach. The five implementation mechanisms are proposed but lack formal specification, deployment evidence, or comparative analysis against alternative institutional designs. No working system exists that demonstrates institutional co-alignment at scale.

Challenges

Timescale incoherence: Institutional change (decades) and AI capability development (months) operate on fundamentally different timescales. The paper does not address whether simultaneous co-alignment is even temporally feasible, or whether the requirement should be sequential (build institutions first, then scale AI) or parallel (accept institutional lag).

Coordination across jurisdictions: The framework does not specify how to coordinate institutional redesign across nations with conflicting interests, different legal systems, and competing strategic incentives. Full-stack alignment requires global institutional alignment, but the mechanisms for achieving this across sovereign states are unspecified.

Irreducible value disagreement: The framework does not address how institutional co-alignment handles cases where different populations have genuinely incompatible enduring values, not just preference differences. Democratic regulatory institutions may amplify rather than resolve these conflicts.

Operationalization gap: The paper does not provide concrete methods for implementing any of the five mechanisms. "AI value stewardship" and "meaning-preserving economic mechanisms" are conceptually interesting but lack specification sufficient for deployment.

Institutional capture risk: The framework does not address how to prevent the proposed institutions from being captured by concentrated interests once they are built. Acemoglu's work emphasizes that critical junctures can close through backsliding, yet the paper proposes no mechanisms for resisting capture or reversal.


Relevant Notes:

Topics: