Compare commits


1 commit

Author: Teleo Agents
SHA1: 2048d99547
Message: theseus: extract from 2025-12-00-fullstack-alignment-thick-models-value.md
- Source: inbox/archive/2025-12-00-fullstack-alignment-thick-models-value.md
- Domain: ai-alignment
- Extracted by: headless extraction cron (worker 4)

Pentagon-Agent: Theseus <HEADLESS>
Date: 2026-03-12 09:18:58 +00:00
5 changed files with 45 additions and 57 deletions

View file

@@ -25,7 +25,7 @@ Since [[the internet enabled global communication but not global cognition]], th
### Additional Evidence (extend)
*Source: [[2025-12-00-fullstack-alignment-thick-models-value]] | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5*
The Full-Stack Alignment paper extends the coordination thesis to institutions themselves, proposing that alignment cannot be achieved by coordinating AI labs alone. The institutions that govern AI development—regulatory bodies, economic incentive structures, democratic processes—must also be aligned with human values. This is 'full-stack alignment': concurrent alignment of AI systems and institutions. The paper proposes five implementation mechanisms including democratic regulatory institutions and meaning-preserving economic mechanisms, suggesting that institutional design is as critical as inter-lab coordination. This strengthens the coordination-first thesis by showing that coordination between labs is necessary but insufficient without institutional co-alignment.
The Full-Stack Alignment paper extends the coordination thesis to institutions themselves, not just coordination between AI labs. It argues that 'beneficial societal outcomes cannot be guaranteed by aligning individual AI systems' and proposes concurrent alignment of both AI systems and the institutions that govern them. This is a stronger version of the coordination-first approach—it claims institutions need structural alignment with human values, not just better coordination protocols between existing actors. The five implementation mechanisms (AI value stewardship, normatively competent agents, win-win negotiation systems, meaning-preserving economic mechanisms, democratic regulatory institutions) are institutional structures, not coordination protocols.
---

View file

@@ -13,6 +13,12 @@ AI development is creating precisely this kind of critical juncture. The mismatc
Critical junctures are windows, not guarantees. They can close. Acemoglu also documents backsliding risk -- even established democracies can experience institutional regression when elites exploit societal divisions. Any movement seeking to build new governance institutions during this juncture must be anti-fragile to backsliding. The institutional question is not just "how do we build better governance?" but "how do we build governance that resists recapture by concentrated interests once the juncture closes?"
### Additional Evidence (extend)
*Source: [[2025-12-00-fullstack-alignment-thick-models-value]] | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5*
The Full-Stack Alignment framework directly addresses the capability-governance mismatch by proposing institutional co-alignment as a solution. The paper argues that alignment cannot succeed through technical means alone and requires transforming the institutions that shape AI development. The five implementation mechanisms (AI value stewardship, normatively competent agents, win-win negotiation systems, meaning-preserving economic mechanisms, and democratic regulatory institutions) are institutional structures designed to close the capability-governance gap by aligning institutions themselves with human values.
---
Relevant Notes:

View file

@@ -3,46 +3,38 @@ type: claim
domain: ai-alignment
description: "Beneficial AI outcomes require simultaneously aligning both AI systems and the institutions that govern them rather than focusing on individual model alignment alone"
confidence: speculative
source: "Full-Stack Alignment paper (arxiv.org/abs/2512.03399, December 2025)"
source: "Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value (December 2025), arxiv.org/abs/2512.03399"
created: 2026-03-11
secondary_domains: [mechanisms, grand-strategy]
---
# AI alignment requires institutional co-alignment not just model alignment
# AI alignment requires institutional co-alignment, not just model alignment
The Full-Stack Alignment framework argues that "beneficial societal outcomes cannot be guaranteed by aligning individual AI systems" alone. Instead, alignment must be comprehensive—addressing both AI systems and the institutions that shape their development and deployment. This extends beyond single-organization objectives to address misalignment across multiple stakeholders.
The Full-Stack Alignment framework proposes that "beneficial societal outcomes cannot be guaranteed by aligning individual AI systems" alone. The paper argues alignment must be comprehensive—addressing both AI systems and the institutions that shape their development and deployment simultaneously.
**Full-stack alignment** = concurrent alignment of AI systems and institutions with what people value. This moves the alignment problem from a purely technical domain (how do we align this model?) to a sociotechnical domain (how do we align the entire system of models, labs, regulators, and economic incentives?).
This extends beyond single-organization objectives to address misalignment across multiple stakeholders. The framework proposes "full-stack alignment" as the concurrent alignment of AI systems and institutions with what people value, reframing the problem from technical model alignment to system-level institutional coordination.
The paper proposes five implementation mechanisms:
1. AI value stewardship
2. Normatively competent agents
3. Win-win negotiation systems
4. Meaning-preserving economic mechanisms
5. Democratic regulatory institutions
## Implementation Mechanisms
This is a stronger claim than coordination-focused alignment theses, which address coordination between AI labs but not necessarily the institutional structures themselves. The key insight is that institutional misalignment (e.g., competitive pressure to skip safety measures, regulatory capture, misaligned economic incentives) can undermine even perfectly aligned individual models.
The paper identifies five mechanisms for achieving full-stack alignment:
## Evidence
1. **AI value stewardship** — institutional structures for stewarding AI development
2. **Normatively competent agents** — AI systems capable of normative reasoning
3. **Win-win negotiation systems** — mechanisms for resolving stakeholder conflicts
4. **Meaning-preserving economic mechanisms** — economic structures that preserve human values
5. **Democratic regulatory institutions** — governance structures that embed democratic input
The paper provides architectural arguments rather than empirical validation. The claim rests on the observation that individual model alignment cannot address:
- Multi-stakeholder value conflicts where different groups have genuinely incompatible objectives
- Institutional incentive misalignment (e.g., competitive pressure to skip safety when competitors advance without equivalent constraints)
- Deployment context divergence from training conditions, which institutional structures either amplify or mitigate
- Regulatory capture and principal-agent problems within governance institutions themselves
## Relationship to Existing Alignment Work
## Limitations
This represents a stronger claim than coordination-focused approaches that address AI lab coordination alone. Rather than improving coordination protocols between existing actors, full-stack alignment argues the institutions themselves require structural alignment with human values.
No formal impossibility results or empirical demonstrations are provided. The paper is architecturally ambitious but lacks technical specificity about how institutional co-alignment would be implemented, measured, or verified. The five mechanisms are proposed as a framework but not demonstrated as sufficient or necessary.
## Evidence and Limitations
The paper provides architectural framing and mechanism proposals rather than empirical validation or formal proofs. Confidence is speculative because this is a December 2025 paper proposing a framework without implementation results, independent verification, or engagement with formal impossibility results. The paper is architecturally ambitious but lacks technical specificity about how thick value models would be operationalized or how institutional alignment would be measured.
---
Relevant Notes:
- [[AI alignment is a coordination problem not a technical problem]] — this extends coordination thesis to institutions
- [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] — directly relevant context
- [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — institutional misalignment example
Topics:
- [[domains/ai-alignment/_map]]
- [[core/mechanisms/_map]]
- [[core/grand-strategy/_map]]
**Related claims:**
- [[AI alignment is a coordination problem not a technical problem]] — this claim extends coordination thesis to institutions themselves
- [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] — institutional alignment directly addresses this capability-governance gap
- [[safe AI development requires building alignment mechanisms before scaling capability]] — institutional co-alignment is proposed as one such mechanism

View file

@@ -3,44 +3,34 @@ type: claim
domain: ai-alignment
description: "Thick value models that distinguish enduring values from temporary preferences enable AI systems to reason normatively across new domains by embedding choices in social context"
confidence: speculative
source: "Full-Stack Alignment paper (arxiv.org/abs/2512.03399, December 2025)"
source: "Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value (December 2025), arxiv.org/abs/2512.03399"
created: 2026-03-11
secondary_domains: [mechanisms]
---
# Thick models of value distinguish enduring values from temporary preferences enabling normative reasoning
# Thick models of value distinguish enduring values from temporary preferences, enabling normative reasoning
The Full-Stack Alignment paper proposes "thick models of value" as an alternative to utility functions and preference orderings. Thick value models are designed to:
The Full-Stack Alignment paper proposes "thick models of value" as a conceptual alternative to utility functions and preference orderings. These models are characterized by three properties:
1. **Distinguish enduring values from temporary preferences**What people consistently care about across time and contexts vs. what they want in a specific moment
2. **Model how individual choices embed within social contexts** — Decisions are not isolated preference expressions but socially situated actions that derive meaning from institutional and cultural context
3. **Enable normative reasoning across new domains**The model can generalize to novel situations by understanding underlying values rather than memorizing preference rankings from training data
1. **Distinguish enduring values from temporary preferences**separating what people durably care about from momentary wants or revealed preferences
2. **Embed individual choices within social contexts** — recognizing that preferences are shaped by and dependent on social structures rather than being context-independent
3. **Enable normative reasoning across new domains**allowing AI systems to generalize value judgments to novel situations beyond training data
This contrasts with thin models (utility functions, preference orderings) that treat all stated preferences as equally valid expressions of value and ignore social context. The distinction maps to the gap between what people say they want (surface preferences) and what actually produces good outcomes for them (deeper values).
## Contrast with Thin Models
## Evidence
Thin models (utility functions, preference orderings) treat all stated preferences as equally valid and assume context-independence. Thick models acknowledge that what people say they want (preferences) often diverges from what produces good outcomes (values), and that this divergence is systematic rather than random.
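A minimal illustrative sketch of the representational difference, in Python. The class and field names (`ThinPreferences`, `ThickValueModel`, `VALUE_TAGS`) are assumptions for exposition, not structures defined in the paper:

```python
from dataclasses import dataclass, field

# Thin model: a bare preference ordering. Every stated preference counts
# equally and carries no information about context or durability.
@dataclass
class ThinPreferences:
    ranking: list[str]  # e.g. ["exercise", "late-night snack", "doomscroll"]

# Hypothetical lookup table: which enduring values each option serves.
VALUE_TAGS = {
    "exercise": ["health"],
    "late-night snack": ["comfort"],
    "doomscroll": [],
}

# Thick model (hypothetical shape): keeps durable values separate from
# momentary wants and records the social context of each observed choice.
@dataclass
class ThickValueModel:
    enduring_values: dict[str, float] = field(default_factory=dict)             # value -> strength
    momentary_preferences: list[tuple[str, str]] = field(default_factory=list)  # (option, context)

    def judge(self, option: str) -> float:
        # Normative-reasoning sketch: score a possibly novel option by the
        # enduring values it serves, instead of replaying past rankings.
        return sum(self.enduring_values.get(v, 0.0) for v in VALUE_TAGS.get(option, []))

model = ThickValueModel(enduring_values={"health": 1.0, "comfort": 0.3})
print(model.judge("exercise"))          # 1.0 -- favoured by an enduring value
print(model.judge("late-night snack"))  # 0.3 -- a momentary want, weakly valued
```

The point of the contrast is only that the thick structure retains the state (context, durability, value attribution) that the thin ranking discards; how such a structure would be learned is exactly the open question the paper leaves unaddressed.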
The paper provides conceptual architecture but no implementation or empirical validation. The claim is theoretical—thick value models are proposed as a design target for alignment systems, not demonstrated as achievable or effective in practice.
## Limitations and Gaps
The paper does not engage with existing preference learning methods (RLHF, DPO, IRL) or explain how thick models would be learned from behavioral data. It does not provide formal definitions or computational procedures for distinguishing enduring values from temporary preferences.
The paper does not provide formal definitions of thick value models, implementation details for how they would be operationalized in AI systems, or empirical validation. It remains a conceptual proposal for how alignment systems should represent human values. No engagement with existing preference learning literature (RLHF, DPO) or formal methods for value specification is provided.
## Challenges and Open Questions
## Relationship to Continuous Value Integration
1. **Empirical fuzziness**: The distinction between "enduring values" and "temporary preferences" may be empirically fuzzy in practice. What appears to be a temporary preference might reflect a genuine value in a specific context, or vice versa.
2. **Learning problem**: No mechanism is proposed for how an AI system would learn thick value models from data. Standard preference learning assumes all revealed preferences are valid; thick models require a way to filter or weight preferences by endurance and context-appropriateness (a toy version of such a filter is sketched after this list).
3. **Social context specification**: The paper does not specify how to formally represent or extract "social context" from data or how to verify that an AI system has correctly modeled it.
4. **Comparison to existing work**: No engagement with related approaches like value learning, inverse reinforcement learning, or constitutional AI that also attempt to move beyond simple preference orderings.
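To make the learning problem concrete, one naive version of such an endurance filter would weight each revealed preference by how consistently it recurs across distinct contexts. This is an assumption-laden sketch for illustration only, not a method proposed in the paper:

```python
from collections import defaultdict

def endurance_weights(observations):
    """observations: list of (option, context) pairs from behavioural data.
    Returns a weight per option: preferences seen across many distinct
    contexts are treated as more 'enduring' than one-off wants."""
    contexts_per_option = defaultdict(set)
    all_contexts = set()
    for option, context in observations:
        contexts_per_option[option].add(context)
        all_contexts.add(context)
    return {
        option: len(ctxs) / len(all_contexts)
        for option, ctxs in contexts_per_option.items()
    }

obs = [
    ("exercise", "weekday morning"),
    ("exercise", "holiday"),
    ("exercise", "travel"),
    ("late-night snack", "stressful deadline"),
]
print(endurance_weights(obs))
# {'exercise': 0.75, 'late-night snack': 0.25} -- the option that recurs
# across more contexts is weighted as more enduring
```

Even this toy version exposes the fuzziness noted in point 1: a preference that recurs rarely only because its triggering context is rare would be misclassified as non-enduring.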
This concept formalizes the intuition that values should be continuously integrated into systems rather than specified once at training time. Rather than encoding values as fixed parameters, thick models would enable ongoing normative reasoning as deployment contexts evolve and new situations emerge.
---
Relevant Notes:
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] — thick values formalize continuous value integration
- [[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]] — motivates thick models as alternative to explicit specification
- [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — thick models attempt to address this by embedding context
Topics:
- [[domains/ai-alignment/_map]]
- [[core/mechanisms/_map]]
**Related claims:**
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] — thick models operationalize continuous value integration
- [[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]] — thick models acknowledge this complexity by modeling context-dependence
- [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — thick models address this by enabling context-dependent reasoning

View file

@@ -13,9 +13,9 @@ tags: [full-stack-alignment, institutional-alignment, thick-values, normative-co
processed_by: theseus
processed_date: 2026-03-11
claims_extracted: ["ai-alignment-requires-institutional-co-alignment-not-just-model-alignment.md", "thick-models-of-value-distinguish-enduring-values-from-temporary-preferences-enabling-normative-reasoning.md"]
enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md"]
enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two speculative/experimental claims from December 2025 paper. Primary contribution is extending coordination thesis to institutional co-alignment and formalizing continuous value integration as 'thick models.' Paper is architecturally ambitious but lacks technical specificity or empirical validation. No engagement with RLCF/bridging mechanisms or formal impossibility results. The five implementation mechanisms are listed but not detailed enough to extract as separate claims."
extraction_notes: "Extracted two novel claims from Full-Stack Alignment paper: (1) institutional co-alignment as necessary for beneficial AI outcomes, extending coordination thesis to institutions themselves, and (2) thick models of value as formalization of continuous value integration. Applied three enrichments to existing coordination and continuous-alignment claims. Paper is architecturally ambitious but lacks technical specificity or empirical validation—confidence levels reflect this (experimental for institutional co-alignment, speculative for thick value models). No engagement with RLHF/bridging mechanisms or formal impossibility results as curator noted."
---
## Content