---
description: Getting AI right requires simultaneous alignment across competing companies, nations, and disciplines at the speed of AI development -- no existing institution can coordinate this
type: claim
domain: livingip
created: 2026-02-16
confidence: likely
source: "TeleoHumanity Manifesto, Chapter 5"
---
# AI alignment is a coordination problem not a technical problem
The manifesto makes one of its sharpest claims here: the hard part of AI alignment is not the technical challenge of specifying values in code but the coordination challenge of getting competing actors to align simultaneously.

Getting AI right requires alignment across competing companies, each racing to be first because second place may mean irrelevance. Across competing nations, each afraid the other will achieve superintelligence and use it to dominate. Across multiple academic disciplines that barely speak to each other. And it must happen at the speed of AI development, which is measured in months, not the decades or centuries over which previous coordination challenges were resolved.

No existing institution can do this. Governments move at the speed of legislation and are bounded by borders. International bodies lack enforcement power. Academia is siloed by discipline. The companies building AI are locked in a race that punishes caution. The incentive structure actively makes it worse: to win the race to superintelligence is to win the right to shape the future of humanity. The prize is so vast that every actor is incentivized to move faster than safety allows. Each is locally rational. The collective outcome is potentially catastrophic.
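
The structure of this trap can be made explicit. Below is a minimal game-theory sketch of the race as a two-actor dilemma; the payoff numbers are illustrative assumptions for this note, not figures from the manifesto.

```python
# A minimal sketch of the race dynamic as a two-actor payoff matrix.
# The numbers are illustrative assumptions, not taken from the manifesto.

# Each actor either pays the safety cost ("cautious") or skips it ("race").
# payoffs[(mine, theirs)] -> my payoff.
payoffs = {
    ("cautious", "cautious"): 3,  # both coordinate: best shared outcome
    ("race", "cautious"): 5,      # defect alone: capture the glittering prize
    ("cautious", "race"): 0,      # hold back alone: second place, irrelevance
    ("race", "race"): 1,          # both defect: capability without safety
}

def best_response(theirs: str) -> str:
    """The locally rational move, holding the other actor's move fixed."""
    return max(["race", "cautious"], key=lambda mine: payoffs[(mine, theirs)])

# Racing dominates whatever the other actor does...
assert best_response("cautious") == "race"
assert best_response("race") == "race"
# ...so both race and land on payoff 1, even though mutual caution pays 3:
# each choice locally rational, the collective outcome worse for everyone.
```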
Dario Amodei describes AI as "so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all." He runs one of the companies building it and is telling us plainly that the system he operates within may not be governable by current institutions.

Since [[the internet enabled global communication but not global cognition]], the coordination infrastructure needed doesn't exist yet. And since [[existential risk breaks trial and error because the first failure is the last event]], we cannot iterate our way to the right answer. This is why [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- it solves alignment through architecture rather than attempting governance from outside the system.

---
Relevant Notes:

- [[the internet enabled global communication but not global cognition]] -- the coordination infrastructure gap that makes this problem unsolvable with existing tools
- [[existential risk breaks trial and error because the first failure is the last event]] -- why iteration is not a strategy for AI alignment
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] -- the structural solution to this coordination failure
- [[COVID proved humanity cannot coordinate even when the threat is visible and universal]] -- if we failed at easy coordination, we have no basis for expecting success at hard coordination
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] -- the clearest evidence that alignment is coordination, not technical: competitive dynamics undermine any individual solution
- [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] -- individual oversight fails, making collective oversight architecturally necessary
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] -- the field has identified the coordination nature of the problem but nobody is building coordination solutions

Topics:

- [[livingip overview]]