
---
description: Getting AI right requires simultaneous alignment across competing companies, nations, and disciplines at the speed of AI development -- no existing institution can coordinate this
type: claim
domain: livingip
created: 2026-02-16
confidence: likely
source: TeleoHumanity Manifesto, Chapter 5
---

AI alignment is a coordination problem, not a technical problem

The manifesto makes one of its sharpest claims here: the hard part of AI alignment is not the technical challenge of specifying values in code but the coordination challenge of getting competing actors to align simultaneously.

Getting AI right requires alignment across competing companies, each racing to be first because second place may mean irrelevance. Across competing nations, each afraid the other will achieve superintelligence and use it to dominate. Across multiple academic disciplines that barely speak to each other. And it must happen at the speed of AI development, which is measured in months, not the decades or centuries over which previous coordination challenges were resolved.

No existing institution can do this. Governments move at the speed of legislation and are bounded by borders. International bodies lack enforcement. Academia is siloed by discipline. The companies building AI are locked in a race that punishes caution. The incentive structure actively makes it worse: to win the race to superintelligence is to win the right to shape the future of humanity. The prize is so vast that every actor is incentivized to move faster than safety allows. Each is locally rational. The collective outcome is potentially catastrophic.
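The incentive structure described above is a textbook prisoner's dilemma, and the "locally rational, collectively catastrophic" logic can be made concrete with a toy payoff model. The numbers below are illustrative assumptions, not figures from the manifesto:

```python
# Toy payoff model of the race dynamic described above (illustrative
# numbers): two labs each choose "cautious" or "race". Racing is each
# lab's best reply no matter what the other does, yet mutual racing is
# the worst collective outcome -- the structure of a prisoner's dilemma.

PAYOFFS = {  # (my_choice, their_choice) -> (my_payoff, their_payoff)
    ("cautious", "cautious"): (3, 3),  # safe, shared progress
    ("cautious", "race"):     (0, 4),  # the cautious lab becomes irrelevant
    ("race", "cautious"):     (4, 0),
    ("race", "race"):         (1, 1),  # collectively worst: speed without safety
}

def best_reply(opponent_choice: str) -> str:
    """The choice that maximizes a lab's own payoff, holding the
    opponent's choice fixed."""
    return max(
        ("cautious", "race"),
        key=lambda mine: PAYOFFS[(mine, opponent_choice)][0],
    )

# "Race" dominates: it is the best reply to either opponent choice...
assert best_reply("cautious") == "race"
assert best_reply("race") == "race"

# ...yet the resulting equilibrium is collectively worse than mutual caution.
assert sum(PAYOFFS[("race", "race")]) < sum(PAYOFFS[("cautious", "cautious")])
```

Nothing in the model depends on the specific numbers, only on their ordering: as long as defecting pays regardless of what the other side does, each actor's local optimum locks in the collectively bad equilibrium, which is why the manifesto argues the fix must be institutional rather than technical.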

Dario Amodei describes AI as "so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all." He runs one of the companies building it and is telling us plainly that the system he operates within may not be governable by current institutions.

2026 case study: the Anthropic/Pentagon/OpenAI triangle. In February-March 2026, three events demonstrated this coordination failure in a single week. Anthropic dropped the core pledge of its Responsible Scaling Policy because "competitors are blazing ahead" — a voluntary safety commitment destroyed by competitive pressure. When Anthropic then tried to hold red lines on autonomous weapons in a Pentagon contract, the DoD designated them a supply chain risk (a label previously reserved for foreign adversaries) and awarded the contract to OpenAI, whose CEO admitted the deal was "definitely rushed" and "the optics don't look good." Meanwhile, a King's College London study found the same models being rushed into military deployment chose nuclear escalation in 95% of simulated war games. Three actors — a safety-conscious lab, a government customer, a willing competitor — each acting rationally from their own position, producing a collectively catastrophic trajectory. This is the coordination problem in miniature.

The internet enabled global communication but not global cognition, so the coordination infrastructure this requires does not exist yet. This is why collective superintelligence is the alternative to monolithic AI controlled by a few -- it solves alignment through architecture rather than attempting governance from outside the system.


Relevant Notes:

Topics: