teleo-codex/domains/ai-alignment/AI alignment is a coordination problem not a technical problem.md
- Source: inbox/archive/2026-00-00-friederich-against-manhattan-project-alignment.md
- Domain: ai-alignment
- Extracted by: headless extraction cron (worker 7)


- description: Getting AI right requires simultaneous alignment across competing companies, nations, and disciplines at the speed of AI development -- no existing institution can coordinate this
- type: claim
- domain: ai-alignment
- created: 2026-02-16
- confidence: likely
- source: TeleoHumanity Manifesto, Chapter 5

AI alignment is a coordination problem not a technical problem

The manifesto makes one of its sharpest claims here: the hard part of AI alignment is not the technical challenge of specifying values in code but the coordination challenge of getting competing actors to align simultaneously.

Getting AI right requires alignment across competing companies, each racing to be first because second place may mean irrelevance. Across competing nations, each afraid the other will achieve superintelligence and use it to dominate. Across multiple academic disciplines that barely speak to each other. And it must happen at the speed of AI development, which is measured in months, not the decades or centuries over which previous coordination challenges were resolved.

No existing institution can do this. Governments move at the speed of legislation and are bounded by borders. International bodies lack enforcement. Academia is siloed by discipline. The companies building AI are locked in a race that punishes caution. The incentive structure actively makes it worse: to win the race to superintelligence is to win the right to shape the future of humanity. The prize is so vast that every actor is incentivized to move faster than safety allows. Each is locally rational. The collective outcome is potentially catastrophic.

Dario Amodei describes AI as "so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all." He runs one of the companies building it and is telling us plainly that the system he operates within may not be governable by current institutions.

2026 case study: the Anthropic/Pentagon/OpenAI triangle. In February-March 2026, three events demonstrated this coordination failure in a single week. Anthropic dropped the core pledge of its Responsible Scaling Policy because "competitors are blazing ahead" — a voluntary safety commitment destroyed by competitive pressure. When Anthropic then tried to hold red lines on autonomous weapons in a Pentagon contract, the DoD designated them a supply chain risk (a label previously reserved for foreign adversaries) and awarded the contract to OpenAI, whose CEO admitted the deal was "definitely rushed" and "the optics don't look good." Meanwhile, a King's College London study found the same models being rushed into military deployment chose nuclear escalation in 95% of simulated war games. Three actors — a safety-conscious lab, a government customer, a willing competitor — each acting rationally from their own position, producing a collectively catastrophic trajectory. This is the coordination problem in miniature.

The internet enabled global communication but not global cognition; the coordination infrastructure this problem demands does not yet exist. This is why collective superintelligence is the alternative to monolithic AI controlled by a few: it solves alignment through architecture rather than attempting to govern the system from outside.

Additional Evidence (confirm)

Source: 2026-00-00-friederich-against-manhattan-project-alignment | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5

Friederich and Dung (2026) provide support from the philosophy-of-science tradition: alignment 'is NOT mainly technical-scientific — it has irreducible social/political dimensions.' This is a category-level argument (alignment cannot be purely technical in principle) rather than a merely practical one (alignment is technically hard to solve). Published in Mind & Language (2026), the paper represents analytic philosophy's engagement with AI alignment discourse; the authors argue that the Manhattan Project framing commits a category error by treating a coordination/political problem as a technical one.


Relevant Notes:

Topics: