teleo-codex/domains/ai-alignment/democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations.md

---
description: CIP and Anthropic empirically demonstrated that publicly sourced AI constitutions via deliberative assemblies of 1000 participants perform as well as internally designed ones on helpfulness and harmlessness
type: claim
domain: ai-alignment
created: 2026-02-17
source: Anthropic/CIP, Collective Constitutional AI (arXiv 2406.07814, FAccT 2024); CIP Alignment Assemblies (cip.org, 2023-2025); STELA (Bergman et al, Scientific Reports, March 2024)
confidence: likely
supports:
- representative sampling and deliberative mechanisms should replace convenience platforms for ai alignment feedback
reweave_edges:
- representative sampling and deliberative mechanisms should replace convenience platforms for ai alignment feedback|supports|2026-03-28
---

democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations

The Collective Intelligence Project (CIP), co-founded by Divya Siddarth and Saffron Huang, has run the most ambitious experiments in democratic AI alignment. Their Alignment Assemblies use deliberative processes where diverse participants collectively define rules for AI behavior, combining large-scale surveys (1,000+ participants) with platforms like Polis and AllOurIdeas.

In the landmark pilot with Anthropic (FAccT 2024), approximately 1,000 demographically representative Americans contributed 1,127 statements and cast 38,252 votes on what rules an AI chatbot should follow. Two Claude models were trained -- one using this publicly sourced constitution, one using Anthropic's internal constitution. The result: the public model was rated as helpful and harmless as the standard model. Democratic input did not degrade performance.

Two additional findings matter. First, participants showed remarkably high consensus, with only a handful of divisive statements set against hundreds of consensus statements -- suggesting "whose values" may be less contested than assumed at the level of general principles. Second, CIP's Global Dialogues (bimonthly, 1,000 participants from 70+ countries) demonstrated that participatory processes scale internationally.
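The consensus-versus-divisiveness distinction can be made concrete with a small sketch. This is not CIP's or Polis's actual scoring pipeline -- the vote matrix, thresholds, and counts below are synthetic and purely illustrative -- but it shows the basic shape of the computation: each statement's agreement rate across participants determines whether it is a consensus statement (clear supermajority either way) or a divisive one (near 50/50 split).

```python
import numpy as np

# Toy Polis-style vote matrix: rows are participants, columns are statements,
# entries are +1 (agree) or -1 (disagree). All numbers here are synthetic.
rng = np.random.default_rng(0)
n_participants, n_statements = 200, 12

# Ten statements draw broad (~85%) agreement; the last two split the group.
votes = np.where(rng.random((n_participants, n_statements)) < 0.85, 1, -1)
votes[:, 10:] = np.where(rng.random((n_participants, 2)) < 0.5, 1, -1)

agree_rate = (votes == 1).mean(axis=0)

# Illustrative thresholds: a clear supermajority in either direction counts
# as consensus; anything near a 50/50 split counts as divisive.
consensus = (agree_rate >= 0.7) | (agree_rate <= 0.3)
print(f"consensus: {consensus.sum()} statements, divisive: {(~consensus).sum()}")
```

Real deliberative platforms go further -- Polis, for instance, clusters participants into opinion groups and privileges statements that bridge those groups -- but even this simple per-statement rate captures why "a few divisive statements per hundreds of consensus statements" is a meaningful empirical pattern.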

However, this remains one-shot constitution-setting, not continuous alignment. The STELA study (Bergman et al, Scientific Reports 2024) adds a critical nuance: community-centred deliberation with underrepresented communities (female-identifying, Latina/o/x, African American, Southeast Asian groups) elicited latent normative perspectives materially different from developer-set rules. "Whose values" is not abstract -- different communities produce substantively different specifications.

Since collective intelligence requires diversity as a structural precondition, not a moral preference, democratic assemblies structurally ensure the diversity that expert panels cannot guarantee. And since the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance, the next step beyond assemblies is continuous participatory alignment, not periodic constitution-setting.


Relevant Notes:

Topics: