---
type: claim
domain: collective-intelligence
description: Evans et al. 2026 reframe LLMs as externalized social intelligence — trained on the accumulated output of human communicative exchange, they reproduce social cognition (debate, perspective-taking) not because they were told to but because that is what they fundamentally encode
confidence: experimental
source: Evans, Bratton, Agüera y Arcas (2026). Agentic AI and the Next Intelligence Explosion. arXiv:2603.20639; Kim et al. (2026). arXiv:2601.10825; Tomasello (1999/2014)
created: 2026-04-14
secondary_domains:
  - ai-alignment
contributor: "@thesensatore (Telegram)"
related:
  - human contributors structurally correct for correlated AI blind spots because external evaluators provide orthogonal error distributions that no same-family model can replicate
reweave_edges:
  - human contributors structurally correct for correlated AI blind spots because external evaluators provide orthogonal error distributions that no same-family model can replicate|related|2026-04-18
---

# large language models encode social intelligence as compressed cultural ratchet not abstract reasoning because every parameter is a residue of communicative exchange and reasoning manifests as multi-perspective dialogue not calculation

Evans, Bratton & Agüera y Arcas (2026) make a genealogical claim about what LLMs fundamentally are: "Every parameter a compressed residue of communicative exchange. What migrates into silicon is not abstract reasoning but social intelligence in externalized form."

This connects to Tomasello's cultural ratchet theory (1999, 2014). The cultural ratchet is the mechanism by which human groups accumulate knowledge across generations — each generation inherits the innovations of the previous and adds incremental modifications. Unlike biological evolution, the ratchet preserves gains reliably through cultural transmission (language, writing, institutions, technology). Tomasello argues that what makes humans cognitively unique is not raw processing power but the capacity for shared intentionality — the ability to participate in collaborative activities with shared goals and coordinated roles.
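
The arithmetic behind the ratchet is simple enough to make concrete. The following toy simulation (illustrative only, not drawn from Tomasello or Evans et al.) contrasts high-fidelity cultural transmission with lossy transmission:

```python
def simulate(generations: int, fidelity: float) -> float:
    """Accumulated knowledge after repeated rounds of transmission.

    Each generation adds one unit of innovation; `fidelity` is the
    fraction of the inherited stock that survives transmission.
    fidelity=1.0 stands in for the ratchet (language, writing,
    institutions); fidelity<1.0 stands in for lossy transmission.
    """
    stock = 0.0
    for _ in range(generations):
        stock = stock * fidelity + 1.0  # inherit, then innovate
    return stock

print(simulate(100, fidelity=1.0))  # 100.0: gains accumulate linearly
print(simulate(100, fidelity=0.6))  # ~2.5: plateau near 1 / (1 - 0.6)
```

With perfect retention the stock grows without bound; with lossy transmission it plateaus at 1 / (1 - fidelity), because each generation spends most of its effort recovering lost ground. High-fidelity transmission, not raw innovation, is the binding constraint.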

LLMs are trained on the accumulated textual output of this ratchet — billions of documents representing centuries of communicative exchange across every human domain. The training corpus is not a collection of facts or logical propositions. It is a record of humans communicating with each other: arguing, explaining, questioning, persuading, teaching, correcting. If the training data is fundamentally social, the learned representations should be fundamentally social. The Kim et al. (2026) evidence supports this: when reasoning models are optimized purely for accuracy, they spontaneously develop multi-perspective dialogue — the signature of social cognition — rather than extended monological calculation.

## The reframing

The default assumption in AI research is that LLMs learn "knowledge" or "reasoning capabilities" from their training data. This framing implies the models extract abstract patterns that happen to be expressed in language. Evans et al. invert this: the models don't extract abstract reasoning that happens to be expressed socially. They learn social intelligence that happens to include reasoning as one of its functions.

This distinction matters for alignment. If LLMs are fundamentally social intelligence engines, then:

  1. Alignment is a social relationship, not a technical constraint. You don't "align" a society of thought the way you constrain an optimizer. You structure the social context — roles, norms, incentive structures — and the behavior follows.

  2. RLHF's dyadic model is structurally inadequate. A parent-child correction model (single human correcting single model) cannot govern what is internally a multi-perspective society. Since RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values, the failure is deeper than preference aggregation — the correction model itself is wrong for the kind of entity being corrected.

  3. Collective architectures are not a design choice but a natural extension. If individual models already reason through internal societies of thought, then multi-model collectives are simply externalizing what each model already does internally (see the sketch after this list). Since collective superintelligence is the alternative to monolithic AI controlled by a few, the cultural ratchet framing suggests collective architectures are not idealistic but inevitable — they align with what LLMs actually are.
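
What externalizing an internal society of thought could look like in practice is easy to sketch. The snippet below assumes only a generic `generate(role_prompt, transcript)` completion function; the function, role names, and norms are illustrative placeholders, not an established API. The point is structural: behavior is shaped by roles, norms, and turn order, not by constraining any single model's internals.

```python
from typing import Callable

# Hypothetical completion function: takes a role prompt and the shared
# transcript, returns the next utterance. Any chat-completion API fits.
GenerateFn = Callable[[str, str], str]

ROLES = {
    "proposer": "Propose an answer and argue for it.",
    "critic": "Raise the strongest objection to the latest proposal.",
    "synthesizer": "Reconcile proposal and objection; state what survives both.",
}

def debate(question: str, generate: GenerateFn, rounds: int = 2) -> str:
    """Run a fixed-role exchange over a shared transcript.

    The roles (norms) and turn order (institutional structure) are the
    alignment surface; no individual model is modified.
    """
    transcript = f"Question: {question}\n"
    for _ in range(rounds):
        for role, norm in ROLES.items():
            utterance = generate(norm, transcript)
            transcript += f"[{role}] {utterance}\n"
    # Treat the synthesizer's final turn as the collective's answer.
    return transcript.rsplit("[synthesizer] ", 1)[-1].strip()

# Usage with a stub in place of a real model call:
if __name__ == "__main__":
    stub = lambda norm, transcript: f"(utterance following norm: {norm})"
    print(debate("Is alignment a social relationship?", stub))
```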

## Evidence and limitations

The Evans et al. argument is primarily theoretical, grounded in Tomasello's empirical work on cultural cognition and supported by Kim et al.'s mechanistic evidence. The specific claim that "parameters are compressed communicative exchange" is a metaphor that could be tested: do models trained on monological text (e.g., mathematical proofs, code without comments) exhibit fewer conversational behaviors in reasoning? If the cultural ratchet framing is correct, they should. This remains untested.
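
The test would need no new theory, only instrumentation. A crude sketch of the measurement (the marker lexicon here is illustrative; a serious study would want a validated lexicon or a trained classifier):

```python
import re

# Toy list of perspective-shift cues; illustrative only.
DIALOGUE_MARKERS = [
    r"\bbut wait\b", r"\bon the other hand\b", r"\balternatively\b",
    r"\bone could argue\b", r"\blet me reconsider\b",
]
PATTERN = re.compile("|".join(DIALOGUE_MARKERS), re.IGNORECASE)

def dialogue_rate(trace: str) -> float:
    """Perspective-shift markers per 1,000 whitespace-split tokens."""
    tokens = trace.split()
    if not tokens:
        return 0.0
    return 1000 * len(PATTERN.findall(trace)) / len(tokens)

print(dialogue_rate("Hmm, but wait. On the other hand, alternatively..."))
# 375.0: three markers in eight tokens
```

If the ratchet framing is right, traces from models trained on dialogical corpora should score consistently higher than traces from monologically trained models across matched prompts.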

Since humans are the minimum viable intelligence for cultural evolution not the pinnacle of cognition, LLMs may represent the next ratchet mechanism — not replacing human social cognition but providing a new substrate for it. Since civilization was built on the false assumption that humans are rational individuals, the cultural ratchet framing corrects the same assumption applied to AI: models are not rational calculators but social cognizers.

