---
type: source
title: "Distributional AGI Safety"
author: "Nenad Tomašev, Matija Franklin, Julian Jacobs, Sébastien Krier, Simon Osindero"
url: https://arxiv.org/abs/2512.16856
date_published: 2025-12-18
date_archived: 2026-03-16
domain: ai-alignment
status: processing
processed_by: theseus
tags: [distributed-agi, multi-agent-safety, patchwork-hypothesis, coordination]
sourced_via: "Alex Obadia (@ObadiaAlex) tweet, ARIA Research Scaling Trust programme"
twitter_id: "712705562191011841"
---
# Distributional AGI Safety

Tomašev et al. challenge the monolithic AGI assumption. They propose the "patchwork AGI hypothesis": general capability levels first manifest through coordination among groups of sub-AGI agents with complementary skills and affordances, not through a single unified system.

Key arguments:

- AI safety research has focused on safeguarding individual systems, overlooking distributed emergence
- Rapid deployment of agents with tool-use and coordination capabilities makes distributed safety urgent
- Proposed framework: "virtual agentic sandbox economies" with robust market mechanisms, auditability, reputation management, and oversight for collective risks
- Safety focus shifts from individual agent alignment to managing risks at the system-of-systems level
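The paper describes the sandbox-economy framework only at the concept level. As a toy illustration of how reputation, auditing, and allocation could interact (every name and mechanic below is invented for this sketch, not taken from the paper):

```python
import random
from dataclasses import dataclass

# Toy sketch of a "sandbox economy": tasks are allocated by reputation,
# an auditor checks outcomes, and audited results feed reputation back.
# All mechanics are hypothetical illustrations of the concept.

@dataclass
class Agent:
    name: str
    skill: float          # probability the agent complets a task correctly
    reputation: float = 1.0

class SandboxEconomy:
    def __init__(self, agents, audit_rate=1.0, rng=None):
        self.agents = agents
        self.audit_rate = audit_rate   # oversight: fraction of tasks audited
        self.rng = rng or random.Random(0)
        self.log = []                  # auditability: every allocation is recorded

    def allocate(self, task_id):
        # Market mechanism: reputation-weighted choice, so agents with a
        # better audited track record win more work.
        agent = self.rng.choices(self.agents,
                                 weights=[a.reputation for a in self.agents])[0]
        success = self.rng.random() < agent.skill
        if self.rng.random() < self.audit_rate:
            # Reputation management: audited outcomes move reputation up or down.
            agent.reputation = max(0.1, agent.reputation + (0.1 if success else -0.3))
        self.log.append((task_id, agent.name, success))
        return agent.name, success

economy = SandboxEconomy([Agent("helper", 0.9), Agent("sloppy", 0.4)])
for t in range(200):
    economy.allocate(t)
# After many audited tasks, the reliable agent should hold the higher reputation.
reps = {a.name: a.reputation for a in economy.agents}
```

The point of the sketch is the system-of-systems framing: no single agent is "aligned" here; safety properties emerge from the market, audit, and reputation loop around them.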

Directly relevant to our claim [[AGI may emerge as a patchwork of coordinating sub-AGI agents rather than a single monolithic system]] and to the collective superintelligence thesis.