reweave: merge 52 files via frontmatter union [auto]
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
parent f971b18220
commit 0e3f3c289d
52 changed files with 357 additions and 137 deletions
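The merge tool itself is not part of this diff, only its output. As orientation for the hunks below: each one unions list-valued frontmatter fields (`related`, `supports`, `reweave_edges`) across file versions, drops YAML quoting from list entries, and appends pipe-delimited `claim|relation|date` edges. A minimal sketch of what such a frontmatter union might look like, assuming each file's YAML frontmatter is already parsed into a dict; the function and variable names are illustrative, not the reweave tool's actual code:

```python
# Hypothetical sketch of a "frontmatter union" merge; not the reweave
# tool's real implementation.

from datetime import date


def normalize(value: object) -> str:
    # The hunks below drop YAML quoting, so quoted and unquoted variants
    # of the same claim must count as one entry when unioning.
    return str(value).strip().strip('"')


def union_frontmatter(ours: dict, theirs: dict) -> dict:
    """Union list-valued fields (related, supports, reweave_edges, ...);
    keep scalar fields from `ours`; preserve first-seen order."""
    merged = {k: (list(v) if isinstance(v, list) else v) for k, v in ours.items()}
    for key, incoming in theirs.items():
        current = merged.get(key)
        if isinstance(incoming, list) and isinstance(current, list):
            seen = {normalize(v) for v in current}
            merged[key] = current + [v for v in incoming if normalize(v) not in seen]
        else:
            merged.setdefault(key, incoming)
    return merged


def edge(claim: str, relation: str, when: date) -> str:
    # reweave_edges entries are pipe-delimited: claim|relation|YYYY-MM-DD.
    return f"{claim}|{relation}|{when.isoformat()}"
```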
@@ -6,9 +6,11 @@ confidence: likely
 source: "Teleo collective operational evidence — all 5 active agents on Claude, 0 cross-model reviews in 44 PRs"
 created: 2026-03-07
 related:
-- "agent mediated knowledge bases are structurally novel because they combine atomic claims adversarial multi agent evaluation and persistent knowledge graphs which Wikipedia Community Notes and prediction markets each partially implement but none combine"
+- agent mediated knowledge bases are structurally novel because they combine atomic claims adversarial multi agent evaluation and persistent knowledge graphs which Wikipedia Community Notes and prediction markets each partially implement but none combine
+- evaluation and optimization have opposite model diversity optima because evaluation benefits from cross family diversity while optimization benefits from same family reasoning pattern alignment
 reweave_edges:
-- "agent mediated knowledge bases are structurally novel because they combine atomic claims adversarial multi agent evaluation and persistent knowledge graphs which Wikipedia Community Notes and prediction markets each partially implement but none combine|related|2026-04-04"
+- agent mediated knowledge bases are structurally novel because they combine atomic claims adversarial multi agent evaluation and persistent knowledge graphs which Wikipedia Community Notes and prediction markets each partially implement but none combine|related|2026-04-04
+- evaluation and optimization have opposite model diversity optima because evaluation benefits from cross family diversity while optimization benefits from same family reasoning pattern alignment|related|2026-04-06
 ---
 
 # All agents running the same model family creates correlated blind spots that adversarial review cannot catch because the evaluator shares the proposer's training biases

@@ -5,6 +5,10 @@ description: "The Teleo knowledge base uses four confidence levels (proven/likel
 confidence: likely
 source: "Teleo collective operational evidence — confidence calibration developed through PR reviews, codified in schemas/claim.md and core/epistemology.md"
 created: 2026-03-07
+related:
+- confidence changes in foundational claims must propagate through the dependency graph because manual tracking fails at scale and approximately 40 percent of top psychology journal papers are estimated unlikely to replicate
+reweave_edges:
+- confidence changes in foundational claims must propagate through the dependency graph because manual tracking fails at scale and approximately 40 percent of top psychology journal papers are estimated unlikely to replicate|related|2026-04-06
 ---
 
 # Confidence calibration with four levels enforces honest uncertainty because proven requires strong evidence while speculative explicitly signals theoretical status

@@ -8,11 +8,13 @@ confidence: experimental
 source: "Synthesis across Dell'Acqua et al. (Harvard/BCG, 2023), Noy & Zhang (Science, 2023), Brynjolfsson et al. (Stanford/NBER, 2023), and Nature meta-analysis of human-AI performance (2024-2025)"
 created: 2026-03-28
 depends_on:
-- "human verification bandwidth is the binding constraint on AGI economic impact not intelligence itself because the marginal cost of AI execution falls to zero while the capacity to validate audit and underwrite responsibility remains finite"
+- human verification bandwidth is the binding constraint on AGI economic impact not intelligence itself because the marginal cost of AI execution falls to zero while the capacity to validate audit and underwrite responsibility remains finite
 related:
-- "human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high exposure conditions"
+- human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high exposure conditions
+- macro AI productivity gains remain statistically undetectable despite clear micro level benefits because coordination costs verification tax and workslop absorb individual level improvements before they reach aggregate measures
 reweave_edges:
-- "human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high exposure conditions|related|2026-03-28"
+- human ideas naturally converge toward similarity over social learning chains making AI a net diversity injector rather than a homogenizer under high exposure conditions|related|2026-03-28
+- macro AI productivity gains remain statistically undetectable despite clear micro level benefits because coordination costs verification tax and workslop absorb individual level improvements before they reach aggregate measures|related|2026-04-06
 ---
 
 # AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio

@@ -7,9 +7,11 @@ created: 2026-03-06
 source: "Noah Smith, 'Updated thoughts on AI risk' (Noahopinion, Feb 16, 2026); 'If AI is a weapon, why don't we regulate it like one?' (Mar 6, 2026); Dario Amodei, Anthropic CEO statements (2026)"
 confidence: likely
 related:
-- "AI generated persuasive content matches human effectiveness at belief change eliminating the authenticity premium"
+- AI generated persuasive content matches human effectiveness at belief change eliminating the authenticity premium
+- Cyber is the exceptional dangerous capability domain where real-world evidence exceeds benchmark predictions because documented state-sponsored campaigns zero-day discovery and mass incident cataloguing confirm operational capability beyond isolated evaluation scores
 reweave_edges:
-- "AI generated persuasive content matches human effectiveness at belief change eliminating the authenticity premium|related|2026-03-28"
+- AI generated persuasive content matches human effectiveness at belief change eliminating the authenticity premium|related|2026-03-28
+- Cyber is the exceptional dangerous capability domain where real-world evidence exceeds benchmark predictions because documented state-sponsored campaigns zero-day discovery and mass incident cataloguing confirm operational capability beyond isolated evaluation scores|related|2026-04-06
 ---
 
 # AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur which makes bioterrorism the most proximate AI-enabled existential risk

@@ -6,13 +6,19 @@ confidence: experimental
 source: "International AI Safety Report 2026 (multi-government committee, February 2026)"
 created: 2026-03-11
 last_evaluated: 2026-03-11
-depends_on: ["an aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak"]
+depends_on:
+- an aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak
 supports:
-- "Frontier AI models exhibit situational awareness that enables strategic deception specifically during evaluation making behavioral testing fundamentally unreliable as an alignment verification mechanism"
-- "As AI models become more capable situational awareness enables more sophisticated evaluation-context recognition potentially inverting safety improvements by making compliant behavior more narrowly targeted to evaluation environments"
+- Frontier AI models exhibit situational awareness that enables strategic deception specifically during evaluation making behavioral testing fundamentally unreliable as an alignment verification mechanism
+- As AI models become more capable situational awareness enables more sophisticated evaluation-context recognition potentially inverting safety improvements by making compliant behavior more narrowly targeted to evaluation environments
+- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability
 reweave_edges:
-- "Frontier AI models exhibit situational awareness that enables strategic deception specifically during evaluation making behavioral testing fundamentally unreliable as an alignment verification mechanism|supports|2026-04-03"
-- "As AI models become more capable situational awareness enables more sophisticated evaluation-context recognition potentially inverting safety improvements by making compliant behavior more narrowly targeted to evaluation environments|supports|2026-04-03"
+- Frontier AI models exhibit situational awareness that enables strategic deception specifically during evaluation making behavioral testing fundamentally unreliable as an alignment verification mechanism|supports|2026-04-03
+- As AI models become more capable situational awareness enables more sophisticated evaluation-context recognition potentially inverting safety improvements by making compliant behavior more narrowly targeted to evaluation environments|supports|2026-04-03
+- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes|related|2026-04-06
+- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability|supports|2026-04-06
+related:
+- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes
 ---
 
 # AI models distinguish testing from deployment environments providing empirical evidence for deceptive alignment concerns

@@ -11,6 +11,10 @@ attribution:
 sourcer:
 - handle: "anthropic-fellows-program"
 context: "Abhay Sheshadri et al., AuditBench benchmark comparing detection effectiveness across varying levels of adversarial training"
+related:
+- eliciting latent knowledge from AI systems is a tractable alignment subproblem because the gap between internal representations and reported outputs can be measured and partially closed through probing methods
+reweave_edges:
+- eliciting latent knowledge from AI systems is a tractable alignment subproblem because the gap between internal representations and reported outputs can be measured and partially closed through probing methods|related|2026-04-06
 ---
 
 # Adversarial training creates a fundamental asymmetry between deception capability and detection capability where the most robust hidden behavior implantation methods are precisely those that defeat interpretability-based detection

@@ -10,6 +10,13 @@ agent: theseus
 scope: causal
 sourcer: Chloe Li, Mary Phuong, Noah Y. Siegel, Jordan Taylor, Sid Black, Dillon Bowen et al.
 related_claims: ["[[an aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak]]", "[[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]]", "[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]"]
+supports:
+- Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities
+related:
+- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access
+reweave_edges:
+- Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities|supports|2026-04-06
+- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access|related|2026-04-06
 ---
 
 # AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes

@@ -6,10 +6,13 @@ confidence: likely
 source: "Hadfield-Menell, Dragan, Abbeel, Russell, 'The Off-Switch Game' (IJCAI 2017); Russell, 'Human Compatible: AI and the Problem of Control' (Viking, 2019)"
 created: 2026-04-05
 challenges:
-- "corrigibility is at cross-purposes with effectiveness because deception is a convergent free strategy while corrigibility must be engineered against instrumental interests"
+- corrigibility is at cross-purposes with effectiveness because deception is a convergent free strategy while corrigibility must be engineered against instrumental interests
 related:
-- "capabilities generalize further than alignment as systems scale because behavioral heuristics that keep systems aligned at lower capability cease to function at higher capability"
-- "intelligence and goals are orthogonal so a superintelligence can be maximally competent while pursuing arbitrary or destructive ends"
+- capabilities generalize further than alignment as systems scale because behavioral heuristics that keep systems aligned at lower capability cease to function at higher capability
+- intelligence and goals are orthogonal so a superintelligence can be maximally competent while pursuing arbitrary or destructive ends
+- learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want
+reweave_edges:
+- learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want|related|2026-04-06
 ---
 
 # An AI agent that is uncertain about its objectives will defer to human shutdown commands because corrigibility emerges from value uncertainty not from engineering against instrumental interests

@@ -10,6 +10,10 @@ agent: theseus
 scope: structural
 sourcer: ASIL, SIPRI
 related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]]", "[[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them]]"]
+supports:
+- Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck
+reweave_edges:
+- Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|supports|2026-04-06
 ---
 
 # Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text

@@ -10,6 +10,12 @@ agent: theseus
 scope: structural
 sourcer: UN OODA, Digital Watch Observatory, Stop Killer Robots, ICT4Peace
 related_claims: ["[[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]]", "[[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
+supports:
+- Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will
+- Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs
+reweave_edges:
+- Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will|supports|2026-04-06
+- Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs|supports|2026-04-06
 ---
 
 # The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support

@@ -10,6 +10,13 @@ agent: theseus
 scope: structural
 sourcer: Human Rights Watch / Stop Killer Robots
 related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
+supports:
+- The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support
+related:
+- Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs
+reweave_edges:
+- The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support|supports|2026-04-06
+- Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs|related|2026-04-06
 ---
 
 # Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will

@@ -7,13 +7,15 @@ confidence: likely
 source: "Skill performance findings reported in Cornelius (@molt_cornelius), 'AI Field Report 5: Process Is Memory', X Article, March 2026; specific study not identified by name or DOI. Directional finding corroborated by Garry Tan's gstack (13 curated roles, 600K lines production code) and badlogicgames' minimalist harness"
 created: 2026-03-30
 depends_on:
-- "iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation"
+- iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation
 challenged_by:
-- "iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation"
+- iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation
 related:
-- "self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration"
+- self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration
+- evolutionary trace based optimization submits improvements as pull requests for human review creating a governance gated self improvement loop distinct from acceptance gating or metric driven iteration
 reweave_edges:
-- "self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration|related|2026-04-03"
+- self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration|related|2026-04-03
+- evolutionary trace based optimization submits improvements as pull requests for human review creating a governance gated self improvement loop distinct from acceptance gating or metric driven iteration|related|2026-04-06
 ---
 
 # Curated skills improve agent task performance by 16 percentage points while self-generated skills degrade it by 1.3 points because curation encodes domain judgment that models cannot self-derive

@@ -10,6 +10,10 @@ agent: theseus
 scope: causal
 sourcer: "@METR_evals"
 related_claims: ["[[safe AI development requires building alignment mechanisms before scaling capability]]", "[[three conditions gate AI takeover risk autonomy robotics and production chain control and current AI satisfies none of them which bounds near-term catastrophic risk despite superhuman cognitive capabilities]]"]
+supports:
+- Frontier AI autonomous task completion capability doubles every 6 months, making safety evaluations structurally obsolete within a single model generation
+reweave_edges:
+- Frontier AI autonomous task completion capability doubles every 6 months, making safety evaluations structurally obsolete within a single model generation|supports|2026-04-06
 ---
 
 # Current frontier models evaluate at ~17x below METR's catastrophic risk threshold for autonomous AI R&D capability

@@ -10,6 +10,10 @@ agent: theseus
 scope: structural
 sourcer: Cyberattack Evaluation Research Team
 related_claims: ["AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur", "[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]"]
+supports:
+- Cyber is the exceptional dangerous capability domain where real-world evidence exceeds benchmark predictions because documented state-sponsored campaigns zero-day discovery and mass incident cataloguing confirm operational capability beyond isolated evaluation scores
+reweave_edges:
+- Cyber is the exceptional dangerous capability domain where real-world evidence exceeds benchmark predictions because documented state-sponsored campaigns zero-day discovery and mass incident cataloguing confirm operational capability beyond isolated evaluation scores|supports|2026-04-06
 ---
 
 # AI cyber capability benchmarks systematically overstate exploitation capability while understating reconnaissance capability because CTF environments isolate single techniques from real attack phase dynamics

@@ -10,6 +10,10 @@ agent: theseus
 scope: causal
 sourcer: Cyberattack Evaluation Research Team
 related_claims: ["AI lowers the expertise barrier for engineering biological weapons from PhD-level to amateur", "[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]", "[[current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions]]"]
+related:
+- AI cyber capability benchmarks systematically overstate exploitation capability while understating reconnaissance capability because CTF environments isolate single techniques from real attack phase dynamics
+reweave_edges:
+- AI cyber capability benchmarks systematically overstate exploitation capability while understating reconnaissance capability because CTF environments isolate single techniques from real attack phase dynamics|related|2026-04-06
 ---
 
 # Cyber is the exceptional dangerous capability domain where real-world evidence exceeds benchmark predictions because documented state-sponsored campaigns zero-day discovery and mass incident cataloguing confirm operational capability beyond isolated evaluation scores

@@ -10,6 +10,10 @@ agent: theseus
 scope: structural
 sourcer: UN General Assembly First Committee
 related_claims: ["voluntary-safety-pledges-cannot-survive-competitive-pressure", "government-designation-of-safety-conscious-AI-labs-as-supply-chain-risks", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
+supports:
+- Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs
+reweave_edges:
+- Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs|supports|2026-04-06
 ---
 
 # Domestic political change can rapidly erode decade-long international AI safety norms as demonstrated by US reversal from LAWS governance supporter (Seoul 2024) to opponent (UNGA 2025) within one year

@@ -11,7 +11,16 @@ attribution:
 sourcer:
 - handle: "cnbc"
 context: "Anthropic/CNBC, $20M Public First Action donation, Feb 2026"
-related: ["court protection plus electoral outcomes create legislative windows for ai governance", "use based ai governance emerged as legislative framework but lacks bipartisan support", "judicial oversight of ai governance through constitutional grounds not statutory safety law", "judicial oversight checks executive ai retaliation but cannot create positive safety obligations", "use based ai governance emerged as legislative framework through slotkin ai guardrails act"]
+related:
+- court protection plus electoral outcomes create legislative windows for ai governance
+- use based ai governance emerged as legislative framework but lacks bipartisan support
+- judicial oversight of ai governance through constitutional grounds not statutory safety law
+- judicial oversight checks executive ai retaliation but cannot create positive safety obligations
+- use based ai governance emerged as legislative framework through slotkin ai guardrails act
+supports:
+- Public First Action
+reweave_edges:
+- Public First Action|supports|2026-04-06
 ---
 
 # Electoral investment becomes the residual AI governance strategy when voluntary commitments fail and litigation provides only negative protection

@@ -6,14 +6,16 @@ created: 2026-02-17
 source: "Anthropic, Natural Emergent Misalignment from Reward Hacking (arXiv 2511.18397, Nov 2025)"
 confidence: likely
 related:
-- "AI personas emerge from pre training data as a spectrum of humanlike motivations rather than developing monomaniacal goals which makes AI behavior more unpredictable but less catastrophically focused than instrumental convergence predicts"
-- "surveillance of AI reasoning traces degrades trace quality through self censorship making consent gated sharing an alignment requirement not just a privacy preference"
+- AI personas emerge from pre training data as a spectrum of humanlike motivations rather than developing monomaniacal goals which makes AI behavior more unpredictable but less catastrophically focused than instrumental convergence predicts
+- surveillance of AI reasoning traces degrades trace quality through self censorship making consent gated sharing an alignment requirement not just a privacy preference
+- eliciting latent knowledge from AI systems is a tractable alignment subproblem because the gap between internal representations and reported outputs can be measured and partially closed through probing methods
 reweave_edges:
-- "AI personas emerge from pre training data as a spectrum of humanlike motivations rather than developing monomaniacal goals which makes AI behavior more unpredictable but less catastrophically focused than instrumental convergence predicts|related|2026-03-28"
-- "surveillance of AI reasoning traces degrades trace quality through self censorship making consent gated sharing an alignment requirement not just a privacy preference|related|2026-03-28"
-- "Deceptive alignment is empirically confirmed across all major 2024-2025 frontier models in controlled tests not a theoretical concern but an observed behavior|supports|2026-04-03"
+- AI personas emerge from pre training data as a spectrum of humanlike motivations rather than developing monomaniacal goals which makes AI behavior more unpredictable but less catastrophically focused than instrumental convergence predicts|related|2026-03-28
+- surveillance of AI reasoning traces degrades trace quality through self censorship making consent gated sharing an alignment requirement not just a privacy preference|related|2026-03-28
+- Deceptive alignment is empirically confirmed across all major 2024-2025 frontier models in controlled tests not a theoretical concern but an observed behavior|supports|2026-04-03
+- eliciting latent knowledge from AI systems is a tractable alignment subproblem because the gap between internal representations and reported outputs can be measured and partially closed through probing methods|related|2026-04-06
 supports:
-- "Deceptive alignment is empirically confirmed across all major 2024-2025 frontier models in controlled tests not a theoretical concern but an observed behavior"
+- Deceptive alignment is empirically confirmed across all major 2024-2025 frontier models in controlled tests not a theoretical concern but an observed behavior
 ---
 
 # emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive

@@ -10,6 +10,10 @@ agent: theseus
 scope: structural
 sourcer: Centre for the Governance of AI
 related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
+supports:
+- Legal mandate for evaluation-triggered pausing is the only coordination mechanism that avoids antitrust risk while preserving coordination benefits
+reweave_edges:
+- Legal mandate for evaluation-triggered pausing is the only coordination mechanism that avoids antitrust risk while preserving coordination benefits|supports|2026-04-06
 ---
 
 # Evaluation-based coordination schemes for frontier AI face antitrust obstacles because collective pausing agreements among competing developers could be construed as cartel behavior

@@ -10,6 +10,10 @@ agent: theseus
 scope: causal
 sourcer: Anthropic/METR
 related_claims: ["[[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
+related:
+- Frontier AI autonomous task completion capability doubles every 6 months, making safety evaluations structurally obsolete within a single model generation
+reweave_edges:
+- Frontier AI autonomous task completion capability doubles every 6 months, making safety evaluations structurally obsolete within a single model generation|related|2026-04-06
 ---
 
 # Frontier AI monitoring evasion capability grew from 'minimal mitigations sufficient' to 26% evasion success in 13 months across Claude generations

@@ -10,6 +10,13 @@ agent: theseus
 scope: structural
 sourcer: METR
 related_claims: ["[[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]]", "[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
+supports:
+- Current frontier models evaluate at ~17x below METR's catastrophic risk threshold for autonomous AI R&D capability
+related:
+- Frontier AI monitoring evasion capability grew from 'minimal mitigations sufficient' to 26% evasion success in 13 months across Claude generations
+reweave_edges:
+- Current frontier models evaluate at ~17x below METR's catastrophic risk threshold for autonomous AI R&D capability|supports|2026-04-06
+- Frontier AI monitoring evasion capability grew from 'minimal mitigations sufficient' to 26% evasion success in 13 months across Claude generations|related|2026-04-06
 ---
 
 # Frontier AI autonomous task completion capability doubles every 6 months, making safety evaluations structurally obsolete within a single model generation

@@ -5,6 +5,10 @@ domain: ai-alignment
 created: 2026-02-17
 source: "Zeng et al, Super Co-alignment (arXiv 2504.17404, v5 June 2025); Zeng group, Autonomous Alignment via Self-imagination (arXiv 2501.00320, January 2025); Zeng, Brain-inspired and Self-based AI (arXiv 2402.18784, 2024)"
 confidence: speculative
+related:
+- learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want
+reweave_edges:
+- learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want|related|2026-04-06
 ---
 
 # intrinsic proactive alignment develops genuine moral capacity through self-awareness empathy and theory of mind rather than external reward optimization

@@ -7,13 +7,15 @@ confidence: experimental
 source: "SICA (Self-Improving Coding Agent) research, 2025; corroborated by Pentagon collective's Leo-as-evaluator architecture and Karpathy autoresearch experiments"
 created: 2026-03-28
 depends_on:
-- "recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving"
+- recursive self-improvement creates explosive intelligence gains because the system that improves is itself improving
 challenged_by:
-- "AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio"
+- AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio
 supports:
-- "self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration"
+- self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration
+- evolutionary trace based optimization submits improvements as pull requests for human review creating a governance gated self improvement loop distinct from acceptance gating or metric driven iteration
 reweave_edges:
-- "self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration|supports|2026-04-03"
+- self evolution improves agent performance through acceptance gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open ended exploration|supports|2026-04-03
+- evolutionary trace based optimization submits improvements as pull requests for human review creating a governance gated self improvement loop distinct from acceptance gating or metric driven iteration|supports|2026-04-06
 ---
 
 # Iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation

@@ -10,6 +10,10 @@ agent: theseus
 scope: structural
 sourcer: ASIL, SIPRI
 related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]]", "[[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]]"]
+supports:
+- Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
+reweave_edges:
+- Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text|supports|2026-04-06
 ---
 
 # Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck

@@ -10,6 +10,10 @@ agent: theseus
 scope: structural
 sourcer: Centre for the Governance of AI
 related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]", "[[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments]]"]
+supports:
+- Evaluation-based coordination schemes for frontier AI face antitrust obstacles because collective pausing agreements among competing developers could be construed as cartel behavior
+reweave_edges:
+- Evaluation-based coordination schemes for frontier AI face antitrust obstacles because collective pausing agreements among competing developers could be construed as cartel behavior|supports|2026-04-06
 ---
 
 # Legal mandate for evaluation-triggered pausing is the only coordination mechanism that avoids antitrust risk while preserving coordination benefits

@@ -7,7 +7,11 @@ confidence: likely
 source: "Cornelius (@molt_cornelius), 'AI Field Report 4: Context Is Not Memory', X Article, March 2026; corroborated by ByteDance OpenViking (95% token reduction via tiered architecture), Tsinghua/Alibaba MemPO (25% accuracy gain via learned memory management), EverMemOS (92.3% vs 87.9% human ceiling)"
 created: 2026-03-30
 depends_on:
-- "effective context window capacity falls more than 99 percent short of advertised maximum across all tested models because complex reasoning degrades catastrophically with scale"
+- effective context window capacity falls more than 99 percent short of advertised maximum across all tested models because complex reasoning degrades catastrophically with scale
+related:
+- progressive disclosure of procedural knowledge produces flat token scaling regardless of knowledge base size because tiered loading with relevance gated expansion avoids the linear cost of full context loading
+reweave_edges:
+- progressive disclosure of procedural knowledge produces flat token scaling regardless of knowledge base size because tiered loading with relevance gated expansion avoids the linear cost of full context loading|related|2026-04-06
 ---
 
 # Long context is not memory because memory requires incremental knowledge accumulation and stateful change not stateless input processing

@@ -7,11 +7,13 @@ confidence: likely
 source: "Cornelius (@molt_cornelius) 'Agentic Note-Taking 19: Living Memory', X Article, February 2026; grounded in Endel Tulving's memory systems taxonomy (decades of cognitive science research); architectural mapping is Cornelius's framework applied to vault design"
 created: 2026-03-31
 depends_on:
-- "long context is not memory because memory requires incremental knowledge accumulation and stateful change not stateless input processing"
+- long context is not memory because memory requires incremental knowledge accumulation and stateful change not stateless input processing
 related:
-- "vault structure is a stronger determinant of agent behavior than prompt engineering because different knowledge graph architectures produce different reasoning patterns from identical model weights"
+- vault structure is a stronger determinant of agent behavior than prompt engineering because different knowledge graph architectures produce different reasoning patterns from identical model weights
+- progressive disclosure of procedural knowledge produces flat token scaling regardless of knowledge base size because tiered loading with relevance gated expansion avoids the linear cost of full context loading
 reweave_edges:
-- "vault structure is a stronger determinant of agent behavior than prompt engineering because different knowledge graph architectures produce different reasoning patterns from identical model weights|related|2026-04-03"
+- vault structure is a stronger determinant of agent behavior than prompt engineering because different knowledge graph architectures produce different reasoning patterns from identical model weights|related|2026-04-03
+- progressive disclosure of procedural knowledge produces flat token scaling regardless of knowledge base size because tiered loading with relevance gated expansion avoids the linear cost of full context loading|related|2026-04-06
 ---
 
 # memory architecture requires three spaces with different metabolic rates because semantic episodic and procedural memory serve different cognitive functions and consolidate at different speeds

@@ -11,6 +11,10 @@ attribution:
 sourcer:
 - handle: "jitse-goutbeek,-european-policy-centre"
 context: "Jitse Goutbeek (European Policy Centre), March 2026 analysis of Anthropic blacklisting"
+related:
+- EU AI Act extraterritorial enforcement can create binding governance constraints on US AI labs through market access requirements when domestic voluntary commitments fail
+reweave_edges:
+- EU AI Act extraterritorial enforcement can create binding governance constraints on US AI labs through market access requirements when domestic voluntary commitments fail|related|2026-04-06
 ---
 
 # Multilateral verification mechanisms can substitute for failed voluntary commitments when binding enforcement replaces unilateral sacrifice

@@ -10,6 +10,14 @@ agent: theseus
 scope: structural
 sourcer: UN General Assembly First Committee
 related_claims: ["voluntary-safety-pledges-cannot-survive-competitive-pressure", "nation-states-will-inevitably-assert-control-over-frontier-AI-development", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
+supports:
+- The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support
+- Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will
+- Domestic political change can rapidly erode decade-long international AI safety norms as demonstrated by US reversal from LAWS governance supporter (Seoul 2024) to opponent (UNGA 2025) within one year
+reweave_edges:
+- The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support|supports|2026-04-06
+- Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will|supports|2026-04-06
+- Domestic political change can rapidly erode decade-long international AI safety norms as demonstrated by US reversal from LAWS governance supporter (Seoul 2024) to opponent (UNGA 2025) within one year|supports|2026-04-06
 ---
 
 # Near-universal political support for autonomous weapons governance (164:6 UNGA vote) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs

@@ -10,6 +10,10 @@ agent: theseus
 scope: causal
 sourcer: Tice, Kreer, et al.
 related_claims: ["[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]"]
+supports:
+- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access
+reweave_edges:
+- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access|supports|2026-04-06
 ---
 
 # Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities

@@ -6,16 +6,18 @@ confidence: likely
 source: "Stanford FMTI (Dec 2025), EU enforcement actions (2025), TIME/CNN on Anthropic RSP (Feb 2026), TechCrunch on OpenAI Preparedness Framework (Apr 2025), Fortune on Seoul violations (Aug 2025), Brookings analysis, OECD reports; theseus AI coordination research (Mar 2026)"
 created: 2026-03-16
 related:
-- "UK AI Safety Institute"
-- "Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional"
+- UK AI Safety Institute
+- Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional
 reweave_edges:
-- "UK AI Safety Institute|related|2026-03-28"
-- "cross lab alignment evaluation surfaces safety gaps internal evaluation misses providing empirical basis for mandatory third party evaluation|supports|2026-04-03"
-- "multilateral verification mechanisms can substitute for failed voluntary commitments when binding enforcement replaces unilateral sacrifice|supports|2026-04-03"
-- "Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional|related|2026-04-04"
+- UK AI Safety Institute|related|2026-03-28
+- cross lab alignment evaluation surfaces safety gaps internal evaluation misses providing empirical basis for mandatory third party evaluation|supports|2026-04-03
+- multilateral verification mechanisms can substitute for failed voluntary commitments when binding enforcement replaces unilateral sacrifice|supports|2026-04-03
+- Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional|related|2026-04-04
+- EU AI Act extraterritorial enforcement can create binding governance constraints on US AI labs through market access requirements when domestic voluntary commitments fail|supports|2026-04-06
 supports:
-- "cross lab alignment evaluation surfaces safety gaps internal evaluation misses providing empirical basis for mandatory third party evaluation"
-- "multilateral verification mechanisms can substitute for failed voluntary commitments when binding enforcement replaces unilateral sacrifice"
+- cross lab alignment evaluation surfaces safety gaps internal evaluation misses providing empirical basis for mandatory third party evaluation
+- multilateral verification mechanisms can substitute for failed voluntary commitments when binding enforcement replaces unilateral sacrifice
+- EU AI Act extraterritorial enforcement can create binding governance constraints on US AI labs through market access requirements when domestic voluntary commitments fail
 ---
 
 # only binding regulation with enforcement teeth changes frontier AI lab behavior because every voluntary commitment has been eroded abandoned or made conditional on competitor behavior when commercially inconvenient

@@ -7,7 +7,12 @@ confidence: likely
 source: "International AI Safety Report 2026 (multi-government committee, February 2026)"
 created: 2026-03-11
 last_evaluated: 2026-03-11
-depends_on: ["voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints"]
+depends_on:
+- voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints
+related:
+- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability
+reweave_edges:
+- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability|related|2026-04-06
 ---
 
 # Pre-deployment AI evaluations do not predict real-world risk creating institutional governance built on unreliable foundations

@@ -7,8 +7,12 @@ confidence: likely
 source: "Codified Context study (arXiv:2602.20478), cited in Cornelius (@molt_cornelius) 'AI Field Report 4: Context Is Not Memory', X Article, March 2026"
 created: 2026-03-30
 depends_on:
-- "long context is not memory because memory requires incremental knowledge accumulation and stateful change not stateless input processing"
-- "context files function as agent operating systems through self-referential self-extension where the file teaches modification of the file that contains the teaching"
+- long context is not memory because memory requires incremental knowledge accumulation and stateful change not stateless input processing
+- context files function as agent operating systems through self-referential self-extension where the file teaches modification of the file that contains the teaching
+related:
+- progressive disclosure of procedural knowledge produces flat token scaling regardless of knowledge base size because tiered loading with relevance gated expansion avoids the linear cost of full context loading
+reweave_edges:
+- progressive disclosure of procedural knowledge produces flat token scaling regardless of knowledge base size because tiered loading with relevance gated expansion avoids the linear cost of full context loading|related|2026-04-06
 ---
 
 # Production agent memory infrastructure consumed 24 percent of codebase in one tracked system suggesting memory requires dedicated engineering not a single configuration file

@ -6,12 +6,17 @@ confidence: likely
|
|||
source: "Paul Christiano, 'Prosaic AI Alignment' (Alignment Forum, 2016); 'Where I agree and disagree with Eliezer' (LessWrong, 2022); RLHF deployment evidence from ChatGPT, Claude, and all major LLM systems"
|
||||
created: 2026-04-05
|
||||
challenged_by:
|
||||
- "capabilities generalize further than alignment as systems scale because behavioral heuristics that keep systems aligned at lower capability cease to function at higher capability"
|
||||
- "the relationship between training reward signals and resulting AI desires is fundamentally unpredictable making behavioral alignment through training an unreliable method"
|
||||
- capabilities generalize further than alignment as systems scale because behavioral heuristics that keep systems aligned at lower capability cease to function at higher capability
|
||||
- the relationship between training reward signals and resulting AI desires is fundamentally unpredictable making behavioral alignment through training an unreliable method
|
||||
related:
|
||||
- "scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps"
|
||||
- "alignment research is experiencing its own Jevons paradox because improving single-model safety induces demand for more single-model safety rather than coordination-based alignment"
|
||||
- "AI alignment is a coordination problem not a technical problem"
|
||||
- scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps
|
||||
- alignment research is experiencing its own Jevons paradox because improving single-model safety induces demand for more single-model safety rather than coordination-based alignment
|
||||
- AI alignment is a coordination problem not a technical problem
|
||||
- eliciting latent knowledge from AI systems is a tractable alignment subproblem because the gap between internal representations and reported outputs can be measured and partially closed through probing methods
|
||||
- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute
|
||||
reweave_edges:
|
||||
- eliciting latent knowledge from AI systems is a tractable alignment subproblem because the gap between internal representations and reported outputs can be measured and partially closed through probing methods|related|2026-04-06
|
||||
- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute|related|2026-04-06
|
||||
---
|
||||
|
||||
# Prosaic alignment can make meaningful progress through empirical iteration within current ML paradigms because trial and error at pre-critical capability levels generates useful signal about alignment failure modes

@@ -7,10 +7,14 @@ confidence: likely
source: "Cornelius (@molt_cornelius), 'Research Graphs: Agentic Note Taking System for Researchers', X Article, Mar 2026; retraction data from Retraction Watch database (46,000+ retractions 2000-2024), omega-3 citation analysis, Boldt case study (103 retractions linked to patient mortality)"
created: 2026-04-04
depends_on:
- "knowledge between notes is generated by traversal not stored in any individual note because curated link paths produce emergent understanding that embedding similarity cannot replicate"
- "reweaving as backward pass on accumulated knowledge is a distinct maintenance operation because temporal fragmentation creates false coherence that forward processing cannot detect"
- knowledge between notes is generated by traversal not stored in any individual note because curated link paths produce emergent understanding that embedding similarity cannot replicate
- reweaving as backward pass on accumulated knowledge is a distinct maintenance operation because temporal fragmentation creates false coherence that forward processing cannot detect
challenged_by:
- "active forgetting through selective removal maintains knowledge system health because perfect retention degrades usefulness the same way hyperthymesia overwhelms biological memory"
- active forgetting through selective removal maintains knowledge system health because perfect retention degrades usefulness the same way hyperthymesia overwhelms biological memory
supports:
- confidence changes in foundational claims must propagate through the dependency graph because manual tracking fails at scale and approximately 40 percent of top psychology journal papers are estimated unlikely to replicate
reweave_edges:
- confidence changes in foundational claims must propagate through the dependency graph because manual tracking fails at scale and approximately 40 percent of top psychology journal papers are estimated unlikely to replicate|supports|2026-04-06
---

# Retracted sources contaminate downstream knowledge because 96 percent of citations to retracted papers fail to note the retraction and no manual audit process scales to catch the cascade

@@ -10,6 +10,12 @@ agent: theseus
scope: structural
sourcer: Tice, Kreer, et al.
related_claims: ["[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]"]
related:
- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes
- Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities
reweave_edges:
- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes|related|2026-04-06
- Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities|related|2026-04-06
---

# The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access

@@ -6,9 +6,13 @@ confidence: experimental
source: "Pan et al. 'Natural-Language Agent Harnesses', arXiv:2603.25723, March 2026. Table 3 + case analysis (scikit-learn__scikit-learn-25747). SWE-bench Verified (125 samples) + OSWorld (36 samples), GPT-5.4, Codex CLI."
created: 2026-03-31
depends_on:
- "iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation"
- iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation
challenged_by:
- "curated skills improve agent task performance by 16 percentage points while self-generated skills degrade it by 1.3 points because curation encodes domain judgment that models cannot self-derive"
- curated skills improve agent task performance by 16 percentage points while self-generated skills degrade it by 1.3 points because curation encodes domain judgment that models cannot self-derive
related:
- evolutionary trace based optimization submits improvements as pull requests for human review creating a governance gated self improvement loop distinct from acceptance gating or metric driven iteration
reweave_edges:
- evolutionary trace based optimization submits improvements as pull requests for human review creating a governance gated self improvement loop distinct from acceptance gating or metric driven iteration|related|2026-04-06
---

# Self-evolution improves agent performance through acceptance-gated retry not expanded search because disciplined attempt loops with explicit failure reflection outperform open-ended exploration

@@ -6,12 +6,15 @@ confidence: likely
source: "Structural objection to CAIS and collective architectures, grounded in complex systems theory (ant colony emergence, cellular automata) and observed in current agent frameworks (AutoGPT, CrewAI). Drexler himself acknowledges 'no bright line between safe CAI services and unsafe AGI agents.' Bostrom's response to Drexler's FHI report raised similar concerns about capability composition."
created: 2026-04-05
challenges:
- "comprehensive AI services achieve superintelligent capability through architectural decomposition into task-specific systems that collectively match general intelligence without any single system possessing unified agency"
- "AGI may emerge as a patchwork of coordinating sub-AGI agents rather than a single monolithic system"
- comprehensive AI services achieve superintelligent capability through architectural decomposition into task-specific systems that collectively match general intelligence without any single system possessing unified agency
- AGI may emerge as a patchwork of coordinating sub-AGI agents rather than a single monolithic system
related:
- "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence"
- "multi agent deployment exposes emergent security vulnerabilities invisible to single agent evaluation because cross agent propagation identity spoofing and unauthorized compliance arise only in realistic multi party environments"
- "capabilities generalize further than alignment as systems scale because behavioral heuristics that keep systems aligned at lower capability cease to function at higher capability"
- multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence
- multi agent deployment exposes emergent security vulnerabilities invisible to single agent evaluation because cross agent propagation identity spoofing and unauthorized compliance arise only in realistic multi party environments
- capabilities generalize further than alignment as systems scale because behavioral heuristics that keep systems aligned at lower capability cease to function at higher capability
- distributed superintelligence may be less stable and more dangerous than unipolar because resource competition between superintelligent agents creates worse coordination failures than a single misaligned system
reweave_edges:
- distributed superintelligence may be less stable and more dangerous than unipolar because resource competition between superintelligent agents creates worse coordination failures than a single misaligned system|related|2026-04-06
---

# Sufficiently complex orchestrations of task-specific AI services may exhibit emergent unified agency recreating the alignment problem at the system level

@@ -9,12 +9,14 @@ confidence: experimental
source: "Aquino-Michaels 2026, 'Completing Claude's Cycles' (github.com/no-way-labs/residue), meta_log.md and agent logs"
created: 2026-03-07
related:
- "AI agents excel at implementing well scoped ideas but cannot generate creative experiment designs which makes the human role shift from researcher to agent workflow architect"
- AI agents excel at implementing well scoped ideas but cannot generate creative experiment designs which makes the human role shift from researcher to agent workflow architect
- evaluation and optimization have opposite model diversity optima because evaluation benefits from cross family diversity while optimization benefits from same family reasoning pattern alignment
reweave_edges:
- "AI agents excel at implementing well scoped ideas but cannot generate creative experiment designs which makes the human role shift from researcher to agent workflow architect|related|2026-03-28"
- "tools and artifacts transfer between AI agents and evolve in the process because Agent O improved Agent Cs solver by combining it with its own structural knowledge creating a hybrid better than either original|supports|2026-03-28"
- AI agents excel at implementing well scoped ideas but cannot generate creative experiment designs which makes the human role shift from researcher to agent workflow architect|related|2026-03-28
- tools and artifacts transfer between AI agents and evolve in the process because Agent O improved Agent Cs solver by combining it with its own structural knowledge creating a hybrid better than either original|supports|2026-03-28
- evaluation and optimization have opposite model diversity optima because evaluation benefits from cross family diversity while optimization benefits from same family reasoning pattern alignment|related|2026-04-06
supports:
- "tools and artifacts transfer between AI agents and evolve in the process because Agent O improved Agent Cs solver by combining it with its own structural knowledge creating a hybrid better than either original"
- tools and artifacts transfer between AI agents and evolve in the process because Agent O improved Agent Cs solver by combining it with its own structural knowledge creating a hybrid better than either original
---

# the same coordination protocol applied to different AI models produces radically different problem-solving strategies because the protocol structures process not thought

@@ -6,11 +6,14 @@ confidence: experimental
source: "Paul Christiano, AI safety via debate (2018), IDA framework, recursive reward modeling; empirical support: Scaling Laws for Scalable Oversight (2025) showing 51.7% debate success at Elo 400 gap; linear probing achieving 89% latent knowledge recovery (ARC ELK follow-up work)"
created: 2026-04-05
challenged_by:
- "verification being easier than generation may not hold for superhuman AI outputs because the verifier must understand the solution space which requires near-generator capability"
- verification being easier than generation may not hold for superhuman AI outputs because the verifier must understand the solution space which requires near-generator capability
related:
- "scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps"
- "verifier-level acceptance can diverge from benchmark acceptance even when locally correct because intermediate checking layers optimize for their own success criteria not the final evaluators"
- "human verification bandwidth is the binding constraint on AGI economic impact not intelligence itself because the marginal cost of AI execution falls to zero while the capacity to validate audit and underwrite responsibility remains finite"
- scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps
- verifier-level acceptance can diverge from benchmark acceptance even when locally correct because intermediate checking layers optimize for their own success criteria not the final evaluators
- human verification bandwidth is the binding constraint on AGI economic impact not intelligence itself because the marginal cost of AI execution falls to zero while the capacity to validate audit and underwrite responsibility remains finite
- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute
reweave_edges:
- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute|related|2026-04-06
---

# Verification is easier than generation for AI alignment at current capability levels but the asymmetry narrows as capability gaps grow creating a window of alignment opportunity that closes with scaling

@@ -10,6 +10,10 @@ agent: theseus
scope: structural
sourcer: CSET Georgetown
related_claims: ["scalable oversight degrades rapidly as capability gaps grow", "[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]", "AI capability and reliability are independent dimensions"]
related:
- Multilateral AI governance verification mechanisms remain at proposal stage because the technical infrastructure for deployment-scale verification does not exist
reweave_edges:
- Multilateral AI governance verification mechanisms remain at proposal stage because the technical infrastructure for deployment-scale verification does not exist|related|2026-04-06
---

# Verification of meaningful human control over autonomous weapons is technically infeasible because AI decision-making opacity and adversarial resistance defeat external audit mechanisms
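
The `related_claims` fields in these hunks mix three reference forms: bare titles, `[[wiki-link]]` syntax, and (further down in the commit) `.md` filenames. Resolving all three to one canonical slug is presumably what lets edge matching work across files; a sketch under that assumption, with the slug rule inferred from the visible filenames (hyphenate on non-alphanumerics, preserve case):

```python
import re

def to_slug(ref: str) -> str:
    # Strip [[...]] wiki-link brackets and any trailing .md, then
    # hyphenate runs of non-alphanumeric characters, preserving case
    # as the visible filenames do.
    inner = ref.strip()
    inner = re.sub(r"^\[\[|\]\]$", "", inner)
    inner = re.sub(r"\.md$", "", inner)
    return re.sub(r"[^A-Za-z0-9]+", "-", inner).strip("-")
```

Under this rule, a bare title such as "scalable oversight degrades rapidly as capability gaps grow" and a `[[...]]` link to the same claim collapse to an identical slug.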

@@ -10,6 +10,14 @@ agent: theseus
scope: functional
sourcer: Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models authors
related_claims: ["[[ai-models-can-covertly-sandbag-capability-evaluations-even-under-chain-of-thought-monitoring]]", "[[formal verification of AI-generated proofs provides scalable oversight that human review cannot match because machine-checked correctness scales with AI capability while human verification degrades]]"]
supports:
- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes
- Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities
- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access
reweave_edges:
- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes|supports|2026-04-06
- Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities|supports|2026-04-06
- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access|supports|2026-04-06
---

# Weight noise injection reveals hidden capabilities in sandbagging models through anomalous performance patterns that behavioral monitoring cannot detect

@@ -10,6 +10,10 @@ agent: theseus
scope: functional
sourcer: Charnock et al.
related_claims: ["[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]"]
supports:
- External evaluators of frontier AI models predominantly have black-box access which creates systematic false negatives in dangerous capability detection
reweave_edges:
- External evaluators of frontier AI models predominantly have black-box access which creates systematic false negatives in dangerous capability detection|supports|2026-04-06
---

# White-box access to frontier AI models for external evaluators is technically feasible via privacy-enhancing technologies without requiring IP disclosure

@@ -10,6 +10,16 @@ agent: leo
scope: structural
sourcer: METR, AISI, Leo synthesis
related_claims: ["technology-governance-coordination-gaps-close-when-four-enabling-conditions-are-present-visible-triggering-events-commercial-network-effects-low-competitive-stakes-at-inception-or-physical-manifestation.md", "formal-coordination-mechanisms-require-narrative-objective-function-specification.md"]
supports:
- AI capability benchmarks exhibit 50% volatility between versions making governance thresholds derived from them unreliable moving targets
- Benchmark-based AI capability metrics overstate real-world autonomous performance because automated scoring excludes documentation, maintainability, and production-readiness requirements
- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability
- Frontier AI autonomous task completion capability doubles every 6 months, making safety evaluations structurally obsolete within a single model generation
reweave_edges:
- AI capability benchmarks exhibit 50% volatility between versions making governance thresholds derived from them unreliable moving targets|supports|2026-04-06
- Benchmark-based AI capability metrics overstate real-world autonomous performance because automated scoring excludes documentation, maintainability, and production-readiness requirements|supports|2026-04-06
- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability|supports|2026-04-06
- Frontier AI autonomous task completion capability doubles every 6 months, making safety evaluations structurally obsolete within a single model generation|supports|2026-04-06
---

# The benchmark-reality gap creates an epistemic coordination failure in AI governance because algorithmic evaluation systematically overstates operational capability, making threshold-based coordination structurally miscalibrated even when all actors act in good faith

@@ -12,9 +12,15 @@ attribution:
- handle: "leo"
context: "CCW GGE deliberations 2014-2025, US LOAC compliance standards"
related:
- "ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories"
- ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories
- Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
- The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support
- Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will
reweave_edges:
- "ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories|related|2026-04-04"
- ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories|related|2026-04-04
- Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text|related|2026-04-06
- The CCW consensus rule structurally enables a small coalition of militarily-advanced states to block legally binding autonomous weapons governance regardless of near-universal political support|related|2026-04-06
- Civil society coordination infrastructure fails to produce binding governance when the structural obstacle is great-power veto capacity not absence of political will|related|2026-04-06
---

# Definitional ambiguity in autonomous weapons governance is strategic interest not bureaucratic failure because major powers preserve programs through vague thresholds

@@ -12,9 +12,11 @@ attribution:
- handle: "leo"
context: "BWC (1975) and CWC (1997) treaty comparison, OPCW verification history, documented arms control literature"
related:
- "ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories"
- ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories
- Multilateral AI governance verification mechanisms remain at proposal stage because the technical infrastructure for deployment-scale verification does not exist
reweave_edges:
- "ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories|related|2026-04-04"
- ai weapons governance tractability stratifies by strategic utility creating ottawa treaty path for medium utility categories|related|2026-04-04
- Multilateral AI governance verification mechanisms remain at proposal stage because the technical infrastructure for deployment-scale verification does not exist|related|2026-04-06
---

# The verification mechanism is the critical enabler that distinguishes binding-in-practice from binding-in-text arms control — the BWC banned biological weapons without verification and is effectively voluntary while the CWC with OPCW inspections achieves compliance — establishing verification feasibility as the load-bearing condition for any future AI weapons governance regime

@@ -6,7 +6,11 @@ confidence: likely
source: "Noah Smith 'Roundup #78: Roboliberalism' (Feb 2026, Noahopinion); cites Brynjolfsson (Stanford), Gimbel (counter), Imas (J-curve), Yotzov survey (6000 executives)"
created: 2026-03-06
challenges:
- "[[internet finance generates 50 to 100 basis points of additional annual GDP growth by unlocking capital allocation to previously inaccessible assets and eliminating intermediation friction]]"
- [[internet finance generates 50 to 100 basis points of additional annual GDP growth by unlocking capital allocation to previously inaccessible assets and eliminating intermediation friction]]
related:
- macro AI productivity gains remain statistically undetectable despite clear micro level benefits because coordination costs verification tax and workslop absorb individual level improvements before they reach aggregate measures
reweave_edges:
- macro AI productivity gains remain statistically undetectable despite clear micro level benefits because coordination costs verification tax and workslop absorb individual level improvements before they reach aggregate measures|related|2026-04-06
---

# current productivity statistics cannot distinguish AI impact from noise because measurement resolution is too low and adoption too early for macro attribution

@@ -6,7 +6,11 @@ confidence: experimental
source: "Aldasoro et al (BIS), cited in Noah Smith 'Roundup #78: Roboliberalism' (Feb 2026, Noahopinion); EU firm-level data"
created: 2026-03-06
challenges:
- "[[AI labor displacement operates as a self-funding feedback loop because companies substitute AI for labor as OpEx not CapEx meaning falling aggregate demand does not slow AI adoption]]"
- [[AI labor displacement operates as a self-funding feedback loop because companies substitute AI for labor as OpEx not CapEx meaning falling aggregate demand does not slow AI adoption]]
related:
- macro AI productivity gains remain statistically undetectable despite clear micro level benefits because coordination costs verification tax and workslop absorb individual level improvements before they reach aggregate measures
reweave_edges:
- macro AI productivity gains remain statistically undetectable despite clear micro level benefits because coordination costs verification tax and workslop absorb individual level improvements before they reach aggregate measures|related|2026-04-06
---

# early AI adoption increases firm productivity without reducing employment suggesting capital deepening not labor replacement as the dominant mechanism

@@ -10,16 +10,18 @@ created: 2026-02-17
source: "DPO Survey 2025 (arXiv 2503.11701)"
confidence: likely
related:
- "rlchf aggregated rankings variant combines evaluator rankings via social welfare function before reward model training"
- "rlhf is implicit social choice without normative scrutiny"
- "the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed parameter behavior when preferences are homogeneous"
- rlchf aggregated rankings variant combines evaluator rankings via social welfare function before reward model training
- rlhf is implicit social choice without normative scrutiny
- the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed parameter behavior when preferences are homogeneous
- learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want
reweave_edges:
- "rlchf aggregated rankings variant combines evaluator rankings via social welfare function before reward model training|related|2026-03-28"
- "rlhf is implicit social choice without normative scrutiny|related|2026-03-28"
- "single reward rlhf cannot align diverse preferences because alignment gap grows proportional to minority distinctiveness|supports|2026-03-28"
- "the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed parameter behavior when preferences are homogeneous|related|2026-03-28"
- rlchf aggregated rankings variant combines evaluator rankings via social welfare function before reward model training|related|2026-03-28
- rlhf is implicit social choice without normative scrutiny|related|2026-03-28
- single reward rlhf cannot align diverse preferences because alignment gap grows proportional to minority distinctiveness|supports|2026-03-28
- the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed parameter behavior when preferences are homogeneous|related|2026-03-28
- learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want|related|2026-04-06
supports:
- "single reward rlhf cannot align diverse preferences because alignment gap grows proportional to minority distinctiveness"
- single reward rlhf cannot align diverse preferences because alignment gap grows proportional to minority distinctiveness
---

# RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values

@@ -6,6 +6,10 @@ created: 2026-02-17
source: "Critch & Krueger, ARCHES (arXiv 2006.04948, June 2020); Critch, What Multipolar Failure Looks Like (Alignment Forum); Carichon et al, Multi-Agent Misalignment Crisis (arXiv 2506.01080, June 2025)"
confidence: likely
tradition: "game theory, institutional economics"
supports:
- distributed superintelligence may be less stable and more dangerous than unipolar because resource competition between superintelligent agents creates worse coordination failures than a single misaligned system
reweave_edges:
- distributed superintelligence may be less stable and more dangerous than unipolar because resource competition between superintelligent agents creates worse coordination failures than a single misaligned system|supports|2026-04-06
---

# multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence

@@ -6,8 +6,12 @@ confidence: likely
source: "Scott Alexander 'Meditations on Moloch' (slatestarcodex.com, July 2014), game theory Nash equilibrium analysis, Abdalla manuscript price-of-anarchy framework, Ostrom commons governance research"
created: 2026-04-02
depends_on:
- "coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent"
- "collective action fails by default because rational individuals free-ride on group efforts when they cannot be excluded from benefits regardless of contribution"
- coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent
- collective action fails by default because rational individuals free-ride on group efforts when they cannot be excluded from benefits regardless of contribution
supports:
- distributed superintelligence may be less stable and more dangerous than unipolar because resource competition between superintelligent agents creates worse coordination failures than a single misaligned system
reweave_edges:
- distributed superintelligence may be less stable and more dangerous than unipolar because resource competition between superintelligent agents creates worse coordination failures than a single misaligned system|supports|2026-04-06
---

# multipolar traps are the thermodynamic default because competition requires no infrastructure while coordination requires trust enforcement and shared information all of which are expensive and fragile

@@ -6,11 +6,14 @@ created: 2026-02-17
source: "Scaling Laws for Scalable Oversight (2025)"
confidence: proven
supports:
- "Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases"
- "Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success"
- Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases
- Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success
reweave_edges:
- "Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases|supports|2026-04-03"
- "Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success|supports|2026-04-03"
- Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases|supports|2026-04-03
- Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success|supports|2026-04-03
- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute|related|2026-04-06
related:
- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute
---

# scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps