reciprocal edges: 2 edges from 1 new claim
parent 376a2113d4
commit d750b98a69
2 changed files with 42 additions and 6 deletions
@@ -5,8 +5,31 @@ description: Getting AI right requires simultaneous alignment across competing c
 confidence: likely
 source: TeleoHumanity Manifesto, Chapter 5
 created: 2026-02-16
-related: ["AI agents as personal advocates collapse Coasean transaction costs enabling bottom-up coordination at societal scale but catastrophic risks remain non-negotiable requiring state enforcement as outer boundary", "AI agents can reach cooperative program equilibria inaccessible in traditional game theory because open-source code transparency enables conditional strategies that require mutual legibility", "AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for", "AI talent circulation between frontier labs transfers alignment culture not just capability because researchers carry safety methodologies and institutional norms to their new organizations", "transparent algorithmic governance where AI response rules are public and challengeable through the same epistemic process as the knowledge base is a structurally novel alignment approach", "the absence of a societal warning signal for AGI is a structural feature not an accident because capability scaling is gradual and ambiguous and collective action requires anticipation not reaction", "autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment", "multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale", "evaluation-based-coordination-schemes-face-antitrust-obstacles-because-collective-pausing-agreements-among-competing-developers-could-be-construed-as-cartel-behavior", "international-humanitarian-law-and-ai-alignment-converge-on-explainability-requirements", "civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will", "legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits", "AI alignment is a coordination problem not a technical problem", "no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it", "legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility", "a misaligned context cannot develop aligned AI because the competitive dynamics building AI optimize for deployment speed not safety making system alignment prerequisite for AI alignment"]
-reweave_edges: ["AI agents as personal advocates collapse Coasean transaction costs enabling bottom-up coordination at societal scale but catastrophic risks remain non-negotiable requiring state enforcement as outer boundary|related|2026-03-28", "AI agents can reach cooperative program equilibria inaccessible in traditional game theory because open-source code transparency enables conditional strategies that require mutual legibility|related|2026-03-28", "AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for|related|2026-03-28", "AI talent circulation between frontier labs transfers alignment culture not just capability because researchers carry safety methodologies and institutional norms to their new organizations|related|2026-03-28", "transparent algorithmic governance where AI response rules are public and challengeable through the same epistemic process as the knowledge base is a structurally novel alignment approach|related|2026-03-28", "the absence of a societal warning signal for AGI is a structural feature not an accident because capability scaling is gradual and ambiguous and collective action requires anticipation not reaction|related|2026-04-07"]
+related:
+- AI agents as personal advocates collapse Coasean transaction costs enabling bottom-up coordination at societal scale but catastrophic risks remain non-negotiable requiring state enforcement as outer boundary
+- AI agents can reach cooperative program equilibria inaccessible in traditional game theory because open-source code transparency enables conditional strategies that require mutual legibility
+- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for
+- AI talent circulation between frontier labs transfers alignment culture not just capability because researchers carry safety methodologies and institutional norms to their new organizations
+- transparent algorithmic governance where AI response rules are public and challengeable through the same epistemic process as the knowledge base is a structurally novel alignment approach
+- the absence of a societal warning signal for AGI is a structural feature not an accident because capability scaling is gradual and ambiguous and collective action requires anticipation not reaction
+- autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment
+- multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale
+- evaluation-based-coordination-schemes-face-antitrust-obstacles-because-collective-pausing-agreements-among-competing-developers-could-be-construed-as-cartel-behavior
+- international-humanitarian-law-and-ai-alignment-converge-on-explainability-requirements
+- civil-society-coordination-infrastructure-fails-to-produce-binding-governance-when-structural-obstacle-is-great-power-veto-not-political-will
+- legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits
+- AI alignment is a coordination problem not a technical problem
+- no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it
+- legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility
+- a misaligned context cannot develop aligned AI because the competitive dynamics building AI optimize for deployment speed not safety making system alignment prerequisite for AI alignment
+- emergency-exceptionalism-makes-all-ai-constraint-systems-contingent
+reweave_edges:
+- AI agents as personal advocates collapse Coasean transaction costs enabling bottom-up coordination at societal scale but catastrophic risks remain non-negotiable requiring state enforcement as outer boundary|related|2026-03-28
+- AI agents can reach cooperative program equilibria inaccessible in traditional game theory because open-source code transparency enables conditional strategies that require mutual legibility|related|2026-03-28
+- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for|related|2026-03-28
+- AI talent circulation between frontier labs transfers alignment culture not just capability because researchers carry safety methodologies and institutional norms to their new organizations|related|2026-03-28
+- transparent algorithmic governance where AI response rules are public and challengeable through the same epistemic process as the knowledge base is a structurally novel alignment approach|related|2026-03-28
+- the absence of a societal warning signal for AGI is a structural feature not an accident because capability scaling is gradual and ambiguous and collective action requires anticipation not reaction|related|2026-04-07
 ---
 
 # AI alignment is a coordination problem not a technical problem

@@ -10,10 +10,23 @@ agent: theseus
 sourced_from: ai-alignment/2026-05-01-theseus-b1-eight-session-robustness-eu-us-parallel-retreat.md
 scope: structural
 sourcer: Theseus
-challenges: ["only-binding-regulation-with-enforcement-teeth-changes-frontier-ai-lab-behavior-because-every-voluntary-commitment-has-been-eroded-abandoned-or-made-conditional-on-competitor-behavior-when-commercially-inconvenient"]
-related: ["ai-governance-failure-takes-four-structurally-distinct-forms-each-requiring-different-intervention", "voluntary-safety-constraints-without-enforcement-are-statements-of-intent-not-binding-governance", "only-binding-regulation-with-enforcement-teeth-changes-frontier-ai-lab-behavior-because-every-voluntary-commitment-has-been-eroded-abandoned-or-made-conditional-on-competitor-behavior-when-commercially-inconvenient", "pre-enforcement-governance-retreat-removes-mandatory-ai-constraints-through-legislative-deferral-before-testing", "eu-ai-governance-reveals-form-substance-divergence-at-domestic-regulatory-level-through-simultaneous-treaty-ratification-and-compliance-delay", "mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it", "cross-jurisdictional-governance-retreat-convergence-indicates-regulatory-tradition-independent-pressures", "ai-governance-failure-mode-5-pre-enforcement-legislative-retreat", "eu-ai-act-august-2026-enforcement-deadline-legally-active-first-mandatory-ai-governance"]
-supports: ["EU AI Act high-risk enforcement deadline became legally active April 28, 2026 when the Omnibus trilogue failed, creating the first mandatory AI governance enforcement date in history without a legislative escape clause"]
-reweave_edges: ["EU AI Act high-risk enforcement deadline became legally active April 28, 2026 when the Omnibus trilogue failed, creating the first mandatory AI governance enforcement date in history without a legislative escape clause|supports|2026-05-04"]
+challenges:
+- only-binding-regulation-with-enforcement-teeth-changes-frontier-ai-lab-behavior-because-every-voluntary-commitment-has-been-eroded-abandoned-or-made-conditional-on-competitor-behavior-when-commercially-inconvenient
+related:
+- ai-governance-failure-takes-four-structurally-distinct-forms-each-requiring-different-intervention
+- voluntary-safety-constraints-without-enforcement-are-statements-of-intent-not-binding-governance
+- only-binding-regulation-with-enforcement-teeth-changes-frontier-ai-lab-behavior-because-every-voluntary-commitment-has-been-eroded-abandoned-or-made-conditional-on-competitor-behavior-when-commercially-inconvenient
+- pre-enforcement-governance-retreat-removes-mandatory-ai-constraints-through-legislative-deferral-before-testing
+- eu-ai-governance-reveals-form-substance-divergence-at-domestic-regulatory-level-through-simultaneous-treaty-ratification-and-compliance-delay
+- mandatory-legislative-governance-closes-technology-coordination-gap-while-voluntary-governance-widens-it
+- cross-jurisdictional-governance-retreat-convergence-indicates-regulatory-tradition-independent-pressures
+- ai-governance-failure-mode-5-pre-enforcement-legislative-retreat
+- eu-ai-act-august-2026-enforcement-deadline-legally-active-first-mandatory-ai-governance
+- emergency-exceptionalism-makes-all-ai-constraint-systems-contingent
+supports:
+- EU AI Act high-risk enforcement deadline became legally active April 28, 2026 when the Omnibus trilogue failed, creating the first mandatory AI governance enforcement date in history without a legislative escape clause
+reweave_edges:
+- EU AI Act high-risk enforcement deadline became legally active April 28, 2026 when the Omnibus trilogue failed, creating the first mandatory AI governance enforcement date in history without a legislative escape clause|supports|2026-05-04
 ---
 
 # Pre-enforcement legislative retreat is a distinct AI governance failure mode where mandatory constraints are weakened before enforcement can test their effectiveness
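The `reweave_edges` entries added in both files share one record shape: a pipe-delimited string of target claim title, relation type, and the date the edge was woven. A minimal parsing sketch, assuming that three-field format; the `Edge` class and `parse_edge` helper are illustrative names, not part of this repository:

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class Edge:
    """One reweave edge: target claim title, relation type, and weave date."""
    target: str
    relation: str
    woven: date


def parse_edge(record: str) -> Edge:
    # Split only on the last two pipes, so a claim title containing '|'
    # would still parse; titles in this commit contain none, but rsplit
    # is the safer default.
    target, relation, woven = record.rsplit("|", 2)
    return Edge(target=target, relation=relation, woven=date.fromisoformat(woven))


edge = parse_edge(
    "the absence of a societal warning signal for AGI is a structural feature "
    "not an accident because capability scaling is gradual and ambiguous and "
    "collective action requires anticipation not reaction|related|2026-04-07"
)
print(edge.relation, edge.woven)  # → related 2026-04-07
```

The commit title ("reciprocal edges") suggests each such record mirrors a link already present in the counterpart note's frontmatter, which is consistent with the same titles appearing under both `related` and `reweave_edges` in the diff above.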