theseus: extract claims from 2024-00-00-govai-coordinated-pausing-evaluation-scheme
- Source: inbox/queue/2024-00-00-govai-coordinated-pausing-evaluation-scheme.md
- Domain: ai-alignment
- Claims: 3, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus <PIPELINE>
This commit is contained in:
parent
9a78e15002
commit
7b2eccb9e2
3 changed files with 51 additions and 0 deletions
@ -0,0 +1,17 @@
---
type: claim
domain: ai-alignment
description: The legal structure of competition law creates a barrier to voluntary industry coordination on AI safety that is independent of technical alignment challenges
confidence: experimental
source: GovAI Coordinated Pausing paper, antitrust law analysis
created: 2026-04-04
title: Evaluation-based coordination schemes for frontier AI face antitrust obstacles because collective pausing agreements among competing developers could be construed as cartel behavior
agent: theseus
scope: structural
sourcer: Centre for the Governance of AI
related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
---

# Evaluation-based coordination schemes for frontier AI face antitrust obstacles because collective pausing agreements among competing developers could be construed as cartel behavior

GovAI's Coordinated Pausing proposal identifies antitrust law as a 'practical and legal obstacle' to implementing evaluation-based coordination schemes. The core problem: when a handful of frontier AI developers collectively agree to pause development based on shared evaluation criteria, this coordination among competitors could violate competition law in multiple jurisdictions, particularly US antitrust law which treats agreements among competitors to halt production as potential cartel behavior. This is not a theoretical concern but a structural barrier: the very market concentration that makes coordination tractable (few frontier labs) is what makes it legally suspect. The paper proposes four escalating versions of coordinated pausing, and notably only Version 4 (legal mandate) avoids the antitrust problem by making government the coordinator rather than the industry. This explains why voluntary coordination (Versions 1-3) has not been adopted despite being logically compelling: the legal architecture punishes exactly the coordination behavior that safety requires. The antitrust obstacle is particularly acute because AI development is dominated by large companies with significant market power, making any coordination agreement subject to heightened scrutiny.

@ -0,0 +1,17 @@
---
type: claim
domain: ai-alignment
description: Government-required evaluation with mandatory pause on failure sidesteps competition law obstacles that block voluntary industry coordination
confidence: experimental
source: GovAI Coordinated Pausing paper, four-version escalation framework
created: 2026-04-04
title: Legal mandate for evaluation-triggered pausing is the only coordination mechanism that avoids antitrust risk while preserving coordination benefits
agent: theseus
scope: structural
sourcer: Centre for the Governance of AI
related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]", "[[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments]]"]
---

# Legal mandate for evaluation-triggered pausing is the only coordination mechanism that avoids antitrust risk while preserving coordination benefits

GovAI's four-version escalation of coordinated pausing reveals a critical governance insight: only Version 4 (legal mandate) solves the antitrust problem while maintaining coordination effectiveness. Versions 1-3 all involve industry actors coordinating with each other—whether through public pressure, collective agreement, or single auditor—which creates antitrust exposure. Version 4 transforms the coordination structure by making government the mandating authority: developers are legally required to run evaluations AND pause if dangerous capabilities are discovered. This is not coordination among competitors but compliance with regulation, which is categorically different under competition law. The implication is profound: the translation gap between research evaluations and compliance requirements cannot be closed through voluntary industry mechanisms, no matter how well-designed. The bridge from research to compliance requires government mandate as a structural necessity, not just as a policy preference. This connects to the FDA vs. SEC model distinction—FDA-style pre-market approval with mandatory evaluation is the only path that avoids treating safety coordination as anticompetitive behavior.

@ -0,0 +1,17 @@
---
type: claim
domain: ai-alignment
description: When the same dangerous capability evaluations that detect risks also trigger mandatory pausing, research and compliance become the same instrument
confidence: experimental
source: GovAI Coordinated Pausing paper, five-step process description
created: 2026-04-04
title: Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response
agent: theseus
scope: structural
sourcer: Centre for the Governance of AI
related_claims: ["[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
---

# Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response

The Coordinated Pausing scheme's core innovation is architectural: it treats dangerous capability evaluations as both research instruments AND compliance triggers simultaneously. The five-step process makes this explicit: (1) Evaluate for dangerous capabilities → (2) Pause R&D if failed → (3) Notify other developers → (4) Other developers pause related work → (5) Analyze and resume when safety thresholds met. This design eliminates the translation gap (Layer 3 of governance inadequacy) by removing the institutional boundary between risk detection and risk response. Traditional governance has research labs discovering risks, then a separate compliance process deciding whether/how to respond—creating lag, information loss, and coordination failure. Coordinated Pausing makes evaluation failure automatically trigger the pause, with no translation step. The evaluation IS the compliance mechanism. This is the bridge that the translation gap needs: research evaluations become binding governance instruments rather than advisory inputs. The scheme shows the bridge CAN be designed—the obstacle to implementation is not conceptual but legal (antitrust) and political (who defines 'failing' an evaluation). This is the clearest published attempt to directly solve the research-to-compliance translation problem.
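The five-step process can be sketched as a toy state machine. This is a minimal illustration of the "evaluation IS the compliance mechanism" point, not anything from the paper: the `Developer` type, the callback names, and the simplification that every developer (rather than only those doing related work) pauses are all assumptions made for brevity.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DevState(Enum):
    ACTIVE = auto()
    PAUSED = auto()


@dataclass
class Developer:
    name: str
    state: DevState = DevState.ACTIVE


def coordinated_pause_round(developers, evaluate, safety_met):
    """One round of the five-step process. `evaluate` and `safety_met`
    are hypothetical callbacks standing in for the dangerous-capability
    evaluation and the post-analysis safety threshold."""
    for dev in developers:
        # Step 1: run the dangerous-capability evaluation.
        if not evaluate(dev):
            # Steps 2-4: the failing developer pauses R&D; the others
            # are notified and pause related work. With no translation
            # step, the failed evaluation itself triggers the pause.
            for d in developers:
                d.state = DevState.PAUSED
            # Step 5: analyze; resume only once thresholds are met.
            if safety_met():
                for d in developers:
                    d.state = DevState.ACTIVE
            return
```

Note that the pause is unconditional on any separate compliance decision: the only way back to `ACTIVE` is through the safety analysis, which mirrors the claim's point about removing the boundary between detection and response.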