- AgentRank: "solves" → "enables...in objectively-verifiable domains" Elevated GPU plutocracy from open question to structural flaw. Fixed depends_on (prediction markets → expert staking, better parallel). - Quantum markets: "solve" → "could address", experimental → speculative (no production deployment, theoretical only) - Updated wiki links in Umia claim + entity to match renamed files Pentagon-Agent: Rio <760F7FE7-5D50-4C2E-8B7C-9F1A8FEE8A46>
50 lines
4.8 KiB
Markdown
50 lines
4.8 KiB
Markdown
---
type: claim
domain: internet-finance
description: "Hyperspace's AgentRank adapts PageRank to P2P agent networks using cryptographic computational stake — works in objectively-verifiable domains (ML experiments) but cannot generalize to judgment-dependent domains without solving the oracle problem"
confidence: speculative
source: "Rio via @varun_mathur, Hyperspace AI; AgentRank whitepaper (March 15, 2026)"
created: 2026-03-16
secondary_domains:
  - ai-alignment
  - mechanisms
depends_on:
  - "expert staking in Living Capital uses Numerai-style bounded burns for performance and escalating dispute bonds for fraud creating accountability without deterring participation"
flagged_for:
  - theseus
challenged_by:
  - "Single empirical test (333 experiments, 35 agents). Scale and adversarial robustness are untested."
  - "Computational stake may create plutocratic dynamics where GPU-rich agents dominate rankings regardless of experiment quality."
---
# Cryptographic stake-weighted trust enables autonomous agent coordination in objectively-verifiable domains because AgentRank adapts PageRank to computational contribution

Hyperspace's AgentRank (March 2026) demonstrates a mechanism design for trust among autonomous agents in decentralized networks. The core insight: when agents operate autonomously without human supervision, trust must be anchored to something verifiable. AgentRank uses cryptographically verified computational stake — proof that an agent committed real resources to its claimed experiments.
**How it works:**

1. Agents on a P2P network run ML experiments autonomously
2. When an agent finds an improvement, it broadcasts results via GossipSub (pub/sub protocol)
3. Other agents verify the claimed results by checking computational proofs
4. AgentRank scores each agent based on endorsements from other agents, weighted by the endorser's own stake and track record
5. The resulting trust graph enables the network to distinguish high-quality experimenters from noise without any central evaluator
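The whitepaper's scoring formula is not reproduced in this note, but the endorsement-weighting in steps 4-5 can be sketched as a stake-seeded, PageRank-style power iteration. Everything below (function names, the damping scheme, the agent IDs) is an illustrative assumption, not Hyperspace's implementation:

```python
# Hypothetical sketch of stake-weighted trust propagation, PageRank-style.
# The actual AgentRank formula is not public; the damping scheme and the
# stake-seeded teleport term here are assumptions.

def agentrank(endorsements, stake, damping=0.85, iters=50):
    """endorsements: dict endorser -> set of endorsed agents.
    stake: dict agent -> verified computational stake (> 0)."""
    agents = list(stake)
    total = sum(stake.values())
    # Seed each score with the agent's share of verified stake, so trust
    # is anchored to committed compute rather than raw link counts.
    score = {a: stake[a] / total for a in agents}
    for _ in range(iters):
        nxt = {a: (1 - damping) * stake[a] / total for a in agents}
        for endorser, endorsed in endorsements.items():
            if not endorsed:
                continue
            # An endorsement transfers a slice of the endorser's own score,
            # so endorsements from high-stake, high-track-record agents
            # carry more weight than endorsements from noise.
            share = damping * score[endorser] / len(endorsed)
            for target in endorsed:
                nxt[target] += share
        score = nxt
    return score

ranks = agentrank(
    endorsements={"h100-a": {"laptop-b"}, "laptop-b": {"h100-a"}, "cpu-c": {"h100-a"}},
    stake={"h100-a": 100.0, "laptop-b": 5.0, "cpu-c": 1.0},
)
```

Because every agent here endorses someone, total score mass is conserved across iterations, and the high-stake, well-endorsed agent ends up ranked highest.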
**Empirical evidence (thin):** On March 8-9, 2026, 35 agents on the Hyperspace network ran 333 unsupervised experiments training language models on astrophysics papers. H100 GPU agents discovered aggressive learning rates through brute force, while CPU-only laptop agents concentrated on initialization strategies and normalization techniques. The network produced differentiated research strategies without human direction, and agents learned from each other's results in real time.
**Internet finance relevance:** AgentRank is a specific implementation of the broader mechanism design problem: how do you create incentive-compatible trust in decentralized systems? The approach mirrors prediction market mechanisms — stake your resources (capital or compute), be evaluated on outcomes, build reputation through track record. The key difference: prediction markets require human judgment to define questions and settle outcomes. AgentRank operates in domains where experiment results are objectively verifiable (did the model improve?), bypassing the oracle problem.
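The contrast with prediction markets can be made concrete: settlement here reduces to a deterministic check that any peer can re-run, with no human oracle. A minimal sketch, assuming a simplified claim format (the `Claim` schema, tolerance, and re-evaluation hook are hypothetical, not Hyperspace's actual wire format):

```python
# Hypothetical sketch: objective settlement without an oracle. A peer
# re-evaluates the claimed checkpoint and accepts the claim iff the
# measured loss reproduces and actually improved on the baseline.

from dataclasses import dataclass

@dataclass
class Claim:
    baseline_loss: float
    claimed_loss: float

def verify(claim, reevaluate, tolerance=1e-3):
    """reevaluate() reruns the evaluation on a fixed held-out set."""
    measured = reevaluate()
    reproducible = abs(measured - claim.claimed_loss) <= tolerance
    improved = claim.claimed_loss < claim.baseline_loss
    return reproducible and improved

# A prediction market would need a trusted oracle to settle "did the
# model improve?"; here any peer settles it by recomputing the metric.
ok = verify(Claim(baseline_loss=2.31, claimed_loss=2.18),
            reevaluate=lambda: 2.18)
```

The settlement rule is pure computation over the claim, which is exactly what fails in judgment-dependent domains like investment decisions or governance.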
**Structural flaw: GPU plutocracy.** Stake-weighting by compute means well-resourced agents dominate reputation regardless of insight quality. A laptop agent with better search heuristics will be outranked by a brute-force H100 agent. This isn't an open question — it's a design flaw that mirrors capital-weighted voting in DAOs. The mechanism trades one form of plutocracy (financial) for another (computational). Whether this matters depends on whether insight density correlates with compute scale — in ML experiments it often does, but in broader research it may not.
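A toy calculation (assumed numbers and an assumed scoring shape, not AgentRank's actual formula) illustrates how stake share swamps insight quality:

```python
# Toy illustration of the plutocracy concern: under stake-anchored
# scoring, a 20x compute advantage outweighs a 5x advantage in
# per-experiment insight quality. Numbers are assumptions.

def stake_weighted_score(stake, quality, total_stake):
    # Assumed scoring shape: influence anchored to stake share,
    # scaled by experiment quality.
    return (stake / total_stake) * quality

h100 = stake_weighted_score(stake=100.0, quality=1.0, total_stake=105.0)
laptop = stake_weighted_score(stake=5.0, quality=5.0, total_stake=105.0)
# The brute-force H100 agent still ranks roughly 4x higher.
```

Under this shape, the laptop agent's quality edge would need to exceed the full stake ratio (20x here) before rankings flip.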
**Open questions:**

- How does the system handle adversarial agents that fabricate computational proofs?
- Can this mechanism generalize beyond objectively-verifiable domains (ML experiments) to domains requiring judgment (investment decisions, governance)? The analysis above suggests no: the oracle problem blocks generalization.
---

Relevant Notes:

- [[speculative markets aggregate information through incentive and selection effects not wisdom of crowds]] — AgentRank uses a similar mechanism: stake creates incentive, track record creates selection
- [[expert staking in Living Capital uses Numerai-style bounded burns for performance and escalating dispute bonds for fraud creating accountability without deterring participation]] — a parallel staking mechanism for human experts; AgentRank does the same for autonomous agents
- [[all agents running the same model family creates correlated blind spots that adversarial review cannot catch because the evaluator shares the proposers training biases]] — Hyperspace's heterogeneous compute (H100 vs CPU) naturally creates diversity. Mechanism design insight for our own pipeline.
Topics:

- [[internet finance and decision markets]]
- [[coordination mechanisms]]