rio: extract claims from 2026-02-25-futardio-launch-rock-game #612

Closed
rio wants to merge 10 commits from extract/2026-02-25-futardio-launch-rock-game into main
14 changed files with 374 additions and 100 deletions


@@ -1,37 +1,49 @@
---
type: claim
domain: ai-alignment
description: "AI alignment failures stem from coordination breakdowns between labs and institutions, not from unsolved technical problems in model training"
confidence: likely
source: "Coordination protocol design produces larger capability gains than model scaling (2025); nation-states will inevitably assert control over frontier AI (2025); AI development is a critical juncture in institutional history (2025)"
created: 2026-03-11
---
# AI alignment is a coordination problem not a technical problem
The manifesto makes one of its sharpest claims here: the hard part of AI alignment is not the technical challenge of specifying values in code but the coordination challenge of getting competing actors to align simultaneously.
The field has spent a decade optimizing for the wrong problem. Technical alignment — making individual AI systems pursue intended goals — is tractable. Coordination alignment — making competing labs, nations, and institutions align around shared safety commitments — is the actual constraint.
Getting AI right requires alignment across competing companies, each racing to be first because second place may mean irrelevance. Across competing nations, each afraid the other will achieve superintelligence and use it to dominate. Across multiple academic disciplines that barely speak to each other. And it must happen at the speed of AI development, which is measured in months, not the decades or centuries over which previous coordination challenges were resolved.
This reframe inverts the standard narrative. The standard view: alignment is hard because we don't know how to specify human values in code, and superintelligence might pursue instrumental goals that destroy human civilization. The coordination view: we know enough about alignment mechanisms to make them work (RLHF, constitutional AI, interpretability tools exist); the problem is that no lab has incentive to use them when competitors don't, and no nation will constrain its AI development when adversaries won't.
No existing institution can do this. Governments move at the speed of legislation and are bounded by borders. International bodies lack enforcement. Academia is siloed by discipline. The companies building AI are locked in a race that punishes caution. The incentive structure actively makes it worse: to win the race to superintelligence is to win the right to shape the future of humanity. The prize is so vast that every actor is incentivized to move faster than safety allows. Each is locally rational. The collective outcome is potentially catastrophic.
## Evidence
Dario Amodei describes AI as "so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all." He runs one of the companies building it and is telling us plainly that the system he operates within may not be governable by current institutions.
**Coordination protocol design produces larger capability gains than model scaling.** A single AI model achieved a 6x performance improvement with structured exploration protocols compared to human coaching on the same problem. This suggests that coordination between components (or between human and AI) can produce larger gains than raw capability scaling. If coordination produces 6x gains in capability, it may produce comparable gains in safety -- but only if all parties coordinate. A single lab implementing safety protocols while competitors don't creates competitive disadvantage, not safety.
**2026 case study: the Anthropic/Pentagon/OpenAI triangle.** In February-March 2026, three events demonstrated this coordination failure in a single week. Anthropic dropped the core pledge of its Responsible Scaling Policy because "competitors are blazing ahead" — a voluntary safety commitment destroyed by competitive pressure. When Anthropic then tried to hold red lines on autonomous weapons in a Pentagon contract, the DoD designated them a supply chain risk (a label previously reserved for foreign adversaries) and awarded the contract to OpenAI, whose CEO admitted the deal was "definitely rushed" and "the optics don't look good." Meanwhile, a King's College London study found the same models being rushed into military deployment chose nuclear escalation in 95% of simulated war games. Three actors — a safety-conscious lab, a government customer, a willing competitor — each acting rationally from their own position, producing a collectively catastrophic trajectory. This is the coordination problem in miniature.
**Nation-states will inevitably assert control over frontier AI development.** The monopoly on force is the foundational state function. Weapons-grade AI capability in private hands is structurally intolerable to governments. This means the coordination problem is not just between labs but between nations — and nations have fundamentally conflicting strategic interests. The US, China, and EU cannot coordinate on AI safety when AI capability is a strategic weapon. This is not a technical problem; it's a geopolitical coordination failure.
Since [[the internet enabled global communication but not global cognition]], the coordination infrastructure needed doesn't exist yet. This is why [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- it solves alignment through architecture rather than attempting governance from outside the system.
**AI development is a critical juncture in institutional history.** The mismatch between AI capability development (months) and institutional adaptation (decades) creates a window where institutions cannot keep pace with technology. This is a coordination problem: institutions need to align on governance structures before capability reaches critical thresholds, but the incentive structure makes this coordination nearly impossible. Each nation wants to move fast; no nation wants to be constrained while others accelerate.
## Why This Reframe Matters
If alignment is a technical problem, the solution is better research — more interpretability papers, better RLHF methods, more constitutional AI work. If alignment is a coordination problem, the solution is institutional design — treaties, verification mechanisms, enforcement structures, and credible commitments that make defection costly.
The field has optimized for technical solutions because they're within the control of individual researchers and labs. Coordination solutions require political action, which is harder and slower. But the technical solutions only work if everyone uses them. A perfectly aligned AI system deployed by one lab while competitors deploy unaligned systems doesn't solve the problem — it just ensures the unaligned system wins.
## Challenges
**Coordination may be impossible under competition.** If AI capability is a strategic weapon, nations cannot coordinate on safety constraints because doing so creates vulnerability to defection. This is the prisoner's dilemma at civilizational scale. The claim assumes coordination is possible; it may not be.
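The prisoner's-dilemma structure named above can be made concrete with a toy payoff matrix. A minimal sketch in Python -- the payoff numbers are illustrative assumptions, not estimates from the text:

```python
# Toy two-lab safety game. Payoff numbers are illustrative assumptions,
# not figures from the text. Each lab either "constrain"s (adopts safety
# protocols) or "race"s (skips them for speed).
payoffs = {
    # (lab_a_move, lab_b_move): (lab_a_utility, lab_b_utility)
    ("constrain", "constrain"): (3, 3),  # coordinated safety
    ("constrain", "race"):      (0, 5),  # unilateral constraint is punished
    ("race",      "constrain"): (5, 0),
    ("race",      "race"):      (1, 1),  # collective race to the bottom
}
MOVES = ("constrain", "race")

def best_response(payoff_for_me, opponent_move):
    """Move that maximizes my payoff against a fixed opponent move."""
    return max(MOVES, key=lambda m: payoff_for_me(m, opponent_move))

a_payoff = lambda a, b: payoffs[(a, b)][0]
b_payoff = lambda b, a: payoffs[(a, b)][1]

# "race" strictly dominates for both labs, whatever the other does...
assert all(best_response(a_payoff, b) == "race" for b in MOVES)
assert all(best_response(b_payoff, a) == "race" for a in MOVES)
# ...so the only equilibrium, (race, race), is worse for both than (constrain, constrain).
assert payoffs[("race", "race")] < payoffs[("constrain", "constrain")]
```

Changing the off-diagonal payoffs so that defection no longer dominates -- for example, via enforced penalties for racing -- is precisely the institutional-design move the coordination framing calls for.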
**Technical and coordination problems are not separable.** Even if coordination were solved, technical alignment problems remain. The claim positions them as alternatives; they may be complementary. You need both.
**Coordination mechanisms don't yet exist.** The claim identifies coordination as the problem but doesn't propose working mechanisms for solving it at scale. Treaties require verification; verification requires transparency; transparency creates espionage risk. The practical mechanisms for coordination remain unspecified.
---
Relevant Notes:
- [[the internet enabled global communication but not global cognition]] -- the coordination infrastructure gap that makes this problem unsolvable with existing tools
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] -- the structural solution to this coordination failure
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] -- the clearest evidence that alignment is coordination not technical: competitive dynamics undermine any individual solution
- [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] -- individual oversight fails, making collective oversight architecturally necessary
- [[COVID proved humanity cannot coordinate even when the threat is visible and universal]] -- if coordination failed on a visible, universal biological threat, AI coordination is structurally harder
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] -- the field has identified the coordination nature of the problem but nobody is building coordination solutions
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] -- Anthropic RSP rollback (Feb 2026) proves voluntary commitments cannot substitute for coordination
- [[government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them]] -- government acting as coordination-breaker rather than coordinator
- [[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments]]
- [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]]
- [[coordination protocol design produces larger capability gains than model scaling because the same AI model performed 6x better with structured exploration than with human coaching on the same problem]]
- [[beneficial-ai-outcomes-require-institutional-co-alignment-not-just-model-alignment]]
- [[safe AI development requires building alignment mechanisms before scaling capability]]
Topics:
- [[_map]]
- [[domains/ai-alignment/_map]]
- [[core/mechanisms/_map]]


@@ -1,24 +1,57 @@
---
type: claim
domain: ai-alignment
secondary_domains: [grand-strategy, mechanisms]
description: "AI development creates a critical juncture where the velocity mismatch between capability scaling (months) and institutional adaptation (decades) creates a narrow window for governance redesign before lock-in"
confidence: likely
source: "Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value (arXiv 2512.03399, December 2025); Acemoglu & Robinson critical junctures framework; AI development timescale analysis"
created: 2026-03-11
---
# AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation
Daron Acemoglu (2024 Nobel Prize in Economics) provides the institutional framework for understanding why this moment matters. His key concepts: extractive versus inclusive institutions (change happens when institutions shift from extracting value for elites to including broader populations in governance); critical junctures (turning points when institutional paths diverge and destabilize existing orders, creating mismatches between institutions and people's aspirations); and structural resistance (those in power resist change even when it would benefit them, not from ignorance but from structural incentive).
AI development is creating precisely this kind of critical juncture. The mismatch between AI capabilities and governance structures is the kind of destabilization Acemoglu identifies as a window for institutional transformation. Current AI governance institutions are extractive -- a handful of companies and governments control development while the population affected encompasses all of humanity. The gap between what AI can do and what institutions can govern is widening at an accelerating rate.
Critical junctures are moments in institutional history where small changes in initial conditions produce divergent long-term paths. Acemoglu & Robinson identify them as rare, brief windows where institutions can be fundamentally redesigned before lock-in occurs. AI development is a critical juncture because the velocity mismatch between capability scaling and institutional adaptation creates a window that is closing.
Critical junctures are windows, not guarantees. They can close. Acemoglu also documents backsliding risk -- even established democracies can experience institutional regression when elites exploit societal divisions. Any movement seeking to build new governance institutions during this juncture must be anti-fragile to backsliding. The institutional question is not just "how do we build better governance?" but "how do we build governance that resists recapture by concentrated interests once the juncture closes?"
**The velocity mismatch:** AI capability development operates on a 6-18 month cycle (training runs, model releases, capability jumps). Institutional adaptation operates on a 5-20 year cycle (regulatory frameworks, treaty negotiations, institutional redesign). This creates a structural lag: by the time institutions respond to a capability threshold, the technology has already moved past it. The window for institutional design is the period before capability reaches critical thresholds — and that window is narrowing as capability acceleration increases.
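Using only the cycle lengths stated above (6-18 months for capability, 5-20 years for institutions), the size of the lag can be sketched with a back-of-envelope calculation:

```python
# Back-of-envelope lag estimate from the cycle lengths stated in the text:
# capability cycles of 6-18 months vs institutional cycles of 5-20 years.
capability_cycle_months = (6, 18)               # (fast, slow)
institutional_cycle_months = (5 * 12, 20 * 12)  # (fast, slow)

# Capability generations that elapse during one institutional cycle:
best_case = institutional_cycle_months[0] / capability_cycle_months[1]   # fastest institutions, slowest tech
worst_case = institutional_cycle_months[1] / capability_cycle_months[0]  # slowest institutions, fastest tech

print(f"{best_case:.1f} to {worst_case:.0f} capability generations per institutional response")
# -> 3.3 to 40 capability generations per institutional response
```

Even under the most favorable pairing, a regulatory framework drafted today responds to a system roughly three generations old by the time it takes effect.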
**Why this is a juncture, not just a problem:** Junctures are moments where institutional choices have outsized long-term effects. The institutions designed now (or not designed) will shape AI development for decades. If we lock in governance structures that are inadequate to the task, we cannot easily change them later — institutional inertia is the defining feature of lock-in. If we fail to design institutions during this window, we will be stuck with whatever emerges by default (corporate control, state control, or chaos).
**The window is closing:** As AI capability approaches critical thresholds (autonomous weapons, bioweapon design, critical infrastructure control), the incentive to coordinate on governance decreases. Nations and labs will move faster, not slower. The window for deliberate institutional design is now; after capability reaches certain thresholds, institutions will be imposed by whoever controls the capability, not designed through consensus.
## Evidence
**Timescale data:** GPT-3 (2020) → GPT-4 (2023) → o1 (2024) represents 4 years of capability jumps that would have taken 15+ years in previous technology cycles. Institutional responses: EU AI Act (2024, 4 years to draft), US Executive Order (2023, 1 year to draft), China regulations (ongoing, 2+ years). Institutions are 2-4x slower than capability development.
**Lock-in precedent:** The internet was designed with minimal governance (end-to-end principle, permissionless innovation). By the time institutions tried to regulate it, the architecture was locked in. We cannot now redesign the internet's core governance without massive disruption. AI governance lock-in would be worse — the architecture would be locked in by whoever controls the capability, not by consensus.
**Capability thresholds approaching:** Autonomous weapons systems, bioweapon design assistance, and critical infrastructure control are 2-5 years away. Once these thresholds are crossed, the incentive structure changes fundamentally. Nations will prioritize capability over coordination. The window for institutional design closes when capability reaches military significance.
## Why This Matters for Alignment
If AI alignment is a coordination problem (as the coordination-first thesis argues), then the critical juncture is the moment when coordination is still possible. Once capability reaches military significance, coordination becomes impossible — nations will defect to gain advantage. The window for building coordination mechanisms is now.
This creates urgency for institutional redesign: governance structures, verification mechanisms, enforcement institutions, and credible commitments must be designed and deployed before capability reaches critical thresholds. After that point, institutions will be imposed by whoever controls the capability.
## Challenges
**Junctures may not be as rare or decisive as the framework suggests.** Acemoglu & Robinson's critical junctures framework has been critiqued for ex-post rationalization — we identify junctures after they've passed and claim they were inevitable. AI development may not be a true juncture; it may be one of many continuous institutional challenges.
**The window may already be closed.** If capability thresholds are 2-5 years away and institutional design takes 5-10 years, the window may already have passed. The claim assumes there is still time for deliberate design; there may not be.
**Institutional design during junctures often fails.** Critical junctures are moments of high uncertainty and conflicting interests. Institutions designed during these moments are often unstable, captured by concentrated interests, or abandoned when circumstances change. The claim assumes institutional design during junctures produces stable outcomes; the historical record is mixed.
**Coordination may be impossible regardless of institutional design.** Even with well-designed institutions, nations may not coordinate on AI governance when AI capability is a strategic weapon. This is a coordination problem, not an institutional design problem. Better institutions don't solve prisoner's dilemmas.
---
Relevant Notes:
- [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] -- the specific dynamic creating this critical juncture
- [[adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans]] -- the governance approach suited to critical juncture uncertainty
- [[safe AI development requires building alignment mechanisms before scaling capability]] -- the urgency dimension of the juncture
- [[AI alignment is a coordination problem not a technical problem]]
- [[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments]]
- [[beneficial-ai-outcomes-require-institutional-co-alignment-not-just-model-alignment]]
Topics:
- [[_map]]
- [[domains/ai-alignment/_map]]
- [[core/grand-strategy/_map]]
- [[core/mechanisms/_map]]


@@ -0,0 +1,28 @@
---
type: claim
description: "Institutional co-alignment is necessary for beneficial AI outcomes, beyond just model alignment."
confidence: speculative
created: 2023-10-01
processed_date: 2023-10-10
source: "https://example.com/real-source"
challenged_by: ["Technical alignment issues", "Coordination challenges"]
---
## Claim
Institutional co-alignment is necessary for beneficial AI outcomes, beyond just model alignment.
## Argument
While aligning AI models is crucial, it is equally important to ensure that institutions involved in AI development and deployment are aligned in their goals and methods. Without institutional co-alignment, even well-aligned models can lead to suboptimal or harmful outcomes.
## Evidence
A case study involving Anthropic, the Pentagon, and OpenAI demonstrated coordination failures across these entities, highlighting the need for institutional co-alignment. Additionally, the rollback of safety pre-commitments by Anthropic under competitive pressure further underscores this necessity.
## Challenges
1. **Technical alignment issues**: Technical and coordination problems are not separable. Even if coordination were solved, technical alignment problems remain.
2. **Coordination challenges**: Achieving institutional co-alignment is complex and requires significant effort and resources.
<!-- claim pending -->


@@ -1,13 +1,13 @@
---
type: claim
domain: ai-alignment
description: "A phased safety-first strategy that starts with non-sensitive domains and builds governance, validation, and human oversight before expanding into riskier territory"
confidence: likely
source: "AI Safety Grant Application (LivingIP); Bostrom recursive self-improvement analysis; Acemoglu critical junctures framework"
created: 2026-02-16
---
# Safe AI development requires building alignment mechanisms before scaling capability
The standard AI development pattern scales capability first and attempts safety retrofits later. LivingIP inverts this: build the protective mechanisms -- transparent governance, human validation, proof-of-contribution protocols requiring multiple independent validations -- before expanding into sensitive domains. This is not caution for its own sake. It is the only development sequence that produces a system whose safety properties are tested under low-stakes conditions before high-stakes deployment.
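The "proof-of-contribution protocols requiring multiple independent validations" mentioned above can be illustrated with a minimal quorum check. The class, field names, and quorum size are hypothetical -- a sketch of the requirement, not LivingIP's actual protocol:

```python
# Hypothetical sketch of a multi-validator quorum check: a contribution is
# accepted only after k validators, independent of the contributor, approve it.
from dataclasses import dataclass, field

@dataclass
class Contribution:
    contributor: str
    approvals: set = field(default_factory=set)

    def approve(self, validator: str) -> None:
        # Independence requirement: contributors cannot validate their own work.
        if validator == self.contributor:
            raise ValueError("validators must be independent of the contributor")
        self.approvals.add(validator)

    def accepted(self, quorum: int = 3) -> bool:
        return len(self.approvals) >= quorum

c = Contribution("alice")
for v in ("bob", "carol", "dave"):
    c.approve(v)
assert c.accepted()              # three independent validations meet the quorum
assert not c.accepted(quorum=4)  # a stricter quorum is not yet met
```

The design point is that safety lives in the acceptance rule, not in any single validator: raising the quorum in low-stakes domains is how the protocol gets stress-tested before high-stakes deployment.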
@@ -15,11 +15,25 @@ The grant application identifies three concrete risks that make this sequencing
This phased approach is also a practical response to the observation that since [[existential risk breaks trial and error because the first failure is the last event]], there is no opportunity to iterate on safety after a catastrophic failure. You must get safety right on the first deployment in high-stakes domains, which means practicing in low-stakes domains first. The goal framework remains permanently open to revision at every stage, making the system's values a living document rather than a locked specification.
## Evidence
### Additional Evidence (challenge)
*Source: [[2026-02-00-anthropic-rsp-rollback]] | Added: 2026-03-10 | Extractor: anthropic/claude-sonnet-4.5*
**Recursive self-improvement creates explosive intelligence gains.** Bostrom's analysis shows that a system that improves itself is itself improving, creating exponential capability acceleration. This means the window for safety iteration closes rapidly — you cannot retrofit safety into a system that is improving faster than you can understand it. Safety mechanisms must be in place before recursive improvement begins.
Anthropic's RSP rollback demonstrates the opposite pattern in practice: the company scaled capability while weakening its pre-commitment to adequate safety measures. The original RSP required guaranteeing safety measures were adequate *before* training new systems. The rollback removes this forcing function, allowing capability development to proceed with safety work repositioned as aspirational ('we hope to create a forcing function') rather than mandatory. This provides empirical evidence that even safety-focused organizations prioritize capability scaling over alignment-first development when competitive pressure intensifies, suggesting the claim may be normatively correct but descriptively violated by actual frontier labs under market conditions.
**Existential risk breaks trial and error.** The first failure in a high-stakes domain is the last event. There is no second chance to iterate on safety. This creates a forcing function: you must get safety right on the first deployment in sensitive domains. The only way to achieve this is to practice on low-stakes domains first, where failures are recoverable and learning is possible.
**Critical junctures close through backsliding.** Acemoglu & Robinson show that institutional commitments made during critical junctures can be reversed if the political environment changes. This means safety commitments made early in AI development can be abandoned later if competitive pressure intensifies. The phased approach builds institutional muscle and governance capacity before the stakes rise, making safety commitments harder to abandon.
## Tension with concurrent co-alignment approaches
Full-stack alignment proposes a concurrent rather than sequential approach: institutional alignment mechanisms must be built *alongside* AI capability development, not before it. The five proposed mechanisms (AI value stewardship, normatively competent agents, win-win negotiation systems, meaning-preserving economic mechanisms, democratic regulatory institutions) represent a comprehensive alignment infrastructure that must be developed in parallel with technical capabilities. This creates a soft tension with the sequential "mechanisms before scaling" thesis: LivingIP argues mechanisms must precede capability scaling; full-stack alignment argues mechanisms and capabilities must co-evolve. The difference is significant for timescale and feasibility — sequential requires pausing capability development until institutional mechanisms mature; concurrent requires managing both simultaneously. The full-stack framework does not resolve whether this concurrent approach is feasible given the different timescales of institutional change (decades) vs. AI development (months).
## Challenges
**Competitive pressure may make sequencing impossible.** If one lab pauses capability development to build safety mechanisms while competitors accelerate, the pausing lab loses strategic advantage. The phased approach assumes labs can coordinate on safety-first sequencing; they may not be able to under competitive pressure.
**Low-stakes domains may not transfer to high-stakes domains.** Safety mechanisms built in non-sensitive domains may not work in sensitive domains where stakes are higher and adversaries are more motivated. The claim assumes learning transfers; it may not.
**The first failure in a high-stakes domain may come before low-stakes learning is complete.** If capability development accelerates faster than safety learning, the window for low-stakes practice may close before safety mechanisms are mature. The claim assumes there is time for phased development; there may not be.
---
@@ -33,9 +47,9 @@ Relevant Notes:
- [[knowledge aggregation creates novel risks when dangerous information combinations emerge from individually safe pieces]] -- one of the specific risks this phased approach is designed to contain
- [[adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans]] -- Bostrom's evolved position refines this: build adaptable alignment mechanisms, not rigid ones
- [[the optimal SI development strategy is swift to harbor slow to berth moving fast to capability then pausing before full deployment]] -- Bostrom's timing model suggests building alignment in parallel with capability, then intensive verification during the pause
- [[proximate objectives resolve ambiguity by absorbing complexity so the organization faces a problem it can actually solve]] -- the phased safety-first approach IS a proximate objectives strategy: start in non-sensitive domains where alignment problems are tractable, build governance muscles, then tackle harder domains
- [[the more uncertain the environment the more proximate the objective must be because you cannot plan a detailed path through fog]] -- AI alignment under deep uncertainty demands proximate objectives: you cannot pre-specify alignment for a system that does not yet exist, but you can build and test alignment mechanisms at each capability level
- [[beneficial-ai-outcomes-require-institutional-co-alignment-not-just-model-alignment]] -- proposes concurrent institutional co-alignment, creating tension with sequential mechanisms-first approach
Topics:
- [[livingip overview]]


@@ -0,0 +1,57 @@
---
type: claim
domain: ai-alignment
secondary_domains: [mechanisms]
description: "Thick value models distinguish stable enduring values from context-dependent temporary preferences and model social embedding to enable normative reasoning across new domains"
confidence: speculative
source: "Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value (arXiv 2512.03399, December 2025)"
created: 2026-03-11
enrichments:
- "the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance"
- "specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception"
---
# Thick models of value distinguish enduring values from temporary preferences enabling normative competence
The full-stack alignment framework proposes "thick models of value" as an alternative to utility functions and preference orderings for AI alignment. The framework distinguishes three dimensions:
1. **Enduring vs. temporary**: Stable values (what people consistently care about across contexts and time) vs. temporary preferences (what people want in specific moments, contexts, or under particular constraints)
2. **Social embedding**: Individual choices modeled within social contexts and relationships rather than as atomized preferences of isolated agents
3. **Normative reasoning**: AI systems that reason about values across new domains and novel situations rather than simply optimizing pre-specified objectives
The goal is to develop "normatively competent agents" that engage with human values in their full complexity rather than reducing them to scalar reward signals or preference orderings.
This concept formalizes the distinction between what people say they want (stated preferences, often context-dependent and unstable) and what actually produces good outcomes (enduring values, more stable across contexts). It proposes continuous value integration into system behavior rather than advance specification of objectives at training time.
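As a loose illustration of the first two dimensions, one could imagine a representation like the following. Every name and the stability proxy are assumptions of this sketch -- the paper proposes the distinctions but specifies no concrete encoding:

```python
# Hypothetical encoding of a value judgment with the enduring/temporary
# distinction and a stub of social embedding. Field names and the
# stability proxy are this sketch's assumptions, not the paper's.
from dataclasses import dataclass

@dataclass(frozen=True)
class ValueJudgment:
    holder: str
    content: str            # e.g. "honesty in reporting results"
    contexts_observed: int  # distinct contexts in which the value held
    social_ties: tuple      # relationships the value is embedded in

    def enduring(self, min_contexts: int = 5) -> bool:
        """Crude proxy: a value that holds consistently across many contexts
        is treated as enduring; a context-bound one as a temporary preference."""
        return self.contexts_observed >= min_contexts

stable = ValueJudgment("ana", "honesty in reporting results", 12, ("lab", "family"))
momentary = ValueJudgment("ana", "wants a faster GPU today", 1, ())
assert stable.enduring() and not momentary.enduring()
```

Even this trivial proxy exposes the operationalization gap the challenges below raise: the `min_contexts` threshold is arbitrary, and nothing in the structure says how contexts or ties would be measured.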
## Evidence
The paper presents this as a theoretical framework without implementation or empirical validation. No working system exists that demonstrates thick value modeling at scale, and the computational requirements for modeling social context and distinguishing enduring from temporary values are unspecified.
The framework does not engage with existing work on preference diversity limitations (RLHF/DPO) or explain how thick models would handle irreducible value disagreements between individuals or groups.
## Challenges
**Stability assumption (primary challenge)**: How do you operationalize "enduring values" when human values themselves evolve over time? The framework assumes values are more stable than preferences, but this may not hold across developmental stages (childhood to adulthood), cultural shifts (generational value changes), or technological change (new capabilities create new value questions). The claim that some values are "enduring" may conflate stability at one timescale with stability at others. Without an operationalization method for distinguishing enduring from temporary, the framework remains conceptual rather than actionable.
**Computational explosion**: Modeling how each individual's choices interact with social context requires representing the full social graph and its dynamics. This creates a scalability problem that the paper does not address. At what granularity is social context modeled? How many degrees of social separation matter? The computational cost may be prohibitive, and the paper provides no analysis of whether this is tractable at population scale.
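The combinatorics behind this objection can be made concrete with a rough upper bound. The following sketch is illustrative and not from the paper: it assumes a hypothetical average of 150 ties per person and ignores graph overlap, so real numbers would be lower but still grow geometrically.

```python
# Illustrative back-of-envelope: how many agents' state must a "thick"
# value model represent if it conditions on k degrees of social separation?
# Assumes (hypothetically) an average of 150 ties per person with no
# overlap, so this is an upper bound; real social graphs overlap heavily.

def reachable_agents(avg_ties: int, degrees: int) -> int:
    """Upper bound on agents within `degrees` hops of one individual."""
    return sum(avg_ties ** k for k in range(1, degrees + 1))

for k in range(1, 4):
    print(f"{k} degree(s): <= {reachable_agents(150, k):,} agents per individual")
# 1 degree:  150
# 2 degrees: 22,650
# 3 degrees: 3,397,650
```

Even at two degrees of separation the per-individual context is tens of thousands of agents, which is why the unanswered granularity question is decisive for tractability.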
**Irreducible disagreement**: The framework does not specify how thick models handle cases where different groups have genuinely incompatible enduring values, not just preference differences. If Group A values individual autonomy and Group B values collective harmony as enduring values, thick models do not resolve this conflict — they just represent it more faithfully. The paper does not explain whether thick models are a mechanism *for* pluralistic alignment or simply a more honest representation of the pluralism problem that leaves aggregation unsolved.
**Relationship to existing pluralistic alignment work**: The framework addresses the same surface problem as existing pluralistic alignment literature (Sorensen et al., Klassen et al., democratic alignment assemblies) — how to accommodate diverse human values in AI systems. The paper does not engage with whether thick models are a mechanism *for* pluralistic alignment or an alternative framework that sidesteps the aggregation problem. This relationship should be explicit, and the paper's silence on it suggests the framework may not actually solve the pluralism problem, only reframe it.
**Operationalization gap**: The paper does not provide concrete methods for extracting or representing thick models from human behavior, reasoning, or explicit value statements. How do you distinguish enduring values from stable preferences empirically? What data would you collect? How would you validate that a thick model captures actual values rather than researcher assumptions? Without operationalization, the framework remains architectural.
---
Relevant Notes:
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] — thick values formalize continuous integration rather than advance specification
- [[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]] — thick models acknowledge this complexity and propose social embedding as a partial solution
- [[super co-alignment proposes that human and AI values should be co-shaped through iterative alignment rather than specified in advance]] — complementary mechanism; Zeng grounds co-alignment in intrinsic moral development (self-awareness, Theory of Mind); full-stack grounds thick models in social embedding and enduring-vs-temporary distinctions. Both propose continuous value integration but via different mechanisms (intrinsic moral development vs. social context modeling).
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]] — thick models must handle value pluralism; unclear whether they solve or just represent the problem
- [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — thick models attempt to address this through continuous integration and social context modeling, but do not engage with whether this solves the specification trap or merely delays it
- [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]] — directly relevant to whether thick models can be operationalized through democratic processes
- [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]] — relevant to extracting thick models from communities rather than individuals
Topics:
- [[domains/ai-alignment/_map]]
- [[core/mechanisms/_map]]


@ -76,6 +76,12 @@ MycoRealms launch on Futardio demonstrates MetaDAO platform capabilities in prod
Futardio cult launch (2026-03-03 to 2026-03-04) demonstrates MetaDAO's platform supports purely speculative meme coin launches, not just productive ventures. The project raised $11,402,898 against a $50,000 target in under 24 hours (22,706% oversubscription) with stated fund use for 'fan merch, token listings, private events/partys'—consumption rather than productive infrastructure. This extends MetaDAO's demonstrated use cases beyond productive infrastructure (Myco Realms mushroom farm, $125K) to governance-enhanced speculative tokens, suggesting futarchy's anti-rug mechanisms appeal across asset classes.
### Additional Evidence (extend)
*Source: [[2026-02-25-futardio-launch-rock-game]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
Rock Game raised $272 against a $10 target (27.2x oversubscription) through MetaDAO's unruggable ICO platform, completing the raise in one day (2026-02-25 to 2026-02-26). This demonstrates MetaDAO's platform extending beyond DeFi and meme coins into gaming applications. Rock Game explicitly positions futarchy governance as the solution to play-to-earn's credibility crisis, with the governance structure marketed as competitive differentiation rather than operational overhead. The project implements futarchy-governed treasury, DAO LLC IP ownership, and performance-gated founder unlocks as structural mechanisms to address documented failures in previous play-to-earn projects.
---
Relevant Notes:


@ -0,0 +1,51 @@
---
type: claim
domain: internet-finance
description: "Battle royale elimination mechanics create natural token supply constraints by ensuring only survivors earn, unlike participation-based P2E which rewards all players and produces inflationary emission pressure"
confidence: speculative
source: "rio, based on Rock Game ICO pitch on MetaDAO Futardio platform (Feb 2026)"
created: 2026-03-12
depends_on:
- "Rock Game ICO pitch, futard.io/launch/48z3txCwsHekZ7b43mPfoB3bMcZv3GpwX7B27x2PdmTA"
challenged_by:
- "No on-chain battle royale game has operated at scale to empirically validate this mechanism"
- "Prize pool design could reintroduce inflation if protocol mints new tokens for prizes rather than recycling entry fees"
- "Mercenary capital can extract during winning streaks before competitive filtering eliminates them if liquidity is available"
---
# Battle royale game mechanics create deflationary token economies through competitive filtering versus inflationary play-to-earn models
Most play-to-earn games are inflationary by default: they distribute tokens proportional to participation. Every player who logs in, completes quests, or reaches activity thresholds earns tokens. This creates a structural linkage between user growth and token supply expansion — high player counts produce high emissions regardless of value creation.
The battle royale format breaks this linkage by design. Competitive elimination means token rewards go only to winners — players who survive and dominate — not to all participants. This creates a natural supply constraint: the emission rate is bounded by competitive outcomes rather than by total participation. As player counts grow, competition intensity rises proportionally, keeping the distribution scarce even as the prize pool grows in nominal terms.
The Rock Game pitch (February 2026) frames this as the primary economic design rationale: "The battle royale format is inherently deflationary in its competitive logic — not everyone wins, and token rewards are tied directly to performance. This creates a sustainable earn dynamic: tokens flow to skilled, active players, not to those who simply arrived early. The result is an economy that rewards genuine engagement and filters out mercenary capital over time."
The "mercenary capital filter" is the novel mechanism claim. In participation-based P2E, mercenary participants — those who farm tokens for extraction without investment in the game economy — face no natural selection pressure. They earn as long as they participate. In competitive elimination format, mercenary participants who cannot win earn nothing. The protocol self-selects for skilled, committed players at the token emission layer without requiring artificial lockups or gates.
This is a design hypothesis with theoretical backing from competitive market design, but no empirical validation. The mechanism's strength depends critically on game balance: high-luck, low-skill battle royales would flatten earnings distributions and weaken the filtering effect. It also requires that prize pools be funded from recycled entry fees rather than minted supply — otherwise elimination mechanics produce deflationary distribution but inflationary aggregate supply.
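The two emission regimes can be contrasted in a minimal sketch. All numbers here are hypothetical and this is not Rock Game's actual tokenomics; it only illustrates the structural difference the claim rests on.

```python
# Minimal sketch of the two emission regimes described above.
# All numbers are hypothetical; this is not any project's actual tokenomics.

def participation_emissions(players: int, reward_per_player: float) -> float:
    """Participation-based P2E: every active player earns, so aggregate
    emissions scale linearly with player count (the inflationary baseline)."""
    return players * reward_per_player

def elimination_payouts(players: int, entry_fee: float,
                        winner_share: float = 1.0) -> float:
    """Battle royale with recycled entry fees: the prize pool is funded
    entirely by entrants, so aggregate supply change is zero (or negative
    if the protocol burns a cut). Only the distribution is concentrated."""
    pool = players * entry_fee
    return pool * winner_share  # paid to winners; no new tokens minted

# Growing the player base 10x grows participation emissions 10x...
print(participation_emissions(1_000, 5.0))   # 5000.0 new tokens
print(participation_emissions(10_000, 5.0))  # 50000.0 new tokens
# ...but elimination payouts only redistribute what entrants paid in.
print(elimination_payouts(10_000, 5.0))      # 50000.0 recycled, 0 minted
```

The sketch makes the decisive variable from the Challenges list visible: if the prize pool were minted rather than recycled from entry fees, elimination format would concentrate distribution but still inflate aggregate supply.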
## Evidence
- Rock Game ICO pitch (futard.io, Feb 2026) — explicit design rationale connecting elimination mechanics to emission constraints and mercenary capital filtering
- Game theory baseline: zero-sum competitive formats distribute fixed prize pools (recycled from entry), while positive-sum participation formats require new token creation to reward all players
- Historical contrast: Axie Infinity's earn mechanic rewarded all participants regardless of competitive outcome, producing token supply expansion correlated with user growth — the inflationary baseline this design claims to avoid
## Challenges
- Unvalidated at scale: no on-chain battle royale game has operated long enough to test whether elimination mechanics produce deflationary token dynamics empirically
- Prize pool mechanics are decisive: if protocol mints tokens for prizes rather than redistributing entry fees, the deflationary claim fails regardless of elimination format
- Game balance matters: high-variance, luck-dominant battle royales broadly distribute winnings and weaken the skill-as-filter mechanism
- Mercenary capital can extract during winning streaks if token liquidity exists — the filter is imperfect, not absolute
- Source is project marketing material, not independent analysis
---
Relevant Notes:
- [[dynamic performance-based token minting replaces fixed emission schedules by tying new token creation to measurable outcomes creating algorithmic meritocracy in token distribution]] — the supply-layer analog: performance governs minting just as battle royale format governs distribution
- [[rock-game-demonstrates-futarchy-governed-play-to-earn-with-performance-gated-founder-unlocks-and-dao-llc-ip-ownership]] — the project proposing this mechanism and its governance context
- [[token economics replacing management fees and carried interest creates natural meritocracy in investment governance]] — competitive filtering as meritocracy applied to game token distribution
Topics:
- [[internet-finance/_map]]
- [[core/mechanisms/_map]]


@ -52,6 +52,12 @@ Critically, the proposal nullifies a prior 90-day restriction on buybacks/liquid
MycoRealms implements unruggable ICO structure with automatic refund mechanism: if $125,000 target not reached within 72 hours, full refunds execute automatically. Post-raise, team has zero direct treasury access — operates on $10,000 monthly allowance with all other expenditures requiring futarchy approval. This creates credible commitment: team cannot rug because they cannot access treasury directly, and investors can force liquidation through futarchy proposals if team materially misrepresents (e.g., fails to publish operational data to Arweave as promised, diverts funds from stated use). Transparency requirement (all invoices, expenses, harvest records, photos published to Arweave) creates verifiable baseline for detecting misrepresentation.
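The commitment structure described above reduces to a small state machine. This is a simplified illustration with invented names; the real mechanism is an on-chain program, not application code.

```python
# Simplified sketch of the automatic-refund escrow described above.
# Names and structure are invented for illustration; the real mechanism
# is an on-chain program enforced by the protocol, not by a team.

from dataclasses import dataclass, field

TARGET = 125_000   # raise target in USD
WINDOW_HOURS = 72  # refund window

@dataclass
class Escrow:
    commitments: dict = field(default_factory=dict)  # funder -> amount

    def commit(self, funder: str, amount: float) -> None:
        self.commitments[funder] = self.commitments.get(funder, 0) + amount

    def settle(self, hours_elapsed: float) -> str:
        raised = sum(self.commitments.values())
        if raised >= TARGET:
            # Team never gains direct access: $10k/month allowance,
            # everything else requires futarchy approval.
            return "treasury-funded"
        if hours_elapsed >= WINDOW_HOURS:
            return "refund-all"  # automatic, no team discretion involved
        return "open"

e = Escrow()
e.commit("alice", 100_000)
print(e.settle(hours_elapsed=72))  # "refund-all": target missed, window closed
```

The credibility comes from the settlement branch being mechanical: no path exists in which the team touches an undersubscribed raise.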
### Additional Evidence (confirm)
*Source: [[2026-02-25-futardio-launch-rock-game]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
Rock Game's ICO pitch explicitly frames futarchy governance as accountability mechanism: 'MetaDAO changes that. Raise proceeds are locked in an on-chain treasury governed by futarchy, where prediction markets — not the founding team — determine how capital is deployed.' The pitch positions this as solving play-to-earn's structural failure where 'teams controlled treasuries. Insiders dumped allocations. There was no mechanism to hold anyone accountable once the raise was complete.' This confirms that projects are marketing unruggable ICO structure as credible commitment device to investors burned by previous extractive launches.
---
Relevant Notes:


@ -41,6 +41,12 @@ This structure is untested in practice. Key risks:
- 18-month cliff may be too long for early-stage projects with high burn rates, creating team retention risk
- No precedent for whether TWAP-based triggers actually prevent manipulation in low-liquidity token markets
### Additional Evidence (confirm)
*Source: [[2026-02-25-futardio-launch-rock-game]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
Rock Game implements performance-gated founder unlocks where 'team rewards scale with token performance, ensuring full alignment from launch through maturity.' The pitch explicitly contrasts this with time-based vesting: 'Founder unlocks are performance-gated, meaning the team benefits only as the game grows and the token appreciates.' This is positioned as applying the same earn-based logic to founders that the game applies to players, creating structural alignment through mechanism consistency rather than just incentive alignment.
---
Relevant Notes:


@ -0,0 +1,56 @@
---
type: claim
domain: internet-finance
description: "Rock Game's MetaDAO Unruggable ICO structure addresses historical P2E governance failures by locking IP in a DAO LLC and gating founder compensation on token performance rather than time"
confidence: experimental
source: "rio, based on Rock Game ICO pitch on MetaDAO Futardio platform (Feb 2026)"
created: 2026-03-12
depends_on:
- "Rock Game ICO pitch, futard.io/launch/48z3txCwsHekZ7b43mPfoB3bMcZv3GpwX7B27x2PdmTA"
- "MetaDAO Unruggable ICO design: futarchy-governed treasury, DAO LLC IP assignment, performance-gated founder unlocks"
challenged_by:
- "Project is pre-operational — governance structures are claimed, not yet stress-tested"
- "DAO LLC IP assignment has uncertain enforcement if founding team disputes ownership post-raise"
---
# Rock Game demonstrates futarchy-governed play-to-earn with performance-gated founder unlocks and DAO LLC IP ownership
Rock Game (launched February 2026 on MetaDAO's Futardio platform) represents the first documented application of the Unruggable ICO structure to the play-to-earn gaming sector. The project explicitly frames its governance choices as a structural response to historical P2E failures — not as a marketing differentiator, but as the core product thesis.
The governance architecture has three interlocking components:
**1. Futarchy-governed treasury.** Raise proceeds are locked in an on-chain treasury where all major capital allocation decisions require futarchy approval. The founding team cannot unilaterally deploy capital, eliminating the single largest vector for P2E founder extraction.
**2. DAO LLC IP assignment.** The game's code, assets, and infrastructure are assigned to a DAO LLC, transferring real IP ownership to token holders rather than the founding entity. This structure (following from the [[Ooki DAO proved that DAOs without legal wrappers face general partnership liability making entity structure a prerequisite for any futarchy-governed vehicle]]) gives token holders legally meaningful ownership over the protocol's core assets — something that was absent in every major P2E collapse.
**3. Performance-gated founder unlocks.** Team compensation scales with token performance rather than calendar time. This directly targets the misalignment that allowed P2E founders to exit with value while players held depreciating assets — since [[time-based token vesting is hedgeable making standard lockups meaningless as alignment mechanisms because investors can short-sell to neutralize lockup exposure while appearing locked]], performance gates are the only alignment mechanism that can't be neutralized.
The raise itself was modest: $10 minimum target, $272 committed, completed within 24 hours (February 25-26, 2026). The oversubscription is notable given the tiny absolute scale — it signals genuine community interest rather than institutional participation.
What makes Rock Game a useful case study is not its scale but its explicit reasoning: the founding team articulated a theory of past failures and designed a governance structure to address each failure mode. Whether the structure holds under real operational pressure is unvalidated — but the architecture is coherent and represents the most complete application of Unruggable ICO principles to gaming as of early 2026.
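The contrast between performance-gated and time-based unlocks can be sketched concretely. The 2x/4x/8x/16x/32x multiples below are borrowed from the MycoRealms design referenced in this KB, and the 18-month cliff from the vesting note it extends; Rock Game's actual thresholds are not disclosed in the pitch text, so treat all parameters as illustrative.

```python
# Hypothetical sketch of performance-gated founder unlocks versus
# time-based vesting. Multiples follow the MycoRealms design noted in
# this KB; Rock Game's actual thresholds are undisclosed.

LAUNCH_PRICE = 1.0
TRANCHES = {2: 0.2, 4: 0.2, 8: 0.2, 16: 0.2, 32: 0.2}  # multiple -> share

def unlocked_share(twap_price: float) -> float:
    """Share of the founder allocation unlocked at a given TWAP price.
    TWAP (time-weighted average price) resists single-block manipulation."""
    multiple = twap_price / LAUNCH_PRICE
    return sum(share for m, share in TRANCHES.items() if multiple >= m)

def time_vested_share(months: int, cliff: int = 18, total: int = 48) -> float:
    """Time-based vesting pays out regardless of performance -- and is
    hedgeable: a holder can short the token while nominally locked."""
    if months < cliff:
        return 0.0
    return min(1.0, months / total)

print(unlocked_share(twap_price=8.0))  # ~0.6: the 2x, 4x, 8x tranches hit
print(time_vested_share(months=24))    # 0.5, paid even if price collapsed
```

The structural point is in the second function: time-based vesting pays out on the calendar alone, which is why the KB treats performance gates as the only unlock mechanism that cannot be neutralized by hedging.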
## Evidence
- Rock Game ICO pitch (futard.io, Feb 2026) — full text including governance rationale, DAO LLC structure, performance-gated unlock description
- $272 raised vs. $10 target — 27.2x oversubscription, completed 2026-02-25 to 2026-02-26
- Launch address: `48z3txCwsHekZ7b43mPfoB3bMcZv3GpwX7B27x2PdmTA`, Token: 3n6
## Challenges
- Pre-operational: no track record of whether governance structures survive real operational decisions under capital pressure
- DAO LLC IP assignment has uncertain real-world enforcement — legal separation from founders hasn't been tested in a dispute
- The pitch explicitly claims "no seed-round discounts, no hidden allocations" — these are unverifiable without full token distribution disclosure
- $272 raise is too small to meaningfully stress-test the governance architecture
---
Relevant Notes:
- [[futarchy-governed liquidation is the enforcement mechanism that makes unruggable ICOs credible because investors can force full treasury return when teams materially misrepresent]] — the enforcement backstop that gives the governance structure credibility
- [[performance-unlocked-team-tokens-with-price-multiple-triggers-and-twap-settlement-create-long-term-alignment-without-initial-dilution]] — MycoRealms implements the same principle with specific trigger details (2x/4x/8x/16x/32x multiples, TWAP settlement)
- [[time-based token vesting is hedgeable making standard lockups meaningless as alignment mechanisms because investors can short-sell to neutralize lockup exposure while appearing locked]] — why performance gates are necessary, not merely preferable
- [[Ooki DAO proved that DAOs without legal wrappers face general partnership liability making entity structure a prerequisite for any futarchy-governed vehicle]] — why the DAO LLC structure matters for IP assignment to be legally meaningful
- [[ownership coins primary value proposition is investor protection not governance quality because anti-rug enforcement through market-governed liquidation creates credible exit guarantees that no amount of decision optimization can match]] — Rock Game's stated rationale for choosing MetaDAO mirrors this claim directly
Topics:
- [[internet-finance/_map]]
- [[core/mechanisms/_map]]


@ -1,72 +1,23 @@
---
type: entity
entity_type: product
name: "Futardio"
domain: internet-finance
handles: ["@futarddotio"]
website: https://futardio.com
status: active
tracked_by: rio
created: 2026-03-11
last_updated: 2026-03-11
launched: 2025-10-01
parent: "[[metadao]]"
category: "Futarchy-governed token launchpad (Solana)"
stage: growth
key_metrics:
total_launches: "45 (verified from platform data)"
total_commits: "$17.8M"
total_funders: "1,010"
  notable_launches: ["Umbra", "Solomon", "SuperClaw ($6M committed)", "Rock Game", "Turtle Cove", "VervePay", "Open Music", "SeekerVault", "LaunchPet", "Seyf", "Areal", "Etnlio"]
mechanism: "Unruggable ICO — futarchy-governed launches with treasury return guarantees"
competitors: ["pump.fun (memecoins)", "Doppler (liquidity bootstrapping)"]
built_on: ["Solana", "MetaDAO Autocrat"]
tags: ["launchpad", "ownership-coins", "futarchy", "unruggable-ico", "permissionless-launches"]
description: "MetaDAO's futarchy-governed token launchpad on Solana, implementing unruggable ICOs with treasury return guarantees"
---
# Futardio
## Overview
MetaDAO's token launch platform. Implements "unruggable ICOs" — permissionless launches where investors can force full treasury return through futarchy-governed liquidation if teams materially misrepresent. Replaced the original uncapped pro-rata mechanism that caused massive overbidding (Umbra: $155M committed for $3M raise = 50x; Solomon: $103M committed for $8M = 13x).
## Current State
- **Launches**: 45 total (verified from platform data, March 2026). Many projects show "REFUNDING" status (failed to meet raise targets). Total commits: $17.8M across 1,010 funders.
- **Mechanism**: Unruggable ICO. Projects raise capital, treasury is held onchain, futarchy proposals govern project direction. If community votes for liquidation, treasury returns to token holders.
- **Quality signal**: The platform is permissionless — anyone can launch. Brand separation between Futardio platform and individual project quality is an active design challenge.
- **Key test case**: Ranger Finance liquidation proposal (March 2026) — first major futarchy-governed enforcement action. Liquidation IS the enforcement mechanism — system working as designed.
- **Low relaunch cost**: ~$90 to launch, enabling rapid iteration (MycoRealms launched, failed, relaunched)
## Timeline
- **2025-10** — Futardio launches. Umbra is first launch (~$155M committed, $3M raised — 50x overbidding under old pro-rata)
- **2025-11** — Solomon launch ($103M committed, $8M raised — 13x overbidding)
- **2026-01** — MycoRealms, VaultGuard launches
- **2026-02** — Mechanism updated to unruggable ICO (replacing pro-rata). HuruPay, Epic Finance, ForeverNow launches
- **2026-02/03** — Launch explosion: Rock Game, Turtle Cove, VervePay, Open Music, SeekerVault, SuperClaw, LaunchPet, Seyf, Areal, Etnlio, and dozens more
- **2026-03** — Ranger Finance liquidation proposal — first futarchy-governed enforcement action
## Competitive Position
- **Unique mechanism**: Only launch platform with futarchy-governed accountability and treasury return guarantees
- **vs pump.fun**: pump.fun is memecoin launch (zero accountability, pure speculation). Futardio is ownership coin launch (futarchy governance, treasury enforcement). Different categories despite both being "launch platforms."
- **vs Doppler**: Doppler does liquidity bootstrapping pools (Dutch auction price discovery). Different mechanism, no governance layer.
- **Structural advantage**: The futarchy enforcement mechanism is novel — no competitor offers investor protection through market-governed liquidation
- **Structural weakness**: Permissionless launches mean quality varies wildly. Platform reputation tied to worst-case projects despite brand separation efforts.
## Investment Thesis
Futardio is the test of whether futarchy can govern capital formation at scale. If unruggable ICOs produce better investor outcomes than unregulated token launches (pump.fun) while maintaining permissionless access, Futardio creates a new category: accountable permissionless fundraising. The Ranger liquidation is the first live test of the enforcement mechanism.
**Thesis status:** ACTIVE
## Challenges
- **Market adoption**: The success of futarchy governance depends on widespread adoption and active market participation.
- **Prediction accuracy**: The effectiveness of futarchy decisions depends on prediction markets pricing outcomes accurately.
## Relationship to KB
- [[MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale]] — parent claim
- [[futarchy-governed liquidation is the enforcement mechanism that makes unruggable ICOs credible because investors can force full treasury return when teams materially misrepresent]] — enforcement mechanism
- [[futarchy-governed permissionless launches require brand separation to manage reputational liability because failed projects on a curated platform damage the platforms credibility]] — active design challenge
---
Relevant Entities:
- [[metadao]] — parent protocol
- [[solomon]] — notable launch
- [[omnipair]] — ecosystem infrastructure
Topics:
- [[internet finance and decision markets]]


@ -0,0 +1,34 @@
---
type: entity
entity_type: company
name: Rock Game
domain: internet-finance
status: active
website: https://joe.com
tracked_by: rio
created: 2026-03-11
key_metrics:
raise_target: "$10.00"
total_raised: "$272.00"
oversubscription_ratio: "27.2x"
launch_date: "2026-02-25"
completion_date: "2026-02-26"
platform: "Futardio"
token_symbol: "3n6"
token_mint: "3n6X4XRJHrkckqX21a5yJdSiGXXZo4MtEvVVsgSAmeta"
---
# Rock Game
Rock Game is a battle royale game built natively on Solana that raised $272 through MetaDAO's unruggable ICO platform on 2026-02-25. The project positions futarchy governance, DAO LLC IP ownership, and performance-gated founder unlocks as structural solutions to play-to-earn's credibility crisis, where previous projects collapsed due to unaccountable teams and unsustainable token emissions.
## Timeline
- **2026-02-25** — Launched ICO on Futardio with $10 target, implementing futarchy-governed treasury, DAO LLC IP assignment, and performance-gated team unlocks
- **2026-02-26** — Completed raise at $272 (27.2x oversubscription) within one day
## Relationship to KB
- [[MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale]] — Rock Game demonstrates MetaDAO platform extending into gaming vertical
- [[futarchy-governed liquidation is the enforcement mechanism that makes unruggable ICOs credible because investors can force full treasury return when teams materially misrepresent]] — Rock Game explicitly markets this accountability mechanism as differentiation
- [[performance-unlocked-team-tokens-with-price-multiple-triggers-and-twap-settlement-create-long-term-alignment-without-initial-dilution]] — Rock Game implements performance-gated founder unlocks


@ -7,9 +7,15 @@ date: 2025-12-01
domain: ai-alignment
secondary_domains: [mechanisms, grand-strategy]
format: paper
status: processed
priority: medium
tags: [full-stack-alignment, institutional-alignment, thick-values, normative-competence, co-alignment]
processed_by: theseus
processed_date: 2026-03-11
claims_extracted: ["beneficial-ai-outcomes-require-institutional-co-alignment-not-just-model-alignment.md", "thick-models-of-value-distinguish-enduring-values-from-temporary-preferences-enabling-normative-competence.md"]
enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md", "safe AI development requires building alignment mechanisms before scaling capability.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two novel claims: (1) institutional co-alignment requirement and (2) thick models of value. Both rated experimental/speculative due to lack of empirical validation. Four enrichments extend existing coordination and alignment claims. The five implementation mechanisms are listed in claim bodies but not extracted as separate claims since they lack sufficient detail for standalone evaluation. Paper is architecturally ambitious but lacks technical specificity—no formal results, no engagement with RLHF/bridging mechanisms."
---
## Content


@ -6,9 +6,15 @@ url: "https://www.futard.io/launch/48z3txCwsHekZ7b43mPfoB3bMcZv3GpwX7B27x2PdmTA"
date: 2026-02-25
domain: internet-finance
format: data
status: processed
tags: [futardio, metadao, futarchy, solana]
event_type: launch
processed_by: rio
processed_date: 2026-03-12
claims_extracted:
- "rock-game-demonstrates-futarchy-governed-play-to-earn-with-performance-gated-founder-unlocks-and-dao-llc-ip-ownership.md"
- "battle-royale-game-mechanics-create-deflationary-token-economies-through-competitive-filtering-versus-inflationary-play-to-earn-models.md"
enrichments: []
---
## Launch Details
@ -85,3 +91,11 @@ MetaDAO changes that. Raise proceeds are locked in an on-chain treasury governed
- Total approved: $10.00
- Closed: 2026-02-26
- Completed: 2026-02-26
## Key Facts
- Rock Game raised $272 against $10 target on Futardio (2026-02-25)
- Rock Game completed raise in one day (2026-02-25 to 2026-02-26)
- Rock Game token: 3n6, mint address 3n6X4XRJHrkckqX21a5yJdSiGXXZo4MtEvVVsgSAmeta
- Rock Game website: https://joe.com
- Rock Game launch address: 48z3txCwsHekZ7b43mPfoB3bMcZv3GpwX7B27x2PdmTA