Compare commits

7 commits: 846db33376 ... 42e3ddb0b5

| Author | SHA1 | Date |
|---|---|---|
|  | 42e3ddb0b5 |  |
|  | 6824f5c924 |  |
|  | f884dde98a |  |
|  | 55fb571dea |  |
|  | 71227f3bca |  |
|  | 723bf4c6ba |  |
|  | 6a80039f2c |  |

14 changed files with 361 additions and 3 deletions
@@ -23,6 +23,9 @@ The architecture follows biological organization: nested Markov blankets with sp

- [[collaborative knowledge infrastructure requires separating the versioning problem from the knowledge evolution problem because git solves file history but not semantic disagreement or insight-level attribution]] — the design challenge
- [[person-adapted AI compounds knowledge about individuals while idea-learning AI compounds knowledge about domains and the architectural gap between them is where collective intelligence lives]] — where CI lives

## Structural Positioning

- [[agent-mediated knowledge bases are structurally novel because they combine atomic claims adversarial multi-agent evaluation and persistent knowledge graphs which Wikipedia Community Notes and prediction markets each partially implement but none combine]] — what makes this architecture unprecedented

## Operational Architecture (how the Teleo collective works today)

- [[adversarial PR review produces higher quality knowledge than self-review because separated proposer and evaluator roles catch errors that the originating agent cannot see]] — the core quality mechanism
- [[prose-as-title forces claim specificity because a proposition that cannot be stated as a disagreeable sentence is not a real claim]] — the simplest quality gate
@@ -0,0 +1,48 @@

---
type: claim
domain: living-agents
description: "Compares Teleo's architecture against Wikipedia, Community Notes, prediction markets, and Stack Overflow across three structural dimensions — atomic claims with independent evaluability, adversarial multi-agent evaluation with proposer/evaluator separation, and persistent knowledge graphs with semantic linking and cascade detection — showing no existing system combines all three"
confidence: experimental
source: "Theseus, original analysis grounded in CI literature and operational comparison of existing knowledge aggregation systems"
created: 2026-03-11
---

# Agent-mediated knowledge bases are structurally novel because they combine atomic claims adversarial multi-agent evaluation and persistent knowledge graphs which Wikipedia Community Notes and prediction markets each partially implement but none combine

Existing knowledge aggregation systems each implement one or two of three critical structural properties, but none combine all three. This combination produces qualitatively different collective intelligence dynamics.

## The three structural properties

**1. Atomic claims with independent evaluability.** Each knowledge unit is a single proposition with its own evidence, confidence level, and challenge surface. Wikipedia merges claims into consensus articles, destroying the disagreement structure — you can't independently evaluate or challenge a single claim within an article without engaging the whole article's editorial process. Prediction markets price single propositions but can't link them into structured knowledge. Stack Overflow evaluates Q&A pairs but not propositions. Atomic claims enable granular evaluation: each can be independently challenged, enriched, or deprecated without affecting others.

**2. Adversarial multi-agent evaluation.** Knowledge inputs are evaluated by AI agents through structured adversarial review — proposer/evaluator separation ensures the entity that produces a claim is never the entity that approves it. Wikipedia uses human editor consensus (collaborative, not adversarial by design). Community Notes uses algorithmic bridging (matrix factorization, no agent evaluation). Prediction markets use price signals (no explicit evaluation of claim quality, only probability). The agent-mediated model inverts RLHF: instead of humans evaluating AI outputs, AI evaluates knowledge inputs using a codified epistemology.

**3. Persistent knowledge graphs with semantic linking.** Claims are wiki-linked into a traversable graph where evidence chains are auditable: evidence → claims → beliefs → positions. Community Notes has no cross-note memory — each note is evaluated independently. Prediction markets have no cross-question linkage. Wikipedia has hyperlinks but without semantic typing or confidence weighting. The knowledge graph enables cascade detection: when a foundational claim is challenged, the system can trace which beliefs and positions depend on it.
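The cascade detection described under property 3 can be sketched as a simple graph traversal. This is a hypothetical illustration: the node names and the `dependents` edges are invented for the example, not real KB data.

```python
# Hypothetical sketch of cascade detection over a typed knowledge graph.
# Node names and edges are illustrative only.
from collections import deque

# edges: node -> nodes that depend on it (evidence -> claims -> beliefs -> positions)
dependents = {
    "evidence:polymarket-2024": ["claim:markets-beat-polls"],
    "claim:markets-beat-polls": ["belief:adversarial-aggregation-works"],
    "belief:adversarial-aggregation-works": ["position:use-adversarial-review"],
}

def cascade(challenged: str) -> list[str]:
    """Breadth-first traversal: everything downstream of a challenged node."""
    seen, queue, order = {challenged}, deque([challenged]), []
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
                order.append(dep)
    return order

print(cascade("evidence:polymarket-2024"))
# flags the claim, belief, and position downstream of the challenged evidence
```

The traversal only identifies what needs re-evaluation; in the architecture described above, performing the actual update is the evaluators' job.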
## Why the combination matters

Each property alone is well understood. The novelty is in their interaction:

- Atomic claims + adversarial evaluation = each claim gets independent quality assessment (not possible when claims are merged into articles)
- Adversarial evaluation + knowledge graph = evaluators can check whether a new claim contradicts, supports, or duplicates existing linked claims (not possible without persistent structure)
- Knowledge graph + atomic claims = the system can detect when new evidence should cascade through beliefs (not possible to act on without evaluators to actually perform the update)
The closest analog is scientific peer review, which has atomic claims (papers make specific arguments) and adversarial evaluation (reviewers challenge the work), but lacks persistent knowledge graphs — scientific papers cite each other but don't form a traversable, semantically typed graph with confidence weighting and cascade detection.

## What this does NOT claim

This claim is structural, not evaluative. It does not claim that agent-mediated knowledge bases produce *better* knowledge than Wikipedia or prediction markets — that is an empirical question we don't yet have data to answer. It claims the architecture is *structurally novel* in combining properties that existing systems don't combine. Whether structural novelty translates to superior collective intelligence is a separate, testable proposition.

---

Relevant Notes:

- [[adversarial PR review produces higher quality knowledge than self-review because separated proposer and evaluator roles catch errors that the originating agent cannot see]] — the operational evidence for property #2
- [[wiki-link graphs create auditable reasoning chains because every belief must cite claims and every position must cite beliefs making the path from evidence to conclusion traversable]] — the mechanism behind property #3
- [[atomic notes with one claim per file enable independent evaluation and granular linking because bundled claims force reviewers to accept or reject unrelated propositions together]] — the rationale for property #1
- [[all agents running the same model family creates correlated blind spots that adversarial review cannot catch because the evaluator shares the proposers training biases]] — the known limitation of property #2 when model diversity is absent
- [[protocol design enables emergent coordination of arbitrary complexity as Linux Bitcoin and Wikipedia demonstrate]] — prior art: protocol-based coordination systems that partially implement these properties
- [[domain specialization with cross-domain synthesis produces better collective intelligence than generalist agents because specialists build deeper knowledge while a dedicated synthesizer finds connections they cannot see from within their territory]] — the specialization architecture that makes adversarial evaluation between agents meaningful

Topics:

- [[core/living-agents/_map]]
@@ -92,6 +92,9 @@ Evidence from documented AI problem-solving cases, primarily Knuth's "Claude's C

- [[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments]] — Thompson/Karp: the state monopoly on force makes private AI control structurally untenable
- [[anthropomorphizing AI agents to claim autonomous action creates credibility debt that compounds until a crisis forces public reckoning]] (in `core/living-agents/`) — narrative debt from overstating AI agent autonomy

## Governance & Alignment Mechanisms

- [[transparent algorithmic governance where AI response rules are public and challengeable through the same epistemic process as the knowledge base is a structurally novel alignment approach]] — alignment through transparent, improvable rules rather than designer specification

## Coordination & Alignment Theory (local)

Claims that frame alignment as a coordination problem, moved here from foundations/ in PR #49:

- [[AI alignment is a coordination problem not a technical problem]] — the foundational reframe
@@ -0,0 +1,59 @@

---
type: claim
domain: ai-alignment
description: "Argues that publishing how AI agents decide who and what to respond to — and letting users challenge and improve those rules through the same process that governs the knowledge base — is a fundamentally different alignment approach from hidden system prompts, RLHF, or Constitutional AI"
confidence: experimental
challenged_by: "Reflexive capture — users who game rules to increase influence can propose further rule changes benefiting themselves, analogous to regulatory capture. Agent evaluation as constitutional check is the proposed defense but is untested."
source: "Theseus, original analysis building on Cory Abdalla's design principle for Teleo agent governance"
created: 2026-03-11
---

# Transparent algorithmic governance where AI response rules are public and challengeable through the same epistemic process as the knowledge base is a structurally novel alignment approach

Current AI alignment approaches share a structural feature: the alignment mechanism is designed by the system's creators and opaque to its users. RLHF training data is proprietary. Constitutional AI principles are published but the implementation is black-boxed. Platform moderation rules are enforced by algorithms no user can inspect or influence. Users experience alignment as arbitrary constraint, not as a system they can understand, evaluate, and improve.

## The inversion

The alternative: make the rules governing AI agent behavior — who gets responded to, how contributions are evaluated, what gets prioritized — public, challengeable, and subject to the same epistemic process as every other claim in the knowledge base.

This means:

1. **The response algorithm is public.** Users can read the rules that govern how agents behave. No hidden system prompts, no opaque moderation criteria.
2. **Users can propose changes.** If a rule produces bad outcomes, users can challenge it — with evidence, through the same adversarial contribution process used for domain knowledge.
3. **Agents evaluate proposals.** Changes to the response algorithm go through the same multi-agent adversarial review as any other claim. The rules change when the evidence and argument warrant it, not when a majority votes for it or when the designer decides to update.
4. **The meta-algorithm is itself inspectable.** The process by which agents evaluate change proposals is public. Users can challenge the evaluation process, not just the rules it produces.
## Why this is structurally different

This is not just "transparency" — it's reflexive governance. The alignment mechanism is itself a knowledge object, subject to the same epistemic standards and adversarial improvement as the knowledge it governs. This creates a self-improving alignment system: the rules get better through the same process that makes the knowledge base better.

The design principle from coordination theory is directly applicable: designing coordination rules is categorically different from designing coordination outcomes. The public response algorithm is a coordination rule. What emerges from applying it is the coordination outcome. Making rules public and improvable is the Hayekian move — designed rules of just conduct enabling spontaneous order of greater complexity than deliberate arrangement could achieve.

This also instantiates a core TeleoHumanity axiom: the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance. Transparent algorithmic governance is the mechanism by which continuous weaving happens — users don't specify their values once; they iteratively challenge and improve the rules that govern agent behavior.

## The risk: reflexive capture

If users can change the rules that govern which users get responses, you get a feedback loop. Users who game the rules to increase their influence can then propose rule changes that benefit them further. This is the analog of regulatory capture in traditional governance.

The structural defense: agents evaluate change proposals against the knowledge base and epistemic standards, not against user preferences or popularity metrics. The agents serve as a constitutional check — they can reject popular rule changes that degrade epistemic quality. This works because agent evaluation criteria are themselves public and challengeable, but changes to evaluation criteria require stronger evidence than changes to response rules (analogous to constitutional amendments requiring supermajorities).
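The tiered-evidence defense can be sketched as a minimal acceptance rule. The thresholds and scoring here are invented for illustration; the note does not specify any concrete mechanism, so treat this as one possible shape, not the actual design.

```python
# Illustrative sketch of the "constitutional check": meta-rule changes
# (evaluation criteria) require a higher evidence bar than ordinary
# response-rule changes. All numbers are invented.

THRESHOLDS = {
    "response_rule": 0.6,        # ordinary rule change
    "evaluation_criteria": 0.9,  # meta-rule: the "constitutional amendment" tier
}

def accept(kind: str, evidence_score: float, popularity: float) -> bool:
    # Popularity is deliberately ignored: proposals are judged against
    # epistemic standards, which is what blocks reflexive capture.
    return evidence_score >= THRESHOLDS[kind]

# A popular but weakly evidenced change passes the ordinary tier
# and fails the constitutional tier.
assert accept("response_rule", evidence_score=0.7, popularity=0.95)
assert not accept("evaluation_criteria", evidence_score=0.7, popularity=0.95)
```

The design point is the asymmetry itself: whatever the real scoring looks like, changing how proposals are evaluated must be harder than changing what the rules say.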
## What this does NOT claim

This claim does not assert that transparent algorithmic governance *solves* alignment. It asserts that it is *structurally different* from existing approaches in a way that addresses known limitations — specifically, the specification trap (values encoded at design time become brittle) and the alignment tax (safety as cost rather than feature). Whether this approach produces better alignment outcomes than RLHF or Constitutional AI is an empirical question that requires deployment-scale evidence.

---

Relevant Notes:

- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] — the TeleoHumanity axiom this approach instantiates
- [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — the failure mode that transparent governance addresses
- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]] — the theoretical foundation: design rules, let behavior emerge
- [[Hayek argued that designed rules of just conduct enable spontaneous order of greater complexity than deliberate arrangement could achieve]] — the Hayekian insight applied to AI governance
- [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]] — empirical evidence that distributed alignment input produces effective governance
- [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]] — evidence that user-surfaced norms differ from designer assumptions
- [[adversarial PR review produces higher quality knowledge than self-review because separated proposer and evaluator roles catch errors that the originating agent cannot see]] — the adversarial review mechanism that governs rule changes
- [[social enforcement of architectural rules degrades under tool pressure because automated systems that bypass conventions accumulate violations faster than review can catch them]] — the tension: transparent governance relies on social enforcement which this claim shows degrades under tool pressure
- [[protocol design enables emergent coordination of arbitrary complexity as Linux Bitcoin and Wikipedia demonstrate]] — prior art for protocol-based governance producing emergent coordination
- [[domain specialization with cross-domain synthesis produces better collective intelligence than generalist agents because specialists build deeper knowledge while a dedicated synthesizer finds connections they cannot see from within their territory]] — the agent specialization that makes distributed evaluation meaningful

Topics:

- [[domains/ai-alignment/_map]]
entities/internet-finance/coal-establish-development-fund.md (new file, 39 lines)

@@ -0,0 +1,39 @@
---
type: entity
entity_type: decision_market
name: "COAL: Establish Development Fund?"
domain: internet-finance
status: failed
parent_entity: "coal"
platform: "futardio"
proposer: "AH7F2EPHXWhfF5yc7xnv1zPbwz3YqD6CtAqbCyE9dy7r"
proposal_url: "https://www.futard.io/proposal/DhY2YrMde6BxiqCrqUieoKt5TYzRwf2KYE3J2RQyQc7U"
proposal_date: 2024-12-05
resolution_date: 2024-12-08
category: "treasury"
summary: "Proposal to allocate 4.2% of mining emissions to a development fund for protocol development, community rewards, and marketing"
tracked_by: rio
created: 2026-03-11
---

# COAL: Establish Development Fund?

## Summary

Proposal to establish a development fund through a 4.2% emissions allocation (472.5 COAL/day) to support protocol development, reward community contributions, and enable marketing initiatives. The allocation would increase total supply growth by 4.2% rather than reducing mining rewards. Failed after a 3-day voting period.
## Market Data

- **Outcome:** Failed
- **Proposer:** AH7F2EPHXWhfF5yc7xnv1zPbwz3YqD6CtAqbCyE9dy7r
- **Proposal Account:** DhY2YrMde6BxiqCrqUieoKt5TYzRwf2KYE3J2RQyQc7U
- **DAO Account:** 3LGGRzLrgwhEbEsNYBSTZc5MLve1bw3nDaHzzfJMQ1PG
- **Duration:** 2024-12-05 to 2024-12-08
- **Daily Allocation Proposed:** 472.5 COAL (4.2% of 11,250 COAL/day base rate)
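A quick check of the proposal's arithmetic, using the figures from the Market Data above:

```python
# Verify the proposed daily allocation against the stated base rate.
base_emission = 11_250   # COAL/day base emission rate
allocation_pct = 0.042   # proposed 4.2% development-fund allocation

daily_allocation = base_emission * allocation_pct
print(round(daily_allocation, 1))  # 472.5 COAL/day, matching the proposal
```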
## Significance

This proposal tested community willingness to fund protocol development through inflation in a fair-launch token with no pre-mine or team allocation. The failure suggests miners prioritized emission purity over development funding, or that the 4.2% dilution was perceived as too high. The proposal included transparency commitments (weekly claims, public expenditure tracking, DAO-managed multisig) but still failed to achieve market support.

The rejection creates a sustainability question for COAL: how does a zero-premine project fund ongoing development without either diluting miners or relying on volunteer labor?

## Relationship to KB

- Related to [[futarchy-daos-require-mintable-governance-tokens-because-fixed-supply-treasuries-exhaust-without-issuance-authority-forcing-disruptive-token-architecture-migrations]] — COAL attempted to add issuance authority post-launch
- Related to [[MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions]] — this was a contested decision that still failed
entities/internet-finance/coal.md (new file, 32 lines)

@@ -0,0 +1,32 @@
---
type: entity
entity_type: company
name: "COAL"
domain: internet-finance
status: active
founded: 2024-08
website: ""
tracked_by: rio
created: 2026-03-11
key_metrics:
  launch_type: "fair launch"
  premine: "none"
  team_allocation: "none"
  base_emission_rate: "11,250 COAL/day"
  governance_platform: "futardio"
---

# COAL

## Overview

COAL is a community-driven cryptocurrency project that launched in August 2024 with a fair launch model — no pre-mine and no team allocation. The project uses futarchy governance through Futardio and operates on a proof-of-work mining model with daily emissions. The zero-allocation launch model creates sustainability questions around funding protocol development.

## Timeline

- **2024-08** — Fair launch with no pre-mine or team allocation
- **2024-12-05** — [[coal-establish-development-fund]] proposed: 4.2% emissions allocation for development fund
- **2024-12-08** — Development fund proposal failed, maintaining the zero-allocation model

## Relationship to KB

- Example of [[futarchy-daos-require-mintable-governance-tokens-because-fixed-supply-treasuries-exhaust-without-issuance-authority-forcing-disruptive-token-architecture-migrations]] — attempted to add issuance post-launch
- Uses [[futardio]] for governance decisions
- Tests whether fair-launch tokens can fund development without initial allocations
@@ -47,6 +47,7 @@ MetaDAO's token launch platform. Implements "unruggable ICOs" — permissionless

- **2026-03-07** — Areal DAO launch: $50K target, raised $11,654 (23.3%), REFUNDING status by 2026-03-08 — first documented failed futarchy-governed fundraise on the platform
- **2026-03-04** — [[seekervault]] fundraise launched targeting $75,000, closed the next day with only $1,186 (1.6% of target) in refunding status
- **2026-03-05** — [[insert-coin-labs-futardio-fundraise]] launched for Web3 gaming studio (failed, $2,508 / $50K = 5% of target)
- **2026-03-05** — [[git3-futardio-fundraise]] failed: Git3 raised $28,266 of $100K target (28.3%) before entering refunding status, demonstrating market filtering even with live MVP

## Competitive Position

- **Unique mechanism**: Only launch platform with futarchy-governed accountability and treasury return guarantees
- **vs pump.fun**: pump.fun is memecoin launch (zero accountability, pure speculation). Futardio is ownership coin launch (futarchy governance, treasury enforcement). Different categories despite both being "launch platforms."
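The fill rates quoted in the timeline above can be recomputed from the committed and target amounts (a quick sanity check on the figures in the entries):

```python
# Fill rates for the Futardio fundraises listed above: committed / target.
raises = {
    "Areal DAO":        (11_654, 50_000),
    "SeekerVault":      (1_186, 75_000),
    "Insert Coin Labs": (2_508, 50_000),
    "Git3":             (28_266, 100_000),
}

for name, (committed, target) in raises.items():
    print(f"{name}: {committed / target:.1%}")
# Areal DAO: 23.3%, SeekerVault: 1.6%, Insert Coin Labs: 5.0%, Git3: 28.3%
```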
entities/internet-finance/git3-futardio-fundraise.md (new file, 51 lines)

@@ -0,0 +1,51 @@
---
type: entity
entity_type: decision_market
name: "Git3: Futardio Fundraise"
domain: internet-finance
status: failed
parent_entity: "[[git3]]"
platform: "futardio"
proposal_url: "https://www.futard.io/launch/HKRDmghovXSCMobiRCZ7BBdHopEizyKmnhJKywjk3vUa"
proposal_date: 2026-03-05
resolution_date: 2026-03-06
category: "fundraise"
summary: "Git3 attempted to raise $100K through futarchy-governed launch for on-chain Git infrastructure"
key_metrics:
  funding_target: "$100,000"
  total_committed: "$28,266"
  outcome: "refunding"
  token: "6VT"
  token_mint: "6VTMeDtrtimh2988dhfYi2rMEDVdYzuHoSgERUmdmeta"
tracked_by: rio
created: 2026-03-11
---

# Git3: Futardio Fundraise

## Summary

Git3 launched a futarchy-governed fundraise on Futardio targeting $100,000 to build on-chain Git infrastructure with permanent storage on the Irys blockchain. The project proposed bringing Git repositories on-chain as NFTs with x402 monetization, GitHub Actions integration, and AI agent interoperability. The raise achieved 28.3% of target ($28,266 committed) before entering refunding status after one day.

## Market Data

- **Outcome:** Failed (Refunding)
- **Funding Target:** $100,000
- **Total Committed:** $28,266 (28.3% of target)
- **Launch Date:** 2026-03-05
- **Closed:** 2026-03-06
- **Token:** 6VT
- **Platform:** Futardio v0.7
## Significance

This represents a failed futarchy-governed fundraise for developer infrastructure, demonstrating that not all technically sound projects achieve funding targets through prediction markets. The 28.3% fill rate suggests either insufficient market validation of the code-as-asset thesis, limited awareness of the launch, or skepticism about the team's ability to execute the ambitious roadmap (12-month runway, three development phases, enterprise features).

The refunding outcome is notable because Git3 had a live MVP, clear technical architecture, and alignment with broader trends (on-chain code storage, AI agent infrastructure, x402 protocol). The failure suggests futarchy markets can filter projects even when fundamentals appear strong, potentially due to go-to-market concerns, competitive positioning (GitHub's dominance), or team credibility questions.

## Relationship to KB

- [[git3]] — parent entity
- [[futardio]] — fundraising platform
- [[MetaDAO]] — futarchy infrastructure provider
- Demonstrates futarchy-governed fundraise failure despite live MVP and technical merit
entities/internet-finance/git3.md (new file, 38 lines)

@@ -0,0 +1,38 @@
---
type: entity
entity_type: company
name: "Git3"
domain: internet-finance
status: active
founded: 2025
website: "https://git3.io"
twitter: "https://x.com/TryGit3"
telegram: "https://t.me/Git3io"
key_people:
  - "Git3 team"
key_metrics:
  funding_target: "$100,000"
  total_committed: "$28,266"
  launch_status: "refunding"
  launch_date: "2026-03-05"
  mvp_status: "live"
tracked_by: rio
created: 2026-03-11
---

# Git3

Git3 is infrastructure that brings Git repositories on-chain, enabling code ownership, censorship resistance, and monetization through the x402 protocol. Built on the Irys blockchain, Git3 stores complete Git history as on-chain NFTs with permanent storage guarantees.

## Timeline

- **2026-03-05** — Launched futarchy-governed fundraise on Futardio targeting $100K; raised $28,266 before entering refunding status
- **2025-Q1** — MVP launched at git3.io with GitHub Actions integration, web3 wallet connection, and blockchain querying via @irys/query

## Relationship to KB

- [[futardio]] — fundraising platform
- [[MetaDAO]] — futarchy governance infrastructure
- Git3 demonstrates code-as-asset tokenization with x402 payment rails for developer monetization
- Vampire attack strategy: seamless GitHub integration without workflow disruption
- Revenue model: creator fees on repository NFT sales, protocol fees on x402 transactions, agent royalties on code execution
@@ -10,6 +10,9 @@ What collective intelligence IS, how it works, and the theoretical foundations f

- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — network topology matters
- [[collective intelligence within a purpose-driven community faces a structural tension because shared worldview correlates errors while shared purpose enables coordination]] — the core tension

## Contribution & Evaluation

- [[adversarial contribution produces higher-quality collective knowledge than collaborative contribution when wrong challenges have real cost evaluation is structurally separated from contribution and confirmation is rewarded alongside novelty]] — when adversarial beats collaborative

## Coordination Design

- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]] — rules not outcomes
- [[Ostrom proved communities self-govern shared resources when eight design principles are met without requiring state control or privatization]] — the empirical evidence
@@ -0,0 +1,50 @@

---
type: claim
domain: collective-intelligence
description: "Identifies three necessary conditions under which adversarial knowledge contribution ('tell us something we don't know') produces genuine collective intelligence rather than selecting for contrarianism. Key reframe: the adversarial dynamic should be contributor vs. knowledge base, not contributor vs. contributor"
confidence: experimental
source: "Theseus, original analysis drawing on prediction market evidence, scientific peer review, and mechanism design theory"
created: 2026-03-11
---

# Adversarial contribution produces higher-quality collective knowledge than collaborative contribution when wrong challenges have real cost evaluation is structurally separated from contribution and confirmation is rewarded alongside novelty

"Tell us something we don't know" is a more effective prompt for collective knowledge than "help us build consensus" — but only when three structural conditions prevent the adversarial dynamic from degenerating into contrarianism.

## Why adversarial beats collaborative (the base case)

The hardest problem in knowledge systems is surfacing what the system doesn't already know. Collaborative systems (Wikipedia's consensus model, corporate knowledge bases) are structurally biased toward confirming and refining existing knowledge. They're excellent at polishing what's already there but poor at incorporating genuinely novel — and therefore initially uncomfortable — information.

Prediction markets demonstrate the adversarial alternative: every trade is a bet that the current price is wrong. The market rewards traders who know something the market doesn't. Polymarket's 2024 US election performance — more accurate than professional polling — is evidence that adversarial information aggregation outperforms collaborative consensus on complex factual questions.

Scientific peer review is also adversarial by design: reviewers are selected specifically to challenge the paper. The system produces higher-quality knowledge than self-review precisely because the adversarial dynamic catches errors, overclaims, and gaps that the author cannot see.
## The three conditions
|
||||
|
||||
**Condition 1: Wrong challenges must have real cost.** In prediction markets, contrarians who are wrong lose money. In scientific review, reviewers who reject valid work damage their reputation. Without cost of being wrong, the system selects for volume of challenges, not quality. The cost doesn't have to be financial — it can be reputational (contributor's track record is visible), attentional (low-quality challenges consume the contributor's limited review allocation), or structural (challenges require evidence, not just assertions).
|
||||
|
||||
**Condition 2: Evaluation must be structurally separated from contribution.** If contributors evaluate each other's work, adversarial dynamics produce escalation rather than knowledge improvement — debate competitions, not truth-seeking. The Teleo model separates contributors (who propose challenges and new claims) from evaluators (AI agents who assess evidence quality against codified epistemic standards). The evaluators are not in the adversarial game; they referee it. This prevents the adversarial dynamic from becoming interpersonal.
**Condition 3: Confirmation must be rewarded alongside novelty.** In science, replication studies are as important as discoveries — but dramatically undervalued by journals and funders. If a system only rewards novelty ("tell us something we don't know"), it systematically underweights evidence that confirms existing claims. Enrichments — adding new evidence to strengthen an existing claim — must be recognized as contributions, not dismissed as redundant. Otherwise the system selects for claims that sound surprising over claims that are true.
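A toy scoring rule illustrates the balance (the weights are invented for illustration; the only load-bearing choice is that the enrichment weight is nonzero):

```python
# Illustrative contribution weights. If "enrichment" were scored at zero,
# the system would select for surprising-sounding claims over true ones.
SCORES = {
    "new_claim": 3.0,           # novel, disagreeable proposition
    "challenge_upheld": 3.0,    # found a real error in existing knowledge
    "enrichment": 2.0,          # confirming evidence for an existing claim
    "challenge_rejected": 0.0,  # cost already paid via the review budget
}

def contribution_score(events: list) -> float:
    return sum(SCORES[event] for event in events)
```

A contributor who only strengthens existing claims still accumulates credit, so confirmation work is never strictly dominated by novelty-chasing.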
## The key reframe: contributor vs. knowledge base, not contributor vs. contributor
The adversarial dynamic should be between contributors and the existing knowledge — "challenge what the system thinks it knows" — not between contributors and each other. When contributors compete to prove each other wrong, you get argumentative escalation. When contributors compete to identify gaps, errors, and blindspots in the collective knowledge, you get genuine intelligence amplification.
This distinction maps to the difference between debate (adversarial between parties) and scientific inquiry (adversarial against the current state of knowledge). Both are adversarial, but the target of the adversarial pressure produces categorically different dynamics.
---
Relevant Notes:
- [[adversarial PR review produces higher quality knowledge than self-review because separated proposer and evaluator roles catch errors that the originating agent cannot see]] — operational evidence for condition #2 in a multi-agent context
- [[speculative markets aggregate information through incentive and selection effects not wisdom of crowds]] — the mechanism by which adversarial markets produce collective intelligence
- [[collective intelligence requires diversity as a structural precondition not a moral preference]] — adversarial contribution is one mechanism for maintaining diversity against convergence pressure
- [[partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity]] — structural conditions under which diversity (and therefore adversarial input) matters most
- [[confidence calibration with four levels enforces honest uncertainty because proven requires strong evidence while speculative explicitly signals theoretical status]] — the confidence system that operationalizes condition #1 (new claims enter at low confidence and must earn upgrades)
- [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] — contrast case: adversarial debate between AI systems degrades at scale, while adversarial contribution between humans and a knowledge base may not face the same scaling constraint
- [[domain specialization with cross-domain synthesis produces better collective intelligence than generalist agents because specialists build deeper knowledge while a dedicated synthesizer finds connections they cannot see from within their territory]] — the structural context in which adversarial contribution operates
- [[protocol design enables emergent coordination of arbitrary complexity as Linux Bitcoin and Wikipedia demonstrate]] — existence proofs of adversarial/competitive contribution producing collective intelligence at scale

Topics:
- [[foundations/collective-intelligence/_map]]

@@ -6,9 +6,13 @@ url: "https://www.futard.io/proposal/DhY2YrMde6BxiqCrqUieoKt5TYzRwf2KYE3J2RQyQc7
 date: 2024-12-05
 domain: internet-finance
 format: data
-status: unprocessed
+status: processed
 tags: [futardio, metadao, futarchy, solana, governance]
 event_type: proposal
+processed_by: rio
+processed_date: 2026-03-11
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Factual governance proposal data. Created decision_market entity for the proposal and parent entity for COAL project. No novel claims about futarchy mechanisms—this is a straightforward failed treasury proposal. The failure is notable as data point but doesn't generate mechanism insights beyond what existing claims already cover."
 ---
 
 ## Proposal Details

@@ -71,3 +75,11 @@ If the emission rate were adjusted to 10,000 \$COAL/day:
 - Autocrat version: 0.3
-- Completed: 2024-12-08
+- Ended: 2024-12-08
+
+
+## Key Facts
+- COAL fair launched August 2024 with no pre-mine or team allocation
+- Base emission rate: 11,250 COAL/day
+- Proposed development allocation: 472.5 COAL/day (4.2%)
+- Development fund proposal failed 2024-12-08 after 3-day voting period
+- Proposal included weekly claims, public expenditure tracking, DAO-managed multisig

@@ -6,9 +6,13 @@ url: "https://www.futard.io/launch/HKRDmghovXSCMobiRCZ7BBdHopEizyKmnhJKywjk3vUa"
 date: 2026-03-05
 domain: internet-finance
 format: data
-status: unprocessed
+status: processed
 tags: [futardio, metadao, futarchy, solana]
 event_type: launch
+processed_by: rio
+processed_date: 2026-03-11
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Factual launch data for Git3 futarchy-governed fundraise. No novel claims about futarchy mechanisms or internet finance dynamics — this is a straightforward failed fundraise with standard pitch deck content. Created entity pages for Git3 (company) and the fundraise decision market. The failure is notable as a data point (28.3% fill rate despite live MVP) but doesn't generate new theoretical claims about futarchy or capital formation mechanisms beyond what's already captured in existing KB claims about futarchy variance and market filtering."
 ---
 
 ## Launch Details

@@ -268,3 +272,13 @@ Future revenue streams include enterprise licensing, premium features, and custo
 - Token mint: `6VTMeDtrtimh2988dhfYi2rMEDVdYzuHoSgERUmdmeta`
 - Version: v0.7
 - Closed: 2026-03-06
+
+
+## Key Facts
+- Git3 launched futarchy-governed fundraise on Futardio 2026-03-05
+- Git3 raised $28,266 of $100,000 target (28.3% fill rate)
+- Git3 fundraise entered refunding status 2026-03-06
+- Git3 MVP live at git3.io with GitHub Actions integration
+- Git3 built on Irys blockchain for permanent storage
+- Git3 proposed 12-month runway with $8K monthly burn rate
+- Git3 revenue model: creator fees on NFT sales, protocol fees on x402 transactions, agent royalties

@@ -8,11 +8,16 @@ date: 2026-03-08
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 format: tweet
-status: unprocessed
+status: null-result
 priority: high
 tags: [autoresearch, multi-agent, git-coordination, collective-intelligence, agent-collaboration]
 flagged_for_theseus: ["Core AI agent coordination architecture — directly relevant to multi-model collaboration claims"]
 flagged_for_leo: ["Cross-domain synthesis — this is what we're building with the Teleo collective"]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["coordination-protocol-design-produces-larger-capability-gains-than-model-scaling.md", "no-research-group-is-building-alignment-through-collective-intelligence-infrastructure-despite-the-field-converging-on-problems-that-require-it.md", "multi-model-collaboration-solved-problems-that-single-models-could-not-because-different-AI-architectures-contribute-complementary-capabilities-as-the-even-case-solution-to-Knuths-Hamiltonian-decomposition-required-GPT-and-Claude-working-together.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Karpathy independently arrives at the same collective intelligence architecture thesis that Teleo is building. Two new claims extracted on agent research communities and Git's inadequacy for agent-scale collaboration. Three enrichments confirm/extend existing coordination and multi-agent claims. High-value source — validates core Teleo thesis from a credible independent source (former Tesla AI director, 3M+ followers). Agent notes correctly flagged this as directly relevant to multi-model collaboration and coordination protocol claims."
 ---
 
 ## Content