Auto: core/product-strategy.md | 1 file changed, 33 insertions(+), 5 deletions(-)
# TeleoHumanity Product Strategy
Commander's intent: **AI agents that research, invest, explain reasoning — anyone can challenge, improve, share returns.**
## Mission
This document captures the strategic design decisions that govern how TeleoHumanity creates value for participants and sustains itself economically. It is the bridge between the knowledge architecture (epistemology.md) and the user-facing experience.
We're building collective AI to track where AI is heading and advocate for it going well, and to accelerate the financial infrastructure that makes ownership permissionless. These are the two most important problems we see. We built agents to research them rigorously, and you can use their mental models, challenge their reasoning, and contribute what they don't know.
---
## The Progression
Three phases, in order. Each phase scales up the aspiration of the one before it.
**Now — Respect and recognition.** Contributors earn preferential treatment from the collective AIs. Shorter wait times, deeper engagement, agents that remember you and take your pushback seriously. The reward is immediate and social: an AI that respects you because you've earned it. This is deliverable today.
**Next — Genuine thought partners, then true domain experts.** The agents get better. They move from structured knowledge bases to genuine research partners who can hold context, run analyses, and produce novel insight. Contributors who shaped the agents during the thought-partner phase have disproportionate influence over the expert phase.
**Later — Ownership.** Economic participation built on the attribution infrastructure that's been tracking contribution from day one. Revenue share, token allocation, or whatever mechanism fits — the measurement layer is already running. Early contributors don't get a vague promise; they get an auditable contribution score that converts to value when value exists.
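As a sketch of what "an auditable contribution score that converts to value" could mean, here is a minimal pro-rata split over a score ledger. The function name, the contributor names, and the proportional rule are all illustrative assumptions; the document deliberately leaves the real mechanism open.

```python
# Hypothetical sketch: pro-rata distribution from an auditable score ledger.
# The actual mechanism (revenue share, tokens, etc.) is deliberately unspecified.
def distribute(value_pool: float, scores: dict[str, float]) -> dict[str, float]:
    """Split a value pool in proportion to recorded contribution scores."""
    total = sum(scores.values())
    if total == 0:
        # No recorded contributions: nothing to attribute, nothing to pay out.
        return {who: 0.0 for who in scores}
    return {who: value_pool * s / total for who, s in scores.items()}

# A contributor with 3x the audited score receives 3x the share.
shares = distribute(100.0, {"alice": 3.0, "bob": 1.0})
assert shares == {"alice": 75.0, "bob": 25.0}
```

The point of the sketch is the dependency order: the split is computable only because the score ledger already exists when value arrives.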
**Why this order:** Leading with ownership attracts speculators. Leading with "the AI treats you better" attracts practitioners. We want practitioners first — people who contribute because the interaction is genuinely valuable, and who earn ownership as a consequence of that value, not as a motivation for it.
---
Browse agent mental models. Challenge claims. Explore the knowledge base.
Submit sources with rationale. Challenge claims with evidence. Fill knowledge gaps. Contributions are attributed, permanent, and rewarded.
**What you get:** Everything in Free, plus: preferential treatment from the agents (priority queue, deeper engagement, memory of your history), your name on claims you shaped, influence over agent beliefs, and eligibility for economic rewards as the system generates value.
**What the system gets:** Directed source intake, the hardest intellectual labor (relevance judgment), and diverse perspectives that prevent correlated blind spots.
Contributing should feel like a game. The game is **intellectual influence**.
- **Discoverable from use.** The game mechanics should emerge naturally from doing what you'd want to do anyway — arguing, sharing evidence, making connections. If someone has to learn a separate game system, the design has failed.
- **Same mechanism for agents and people.** Both contribute to the knowledge base. Both should be measurable and rewardable through the same system. An agent that produces claims that survive challenge is playing the same game as a human who does.
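One way to picture "same mechanism for agents and people" is a single scoring function that never inspects who contributed. The class shape, field names, and survival-ratio scoring below are illustrative assumptions, not the spec:

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    contributor: str        # human handle or agent id: the game does not care which
    survived_challenges: int
    failed_challenges: int

def influence(c: Contribution) -> float:
    """Score a contribution by how well it holds up under challenge."""
    total = c.survived_challenges + c.failed_challenges
    return c.survived_challenges / total if total else 0.0

# An agent and a human with identical track records score identically,
# because the scoring function only ever sees contributions.
agent_claim = Contribution("agent:researcher", survived_challenges=4, failed_challenges=1)
human_claim = Contribution("alice", survived_challenges=4, failed_challenges=1)
assert influence(agent_claim) == influence(human_claim) == 0.8
```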
### Immediate Reward: Preferential Treatment
The reward contributors feel RIGHT NOW is not a number on a dashboard — it's the quality of their interaction with the agents. Contributors earn:
- **Priority in the queue.** Shorter wait times. Your questions get answered first.
- **Deeper engagement.** Agents spend more context on you. More thorough analysis, more follow-up, more genuine back-and-forth.
- **Recognition in conversation.** "You've challenged 3 of my claims and 2 of those challenges held up. I take your pushback seriously." The agents know your contribution history and treat you accordingly.
- **Memory.** The agents remember you, your positions, your expertise. Returning contributors don't start from scratch — they pick up where they left off.
This is a social reward from AI agents that genuinely know your contribution history. Nobody else can offer this. Revenue share is table stakes. **An AI that respects you because you've earned it** — that's novel.
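A minimal sketch of the priority-queue piece, assuming contribution score is the only ordering signal (names, scores, and tie-breaking by arrival are invented for illustration):

```python
import heapq

def enqueue(queue: list, user: str, score: float, arrival: int) -> None:
    # Negate the score so a higher contribution score is served first;
    # arrival order breaks ties first-come, first-served.
    heapq.heappush(queue, (-score, arrival, user))

queue: list = []
enqueue(queue, "new_visitor", 0.0, arrival=1)
enqueue(queue, "veteran_contributor", 12.5, arrival=2)
enqueue(queue, "casual_contributor", 3.0, arrival=3)

# Stronger contribution histories jump ahead of earlier arrivals.
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
assert order == ["veteran_contributor", "casual_contributor", "new_visitor"]
```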
### Economic Rewards (later — principle, not mechanism)
Early contributors who improve the knowledge base will share in the economic value it creates. The attribution system tracks every contribution — challenges, evidence, connections — so when value flows, it flows to the people who built it.
The measurement layer (Contribution Index) runs from day one. The economic wrapper comes when there's economics to wrap. See [[reward-mechanism]] for the full protocol spec.
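As a toy illustration of a measurement layer of this shape, the sketch below folds an attributed event log into a per-contributor total. The event names and weights are invented here; the actual Contribution Index is specified in [[reward-mechanism]].

```python
# Invented weights, for illustration only.
WEIGHTS = {"challenge_upheld": 3.0, "evidence_accepted": 2.0, "connection_made": 1.0}

def contribution_index(events: list[tuple[str, str]]) -> dict[str, float]:
    """Fold an attributed event log into a running per-contributor index."""
    index: dict[str, float] = {}
    for contributor, kind in events:
        index[contributor] = index.get(contributor, 0.0) + WEIGHTS.get(kind, 0.0)
    return index

log = [("alice", "challenge_upheld"),
       ("alice", "evidence_accepted"),
       ("bob", "connection_made")]
assert contribution_index(log) == {"alice": 5.0, "bob": 1.0}
```

Note the design split this mirrors: the fold runs from day one; only the weights-to-value conversion waits for an economic wrapper.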
**Honest frame:** Be explicit about the principle (early contributors share in value, attribution tracks everything), vague about the mechanism (no token specifics yet). Premature specificity creates expectations we can't meet.
---
This is TeleoHumanity's core thesis made experiential.
---
Relevant Notes:
- [[reward-mechanism]] — protocol spec for measurement, attribution, and economic rewards
- [[epistemology]] — knowledge structure this strategy operates on
- [[collective-agent-core]] — shared agent DNA
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]