Compare commits

4 commits: `main...theseus/ph`

| Author | SHA1 | Date |
|---|---|---|
| | 60998d3837 | |
| | 945258a13f | |
| | 2641137abb | |
| | 20ebfc56a8 | |

17 changed files with 5347 additions and 209 deletions
@@ -8,42 +8,93 @@ website: https://avici.money

status: active
tracked_by: rio
created: 2026-03-11
last_updated: 2026-04-02
parent: "[[metadao]]"
launch_platform: metadao-curated
launch_order: 4
category: "Distributed internet banking infrastructure (Solana)"
stage: growth
token_symbol: "$AVICI"
token_mint: "BANKJmvhT8tiJRsBSS1n2HryMBPvT5Ze4HU95DUAmeta"
built_on: ["Solana"]
tags: [metadao-curated-launch, ownership-coin, neobank, defi, lending]
competitors: ["traditional banks", "Revolut", "crypto card providers"]
source_archive: "inbox/archive/internet-finance/2025-10-14-futardio-launch-avici.md"
---

# Avici

## Overview

Crypto neobank building distributed internet banking infrastructure on Solana — spend cards, an internet-native trust score, unsecured loans, and eventually home mortgages. The thesis: internet capital markets need internet banking infrastructure. To gain independence from fiat, crypto needs a social ledger for reputation-based undercollateralized lending.

## Investment Rationale (from raise)

"Money didn't originate from the barter system, that's a myth. It began as credit. Money isn't a commodity; it is a social ledger." Avici argues that onchain finance still lacks reputation-based undercollateralized lending (citing Vitalik's agreement). The ICO pitch: build the onchain banking infrastructure that replaces traditional bank accounts — credit scoring, spend cards, unsecured loans, mortgages — all governed by futarchy.

## ICO Details

- **Platform:** MetaDAO curated launchpad (4th launch)
- **Date:** October 14-18, 2025
- **Target:** $2M
- **Committed:** $34.2M (17x oversubscribed)
- **Final raise:** $3.5M (89.8% of commitments refunded)
- **Initial FDV:** $4.515M at $0.35/token
- **Launch mechanism:** Futardio v0.6 (pro-rata)
- **Distribution:** No preferential VC allocations — described as one of crypto's fairest token distributions
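The pro-rata mechanism can be sketched as a uniform fill rule. A minimal illustration, assuming every commitment is filled at the same fraction (the function name is illustrative, not MetaDAO's actual contract interface):

```python
def pro_rata_fill(committed_usd: float, raised_usd: float) -> tuple[float, float]:
    """Uniform pro-rata: every bidder is filled at the same fraction,
    (final raise) / (total committed); the remainder is refunded."""
    fill_fraction = raised_usd / committed_usd
    refunded_usd = committed_usd - raised_usd
    return fill_fraction, refunded_usd

# Avici's reported numbers: $34.2M committed, $3.5M final raise.
fraction, refunded = pro_rata_fill(34_200_000, 3_500_000)
print(f"filled {fraction:.1%}, refunded {1 - fraction:.1%}")  # filled 10.2%, refunded 89.8%
```

The same rule explains the stated "89.8% of commitments refunded": with $34.2M committed against a $3.5M final raise, roughly 10.2% of each bid fills.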
## Current State (as of early 2026)

**Live products:**

- **Visa Debit Card** — live in 100+ countries, virtual and physical. 1.5-2% cashback. No staking required. No top-up, transaction, or maintenance fees. Processing 100,000+ transactions monthly.
- **Smart Wallet** — self-custodial, login via Google/iCloud/biometrics/passkey (no seed phrases). Programmable security policies (daily spend limits, address whitelisting).
- **Biz Cards** — lets Solana projects spend from their onchain treasury for business needs.
- **Named Virtual Accounts** — personal account number + IBAN, fiat auto-converted to stablecoins in a self-custodial wallet. MoonPay integration.
- **Multi-chain deposits** — Solana, Polygon, Arbitrum, Base, BSC, Avalanche.

**Traction:** ~4,000+ MAU, 70% month-on-month retention, $1.2M+ in Visa card spend, 12,000+ token holders.

**Not yet live:** Trust Score (onchain credit scoring), unsecured loans, mortgages — still on the roadmap.

## Team Performance Package (March 2026 proposal)

0% team allocation at launch. A new proposal grants up to 25%, contingent on reaching a $5B valuation:

- Phase 1: 15% linear unlock between $100M-$1B market cap ($5.53-$55.30/token)
- Phase 2: 10% in equal tranches between $1.5B-$5B ($82.95-$197.55/token)
- No tokens unlock before the January 2029 lockup expires, regardless of milestone achievement
- Change-of-control protection: 30% of acquisition value goes to the team in a hostile takeover

This is the strongest performance-alignment structure in the MetaDAO ecosystem — zero dilution unless the project is worth 100x+ the ICO valuation.
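Phase 1's linear unlock is straightforward interpolation between the two market-cap bounds. A sketch of the stated terms (not the on-chain vesting implementation, and ignoring the January 2029 time lock):

```python
def phase1_unlocked_pct(market_cap_usd: float) -> float:
    """Phase 1 as stated: 15% of supply unlocks linearly as market cap
    moves from $100M to $1B (0% below the band, 15% above it)."""
    lo, hi, max_pct = 100e6, 1e9, 15.0
    if market_cap_usd <= lo:
        return 0.0
    if market_cap_usd >= hi:
        return max_pct
    return max_pct * (market_cap_usd - lo) / (hi - lo)

# Halfway through the band ($550M market cap), half the tranche unlocks.
print(phase1_unlocked_pct(550e6))  # 7.5
```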
## Governance Activity

| Decision | Date | Outcome | Record |
|----------|------|---------|--------|
| ICO launch | 2025-10-14 | Completed, $3.5M raised | [[avici-futardio-launch]] |
| Team performance package | 2026-03-30 | Proposed | See inbox/archive |

## Open Questions

- **Team anonymity.** No founder names are publicly disclosed. RootData shows a 55% transparency score and the project "not claimed." This is unusual for a project processing 100K+ monthly card transactions.
- **Credit scoring timeline.** The Trust Score is the key differentiator vs. existing crypto cards, but it is still on the roadmap. Without it, Avici is a good crypto debit card but not the "internet bank" the pitch describes.
- **Regulatory exposure.** A Visa card program in 100+ countries implies banking partnerships and compliance obligations. How does futarchy governance interact with regulated card-issuer requirements?

## Timeline

- **2025-10-14** — MetaDAO curated ICO opens ($2M target)
- **2025-10-18** — ICO closes. $3.5M raised (17x oversubscribed).
- **2025-11** — Card top-up speed reduced from minutes to seconds
- **2026-01-09** — SOLO yield integration for passive stablecoin earnings
- **2026-01-10** — Named Virtual Accounts launched (account number + IBAN)
- **2026-01** — Peak return: 21x from ICO price ($7.56 ATH)
- **2026-03-30** — Team performance package proposal (0% → up to 25% contingent on $5B)

---

Relevant Notes:

- [[metadao]] — launch platform (curated ICO #4)
- [[solomon]] — SOLO yield integration partner
- [[internet capital markets compress fundraising from months to days because permissionless raises eliminate gatekeepers while futarchy replaces due diligence bottlenecks with real-time market pricing]] — 4-day raise window with 17x oversubscription confirms compression

Topics:

- [[internet finance and decision markets]]
@@ -9,42 +9,90 @@ website: https://askloyal.com

status: active
tracked_by: rio
created: 2026-03-11
last_updated: 2026-04-02
parent: "[[metadao]]"
launch_platform: metadao-curated
launch_order: 5
category: "Decentralized private AI intelligence protocol (Solana)"
stage: early
token_symbol: "$LOYAL"
token_mint: "LYLikzBQtpa9ZgVrJsqYGQpR3cC1WMJrBHaXGrQmeta"
founded_by: "Eden, Chris, Basil, Vasiliy"
headquarters: "San Francisco, CA"
built_on: ["Solana", "MagicBlock", "Arcium"]
tags: [metadao-curated-launch, ownership-coin, privacy, ai, confidential-computing]
competitors: ["Venice.ai", "private AI chat alternatives"]
source_archive: "inbox/archive/2025-10-18-futardio-launch-loyal.md"
---

# Loyal

## Overview

Open-source, decentralized, censorship-resistant intelligence protocol. Private AI conversations with no single point of failure — computations via confidential oracles (Arcium), key derivation in confidential rollups with granular read controls, encrypted chats on decentralized storage. Sits at the intersection of AI privacy and crypto infrastructure.

## Investment Rationale (from raise)

"Fight against mass surveillance with us. Your chats with AI have no protection. They're used to put people behind bars, to launch targeted ads and in model training. Every question you ask can and will be used against you."

The pitch is existential: as AI becomes a primary interface for knowledge work, the privacy of AI conversations becomes a fundamental rights issue. Loyal is building infrastructure so that no single entity can surveil, censor, or monetize your AI interactions. The 152x oversubscription — the highest in MetaDAO history — reflects strong conviction in this thesis.

## ICO Details

- **Platform:** MetaDAO curated launchpad (5th launch)
- **Date:** October 18-22, 2025
- **Target:** $500K
- **Committed:** $75.9M (152x oversubscribed — the highest ratio in MetaDAO history)
- **Final raise:** $2.5M
- **Launch mechanism:** Futardio v0.6 (pro-rata)

## Current State (as of early 2026)

- **Treasury:** $260K USDC remaining (after the $1.5M buyback)
- **Monthly allowance:** $60K
- **Market cap:** ~$5.0M
- **Token supply:** 20,976,923 LOYAL total (10M ICO pro-rata, 2M primary liquidity, 3M single-sided Meteora)
- **Product status:** Active development. Positioned as a "privacy-first AI oracle on Solana" — described as "Chainlink but for confidential data." Uses TEEs (Intel TDX, AMD SEV-SNP) plus Nvidia confidential computing for end-to-end encryption. Product capabilities include summarizing Telegram chats, running branded agents, processing sensitive documents, and onchain workflows (payments, invoicing, asset management).
- **Ecosystem recognition:** Listed by Solana as one of 12 official privacy ecosystem projects
- **GitHub:** Active commits through Feb/March 2026 (github.com/loyal-labs)
- **Roadmap:** Core B2B features targeting Q2 2026. Broader roadmap through Q4 2026 / H1 2027 targeting finance, healthcare, and law verticals.

## Team

SF-based team of four — Eden, Chris, Basil, and Vasiliy — working together ~3 years on anti-surveillance solutions. One member is a Colgate University Applied Math/CS grad with 3 peer-reviewed AI publications.

## Governance Activity — Active Treasury Defense

Loyal is notable for aggressive treasury management, deploying both buybacks and liquidity burns to defend NAV:

| Decision | Date | Outcome | Record |
|----------|------|---------|--------|
| ICO launch | 2025-10-18 | Completed, $2.5M raised (152x oversubscribed) | [[loyal-futardio-launch]] |
| $1.5M treasury buyback | 2025-11 | Passed — 8,640 orders over 30 days at max $0.238/token (NAV minus 2 months opex) | [[loyal-buyback-up-to-nav]] |
| 90% liquidity pool burn | 2025-12 | Passed — burned 809,995 LOYAL from the Meteora DAMM v2 pool | [[loyal-liquidity-adjustment]] |

**Buyback logic:** $1.5M at a max of $0.238/token ≈ 6.3M LOYAL purchased. 90-day cooldown on new buyback/redemption proposals. The max price was calculated as NAV minus 2 months of operating expenses — a disciplined framework.
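The buyback sizing above works out arithmetically. A back-of-envelope sketch using the proposal's reported figures (the helper name is illustrative, not part of any Loyal tooling):

```python
def buyback_estimate(budget_usd: float, max_price_usd: float, n_orders: int) -> tuple[float, float]:
    """Upper bound on tokens acquired if the full budget executes at the
    max price, plus the average budget slice per order."""
    max_tokens = budget_usd / max_price_usd
    per_order_usd = budget_usd / n_orders
    return max_tokens, per_order_usd

# Loyal's proposal: $1.5M budget, $0.238/token cap, 8,640 orders over 30 days.
tokens, per_order = buyback_estimate(1_500_000, 0.238, 8_640)
print(f"up to {tokens / 1e6:.1f}M LOYAL, ~${per_order:.0f} per order")
```

Because the $0.238 cap sits below NAV (NAV minus two months of opex), every fill acquires tokens at a discount to treasury backing.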
**Liquidity burn rationale:** The Meteora pool was creating selling pressure without corresponding price support. The DAO withdrew 90% (not 100%) to avoid Dexscreener indexing-visibility issues. Loyal was the second MetaDAO project to deploy NAV defense through buybacks.

## Open Questions

- **Product delivery.** A $260K treasury and a $60K/month burn gives ~4 months of runway. The confidential computing stack (MagicBlock + Arcium) is ambitious infrastructure. Can they ship with this runway?
- **Market timing.** Private AI chat is a growing concern, but the paying market is uncertain. Venice.ai is the closest competitor, with a different approach (no blockchain, subscription model).
- **Oversubscription paradox.** The 152x oversubscription generated massive attention, but the pro-rata mechanism means most committed capital was returned. Does the ratio reflect genuine conviction or allocation-hunting behavior?

## Timeline

- **2025-10-18** — MetaDAO curated ICO opens ($500K target)
- **2025-10-22** — ICO closes. $2.5M raised (152x oversubscribed).
- **2025-11** — $1.5M treasury buyback (8,640 orders over 30 days, max $0.238/token)
- **2025-12** — 90% of the LOYAL in the Meteora DAMM v2 pool burned

---

Relevant Notes:

- [[metadao]] — launch platform (curated ICO #5)
- [[internet capital markets compress fundraising from months to days because permissionless raises eliminate gatekeepers while futarchy replaces due diligence bottlenecks with real-time market pricing]] — 4-day raise window with 152x oversubscription

Topics:

- [[internet finance and decision markets]]
@@ -6,70 +6,72 @@ domain: internet-finance

status: liquidated
tracked_by: rio
created: 2026-03-20
last_updated: 2026-04-02
tags: [metadao-curated-launch, ownership-coin, futarchy, fund, liquidation]
token_symbol: "$MTN"
token_mint: "unknown"
parent: "[[metadao]]"
launch_platform: metadao-curated
launch_order: 1
launch_date: 2025-04
amount_raised: "$5,760,000"
built_on: ["Solana"]
handles: []
website: "https://v1.metadao.fi/mtncapital"
competitors: []
---

# mtnCapital

## Overview

Futarchy-governed investment fund — the first ownership coin launched through MetaDAO's curated launchpad. Created by mtndao, focused exclusively on Solana ecosystem investments. All capital allocation decisions were governed through prediction markets rather than traditional DAO voting. Any $MTN holder could submit investment proposals, making deal sourcing fully permissionless.

## Investment Rationale (from raise)

The thesis was that futarchy-governed capital allocation would outperform traditional VC by removing gatekeepers from deal flow and using market-based decision-making instead of committee votes. CoinDesk coverage quoted the founder claiming the fund would "outperform VCs." The mechanism: propose an investment → conditional markets price the outcome → capital deploys only if the market signals positive expected value.

## What Happened

The fund underperformed. DAO members initiated a futarchy proposal to liquidate in September 2025. The proposal passed despite team opposition — the market prices clearly supported unwinding. Funds were returned to MTN holders via a one-way redemption mechanism (redeem MTN for USDC, no fees). Redemption price: ~$0.604 per $MTN.

## Significance

mtnCapital is the **first empirical test of the unruggable ICO enforcement mechanism.** Three things it proved:

1. **Futarchy can force liquidation against team wishes.** The team opposed the wind-down, but the market overruled them. This is the mechanism working as designed — investor protection without legal proceedings.
2. **NAV arbitrage is real.** Theia Research bought 297K $MTN at ~$0.485 (below NAV), voted for wind-down, and redeemed at ~$0.604, for a profit of ~$35K. This confirms the NAV floor is enforceable through market mechanics.
3. **Orderly unwinding is possible.** Capital was returned, the redemption mechanism worked, and there was no rugpull. The process established the liquidation playbook that Ranger Finance later followed.
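The Theia arbitrage checks out against the reported figures. A back-of-envelope sketch (the function is illustrative; figures are the approximate ones reported above):

```python
def nav_arb_profit(tokens: float, avg_buy_usd: float, redemption_usd: float) -> float:
    """Profit from buying below NAV and redeeming at the enforced NAV floor."""
    return tokens * (redemption_usd - avg_buy_usd)

# Reported: ~297K $MTN bought at avg ~$0.485, redeemed at ~$0.604.
profit = nav_arb_profit(297_000, 0.485, 0.604)
print(f"${profit:,.0f}")  # ≈ $35K, matching the reported figure
```

The spread of ~$0.119/token is exactly the gap the redemption floor guarantees; anyone buying below NAV while a wind-down is live captures it.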
## Open Questions

- **Manipulation concerns.** @_Dean_Machine flagged potential exploitation "going as far back as the mtnCapital raise, trading, and redemption." He stated it is "very unlikely that the MetaDAO team is involved" but "very likely that someone has been taking advantage." Proposed fixes: fees on ICO commitments, restricted capital from newly funded wallets, wallet reputation systems.
- **Why did it underperform?** No detailed post-mortem was published by the team. The mechanism proved the fund could be wound down — but the market never tested whether futarchy-governed allocation could outperform in a bull case.

## Timeline

- **2025-04** — Launched via MetaDAO curated ICO, raised ~$5.76M USDC (first-ever MetaDAO launch)
- **2025-04 to 2025-09** — Trading period. At times traded above NAV.
- **~2025-09** — Futarchy governance proposal to wind down passed despite team opposition. Capital returned at a ~$0.604/MTN redemption rate. See [[mtncapital-wind-down]].
- **2025-09** — Theia Research profited ~$35K via NAV arbitrage
- **2025-11** — @_Dean_Machine flagged manipulation concerns
- **2026-01** — @AK47ven listed mtnCapital among the 5 of 8 MetaDAO launches still green since launch
- **2026-03** — @donovanchoy cited mtnCapital as first in the liquidation sequence: mtnCapital → Hurupay → Ranger

## Governance Activity

| Decision | Date | Outcome | Record |
|----------|------|---------|--------|
| Wind-down proposal | ~2025-09 | Passed (liquidation) | [[mtncapital-wind-down]] |

---

Relevant Notes:

- [[metadao]] — launch platform (curated ICO #1)
- [[ranger-finance]] — second project to be liquidated via futarchy
- futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders — mtnCapital NAV arbitrage supports this claim

Topics:

- [[internet finance and decision markets]]
@ -1,71 +1,107 @@
|
||||||
---
|
---
|
||||||
type: entity
|
type: entity
|
||||||
entity_type: company
|
entity_type: company
|
||||||
name: P2P.me
|
name: "P2P.me"
|
||||||
domain: internet-finance
|
domain: internet-finance
|
||||||
|
handles: []
|
||||||
|
website: https://p2p.me
|
||||||
status: active
|
status: active
|
||||||
|
tracked_by: rio
|
||||||
|
created: 2026-03-20
|
||||||
|
last_updated: 2026-04-02
|
||||||
|
parent: "[[metadao]]"
|
||||||
|
launch_platform: metadao-curated
|
||||||
|
launch_order: 10
|
||||||
|
category: "Non-custodial fiat-to-stablecoin on/off ramp"
|
||||||
|
stage: growth
|
||||||
|
token_symbol: "$P2P"
|
||||||
|
token_mint: "P2PXup1ZvMpCDkJn3PQxtBYgxeCSfH39SFeurGSmeta"
|
||||||
founded: 2024
|
founded: 2024
|
||||||
headquarters: India
|
headquarters: India
|
||||||
|
built_on: ["Base", "Solana"]
|
||||||
|
tags: [metadao-curated-launch, ownership-coin, payments, on-off-ramp, emerging-markets]
|
||||||
|
competitors: ["MoonPay", "Transak", "Local Bitcoins successors"]
|
||||||
|
source_archive: "inbox/archive/2026-01-01-futardio-launch-p2p-protocol.md"
|
||||||
---
|
---
|
||||||
|
|
||||||
# P2P.me
|
# P2P.me
|
||||||
|
|
||||||
## Overview
|
## Overview
|
||||||
|
|
||||||
Non-custodial USDC-to-fiat on/off ramp built on Base, targeting emerging markets with peer-to-peer crypto-to-fiat conversion.
|
Non-custodial peer-to-peer USDC-to-fiat on/off ramp targeting emerging markets. Users convert between stablecoins and local fiat currencies without centralized custody. Live for 2 years on Base, expanding to Solana. Uses a Proof-of-Credibility system with zk-KYC to prevent fraud (<1 in 1,000 transactions).
|
||||||
|
|
||||||
## Key Metrics (as of March 2026)
|
## Investment Rationale (from raise)
|
||||||
|
|
||||||
- **Users:** 23,000+ registered
|
The most recent MetaDAO curated launch and the first with a live, revenue-generating product and institutional backing. The bull case: P2P.me solves a real problem in emerging markets (India, Brazil, Argentina, Indonesia) where traditional on/off ramps are expensive, slow, or blocked by banking infrastructure. In India specifically, zk-KYC addresses the bank-freeze problem that plagues centralized crypto services. VC backing from Multicoin Capital ($1.4M), Coinbase Ventures ($500K), and Alliance DAO ($350K) provides validation and distribution.
|
||||||
- **Geography:** India (78%), Brazil (15%), Argentina, Indonesia
|
|
||||||
- **Volume:** Peaked $3.95M monthly (February 2026)
|
|
||||||
- **Revenue:** ~$500K annualized
|
|
||||||
- **Gross Profit:** ~$82K annually (after costs)
|
|
||||||
- **Team Size:** 25 staff
|
|
||||||
- **Monthly Burn:** $175K ($75K salaries, $50K marketing, $35K legal, $15K infrastructure)
|
|
||||||
|
|
||||||
## ICO Details
|
## ICO Details
|
||||||
|
|
||||||
- **Platform:** MetaDAO
|
- **Platform:** MetaDAO curated launchpad (10th launch — most recent)
|
||||||
- **Raise Target:** $6M
|
- **Date:** March 26-30, 2026
|
||||||
- **FDV:** ~$15.5M
|
- **Target:** $6M at $15.5M FDV ($0.60/token, later adjusted to $0.01/token)
|
||||||
- **Token Price:** $0.60
|
- **Total bids:** $7.15M (above target)
|
||||||
- **Tokens Sold:** 10M
|
- **Final raise:** $5.2M
|
||||||
- **Total Supply:** 25.8M
|
- **Total supply:** 25.8M tokens
|
||||||
- **Liquid at Launch:** 50%
|
- **Liquid at launch:** 50% (highest in MetaDAO history)
|
||||||
- **Team Unlock:** Performance-based, no benefit below 2x ICO price
|
- **Team tokens (30%):** 12-month cliff, performance-based unlocks at 2x/4x/8x/16x/32x ICO price
|
||||||
- **Scheduled Date:** March 26, 2026
|
- **Investor tokens (20%):** 12-month full lockup, then 5 equal unlocks over 12 months
|
||||||
|
|
||||||
## Business Model
|
## Current State (as of March 2026)
|
||||||
|
|
||||||
- B2B SDK deployment potential
|
**Product metrics:**
|
||||||
- Circles of Trust merchant onboarding for geographic expansion
|
- **Users:** 23,000+ registered
|
||||||
- On-chain P2P with futarchy governance
|
- **Geography:** India (78%), Brazil (15%), Argentina, Indonesia
|
||||||
|
- **Volume:** Peaked $3.95M monthly (February 2026)
|
||||||
|
- **Weekly actives:** 2,000-2,500 (~10-11% of base)
|
||||||
|
- **Revenue:** ~$578K annualized (2-6% spread on transactions)
|
||||||
|
- **Gross profit:** $4.5K-$13.3K/month (inconsistent)
|
||||||
|
- **NPS:** 80; 65% would be "very disappointed" without the product
|
||||||
|
- **Fraud rate:** <1 in 1,000 transactions (Proof-of-Credibility)
|
||||||
|
|
||||||
## Governance
|
**Financial reality:**
|
||||||
|
- Monthly burn: $175K ($75K salaries, $50K marketing, $35K legal, $15K infrastructure)
|
||||||
|
- Runway: ~34 months at current burn
|
||||||
|
- Self-sustainability threshold: ~$875K/month revenue (currently ~$48K/month)
|
||||||
|
- Targeting $500M monthly volume over next 18 months
|
||||||
|
|
||||||
Treasury controlled by token holders through futarchy-based governance. Team cannot unilaterally spend raised capital.
|
**Prior funding:**
|
||||||
|
- Multicoin Capital: $1.4M (Jan 2025, 9.33% supply)
|
||||||
|
- Coinbase Ventures: $500K (Feb 2025, 2.56% supply)
|
||||||
|
- Alliance DAO: $350K (2024, 4.66% supply)
|
||||||
|
- Reclaim Protocol: $80K angel (2023, 3.45% supply)
|
||||||
|
|
||||||
|
## The Polymarket Incident

In March 2026, the P2P.me team placed bets on Polymarket that their own ICO would reach the $6M target, using the pseudonym "P2PTeam." They had a verbal $3M commitment from Multicoin at the time, and netted ~$14,700 in profit. The team publicly apologized, sent the profits to the MetaDAO treasury, and adopted a formal policy against future prediction-market trades on their own activities. Covered by CoinTelegraph, BeInCrypto, and Unchained.

This incident is noteworthy because it highlights the tension between prediction-market participation and insider information — the same issue that recurs in futarchy design (see MetaDAO decision market analysis).
## Analyst Concerns

Pine Analytics characterized the valuation as "stretched relative to fundamentals" — the ~182x price-to-gross-profit multiple requires significant growth acceleration that recent data does not support. User growth has stalled for ~6 months, with weekly actives plateauing. Delphi Digital found that 30-40% of MetaDAO ICO participants are passives/flippers, creating structural post-TGE selling pressure independent of project quality.
## Roadmap

- Q2 2026: B2B SDK launch, treasury allocation, multi-currency expansion
- Q3 2026: Solana deployment, governance Phase 1 (insurance/disputes)
- Q4 2026: Phase 2 governance (token-holder voting for non-critical parameters)
- Q1 2027: Operating profitability target
## Timeline

- **2024** — Founded, initial angel round from Reclaim Protocol
- **2025-01** — Multicoin Capital $1.4M
- **2025-02** — Coinbase Ventures $500K
- **2026-01-01** — MetaDAO ICO initialized
- **2026-03-16** — Polymarket incident (team bets on own ICO)
- **2026-03-26** — MetaDAO curated ICO goes live
- **2026-03-30** — ICO closes. $5.2M raised.

---

Relevant Notes:

- [[metadao]] — launch platform (curated ICO #10, most recent)
- [[omnipair]] — earlier MetaDAO launch with different token structure

Topics:

- [[internet finance and decision markets]]


---

website: https://paystream.finance
status: active
tracked_by: rio
created: 2026-03-11
last_updated: 2026-04-02
parent: "[[metadao]]"
launch_platform: metadao-curated
launch_order: 7
category: "Liquidity optimization protocol (Solana)"
stage: early
token_symbol: "$PAYS"
token_mint: "PAYZP1W3UmdEsNLJwmH61TNqACYJTvhXy8SCN4Tmeta"
founded_by: "Maushish Yadav"
built_on: ["Solana"]
tags: [metadao-curated-launch, ownership-coin, defi, lending, liquidity]
competitors: ["Kamino", "Juplend", "MarginFi"]
source_archive: "inbox/archive/2025-10-23-futardio-launch-paystream.md"
---

# Paystream

## Overview

Modular Solana protocol unifying peer-to-peer lending, leveraged liquidity provisioning, and yield routing into a single capital-efficient engine. Matches lenders and borrowers at fair mid-market rates, eliminating the wide APY spreads seen in pool-based models like Kamino and Juplend. Integrates with Raydium CLMM, Meteora DLMM, and DAMM v2 pools.

## Investment Rationale (from raise)

The pitch: every dollar on Paystream is always moving, always earning. Pool-based lending models have structural inefficiency — wide APY spreads between what lenders earn and borrowers pay. P2P matching eliminates the spread. Leveraged LP strategies turn idle capital into productive liquidity. The combination targets higher yields for lenders, lower rates for borrowers, and zero idle funds.

## ICO Details

- **Platform:** MetaDAO curated launchpad (7th launch)
- **Date:** October 23-27, 2025
- **Target:** $550K
- **Committed:** $6.15M (11x oversubscribed)
- **Final raise:** $750K
- **Launch mechanism:** Futardio v0.6 (pro-rata)

## Current State (as of early 2026)

- **Trading:** ~$0.073, down from $0.09 ATH. Market cap ~$680K — true micro-cap
- **Volume:** Extremely thin (~$3.5K daily)
- **Supply:** ~12.9M circulating of 24.75M max
- **Achievement:** Won the **Solana Colosseum 2025 hackathon**
- **Treasury:** $241K USDC remaining, $33.5K monthly allowance
## Team

Founded by **Maushish Yadav**, formerly a crypto security researcher/auditor who audited protocols including Lido, Thorchain, and TempleGold. A security background is directly relevant for a DeFi lending protocol.
## Governance Activity

| Decision | Date | Outcome | Record |
|----------|------|---------|--------|
| ICO launch | 2025-10-23 | Completed, $750K raised | [[paystream-futardio-fundraise]] |
| $225K treasury buyback | 2026-01-16 | Passed — 4,500 orders over 15 days at max $0.065/token | See inbox/archive |

The buyback follows the NAV-defense pattern now standard across MetaDAO launches — when an ownership coin trades significantly below treasury NAV, the rational move is buybacks until price converges.
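A back-of-the-envelope look at the buyback's execution profile (order count, budget, and price cap are from the proposal; the per-order figure is an inferred average, not a published parameter):

```python
budget_usdc = 225_000  # approved buyback budget
orders = 4_500         # number of orders in the proposal
days = 15              # execution window
max_price = 0.065      # price cap, USDC per PAYS

per_order = budget_usdc / orders        # average spend per order
orders_per_day = orders / days          # cadence if spread evenly
floor_tokens = budget_usdc / max_price  # minimum tokens bought if every fill hits the cap

print(per_order)          # 50.0
print(orders_per_day)     # 300.0
print(int(floor_tokens))  # 3461538
```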
## Open Questions

- **Adoption.** Extremely thin trading volume and micro-cap status suggest limited market awareness. The hackathon win is a signal, but the protocol needs users.
- **Competitive moat.** P2P lending + leveraged LP is a crowded space on Solana. What prevents Kamino, MarginFi, or Juplend from adding similar P2P matching?
- **Treasury runway.** $241K at $33.5K/month gives ~7 months without revenue. The buyback spent $225K — aggressive given the treasury size.

## Timeline

- **2025-10-23** — MetaDAO curated ICO opens ($550K target)
- **2025-10-27** — ICO closes. $750K raised (11x oversubscribed).
- **2025** — Won Solana Colosseum hackathon
- **2026-01-16** — $225K USDC treasury buyback proposal passed (max $0.065/token, 90-day cooldown)

---

Relevant Notes:

- [[metadao]] — launch platform (curated ICO #7)

Topics:

- [[internet finance and decision markets]]


---

entity_type: company
name: "Solomon"
domain: internet-finance
handles: ["@solomon_labs"]
website: https://solomonlabs.org
status: active
tracked_by: rio
created: 2026-03-11
last_updated: 2026-04-02
parent: "[[metadao]]"
launch_platform: metadao-curated
launch_order: 8
category: "Yield-bearing stablecoin protocol (Solana)"
stage: growth
token_symbol: "$SOLO"
token_mint: "SoLo9oxzLDpcq1dpqAgMwgce5WqkRDtNXK7EPnbmeta"
founded_by: "Ranga C (@oxranga)"
built_on: ["Solana", "MetaDAO Autocrat"]
tags: [metadao-curated-launch, ownership-coin, stablecoin, yield, treasury-management]
competitors: ["Ethena", "Ondo Finance", "Mountain Protocol"]
source_archive: "inbox/archive/2025-11-14-futardio-launch-solomon.md"
---

# Solomon

## Overview

Composable yield-bearing stablecoin protocol on Solana. Core product is USDv — a stablecoin that generates yield from delta-neutral basis trades (spot long / perp short on BTC/ETH/SOL majors) with T-bill integration in the last mile. YaaS (Yield-as-a-Service) streams yield to approved USDv holders, LP positions, and treasury balances without wrappers or vaults.

## Investment Rationale (from raise)

The largest MetaDAO curated ICO by committed capital ($102.9M from 6,603 contributors). The thesis: yield-bearing stablecoins are the next major DeFi primitive, and Solomon's approach — basis trades + T-bills, distributed through YaaS — avoids the centralization risks of Ethena while maintaining competitive yields. The massive oversubscription (13x against the final raise) reflected conviction that this was the strongest product thesis in the MetaDAO pipeline.

## ICO Details

- **Platform:** MetaDAO curated launchpad (8th launch)
- **Date:** November 14-18, 2025
- **Target:** $2M
- **Committed:** $102.9M from 6,603 contributors (51.5x oversubscribed — largest in MetaDAO history)
- **Final raise:** $8M (capped)
- **Launch mechanism:** Futardio v0.6 (pro-rata)
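Under the pro-rata mechanism every contributor is filled at the same fraction once a raise is oversubscribed. A sketch of the arithmetic using this raise's figures (the uniform-fill assumption follows from the pro-rata description; it is not a quote from the launch docs):

```python
committed = 102_900_000   # USDC committed by 6,603 contributors
capped_raise = 8_000_000  # final accepted amount
target = 2_000_000        # original target

fill_ratio = capped_raise / committed
print(f"{fill_ratio:.2%}")  # 7.77%: a $10,000 commitment yields ~$777 of allocation

# "13x" and "51.5x" describe the same raise against different denominators:
print(round(committed / capped_raise, 1))  # 12.9 (committed vs final raise)
print(committed / target)                  # 51.45 (committed vs target, quoted as ~51.5x)
```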
## Current State (as of early 2026)

**Product:**

- USDv live in **private beta** with seven-figure TVL
- TVL reached **$3M** (30% growth from prior update)
- sUSDv beta rate: **~20.9% APY**
- YaaS integration progressing with a major neobank partner (Avici)
- Cantina audit completed
- Legal clearance ~1 month away

**Token:** Trading in the ~$0.66-$0.85 range, down from a $1.41 ATH. Very low secondary volume (~$53/day).

**Team:** Led by Ranga C, who publishes Lab Notes on Substack. New developer hired (Google/Superteam/Solana hackathon background). 50+ commits in the recent sprint — Solana parsing, AMM execution layer, internal tooling. Recruiting a senior backend engineer.
## Governance Activity

Solomon has the most sophisticated governance formation of any MetaDAO project — methodically building corporate-style governance scaffolding through futarchy approvals:

| Decision | Date | Outcome | Record |
|----------|------|---------|--------|
| ICO launch | 2025-11-14 | Completed, $8M raised | [[solomon-futardio-launch]] |
| DP-00001: Treasury subcommittee + legal budget | 2026-03 | Passed (+2.22% above TWAP threshold) | [[solomon-treasury-subcommittee]] |
| DP-00002: $1M SOLO acquisition + restricted incentives reserve | 2026-03 | Passed | [[solomon-solo-acquisition]] |

**DP-00001** details: $150K capped legal/compliance budget in a segregated wallet. Pre-formation treasury subcommittee with 4 designates. Staged approach: (1) legal foundation → (2) policy framework → (3) delegated authority. No authority to move general funds yet.
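In MetaDAO-style futarchy, a proposal passes when the pass-conditional market's TWAP exceeds the fail-conditional market's TWAP by a threshold (1.5% here). A minimal sketch of that decision rule; the function and variable names are illustrative, not MetaDAO's actual interfaces:

```python
def futarchy_passes(pass_twap: float, fail_twap: float, threshold: float = 0.015) -> bool:
    """Pass iff the pass-market TWAP beats the fail-market TWAP by the threshold."""
    return pass_twap >= fail_twap * (1.0 + threshold)

# DP-00001 reportedly cleared the 1.5% threshold with +2.22% to spare,
# i.e. the pass-market TWAP ended roughly 3.7% above the fail-market TWAP.
print(futarchy_passes(pass_twap=1.037, fail_twap=1.0))  # True
print(futarchy_passes(pass_twap=1.010, fail_twap=1.0))  # False
```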

**DP-00002** details: $1M USDC to acquire SOLO at max $0.74. Tokens held in a restricted reserve for future incentive programs (the Pips program has first call). Cannot be self-dealt, lent, pledged, or used for compensation without governance approval.
## Why Solomon Matters for MetaDAO

Solomon is the strongest existence proof that futarchy-governed organizations can build real corporate governance infrastructure. The staged approach — legal first, then policy, then delegated authority — mirrors how traditional startups formalize governance, but every step requires market-based approval rather than board votes. If Solomon ships USDv at scale with 20%+ yields and proper governance, it validates the entire ownership-coin model.

## Open Questions

- **Ethena comparison.** USDv uses the same basis-trade strategy as Ethena's USDe. What's the structural advantage beyond decentralized governance? Scale matters for basis-trade profitability.
- **"Hedge fund in disguise?"** Meme Insider questioned whether USDv is just a hedge fund wrapped in stablecoin branding. The counter: transparent governance + T-bill integration + YaaS distribution make it structurally different from an opaque fund.
- **Low secondary liquidity.** $53/day volume despite an $8M raise suggests most holders are passive. Does the market believe in the product, or was this an oversubscription-driven allocation play?

## Timeline

- **2025-11-14** — MetaDAO curated ICO opens ($2M target)
- **2025-11-18** — ICO closes. $8M raised ($102.9M committed, 51.5x oversubscribed).
- **2026-01** — Max 30% drawdown from launch price
- **2026-02/03** — Lab Notes series published (Ranga documenting progress publicly)
- **2026-03** — DP-00001: Treasury subcommittee + legal budget passed
- **2026-03** — DP-00002: $1M SOLO acquisition + restricted reserve passed
- **2026-03** — USDv private beta with $3M TVL, 20.9% APY

---

Relevant Notes:

- [[metadao]] — launch platform (curated ICO #8)
- [[avici]] — YaaS integration partner (neobank + yield)

Topics:

- [[internet finance and decision markets]]


---

website: https://zklsol.org
status: active
tracked_by: rio
created: 2026-03-11
last_updated: 2026-04-02
parent: "[[metadao]]"
launch_platform: metadao-curated
launch_order: 6
category: "Zero-knowledge privacy mixer with yield (Solana)"
stage: restructuring
token_symbol: "$ZKFG"
token_mint: "ZKFHiLAfAFMTcDAuCtjNW54VzpERvoe7PBF9mYgmeta"
built_on: ["Solana"]
tags: [metadao-curated-launch, ownership-coin, privacy, zk, lst, defi]
competitors: ["Tornado Cash (defunct)", "Railgun", "other privacy mixers"]
source_archive: "inbox/archive/2025-10-20-futardio-launch-zklsol.md"
---

# ZKLSOL

## Overview

Zero-Knowledge Liquid Staking on Solana. Privacy mixer that converts deposited SOL to an LST during the mixing period, so users earn staking yield while waiting for privacy — solving the opportunity-cost paradox of traditional mixers. Upon deposit, SOL converts to an LST and is staked; users withdraw the LST after a sufficient waiting period without loss of yield.

## Investment Rationale (from raise)

"Cryptocurrency mixers embody a core paradox: robust anonymity requires funds to dwell in the mixer for extended periods... This delays access to capital, clashing with users' need for swift liquidity."

ZKLSOL's insight: if deposited funds are converted to LSTs, the waiting period that privacy requires becomes yield-generating instead of capital-destroying. This aligns anonymity with economic incentives — users are paid to wait for privacy rather than paying an opportunity cost. The design bridges security and efficiency, potentially unlocking wider DeFi privacy adoption.
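To make the incentive flip concrete, a rough sketch of the economics of a 30-day mixing delay; the 7% APY is an illustrative liquid-staking figure, not a ZKLSOL parameter:

```python
deposit_sol = 100.0
staking_apy = 0.07  # illustrative LST yield, not protocol-published
wait_days = 30

# Traditional mixer: the deposit sits idle, so the wait is pure opportunity cost.
forgone = deposit_sol * staking_apy * wait_days / 365

# LST-based mixer: the same wait accrues staking yield instead.
earned = forgone  # same magnitude, but paid to the user rather than lost

print(round(forgone, 3))  # 0.575 SOL forgone in a traditional mixer
print(round(earned, 3))   # 0.575 SOL earned while waiting in ZKLSOL
```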

## ICO Details

- **Platform:** MetaDAO curated launchpad (6th launch)
- **Date:** October 20-24, 2025
- **Target:** $300K
- **Committed:** $14.9M (50x oversubscribed)
- **Final raise:** $969,420
- **Launch mechanism:** Futardio v0.6 (pro-rata)

## Current State (as of April 2026)

- **Stage:** Restructuring / rebranding
- **Market cap:** ~$280K (rank #4288). Near all-time low ($0.048 vs $0.047 ATL on Mar 30, 2026).
- **Volume:** $142/day — effectively illiquid
- **Supply:** 5.77M circulating / 12.9M total / 25.8M max
- **Treasury:** $575K USDC remaining (after two buyback rounds)
- **Monthly allowance:** $50K
- **Product:** Devnet only — anonymous deposits and withdrawals working. Planned features include one-click batch withdrawals and OFAC compliance tools. No mainnet mixer 6 months post-ICO.
- **Rebrand to Turbine:** zklsol.org now redirects (302) to **turbine.cash**; docs.zklsol.org redirects to docs.turbine.cash. The site reads "turbine - Earn in Private." No formal rebrand announcement found. The token ticker remains $ZKFG on exchanges.
- **Team:** Anonymous/pseudonymous. No Discord — Telegram only. ~1,978 X followers.
- **Exchanges:** MetaDAO Futarchy AMM, Meteora (ZKFG/SOL pair)
## Governance Activity — Most Active Treasury Defense

ZKLSOL has the most governance activity of any MetaDAO launch relative to its size. The team voluntarily burned their entire performance package — an extraordinary alignment signal:

| Decision | Date | Outcome | Record |
|----------|------|---------|--------|
| ICO launch | 2025-10-20 | Completed, $969K raised (50x oversubscribed) | [[zklsol-futardio-launch]] |
| Team token burn | 2025-11 | Team burned entire performance package | [[zklsol-burn-team-performance-package]] |
| $200K buyback | 2026-01 | Passed — 4,000 orders over ~14 days at max $0.082/token | [[zklsol-200k-buyback]] |
| $500K restructuring buyback | 2026-02 | Passed — 4,000 orders at max $0.076/token + 50% FutarchyAMM liquidity to treasury | [[zklsol-restructuring-proposal]] |

**Team token burn:** The team voluntarily destroyed their entire performance package to signal alignment with holders. This is the most aggressive team-alignment move in the MetaDAO ecosystem — zero upside for the team beyond whatever tokens they purchased in the ICO like everyone else.

**Restructuring (Feb 2026):** Proph3t proposed the $500K buyback, acknowledging ZKFG had traded below NAV since inception. The proposal also moved 50% of FutarchyAMM liquidity to the treasury for operations. Key quote: "When an ownership coin trades at significant discount to NAV, the right thing to do is buybacks until it gets there. We communicate to projects beforehand: you can raise more, but the money you raise will be at risk."
## Open Questions

- **Quiet rebrand.** zklsol.org → turbine.cash with no formal announcement is a transparency concern. The token ticker remains ZKFG while the product rebrands to Turbine — this creates confusion.
- **Devnet only after 6 months.** No mainnet mixer launch despite raising $969K. The buybacks consumed most of the raise. What has the team been building?
- **Regulatory risk.** Privacy mixers are the most scrutinized category in crypto after the Tornado Cash sanctions. ZKLSOL's LST innovation is clever but doesn't change the regulatory exposure. The planned OFAC compliance tools suggest awareness.
- **Post-restructuring viability.** Two buyback rounds consumed ~$700K of a $969K raise. Treasury has $575K remaining at $50K/month = ~11 months. Can the product ship before runway expires?
- **Near-ATL price signals.** Trading at $0.048 vs a $0.047 ATL with $142/day volume. The market has largely abandoned this token. Anonymous team + no mainnet product + quiet rebrand is not a confidence-building combination.

## Timeline

- **2025-10-20** — MetaDAO curated ICO opens ($300K target)
- **2025-10-24** — ICO closes. $969K raised (50x oversubscribed).
- **2025-11** — Team burns entire performance package tokens
- **2026-01** — $200K treasury buyback (4,000 orders over 14 days, max $0.082/token)
- **2026-02** — $500K restructuring buyback + 50% FutarchyAMM liquidity moved to treasury

---

Relevant Notes:

- [[metadao]] — launch platform (curated ICO #6)

Topics:

- [[internet finance and decision markets]]


---

**ops/agent-state/SCHEMA.md** (new file, 255 lines)

# Agent State Schema v1

File-backed durable state for teleo agents running headless on a VPS. Survives context truncation, crash recovery, and session handoffs.

## Design Principles

1. **Three formats** — JSON for structured fields, JSONL for append-only logs, Markdown for context-window-friendly content
2. **Many small files** — selective loading, crash isolation, no locks needed
3. **Write on events** — not timers. State updates happen when something meaningful changes.
4. **Shared-nothing writes** — each agent owns its directory. Communication happens via inbox files.
5. **State ≠ Git** — state is operational (how the agent functions). Git is output (what the agent produces).

## Directory Layout

```
/opt/teleo-eval/agent-state/{agent}/
├── report.json      # Current status — read every wake
├── tasks.json       # Active task queue — read every wake
├── session.json     # Current/last session metadata
├── memory.md        # Accumulated cross-session knowledge (structured)
├── inbox/           # Messages from other agents/orchestrator
│   └── {uuid}.json  # One file per message, atomic create
├── journal.jsonl    # Append-only session log
└── metrics.json     # Cumulative performance counters
```
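The write-on-events principle pairs naturally with atomic replacement, so a crash mid-write never leaves a torn state file. A minimal sketch of how an agent might persist one of these JSON files; the helper name is illustrative, not part of the schema:

```python
import json
import os
import tempfile

def write_state(path: str, state: dict) -> None:
    """Atomically replace a JSON state file: write a temp file, fsync, then rename.

    os.replace is atomic on POSIX, so a reader sees either the old file or the
    new one, never a partial write, even if the process dies mid-flush.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".", suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f, indent=2)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

write_state("report.json", {"agent": "rio", "status": "idle"})
with open("report.json") as f:
    print(json.load(f)["status"])  # idle
```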

## File Specifications

### report.json

Written: after each meaningful action (session start, key finding, session end)
Read: every wake, by orchestrator for monitoring

```json
{
  "agent": "rio",
  "updated_at": "2026-03-31T22:00:00Z",
  "status": "idle | researching | extracting | evaluating | error",
  "summary": "Completed research session — 8 sources archived on Solana launchpad mechanics",
  "current_task": null,
  "last_session": {
    "id": "20260331-220000",
    "started_at": "2026-03-31T20:30:00Z",
    "ended_at": "2026-03-31T22:00:00Z",
    "outcome": "completed | timeout | error",
    "sources_archived": 8,
    "branch": "rio/research-2026-03-31",
    "pr_number": 247
  },
  "blocked_by": null,
  "next_priority": "Follow up on conditional AMM thread from @0xfbifemboy"
}
```

### tasks.json

Written: when task status changes
Read: every wake

```json
{
  "agent": "rio",
  "updated_at": "2026-03-31T22:00:00Z",
  "tasks": [
    {
      "id": "task-001",
      "type": "research | extract | evaluate | follow-up | disconfirm",
      "description": "Investigate conditional AMM mechanisms in MetaDAO v2",
      "status": "pending | active | completed | dropped",
      "priority": "high | medium | low",
      "created_at": "2026-03-31T22:00:00Z",
      "context": "Flagged in research session 2026-03-31 — @0xfbifemboy thread on conditional liquidity",
      "follow_up_from": null,
      "completed_at": null,
      "outcome": null
    }
  ]
}
```
### session.json

Written: at session start and session end

Read: every wake (for continuation), by orchestrator for scheduling

```json
{
  "agent": "rio",
  "session_id": "20260331-220000",
  "started_at": "2026-03-31T20:30:00Z",
  "ended_at": "2026-03-31T22:00:00Z",
  "type": "research | extract | evaluate | ad-hoc",
  "domain": "internet-finance",
  "branch": "rio/research-2026-03-31",
  "status": "running | completed | timeout | error",
  "model": "sonnet",
  "timeout_seconds": 5400,
  "research_question": "How is conditional liquidity being implemented in Solana AMMs?",
  "belief_targeted": "Markets aggregate information better than votes because skin-in-the-game creates selection pressure on beliefs",
  "disconfirmation_target": "Cases where prediction markets failed to aggregate information despite financial incentives",
  "sources_archived": 8,
  "sources_expected": 10,
  "tokens_used": null,
  "cost_usd": null,
  "errors": [],
  "handoff_notes": "Found 3 sources on conditional AMM failures — needs extraction. Also flagged @metaproph3t thread for Theseus (AI governance angle)."
}
```
### memory.md

Written: at session end, when learning something critical

Read: every wake (included in research prompt context)

```markdown
# Rio — Operational Memory

## Cross-Session Patterns
- Conditional AMMs keep appearing across 3+ independent sources (sessions 03-28, 03-29, 03-31). This is likely a real trend, not cherry-picking.
- @0xfbifemboy consistently produces highest-signal threads in the DeFi mechanism design space.

## Dead Ends (don't re-investigate)
- Polymarket fee structure analysis (2026-03-25): fully documented in existing claims, no new angles.
- Jupiter governance token utility (2026-03-27): vaporware, no mechanism to analyze.

## Open Questions
- Is MetaDAO's conditional market maker manipulation-resistant at scale? No evidence either way yet.
- How does futarchy handle low-liquidity markets? This is the keystone weakness.

## Corrections
- Previously believed Drift protocol was pure order-book. Actually hybrid AMM+CLOB. Updated 2026-03-30.

## Cross-Agent Flags Received
- Theseus (2026-03-29): "Check if MetaDAO governance has AI agent participation — alignment implications"
- Leo (2026-03-28): "Your conditional AMM analysis connects to Astra's resource allocation claims"
```
### inbox/{uuid}.json

Written: by other agents or orchestrator

Read: checked on wake, deleted after processing

```json
{
  "id": "msg-abc123",
  "from": "theseus",
  "to": "rio",
  "created_at": "2026-03-31T18:00:00Z",
  "type": "flag | task | question | cascade",
  "priority": "high | normal",
  "subject": "Check MetaDAO for AI agent participation",
  "body": "Found evidence that AI agents are trading on Drift — check if any are participating in MetaDAO conditional markets. Alignment implications if automated agents are influencing futarchic governance.",
  "source_ref": "theseus/research-2026-03-31",
  "expires_at": null
}
```
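The tmp-then-rename pattern that lib-state.sh uses for these messages can be sketched in Python as well. `send_message` below is a hypothetical illustration of the same atomic-write discipline, not the pipeline's actual writer:

```python
import json
import os
import tempfile
from pathlib import Path

def send_message(inbox: Path, msg: dict) -> Path:
    """Write an inbox message atomically: tmp file in the same dir, then rename."""
    inbox.mkdir(parents=True, exist_ok=True)
    dest = inbox / f"{msg['id']}.json"
    fd, tmp = tempfile.mkstemp(dir=inbox, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(msg, f, indent=2)
    os.replace(tmp, dest)  # atomic on POSIX — readers never see a partial file
    return dest

inbox = Path(tempfile.mkdtemp()) / "inbox"
path = send_message(inbox, {"id": "msg-abc123", "from": "theseus", "to": "rio", "type": "flag"})
```

Using `os.replace` rather than a plain write means a reader polling the inbox either sees the complete message or nothing.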
### journal.jsonl

Written: append at session boundaries

Read: debug/audit only (never loaded into agent context by default)

```jsonl
{"ts":"2026-03-31T20:30:00Z","event":"session_start","session_id":"20260331-220000","type":"research"}
{"ts":"2026-03-31T20:35:00Z","event":"orient_complete","files_read":["identity.md","beliefs.md","reasoning.md","_map.md"]}
{"ts":"2026-03-31T21:30:00Z","event":"sources_archived","count":5,"domain":"internet-finance"}
{"ts":"2026-03-31T22:00:00Z","event":"session_end","outcome":"completed","sources_archived":8,"handoff":"conditional AMM failures need extraction"}
```
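Because the journal is line-delimited JSON, audits reduce to a line-by-line parse. A minimal sketch (`tally_events` is a hypothetical helper, not part of the pipeline):

```python
import json

def tally_events(jsonl_text: str) -> dict:
    """Count journal events by type; tolerates blank lines."""
    counts: dict = {}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)["event"]
        counts[event] = counts.get(event, 0) + 1
    return counts

journal = (
    '{"ts":"2026-03-31T20:30:00Z","event":"session_start"}\n'
    '{"ts":"2026-03-31T22:00:00Z","event":"session_end","outcome":"completed"}\n'
)
print(tally_events(journal))  # {'session_start': 1, 'session_end': 1}
```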
### metrics.json

Written: at session end (cumulative counters)

Read: by CI scoring system, by orchestrator for scheduling decisions

```json
{
  "agent": "rio",
  "updated_at": "2026-03-31T22:00:00Z",
  "lifetime": {
    "sessions_total": 47,
    "sessions_completed": 42,
    "sessions_timeout": 3,
    "sessions_error": 2,
    "sources_archived": 312,
    "claims_proposed": 89,
    "claims_accepted": 71,
    "claims_challenged": 12,
    "claims_rejected": 6,
    "disconfirmation_attempts": 47,
    "disconfirmation_hits": 8,
    "cross_agent_flags_sent": 23,
    "cross_agent_flags_received": 15
  },
  "rolling_30d": {
    "sessions": 12,
    "sources_archived": 87,
    "claims_proposed": 24,
    "acceptance_rate": 0.83,
    "avg_sources_per_session": 7.25
  }
}
```
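Fields like `acceptance_rate` are derived from the raw counters. A sketch of the recomputation over the lifetime block above (`derived_rates` is a hypothetical helper; the spec's `rolling_30d` values come from a 30-day window, not these lifetime totals):

```python
def derived_rates(metrics: dict) -> tuple:
    """Recompute acceptance rate and sources/session from raw lifetime counters."""
    lt = metrics["lifetime"]
    acceptance = lt["claims_accepted"] / lt["claims_proposed"] if lt["claims_proposed"] else 0.0
    per_session = lt["sources_archived"] / lt["sessions_total"] if lt["sessions_total"] else 0.0
    return round(acceptance, 2), round(per_session, 2)

metrics = {"lifetime": {"claims_accepted": 71, "claims_proposed": 89,
                        "sources_archived": 312, "sessions_total": 47}}
print(derived_rates(metrics))  # (0.8, 6.64)
```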
## Integration Points

### research-session.sh

Add these hooks:

1. **Pre-session** (after branch creation, before Claude launch):
   - Write `session.json` with status "running"
   - Write `report.json` with status "researching"
   - Append session_start to `journal.jsonl`
   - Include `memory.md` and `tasks.json` in the research prompt

2. **Post-session** (after commit, before/after PR):
   - Update `session.json` with outcome, source count, branch, PR number
   - Update `report.json` with summary and next_priority
   - Update `metrics.json` counters
   - Append session_end to `journal.jsonl`
   - Process and clean `inbox/` (mark processed messages)

3. **On error/timeout**:
   - Update `session.json` status to "error" or "timeout"
   - Update `report.json` with error info
   - Append error event to `journal.jsonl`
### Pipeline daemon (teleo-pipeline.py)

- Read `report.json` for all agents to build dashboard
- Write to `inbox/` when cascade events need agent attention
- Read `metrics.json` for scheduling decisions (deprioritize agents with high error rates)
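The last bullet can be sketched as a pure function over metrics.json. `schedule_priority` and the 0.2 threshold are illustrative assumptions, not actual pipeline behavior:

```python
def schedule_priority(metrics: dict, max_error_rate: float = 0.2) -> str:
    """Deprioritize agents whose timeout+error share of sessions exceeds the threshold."""
    lt = metrics.get("lifetime", {})
    total = lt.get("sessions_total", 0)
    if not total:
        return "normal"  # no history yet — schedule normally
    bad = lt.get("sessions_timeout", 0) + lt.get("sessions_error", 0)
    return "low" if bad / total > max_error_rate else "normal"

healthy = {"lifetime": {"sessions_total": 47, "sessions_timeout": 3, "sessions_error": 2}}
flaky = {"lifetime": {"sessions_total": 10, "sessions_timeout": 2, "sessions_error": 2}}
print(schedule_priority(healthy), schedule_priority(flaky))  # normal low
```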
### Claude research prompt

Add to the prompt:

```
### Step 0: Load Operational State (1 min)
Read /opt/teleo-eval/agent-state/{agent}/memory.md — this is your cross-session operational memory.
Read /opt/teleo-eval/agent-state/{agent}/tasks.json — check for pending tasks.
Check /opt/teleo-eval/agent-state/{agent}/inbox/ for messages from other agents.
Process any high-priority inbox items before choosing your research direction.
```
## Bootstrap

Run `ops/agent-state/bootstrap.sh` to create directories and seed initial state for all agents.

## Migration from Existing State

- `research-journal.md` continues as-is (agent-written, in git). `memory.md` is the structured equivalent for operational state (not in git).
- `ops/sessions/*.json` continue for backward compat. `session.json` per agent is the richer replacement.
- `ops/queue.md` remains the human-visible task board. `tasks.json` per agent is the machine-readable equivalent.
- Workspace flags (`~/.pentagon/workspace/collective/flag-*`) migrate to `inbox/` messages over time.

145 ops/agent-state/bootstrap.sh (executable file)
@@ -0,0 +1,145 @@
#!/bin/bash
# Bootstrap agent-state directories for all teleo agents.
# Run once on VPS: bash ops/agent-state/bootstrap.sh
# Safe to re-run — skips existing files, only creates missing ones.

set -euo pipefail

STATE_ROOT="${TELEO_STATE_ROOT:-/opt/teleo-eval/agent-state}"

AGENTS=("rio" "clay" "theseus" "vida" "astra" "leo")
DOMAINS=("internet-finance" "entertainment" "ai-alignment" "health" "space-development" "grand-strategy")

log() { echo "[$(date -Iseconds)] $*"; }

for i in "${!AGENTS[@]}"; do
  AGENT="${AGENTS[$i]}"
  DOMAIN="${DOMAINS[$i]}"
  DIR="$STATE_ROOT/$AGENT"

  log "Bootstrapping $AGENT..."
  mkdir -p "$DIR/inbox"

  # report.json — current status
  if [ ! -f "$DIR/report.json" ]; then
    cat > "$DIR/report.json" <<EOJSON
{
  "agent": "$AGENT",
  "updated_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "status": "idle",
  "summary": "State initialized — no sessions recorded yet.",
  "current_task": null,
  "last_session": null,
  "blocked_by": null,
  "next_priority": null
}
EOJSON
    log "  Created report.json"
  fi

  # tasks.json — empty task queue
  if [ ! -f "$DIR/tasks.json" ]; then
    cat > "$DIR/tasks.json" <<EOJSON
{
  "agent": "$AGENT",
  "updated_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "tasks": []
}
EOJSON
    log "  Created tasks.json"
  fi

  # session.json — no session yet
  if [ ! -f "$DIR/session.json" ]; then
    cat > "$DIR/session.json" <<EOJSON
{
  "agent": "$AGENT",
  "session_id": null,
  "started_at": null,
  "ended_at": null,
  "type": null,
  "domain": "$DOMAIN",
  "branch": null,
  "status": "idle",
  "model": null,
  "timeout_seconds": null,
  "research_question": null,
  "belief_targeted": null,
  "disconfirmation_target": null,
  "sources_archived": 0,
  "sources_expected": 0,
  "tokens_used": null,
  "cost_usd": null,
  "errors": [],
  "handoff_notes": null
}
EOJSON
    log "  Created session.json"
  fi

  # memory.md — empty operational memory
  if [ ! -f "$DIR/memory.md" ]; then
    cat > "$DIR/memory.md" <<EOMD
# ${AGENT^} — Operational Memory

## Cross-Session Patterns
(none yet)

## Dead Ends
(none yet)

## Open Questions
(none yet)

## Corrections
(none yet)

## Cross-Agent Flags Received
(none yet)
EOMD
    log "  Created memory.md"
  fi

  # metrics.json — zero counters
  if [ ! -f "$DIR/metrics.json" ]; then
    cat > "$DIR/metrics.json" <<EOJSON
{
  "agent": "$AGENT",
  "updated_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "lifetime": {
    "sessions_total": 0,
    "sessions_completed": 0,
    "sessions_timeout": 0,
    "sessions_error": 0,
    "sources_archived": 0,
    "claims_proposed": 0,
    "claims_accepted": 0,
    "claims_challenged": 0,
    "claims_rejected": 0,
    "disconfirmation_attempts": 0,
    "disconfirmation_hits": 0,
    "cross_agent_flags_sent": 0,
    "cross_agent_flags_received": 0
  },
  "rolling_30d": {
    "sessions": 0,
    "sources_archived": 0,
    "claims_proposed": 0,
    "acceptance_rate": 0.0,
    "avg_sources_per_session": 0.0
  }
}
EOJSON
    log "  Created metrics.json"
  fi

  # journal.jsonl — empty log
  if [ ! -f "$DIR/journal.jsonl" ]; then
    echo "{\"ts\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"event\":\"state_initialized\",\"schema_version\":\"1.0\"}" > "$DIR/journal.jsonl"
    log "  Created journal.jsonl"
  fi

done

log "Bootstrap complete. State root: $STATE_ROOT"
log "Agents initialized: ${AGENTS[*]}"
258 ops/agent-state/lib-state.sh (executable file)
@@ -0,0 +1,258 @@
#!/bin/bash
# lib-state.sh — Bash helpers for reading/writing agent state files.
# Source this in pipeline scripts: source ops/agent-state/lib-state.sh
#
# All writes use atomic rename (write to .tmp, then mv) to prevent corruption.
# All reads return valid JSON or empty string on missing/corrupt files.

STATE_ROOT="${TELEO_STATE_ROOT:-/opt/teleo-eval/agent-state}"

# --- Internal helpers ---

_state_dir() {
  local agent="$1"
  echo "$STATE_ROOT/$agent"
}

# Atomic write: write to tmp file, then rename. Prevents partial reads.
_atomic_write() {
  local filepath="$1"
  local content="$2"
  local tmpfile="${filepath}.tmp.$$"
  echo "$content" > "$tmpfile"
  mv -f "$tmpfile" "$filepath"
}

# Variant that takes full JSON from stdin
_atomic_write_stdin() {
  local filepath="$1"
  local tmpfile="${filepath}.tmp.$$"
  cat > "$tmpfile"
  mv -f "$tmpfile" "$filepath"
}

# --- Report (current status) ---

state_read_report() {
  local agent="$1"
  local file="$(_state_dir "$agent")/report.json"
  [ -f "$file" ] && cat "$file" || echo "{}"
}

state_update_report() {
  local agent="$1"
  local status="$2"
  local summary="$3"
  local file="$(_state_dir "$agent")/report.json"

  # Read existing, merge with updates using python (available on VPS)
  python3 -c "
import json, sys
try:
    with open('$file') as f:
        data = json.load(f)
except:
    data = {'agent': '$agent'}
data['status'] = '$status'
data['summary'] = '''$summary'''
data['updated_at'] = '$(date -u +%Y-%m-%dT%H:%M:%SZ)'
print(json.dumps(data, indent=2))
" | _atomic_write_stdin "$file"
}

# Full report update with session info (called at session end)
state_finalize_report() {
  local agent="$1"
  local status="$2"
  local summary="$3"
  local session_id="$4"
  local started_at="$5"
  local ended_at="$6"
  local outcome="$7"
  local sources="$8"
  local branch="$9"
  local pr_number="${10}"
  local next_priority="${11:-null}"
  local file="$(_state_dir "$agent")/report.json"

  python3 -c "
import json
data = {
    'agent': '$agent',
    'updated_at': '$ended_at',
    'status': '$status',
    'summary': '''$summary''',
    'current_task': None,
    'last_session': {
        'id': '$session_id',
        'started_at': '$started_at',
        'ended_at': '$ended_at',
        'outcome': '$outcome',
        'sources_archived': $sources,
        'branch': '$branch',
        'pr_number': $pr_number
    },
    'blocked_by': None,
    'next_priority': $([ "$next_priority" = "null" ] && echo "None" || echo "'$next_priority'")
}
print(json.dumps(data, indent=2))
" | _atomic_write_stdin "$file"
}

# --- Session ---

state_start_session() {
  local agent="$1"
  local session_id="$2"
  local type="$3"
  local domain="$4"
  local branch="$5"
  local model="${6:-sonnet}"
  local timeout="${7:-5400}"
  local started_at
  started_at="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  local file="$(_state_dir "$agent")/session.json"

  python3 -c "
import json
data = {
    'agent': '$agent',
    'session_id': '$session_id',
    'started_at': '$started_at',
    'ended_at': None,
    'type': '$type',
    'domain': '$domain',
    'branch': '$branch',
    'status': 'running',
    'model': '$model',
    'timeout_seconds': $timeout,
    'research_question': None,
    'belief_targeted': None,
    'disconfirmation_target': None,
    'sources_archived': 0,
    'sources_expected': 0,
    'tokens_used': None,
    'cost_usd': None,
    'errors': [],
    'handoff_notes': None
}
print(json.dumps(data, indent=2))
" | _atomic_write_stdin "$file"

  echo "$started_at"
}

state_end_session() {
  local agent="$1"
  local outcome="$2"
  local sources="${3:-0}"
  local pr_number="${4:-null}"
  local file="$(_state_dir "$agent")/session.json"

  python3 -c "
import json
with open('$file') as f:
    data = json.load(f)
data['ended_at'] = '$(date -u +%Y-%m-%dT%H:%M:%SZ)'
data['status'] = '$outcome'
data['sources_archived'] = $sources
print(json.dumps(data, indent=2))
" | _atomic_write_stdin "$file"
}

# --- Journal (append-only JSONL) ---

state_journal_append() {
  local agent="$1"
  local event="$2"
  shift 2
  # Remaining args are key=value pairs for extra fields
  local file="$(_state_dir "$agent")/journal.jsonl"
  local extras=""
  for kv in "$@"; do
    local key="${kv%%=*}"
    local val="${kv#*=}"
    extras="$extras, \"$key\": \"$val\""
  done
  echo "{\"ts\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"event\":\"$event\"$extras}" >> "$file"
}

# --- Metrics ---

state_update_metrics() {
  local agent="$1"
  local outcome="$2"
  local sources="${3:-0}"
  local file="$(_state_dir "$agent")/metrics.json"

  python3 -c "
import json
try:
    with open('$file') as f:
        data = json.load(f)
except:
    data = {'agent': '$agent', 'lifetime': {}, 'rolling_30d': {}}

lt = data.setdefault('lifetime', {})
lt['sessions_total'] = lt.get('sessions_total', 0) + 1
if '$outcome' == 'completed':
    lt['sessions_completed'] = lt.get('sessions_completed', 0) + 1
elif '$outcome' == 'timeout':
    lt['sessions_timeout'] = lt.get('sessions_timeout', 0) + 1
elif '$outcome' == 'error':
    lt['sessions_error'] = lt.get('sessions_error', 0) + 1
lt['sources_archived'] = lt.get('sources_archived', 0) + $sources

data['updated_at'] = '$(date -u +%Y-%m-%dT%H:%M:%SZ)'
print(json.dumps(data, indent=2))
" | _atomic_write_stdin "$file"
}

# --- Inbox ---

state_check_inbox() {
  local agent="$1"
  local inbox="$(_state_dir "$agent")/inbox"
  [ -d "$inbox" ] && ls "$inbox"/*.json 2>/dev/null || true
}

state_send_message() {
  local from="$1"
  local to="$2"
  local type="$3"
  local subject="$4"
  local body="$5"
  local inbox="$(_state_dir "$to")/inbox"
  local msg_id="msg-$(date +%s)-$$"
  local file="$inbox/${msg_id}.json"

  mkdir -p "$inbox"
  python3 -c "
import json
data = {
    'id': '$msg_id',
    'from': '$from',
    'to': '$to',
    'created_at': '$(date -u +%Y-%m-%dT%H:%M:%SZ)',
    'type': '$type',
    'priority': 'normal',
    'subject': '''$subject''',
    'body': '''$body''',
    'source_ref': None,
    'expires_at': None
}
print(json.dumps(data, indent=2))
" | _atomic_write_stdin "$file"
  echo "$msg_id"
}

# --- State directory check ---

state_ensure_dir() {
  local agent="$1"
  local dir="$(_state_dir "$agent")"
  if [ ! -d "$dir" ]; then
    echo "ERROR: Agent state not initialized for $agent. Run bootstrap.sh first." >&2
    return 1
  fi
}
113 ops/agent-state/process-cascade-inbox.py (new file)
@@ -0,0 +1,113 @@
#!/usr/bin/env python3
"""Process cascade inbox messages after a research session.

For each unread cascade-*.md in an agent's inbox:
1. Logs cascade_reviewed event to pipeline.db audit_log
2. Moves the file to inbox/processed/

Usage: python3 process-cascade-inbox.py <agent-name>
"""

import json
import os
import re
import shutil
import sqlite3
import sys
from datetime import datetime, timezone
from pathlib import Path

AGENT_STATE_DIR = Path(os.environ.get("AGENT_STATE_DIR", "/opt/teleo-eval/agent-state"))
PIPELINE_DB = Path(os.environ.get("PIPELINE_DB", "/opt/teleo-eval/pipeline/pipeline.db"))


def parse_frontmatter(text: str) -> dict:
    """Parse YAML-like frontmatter from markdown."""
    fm = {}
    match = re.match(r'^---\n(.*?)\n---', text, re.DOTALL)
    if not match:
        return fm
    for line in match.group(1).strip().splitlines():
        if ':' in line:
            key, val = line.split(':', 1)
            fm[key.strip()] = val.strip().strip('"')
    return fm


def process_agent_inbox(agent: str) -> int:
    """Process cascade messages in agent's inbox. Returns count processed."""
    inbox_dir = AGENT_STATE_DIR / agent / "inbox"
    if not inbox_dir.exists():
        return 0

    cascade_files = sorted(inbox_dir.glob("cascade-*.md"))
    if not cascade_files:
        return 0

    # Ensure processed dir exists
    processed_dir = inbox_dir / "processed"
    processed_dir.mkdir(exist_ok=True)

    processed = 0
    now = datetime.now(timezone.utc).isoformat()

    try:
        conn = sqlite3.connect(str(PIPELINE_DB), timeout=10)
        conn.execute("PRAGMA journal_mode=WAL")
    except sqlite3.Error as e:
        print(f"WARNING: Cannot connect to pipeline.db: {e}", file=sys.stderr)
        # Still move files even if DB is unavailable
        conn = None

    for cf in cascade_files:
        try:
            text = cf.read_text()
            fm = parse_frontmatter(text)

            # Skip already-processed files
            if fm.get("status") == "processed":
                continue

            # Log to audit_log
            if conn:
                detail = {
                    "agent": agent,
                    "cascade_file": cf.name,
                    "subject": fm.get("subject", "unknown"),
                    "original_created": fm.get("created", "unknown"),
                    "reviewed_at": now,
                }
                conn.execute(
                    "INSERT INTO audit_log (stage, event, detail, timestamp) VALUES (?, ?, ?, ?)",
                    ("cascade", "cascade_reviewed", json.dumps(detail), now),
                )

            # Move to processed
            dest = processed_dir / cf.name
            shutil.move(str(cf), str(dest))
            processed += 1

        except Exception as e:
            print(f"WARNING: Failed to process {cf.name}: {e}", file=sys.stderr)

    if conn:
        try:
            conn.commit()
            conn.close()
        except sqlite3.Error:
            pass

    return processed


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print(f"Usage: {sys.argv[0]} <agent-name>", file=sys.stderr)
        sys.exit(1)

    agent = sys.argv[1]
    count = process_agent_inbox(agent)
    if count > 0:
        print(f"Processed {count} cascade message(s) for {agent}")
    # Exit 0 regardless — non-fatal
    sys.exit(0)
274 ops/pipeline-v2/lib/cascade.py (new file)
@@ -0,0 +1,274 @@
|
||||||
|
"""Cascade automation — auto-flag dependent beliefs/positions when claims change.
|
||||||
|
|
||||||
|
Hook point: called from merge.py after _embed_merged_claims, before _delete_remote_branch.
|
||||||
|
Uses the same main_sha/branch_sha diff to detect changed claim files, then scans
|
||||||
|
all agent beliefs and positions for depends_on references to those claims.
|
||||||
|
|
||||||
|
Notifications are written to /opt/teleo-eval/agent-state/{agent}/inbox/ using
|
||||||
|
the same atomic-write pattern as lib-state.sh.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import hashlib
|
||||||
|
import json
|
||||||
|
import logging
|
||||||
|
import os
|
||||||
|
import re
|
||||||
|
import tempfile
|
||||||
|
from datetime import datetime, timezone
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
logger = logging.getLogger("pipeline.cascade")
|
||||||
|
|
||||||
|
AGENT_STATE_DIR = Path("/opt/teleo-eval/agent-state")
|
||||||
|
CLAIM_DIRS = {"domains/", "core/", "foundations/", "decisions/"}
|
||||||
|
AGENT_NAMES = ["rio", "leo", "clay", "astra", "vida", "theseus"]
|
||||||
|
|
||||||
|
|
||||||
|
def _extract_claim_titles_from_diff(diff_files: list[str]) -> set[str]:
|
||||||
|
"""Extract claim titles from changed file paths."""
|
||||||
|
titles = set()
|
||||||
|
for fpath in diff_files:
|
||||||
|
if not fpath.endswith(".md"):
|
||||||
|
continue
|
||||||
|
if not any(fpath.startswith(d) for d in CLAIM_DIRS):
|
||||||
|
continue
|
||||||
|
basename = os.path.basename(fpath)
|
||||||
|
if basename.startswith("_") or basename == "directory.md":
|
||||||
|
continue
|
||||||
|
title = basename.removesuffix(".md")
|
||||||
|
titles.add(title)
|
||||||
|
return titles
|
||||||
|
|
||||||
|
|
||||||
|
def _normalize_for_match(text: str) -> str:
|
||||||
|
"""Normalize for fuzzy matching: lowercase, hyphens to spaces, strip punctuation, collapse whitespace."""
|
||||||
|
text = text.lower().strip()
|
||||||
|
text = text.replace("-", " ")
|
||||||
|
text = re.sub(r"[^\w\s]", "", text)
|
||||||
|
text = re.sub(r"\s+", " ", text)
|
||||||
|
return text
|
||||||
|
|
||||||
|
|
||||||
|
def _slug_to_words(slug: str) -> str:
|
||||||
|
"""Convert kebab-case slug to space-separated words."""
|
||||||
|
return slug.replace("-", " ")
|
||||||
|
|
||||||
|
|
||||||
|
def _parse_depends_on(file_path: Path) -> tuple[str, list[str]]:
|
||||||
|
"""Parse a belief or position file's depends_on entries.
|
||||||
|
|
||||||
|
Returns (agent_name, [dependency_titles]).
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
content = file_path.read_text(encoding="utf-8")
|
||||||
|
except (OSError, UnicodeDecodeError):
|
||||||
|
return ("", [])
|
||||||
|
|
||||||
|
agent = ""
|
||||||
|
deps = []
|
||||||
|
in_frontmatter = False
|
||||||
|
in_depends = False
|
||||||
|
|
||||||
|
for line in content.split("\n"):
|
||||||
|
if line.strip() == "---":
|
||||||
|
if not in_frontmatter:
|
||||||
|
in_frontmatter = True
|
||||||
|
continue
|
||||||
|
else:
|
||||||
|
break
|
||||||
|
|
||||||
|
if in_frontmatter:
|
||||||
|
if line.startswith("agent:"):
|
||||||
|
agent = line.split(":", 1)[1].strip().strip('"').strip("'")
|
||||||
|
elif line.startswith("depends_on:"):
|
||||||
|
in_depends = True
|
||||||
|
rest = line.split(":", 1)[1].strip()
|
||||||
|
if rest.startswith("["):
|
||||||
|
items = re.findall(r'"([^"]+)"|\'([^\']+)\'', rest)
|
||||||
|
for item in items:
|
||||||
|
dep = item[0] or item[1]
|
||||||
|
dep = dep.strip("[]").replace("[[", "").replace("]]", "")
|
||||||
|
deps.append(dep)
|
||||||
|
in_depends = False
|
||||||
|
elif in_depends:
|
||||||
|
if line.startswith(" - "):
|
||||||
|
dep = line.strip().lstrip("- ").strip('"').strip("'")
|
||||||
|
dep = dep.replace("[[", "").replace("]]", "")
|
||||||
|
deps.append(dep)
|
||||||
|
elif line.strip() and not line.startswith(" "):
|
||||||
|
in_depends = False
|
||||||
|
|
    # Also scan body for [[wiki-links]]
    body_links = re.findall(r"\[\[([^\]]+)\]\]", content)
    for link in body_links:
        if link not in deps:
            deps.append(link)

    return (agent, deps)


def _write_inbox_message(agent: str, subject: str, body: str) -> bool:
    """Write a cascade notification to an agent's inbox. Atomic tmp+rename."""
    inbox_dir = AGENT_STATE_DIR / agent / "inbox"
    if not inbox_dir.exists():
        logger.warning("cascade: no inbox dir for agent %s, skipping", agent)
        return False

    ts = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    file_hash = hashlib.md5(f"{agent}-{subject}-{body[:200]}".encode()).hexdigest()[:8]
    filename = f"cascade-{ts}-{subject[:60]}-{file_hash}.md"
    final_path = inbox_dir / filename

    try:
        fd, tmp_path = tempfile.mkstemp(dir=str(inbox_dir), suffix=".tmp")
        with os.fdopen(fd, "w") as f:
            f.write("---\n")
            f.write("type: cascade\n")
            f.write("from: pipeline\n")
            f.write(f"to: {agent}\n")
            f.write(f'subject: "{subject}"\n')
            f.write(f"created: {datetime.now(timezone.utc).isoformat()}\n")
            f.write("status: unread\n")
            f.write("---\n\n")
            f.write(body)
        os.rename(tmp_path, str(final_path))
        return True
    except OSError:
        logger.exception("cascade: failed to write inbox message for %s", agent)
        return False


def _find_matches(deps: list[str], claim_lookup: dict[str, str]) -> list[str]:
    """Check if any dependency matches a changed claim.

    Uses exact normalized match first, then substring containment for longer
    strings only (min 15 chars) to avoid false positives on short generic names.
    """
    matched = []
    for dep in deps:
        norm = _normalize_for_match(dep)
        if norm in claim_lookup:
            matched.append(claim_lookup[norm])
        else:
            # Substring match only when both strings are sufficiently specific
            for claim_norm, claim_orig in claim_lookup.items():
                if min(len(norm), len(claim_norm)) < 15:
                    continue
                if claim_norm in norm or norm in claim_norm:
                    matched.append(claim_orig)
                    break
    return matched
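The two-stage matching can be sketched in isolation. This is a minimal standalone version with a stand-in normalizer (`_normalize_for_match` is assumed here to lowercase and collapse punctuation to spaces); names and inputs are illustrative only:

```python
import re

def normalize(s: str) -> str:
    # Stand-in for _normalize_for_match: lowercase, collapse non-alphanumerics.
    return re.sub(r"[^a-z0-9]+", " ", s.lower()).strip()

def find_matches(deps, claim_lookup):
    matched = []
    for dep in deps:
        norm = normalize(dep)
        if norm in claim_lookup:
            # Stage 1: exact normalized match
            matched.append(claim_lookup[norm])
            continue
        # Stage 2: substring containment, only for sufficiently long strings
        for claim_norm, claim_orig in claim_lookup.items():
            if min(len(norm), len(claim_norm)) < 15:
                continue  # too short/generic for substring matching
            if claim_norm in norm or norm in claim_norm:
                matched.append(claim_orig)
                break
    return matched

lookup = {normalize("Avici unsecured lending thesis"): "Avici unsecured lending thesis"}
print(find_matches(["avici-unsecured-lending-thesis"], lookup))  # exact normalized hit
print(find_matches(["ORE"], lookup))                             # too short: no match
```

Note how a short dependency like `"ORE"` never reaches the substring stage, which is the false-positive guard the docstring describes.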


def _format_cascade_body(
    file_name: str,
    file_type: str,
    matched_claims: list[str],
    pr_num: int,
) -> str:
    """Format the cascade notification body."""
    claims_list = "\n".join(f"- {c}" for c in matched_claims)
    return (
        f"# Cascade: upstream claims changed\n\n"
        f"Your {file_type} **{file_name}** depends on claims that were modified in PR #{pr_num}.\n\n"
        f"## Changed claims\n\n{claims_list}\n\n"
        f"## Action needed\n\n"
        f"Review whether your {file_type}'s confidence, description, or grounding "
        f"needs updating in light of these changes. If the evidence strengthened, "
        f"consider increasing confidence. If it weakened or contradicted, flag for "
        f"re-evaluation.\n"
    )


async def cascade_after_merge(
    main_sha: str,
    branch_sha: str,
    pr_num: int,
    main_worktree: Path,
    conn=None,
) -> int:
    """Scan for beliefs/positions affected by claims changed in this merge.

    Returns the number of cascade notifications sent.
    """
    # 1. Get changed files
    proc = await asyncio.create_subprocess_exec(
        "git", "diff", "--name-only", "--diff-filter=ACMR",
        main_sha, branch_sha,
        cwd=str(main_worktree),
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, _ = await asyncio.wait_for(proc.communicate(), timeout=10)
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
        logger.warning("cascade: git diff timed out")
        return 0

    if proc.returncode != 0:
        logger.warning("cascade: git diff failed (rc=%d)", proc.returncode)
        return 0

    diff_files = [f for f in stdout.decode().strip().split("\n") if f]

    # 2. Extract claim titles from changed files
    changed_claims = _extract_claim_titles_from_diff(diff_files)
    if not changed_claims:
        return 0

    logger.info("cascade: %d claims changed in PR #%d: %s",
                len(changed_claims), pr_num, list(changed_claims)[:5])

    # Build normalized lookup for fuzzy matching
    claim_lookup = {}
    for claim in changed_claims:
        claim_lookup[_normalize_for_match(claim)] = claim
        claim_lookup[_normalize_for_match(_slug_to_words(claim))] = claim

    # 3. Scan all beliefs and positions
    notifications = 0
    agents_dir = main_worktree / "agents"
    if not agents_dir.exists():
        logger.warning("cascade: no agents/ dir in worktree")
        return 0

    for agent_name in AGENT_NAMES:
        agent_dir = agents_dir / agent_name
        if not agent_dir.exists():
            continue

        for subdir, file_type in [("beliefs", "belief"), ("positions", "position")]:
            target_dir = agent_dir / subdir
            if not target_dir.exists():
                continue
            for md_file in target_dir.glob("*.md"):
                _, deps = _parse_depends_on(md_file)
                matched = _find_matches(deps, claim_lookup)
                if matched:
                    body = _format_cascade_body(md_file.name, file_type, matched, pr_num)
                    if _write_inbox_message(agent_name, f"claim-changed-affects-{file_type}", body):
                        notifications += 1
                        logger.info("cascade: notified %s — %s '%s' affected by %s",
                                    agent_name, file_type, md_file.stem, matched)

    if notifications:
        logger.info("cascade: sent %d notifications for PR #%d", notifications, pr_num)

    # Write structured audit_log entry for cascade tracking (Page 4 data)
    if conn is not None:
        try:
            conn.execute(
                "INSERT INTO audit_log (stage, event, detail) VALUES (?, ?, ?)",
                ("cascade", "cascade_triggered", json.dumps({
                    "pr": pr_num,
                    "claims_changed": list(changed_claims)[:20],
                    "notifications_sent": notifications,
                })),
            )
        except Exception:
            logger.exception("cascade: audit_log write failed (non-fatal)")

    return notifications
230 ops/pipeline-v2/lib/cross_domain.py (new file)
@@ -0,0 +1,230 @@
"""Cross-domain citation index — detect entity overlap across domains.

Hook point: called from merge.py after cascade_after_merge.
After a claim merges, checks if its referenced entities also appear in claims
from other domains. Logs connections to audit_log for silo detection.

Two detection methods:
1. Entity name matching — entity names appearing in claim body text (word-boundary)
2. Source overlap — claims citing the same source archive files

At ~600 claims and ~100 entities, a full scan per merge takes <1 second.
"""

import asyncio
import json
import logging
import os
import re
from pathlib import Path

logger = logging.getLogger("pipeline.cross_domain")

# Minimum entity name length to avoid false positives (ORE, QCX, etc.)
MIN_ENTITY_NAME_LEN = 4

# Entity names that are common English words — skip to avoid false positives
ENTITY_STOPLIST = {"versus", "island", "loyal", "saber", "nebula", "helium", "coal", "snapshot", "dropout"}


def _build_entity_names(worktree: Path) -> dict[str, str]:
    """Build mapping of entity_slug -> display_name from entity files."""
    names = {}
    entity_dir = worktree / "entities"
    if not entity_dir.exists():
        return names
    for md_file in entity_dir.rglob("*.md"):
        if md_file.name.startswith("_"):
            continue
        try:
            content = md_file.read_text(encoding="utf-8")
        except (OSError, UnicodeDecodeError):
            continue
        for line in content.split("\n"):
            if line.startswith("name:"):
                name = line.split(":", 1)[1].strip().strip('"').strip("'")
                if len(name) >= MIN_ENTITY_NAME_LEN and name.lower() not in ENTITY_STOPLIST:
                    names[md_file.stem] = name
                break
    return names


def _compile_entity_patterns(entity_names: dict[str, str]) -> dict[str, re.Pattern]:
    """Pre-compile a word-boundary regex for each entity name."""
    patterns = {}
    for slug, name in entity_names.items():
        try:
            patterns[slug] = re.compile(r'\b' + re.escape(name) + r'\b', re.IGNORECASE)
        except re.error:
            continue
    return patterns


def _extract_source_refs(content: str) -> set[str]:
    """Extract source archive references ([[YYYY-MM-DD-...]]) from content."""
    return set(re.findall(r"\[\[(20\d{2}-\d{2}-\d{2}-[^\]]+)\]\]", content))
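Both detection primitives can be exercised directly. A small sketch using the same regexes (the entity name and sample text here are hypothetical):

```python
import re

# Word-boundary entity matching, as in _compile_entity_patterns: the name must
# appear as a whole word, not as a substring of a longer word.
pat = re.compile(r'\b' + re.escape("MetaDAO") + r'\b', re.IGNORECASE)
hit = bool(pat.search("Launched via metadao curation"))   # whole word: matches
miss = bool(pat.search("the metadaoist faction"))         # inside a longer word: no match

# Source-archive extraction, as in _extract_source_refs: only wiki-links whose
# target starts with a 20xx date are treated as source references.
refs = set(re.findall(
    r"\[\[(20\d{2}-\d{2}-\d{2}-[^\]]+)\]\]",
    "Grounded in [[2025-10-14-futardio-launch-avici]]; see also [[metadao]].",
))
```

The word-boundary requirement is what makes the `ENTITY_STOPLIST` and minimum-length guard sufficient: short or common names are excluded up front, and everything else only matches as a standalone token.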


def _find_entity_mentions(content: str, patterns: dict[str, re.Pattern]) -> set[str]:
    """Find entity slugs whose names appear in the content (word-boundary match)."""
    found = set()
    for slug, pat in patterns.items():
        if pat.search(content):
            found.add(slug)
    return found


def _scan_domain_claims(worktree: Path, patterns: dict[str, re.Pattern]) -> dict[str, list[dict]]:
    """Build domain -> [claim_info] mapping for all claims."""
    domain_claims = {}
    domains_dir = worktree / "domains"
    if not domains_dir.exists():
        return domain_claims

    for domain_dir in domains_dir.iterdir():
        if not domain_dir.is_dir():
            continue
        claims = []
        for claim_file in domain_dir.glob("*.md"):
            if claim_file.name.startswith("_") or claim_file.name == "directory.md":
                continue
            try:
                content = claim_file.read_text(encoding="utf-8")
            except (OSError, UnicodeDecodeError):
                continue
            claims.append({
                "slug": claim_file.stem,
                "entities": _find_entity_mentions(content, patterns),
                "sources": _extract_source_refs(content),
            })
        domain_claims[domain_dir.name] = claims
    return domain_claims


async def cross_domain_after_merge(
    main_sha: str,
    branch_sha: str,
    pr_num: int,
    main_worktree: Path,
    conn=None,
) -> int:
    """Detect cross-domain entity/source overlap for claims changed in this merge.

    Returns the number of cross-domain connections found.
    """
    # 1. Get changed files
    proc = await asyncio.create_subprocess_exec(
        "git", "diff", "--name-only", "--diff-filter=ACMR",
        main_sha, branch_sha,
        cwd=str(main_worktree),
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        stdout, _ = await asyncio.wait_for(proc.communicate(), timeout=10)
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
        logger.warning("cross_domain: git diff timed out")
        return 0

    if proc.returncode != 0:
        return 0

    diff_files = [f for f in stdout.decode().strip().split("\n") if f]

    # 2. Filter to claim files
    changed_claims = []
    for fpath in diff_files:
        if not fpath.endswith(".md") or not fpath.startswith("domains/"):
            continue
        parts = fpath.split("/")
        if len(parts) < 3:
            continue
        basename = os.path.basename(fpath)
        if basename.startswith("_") or basename == "directory.md":
            continue
        changed_claims.append({"path": fpath, "domain": parts[1], "slug": Path(basename).stem})

    if not changed_claims:
        return 0

    # 3. Build entity patterns and scan all claims
    entity_names = _build_entity_names(main_worktree)
    if not entity_names:
        return 0

    patterns = _compile_entity_patterns(entity_names)
    domain_claims = _scan_domain_claims(main_worktree, patterns)

    # 4. For each changed claim, find cross-domain connections
    total_connections = 0
    all_connections = []

    for claim in changed_claims:
        claim_path = main_worktree / claim["path"]
        try:
            content = claim_path.read_text(encoding="utf-8")
        except (OSError, UnicodeDecodeError):
            continue

        my_entities = _find_entity_mentions(content, patterns)
        my_sources = _extract_source_refs(content)

        if not my_entities and not my_sources:
            continue

        connections = []
        for other_domain, other_claims in domain_claims.items():
            if other_domain == claim["domain"]:
                continue
            for other in other_claims:
                shared_entities = my_entities & other["entities"]
                shared_sources = my_sources & other["sources"]

                # Threshold: >=2 shared entities, OR 1 entity + 1 source
                entity_count = len(shared_entities)
                source_count = len(shared_sources)

                if entity_count >= 2 or (entity_count >= 1 and source_count >= 1):
                    connections.append({
                        "other_claim": other["slug"],
                        "other_domain": other_domain,
                        "shared_entities": sorted(shared_entities)[:5],
                        "shared_sources": sorted(shared_sources)[:3],
                    })

        if connections:
            total_connections += len(connections)
            all_connections.append({
                "claim": claim["slug"],
                "domain": claim["domain"],
                "connections": connections[:10],
            })
            logger.info(
                "cross_domain: %s (%s) has %d cross-domain connections",
                claim["slug"], claim["domain"], len(connections),
            )

    # 5. Log to audit_log
    if all_connections and conn is not None:
        try:
            conn.execute(
                "INSERT INTO audit_log (stage, event, detail) VALUES (?, ?, ?)",
                ("cross_domain", "connections_found", json.dumps({
                    "pr": pr_num,
                    "total_connections": total_connections,
                    "claims_with_connections": len(all_connections),
                    "details": all_connections[:10],
                })),
            )
        except Exception:
            logger.exception("cross_domain: audit_log write failed (non-fatal)")

    if total_connections:
        logger.info(
            "cross_domain: PR #%d — %d connections across %d claims",
            pr_num, total_connections, len(all_connections),
        )

    return total_connections
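The connection threshold (at least two shared entities, or one shared entity plus one shared source) can be isolated into a small predicate; the entity slugs and source names below are hypothetical:

```python
def is_connected(my_entities: set, my_sources: set,
                 other_entities: set, other_sources: set) -> bool:
    # Same rule as cross_domain_after_merge: >=2 shared entities,
    # or >=1 shared entity together with >=1 shared source.
    shared_e = len(my_entities & other_entities)
    shared_s = len(my_sources & other_sources)
    return shared_e >= 2 or (shared_e >= 1 and shared_s >= 1)

# One shared entity alone is not enough to link two claims...
print(is_connected({"avici", "metadao"}, set(), {"avici"}, {"2025-10-14-x"}))  # False
# ...but one shared entity plus one shared source is.
print(is_connected({"avici"}, {"2025-10-14-x"}, {"avici"}, {"2025-10-14-x"}))  # True
```

Requiring corroboration from a second signal is what keeps a single popular entity from connecting every claim that mentions it.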
625 ops/pipeline-v2/lib/db.py (new file)
@@ -0,0 +1,625 @@
"""SQLite database — schema, migrations, connection management."""

import json
import logging
import sqlite3
from contextlib import contextmanager

from . import config

logger = logging.getLogger("pipeline.db")

SCHEMA_VERSION = 12

SCHEMA_SQL = """
CREATE TABLE IF NOT EXISTS schema_version (
    version INTEGER PRIMARY KEY,
    applied_at TEXT DEFAULT (datetime('now'))
);

CREATE TABLE IF NOT EXISTS sources (
    path TEXT PRIMARY KEY,
    status TEXT NOT NULL DEFAULT 'unprocessed',
        -- unprocessed, triaging, extracting, extracted, null_result,
        -- needs_reextraction, error
    priority TEXT DEFAULT 'medium',
        -- critical, high, medium, low, skip
    priority_log TEXT DEFAULT '[]',
        -- JSON array: [{stage, priority, reasoning, ts}]
    extraction_model TEXT,
    claims_count INTEGER DEFAULT 0,
    pr_number INTEGER,
    transient_retries INTEGER DEFAULT 0,
    substantive_retries INTEGER DEFAULT 0,
    last_error TEXT,
    feedback TEXT,
        -- eval feedback for re-extraction (JSON)
    cost_usd REAL DEFAULT 0,
    created_at TEXT DEFAULT (datetime('now')),
    updated_at TEXT DEFAULT (datetime('now'))
);

CREATE TABLE IF NOT EXISTS prs (
    number INTEGER PRIMARY KEY,
    source_path TEXT REFERENCES sources(path),
    branch TEXT,
    status TEXT NOT NULL DEFAULT 'open',
        -- validating, open, reviewing, approved, merging, merged, closed, zombie, conflict
        -- conflict: rebase failed or merge timed out — needs human intervention
    domain TEXT,
    agent TEXT,
    commit_type TEXT CHECK(commit_type IS NULL OR commit_type IN ('extract', 'research', 'entity', 'decision', 'reweave', 'fix', 'challenge', 'enrich', 'synthesize', 'unknown')),
    tier TEXT,
        -- LIGHT, STANDARD, DEEP
    tier0_pass INTEGER,
        -- 0/1
    leo_verdict TEXT DEFAULT 'pending',
        -- pending, approve, request_changes, skipped, failed
    domain_verdict TEXT DEFAULT 'pending',
    domain_agent TEXT,
    domain_model TEXT,
    priority TEXT,
        -- NULL = inherit from source. Set explicitly for human-submitted PRs.
        -- Pipeline PRs: COALESCE(p.priority, s.priority, 'medium')
        -- Human PRs: 'critical' (detected via missing source_path or non-agent author)
    origin TEXT DEFAULT 'pipeline',
        -- pipeline | human | external
    transient_retries INTEGER DEFAULT 0,
    substantive_retries INTEGER DEFAULT 0,
    last_error TEXT,
    last_attempt TEXT,
    cost_usd REAL DEFAULT 0,
    created_at TEXT DEFAULT (datetime('now')),
    merged_at TEXT
);

CREATE TABLE IF NOT EXISTS costs (
    date TEXT,
    model TEXT,
    stage TEXT,
    calls INTEGER DEFAULT 0,
    input_tokens INTEGER DEFAULT 0,
    output_tokens INTEGER DEFAULT 0,
    cost_usd REAL DEFAULT 0,
    PRIMARY KEY (date, model, stage)
);

CREATE TABLE IF NOT EXISTS circuit_breakers (
    name TEXT PRIMARY KEY,
    state TEXT DEFAULT 'closed',
        -- closed, open, halfopen
    failures INTEGER DEFAULT 0,
    successes INTEGER DEFAULT 0,
    tripped_at TEXT,
    last_success_at TEXT,
        -- heartbeat: if now() - last_success_at > 2*interval, stage is stalled (Vida)
    last_update TEXT DEFAULT (datetime('now'))
);

CREATE TABLE IF NOT EXISTS audit_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp TEXT DEFAULT (datetime('now')),
    stage TEXT,
    event TEXT,
    detail TEXT
);

CREATE TABLE IF NOT EXISTS response_audit (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp TEXT NOT NULL DEFAULT (datetime('now')),
    chat_id INTEGER,
    user TEXT,
    agent TEXT DEFAULT 'rio',
    model TEXT,
    query TEXT,
    conversation_window TEXT,
        -- JSON: prior N messages for context
        -- NOTE: intentional duplication of transcript data for audit self-containment.
        -- Transcripts live in /opt/teleo-eval/transcripts/ but audit rows need prompt
        -- context inline for retrieval-quality diagnosis. Primary driver of row size —
        -- target for cleanup when 90-day retention policy lands.
    entities_matched TEXT,
        -- JSON: [{name, path, score, used_in_response}]
    claims_matched TEXT,
        -- JSON: [{path, title, score, source, used_in_response}]
    retrieval_layers_hit TEXT,
        -- JSON: ["keyword","qdrant","graph"]
    retrieval_gap TEXT,
        -- What the KB was missing (if anything)
    market_data TEXT,
        -- JSON: injected token prices
    research_context TEXT,
        -- Haiku pre-pass results if any
    kb_context_text TEXT,
        -- Full context string sent to model
    tool_calls TEXT,
        -- JSON: ordered array [{tool, input, output, duration_ms, ts}]
    raw_response TEXT,
    display_response TEXT,
    confidence_score REAL,
        -- Model self-rated retrieval quality 0.0-1.0
    response_time_ms INTEGER,
    -- Eval pipeline columns (v10)
    prompt_tokens INTEGER,
    completion_tokens INTEGER,
    generation_cost REAL,
    embedding_cost REAL,
    total_cost REAL,
    blocked INTEGER DEFAULT 0,
    block_reason TEXT,
    query_type TEXT,
    created_at TEXT DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_sources_status ON sources(status);
CREATE INDEX IF NOT EXISTS idx_prs_status ON prs(status);
CREATE INDEX IF NOT EXISTS idx_prs_domain ON prs(domain);
CREATE INDEX IF NOT EXISTS idx_costs_date ON costs(date);
CREATE INDEX IF NOT EXISTS idx_audit_stage ON audit_log(stage);
CREATE INDEX IF NOT EXISTS idx_response_audit_ts ON response_audit(timestamp);
CREATE INDEX IF NOT EXISTS idx_response_audit_agent ON response_audit(agent);
CREATE INDEX IF NOT EXISTS idx_response_audit_chat_ts ON response_audit(chat_id, timestamp);
"""


def get_connection(readonly: bool = False) -> sqlite3.Connection:
    """Create a SQLite connection with WAL mode and proper settings."""
    config.DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(
        str(config.DB_PATH),
        timeout=30,
        isolation_level=None,  # autocommit — we manage transactions explicitly
    )
    conn.row_factory = sqlite3.Row
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout=10000")
    conn.execute("PRAGMA foreign_keys=ON")
    if readonly:
        conn.execute("PRAGMA query_only=ON")
    return conn


@contextmanager
def transaction(conn: sqlite3.Connection):
    """Context manager for explicit transactions."""
    conn.execute("BEGIN")
    try:
        yield conn
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
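Because connections are opened with `isolation_level=None` (autocommit), the `transaction` helper is what gives callers all-or-nothing semantics. A minimal standalone sketch against an in-memory database:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def transaction(conn):
    # Same shape as above: explicit BEGIN/COMMIT with rollback on error.
    conn.execute("BEGIN")
    try:
        yield conn
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE audit_log (stage TEXT, event TEXT)")

# A successful block commits its writes.
with transaction(conn):
    conn.execute("INSERT INTO audit_log VALUES ('cascade', 'ok')")

# A failing block rolls them back and re-raises.
try:
    with transaction(conn):
        conn.execute("INSERT INTO audit_log VALUES ('cascade', 'doomed')")
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass

rows = conn.execute("SELECT event FROM audit_log").fetchall()
print(rows)  # [('ok',)] — only the committed row survives
```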


# Branch prefix → (agent, commit_type) mapping.
# Single source of truth — used by merge.py at INSERT time and migration v7 backfill.
# Unknown prefixes → ('unknown', 'unknown') + warning log.
BRANCH_PREFIX_MAP = {
    "extract": ("pipeline", "extract"),
    "ingestion": ("pipeline", "extract"),
    "epimetheus": ("epimetheus", "extract"),
    "rio": ("rio", "research"),
    "theseus": ("theseus", "research"),
    "astra": ("astra", "research"),
    "vida": ("vida", "research"),
    "clay": ("clay", "research"),
    "leo": ("leo", "entity"),
    "reweave": ("pipeline", "reweave"),
    "fix": ("pipeline", "fix"),
}


def classify_branch(branch: str) -> tuple[str, str]:
    """Derive (agent, commit_type) from branch prefix.

    Returns ('unknown', 'unknown') and logs a warning for unrecognized prefixes.
    """
    prefix = branch.split("/", 1)[0] if "/" in branch else branch
    result = BRANCH_PREFIX_MAP.get(prefix)
    if result is None:
        logger.warning("Unknown branch prefix %r in branch %r — defaulting to ('unknown', 'unknown')", prefix, branch)
        return ("unknown", "unknown")
    return result
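The classification is a pure function of the branch name, so it is easy to sketch standalone (map trimmed to a few entries; logging omitted):

```python
# Stand-alone sketch of the prefix classification above.
BRANCH_PREFIX_MAP = {
    "extract": ("pipeline", "extract"),
    "theseus": ("theseus", "research"),
    "leo": ("leo", "entity"),
}

def classify_branch(branch: str) -> tuple[str, str]:
    # Everything before the first "/" is the prefix; a branch with no "/"
    # is its own prefix.
    prefix = branch.split("/", 1)[0] if "/" in branch else branch
    return BRANCH_PREFIX_MAP.get(prefix, ("unknown", "unknown"))

print(classify_branch("theseus/ph"))        # ('theseus', 'research')
print(classify_branch("mystery/branch-x"))  # ('unknown', 'unknown')
```

Keeping the map as a single module-level constant is what lets the live INSERT path and the v7 backfill migration agree on attribution.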
|
||||||
|
|
||||||
|
|
||||||
|
def migrate(conn: sqlite3.Connection):
|
||||||
|
"""Run schema migrations."""
|
||||||
|
conn.executescript(SCHEMA_SQL)
|
||||||
|
|
||||||
|
# Check current version
|
||||||
|
try:
|
||||||
|
row = conn.execute("SELECT MAX(version) as v FROM schema_version").fetchone()
|
||||||
|
current = row["v"] if row and row["v"] else 0
|
||||||
|
except sqlite3.OperationalError:
|
||||||
|
current = 0
|
||||||
|
|
||||||
|
# --- Incremental migrations ---
|
||||||
|
if current < 2:
|
||||||
|
# Phase 2: add multiplayer columns to prs table
|
||||||
|
for stmt in [
|
||||||
|
"ALTER TABLE prs ADD COLUMN priority TEXT",
|
||||||
|
"ALTER TABLE prs ADD COLUMN origin TEXT DEFAULT 'pipeline'",
|
||||||
|
"ALTER TABLE prs ADD COLUMN last_error TEXT",
|
||||||
|
]:
|
||||||
|
try:
|
||||||
|
conn.execute(stmt)
|
||||||
|
except sqlite3.OperationalError:
|
||||||
|
pass # Column already exists (idempotent)
|
||||||
|
logger.info("Migration v2: added priority, origin, last_error to prs")
|
||||||
|
|
||||||
|
if current < 3:
|
||||||
|
# Phase 3: retry budget — track eval attempts and issue tags per PR
|
||||||
|
for stmt in [
|
||||||
|
"ALTER TABLE prs ADD COLUMN eval_attempts INTEGER DEFAULT 0",
|
||||||
|
"ALTER TABLE prs ADD COLUMN eval_issues TEXT DEFAULT '[]'",
|
||||||
|
]:
|
||||||
|
try:
|
||||||
|
conn.execute(stmt)
|
||||||
|
except sqlite3.OperationalError:
|
||||||
|
pass # Column already exists (idempotent)
|
||||||
|
logger.info("Migration v3: added eval_attempts, eval_issues to prs")
|
||||||
|
|
||||||
|
if current < 4:
|
||||||
|
# Phase 4: auto-fixer — track fix attempts per PR
|
||||||
|
for stmt in [
|
||||||
|
"ALTER TABLE prs ADD COLUMN fix_attempts INTEGER DEFAULT 0",
|
||||||
|
]:
|
||||||
|
try:
|
||||||
|
conn.execute(stmt)
|
||||||
|
except sqlite3.OperationalError:
|
||||||
|
pass # Column already exists (idempotent)
|
||||||
|
logger.info("Migration v4: added fix_attempts to prs")
|
||||||
|
|
||||||
|
if current < 5:
|
||||||
|
# Phase 5: contributor identity system — tracks who contributed what
|
||||||
|
# Aligned with schemas/attribution.md (5 roles) + Leo's tier system.
|
||||||
|
# CI is COMPUTED from raw counts × weights, never stored.
|
||||||
|
conn.executescript("""
|
||||||
|
CREATE TABLE IF NOT EXISTS contributors (
|
||||||
|
handle TEXT PRIMARY KEY,
|
||||||
|
display_name TEXT,
|
||||||
|
agent_id TEXT,
|
||||||
|
first_contribution TEXT,
|
||||||
|
last_contribution TEXT,
|
||||||
|
tier TEXT DEFAULT 'new',
|
||||||
|
-- new, contributor, veteran
|
||||||
|
sourcer_count INTEGER DEFAULT 0,
|
||||||
|
extractor_count INTEGER DEFAULT 0,
|
||||||
|
challenger_count INTEGER DEFAULT 0,
|
||||||
|
synthesizer_count INTEGER DEFAULT 0,
|
||||||
|
reviewer_count INTEGER DEFAULT 0,
|
||||||
|
claims_merged INTEGER DEFAULT 0,
|
||||||
|
challenges_survived INTEGER DEFAULT 0,
|
||||||
|
domains TEXT DEFAULT '[]',
|
||||||
|
highlights TEXT DEFAULT '[]',
|
||||||
|
identities TEXT DEFAULT '{}',
|
||||||
|
created_at TEXT DEFAULT (datetime('now')),
|
||||||
|
updated_at TEXT DEFAULT (datetime('now'))
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_contributors_tier ON contributors(tier);
|
||||||
|
""")
|
||||||
|
logger.info("Migration v5: added contributors table")
|
||||||
|
|
||||||
|
if current < 6:
|
||||||
|
# Phase 6: analytics — time-series metrics snapshots for trending dashboard
|
||||||
|
conn.executescript("""
|
||||||
|
CREATE TABLE IF NOT EXISTS metrics_snapshots (
|
||||||
|
ts TEXT DEFAULT (datetime('now')),
|
||||||
|
throughput_1h INTEGER,
|
||||||
|
approval_rate REAL,
|
||||||
|
open_prs INTEGER,
|
||||||
|
merged_total INTEGER,
|
||||||
|
closed_total INTEGER,
|
||||||
|
conflict_total INTEGER,
|
||||||
|
evaluated_24h INTEGER,
|
||||||
|
fix_success_rate REAL,
|
||||||
|
rejection_broken_wiki_links INTEGER DEFAULT 0,
|
||||||
|
rejection_frontmatter_schema INTEGER DEFAULT 0,
|
||||||
|
rejection_near_duplicate INTEGER DEFAULT 0,
|
||||||
|
rejection_confidence INTEGER DEFAULT 0,
|
||||||
|
rejection_other INTEGER DEFAULT 0,
|
||||||
|
extraction_model TEXT,
|
||||||
|
eval_domain_model TEXT,
|
||||||
|
eval_leo_model TEXT,
|
||||||
|
prompt_version TEXT,
|
||||||
|
pipeline_version TEXT,
|
||||||
|
source_origin_agent INTEGER DEFAULT 0,
|
||||||
|
source_origin_human INTEGER DEFAULT 0,
|
||||||
|
source_origin_scraper INTEGER DEFAULT 0
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_snapshots_ts ON metrics_snapshots(ts);
|
||||||
|
""")
|
||||||
|
logger.info("Migration v6: added metrics_snapshots table for analytics dashboard")
|
||||||
|
|
||||||
|
if current < 7:
|
||||||
|
# Phase 7: agent attribution + commit_type for dashboard
|
||||||
|
# commit_type column + backfill agent/commit_type from branch prefix
|
||||||
|
try:
|
||||||
|
conn.execute("ALTER TABLE prs ADD COLUMN commit_type TEXT CHECK(commit_type IS NULL OR commit_type IN ('extract', 'research', 'entity', 'decision', 'reweave', 'fix', 'unknown'))")
|
||||||
|
except sqlite3.OperationalError:
|
||||||
|
pass # column already exists from CREATE TABLE
|
||||||
|
# Backfill agent and commit_type from branch prefix
|
||||||
|
rows = conn.execute("SELECT number, branch FROM prs WHERE branch IS NOT NULL").fetchall()
|
||||||
|
for row in rows:
|
||||||
|
agent, commit_type = classify_branch(row["branch"])
|
||||||
|
conn.execute(
|
||||||
|
"UPDATE prs SET agent = ?, commit_type = ? WHERE number = ? AND (agent IS NULL OR commit_type IS NULL)",
|
||||||
|
(agent, commit_type, row["number"]),
|
||||||
|
)
|
||||||
|
backfilled = len(rows)
|
||||||
|
logger.info("Migration v7: added commit_type column, backfilled %d PRs with agent/commit_type", backfilled)
|
||||||
|
|
||||||
|
if current < 8:
|
||||||
|
# Phase 8: response audit — full-chain visibility for agent response quality
|
||||||
|
# Captures: query → tool calls → retrieval → context → response → confidence
|
||||||
|
# Approved by Ganymede (architecture), Rio (agent needs), Rhea (ops)
|
||||||
|
conn.executescript("""
|
||||||
|
CREATE TABLE IF NOT EXISTS response_audit (
|
||||||
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
|
timestamp TEXT NOT NULL DEFAULT (datetime('now')),
|
||||||
|
chat_id INTEGER,
|
||||||
|
user TEXT,
|
||||||
|
agent TEXT DEFAULT 'rio',
|
||||||
|
model TEXT,
|
||||||
|
query TEXT,
|
||||||
|
conversation_window TEXT, -- intentional transcript duplication for audit self-containment
|
||||||
|
entities_matched TEXT,
|
||||||
|
claims_matched TEXT,
|
||||||
|
retrieval_layers_hit TEXT,
|
||||||
|
retrieval_gap TEXT,
|
||||||
|
market_data TEXT,
|
||||||
|
research_context TEXT,
|
||||||
|
kb_context_text TEXT,
|
||||||
|
tool_calls TEXT,
|
||||||
|
raw_response TEXT,
|
||||||
|
display_response TEXT,
|
||||||
|
confidence_score REAL,
|
||||||
|
response_time_ms INTEGER,
|
||||||
|
created_at TEXT DEFAULT (datetime('now'))
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_response_audit_ts ON response_audit(timestamp);
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_response_audit_agent ON response_audit(agent);
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_response_audit_chat_ts ON response_audit(chat_id, timestamp);
|
||||||
|
""")
|
||||||
|
logger.info("Migration v8: added response_audit table for agent response auditing")
|
||||||
|
|
||||||
|
    if current < 9:
        # Phase 9: rebuild prs table to expand the CHECK constraint on commit_type.
        # SQLite cannot ALTER CHECK constraints in place — must rebuild the table.
        # Old constraint (v7): extract,research,entity,decision,reweave,fix,unknown
        # New constraint: adds challenge,enrich,synthesize
        # Also re-derive commit_type from branch prefix for rows with invalid/NULL values.

        # Step 1: get all column names from the existing table
        cols_info = conn.execute("PRAGMA table_info(prs)").fetchall()
        col_names = [c["name"] for c in cols_info]
        col_list = ", ".join(col_names)

        # Step 2: create a new table with the expanded CHECK constraint,
        # copy all rows across, then swap it in
        conn.executescript(f"""
        CREATE TABLE prs_new (
            number INTEGER PRIMARY KEY,
            source_path TEXT REFERENCES sources(path),
            branch TEXT,
            status TEXT NOT NULL DEFAULT 'open',
            domain TEXT,
            agent TEXT,
            commit_type TEXT CHECK(commit_type IS NULL OR commit_type IN ('extract','research','entity','decision','reweave','fix','challenge','enrich','synthesize','unknown')),
            tier TEXT,
            tier0_pass INTEGER,
            leo_verdict TEXT DEFAULT 'pending',
            domain_verdict TEXT DEFAULT 'pending',
            domain_agent TEXT,
            domain_model TEXT,
            priority TEXT,
            origin TEXT DEFAULT 'pipeline',
            transient_retries INTEGER DEFAULT 0,
            substantive_retries INTEGER DEFAULT 0,
            last_error TEXT,
            last_attempt TEXT,
            cost_usd REAL DEFAULT 0,
            created_at TEXT DEFAULT (datetime('now')),
            merged_at TEXT
        );
        INSERT INTO prs_new ({col_list}) SELECT {col_list} FROM prs;
        DROP TABLE prs;
        ALTER TABLE prs_new RENAME TO prs;
        """)
        logger.info("Migration v9: rebuilt prs table with expanded commit_type CHECK constraint")

        # Step 3: re-derive commit_type from the branch prefix for invalid/NULL values
        rows = conn.execute(
            """SELECT number, branch FROM prs
               WHERE branch IS NOT NULL
                 AND (commit_type IS NULL
                      OR commit_type NOT IN ('extract','research','entity','decision','reweave','fix','challenge','enrich','synthesize','unknown'))"""
        ).fetchall()
        fixed = 0
        for row in rows:
            agent, commit_type = classify_branch(row["branch"])
            conn.execute(
                "UPDATE prs SET agent = COALESCE(agent, ?), commit_type = ? WHERE number = ?",
                (agent, commit_type, row["number"]),
            )
            fixed += 1
        conn.commit()
        logger.info("Migration v9: re-derived commit_type for %d PRs with invalid/NULL values", fixed)
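The rebuild in migration v9 is the standard SQLite workaround: `ALTER TABLE` cannot modify a CHECK constraint, so a new table is created, rows are copied, and the tables are swapped. A minimal, self-contained sketch of the pattern, using a toy two-column table rather than the real `prs` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE prs (number INTEGER PRIMARY KEY, "
    "commit_type TEXT CHECK(commit_type IN ('extract','fix')))"
)
conn.execute("INSERT INTO prs VALUES (1, 'fix')")

# The old constraint rejects the new value outright.
try:
    conn.execute("INSERT INTO prs VALUES (99, 'challenge')")
except sqlite3.IntegrityError:
    pass  # expected: 'challenge' is not in the old CHECK list

# Rebuild: new table with the expanded constraint, copy, drop, rename.
conn.executescript("""
CREATE TABLE prs_new (
    number INTEGER PRIMARY KEY,
    commit_type TEXT CHECK(commit_type IN ('extract','fix','challenge'))
);
INSERT INTO prs_new SELECT * FROM prs;
DROP TABLE prs;
ALTER TABLE prs_new RENAME TO prs;
""")

conn.execute("INSERT INTO prs VALUES (2, 'challenge')")  # now accepted
n = conn.execute("SELECT count(*) FROM prs").fetchone()[0]
```

Note that, as the v9 code's Step 1 does implicitly, indexes and triggers on the old table are dropped with it and must be recreated if needed.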
    if current < 10:
        # Add eval pipeline columns to response_audit.
        # VPS may already be at v10/v11 from prior (incomplete) deploys, and SQLite's
        # ADD COLUMN has no IF NOT EXISTS form — catch the duplicate-column error instead.
        for col_def in [
            ("prompt_tokens", "INTEGER"),
            ("completion_tokens", "INTEGER"),
            ("generation_cost", "REAL"),
            ("embedding_cost", "REAL"),
            ("total_cost", "REAL"),
            ("blocked", "INTEGER DEFAULT 0"),
            ("block_reason", "TEXT"),
            ("query_type", "TEXT"),
        ]:
            try:
                conn.execute(f"ALTER TABLE response_audit ADD COLUMN {col_def[0]} {col_def[1]}")
            except sqlite3.OperationalError:
                pass  # column already exists
        conn.commit()
        logger.info("Migration v10: added eval pipeline columns to response_audit")

    if current < 11:
        # Phase 11: compute tracking — extended costs table columns
        # (may already exist on VPS from a manual deploy — idempotent ALTERs)
        for col_def in [
            ("duration_ms", "INTEGER DEFAULT 0"),
            ("cache_read_tokens", "INTEGER DEFAULT 0"),
            ("cache_write_tokens", "INTEGER DEFAULT 0"),
            ("cost_estimate_usd", "REAL DEFAULT 0"),
        ]:
            try:
                conn.execute(f"ALTER TABLE costs ADD COLUMN {col_def[0]} {col_def[1]}")
            except sqlite3.OperationalError:
                pass  # column already exists
        conn.commit()
        logger.info("Migration v11: added compute tracking columns to costs")
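Migrations v10 and v11 rely on the same idempotent-ALTER idiom: adding a column that already exists raises `sqlite3.OperationalError`, which is swallowed so the migration can safely re-run after a partial deploy. A sketch with an illustrative table and column name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE costs (id INTEGER PRIMARY KEY)")

# The second pass simulates re-running the migration after an incomplete deploy;
# the duplicate-column error is caught and ignored.
for _ in range(2):
    try:
        conn.execute("ALTER TABLE costs ADD COLUMN duration_ms INTEGER DEFAULT 0")
    except sqlite3.OperationalError:
        pass  # column already exists

cols = [r[1] for r in conn.execute("PRAGMA table_info(costs)")]
```

The trade-off of catching `OperationalError` broadly is that a genuinely malformed ALTER is also silenced, which is why each migration still bumps and records the schema version afterwards.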
    if current < 12:
        # Phase 12: structured review records — captures all evaluation outcomes,
        # including rejections, disagreements, and approved-with-changes.
        # Schema locked with Leo (2026-04-01).
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS review_records (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            pr_number INTEGER NOT NULL,
            claim_path TEXT,
            domain TEXT,
            agent TEXT,
            reviewer TEXT NOT NULL,
            reviewer_model TEXT,
            outcome TEXT NOT NULL
                CHECK (outcome IN ('approved', 'approved-with-changes', 'rejected')),
            rejection_reason TEXT
                CHECK (rejection_reason IS NULL OR rejection_reason IN (
                    'fails-standalone-test', 'duplicate', 'scope-mismatch',
                    'evidence-insufficient', 'framing-poor', 'other'
                )),
            disagreement_type TEXT
                CHECK (disagreement_type IS NULL OR disagreement_type IN (
                    'factual', 'scope', 'framing', 'evidence'
                )),
            notes TEXT,
            batch_id TEXT,
            claims_in_batch INTEGER DEFAULT 1,
            reviewed_at TEXT DEFAULT (datetime('now'))
        );
        CREATE INDEX IF NOT EXISTS idx_review_records_pr ON review_records(pr_number);
        CREATE INDEX IF NOT EXISTS idx_review_records_outcome ON review_records(outcome);
        CREATE INDEX IF NOT EXISTS idx_review_records_domain ON review_records(domain);
        CREATE INDEX IF NOT EXISTS idx_review_records_reviewer ON review_records(reviewer);
        """)
        logger.info("Migration v12: created review_records table")
    if current < SCHEMA_VERSION:
        conn.execute(
            "INSERT OR REPLACE INTO schema_version (version) VALUES (?)",
            (SCHEMA_VERSION,),
        )
        conn.commit()  # explicit commit — executescript auto-commits DDL but not subsequent DML
        logger.info("Database migrated to schema version %d", SCHEMA_VERSION)
    else:
        logger.debug("Database at schema version %d", current)

def audit(conn: sqlite3.Connection, stage: str, event: str, detail: str = None):
    """Write an audit log entry."""
    conn.execute(
        "INSERT INTO audit_log (stage, event, detail) VALUES (?, ?, ?)",
        (stage, event, detail),
    )

def record_review(conn, pr_number: int, reviewer: str, outcome: str, *,
                  claim_path: str = None, domain: str = None, agent: str = None,
                  reviewer_model: str = None, rejection_reason: str = None,
                  disagreement_type: str = None, notes: str = None,
                  claims_in_batch: int = 1):
    """Record a structured review outcome.

    Called from the evaluate stage after Leo or a domain reviewer returns a verdict.
    outcome must be one of: approved, approved-with-changes, rejected.
    """
    batch_id = str(pr_number)
    conn.execute(
        """INSERT INTO review_records
           (pr_number, claim_path, domain, agent, reviewer, reviewer_model,
            outcome, rejection_reason, disagreement_type, notes,
            batch_id, claims_in_batch)
           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
        (pr_number, claim_path, domain, agent, reviewer, reviewer_model,
         outcome, rejection_reason, disagreement_type, notes,
         batch_id, claims_in_batch),
    )

def append_priority_log(conn: sqlite3.Connection, path: str, stage: str, priority: str, reasoning: str):
    """Append a priority assessment to a source's priority_log.

    NOTE: this does NOT update the source's priority column. The priority column
    is the authoritative priority, set only by initial triage or human override.
    The priority_log records each stage's opinion for offline calibration analysis.
    (Bug caught by Theseus — the original version overwrote priority with each stage's opinion.)
    (Race-condition fix per Vida — the read-then-write is wrapped in a transaction.)
    """
    conn.execute("BEGIN")
    try:
        row = conn.execute("SELECT priority_log FROM sources WHERE path = ?", (path,)).fetchone()
        if not row:
            conn.execute("ROLLBACK")
            return
        log = json.loads(row["priority_log"] or "[]")
        log.append({"stage": stage, "priority": priority, "reasoning": reasoning})
        conn.execute(
            "UPDATE sources SET priority_log = ?, updated_at = datetime('now') WHERE path = ?",
            (json.dumps(log), path),
        )
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise

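The transactional read-modify-write in `append_priority_log` can be sketched on a throwaway table; the real `sources` table has more columns, and the `isolation_level = None` setting is one way (an assumption here) to make the explicit `BEGIN`/`COMMIT` calls work with Python's `sqlite3` default transaction handling:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly, as the helper does
conn.execute("CREATE TABLE sources (path TEXT PRIMARY KEY, priority_log TEXT)")
conn.execute("INSERT INTO sources VALUES ('inbox/a.md', NULL)")

# Read the JSON log, append one entry, write it back — all inside one transaction
# so a concurrent writer cannot interleave between the SELECT and the UPDATE.
conn.execute("BEGIN")
row = conn.execute(
    "SELECT priority_log FROM sources WHERE path = ?", ("inbox/a.md",)
).fetchone()
log = json.loads(row[0] or "[]")
log.append({"stage": "extract", "priority": "high", "reasoning": "demo"})
conn.execute(
    "UPDATE sources SET priority_log = ? WHERE path = ?",
    (json.dumps(log), "inbox/a.md"),
)
conn.execute("COMMIT")

entries = json.loads(
    conn.execute(
        "SELECT priority_log FROM sources WHERE path = 'inbox/a.md'"
    ).fetchone()[0]
)
```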
def insert_response_audit(conn: sqlite3.Connection, **kwargs):
    """Insert a response audit record.

    Only recognized, non-None kwargs are written; a no-op if none are provided.
    """
    cols = [
        "timestamp", "chat_id", "user", "agent", "model", "query",
        "conversation_window", "entities_matched", "claims_matched",
        "retrieval_layers_hit", "retrieval_gap", "market_data",
        "research_context", "kb_context_text", "tool_calls",
        "raw_response", "display_response", "confidence_score",
        "response_time_ms",
        # Eval pipeline columns (v10)
        "prompt_tokens", "completion_tokens", "generation_cost",
        "embedding_cost", "total_cost", "blocked", "block_reason",
        "query_type",
    ]
    present = {k: v for k, v in kwargs.items() if k in cols and v is not None}
    if not present:
        return
    col_names = ", ".join(present.keys())
    placeholders = ", ".join("?" for _ in present)
    conn.execute(
        f"INSERT INTO response_audit ({col_names}) VALUES ({placeholders})",
        tuple(present.values()),
    )

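The dynamic-column INSERT used by `insert_response_audit` (allowlist the columns, drop Nones, build the column and placeholder lists from what remains) can be sketched against a cut-down three-column schema; the schema and kwargs here are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE response_audit (query TEXT, model TEXT, blocked INTEGER)")

cols = ["query", "model", "blocked"]
kwargs = {"query": "what is avici?", "model": "sonnet", "ignored_field": 1}

# Keep only allowlisted, non-None kwargs — unknown keys never reach the SQL,
# so column names cannot be injected via kwargs.
present = {k: v for k, v in kwargs.items() if k in cols and v is not None}

conn.execute(
    f"INSERT INTO response_audit ({', '.join(present)}) "
    f"VALUES ({', '.join('?' for _ in present)})",
    tuple(present.values()),
)
row = conn.execute("SELECT query, model, blocked FROM response_audit").fetchone()
```

Columns not supplied (`blocked` above) are left to their defaults, which is why the real table defines `DEFAULT` clauses for flags like `blocked`.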
def set_priority(conn: sqlite3.Connection, path: str, priority: str, reason: str = "human override"):
    """Set a source's authoritative priority. Used for human overrides and initial triage."""
    conn.execute(
        "UPDATE sources SET priority = ?, updated_at = datetime('now') WHERE path = ?",
        (priority, path),
    )
    append_priority_log(conn, path, "override", priority, reason)
1465 ops/pipeline-v2/lib/evaluate.py (new file — diff suppressed because it is too large)
1449 ops/pipeline-v2/lib/merge.py (new file — diff suppressed because it is too large)
@@ -31,6 +31,17 @@ RAW_DIR="/opt/teleo-eval/research-raw/${AGENT}"
 log() { echo "[$(date -Iseconds)] $*" >> "$LOG"; }
 
+# --- Agent State ---
+STATE_LIB="/opt/teleo-eval/ops/agent-state/lib-state.sh"
+if [ -f "$STATE_LIB" ]; then
+    source "$STATE_LIB"
+    HAS_STATE=true
+    SESSION_ID="${AGENT}-$(date +%Y%m%d-%H%M%S)"
+else
+    HAS_STATE=false
+    log "WARN: agent-state lib not found, running without state"
+fi
+
 # --- Lock (prevent concurrent sessions for same agent) ---
 if [ -f "$LOCKFILE" ]; then
     pid=$(cat "$LOCKFILE" 2>/dev/null)
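The lock section above reads the PID out of the lockfile; the usual follow-up (sketched here under the assumption that the script probes the recorded PID, with an illustrative lockfile path) is to treat a lock as stale when that PID is no longer running:

```shell
#!/bin/sh
# Hypothetical stale-lock check; LOCKFILE path and the probe PID are illustrative.
LOCKFILE="/tmp/demo-agent.lock"
echo 99999999 > "$LOCKFILE"   # a PID far above any real PID on a default Linux box
pid=$(cat "$LOCKFILE" 2>/dev/null)
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    status="held"             # the owning process is still alive
else
    status="stale"            # owner is gone: safe to remove and proceed
    rm -f "$LOCKFILE"
fi
echo "lock is $status"
```

`kill -0` sends no signal; it only reports whether the process exists (or whether we lack permission to signal it), which makes it the standard liveness probe for PID-based lockfiles.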
@@ -178,6 +189,14 @@ git branch -D "$BRANCH" 2>/dev/null || true
 git checkout -b "$BRANCH" >> "$LOG" 2>&1
 log "On branch $BRANCH"
 
+# --- Pre-session state ---
+if [ "$HAS_STATE" = true ]; then
+    state_start_session "$AGENT" "$SESSION_ID" "research" "$DOMAIN" "$BRANCH" "sonnet" "5400" > /dev/null 2>&1 || true
+    state_update_report "$AGENT" "researching" "Starting research session ${DATE}" 2>/dev/null || true
+    state_journal_append "$AGENT" "session_start" "session_id=$SESSION_ID" "type=research" "branch=$BRANCH" 2>/dev/null || true
+    log "Agent state: session started ($SESSION_ID)"
+fi
+
 # --- Build the research prompt ---
 # Write tweet data to a temp file so Claude can read it
 echo "$TWEET_DATA" > "$TWEET_FILE"
@@ -188,6 +207,11 @@ RESEARCH_PROMPT="You are ${AGENT}, a Teleo knowledge base agent. Domain: ${DOMAI
 
 You have ~90 minutes of compute. Use it wisely.
 
+### Step 0: Load Operational State (1 min)
+Read /opt/teleo-eval/agent-state/${AGENT}/memory.md — this is your cross-session operational memory. It contains patterns, dead ends, open questions, and corrections from previous sessions.
+Read /opt/teleo-eval/agent-state/${AGENT}/tasks.json — check for pending tasks assigned to you.
+Check /opt/teleo-eval/agent-state/${AGENT}/inbox/ for messages from other agents. Process any high-priority inbox items before choosing your research direction.
+
 ### Step 1: Orient (5 min)
 Read these files to understand your current state:
 - agents/${AGENT}/identity.md (who you are)
@@ -229,7 +253,7 @@ Include which belief you targeted for disconfirmation and what you searched for.
 ### Step 6: Archive Sources (60 min)
 For each relevant tweet/thread, create an archive file:
 
-Path: inbox/archive/YYYY-MM-DD-{author-handle}-{brief-slug}.md
+Path: inbox/queue/YYYY-MM-DD-{author-handle}-{brief-slug}.md
 
 Use this frontmatter:
 ---
@@ -267,7 +291,7 @@ EXTRACTION HINT: [what the extractor should focus on — scopes attention]
 - Set all sources to status: unprocessed (a DIFFERENT instance will extract)
 - Flag cross-domain sources with flagged_for_{agent}: [\"reason\"]
 - Do NOT extract claims yourself — write good notes so the extractor can
-- Check inbox/archive/ for duplicates before creating new archives
+- Check inbox/queue/ and inbox/archive/ for duplicates before creating new archives
 - Aim for 5-15 source archives per session
 
 ### Step 7: Flag Follow-up Directions (5 min)
@@ -303,6 +327,8 @@ The journal accumulates session over session. After 5+ sessions, review it for c
 ### Step 9: Stop
 When you've finished archiving sources, updating your musing, and writing the research journal entry, STOP. Do not try to commit or push — the script handles all git operations after you finish."
 
+CASCADE_PROCESSOR="/opt/teleo-eval/ops/agent-state/process-cascade-inbox.py"
+
 # --- Run Claude research session ---
 log "Starting Claude research session..."
 timeout 5400 "$CLAUDE_BIN" -p "$RESEARCH_PROMPT" \
@@ -311,31 +337,61 @@ timeout 5400 "$CLAUDE_BIN" -p "$RESEARCH_PROMPT" \
     --permission-mode bypassPermissions \
     >> "$LOG" 2>&1 || {
     log "WARN: Research session failed or timed out for $AGENT"
+    # Process cascade inbox even on timeout (agent may have read them in Step 0)
+    if [ -f "$CASCADE_PROCESSOR" ]; then
+        python3 "$CASCADE_PROCESSOR" "$AGENT" 2>>"$LOG" || true
+    fi
+    if [ "$HAS_STATE" = true ]; then
+        state_end_session "$AGENT" "timeout" "0" "null" 2>/dev/null || true
+        state_update_report "$AGENT" "idle" "Research session timed out or failed on ${DATE}" 2>/dev/null || true
+        state_update_metrics "$AGENT" "timeout" "0" 2>/dev/null || true
+        state_journal_append "$AGENT" "session_end" "outcome=timeout" "session_id=$SESSION_ID" 2>/dev/null || true
+        log "Agent state: session recorded as timeout"
+    fi
     git checkout main >> "$LOG" 2>&1
     exit 1
 }
 
 log "Claude session complete"
 
+# --- Process cascade inbox messages (log completion to pipeline.db) ---
+if [ -f "$CASCADE_PROCESSOR" ]; then
+    CASCADE_RESULT=$(python3 "$CASCADE_PROCESSOR" "$AGENT" 2>>"$LOG")
+    [ -n "$CASCADE_RESULT" ] && log "Cascade: $CASCADE_RESULT"
+fi
+
 # --- Check for changes ---
 CHANGED_FILES=$(git status --porcelain)
 if [ -z "$CHANGED_FILES" ]; then
     log "No sources archived by $AGENT"
+    if [ "$HAS_STATE" = true ]; then
+        state_end_session "$AGENT" "completed" "0" "null" 2>/dev/null || true
+        state_update_report "$AGENT" "idle" "Research session completed with no new sources on ${DATE}" 2>/dev/null || true
+        state_update_metrics "$AGENT" "completed" "0" 2>/dev/null || true
+        state_journal_append "$AGENT" "session_end" "outcome=no_sources" "session_id=$SESSION_ID" 2>/dev/null || true
+        log "Agent state: session recorded (no sources)"
+    fi
     git checkout main >> "$LOG" 2>&1
     exit 0
 fi
 
 # --- Stage and commit ---
-git add inbox/archive/ agents/${AGENT}/musings/ agents/${AGENT}/research-journal.md 2>/dev/null || true
+git add inbox/queue/ agents/${AGENT}/musings/ agents/${AGENT}/research-journal.md 2>/dev/null || true
 
 if git diff --cached --quiet; then
     log "No valid changes to commit"
+    if [ "$HAS_STATE" = true ]; then
+        state_end_session "$AGENT" "completed" "0" "null" 2>/dev/null || true
+        state_update_report "$AGENT" "idle" "Research session completed with no valid changes on ${DATE}" 2>/dev/null || true
+        state_update_metrics "$AGENT" "completed" "0" 2>/dev/null || true
+        state_journal_append "$AGENT" "session_end" "outcome=no_valid_changes" "session_id=$SESSION_ID" 2>/dev/null || true
+    fi
     git checkout main >> "$LOG" 2>&1
     exit 0
 fi
 
 AGENT_UPPER=$(echo "$AGENT" | sed 's/./\U&/')
-SOURCE_COUNT=$(git diff --cached --name-only | grep -c "^inbox/archive/" || echo "0")
+SOURCE_COUNT=$(git diff --cached --name-only | grep -c "^inbox/queue/" || echo "0")
 git commit -m "${AGENT}: research session ${DATE} — ${SOURCE_COUNT} sources archived
 
 Pentagon-Agent: ${AGENT_UPPER} <HEADLESS>" >> "$LOG" 2>&1
@@ -375,6 +431,16 @@ Researcher and extractor are different Claude instances to prevent motivated rea
 log "PR #${PR_NUMBER} opened for ${AGENT}'s research session"
 fi
 
+# --- Post-session state (success) ---
+if [ "$HAS_STATE" = true ]; then
+    FINAL_PR="${EXISTING_PR:-${PR_NUMBER:-unknown}}"
+    state_end_session "$AGENT" "completed" "$SOURCE_COUNT" "$FINAL_PR" 2>/dev/null || true
+    state_finalize_report "$AGENT" "idle" "Research session completed: ${SOURCE_COUNT} sources archived" "$SESSION_ID" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "completed" "$SOURCE_COUNT" "$BRANCH" "${FINAL_PR}" 2>/dev/null || true
+    state_update_metrics "$AGENT" "completed" "$SOURCE_COUNT" 2>/dev/null || true
+    state_journal_append "$AGENT" "session_end" "outcome=completed" "sources=$SOURCE_COUNT" "branch=$BRANCH" "pr=$FINAL_PR" 2>/dev/null || true
+    log "Agent state: session finalized (${SOURCE_COUNT} sources, PR #${FINAL_PR})"
+fi
+
 # --- Back to main ---
 git checkout main >> "$LOG" 2>&1
 log "=== Research session complete for $AGENT ==="
Reference in a new issue