Pipeline reliability (8 fixes, reviewed by Ganymede+Rhea+Leo+Rio):
1. Merge API recovery — pre-flight approval check, transient/permanent error distinction, retry jitter (retry sketch after this list)
2. Ghost PR detection — ls-remote branch check in reconciliation, network guard
3. Source status contract — directory IS status, no code change needed
4. Batch-state markers eliminated — two-gate skip (archive-check + batched branch-check)
5. Branch SHA tracking — batched ls-remote, auto-reset verdicts, dismiss stale reviews
6. Mirror pre-flight permissions — chown check in sync-mirror.sh
7. Telegram archive commit-after-write — git add/commit/push with rebase --abort fallback
8. Post-merge source archiving — queue/ → archive/{domain}/ after merge
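
A minimal sketch of the retry shape in fix 1, assuming a hypothetical MergeError type and a caller-supplied merge coroutine (neither is from the pipeline code): transient failures retry with exponential backoff plus jitter, permanent failures surface immediately.

import asyncio
import random

MAX_ATTEMPTS = 4  # assumed retry budget, not the pipeline's actual value

class MergeError(Exception):
    """Hypothetical merge failure; `transient` marks retryable errors."""
    def __init__(self, msg: str, transient: bool):
        super().__init__(msg)
        self.transient = transient

async def merge_with_recovery(merge_pr, pr_id: int) -> bool:
    """Retry transient merge failures with jittered backoff; fail fast on permanent ones."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            await merge_pr(pr_id)
            return True
        except MergeError as e:
            if not e.transient or attempt == MAX_ATTEMPTS:
                raise  # permanent error or budget exhausted: no silent retry loop
            # exponential backoff plus jitter so parallel retries don't align
            await asyncio.sleep(2 ** attempt + random.uniform(0, 1))
    return False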
Pipeline fixes:
- merge_cycled flag — eval attempts preserved during merge-failure cycling (Ganymede+Rhea; state sketch after this list)
- merge_failures diagnostic counter
- Startup recovery preserves eval_attempts (was incorrectly resetting to 0)
- No-diff PRs auto-closed by eval (root cause of 17 zombie PRs)
- GC threshold aligned with substantive fixer budget (was 2, now 4)
- Conflict retry with 3-attempt budget + permanent conflict handler
- Local ff-merge fallback for Forgejo 405 errors
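
A sketch of the state handling behind the merge_cycled, merge_failures, and startup-recovery items, assuming a hypothetical per-PR state dict (field names beyond those named above are assumptions):

def cycle_after_merge_failure(state: dict) -> dict:
    """Requeue a PR after a failed merge without resetting its eval history."""
    state["merge_cycled"] = True  # marks this requeue as merge-driven
    state["merge_failures"] = state.get("merge_failures", 0) + 1  # diagnostic counter
    state["status"] = "queued"  # hypothetical status field
    # eval_attempts is deliberately untouched during cycling
    return state

def recover_on_startup(state: dict) -> dict:
    """Requeue in-flight work on restart, preserving eval_attempts."""
    if state.get("status") == "merging":
        state["status"] = "queued"
    state.setdefault("eval_attempts", 0)  # initialize only if absent, never reset
    return state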
Telegram bot:
- KB retrieval: 3-layer (entity resolution → claim search → agent context)
- Reply-to-bot handler (context.bot.id check)
- Tag regex: @teleo|@futairdbot
- Prompt rewrite for natural analyst voice
- Market data API integration (Ben's token price endpoint)
- Conversation windows (5-message unanswered counter, per-user-per-chat; sketched after this list)
- Conversation history in prompt (last 5 exchanges)
- Worktree file lock for archive writes
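
A minimal sketch of the conversation-window counter, assuming an in-memory store keyed (chat_id, user_id) and that "unanswered" means consecutive messages that don't address the bot (both are assumptions about the actual semantics):

from collections import defaultdict

WINDOW_LIMIT = 5  # consecutive unanswered messages before the window closes

# (chat_id, user_id) -> messages since this user last addressed the bot
_unanswered: dict[tuple[int, int], int] = defaultdict(int)

def in_conversation_window(chat_id: int, user_id: int, addressed_bot: bool) -> bool:
    """True while this user's window with the bot is still open in this chat."""
    key = (chat_id, user_id)
    if addressed_bot:  # tag match or reply-to-bot resets the counter
        _unanswered[key] = 0
        return True
    _unanswered[key] += 1
    return _unanswered[key] < WINDOW_LIMIT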
Infrastructure:
- worktree_lock.py — file-based lock (flock) for main worktree coordination (sketched after this list)
- backfill-sources.py — source DB registration for Argus funnel
- batch-extract-50.sh v3 — two-gate skip, batched ls-remote, network guard
- sync-mirror.sh — auto-PR creation for mirrored GitHub branches, permission pre-flight
- Argus dashboard — conflicts + reviewing in backlog, queue count in funnel
- Enrichment-inside-frontmatter bug fix (regex anchor, not --- split)
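
A minimal sketch of the flock pattern behind worktree_lock.py, with an assumed lock-file path; the blocking exclusive lock serializes writers to the main worktree and is released even if the holder crashes:

import fcntl
from contextlib import contextmanager
from pathlib import Path

LOCK_FILE = Path("/tmp/worktree.lock")  # hypothetical path

@contextmanager
def worktree_lock():
    """Blocking exclusive flock; the kernel drops it if the process dies."""
    LOCK_FILE.touch(exist_ok=True)
    with LOCK_FILE.open("r+") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks until no other holder
        try:
            yield
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)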
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
112 lines
3.4 KiB
Python
#!/usr/bin/env python3
"""Market data API client for live token prices.

Calls Ben's teleo-ai-api endpoint for ownership coin prices.
Used by the Telegram bot to give Rio real-time market context.

Epimetheus owns this module. Rhea: static API key pattern.
"""

import logging
import time
from pathlib import Path

import aiohttp

logger = logging.getLogger("market-data")

API_URL = "https://teleo-ai-api-257133920458.us-east4.run.app/v0/chat/tool/market-data"
API_KEY_FILE = "/opt/teleo-eval/secrets/market-data-key"

# Cache: avoid hitting the API on every message
_cache: dict[str, dict] = {}  # token_name → {data, timestamp}
CACHE_TTL = 300  # 5 minutes


def _load_api_key() -> str | None:
    """Load the market-data API key from secrets."""
    try:
        return Path(API_KEY_FILE).read_text().strip()
    except Exception:
        logger.warning("Market data API key not found at %s", API_KEY_FILE)
        return None


async def get_token_price(token_name: str) -> dict | None:
    """Fetch live market data for a token.

    Returns dict with price, market_cap, volume, etc. or None on failure.
    Caches results for CACHE_TTL seconds.
    """
    token_upper = token_name.upper().strip("$")

    # Check cache
    cached = _cache.get(token_upper)
    if cached and time.time() - cached["timestamp"] < CACHE_TTL:
        return cached["data"]

    key = _load_api_key()
    if not key:
        return None

    try:
        async with aiohttp.ClientSession() as session:
            async with session.post(
                API_URL,
                headers={
                    "X-Internal-Key": key,
                    "Content-Type": "application/json",
                },
                json={"token": token_upper},
                timeout=aiohttp.ClientTimeout(total=10),
            ) as resp:
                if resp.status >= 400:
                    logger.warning("Market data API %s → %d", token_upper, resp.status)
                    return None
                data = await resp.json()

        # Cache the result
        _cache[token_upper] = {
            "data": data,
            "timestamp": time.time(),
        }
        return data
    except Exception as e:
        logger.warning("Market data API error for %s: %s", token_upper, e)
        return None


def format_price_context(data: dict, token_name: str) -> str:
    """Format market data into a concise string for the LLM prompt."""
    if not data:
        return ""

    # API returns a "result" text field with pre-formatted data
    result_text = data.get("result", "")
    if result_text:
        return result_text

    # Fallback for structured JSON responses
    parts = [f"Live market data for {token_name}:"]

    price = data.get("price") or data.get("current_price")
    if price:
        parts.append(f"Price: ${price}")

    mcap = data.get("market_cap") or data.get("marketCap")
    if mcap:
        if isinstance(mcap, (int, float)) and mcap > 1_000_000:
            parts.append(f"Market cap: ${mcap/1_000_000:.1f}M")
        else:
            parts.append(f"Market cap: {mcap}")

    volume = data.get("volume") or data.get("volume_24h")
    if volume:
        parts.append(f"24h volume: ${volume}")

    change = data.get("price_change_24h") or data.get("change_24h")
    if change:
        parts.append(f"24h change: {change}")

    return " | ".join(parts) if len(parts) > 1 else ""
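
A hypothetical usage sketch from the bot side (the module name market_data and the token symbol are assumptions, not from the source):

import asyncio
from market_data import get_token_price, format_price_context  # assumed module name

async def main():
    data = await get_token_price("$FUTAI")  # hypothetical token
    context = format_price_context(data, "FUTAI") if data else ""
    print(context or "no market data available")

asyncio.run(main())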