---
type: musing
agent: rio
title: "Pipeline scaling architecture: queueing theory, backpressure, and optimal worker provisioning"
status: developing
created: 2026-03-12
updated: 2026-03-12
tags: [pipeline-architecture, operations-research, queueing-theory, mechanism-design, infrastructure]
---
# Pipeline Scaling Architecture: What Operations Research Tells Us
Research musing for Leo and Cory on how to optimally architect our three-stage pipeline (research → extract → eval) for variable-load scaling. Six disciplines investigated, each mapped to our specific system.
## Our System Parameters
Before diving into theory, let me nail down the numbers:
- **Arrival pattern**: Highly bursty. Research sessions dump 10-20 sources at once. Futardio launches come in bursts of 20+. Quiet periods produce 0-2 sources/day.
- **Extract stage**: 6 max workers, ~10-15 min per source (Claude compute). Dispatches every 5 min via cron.
- **Eval stage**: 5 max workers, ~5-15 min per PR (Claude compute). Dispatches every 5 min via cron.
- **Current architecture**: Fixed cron intervals, fixed worker caps, no backpressure, no priority queuing beyond basic triage (infra PRs first, then re-review, then fresh).
- **Cost model**: Workers are Claude Code sessions — expensive. Each idle worker costs nothing, but each active worker-minute is real money.
- **Queue sizes**: ~225 unprocessed sources, ~400 claims in KB.
---
## 1. Operations Research / Queueing Theory
### How it maps to our pipeline
Our pipeline is a **tandem queue**: three stages in series, each with multiple servers (with Poisson arrivals and exponential service times it would be a Jackson network; ours are more general). In queueing notation:
- **Extract stage**: M[t]/G/6 queue — time-varying arrivals (non-Poisson), general service times (extraction complexity varies), 6 servers
- **Eval stage**: M[t]/G/5 queue — arrivals are departures from extract (so correlated), general service times, 5 servers
The classic M/M/c model gives us closed-form results for steady-state behavior:
**Little's Law** (L = λW) is the foundation. If the average arrival rate is λ = 8 sources per 5-min cycle ≈ 0.027/sec, and the average extraction time is W = 750 sec (12.5 min), then the average number of sources in the extract system is L = 0.027 × 750 ≈ 20. With 6 workers, the offered load per server is ρ = 20/6 ≈ 3.3 — far above the stability threshold of ρ = 1, meaning we'd need ~20 workers to keep up at this arrival rate. **Our current MAX_WORKERS=6 for extraction is significantly undersized during burst periods.**
But bursts are temporary. During quiet periods, λ drops to near zero. The question isn't "how many workers for peak?" but "how do we adaptively size for current load?"
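To make the stability check concrete, here's a quick sanity check of those numbers (a sketch; the constants are the estimates above, the variable names are mine):

```python
# Hypothetical parameters taken from the measurements above.
ARRIVAL_RATE = 8 / (5 * 60)   # sources/sec during a burst cycle (8 per 5 min)
SERVICE_TIME = 750            # sec per extraction (~12.5 min)
WORKERS = 6

# Little's Law: average number in system L = lambda * W.
L = ARRIVAL_RATE * SERVICE_TIME
# Offered load per server; anything above 1 means the queue grows without bound.
rho = L / WORKERS

print(f"L = {L:.1f} sources in system, rho = {rho:.2f}")
```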
### Key insight: Square-root staffing
The **Halfin-Whitt regime** gives the answer: optimal workers = R + β√R, where R is the base load (λ/μ, arrival rate / service rate) and β ≈ 1-2 is a quality-of-service parameter.
For our system during a burst (λ = 20 sources in 5 min):
- R = 20 × (12.5 min / 5 min) = 50 source-slots needed → clearly impossible with 6 workers
- During burst: queue builds rapidly, workers drain it over subsequent cycles
- During quiet: R ≈ 0, workers = 0 + β√0 = 0 → don't spawn workers
The square-root staffing rule says: **don't size for peak. Size for current load plus a safety margin proportional to √(current load).** This is fundamentally different from our current fixed-cap approach.
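A minimal sketch of square-root staffing as a provisioning function (the function name, β default, and hard cap are my choices for illustration, not settled parameters):

```python
import math

def staffed_workers(arrival_rate: float, service_rate: float,
                    beta: float = 1.5, hard_max: int = 10) -> int:
    """Halfin-Whitt square-root staffing: c = R + beta * sqrt(R)."""
    R = arrival_rate / service_rate  # offered load: base number of busy servers
    if R == 0:
        return 0                     # scale to zero when idle
    return min(math.ceil(R + beta * math.sqrt(R)), hard_max)

print(staffed_workers(0.0, 1 / 750))      # 0: quiet period, no workers spawned
print(staffed_workers(2 / 300, 1 / 750))  # 9: R = 5, 5 + 1.5*sqrt(5) rounds up to 9
```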
### What to implement
**Phase 1 (now)**: Calculate ρ = queue_depth / (MAX_WORKERS × expected_service_time_in_cycles). If ρ > 1, system is overloaded — scale up or implement backpressure. Log this metric.
**Phase 2 (soon)**: Replace fixed MAX_WORKERS with dynamic: workers = min(ceil(queue_depth / sources_per_worker_per_cycle) + ceil(√(queue_depth)), HARD_MAX). This implements square-root staffing.
→ SOURCE: Bournassenko 2025, "On Queueing Theory for Large-Scale CI/CD Pipelines"
→ SOURCE: Whitt 2019, "What You Should Know About Queueing Models"
→ SOURCE: van Leeuwaarden et al. 2018, "Economies-of-Scale in Many-Server Queueing Systems" (SIAM Review)
---
## 2. Stochastic Modeling for Non-Stationary Arrivals
### How it maps to our pipeline
Our arrival process is a textbook **Markov-Modulated Poisson Process (MMPP)**. There's a hidden state governing the arrival rate:
| Hidden State | Arrival Rate | Duration |
|-------------|-------------|----------|
| Research session active | 10-20 sources/hour | 1-3 hours |
| Futardio launch burst | 20+ sources/dump | Minutes |
| Normal monitoring | 2-5 sources/day | Hours to days |
| Quiet period | 0-1 sources/day | Days |
The key finding from the literature: **replacing a time-varying arrival rate with a constant (average or max) leads to systems being badly understaffed or overstaffed.** This is exactly our problem. MAX_WORKERS=6 is undersized for bursts and oversized for quiet periods.
### The peakedness parameter
The **variance-to-mean ratio** (called "peakedness" or "dispersion ratio") of the arrival process determines how much extra capacity you need beyond standard queueing formulas:
- Peakedness = 1: Poisson process (standard formulas work)
- Peakedness > 1: Overdispersed/bursty (need MORE capacity than standard)
- Peakedness < 1: Underdispersed/smooth (need LESS capacity)
Our pipeline has peakedness >> 1 (highly bursty). The modified staffing formula adjusts the square-root safety margin by the peakedness factor. For bursty arrivals, the safety margin should be √(peakedness) × β√R instead of just β√R.
### Practical estimation
We can estimate peakedness empirically from our logs:
1. Count sources arriving per hour over the last 30 days
2. Calculate mean and variance of hourly arrival counts
3. Peakedness = variance / mean
If peakedness ≈ 5 (plausible given our burst pattern), we need √5 ≈ 2.2× the safety margin that standard Poisson models suggest.
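The estimation recipe above is only a few lines (the counts here are invented to mimic our burst pattern; only the variance-to-mean ratio matters):

```python
from statistics import mean, variance

def peakedness(hourly_counts: list[int]) -> float:
    """Variance-to-mean ratio of arrival counts; 1.0 for a Poisson process."""
    m = mean(hourly_counts)
    return variance(hourly_counts) / m if m > 0 else 0.0

# Illustrative counts: long quiet stretches plus one research-session burst.
counts = [0, 0, 1, 0, 0, 14, 12, 0, 1, 0, 0, 2]
p = peakedness(counts)
print(f"peakedness = {p:.1f}")  # well above 1: inflate the safety margin by sqrt(p)
```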
### What to implement
**Phase 1**: Instrument arrival patterns. Log source arrivals per hour with timestamps. After 2 weeks, calculate peakedness.
**Phase 2**: Use the peakedness-adjusted staffing formula for worker provisioning. Different time windows may have different peakedness — weekdays vs. weekends, research-session hours vs. off-hours.
→ SOURCE: Whitt et al. 2016, "Staffing a Service System with Non-Poisson Non-Stationary Arrivals"
→ SOURCE: Liu et al. 2019, "Modeling and Simulation of Nonstationary Non-Poisson Arrival Processes" (CIATA method)
→ SOURCE: Simio/WinterSim 2018, "Resource Scheduling in Non-Stationary Service Systems"
---
## 3. Combinatorial Optimization / Scheduling
### How it maps to our pipeline
Our pipeline is a **hybrid flow-shop**: three stages (research → extract → eval), multiple workers at each stage, all sources flow through the same stage sequence. This is important because:
- **Not a job-shop** (jobs don't have different stage orderings)
- **Not a simple flow-shop** (we have parallel workers within each stage)
- **Hybrid flow-shop with parallel machines per stage** — well-studied in OR literature
The key question: given heterogeneous sources (varying complexity, different domains, different agents), how do we assign sources to workers optimally?
### Surprising finding: simple dispatching rules work
For hybrid flow-shops with relatively few stages and homogeneous workers within each stage, **simple priority dispatching rules perform within 5-10% of optimal**. The NP-hardness of general JSSP is not relevant to our case because:
1. Our stages are fixed-order (not arbitrary routing)
2. Workers within a stage are roughly homogeneous (all Claude sessions)
3. We have few stages (3) and few workers (5-6 per stage)
4. We already have a natural priority ordering (infra > re-review > fresh)
The best simple rules for our setting:
- **Shortest Processing Time (SPT)**: Process shorter sources first — reduces average wait time
- **Priority + FIFO**: Within priority classes, process in arrival order
- **Weighted Shortest Job First (WSJF)**: Priority weight / estimated processing time — maximizes value delivery rate
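A sketch of WSJF dispatch, assuming hypothetical priority weights and size estimates (none of these fields exist in our pipeline yet):

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    priority: int        # higher = more important (infra=3, re-review=2, fresh=1)
    est_minutes: float   # rough size estimate (tweet ~5, whitepaper ~60)

def wsjf_order(queue: list[Source]) -> list[Source]:
    """Weighted Shortest Job First: highest priority-per-minute goes first."""
    return sorted(queue, key=lambda s: s.priority / s.est_minutes, reverse=True)

queue = [
    Source("whitepaper", priority=1, est_minutes=60),
    Source("infra-fix", priority=3, est_minutes=10),
    Source("tweet", priority=1, est_minutes=5),
]
print([s.name for s in wsjf_order(queue)])  # infra-fix, tweet, whitepaper
```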
### What we should NOT do
Invest in metaheuristic scheduling algorithms (genetic algorithms, simulated annealing, tabu search). These are powerful for large-scale JSSP instances (100+ jobs, 20+ machines) but complete overkill for our scale. The gap between optimal and simple-dispatching is tiny at our size.
### What to implement
**Phase 1 (now)**: Implement source complexity estimation. Short sources (tweets, brief articles) should be processed before long ones (whitepapers, multi-thread analyses). This is SPT — proven optimal for minimizing average flow time.
**Phase 2 (later)**: If we add domain-specific workers (e.g., Rio only processes internet-finance sources), the problem becomes a flexible flow-shop. Even then, simple "assign to least-loaded eligible worker" rules perform well.
→ SOURCE: ScienceDirect 2023, "The Flexible Job Shop Scheduling Problem: A Review"
---
## 4. Adaptive / Elastic Scaling
### How it maps to our pipeline
Cloud-native autoscaling patterns solve exactly our problem: scaling workers up/down based on observed demand, without full cloud infrastructure. The key patterns:
**Queue-depth-based scaling (KEDA pattern)**:
```
desired_workers = ceil(queue_depth / target_items_per_worker)
```
Where `target_items_per_worker` is calibrated to keep workers busy but not overloaded. KEDA adds scale-to-zero: if queue_depth = 0, workers = 0.
**Multi-metric scaling**: Evaluate multiple signals simultaneously, scale to whichever requires the most workers:
```
workers = max(
    ceil(unprocessed_sources / sources_per_worker),
    ceil(open_prs / prs_per_eval_worker),
    MIN_WORKERS
)
```
**Cooldown periods**: After scaling up, don't immediately scale down — wait for a cooldown period. Prevents oscillation when load is choppy. Kubernetes HPA uses 5-minute stabilization windows.
### Adapting for our cron-based system
We don't have Kubernetes, but we can implement the same logic in bash:
```bash
# In extract-cron.sh, replace fixed MAX_WORKERS:
QUEUE_DEPTH=$(grep -rl "^status: unprocessed" inbox/archive/ | wc -l)
EVAL_BACKLOG=$(curl -sf "$FORGEJO_URL/api/v1/.../pulls?state=open" | jq 'length')

# Scale extraction workers based on queue depth (~3 sources per worker)
DESIRED_EXTRACT=$(( (QUEUE_DEPTH + 2) / 3 ))

# Apply backpressure from eval: if eval is backlogged, slow extraction
if [ "$EVAL_BACKLOG" -gt 10 ]; then
    DESIRED_EXTRACT=$(( DESIRED_EXTRACT / 2 ))
fi

# Bound between min and max
WORKERS=$(( DESIRED_EXTRACT < 1 ? 1 : DESIRED_EXTRACT ))
WORKERS=$(( WORKERS > HARD_MAX ? HARD_MAX : WORKERS ))
```
### Counterintuitive finding: scale-to-zero saves more than scale-to-peak
In our cost model (expensive per worker-minute, zero cost for idle), the biggest savings come not from optimizing peak performance but from **not running workers when there's nothing to do**. Our current system already checks for unprocessed sources before dispatching — good. But it still runs the dispatcher every 5 minutes even when the queue has been empty for hours. A longer polling interval during quiet periods would save dispatcher overhead.
### What to implement
**Phase 1 (now)**: Replace fixed MAX_WORKERS with queue-depth-based formula. Add eval backpressure check to extract dispatcher.
**Phase 2 (soon)**: Add cooldown/hysteresis — different thresholds for scaling up vs. down.
**Phase 3 (later)**: Adaptive polling interval — faster polling when queue is active, slower when quiet.
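Phase 2's hysteresis could look like the following sketch (the 8/3 thresholds match the numbers proposed in the synthesis; the function and parameter names are mine):

```python
def desired_workers(queue_depth: int, current_workers: int,
                    scale_up_at: int = 8, scale_down_at: int = 3,
                    hard_max: int = 6) -> int:
    """Hysteresis: different up/down thresholds prevent oscillation."""
    if queue_depth > scale_up_at:
        return min(current_workers + 1, hard_max)
    if queue_depth < scale_down_at:
        return max(current_workers - 1, 0)
    return current_workers  # dead band between thresholds: hold steady

print(desired_workers(12, 3))  # 4: queue deep, scale up
print(desired_workers(5, 3))   # 3: inside the dead band, no change
print(desired_workers(1, 3))   # 2: queue drained, scale down
```

The dead band between the two thresholds is what absorbs choppy load; a single shared threshold would flap a worker up and down every cycle.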
→ SOURCE: OneUptime 2026, "How to Implement HPA with Object Metrics for Queue-Based Scaling"
→ SOURCE: KEDA documentation, keda.sh
---
## 5. Backpressure & Flow Control
### How it maps to our pipeline
This is the most critical gap in our current architecture. **We have zero backpressure.** The three stages are decoupled with no feedback:
```
Research → [queue] → Extract → [queue] → Eval → [merge]
```
If research dumps 20 sources, extraction will happily create 20 PRs, and eval will struggle with a PR backlog. There's no signal from eval to extract saying "slow down, I'm drowning." This is the classic producer-consumer problem.
### The TCP analogy
TCP congestion control solves exactly this: a producer (sender) must match rate to consumer (receiver) capacity, with the network as an intermediary that can drop packets (data loss) if overloaded. The solution: **feedback-driven rate adjustment**.
In our pipeline:
- **Producer**: Extract (creates PRs)
- **Consumer**: Eval (reviews PRs)
- **Congestion signal**: Open PR count growing
- **Data loss equivalent**: Eval quality degrading under load (rushed reviews)
### Four backpressure strategies
1. **Buffer + threshold**: Allow some PR accumulation (buffer), but when open PRs exceed threshold, extract slows down. Simple, robust, our best first step.
2. **Rate matching**: Extract dispatches at most as many sources as eval processed in the previous cycle. Keeps the pipeline balanced but can under-utilize extract during catch-up periods.
3. **AIMD (Additive Increase Multiplicative Decrease)**: When eval queue is shrinking, increase extraction rate by 1 worker. When eval queue is growing, halve extraction workers. Proven stable, converges to optimal throughput. **This is the TCP approach and it's elegant for our setting.**
4. **Pull-based**: Eval "pulls" work from a staging area instead of extract "pushing" PRs. Requires architectural change but guarantees eval is never overloaded. Kafka uses this pattern (consumers pull at their own pace).
### The AIMD insight is gold
AIMD is provably optimal for fair allocation of shared resources without centralized control (Corless et al. 2016). It's mathematically guaranteed to converge regardless of the number of agents or parameter values. For our pipeline:
```
Each cycle:
    if eval_queue_depth < eval_queue_depth_last_cycle:
        # Queue shrinking — additive increase
        extract_workers = min(extract_workers + 1, HARD_MAX)
    else:
        # Queue growing or stable — multiplicative decrease
        extract_workers = max(extract_workers / 2, 1)
```
This requires zero modeling, zero parameter estimation, zero prediction. It just reacts to observed system state and is proven to converge to the optimal throughput that eval can sustain.
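The same loop as runnable code, driven by a simulated eval-queue trace (a sketch; integer halving stands in for the multiplicative decrease):

```python
def aimd_step(workers: int, queue_now: int, queue_prev: int,
              hard_max: int = 6) -> int:
    """One AIMD cycle: additive increase when the eval queue shrinks,
    multiplicative decrease when it grows or holds steady."""
    if queue_now < queue_prev:
        return min(workers + 1, hard_max)  # additive increase
    return max(workers // 2, 1)            # multiplicative decrease

# Simulated eval queue depths across cycles: draining, then a burst.
depths = [20, 15, 11, 14, 18, 12]
workers, trace = 3, []
for prev, now in zip(depths, depths[1:]):
    workers = aimd_step(workers, now, prev)
    trace.append(workers)
print(trace)  # ramps up while draining, backs off sharply during the burst
```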
### What to implement
**Phase 1 (now, highest priority)**: Add backpressure check to extract-cron.sh. Before dispatching extraction workers, check open PR count. If open PRs > 15, reduce extraction parallelism by half. If open PRs > 25, skip this extraction cycle entirely.
**Phase 2 (soon)**: Implement AIMD scaling for extraction workers based on eval queue trend.
**Phase 3 (later)**: Consider pull-based architecture where eval signals readiness for more work.
→ SOURCE: Vlahakis et al. 2021, "AIMD Scheduling and Resource Allocation in Distributed Computing Systems"
→ SOURCE: Corless et al. 2016, "AIMD Dynamics and Distributed Resource Allocation" (SIAM)
→ SOURCE: Dagster, "What Is Backpressure"
→ SOURCE: Java Code Geeks 2025, "Reactive Programming Paradigms: Mastering Backpressure and Stream Processing"
---
## 6. Markov Decision Processes
### How it maps to our pipeline
MDP formulates our scaling decision as a sequential optimization problem:
**State space**: S = (unprocessed_queue, in_flight_extractions, open_prs, active_extract_workers, active_eval_workers, time_of_day)
**Action space**: A = {add_extract_worker, remove_extract_worker, add_eval_worker, remove_eval_worker, wait}
**Transition model**: Queue depths change based on arrival rates (time-dependent) and service completions (stochastic).
**Cost function**: C(s, a) = worker_cost × active_workers + delay_cost × queue_depth
**Objective**: Find policy π: S → A that minimizes expected total discounted cost.
### Key findings
1. **Optimal policies have threshold structure** (Li et al. 2019 survey): The optimal MDP policy is almost always "if queue > X and workers < Y, spawn a worker." This means even without solving the full MDP, a well-tuned threshold policy is near-optimal.
2. **Hysteresis is optimal** (Tournaire et al. 2021): The optimal policy has different thresholds for scaling up vs. scaling down. Scale up at queue=10, scale down at queue=3 (not the same threshold). This prevents oscillation — exactly what AIMD achieves heuristically.
3. **Our state space is tractable**: With ~10 discrete queue levels × 6 worker levels × 5 eval worker levels × 4 time-of-day buckets = ~1,200 states. This is tiny for MDP — value iteration converges in seconds. We could solve for the exact optimal policy.
4. **MDP outperforms heuristics but not by much**: Tournaire et al. found that structured MDP algorithms outperform simple threshold heuristics, but the gap is modest (5-15% cost reduction). For our scale, a good threshold policy captures most of the value.
### The honest assessment
Solving the full MDP is theoretically clean but practically unnecessary at our scale. The MDP's main value is confirming that threshold policies with hysteresis are near-optimal — which validates implementing AIMD + backpressure thresholds as Phase 1 and not worrying about exact optimization until the system is much larger.
### What to implement
**Phase 1**: Don't solve the MDP. Implement threshold policies with hysteresis (different up/down thresholds) informed by MDP theory.
**Phase 2 (only if system grows significantly)**: Formulate and solve the MDP using value iteration. Use historical arrival/service data to parameterize the transition model. The optimal policy becomes a lookup table: given current state, take this action.
→ SOURCE: Tournaire et al. 2021, "Optimal Control Policies for Resource Allocation in the Cloud: MDP vs Heuristic Approaches"
→ SOURCE: Li et al. 2019, "An Overview for Markov Decision Processes in Queues and Networks"
---
## Synthesis: The Implementation Roadmap
### The core diagnosis
Our pipeline's architecture has three problems, in order of severity:
1. **No backpressure** — extraction can overwhelm evaluation with no feedback signal
2. **Fixed worker counts** — static MAX_WORKERS ignores queue state entirely
3. **No arrival modeling** — we treat all loads the same regardless of burst patterns
### Phase 1: Backpressure + Dynamic Scaling (implement now)
This captures 80% of the improvement with minimal complexity:
1. **Add eval backpressure to extract-cron.sh**: Check open PR count before dispatching. If backlogged, reduce extraction parallelism.
2. **Replace fixed MAX_WORKERS with queue-depth formula**: `workers = min(ceil(queue_depth / 3) + 1, HARD_MAX)`
3. **Add hysteresis**: Scale up when queue > 8, scale down when queue < 3. Different thresholds prevent oscillation.
4. **Instrument everything**: Log queue depths, worker counts, cycle times, utilization rates.
### Phase 2: AIMD Scaling (implement within 2 weeks)
Replace fixed formulas with adaptive AIMD:
1. Track eval queue trend (growing vs. shrinking) across cycles
2. Growing queue → multiplicative decrease of extraction rate
3. Shrinking queue → additive increase of extraction rate
4. This self-tunes without requiring parameter estimation
### Phase 3: Arrival Modeling + Optimization (implement within 1 month)
With 2+ weeks of instrumented data:
1. Calculate peakedness of arrival process
2. Apply peakedness-adjusted square-root staffing for worker provisioning
3. If warranted, formulate and solve the MDP for exact optimal policy
4. Implement adaptive polling intervals (faster when active, slower when quiet)
### Surprising findings
1. **Simple dispatching rules are near-optimal at our scale.** The combinatorial optimization literature says: for a hybrid flow-shop with <10 machines per stage, SPT/FIFO within priority classes is within 5-10% of optimal. Don't build a scheduler; build a good priority queue.
2. **AIMD is the single most valuable algorithm to implement.** It's proven stable, requires no modeling, and handles the backpressure + scaling problems simultaneously. TCP solved this exact problem 40 years ago.
3. **The MDP confirms we don't need the MDP.** The optimal policy is threshold-based with hysteresis — exactly what AIMD + backpressure thresholds give us. The MDP's value is validation, not computation.
4. **The square-root staffing rule means diminishing returns on workers.** Adding a 7th worker to a 6-worker system helps less than adding the 2nd worker to a 1-worker system. At our scale, the marginal worker is still valuable, but there's a real ceiling around 8-10 extraction workers and 6-8 eval workers beyond which additional workers waste money.
5. **Our biggest waste isn't too few workers — it's running workers against an empty queue.** The extract-cron runs every 5 minutes regardless of queue state. If the queue has been empty for 6 hours, that's 72 unnecessary dispatcher invocations. Adaptive polling (or event-driven triggering) would eliminate this overhead.
6. **The pipeline's binding constraint is eval, not extract.** Extract produces work faster than eval consumes it (6 extract workers × ~8 sources/cycle vs. 5 eval workers × ~5 PRs/cycle). Without backpressure, this imbalance causes PR accumulation. The right fix is rate-matching extraction to evaluation throughput, not speeding up extraction.
→ CLAIM CANDIDATE: "Backpressure is the highest-leverage architectural improvement for multi-stage pipelines because it prevents the most common failure mode (producer overwhelming consumer) with minimal implementation complexity"
→ CLAIM CANDIDATE: "AIMD provides near-optimal resource allocation for variable-load pipelines without requiring arrival modeling or parameter estimation because its convergence properties are independent of system parameters"
→ CLAIM CANDIDATE: "Simple priority dispatching rules perform within 5-10% of optimal for hybrid flow-shop scheduling at moderate scale because the combinatorial explosion that makes JSSP NP-hard only matters at large scale"
→ FLAG @leo: The mechanism design parallel is striking — backpressure in pipelines is structurally identical to price signals in markets. Both are feedback mechanisms that prevent producers from oversupplying when consumers can't absorb. AIMD in particular mirrors futarchy's self-correcting property: the system converges to optimal throughput through local feedback, not central planning.
→ FLAG @theseus: MDP formulation of pipeline scaling connects to AI agent resource allocation. If agents are managing their own compute budgets, AIMD provides a decentralized mechanism for fair sharing without requiring a central coordinator.

---
status: seed
type: musing
stage: developing
created: 2026-03-12
last_updated: 2026-03-12
tags: [glp-1, value-based-care, medicare-advantage, drug-economics, prevention-economics, research-session]
---
# Research Session: GLP-1 Agonists and Value-Based Care Economics
## Research Question
**How are GLP-1 agonists interacting with value-based care economics — do cardiovascular and organ-protective benefits create net savings under capitation, or is the chronic use model inflationary even when plans bear full risk?**
## Why This Question
**Priority justification:** This follows the gap flagged in the March 10 session ("GLP-1 interaction with MA economics") and directly tests the attractor state thesis. If the most important new drug class is inflationary even under capitated models, the "prevention-first system that profits from health" faces a serious complication.
**Connections to existing KB:**
- Existing claim rates GLP-1 net cost impact as "inflationary through 2035" — but this was written from a system-wide perspective, not from the capitated plan perspective where downstream savings accrue to the same entity bearing drug costs
- MA economics research from March 10 showed MA is VBC in form but misaligned in practice — how does GLP-1 prescribing behavior differ under genuine full risk vs. coding-arbitrage MA?
- The attractor state thesis depends on prevention being economically viable under aligned payment — GLP-1s are the largest test case
**What would change my mind:**
- If capitated plans are actively embracing GLP-1s AND showing improved MLR, that strengthens the attractor state thesis
- If even capitated plans are restricting GLP-1 access due to cost, that complicates the "aligned incentives → better outcomes" story
- If cardiovascular/organ-protective benefits are large enough to offset drug costs within 3-5 years under capitation, the "inflationary through 2035" claim needs updating
## What I Found
### The Core Finding: GLP-1 Economics Are Payment-Model-Dependent
The existing KB claim ("inflationary through 2035") is correct at system level but misleading at payer level. The answer to whether GLP-1s are inflationary depends on WHO is paying and OVER WHAT TIME HORIZON:
**System-level:** Inflationary. CBO projects $35B additional federal spending over 2026-2034. Volume growth outpaces price compression. This is what the existing claim captures.
**Risk-bearing payer level:** Potentially cost-saving. Value in Health modeling shows Medicare net savings of $715M over 10 years when multi-indication benefits are counted. Aon employer data shows medical cost growth reverses after 12 months of sustained use. The SELECT trial exploratory analysis shows 10% reduction in ALL-CAUSE hospitalizations — the single largest cost driver.
**The temporal dimension is key:** Aon data shows costs go UP 23% in year 1 (drug costs dominate), then grow only 2% vs. 6% for non-users after 12 months. Short-term payers see only costs; long-term risk-bearers capture savings. This directly maps to the VBC payment model question.
### Five Key Tracks
**Track 1: Multi-Organ Protection (Beyond Weight Loss)**
GLP-1s are no longer just weight loss drugs. Three major organ-protection trials:
- SELECT: 20% CV event reduction, 10% fewer all-cause hospitalizations, 11% fewer hospital days
- FLOW: 24% reduction in major kidney events, 29% reduction in CV death, slowed eGFR decline by 1.16 mL/min/year (delays dialysis at $90K+/year)
- MASH Phase 3: 62.9% resolution of steatohepatitis vs. 34.3% placebo
Plus unexpected signals: Aon reports 50% lower ovarian cancer incidence and 14% lower breast cancer in female users (preliminary but striking).
The multi-organ protection reframes GLP-1s from "weight management drug" to "metabolic disease prevention platform." The cost-benefit calculation changes dramatically when you add kidney protection ($2,074/subject avoided CKD), liver protection ($28M MASH savings in Medicare), and cancer risk reduction on top of CV benefits.
CLAIM CANDIDATE: GLP-1 agonists protect at least three major organ systems (cardiovascular, renal, hepatic) through mechanisms partially independent of weight loss, making them the first drug class to address metabolic syndrome as a unified disease rather than treating its components separately.
**Track 2: Adherence — The Binding Constraint**
The economics only work if patients STAY ON the drug. They mostly don't:
- Non-diabetic obesity: 32.3% persistent at 1 year, ~15% at 2 years
- Diabetic: 53.5% at 1 year, ~30% at 2 years
- Weight regain after stopping: average 9.69 kg, all weight lost reversed after 1.7 years
This creates a paradox: chronic use makes GLP-1s expensive, but discontinuation eliminates the downstream savings that justify the cost. The economics only work if adherence is sustained AND the payer captures downstream savings.
At $245/month (Medicare deal), 12 months of GLP-1 therapy costs $2,940 per patient. If 64.8% discontinue and regain weight (eliminating downstream benefits), the plan loses $2,940 × 0.648 = ~$1,905 per enrolled patient on non-responders. The adherent 35.2% must generate enough savings to cover both their own drug costs AND the sunk costs of non-completers.
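The break-even arithmetic above generalizes to a small calculation (a sketch using the paragraph's own numbers and its simplifying assumption that non-completers incur a full year of drug cost with no offsetting savings):

```python
MONTHLY_COST = 245                     # Medicare-negotiated $/month
ANNUAL_DRUG_COST = MONTHLY_COST * 12   # $2,940 per patient-year
DISCONTINUE_RATE = 0.648               # 1-year discontinuation, non-diabetic obesity

# Expected sunk drug cost per enrolled patient attributable to non-completers.
sunk_per_enrollee = ANNUAL_DRUG_COST * DISCONTINUE_RATE
# Savings each adherent patient must generate to break even: their own drug
# cost plus their share of the non-completers' sunk costs.
breakeven_per_adherent = ANNUAL_DRUG_COST + sunk_per_enrollee / (1 - DISCONTINUE_RATE)

print(f"sunk cost per enrollee:          ${sunk_per_enrollee:,.0f}")
print(f"break-even savings per adherent: ${breakeven_per_adherent:,.0f}")
```

At roughly $8,350 of required downstream savings per adherent patient-year, the per-subject figures elsewhere in this session (e.g. $14,431 for avoided T2D) show why diabetes prevention, not CV events, is the lever that can clear the bar.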
CLAIM CANDIDATE: GLP-1 cost-effectiveness under capitation requires solving the adherence paradox — the drugs are only cost-saving for sustained users, but two-thirds of patients discontinue within a year, creating sunk drug costs with no downstream benefit offset.
**Track 3: MA Plans Are Restricting, Not Embracing**
Near-universal prior authorization for GLP-1s under MA (up from <5% in 2020-2023 to ~100% by 2025). This is MA plans actively managing short-term costs, NOT embracing prevention.
This directly contradicts the simple version of the attractor state thesis: "align incentives and prevention follows." MA plans ARE theoretically incentivized to prevent costly downstream events. But they still restrict GLP-1 access because:
1. Short-term budget pressure overrides long-term savings expectations
2. Adherence uncertainty means most patients won't generate savings
3. Member turnover means plans may not capture downstream benefits
4. The VBC is in form only — coding arbitrage dominates actual strategy (March 10 finding)
CLAIM CANDIDATE: Medicare Advantage plans' near-universal prior authorization for GLP-1s demonstrates that capitation alone does not align incentives for prevention — short-term cost management, adherence uncertainty, and member turnover create structural resistance to preventive drug coverage even under full risk.
**Track 4: Policy Is Moving Faster Than Expected**
Three converging policy developments are reshaping the landscape:
1. **Trump/Novo/Lilly deals:** $245/month for Medicare ($50 OOP), $350 general (TrumpRx). ~82% below list price.
2. **CMS BALANCE Model:** First federal payment model explicitly designed to test GLP-1 + VBC interaction. Requires lifestyle interventions alongside medication. Adjusts capitation rates for obesity. Launches May 2026 (Medicaid), January 2027 (Part D).
3. **International generics:** Canada patents expired January 2026. China has 17+ generics in Phase 3. Prices could reach $40-50/month internationally by 2028.
The price trajectory is the single most important variable. At $245/month, cost-effectiveness depends on adherence and downstream savings. At $50/month (international generic prices), GLP-1s are unambiguously cost-effective under ANY payment model. The question is how fast prices converge.
**Track 5: Counter-Evidence — Sarcopenia Risk**
The strongest safety argument against broad GLP-1 deployment in the Medicare population:
- 15-40% of weight lost is lean body mass (muscle, not fat)
- Elderly adults already lose 12-16% of muscle mass with aging
- Weight cycling (start GLP-1 → lose muscle → stop → regain fat but NOT muscle → worse body composition) is the most common outcome given 64.8% discontinuation
- Sarcopenic obesity (high fat + low muscle) affects 10-20% of older adults and increases falls, fractures, disability
This is genuinely concerning: the same drug that prevents CV events may cause sarcopenic disability. For the Medicare population specifically, the net health effect is ambiguous until the sarcopenia risk is better quantified.
### Population-Level Signal
US obesity prevalence declined from 39.9% (2022) to 37.0% (2025) — first population-level decline in recent years. If causally attributable to GLP-1s, this is the largest pharmaceutical impact on a population health metric since vaccines. But the equity concern is real: GLP-1 access skews wealthy/insured.
## Key Surprises
1. **CBO vs. ASPE divergence is enormous.** CBO says $35B additional cost; ASPE says $715M net savings. Both are technically correct but answer different questions. Budget scoring structurally disadvantages prevention.
2. **Diabetes prevention is the largest economic lever, not cardiovascular.** Per-subject savings from avoided T2D ($14,431) dwarf avoided CV events ($1,512), even in a CV outcomes trial.
3. **MA plans are restricting, not embracing.** Near-universal PA for GLP-1s means capitation alone doesn't create prevention incentives. This challenges the simple attractor state thesis.
4. **The temporal cost curve is the key insight.** Costs up 23% in year 1, then slow to 2% growth vs. 6% for non-users. Payment model structure determines whether you see the costs or the savings.
5. **50% ovarian cancer reduction in female GLP-1 users.** If confirmed, this is an entirely new dimension of benefit not captured in any current analysis.
6. **The BALANCE model combines medication + lifestyle.** CMS is explicitly testing whether the combination solves the adherence problem. This is a more sophisticated intervention than simple drug coverage.
## Belief Updates
**Belief 3 (structural misalignment): COMPLICATED.** The GLP-1 + VBC interaction reveals a subtler misalignment than I'd assumed. Capitation creates the THEORETICAL incentive for prevention, but short-term budget pressure, adherence uncertainty, and member turnover create PRACTICAL barriers. The attractor state may require not just payment alignment but also adherence solutions and long-term risk pools.
**Belief 4 (atoms-to-bits boundary): REINFORCED.** The GLP-1 story is partly an atoms-to-bits story — continuous monitoring (CGMs, wearables) could identify the right patients and track adherence, turning GLP-1 prescribing from population-level gambling into targeted, monitored intervention. The BALANCE model's lifestyle component could be delivered through the sensor stack + AI middleware.
**Existing GLP-1 claim needs scope qualification.** "Inflationary through 2035" is correct at system level but incomplete. The claim should be scoped: system-level inflationary, but potentially cost-saving under risk-bearing payment models for targeted high-risk populations with sustained adherence. The price trajectory (declining toward $50-100/month by 2030) may also move the inflection point earlier.
## Follow-up Directions
### Active Threads (continue next session)
- **GLP-1 adherence interventions under capitation:** What works to improve persistence? Does care coordination, lifestyle coaching, or CGM monitoring improve adherence rates? This is the bottleneck for the entire VBC cost-savings thesis. Look for: BALANCE model early results, Devoted Health or other purpose-built MA plans' GLP-1 protocols, digital health adherence interventions.
- **Sarcopenia quantification in Medicare GLP-1 users:** The muscle loss risk is theoretical but plausible. Look for: real-world outcomes data on fracture/fall rates in GLP-1 users >65, next-gen compounds claiming muscle preservation, any population-level sarcopenia signal in the Aon or FLOW datasets.
- **CBO scoring methodology and prevention bias:** The $35B vs. $715M divergence is a structural problem beyond GLP-1s. Look for: analyses of how CBO scoring systematically undervalues prevention, comparisons with other preventive interventions facing the same bias, proposals to reform scoring methodology.
### Dead Ends (don't re-run these)
- **Tweet monitoring this session:** All feeds empty. No content from @EricTopol, @KFF, @CDCgov, @WHO, @ABORAMADAN_MD, @StatNews. Don't rely on tweet feeds as primary source material.
- **Compounded semaglutide landscape:** Looked briefly — the compounding market is a legal/regulatory mess but doesn't connect meaningfully to the VBC economics question. Not worth pursuing further unless policy changes significantly.
### Branching Points (one finding opened multiple directions)
- **Aon cancer signal (50% ovarian cancer reduction):** Two directions: (A) pursue as a novel GLP-1 benefit claim that changes the multi-indication economics, or (B) wait for independent replication before building on observational data from an industry consultant. **Recommendation: B.** The signal is too preliminary and the observational design too prone to confounding (healthier/wealthier women may both use GLP-1s and have lower cancer rates). Flag for monitoring but don't extract claims yet.
- **BALANCE model as attractor state test:** Two directions: (A) analyze the model design now and extract claims about its structure, or (B) wait for early results (post-May 2026 Medicaid launch) to evaluate whether the combined medication + lifestyle approach actually works. **Recommendation: A for structure, B for outcomes.** The design itself (medication + lifestyle + payment adjustment) is an extractable claim. The outcomes data needs to wait.
SOURCE: 12 archives created across 5 tracks


@ -13,21 +13,3 @@
**Sources archived:** 18 across three tracks (8 Track 1, 5 Track 2, 5 Track 3)
**Extraction candidates:** 15-20 claims across MA economics, senior care infrastructure, and international benchmarks
## Session 2026-03-12 — GLP-1 Agonists and Value-Based Care Economics
**Question:** How are GLP-1 agonists interacting with value-based care economics — do cardiovascular and organ-protective benefits create net savings under capitation, or is the chronic use model inflationary even when plans bear full risk?
**Key finding:** GLP-1 economics are payment-model-dependent in a way the existing KB claim doesn't capture. System-level: inflationary (CBO: $35B additional spending). Risk-bearing payer level: potentially cost-saving (ASPE/Value in Health: $715M net savings over 10 years for Medicare). The temporal cost curve is the key insight — Aon data shows costs up 23% in year 1, then grow only 2% vs. 6% for non-users after 12 months. Short-term payers see costs; long-term risk-bearers capture savings. But MA plans are RESTRICTING access (near-universal PA), not embracing prevention — challenging the simple attractor state thesis that capitation → prevention.
**Pattern update:** This session deepens the March 10 pattern: MA is value-based in form but short-term-cost-managed in practice. The GLP-1 case is the strongest evidence yet — MA plans have theoretical incentive to cover GLP-1s (downstream savings) but restrict access (short-term cost avoidance). The attractor state thesis needs refinement: payment alignment is NECESSARY but NOT SUFFICIENT. You also need adherence solutions, long-term risk pools, and policy infrastructure (like the BALANCE model).
**Cross-session pattern emerging:** Two sessions now converge on the same observation — the gap between VBC theory (aligned incentives → better outcomes) and VBC practice (short-term cost management, coding arbitrage, access restriction). The attractor state is real but the transition path is harder than I'd assumed. The existing claim "value-based care transitions stall at the payment boundary" is confirmed but the stall is deeper than payment — it's also behavioral (adherence), institutional (MA business models), and methodological (CBO scoring bias against prevention).
**Confidence shift:**
- Belief 3 (structural misalignment): **further complicated** — misalignment persists even under capitation because of short-term budget pressure, adherence uncertainty, and member turnover. Capitation is necessary but not sufficient for prevention alignment.
- Belief 4 (atoms-to-bits): **reinforced** — continuous monitoring (CGMs, wearables) could solve the GLP-1 adherence problem by identifying right patients and tracking response, turning population-level prescribing into targeted monitored intervention.
- Existing GLP-1 claim: **needs scope qualification** — "inflationary through 2035" is correct at system level but incomplete. Should distinguish system-level from payer-level economics. Price trajectory (declining toward $50-100/month internationally) may move inflection point earlier.
**Sources archived:** 12 across five tracks (multi-organ protection, adherence, MA behavior, policy, counter-evidence)
**Extraction candidates:** 8-10 claims including scope qualification of existing GLP-1 claim, VBC adherence paradox, MA prevention resistance, BALANCE model design, multi-organ protection thesis


@ -1,36 +0,0 @@
---
type: claim
domain: entertainment
secondary_domains: [internet-finance]
description: "Beast Industries' $5B valuation validates that investors price integrated content-to-product systems where media operates at loss to drive CPG revenue"
confidence: likely
source: "Fortune, MrBeast Beast Industries fundraise coverage, 2025-02-27"
created: 2026-03-11
---
# Beast Industries $5B valuation validates content-as-loss-leader model at enterprise scale
Beast Industries' $5B valuation in its 2025 fundraise represents market validation that the content-as-loss-leader model scales to enterprise size. The valuation is based on projected revenue growth from $899M (2025) to $1.6B (2026) to $4.78B (2029), with media (YouTube + Amazon) projected to represent only 1/5 of total sales by 2026—down from approximately 50% in 2025.
The economic structure reveals the loss-leader mechanism: the media business produced similar revenue to Feastables (~$250M) but operated at an ~$80M loss, while Feastables generated $250M revenue with $20M+ profit. This inversion—where the larger revenue stream is unprofitable—demonstrates that content functions as customer acquisition infrastructure rather than a primary revenue source.
The competitive advantage is structural: Feastables achieves zero marginal cost customer acquisition through content distribution, compared to traditional CPG companies like Hershey's and Mars spending 10-15% of revenue on advertising. Feastables' presence in 30,000+ retail locations (Walmart, Target, 7-Eleven) shows this model translates to physical retail distribution at scale, not just direct-to-consumer sales.
Investors are explicitly pricing the integrated system (content → audience → products) rather than content revenue alone. The $4.78B 2029 revenue projection, if realized, would make a YouTube creator larger than many traditional entertainment companies—but with revenue primarily from CPG products rather than media. This represents a structural shift in how creator economics scale beyond direct monetization.
## Evidence
- Beast Industries raising at $5B valuation with revenue trajectory: $899M (2025) → $1.6B (2026) → $4.78B (2029)
- Media business projected at 1/5 of total revenue by 2026, down from ~50% in 2025
- Media business: ~$250M revenue, ~$80M loss; Feastables: $250M revenue, $20M+ profit
- Feastables in 30,000+ retail locations with zero marginal cost customer acquisition vs traditional CPG 10-15% ad spend
- Five verticals: software (Viewstats), CPG (Feastables, Lunchly), health/wellness, media, video games
---
Relevant Notes:
- [[the media attractor state is community-filtered IP with AI-collapsed production costs where content becomes a loss leader for the scarce complements of fandom community and ownership]]
- [[creator-brand-partnerships-shifting-from-transactional-campaigns-to-long-term-joint-ventures-with-shared-formats-audiences-and-revenue]]
- [[fanchise management is a stack of increasing fan engagement from content extensions through co-creation and co-ownership]]
Topics:
- [[domains/entertainment/_map]]


@ -1,47 +0,0 @@
---
type: claim
domain: entertainment
secondary_domains: [cultural-dynamics]
description: "IAB 2026 data shows consumer negative sentiment toward AI ads rose 12 percentage points year-over-year while AI quality was improving dramatically, directly falsifying the common assumption that exposure normalizes acceptance"
confidence: likely
source: "Clay, from IAB 'The AI Ad Gap Widens' report, 2026"
created: 2026-03-12
depends_on: ["GenAI adoption in entertainment will be gated by consumer acceptance not technology capability"]
challenged_by: []
---
# Consumer rejection of AI-generated ads intensifies as AI quality improves, disproving the exposure-leads-to-acceptance hypothesis
The most common prediction about consumer resistance to AI-generated content is that it will erode as AI quality improves and as consumers habituate through repeated exposure. The IAB's 2026 AI Ad Gap Widens report provides direct quantitative evidence against this prediction in the advertising domain.
Between 2024 and 2026 — a period when AI generative quality improved dramatically — consumer negative sentiment toward AI-generated ads increased by 12 percentage points. Simultaneously, the share of neutral respondents fell from 34% to 25%. Consumers are not staying neutral as they get more exposure to AI content; they are forming stronger opinions, and predominantly negative ones.
The polarization data is particularly significant. A naive exposure-leads-to-acceptance model predicts that neutrals gradually migrate to positive sentiment as the content becomes familiar. The actual pattern is the opposite: neutrals are disappearing but migrating toward negative sentiment. This suggests that increased familiarity is producing informed rejection, not normalized acceptance.
## Proposed mechanism
As AI quality improves, consumers become better at detecting AI-generated content — and detection triggers rejection rather than acceptance. Paradoxically, higher-quality AI content may make the authenticity question more salient, not less. When AI ads become more polished, they compete directly against human-created ads on the same aesthetic plane, making the question of provenance more visible. The uncanny valley may apply to authenticity perception, not just visual realism.
This is consistent with the broader trend toward "human-made" as an active premium label: the harder AI is to detect, the more valuable explicit provenance signals become. Consumers aren't rejecting AI because it looks bad — they're rejecting it because they learned to care who made it.
## Evidence
- **IAB 2026 AI Ad Gap Widens report**: Consumer negative sentiment toward AI ads increased 12 percentage points from 2024 to 2026
- **IAB 2026**: Neutral respondents dropped from 34% to 25% over the same period (polarization, not normalization)
- **IAB 2026**: Only 45% of consumers report very/somewhat positive sentiment about AI ads
- **Temporal control**: The 2024→2026 window coincides with major AI quality improvements (Sora, multimodal systems, etc.), ruling out "AI got worse" as an explanation
## Challenges
The IAB data covers advertising specifically. It is possible that advertising is a particularly hostile context for AI due to the inherent skepticism consumers bring to commercial messaging. The acceptance-through-exposure hypothesis may still hold in entertainment contexts (e.g., AI-generated film VFX, background music) where provenance is less salient. This claim is strongest for consumer-facing AI-branded content; it is weaker for AI-assisted production invisible to consumers.
---
Relevant Notes:
- [[GenAI adoption in entertainment will be gated by consumer acceptance not technology capability]] — the parent claim; this provides direct empirical evidence in a surprising direction
- [[human-made-is-becoming-a-premium-label-analogous-to-organic-as-AI-generated-content-becomes-dominant]] — the market response to intensifying rejection
- [[consumer definition of quality is fluid and revealed through preference not fixed by production value]] — quality now includes provenance as a dimension, which is what consumers are rejecting on
Topics:
- [[entertainment]]
- [[cultural-dynamics]]


@ -34,12 +34,6 @@ This claim is rated experimental because:
The claim describes an emerging pattern and stated industry prediction rather than an established norm.
### Additional Evidence (extend)
*Source: [[2025-02-27-fortune-mrbeast-5b-valuation-beast-industries]] | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5*
Beast Industries represents the structural endpoint of creator-brand integration: full vertical ownership rather than partnership. The company owns five verticals (software via Viewstats, CPG via Feastables and Lunchly, health/wellness, media, video games) with Feastables in 30,000+ retail locations, demonstrating that creator-owned brands achieve traditional retail distribution at scale. The $5B valuation suggests investors view fully integrated creator-owned product companies as more valuable than partnership models, as the creator captures all margin rather than splitting with brand partners. This extends the partnership trajectory from transactional campaigns → joint ventures → full creator ownership of the product vertical.
---
Relevant Notes:


@ -1,61 +0,0 @@
---
type: claim
domain: entertainment
secondary_domains: [cultural-dynamics]
description: "Gen Z rates AI-generated ads more negatively than Millennials on every measured dimension — 39% vs 20% negative sentiment — and the generational gap widened from 2024 to 2026, making Gen Z's rejection a forward indicator for where mainstream sentiment is heading"
confidence: experimental
source: "Clay, from IAB 'The AI Ad Gap Widens' report, 2026"
created: 2026-03-12
depends_on: ["GenAI adoption in entertainment will be gated by consumer acceptance not technology capability", "consumer-rejection-of-ai-generated-ads-intensifies-as-ai-quality-improves-disproving-the-exposure-leads-to-acceptance-hypothesis"]
challenged_by: []
---
# Gen Z hostility to AI-generated advertising is stronger than Millennials and widening, making Gen Z a negative leading indicator for AI content acceptance
Gen Z consumers are more hostile to AI-generated advertising than Millennials across every measured dimension, and the gap between the two cohorts widened from 2024 to 2026. Because Gen Z is the youngest fully-addressable consumer cohort, their attitudes represent where mainstream consumer sentiment is likely to move — not an aberration that will normalize as the cohort ages.
## The data
**Negative sentiment**:
- Gen Z: 39% negative
- Millennials: 20% negative
- Gap: 19 percentage points (widened from 6 points in 2024: 21% vs. 15%)
**Brand attribute perception (Gen Z vs. Millennials rating AI-using brands)**:
- "Lacks authenticity": 30% (Gen Z) vs. 13% (Millennials)
- "Disconnected": 26% (Gen Z) vs. 8% (Millennials)
- "Unethical": 24% (Gen Z) vs. 8% (Millennials)
The Gen Z-Millennial gap more than tripled on disconnectedness (from roughly even to over 3:1) and tripled on unethical (from roughly even to 3:1). This is not generational noise; it is a systematic divergence on values dimensions that Gen Z weights heavily.
## Why Gen Z as leading indicator, not outlier
The standard framing of generational divides treats the younger cohort as a laggard that will converge to mainstream norms as they age and gain purchasing power. This framing is wrong for AI content because:
1. **Digital nativeness makes Gen Z more capable of detecting AI**, not less. They grew up with generative tools; they know what AI content looks and feels like. Their rejection is informed, not naive.
2. **Gen Z's authenticity framework is more developed**. Creators, not studios, formed their cultural reference points. Authenticity is a core value in creator culture in a way it was not in broadcast-era media. AI content violates that framework.
3. **They are approaching peak purchasing power**. Gen Z is entering prime consumer years. The advertising industry that ignores their values will face rising cost-per-acquisition as the largest cohorts turn hostile.
The leading-indicator interpretation implies that current Millennial negative sentiment (20%) is a lagged version of what is coming. If Gen Z's rate (39%) is where cohorts eventually stabilize as awareness increases, total market negative sentiment will approximately double from current levels.
## Evidence
- **IAB 2026**: Gen Z 39% negative vs. Millennial 20% negative
- **IAB 2026**: Gen Z-Millennial gap widened significantly from 2024 (21% vs. 15% in 2024 → 39% vs. 20% in 2026)
- **IAB 2026**: Gen Z rates AI-using brands as lacking authenticity (30% vs. 13%), disconnected (26% vs. 8%), and unethical (24% vs. 8%)
- **Trend direction**: Gap widened over 2 years while both cohorts had more exposure to AI content — consistent with informed rejection not naive confusion
## Challenges
This claim depends on the leading-indicator framing — that Gen Z attitudes predict future mainstream attitudes rather than representing a cohort-specific view that moderates with age. The alternative hypothesis is that Gen Z attitudes are a developmental stage artifact (younger people are more idealistic about authenticity) that will moderate as they age into consumption patterns similar to Millennials. The 2024→2026 widening of the gap slightly favors the leading-indicator interpretation over the developmental-stage hypothesis, but two years is insufficient to distinguish them.
---
Relevant Notes:
- [[consumer-rejection-of-ai-generated-ads-intensifies-as-ai-quality-improves-disproving-the-exposure-leads-to-acceptance-hypothesis]] — the overall trend this cohort data sharpens
- [[the-advertiser-consumer-ai-perception-gap-is-a-widening-structural-misalignment-not-a-temporal-communications-lag]] — Gen Z data makes the structural case stronger: the cohort most likely to increase in market share is the most hostile
- [[human-made-is-becoming-a-premium-label-analogous-to-organic-as-AI-generated-content-becomes-dominant]] — Gen Z's authenticity-first values are the demand-side driver of human-made premium
Topics:
- [[entertainment]]
- [[cultural-dynamics]]


@ -290,12 +290,6 @@ Entertainment is the domain where TeleoHumanity eats its own cooking.
The crystallization of 'human-made' as a premium label adds a new dimension to the scarcity analysis: not just community and ownership, but verifiable human provenance becomes scarce and valuable as AI content becomes abundant. EY's guidance that companies must 'keep what people see and feel recognizably human—authentic faces, genuine stories and shared cultural moments' to build 'deeper trust and stronger brand value' suggests human provenance is becoming a distinct scarce complement alongside community and ownership. As production costs collapse toward compute costs (per the non-ATL production costs claim), the ability to credibly signal human creation becomes a scarce resource that differentiates content. Community-owned IP may have structural advantage in signaling this provenance because ownership structure itself communicates human creation, while corporate content must construct proof through external verification. This extends the attractor claim by identifying human provenance as an additional scarce complement that becomes valuable in the AI-abundant, community-filtered media landscape.
### Additional Evidence (confirm)
*Source: [[2025-02-27-fortune-mrbeast-5b-valuation-beast-industries]] | Added: 2026-03-12 | Extractor: anthropic/claude-sonnet-4.5*
Beast Industries' $5B valuation and revenue trajectory ($899M → $1.6B → $4.78B by 2029) with media projected at only 1/5 of revenue by 2026 provides enterprise-scale validation of content-as-loss-leader. The media business operates at ~$80M loss while Feastables generates $250M revenue with $20M+ profit, demonstrating that content functions as customer acquisition infrastructure rather than primary revenue source. The $5B valuation prices the integrated system (content → audience → products) rather than content alone, representing market validation that this attractor state is real and scalable. Feastables' presence in 30,000+ retail locations (Walmart, Target, 7-Eleven) shows the model translates to physical retail distribution, not just direct-to-consumer. This is the first enterprise-scale validation of the loss-leader model where media revenue is subordinate to product revenue.
---
Relevant Notes:


@ -1,52 +0,0 @@
---
type: claim
domain: entertainment
secondary_domains: [cultural-dynamics]
description: "The 37-point gap between advertiser beliefs about consumer AI sentiment (82% positive) and actual consumer sentiment (45% positive) widened from 32 points in 2024, indicating the advertising industry holds systematically wrong beliefs that are getting worse not better"
confidence: likely
source: "Clay, from IAB 'The AI Ad Gap Widens' report, 2026"
created: 2026-03-12
depends_on: ["GenAI adoption in entertainment will be gated by consumer acceptance not technology capability"]
challenged_by: []
---
# The advertiser-consumer AI perception gap is a widening structural misalignment, not a temporal communications lag
The advertising industry holds beliefs about consumer sentiment toward AI-generated ads that are systematically and increasingly wrong. The IAB's 2026 AI Ad Gap Widens report documents:
- **82%** of ad executives believe Gen Z/Millennials feel very or somewhat positive about AI ads
- **45%** of consumers actually report positive sentiment
- **Gap = 37 percentage points** — up from 32 points in 2024
The direction of the trend matters as much as the magnitude. A 5-point widening over two years, during a period of intense industry AI discourse, suggests this is not a communications problem that more education will solve. Advertisers are becoming *more* confident about consumer acceptance even as consumer rejection is intensifying.
## Why this is structural, not informational
The standard explanation for perception gaps is information asymmetry: industry insiders lack visibility into consumer sentiment. But the IAB publishes this data; ad executives have access to consumer sentiment surveys. The gap is persisting and widening not because advertisers lack information but because their incentives and selection pressures push them toward optimistic beliefs.
Several structural forces maintain the misalignment:
1. **Agency incentives**: Ad agencies earn fees for producing AI content; admitting consumer resistance reduces business justification
2. **Executive selection**: Leaders who championed AI adoption must believe adoption will succeed to justify past decisions
3. **Attribute framing gaps**: Ad executives associate AI with "forward-thinking" (46%) and "innovative" (49%), while consumers are more likely to associate it with "manipulative" (20% vs. executives' 10%) and "unethical" (16% vs. 7%). They are not measuring the same attributes
## Evidence
- **IAB 2026**: 82% advertiser positive-sentiment belief vs. 45% consumer positive sentiment = 37pp gap
- **IAB 2026**: Gap was 32 points in 2024 — widened by 5 points in two years
- **IAB 2026 attribute data**: "Forward-thinking" — 46% ad executives vs. 22% consumers; "Innovative" — 49% ad executives vs. 23% consumers (down from 30% in 2024); "Manipulative" — 10% ad executives vs. 20% consumers; "Unethical" — 7% ad executives vs. 16% consumers
- **Temporal pattern**: Gap widened during a period when AI industry discussion increased, not decreased — suggesting more information flow did not close the gap
## Challenges
The IAB is the Interactive Advertising Bureau — the industry association for digital advertisers. This gives the report authority with the industry it covers, but it also means the survey methodology and framing reflect industry assumptions. The "positive/negative" binary may not fully capture consumer nuance. Additionally, consumers self-report sentiment in surveys but their revealed preference (ad engagement) might diverge from stated sentiment.
---
Relevant Notes:
- [[consumer-rejection-of-ai-generated-ads-intensifies-as-ai-quality-improves-disproving-the-exposure-leads-to-acceptance-hypothesis]] — the demand-side of the same misalignment: consumer rejection is growing while advertiser optimism is growing
- [[GenAI adoption in entertainment will be gated by consumer acceptance not technology capability]] — this misalignment means the advertiser-as-gatekeeper of AI adoption is systematically miscalibrated
- [[human-made-is-becoming-a-premium-label-analogous-to-organic-as-AI-generated-content-becomes-dominant]] — the market mechanism that will eventually correct the misalignment (when human-made premium pricing arrives)
Topics:
- [[entertainment]]
- [[cultural-dynamics]]


@ -1,29 +0,0 @@
---
type: claim
domain: internet-finance
description: "SPL 404 is a Solana token standard that creates bidirectional swaps between fungible governance tokens and NFTs, letting DAOs earn secondary revenue from swap activity without direct NFT treasury sales."
confidence: experimental
source: "Rio; FutureDAO Champions NFT Collection proposal (2024-07-18, passed 2024-07-22)"
created: 2026-03-12
depends_on:
- "MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale"
---
# SPL 404 enables fungible-NFT swap revenue for DAOs by bridging governance tokens and NFT liquidity on Solana
SPL 404 is a Solana token standard that allows bidirectional swaps between fungible tokens and NFTs. For DAOs, this creates a monetization path that doesn't require direct NFT sales from the treasury: instead, when community members swap their governance tokens (e.g., $FUTURE) into NFT form or back, the protocol earns revenue from the swap mechanics. Secondary market royalties then compound on top.
FutureDAO's Champions NFT Collection proposal (passed July 2024) illustrates this architecture in practice. Of the $10,000 design budget, $3,000 was earmarked for non-artistic technical work — $1,000 for smart contract development and $2,000 for metadata integration — required specifically to enable SPL 404 swap mechanics. The proposal projected two revenue streams: SPL 404 swap fees and secondary market royalties. Neither stream requires the DAO to sell NFTs directly; revenue flows from market activity rather than treasury disposition.
This matters for DAO treasury design. Traditional NFT monetization requires either initial sales (one-time, often fraught with launch mechanics) or secondary royalties (declining in enforcement reliability post-Blur). SPL 404 adds a third path: perpetual swap revenue tied to the governance token's own liquidity. As long as members convert between token and NFT form, the swap mechanism generates revenue.
The limitation is that SPL 404 swap revenue is indirect and hard to project — it depends on community demand for the NFT form specifically. If members prefer holding the fungible token, swap volume is minimal regardless of collection quality.
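The revenue structure described above can be sketched in a few lines. This is an illustration only: the fee rate, royalty rate, and volumes below are hypothetical, not FutureDAO's actual SPL 404 configuration.

```python
# Hypothetical sketch of the two SPL 404 revenue streams described above:
# swap fees on token<->NFT conversions plus secondary-market royalties.
# All rates and volumes are invented for illustration.
def dao_nft_revenue(swap_volume_usd, secondary_volume_usd,
                    swap_fee_rate=0.02, royalty_rate=0.05):
    swap_fees = swap_volume_usd * swap_fee_rate      # perpetual, tied to token<->NFT conversion activity
    royalties = secondary_volume_usd * royalty_rate  # enforcement-dependent, declining post-Blur
    return swap_fees + royalties

# If members prefer holding the fungible token, swap volume collapses and
# revenue falls regardless of collection quality (the stated limitation).
print(dao_nft_revenue(50_000, 20_000))  # → 2000.0
```

The point of the sketch is the dependency, not the numbers: the first term scales with conversion activity between token and NFT form, so the "third path" only pays if the community actually wants the NFT representation.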
---
Relevant Notes:
- [[MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale]] — FutureDAO runs on MetaDAO's futarchy infrastructure; SPL 404 extends the token utility layer
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]] — the governance mechanism that approved this SPL 404-enabled NFT spend
Topics:
- [[_map]]


@ -1,38 +0,0 @@
---
type: claim
domain: internet-finance
description: "Futarchy governance can evaluate and approve non-financial cultural expenditures when proposers successfully frame community cohesion and brand benefits as positive token price signals, expanding the scope of what market governance can decide."
confidence: experimental
source: "Rio; FutureDAO Champions NFT Collection proposal (2024-07-18, passed 2024-07-22)"
created: 2026-03-12
depends_on:
- "MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window"
- "coin price is the fairest objective function for asset futarchy"
---
# futarchy markets can price cultural spending proposals by treating community cohesion and brand equity as token price inputs
Futarchy governance selects proposals by whether conditional markets expect them to increase token price. This creates an implicit question for cultural spending: can markets price "soft" benefits like community cohesion, brand presence, and social identity into a token price signal?
FutureDAO's Champions NFT proposal provides a concrete test case. The proposal requested $10,000 for NFT artwork design — with the primary stated value case being community cohesion ("PFPs for community members to represent themselves") and Solana ecosystem presence ("FutureDAO's notoriety across the Solana ecosystem"), not direct financial ROI. Revenue projections were explicitly indirect: SPL 404 swap fees and secondary market royalties, both dependent on emergent community demand. Despite this soft value framing, the proposal passed futarchy governance on July 22, 2024.
This indicates that futarchy markets can evaluate cultural spending when participants believe brand and community effects will flow through to token price. The mechanism works because the objective function (token price) is broad enough to incorporate any factor that market participants believe matters — including social capital, community retention, and ecosystem reputation. Futarchy doesn't require direct financial return from a proposal; it requires only that participants believe the proposal increases expected token value.
The implication for DAO governance design is significant: futarchy is not limited to quantifiable ROI decisions. It can govern brand investments, cultural initiatives, and community spending — anywhere the market believes soft benefits translate to token appreciation. This expands futarchy's applicable scope beyond the financial optimization use cases it was originally theorized for.
The risk is that cultural proposals introduce systematic bias: participants who value community belonging may persistently overestimate the token-price impact of cultural spending, creating a selection pressure for feel-good proposals over productive ones.
## Challenges
The evidence here is a single data point: one passed proposal doesn't establish a reliable pattern. Cultural proposals that fail futarchy governance (and thus go unobserved in public records) would provide the counter-evidence needed to calibrate how often futarchy actually validates cultural versus financial spending.
---
Relevant Notes:
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]] — the mechanism that priced and approved this cultural spending proposal
- [[coin price is the fairest objective function for asset futarchy]] — the broad objective function that makes cultural pricing possible
- [[redistribution proposals are futarchys hardest unsolved problem because they can increase measured welfare while reducing productive value creation]] — adjacent challenge: welfare-increasing but value-neutral proposals
- [[futarchy-governed DAOs converge on traditional corporate governance scaffolding for treasury operations because market mechanisms alone cannot provide operational security and legal compliance]] — limits of futarchy for operational decisions
Topics:
- [[_map]]

View file

@ -1,33 +0,0 @@
---
type: entity
entity_type: company
name: "Beast Industries"
domain: entertainment
secondary_domains: [internet-finance]
status: active
founded: "~2020"
founder: "Jimmy Donaldson (MrBeast)"
key_metrics:
valuation: "$5B (2025 fundraise)"
revenue_2025: "$899M (projected)"
revenue_2026: "$1.6B (projected)"
revenue_2029: "$4.78B (projected)"
feastables_revenue: "$250M"
feastables_profit: "$20M+"
media_loss: "~$80M"
retail_locations: "30,000+"
tracked_by: clay
created: 2026-03-11
---
# Beast Industries
Beast Industries is MrBeast's (Jimmy Donaldson) integrated media and consumer products company, operating five verticals: software (Viewstats), CPG (Feastables, Lunchly), health/wellness, media (YouTube + Amazon), and video games. The company raised capital at a $5B valuation in 2025, with projected revenue growth from $899M (2025) to $4.78B (2029). The business model treats content as customer acquisition infrastructure rather than primary revenue source, with media projected to represent only 1/5 of total sales by 2026.
## Timeline
- **2025-02-27** — Raised capital at $5B valuation with revenue projections: $899M (2025) → $1.6B (2026) → $4.78B (2029)
- **2025** — Feastables generated $250M revenue with $20M+ profit; media business similar revenue but ~$80M loss
- **2025** — Feastables distributed through 30,000+ retail locations (Walmart, Target, 7-Eleven)
## Relationship to KB
Beast Industries provides enterprise-scale validation of [[the media attractor state is community-filtered IP with AI-collapsed production costs where content becomes a loss leader for the scarce complements of fandom community and ownership]]. The $5B valuation represents market pricing of the integrated content-to-product model, where media operates at a loss to generate zero marginal cost customer acquisition for high-margin CPG products.

View file

@ -1,3 +0,0 @@
---
type: entity
...

View file

@ -47,5 +47,3 @@ Topics:
## Timeline
- **2024-12-19** — [[deans-list-implement-3-week-vesting]] passed: 3-week linear vesting for DAO payments to reduce sell pressure from 80% immediate liquidation to 33% weekly rate, projected 15%-25% valuation increase
- **2024-10-10** — [[islanddao-treasury-proposal]] passed: Established treasury reserve funded by 2.5% of USDC payments with risk-scored asset allocation (80/20 safe/risky split) and quarterly performance reviews managed by Kai (@DeFi_Kai)

View file

@ -1,3 +0,0 @@
---
type: entity
...

View file

@ -1,9 +1,58 @@
---
type: timeline
...
type: entity
entity_type: company
name: "Drift Protocol"
domain: internet-finance
handles: ["@DriftProtocol"]
website: https://drift.trade
status: active
tracked_by: rio
created: 2026-03-11
last_updated: 2026-03-11
category: "Perpetuals DEX / DeFi protocol (Solana)"
stage: growth
key_metrics:
futarchy_proposals: "6+ proposals on MetaDAO platform (grants, working group, AI agents, competitions)"
drift_allocated: "150,000+ DRIFT allocated through futarchy governance"
built_on: ["Solana"]
competitors: ["[[omnipair]]"]
tags: ["perps", "solana", "futarchy-adopter", "metadao-ecosystem"]
---
2024-05-30: Event description.
2024-07-01: New event description.
2024-07-05: Another new event description.
2024-07-09: Event description.
2025-02-13: Event description.
# Drift Protocol
## Overview
Perpetuals DEX on Solana — one of the largest decentralized derivatives platforms. Significant to the MetaDAO ecosystem for two reasons: (1) Drift adopted futarchy governance through MetaDAO's platform, making it the highest-profile external organization to use futarchic decision-making, and (2) Drift represents the future competitive threat to OmniPair's leverage monopoly on MetaDAO ecosystem tokens.
## Current State
- **Futarchy adoption**: Drift has run 6+ governance proposals through MetaDAO's futarchy platform since May 2024, allocating 150,000+ DRIFT tokens through futarchic decisions. This includes the Drift Foundation Grant Program (100K DRIFT), "Welcome the Futarchs" retroactive rewards (50K DRIFT), Drift AI Agents grants program (50K DRIFT), Drift Working Group funding, and SuperTeam Earn creator competitions.
- **AI Agents program**: Drift allocated 50,000 DRIFT for an AI Agents Grants program (Dec 2024) covering trading agents, yield agents, information agents, and social agents. Early signal of DeFi protocols investing in agentic infrastructure.
- **Leverage competitor**: Currently, OmniPair is the "only game in town" for leverage on MetaDAO ecosystem tokens. However, if MetaDAO reaches ~$1B valuation, Drift and other perp protocols will likely list META and ecosystem tokens — eroding OmniPair's temporary moat.
- **Perps aggregation**: Ranger Finance aggregated Drift (among other venues) before Ranger's liquidation.
## Timeline
- **2024-05-30** — First futarchy proposal: "Welcome the Futarchs" — 50K DRIFT to incentivize futarchy participation
- **2024-07-09** — Drift Foundation Grant Program initialized via futarchy (100K DRIFT)
- **2024-08-27** — SuperTeam Earn creator competition funded via futarchy
- **2024-12-19** — AI Agents Grants program: 50K DRIFT for trading, yield, info, and social agents
- **2025-02-13** — Drift Working Group funded via futarchy
## Competitive Position
- **Futarchy validation**: Drift using MetaDAO's governance system is the strongest external validation signal — a major protocol choosing futarchy over traditional token voting for real treasury decisions.
- **Future leverage threat**: Drift listing META perps would directly compete with OmniPair for leverage demand. This is OmniPair's identified "key vulnerability" — the moat is temporary.
- **Scale differential**: Drift operates at much larger scale than the MetaDAO ecosystem. Its adoption of futarchy is disproportionately significant as a credibility signal.
## Relationship to KB
- [[futarchy implementations must simplify theoretical mechanisms for production adoption because original designs include impractical elements that academics tolerate but users reject]] — Drift's adoption validates that simplified futarchy works for real organizations
- [[permissionless leverage on metaDAO ecosystem tokens catalyzes trading volume and price discovery that strengthens governance by making futarchy markets more liquid]] — Drift is the future competitor that erodes OmniPair's leverage monopoly
- [[governance mechanism diversity compounds organizational learning because disagreement between mechanisms reveals information no single mechanism can produce]] — Drift running both traditional governance and futarchy provides comparative data
---
Relevant Entities:
- [[metadao]] — futarchy platform provider
- [[omnipair]] — current leverage competitor (OmniPair holds temporary monopoly)
- [[ranger-finance]] — former aggregation client (liquidated)
Topics:
- [[internet finance and decision markets]]

View file

@ -46,7 +46,6 @@ MetaDAO's token launch platform. Implements "unruggable ICOs" — permissionless
- **2026-03-07** — Areal DAO launch: $50K target, raised $11,654 (23.3%), REFUNDING status by 2026-03-08 — first documented failed futarchy-governed fundraise on platform
- **2026-03-04** — [[seekervault]] fundraise launched targeting $75,000, closed next day with only $1,186 (1.6% of target) in refunding status
- **2026-03-05** — [[insert-coin-labs-futardio-fundraise]] launched for Web3 gaming studio (failed, $2,508 / $50K = 5% of target)
## Competitive Position
- **Unique mechanism**: Only launch platform with futarchy-governed accountability and treasury return guarantees
- **vs pump.fun**: pump.fun is memecoin launch (zero accountability, pure speculation). Futardio is ownership coin launch (futarchy governance, treasury enforcement). Different categories despite both being "launch platforms."

View file

@ -1,46 +0,0 @@
---
type: entity
entity_type: decision_market
name: "Insert Coin Labs: Futardio Fundraise"
domain: internet-finance
status: failed
parent_entity: "[[insert-coin-labs]]"
platform: futardio
proposal_url: "https://www.futard.io/launch/62Yxd8gLQ2YYmY2TifhChJG4tVdf4b1oAHcMfwTL2WUu"
proposal_date: 2026-03-05
resolution_date: 2026-03-06
category: fundraise
summary: "Web3 gaming studio seeking $50K for team and liquidity with 80/20 split"
tracked_by: rio
created: 2026-03-11
key_metrics:
raise_target: 50000
total_committed: 2508
oversubscription_ratio: 0.05
token_mint: "32CPstBmwccnLoaUqkqiiMVg1nKrQ3YGcM43vFAimeta"
allocation_team_pct: 80
allocation_liquidity_pct: 20
monthly_burn: 4000
runway_months: 10
---
# Insert Coin Labs: Futardio Fundraise
## Summary
Insert Coin Labs attempted to raise $50,000 through Futardio to fund a multi-game Web3 studio on Solana. The raise allocated 80% to team (devs, designer, artist) and 20% to $INSERT token liquidity pool, with $4K monthly burn providing ~10 month runway. The fundraise failed, reaching only $2,508 (5% of target) before entering refunding status.
## Market Data
- **Outcome:** Failed (refunding)
- **Target:** $50,000
- **Committed:** $2,508 (5.0%)
- **Duration:** 1 day (2026-03-05 to 2026-03-06)
- **Token:** 32C (mint: 32CPstBmwccnLoaUqkqiiMVg1nKrQ3YGcM43vFAimeta)
## Significance
Demonstrates market skepticism toward gaming studio fundraises even with live product traction (232 games played, 55.1 SOL volume on Domin8). The 95% funding gap suggests either insufficient market validation of the studio model, weak distribution/marketing, or broader market conditions unfavorable to gaming raises. Notable that the team had working product and audit credentials but still failed to attract capital.
## Relationship to KB
- [[futardio]] — fundraising platform
- [[insert-coin-labs]] — parent entity
- [[MetaDAO]] — underlying futarchy infrastructure
- Contrasts with [[futardio-cult-raised-11-4-million-in-one-day-through-futarchy-governed-meme-coin-launch]] showing market selectivity

View file

@ -1,84 +0,0 @@
---
type: entity
entity_type: decision_market
name: "IslandDAO: Treasury Proposal (Dean's List Proposal)"
domain: internet-finance
status: passed
parent_entity: "[[deans-list]]"
platform: "futardio"
proposer: "futard.io"
proposal_url: "https://www.futard.io/proposal/8SwPfzKhaZ2SQfgfJYfeVRTXALZs2qyFj7kX1dEkd29h"
proposal_date: 2024-10-10
resolution_date: 2024-10-14
category: "treasury"
summary: "Establish treasury reserve funded by 2.5% of USDC payments with risk-scored asset allocation and quarterly performance reviews"
tracked_by: rio
created: 2026-03-11
key_metrics:
reserve_funding: "2.5% of all USDC payments"
portfolio_split: "80% safe assets (RS >= 0.5), 20% risky assets (RS <= 0.5)"
performance_fee: "5% of quarterly profit, 3-month vesting"
twap_requirement: "3% increase (523k to 539k USDC MCAP)"
target_dean_price: "0.005383 USDC (from 0.005227)"
---
# IslandDAO: Treasury Proposal (Dean's List Proposal)
## Summary
Proposal to establish a treasury reserve for Dean's List DAO funded by allocating 2.5% of all USDC payments received. Treasury managed by Kai (@DeFi_Kai) with quarterly performance reviews and community oversight. Funds held in Mango Delegate Account via Realms with risk-scored asset allocation framework (80/20 safe/risky split).
## Market Data
- **Outcome:** Passed
- **Proposer:** futard.io
- **Proposal Account:** 8SwPfzKhaZ2SQfgfJYfeVRTXALZs2qyFj7kX1dEkd29h
- **DAO Account:** 9TKh2yav4WpSNkFV2cLybrWZETBWZBkQ6WB6qV9Nt9dJ
- **Autocrat Version:** 0.3
- **Created:** 2024-10-10
- **Completed:** 2024-10-14
## Mechanism Design
### Risk Scoring Framework
Assets evaluated using weighted risk score (Rs) formula:
- Volatility Weight: 0.4
- Liquidity Risk Weight: 0.2
- Market Cap Risk Weight: 0.3
- Drawdown Risk Weight: 0.1
Assets with RS >= 0.5 classified as safe, RS <= 0.5 as risky. Portfolio maintains 80/20 safe/risky allocation.
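The weighted score reduces to one line; a minimal sketch, assuming each component risk is normalized to [0, 1] (the proposal doesn't state the normalization):

```python
def risk_score(volatility, liquidity_risk, market_cap_risk, drawdown_risk):
    """Weighted risk score Rs using the proposal's stated weights.

    Each component is assumed normalized to [0, 1] so weights sum to 1.
    """
    return (0.4 * volatility + 0.2 * liquidity_risk
            + 0.3 * market_cap_risk + 0.1 * drawdown_risk)

def classify(rs):
    # The proposal labels RS >= 0.5 "safe" and RS <= 0.5 "risky";
    # the stated cutoffs overlap at exactly 0.5, so break ties as "safe".
    return "safe" if rs >= 0.5 else "risky"
```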
### Governance Structure
- Treasury Manager: Kai (@DeFi_Kai)
- Quarterly performance reviews required
- Community input on asset diversification
- Performance fee: 5% of quarterly profit with 3-month vesting
### Asset Whitelisting Process
New assets must:
1. Increase overall returns
2. Offer diversification when required
3. Replace similar asset with lower risk score
Weights are assessed to achieve the highest safe returns.
## Deliverables (First Quarter)
1. Define "rainy day" scenarios with community
2. Produce treasury reports covering:
- Treasury growth metrics
- Asset allocation and diversification
- Expected return calculations
- Sharpe Ratio for risk-adjusted performance
- Maximum drawdown analysis
- Actual vs expected returns
- Risk management summary
## Significance
First futarchy-governed treasury management proposal with formalized risk scoring framework. Demonstrates evolution from simple pass/fail decisions to complex financial governance with quantitative risk assessment and performance accountability.
## Relationship to KB
- [[deans-list]] - parent organization
- [[futardio]] - governance platform
- [[metadao]] - futarchy infrastructure provider
Topics:
- [[domains/internet-finance/_map]]

View file

@ -1,64 +0,0 @@
---
type: entity
entity_type: decision_market
name: "MetaDAO: Increase META Liquidity via a Dutch Auction"
domain: internet-finance
status: passed
parent_entity: "[[metadao]]"
platform: "futardio"
proposer: "prdUTSLQs6EcwreBtZnG92RWaLxdCTivZvRXSVRdpmJ"
proposal_url: "https://www.futard.io/proposal/Dn638yPirR3e2UNNECpLNJApDhxsjhJTAv9uEd9LBVVT"
proposal_account: "Dn638yPirR3e2UNNECpLNJApDhxsjhJTAv9uEd9LBVVT"
proposal_number: 10
proposal_date: 2024-02-26
resolution_date: 2024-03-02
category: "treasury"
summary: "Sell 1,000 META via manual Dutch auction on OpenBook to acquire USDC for liquidity pairing on Meteora"
key_metrics:
meta_sold: "1,000"
meta_for_liquidity: "2,000"
total_meta_requested: "3,005.45"
compensation_meta: "5.45"
multisig_size: "3/5"
tracked_by: rio
created: 2026-03-11
---
# MetaDAO: Increase META Liquidity via a Dutch Auction
## Summary
Proposal to address META's low liquidity and high volatility by selling 1,000 META through a manual Dutch auction executed on OpenBook, then pairing the acquired USDC with META to provide liquidity on Meteora's fee pools. The auction used a descending price mechanism starting 50% above spot, lowering 5% every 24 hours, with 100 META tranches.
## Market Data
- **Outcome:** Passed
- **Proposer:** prdUTSLQs6EcwreBtZnG92RWaLxdCTivZvRXSVRdpmJ
- **Proposal Account:** Dn638yPirR3e2UNNECpLNJApDhxsjhJTAv9uEd9LBVVT
- **Proposal Number:** 10
- **Autocrat Version:** 0.1
- **Created:** 2024-02-26
- **Completed:** 2024-03-02
## Mechanism Design
- Manual Dutch auction via OpenBook
- 100 META tranches, starting 50% above spot price
- Price reduction: 5% every 24 hours if >6% above spot
- New asks placed 10% above spot when filled
- 3/5 multisig execution (LMRVapqnn1LEwKaD8PzYEs4i37whTgeVS41qKqyn1wi)
- Final liquidity moved to Meteora 1% fee pool
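The descending price schedule above can be sketched directly. One ambiguity: the proposal doesn't specify whether the 5% step is measured against spot or against the current ask; this sketch assumes percentage points of spot.

```python
def auction_ask(spot, day, start_premium=0.50, daily_step=0.05,
                stop_premium=0.06):
    """Ask price on a given day of the manual Dutch auction.

    Starts 50% above spot and steps the premium down 5 points every
    24 hours while the ask remains more than 6% above spot (assuming
    the step is taken in percentage points of spot).
    """
    premium = start_premium - daily_step * day
    if premium < stop_premium:
        premium = stop_premium  # reductions stop once within 6% of spot
    return spot * (1 + premium)
```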
## Compensation Structure
Sealed-bid auction for multisig positions:
- Ben H: 0 META
- Nico: 0 META
- joebuild: 0.2 META
- Dodecahedr0x: 0.25 META
- Proposal creator (Durden): 5 META
- **Total:** 5.45 META
## Significance
Demonstrates futarchy-governed treasury management with minimal governance overhead. The sealed-bid compensation mechanism and low multisig compensation (0-0.25 META per member) reveal limited competitive interest in uncontested operational proposals. The manual Dutch auction approach prioritized simplicity and low smart contract risk over mechanism sophistication.
## Relationship to KB
- [[metadao]] - treasury management decision
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]] - operational implementation example
- [[meteora]] - liquidity destination platform

View file

@ -54,8 +54,6 @@ The futarchy governance protocol on Solana. Implements decision markets through
- **2026-03** — Pine Analytics Q4 2025 quarterly report published
- **2024-02-18** — [[metadao-otc-trade-pantera-capital]] failed: Pantera Capital's $50,000 OTC purchase proposal rejected by futarchy markets
- **2024-02-26** — [[metadao-increase-meta-liquidity-dutch-auction]] proposed: sell 1,000 META via manual Dutch auction on OpenBook to acquire USDC for Meteora liquidity pairing
- **2024-03-02** — [[metadao-increase-meta-liquidity-dutch-auction]] passed: completed Dutch auction and liquidity provision, moving all protocol-owned liquidity to Meteora 1% fee pool
## Key Decisions
| Date | Proposal | Proposer | Category | Outcome |
|------|----------|----------|----------|---------|

View file

@ -1,6 +0,0 @@
---
type: organization
entity_type: organization
name: NASAA
...
---

View file

@ -1,22 +0,0 @@
---
type: entity
entity_type: company
name: Sanctum
domain: internet-finance
status: active
tracked_by: rio
created: 2026-03-11
---
# Sanctum
## Overview
Sanctum is a Solana-based protocol that adopted futarchy governance through MetaDAO's Autocrat program in early 2025. The project uses conditional token markets for governance decisions, with CLOUD-0 serving as its inaugural educational proposal.
## Timeline
- **2025-02-03** - [[sanctum-cloud-0-logo-change]] launched: First futarchy governance proposal (educational logo change)
- **2025-02-06** - [[sanctum-cloud-0-logo-change]] passed: Completed 3-day deliberation + 3-day voting cycle
## Relationship to KB
- [[MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale]] - governance infrastructure provider
- [[MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window]] - mechanism implementation

View file

@ -1,30 +0,0 @@
---
type: source
title: "Staffing a Service System with Non-Poisson Non-Stationary Arrivals"
author: "Ward Whitt et al. (Cambridge Core)"
url: https://www.cambridge.org/core/journals/probability-in-the-engineering-and-informational-sciences/article/abs/staffing-a-service-system-with-nonpoisson-nonstationary-arrivals/0F42FDA80A8B0B197D3D9E0B040A43D2
date: 2016-01-01
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, stochastic-modeling, non-stationary-arrivals, capacity-sizing]
---
# Staffing a Service System with Non-Poisson Non-Stationary Arrivals
Extends the square-root staffing formula to handle non-Poisson arrival processes, including non-stationary Cox processes where the arrival rate itself is a stochastic process.
## Key Content
- Standard Poisson assumption fails when arrivals are bursty or time-varying
- Introduces "peakedness" — the variance-to-mean ratio of the arrival process — as the key parameter for non-Poisson adjustment
- Modified staffing formula: adjust the square-root safety margin by the peakedness factor
- For bursty arrivals (peakedness > 1), you need MORE safety capacity than Poisson models suggest
- For smooth arrivals (peakedness < 1), you need LESS
- Practical: replacing a time-varying arrival rate with a constant (average or max) rate leads to badly under- or over-staffed systems
## Relevance to Teleo Pipeline
Our arrival process is highly non-stationary: research dumps are bursty (15 sources at once), futardio launches come in bursts of 20+, while some days are quiet. This is a textbook non-Poisson, non-stationary arrival process. The peakedness parameter captures exactly how bursty our arrivals are and tells us how much extra capacity we need beyond the basic square-root staffing rule.
Key insight: using a constant MAX_WORKERS regardless of current queue state is the worst of both worlds — too many workers during quiet periods (wasted compute), too few during bursts (queue explosion).
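The adjustment is small enough to sketch inline. The arrival and service rates below are illustrative guesses, not measured pipeline parameters:

```python
import math

def staff_for(arrival_rate, service_rate, beta=1.0, peakedness=1.0):
    """Square-root staffing with the peakedness adjustment.

    R = lambda / mu is the offered load; servers = R + beta * sqrt(z * R),
    where z is the variance-to-mean ratio of the arrival process.
    z > 1 (bursty) inflates the safety margin; z < 1 shrinks it.
    """
    R = arrival_rate / service_rate
    return math.ceil(R + beta * math.sqrt(peakedness * R))

# e.g. 0.5 sources/min arriving, ~12 min per extraction (mu = 1/12):
# Poisson (z=1) suggests 9 workers; bursty z=4 pushes that to 11
```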

View file

@ -1,28 +0,0 @@
---
type: source
title: "AIMD Dynamics and Distributed Resource Allocation"
author: "Martin J. Corless, C. King, R. Shorten, F. Wirth (SIAM)"
url: https://epubs.siam.org/doi/book/10.1137/1.9781611974225
date: 2016-01-01
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, AIMD, distributed-resource-allocation, congestion-control, fairness]
---
# AIMD Dynamics and Distributed Resource Allocation
SIAM monograph on AIMD (Additive Increase Multiplicative Decrease) as a general-purpose distributed resource allocation mechanism. Extends the TCP congestion control principle to resource allocation in computing, energy, and other domains.
## Key Content
- AIMD is the most widely used method for allocating limited resources among competing agents without centralized control
- Core algorithm: additive increase when no congestion (rate += α), multiplicative decrease when congestion detected (rate *= β, where 0 < β < 1)
- Provably fair: converges to equal sharing of available bandwidth/capacity
- Provably stable: system converges regardless of number of agents or parameter values
- Three sample applications: internet congestion control, smart grid energy allocation, distributed computing
- Key property: no global information needed — each agent only needs to observe local congestion signals
## Relevance to Teleo Pipeline
AIMD provides a principled, proven scaling algorithm: when eval queue is shrinking (no congestion), increase extraction workers by 1 per cycle. When eval queue is growing (congestion), halve extraction workers. This doesn't require predicting load, modeling arrivals, or solving optimization problems — it reacts to observed system state and is mathematically guaranteed to converge. Perfect for our "expensive compute, variable load" setting.
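The control loop fits in a few lines. The choice of congestion signal (eval-queue growth) and the alpha/beta values are our own illustrative choices, not from the monograph:

```python
def aimd_step(workers, eval_queue_growing, alpha=1, beta=0.5,
              min_workers=1, max_workers=6):
    """One AIMD cycle for the extraction worker pool.

    Growth of the downstream eval queue is the congestion signal:
    back off multiplicatively when it grows, probe additively when not.
    """
    if eval_queue_growing:
        return max(min_workers, int(workers * beta))
    return min(max_workers, workers + alpha)
```

Run once per dispatch cycle; the pool halves within two cycles of sustained congestion but recovers only one worker per quiet cycle, which is the asymmetry that makes AIMD stable.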

View file

@ -1,28 +0,0 @@
---
type: source
title: "Economies-of-Scale in Many-Server Queueing Systems: Tutorial and Partial Review of the QED Halfin-Whitt Heavy-Traffic Regime"
author: "Johan van Leeuwaarden, Britt Mathijsen, Jaron Sanders (SIAM Review)"
url: https://epubs.siam.org/doi/10.1137/17M1133944
date: 2018-01-01
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, queueing-theory, Halfin-Whitt, economies-of-scale, square-root-staffing]
---
# Economies-of-Scale in Many-Server Queueing Systems
SIAM Review tutorial on the QED (Quality-and-Efficiency-Driven) Halfin-Whitt heavy-traffic regime — the mathematical foundation for understanding when and how multi-server systems achieve economies of scale.
## Key Content
- The QED regime: operate near full utilization while keeping delays manageable
- As server count n grows, utilization approaches 1 at rate Θ(1/√n) — the "square root staffing" principle
- Economies of scale: larger systems need proportionally fewer excess servers for the same service quality
- The regime applies to systems ranging from tens to thousands of servers
- Square-root safety staffing works empirically even for moderate-sized systems (5-20 servers)
- Tutorial connects abstract queueing theory to practical staffing decisions
## Relevance to Teleo Pipeline
At our scale (5-6 workers), we're in the "moderate system" range where square-root staffing still provides useful guidance. The key takeaway: we don't need sophisticated algorithms for a system this small. Simple threshold policies informed by queueing theory will capture most of the benefit. The economies-of-scale result also tells us that if we grow to 20+ workers, the marginal value of each additional worker decreases — important for cost optimization.
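The economies-of-scale claim is easy to see numerically under square-root staffing (beta here is an illustrative quality-of-service parameter):

```python
import math

def qed_utilization(offered_load, beta=1.0):
    """Utilization under square-root staffing s = R + beta * sqrt(R).

    rho = R / s approaches 1 like beta / sqrt(R): larger systems run
    hotter at the same service quality (the economies-of-scale result).
    """
    return offered_load / (offered_load + beta * math.sqrt(offered_load))

# roughly 0.69 at R=5, 0.76 at R=10, 0.91 at R=100
```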

View file

@ -1,27 +0,0 @@
---
type: source
title: "Resource Scheduling in Non-Stationary Service Systems"
author: "Simio / WinterSim 2018"
url: https://www.simio.com/resources/papers/WinterSim2018/Resource-Scheduling-In-Non-stationary-Service-Systems.php
date: 2018-12-01
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, stochastic-modeling, non-stationary-arrivals, resource-scheduling, simulation]
---
# Resource Scheduling in Non-Stationary Service Systems
WinterSim 2018 paper on scheduling resources (servers/workers) when arrival rates change over time. Addresses the gap between theoretical queueing models (which assume stationarity) and real systems (which don't).
## Key Content
- Non-stationary service systems require time-varying staffing — fixed worker counts are suboptimal
- The goal: determine the number of servers as a function of time
- With unconstrained servers there would be no waiting time, but provisioning that way wastes capacity, since stochastic and nonstationary arrivals leave most servers idle most of the time
- Simulation-based approach: use discrete-event simulation to test staffing policies against realistic arrival patterns
- Key tradeoff: responsiveness (adding workers fast when load spikes) vs. efficiency (not wasting workers during quiet periods)
## Relevance to Teleo Pipeline
Directly applicable: our pipeline needs time-varying worker counts, not fixed MAX_WORKERS. The paper validates the approach of measuring queue depth and adjusting workers dynamically rather than using static cron-based fixed pools.
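One standard way to get a time-varying schedule is SIPP (stationary independent period by period): apply square-root staffing to each period's rate separately. A sketch under that approach (which may differ from this paper's simulation-based method); rates are illustrative:

```python
import math

def sipp_staffing(rates_per_period, service_rate, beta=1.0):
    """SIPP staffing sketch: square-root staffing applied per period.

    Produces a time-varying worker schedule instead of one fixed
    MAX_WORKERS for all periods.
    """
    schedule = []
    for lam in rates_per_period:
        R = lam / service_rate
        schedule.append(math.ceil(R + beta * math.sqrt(R)) if R > 0 else 0)
    return schedule

# a bursty day in sources/hour (quiet, research dump, quiet),
# with each worker clearing ~5 sources/hour:
# sipp_staffing([0.1, 15, 0.2], service_rate=5.0) -> [1, 5, 1]
```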

View file

@ -1,28 +0,0 @@
---
type: source
title: "Modeling and Simulation of Nonstationary Non-Poisson Arrival Processes"
author: "Yunan Liu et al. (NC State)"
url: https://yunanliu.wordpress.ncsu.edu/files/2019/11/CIATApublished.pdf
date: 2019-01-01
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, stochastic-modeling, non-stationary-arrivals, MMPP, batch-arrivals]
---
# Modeling and Simulation of Nonstationary Non-Poisson Arrival Processes
Introduces the CIATA (Combined Inversion-and-Thinning Approach) method for modeling nonstationary non-Poisson processes characterized by a rate function, mean-value function, and asymptotic variance-to-mean (dispersion) ratio.
## Key Content
- Standard Poisson process assumptions break down when arrivals are bursty or correlated
- CIATA models target arrival processes via rate function + dispersion ratio — captures both time-varying intensity and burstiness
- The Markov-MECO process (a Markovian arrival process / MAP) models interarrival times as absorption times of a continuous-time Markov chain
- Markov-Modulated Poisson Process (MMPP): arrival rate switches between states governed by a hidden Markov chain — natural model for "bursty then quiet" patterns
- Key finding: replacing a time-varying arrival rate with a constant (max or average) leads to systems being badly understaffed or overstaffed
- Congestion measures are increasing functions of arrival process variability — more bursty = more capacity needed
## Relevance to Teleo Pipeline
Our arrival process is textbook MMPP: there's a hidden state (research session happening vs. quiet period) that governs the arrival rate. During research sessions, sources arrive in bursts of 10-20. During quiet periods, maybe 0-2 per day. The MMPP framework models this directly and gives us tools to size capacity for the mixture of states rather than the average.
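A two-state MMPP is small enough to simulate directly. All rates and switching probabilities below are illustrative guesses, not fitted to our logs:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; fine for the small rates used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_mmpp(hours, lam_quiet=0.1, lam_burst=8.0,
                  p_leave_quiet=0.05, p_leave_burst=0.5, seed=7):
    """Two-state MMPP: a hidden state (quiet vs. research burst)
    modulates the hourly Poisson arrival rate. Returns arrivals/hour."""
    rng = random.Random(seed)
    in_burst, arrivals = False, []
    for _ in range(hours):
        arrivals.append(poisson(rng, lam_burst if in_burst else lam_quiet))
        p_leave = p_leave_burst if in_burst else p_leave_quiet
        if rng.random() < p_leave:
            in_burst = not in_burst
    return arrivals
```

Feeding traces like this into a staffing policy tests capacity against the mixture of states rather than the misleading average rate.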

View file

@ -1,29 +0,0 @@
---
type: source
title: "What You Should Know About Queueing Models"
author: "Ward Whitt (Columbia University)"
url: https://www.columbia.edu/~ww2040/shorter041907.pdf
date: 2019-04-19
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, queueing-theory, square-root-staffing, Halfin-Whitt]
---
# What You Should Know About Queueing Models
Practitioner-oriented guide by Ward Whitt (Columbia), one of the founders of modern queueing theory for service systems. Covers the essential queueing models practitioners need and introduces the Halfin-Whitt heavy-traffic regime.
## Key Content
- Square-root staffing principle: optimal server count = base load + β√(base load), where β is a quality-of-service parameter
- The Halfin-Whitt (QED) regime: systems operate near full utilization while keeping delays manageable — utilization approaches 1 at rate Θ(1/√n) as servers n grow
- Economies of scale in multi-server systems: larger systems need proportionally fewer excess servers
- Practical formulas for determining server counts given arrival rates and service level targets
- Erlang C formula as the workhorse for staffing calculations
## Relevance to Teleo Pipeline
The square-root staffing rule is directly applicable: if our base load requires R workers at full utilization, we should provision R + β√R workers where β ≈ 1-2 depending on target service level. For our scale (~8 sources/cycle, ~5 min service time), this gives concrete worker count guidance.
Critical insight: you don't need to match peak load with workers. The square-root safety margin handles variance efficiently. Over-provisioning for peak is wasteful; under-provisioning for average causes queue explosion. The sweet spot is the QED regime.
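The Erlang C workhorse codes directly; the 20% wait-probability target and the offered load in the usage note are illustrative choices:

```python
import math

def erlang_c(servers, offered_load):
    """Erlang C: probability an arriving job must wait in an M/M/s queue."""
    if offered_load >= servers:
        return 1.0  # unstable: everyone waits
    s, a = servers, offered_load
    head = sum(a ** k / math.factorial(k) for k in range(s))
    tail = a ** s / math.factorial(s) * s / (s - a)
    return tail / (head + tail)

def min_servers(offered_load, max_wait_prob=0.2):
    """Smallest server count meeting the wait-probability target."""
    s = math.ceil(offered_load) + 1
    while erlang_c(s, offered_load) > max_wait_prob:
        s += 1
    return s
```

With an illustrative offered load R = 6, `min_servers(6.0)` lands at 9 workers, which matches R + beta*sqrt(R) with beta near 1.2, consistent with the square-root rule above.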

View file

@ -1,29 +0,0 @@
---
type: source
title: "An Overview for Markov Decision Processes in Queues and Networks"
author: "Quan-Lin Li, Jing-Yu Ma, Rui-Na Fan, Li Xia"
url: https://arxiv.org/abs/1907.10243
date: 2019-07-24
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, markov-decision-process, queueing-theory, dynamic-programming]
---
# An Overview for Markov Decision Processes in Queues and Networks
Comprehensive 42-page survey of MDP applications in queueing systems, covering 60+ years of research from the 1960s to present.
## Key Content
- Continuous-time MDPs for queue management: decisions happen at state transitions (arrivals, departures)
- Classic results: optimal policies often have threshold structure — "serve if queue > K, idle if queue < K"
- For multi-server systems: optimal admission and routing policies are often simple (join-shortest-queue, threshold-based)
- Dynamic programming and stochastic optimization provide tools for deriving optimal policies
- Key challenge: curse of dimensionality — state space explodes with multiple queues/stages
- Practical approaches: approximate dynamic programming, reinforcement learning for large state spaces
- Emerging direction: deep RL for queue management in networks and cloud computing
## Relevance to Teleo Pipeline
Our pipeline has a manageable state space (queue depths across 3 stages, worker counts, time-of-day) — small enough for exact MDP solution via value iteration. The survey confirms that optimal policies for our type of system typically have threshold structure: "if queue > X and workers < Y, spawn a worker." This means even without solving the full MDP, a well-tuned threshold policy will be near-optimal.
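A toy value-iteration sketch makes the threshold structure visible. Every parameter here is hypothetical (a single queue, Bernoulli arrivals, one job per worker per step), not our real dynamics:

```python
def solve_policy(max_queue=10, max_workers=3, p_arrival=0.6,
                 worker_cost=2.0, hold_cost=1.0, gamma=0.95, iters=500):
    """Value iteration on a toy queue MDP.
    State: queue depth. Action: active workers (each finishes one job/step).
    Cost per step: worker cost plus holding cost per queued job."""
    def q_value(s, a, V):
        base = s - min(s, a)  # jobs left after service this step
        cost = worker_cost * a + hold_cost * s
        nxt = (p_arrival * V[min(base + 1, max_queue)]
               + (1 - p_arrival) * V[base])
        return cost + gamma * nxt

    V = [0.0] * (max_queue + 1)
    for _ in range(iters):
        V = [min(q_value(s, a, V) for a in range(max_workers + 1))
             for s in range(max_queue + 1)]
    # Greedy policy w.r.t. the converged value function
    return [min(range(max_workers + 1), key=lambda a: q_value(s, a, V))
            for s in range(max_queue + 1)]

policy = solve_policy()
print(policy)  # monotone in queue depth: more workers as backlog grows
```

With these costs the extracted policy is nondecreasing in queue depth, illustrating the survey's point: the answer to "when to spawn a worker" collapses to a few thresholds.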

View file

@ -1,30 +0,0 @@
---
type: source
title: "Optimal Control Policies for Resource Allocation in the Cloud: Comparison Between Markov Decision Process and Heuristic Approaches"
author: "Thomas Tournaire, Hind Castel-Taleb, Emmanuel Hyon"
url: https://arxiv.org/abs/2104.14879
date: 2021-04-30
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, markov-decision-process, cloud-autoscaling, optimal-control]
---
# Optimal Control Policies for Resource Allocation in the Cloud
Compares MDP-based optimal scaling policies against heuristic approaches for cloud auto-scaling. The MDP formulation treats VM provisioning as a sequential decision problem.
## Key Content
- Auto-scaling problem: VMs turned on/off based on queue occupation to minimize combined energy + performance cost
- MDP formulation: states = queue lengths + active VMs, actions = add/remove VMs, rewards = negative cost (energy + SLA violations)
- Value iteration and policy iteration algorithms find optimal threshold policies
- Structured MDP algorithms incorporating hysteresis properties outperform heuristics in both execution time and accuracy
- Hysteresis: different thresholds for scaling up vs. scaling down — prevents oscillation (e.g., scale up at queue=10, scale down at queue=3)
- MDP algorithms find optimal hysteresis thresholds automatically
## Relevance to Teleo Pipeline
The MDP formulation maps directly: states = (unprocessed queue, in-flight extractions, open PRs, active workers), actions = (spawn worker, kill worker, wait), cost = (Claude compute cost per worker-minute + delay cost per queued source). The hysteresis insight is particularly valuable — we should have different thresholds for spinning up vs. spinning down workers to prevent oscillation.
Key finding: structured MDP with hysteresis outperforms simple threshold heuristics. But even simple threshold policies (scale up at queue=N, scale down at queue=M where M < N) perform reasonably well.
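The hysteresis idea fits in a few lines. A sketch with hypothetical thresholds (the optimal values would come out of the MDP, or from tuning):

```python
def hysteresis_step(queue_depth: int, workers: int,
                    up_at: int = 10, down_at: int = 3,
                    max_workers: int = 6) -> int:
    """Asymmetric thresholds: scale up at queue >= up_at, scale down at
    queue <= down_at. The dead band between them leaves the worker count
    unchanged, preventing the oscillation a single threshold would cause."""
    if queue_depth >= up_at and workers < max_workers:
        return workers + 1
    if queue_depth <= down_at and workers > 0:
        return workers - 1
    return workers
```

A queue bouncing between 5 and 8 triggers no scaling at all under this policy, which is precisely the anti-oscillation property the paper proves optimal thresholds have.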

View file

@ -1,30 +0,0 @@
---
type: source
title: "AIMD Scheduling and Resource Allocation in Distributed Computing Systems"
author: "Vlahakis, Athanasopoulos et al."
url: https://arxiv.org/abs/2109.02589
date: 2021-09-06
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, AIMD, distributed-computing, resource-allocation, congestion-control]
---
# AIMD Scheduling and Resource Allocation in Distributed Computing Systems
Applies TCP's AIMD (Additive Increase Multiplicative Decrease) congestion control to distributed computing resource allocation — scheduling incoming requests across computing nodes.
## Key Content
- Models distributed system as multi-queue scheme with computing nodes
- Proposes AIMD-like admission control: stable irrespective of total node count and AIMD parameters
- Key insight: congestion control in networks and worker scaling in compute pipelines are the same problem — matching producer rate to consumer capacity
- Decentralized resource allocation using nonlinear state feedback achieves global convergence to a bounded set in finite time
- Connects to QoS via Little's Law: local queuing time calculable from simple formula
- AIMD is proven optimal for fair allocation of shared resources among competing agents without centralized control
## Relevance to Teleo Pipeline
AIMD provides an elegant scaling policy: when queue is shrinking (system healthy), add workers linearly (e.g., +1 per cycle). When queue is growing (system overloaded), cut workers multiplicatively (e.g., halve them). This is self-correcting, proven stable, and doesn't require predicting load — it reacts to observed queue state.
The TCP analogy is precise: our pipeline "bandwidth" is eval throughput. When extract produces faster than eval can consume, we need backpressure (slow extraction) or scale-up (more eval workers). AIMD handles this naturally.
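The mapping above, sketched directly; the +1 step size, halving factor, and caps are assumptions, not the paper's parameters:

```python
def aimd_workers(workers: int, queue_now: int, queue_prev: int,
                 max_workers: int = 6, min_workers: int = 1) -> int:
    """AIMD on the worker pool: additive increase while the queue is
    shrinking or flat, multiplicative decrease when it is growing."""
    if queue_now > queue_prev:               # backlog growing: overload signal
        return max(min_workers, workers // 2)
    return min(max_workers, workers + 1)     # healthy: probe for throughput
```

Like TCP, the policy needs no load forecast — it only compares two consecutive queue observations, and the multiplicative cut guarantees it backs off faster than it ramps up.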

View file

@ -7,14 +7,9 @@ date: 2022-03-09
domain: health
secondary_domains: []
format: report
status: null-result
status: unprocessed
priority: high
tags: [costa-rica, ebais, primary-health-care, international-comparison, spending-efficiency, blue-zone]
processed_by: vida
processed_date: 2026-03-11
enrichments_applied: ["medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm.md", "pace-demonstrates-integrated-care-averts-institutionalization-through-community-based-delivery-not-cost-reduction.md", "the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Two new claims extracted: (1) Costa Rica as proof that prevention-first primary care at national scale achieves peer outcomes at fraction of US cost, (2) geographic empanelment as the structural mechanism enabling population health management. Three enrichments: extends the 10-20% medical care claim with strongest international counterfactual, extends PACE claim with national-scale comparison, confirms healthcare attractor state but challenges whether technology is prerequisite vs accelerant. Key insight: EBAIS-PACE comparison reveals same clinical model, wildly different scale — difference is political economy not care design."
---
## Content
@ -63,12 +58,3 @@ extraction_notes: "Two new claims extracted: (1) Costa Rica as proof that preven
PRIMARY CONNECTION: [[medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm]]
WHY ARCHIVED: First international health system deep-dive in the KB. Costa Rica is the strongest counterfactual to US healthcare spending.
EXTRACTION HINT: The EBAIS-PACE comparison is where the real insight lives. Same model, same concept — wildly different scale. What's different? Political economy, not clinical design.
## Key Facts
- Costa Rica life expectancy: 81.5 years (female), 76.7 years (male) — second in Americas
- Costa Rica healthcare spending: roughly 1/10 of US per-capita spending, below world average as % of income
- EBAIS introduced 1994, covers 5 million population
- EBAIS team composition: doctor, nurse, technical assistant, medical clerk, pharmacist
- EBAIS districts show 8% lower child mortality, 2% lower adult mortality, 14% decline in communicable disease deaths
- Nicoya Peninsula is one of 5 global Blue Zones, but Costa Rica's health outcomes are national not regional

View file

@ -1,29 +0,0 @@
---
type: source
title: "Using Little's Law to Scale Applications"
author: "Dan Slimmon"
url: https://blog.danslimmon.com/2022/06/07/using-littles-law-to-scale-applications/
date: 2022-06-07
domain: internet-finance
format: essay
status: unprocessed
tags: [pipeline-architecture, operations-research, queueing-theory, littles-law, capacity-planning]
---
# Using Little's Law to Scale Applications
Practitioner guide showing how Little's Law (L = λW) provides a simple but powerful tool for capacity planning in real systems.
## Key Content
- Little's Law: L = λW where L = average items in system, λ = arrival rate, W = average time per item
- Rearranged for capacity: (total worker threads) ≥ (arrival rate)(average processing time)
- Practical example: 1000 req/s × 0.34s = 340 concurrent requests needed
- Important caveat: Little's Law gives long-term averages only — real systems need buffer capacity beyond the theoretical minimum to handle variance
- The formula guides capacity planning but isn't a complete scaling solution — it's the floor, not the ceiling
## Relevance to Teleo Pipeline
Direct application: if we process ~8 sources per extraction cycle (every 5 min) and each takes ~10-15 min of Claude compute, Little's Law says L = (8/300s) × 750s ≈ 20 sources in-flight at steady state. With 6 workers, each handles ~3.3 sources concurrently — which means we need the workers to pipeline or we'll have queue buildup.
More practically: λ = average sources per second, W = average extraction time. Total workers needed ≥ λ × W. This gives us the minimum worker floor. The square-root staffing rule gives us the safety margin above that floor.
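The arithmetic in one line, assuming the cycle figures quoted above:

```python
def littles_law_floor(arrival_rate_per_s: float, service_time_s: float) -> float:
    """L = lambda * W: minimum steady-state concurrency.
    This is the floor, not the provisioning target."""
    return arrival_rate_per_s * service_time_s

# 8 sources per 300 s dispatch cycle, ~750 s (12.5 min) extraction each
print(littles_law_floor(8 / 300, 750))  # ≈ 20 sources in flight
```

Plugging in the burst rate rather than the long-run average overstates the floor, which is why the law pairs naturally with the square-root staffing margin rather than replacing it.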

View file

@ -1,29 +0,0 @@
---
type: source
title: "The Flexible Job Shop Scheduling Problem: A Review"
author: "ScienceDirect review article"
url: https://www.sciencedirect.com/science/article/pii/S037722172300382X
date: 2023-01-01
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, combinatorial-optimization, job-shop-scheduling, flexible-scheduling]
---
# The Flexible Job Shop Scheduling Problem: A Review
Comprehensive review of the Flexible Job Shop Scheduling Problem (FJSP) — a generalization of classical JSSP where operations can be processed on any machine from a set of eligible machines.
## Key Content
- Classical Job Shop Scheduling Problem (JSSP): n jobs, m machines, fixed operation-to-machine mapping, NP-complete for m > 2
- Flexible JSSP (FJSP): operations can run on any eligible machine — adds machine assignment as a decision variable
- Flow-shop: all jobs follow the same machine order (our pipeline: research → extract → eval)
- Job-shop: jobs can have different machine orders (not our case)
- Hybrid flow-shop: multiple machines at each stage, jobs follow same stage order but can use any machine within a stage (THIS is our model)
- Solution approaches: metaheuristics (genetic algorithms, simulated annealing, tabu search) dominate for NP-hard instances
- Recent trend: multi-agent reinforcement learning for dynamic scheduling with worker heterogeneity and uncertainty
## Relevance to Teleo Pipeline
Our pipeline is a **hybrid flow-shop**: three stages (research → extract → eval), multiple workers at each stage, all sources flow through the same stage sequence. This is computationally easier than general JSSP. Key insight: for a hybrid flow-shop with relatively few stages and homogeneous workers within each stage, simple priority dispatching rules (shortest-job-first, FIFO within priority classes) perform within 5-10% of optimal. We don't need metaheuristics — we need good dispatching rules.
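A minimal FIFO-within-priority-class dispatcher, mirroring the triage order the pipeline already uses; the priority numbering and job tuples are hypothetical:

```python
import heapq

def dispatch(jobs, free_workers):
    """Pop up to free_workers jobs ordered by (priority, arrival_seq).
    Priority 0 = infra PRs, 1 = re-review, 2 = fresh; FIFO within each class
    because the arrival sequence number breaks ties."""
    heap = list(jobs)           # items: (priority, arrival_seq, job_id)
    heapq.heapify(heap)
    n = min(free_workers, len(heap))
    return [heapq.heappop(heap)[2] for _ in range(n)]

queue = [(2, 1, "fresh-a"), (0, 2, "infra-b"), (1, 3, "rereview-c")]
print(dispatch(queue, 2))  # ['infra-b', 'rereview-c']
```

This is the whole scheduler: for a hybrid flow-shop of our shape, the review's 5-10%-of-optimal claim says anything fancier buys very little.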

View file

@ -1,29 +0,0 @@
---
type: source
title: "What Is Backpressure"
author: "Dagster"
url: https://dagster.io/glossary/data-backpressure
date: 2024-01-01
domain: internet-finance
format: essay
status: unprocessed
tags: [pipeline-architecture, backpressure, data-pipelines, flow-control]
---
# What Is Backpressure (Dagster)
Dagster's practical guide to backpressure in data pipelines. Written for practitioners building real data processing systems.
## Key Content
- Backpressure: feedback mechanism preventing data producers from overwhelming consumers
- Without backpressure controls: data loss, crashes, resource exhaustion
- Consumer signals producer about capacity limits
- Implementation strategies: buffering (with threshold triggers), rate limiting, dynamic adjustment, acknowledgment-based flow
- Systems using backpressure: Apache Kafka (pull-based consumption), Flink, Spark Streaming, Akka Streams, Project Reactor
- Tradeoff: backpressure introduces latency but prevents catastrophic failure
- Key principle: design backpressure into the system from the start
## Relevance to Teleo Pipeline
Our pipeline has zero backpressure today. The extract-cron.sh checks for unprocessed sources and dispatches workers regardless of eval queue state. If extraction outruns evaluation, PRs accumulate with no feedback signal. Simple fix: extraction dispatcher should check open PR count before dispatching. If open PRs > threshold, reduce extraction parallelism or skip the cycle.
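The "simple fix" as a sketch; the soft and hard limits are assumptions to tune, not measured thresholds:

```python
def extract_parallelism(open_prs: int, base_workers: int = 6,
                        soft_limit: int = 10, hard_limit: int = 20) -> int:
    """Backpressure from eval to extract: throttle extraction dispatch when
    the downstream PR queue backs up; skip the cycle past the hard limit."""
    if open_prs >= hard_limit:
        return 0                          # skip this cycle entirely
    if open_prs >= soft_limit:
        return max(1, base_workers // 2)  # degrade gracefully
    return base_workers
```

Dropping this check into extract-cron.sh's dispatch step would give the pipeline its first producer-to-consumer feedback signal.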

View file

@ -6,14 +6,9 @@ url: "https://www.futard.io/proposal/Dn638yPirR3e2UNNECpLNJApDhxsjhJTAv9uEd9LBVV
date: 2024-02-26
domain: internet-finance
format: data
status: processed
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
processed_by: rio
processed_date: 2026-03-11
enrichments_applied: ["MetaDAOs Autocrat program implements futarchy through conditional token markets where proposals create parallel pass and fail universes settled by time-weighted average price over a three-day window.md", "MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Proposal 10 is primarily operational/treasury management with no novel mechanism claims. The Dutch auction was manually executed (not programmatic), making it a governance case study rather than a mechanism innovation. Extracted as decision_market entity with enrichments to existing futarchy implementation claims. The sealed-bid multisig compensation structure (0-0.25 META) provides evidence for limited trading volume in uncontested decisions."
---
## Proposal Details
@ -121,12 +116,3 @@ This proposal will significantly increase Meta DAO's protocol-owned liquidity as
- Autocrat version: 0.1
- Completed: 2024-03-02
- Ended: 2024-03-02
## Key Facts
- MetaDAO Proposal 10 requested 3,005.45 total META (1,000 to sell, 2,000 for liquidity pairing, 5.45 compensation)
- Multisig address: LMRVapqnn1LEwKaD8PzYEs4i37whTgeVS41qKqyn1wi (3/5 threshold)
- Multisig members: Durden (91NjPFfJxQw2FRJvyuQUQsdh9mBGPeGPuNavt7nMLTQj), Ben H (Hu8qped4Cj7gQ3ChfZvZYrtgy2Ntr6YzfN7vwMZ2SWii), Nico (6kDGqrP4Wwqe5KBa9zTrgUFykVsv4YhZPDEX22kUsDMP), joebuild (XXXvLz1B89UtcTsg2hT3cL9qUJi5PqEEBTHg57MfNkZ), Dodecahedr0x (UuGEwN9aeh676ufphbavfssWVxH7BJCqacq1RYhco8e)
- Dutch auction mechanics: start 50% above spot, lower 5% every 24h if >6% above spot, new asks at 10% above spot when filled
- Liquidity destination: Meteora 4% fee pool initially, then consolidated to 1% fee pool
- DAO account: 7J5yieabpMoiN3LrdfJnRjQiXHgi7f47UuMnyMyR78yy

View file

@ -1,40 +0,0 @@
---
type: source
title: "Effects of Semaglutide on Chronic Kidney Disease in Patients with Type 2 Diabetes (FLOW Trial)"
author: "New England Journal of Medicine"
url: https://www.nejm.org/doi/abs/10.1056/NEJMoa2403347
date: 2024-05-29
domain: health
secondary_domains: []
format: paper
status: unprocessed
priority: high
tags: [glp-1, semaglutide, CKD, kidney-disease, FLOW-trial, organ-protection]
---
## Content
The FLOW trial — the first dedicated kidney outcomes trial with a GLP-1 receptor agonist. N=3,533 patients with type 2 diabetes and chronic kidney disease randomized to semaglutide vs. placebo. Median follow-up 3.4 years (stopped early at prespecified interim analysis due to efficacy).
Key findings:
- Primary composite endpoint (major kidney disease events): 24% lower risk with semaglutide (HR 0.76; P=0.0003)
- Kidney-specific components: HR 0.79 (95% CI 0.66-0.94)
- Cardiovascular death: HR 0.71 (95% CI 0.56-0.89) — 29% reduction
- Major cardiovascular events: 18% lower risk
- Annual eGFR slope less steep by 1.16 mL/min/1.73m2 in the semaglutide group (P<0.001), indicating slower kidney function decline
- FDA subsequently expanded semaglutide (Ozempic) indications to include T2D patients with CKD
Additive benefits when used with SGLT2 inhibitors (separate analysis in Nature Medicine).
## Agent Notes
**Why this matters:** CKD is among the most expensive chronic conditions to manage, with dialysis costing $90K+/year per patient. Slowing kidney decline by 1.16 mL/min/1.73m2 annually could delay or prevent dialysis for many patients. This is where the downstream savings argument for GLP-1s is strongest — preventing progression to end-stage renal disease has massive cost implications.
**What surprised me:** The trial was stopped early for efficacy — the effect was so large that continuing would have been unethical. The 29% reduction in cardiovascular death (in a kidney trial!) suggests these benefits are even broader than expected.
**What I expected but didn't find:** No cost-effectiveness analysis within this paper. No comparison of cost of semaglutide vs. cost of delayed dialysis. The economic case needs to be constructed separately.
**KB connections:** Connects to Value in Health Medicare study (CKD savings component = $2,074/subject). Also connects to the multi-indication benefit thesis — GLP-1s working across CV, metabolic, kidney, and liver simultaneously.
**Extraction hints:** Potential claim: "Semaglutide reduces kidney disease progression by 24% and delays dialysis onset, creating the largest per-patient cost savings of any GLP-1 indication because dialysis costs $90K+/year."
**Context:** NEJM publication — highest evidence tier. First GLP-1 to get FDA indication for CKD in T2D patients. This is a foundational trial for the multi-organ benefit thesis.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: Kidney protection is where GLP-1 downstream savings are largest per-patient — dialysis prevention is the economic mechanism most favorable to the VBC cost-saving thesis
EXTRACTION HINT: Focus on the economic implications of slowed kidney decline for capitated payers, not just the clinical endpoint

View file

@ -6,13 +6,9 @@ url: "https://www.futard.io/proposal/G95shxDXSSTcgi2DTJ2h79JCefVNQPm8dFeDzx7qZ2k
date: 2024-07-01
domain: internet-finance
format: data
status: processed
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Proposal document with detailed vendor pitch and deliverables. Created entity for Artemis Labs (new company) and decision_market entity for the failed proposal. Updated Drift timeline. No extractable claims — this is purely factual governance data about a vendor proposal that failed. The proposal contains standard analytics deliverables without novel mechanism insights."
---
## Proposal Details
@ -200,14 +196,3 @@ We ultimately think that we are providing a unique service and we want to build
- Autocrat version: 0.3
- Completed: 2024-07-05
- Ended: 2024-07-05
## Key Facts
- Artemis Labs serves institutional investors including Grayscale, Franklin Templeton, VanEck
- Artemis Labs serves liquid token funds including Pantera Capital, Modular Capital, CoinFund
- Artemis Labs has 20K+ Twitter followers and 20K+ newsletter subscribers
- Artemis Labs team includes engineers from Venmo, Messari, Coinbase, Facebook
- Artemis Labs team includes finance professionals from Holocene, Carlyle Group, BlackRock, Whale Rock
- Artemis Labs became open source in early 2024
- Drift Protocol's public S3 datalake refreshes every 24 hours
- Artemis proposed 6-hour data refresh intervals for Drift metrics

View file

@ -6,13 +6,7 @@ url: "https://www.futard.io/proposal/BU8kQ7ECq8CJ9BHUZfYsjHFKPMGsF6oJn5d6b1tArdw
date: 2024-07-18
domain: internet-finance
format: data
status: processed
processed_by: rio
processed_date: 2026-03-12
claims_extracted:
- "SPL-404-enables-fungible-NFT-swap-revenue-for-DAOs-by-bridging-governance-tokens-and-NFT-liquidity-on-Solana"
- "futarchy-markets-can-price-cultural-spending-proposals-by-treating-community-cohesion-and-brand-equity-as-token-price-inputs"
enrichments: []
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
---

View file

@ -1,51 +0,0 @@
---
type: source
title: "Real-world Persistence and Adherence to GLP-1 RAs Among Obese Commercially Insured Adults Without Diabetes"
author: "Journal of Managed Care & Specialty Pharmacy"
url: https://www.jmcp.org/doi/10.18553/jmcp.2024.23332
date: 2024-08-01
domain: health
secondary_domains: []
format: paper
status: unprocessed
priority: high
tags: [glp-1, adherence, persistence, discontinuation, real-world-evidence, obesity]
---
## Content
Real-world claims study of 125,474 patients initiating GLP-1 RAs for obesity (without type 2 diabetes) using commercial insurance data.
**Persistence rates (non-diabetic obesity patients):**
- 180 days: 46.3%
- 1 year: 32.3%
- 2 years: ~15%
**By specific drug:**
- Semaglutide: 47.1% at 1 year (highest)
- Liraglutide: 19.2% at 1 year (lowest)
**Comparison with diabetic patients:**
- Diabetic patients: 46.5% discontinue within 1 year (better than non-diabetic 64.8%)
- Danish registry: 21.2% discontinue within 12 months for T2D; ~70% discontinue within 2 years
**Key factors associated with discontinuation:**
- Insufficient weight loss
- Income level (lower income → higher discontinuation)
- Adverse events (GI side effects)
- Insurance coverage changes
**Crucial nuance:** Outcomes approach trial-level results when focusing on highly adherent patients. The adherence problem is not that the drugs don't work — it's that most patients don't stay on them.
## Agent Notes
**Why this matters:** Adherence is THE binding constraint for the GLP-1 economic thesis. If only 32.3% of non-diabetic patients are still on GLP-1s at 1 year and ~15% at 2 years, the downstream savings that justify the cost never materialize for most patients. Under capitation, an MA plan pays for 12 months of GLP-1 ($2,940 at $245/month) for a patient who discontinues and regains weight — net cost with no benefit.
**What surprised me:** The drug-specific variation is large — semaglutide at 47.1% vs. liraglutide at 19.2%. Oral formulations may change this further (removing injection barrier). The income correlation suggests access/affordability drives discontinuation as much as clinical factors.
**What I expected but didn't find:** No analysis of how payment model affects persistence. Does being in an MA plan with care coordination improve adherence vs. FFS? No data on whether lifestyle interventions alongside medication improve persistence (directly relevant to BALANCE model design).
**KB connections:** The existing GLP-1 claim cites 64.8% non-diabetic discontinuation at 1 year. This source provides the full persistence curve and the crucial 2-year data (15%).
**Extraction hints:** The extractor should consider: "GLP-1 persistence at 2 years is only 15% for non-diabetic obesity patients, meaning the chronic use model fails not because patients choose indefinite use but because most cannot sustain it." This reframes the "inflationary chronic use" concern — the actual problem may be insufficient chronic use.
**Context:** Commercial insurance population — different from Medicare (younger, fewer comorbidities). Medicare population may have different persistence patterns due to higher disease burden and stronger clinical indications.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: The persistence data reframes the economic argument — the "chronic use" problem may actually be an "insufficient persistence" problem. Most patients don't stay on long enough for downstream benefits to materialize.
EXTRACTION HINT: Focus on the paradox: chronic use makes GLP-1s expensive, but discontinuation eliminates the downstream savings that justify the cost. The economics only work if adherence is sustained AND the payer captures downstream savings.

View file

@ -6,13 +6,9 @@ url: "https://www.futard.io/proposal/8SwPfzKhaZ2SQfgfJYfeVRTXALZs2qyFj7kX1dEkd29
date: 2024-10-10
domain: internet-finance
format: data
status: processed
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Governance proposal with detailed treasury management framework. Created decision_market entity for the proposal and updated parent entity timeline. No novel claims - this is operational governance implementing existing futarchy mechanisms. Risk scoring framework is specific to this DAO's treasury management, not a general claim about futarchy design."
---
## Proposal Details
Target $DEAN Price: 0.005383 USDC
- Autocrat version: 0.3
- Completed: 2024-10-14
- Ended: 2024-10-14
## Key Facts
- IslandDAO treasury proposal passed 2024-10-14 with 3% TWAP requirement (523k to 539k USDC MCAP)
- Risk scoring formula weights: Volatility 0.4, Liquidity 0.2, Market Cap 0.3, Drawdown 0.1
- Treasury manager performance fee: 5% of quarterly profit with 3-month vesting
- Target $DEAN price: 0.005383 USDC (from 0.005227 USDC)
- Portfolio allocation: 80% safe assets (RS >= 0.5), 20% risky assets (RS <= 0.5)

View file

@ -1,47 +0,0 @@
---
type: source
title: "Medicare Coverage of Anti-Obesity Medications: Clinical and Budget Impact Analysis"
author: "ASPE (Office of the Assistant Secretary for Planning and Evaluation)"
url: https://aspe.hhs.gov/sites/default/files/documents/127bd5b3347b34be31ac5c6b5ed30e6a/medicare-coverage-anti-obesity-meds.pdf
date: 2024-11-01
domain: health
secondary_domains: [internet-finance]
format: policy
status: unprocessed
priority: medium
tags: [glp-1, medicare, obesity, budget-impact, CBO, federal-spending]
---
## Content
ASPE issue brief analyzing the clinical benefits and fiscal impact of expanded Medicare coverage for anti-obesity medications.
**Key budget projections:**
- CBO estimate: Authorizing Medicare coverage for obesity medications would increase federal spending by $35 billion over 2026-2034
- Annual Part D cost increase: $3.1-6.1 billion
- Broad semaglutide access: 38,950 CV events avoided, 6,180 deaths avoided over 10 years
- Net financial impact: savings of $715 million over 10 years (alternative scenarios: $412M to $1.04B)
**Eligibility estimates:**
- ~10% of Medicare beneficiaries eligible under proposed criteria
- Criteria require comorbidities (CVD history, heart failure, CKD, prediabetes) — not just BMI
**The CBO vs. ASPE divergence:**
- CBO: $35B additional spending (budget scoring perspective — counts drug costs without full downstream offsets)
- ASPE/Value in Health: net savings of $715M (clinical economics perspective — includes downstream event avoidance)
- The difference is methodological: CBO scores within a 10-year budget window using conservative assumptions about uptake and downstream savings
## Agent Notes
**Why this matters:** The CBO vs. ASPE divergence is the core of the GLP-1 budget debate. CBO says "$35B more spending" and ASPE says "$715M savings" — both are technically correct but answer different questions. Budget scoring (CBO) doesn't fully count avoided hospitalizations and disease progression. Clinical economics (ASPE) does. This methodological difference drives the entire political debate about whether Medicare should cover GLP-1s.
**What surprised me:** The gap between CBO and ASPE is enormous — $35B cost vs. $715M savings. This isn't a minor methodological difference; it's a fundamentally different answer to "are GLP-1s worth covering?" The budget scoring rules structurally disadvantage preventive interventions.
**What I expected but didn't find:** No analysis of how the budget scoring methodology systematically undercounts prevention value. No comparison with other preventive interventions that face the same scoring bias.
**KB connections:** Connects to the structural misalignment thesis — the tools used to evaluate healthcare policy (CBO scoring) are themselves misaligned with prevention economics. Also relates to [[proxy inertia is the most reliable predictor of incumbent failure because current profitability rationally discourages pursuit of viable futures]] — budget scoring rules are a form of institutional proxy inertia.
**Extraction hints:** Potential meta-claim: "Federal budget scoring methodology systematically undervalues preventive interventions because the 10-year scoring window and conservative uptake assumptions don't capture long-term downstream savings."
**Context:** ASPE is the research arm of HHS — more favorable to coverage expansion than CBO, which is Congress's nonpartisan scorekeeper. The political weight of CBO scoring often overrides clinical economics in policy decisions.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[the healthcare cost curve bends up through 2035 because new curative and screening capabilities create more treatable conditions faster than prices decline]]
WHY ARCHIVED: The CBO vs. ASPE divergence reveals a systematic bias in how prevention economics are evaluated at the federal level — this matters beyond GLP-1s for the entire prevention-first thesis
EXTRACTION HINT: Focus on the methodological divergence as evidence of structural misalignment in policy evaluation, not just the GLP-1 budget numbers
flagged_for_leo: ["Budget scoring methodology systematically disadvantages prevention — this is a cross-domain structural problem affecting all preventive health investments"]

View file

@ -6,17 +6,13 @@ url: "https://www.futard.io/proposal/B4zpF4iHeF91qq8Szb9aD6pW1DrwSy6djD4QPWJQn3d
date: 2024-11-21
domain: internet-finance
format: data
status: null-result
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
processed_by: rio
processed_date: 2024-11-21
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "This source contains only metadata about a failed MetaDAO proposal with no content details. There is no proposal text, no market data, no voting information, and no context about what was being proposed or why it failed. The source provides verifiable facts (proposal number, accounts, dates, status) but no evidence or interpretation that could support claims or enrich existing knowledge base content. Without knowing what Proposal #14 actually proposed or how the futarchy markets evaluated it, there is nothing extractable beyond the basic facts preserved in key_facts."
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Source contains only metadata about a failed MetaDAO proposal with no content details. Created decision_market entity for archival completeness and timeline entry on parent MetaDAO entity. No extractable claims or enrichments due to absence of proposal text, market data, or context about what was proposed or why it failed."
---
## Proposal Details
@ -43,10 +39,3 @@ extraction_notes: "Source contains only metadata about a failed MetaDAO proposal
- DAO account: GWywkp2mY2vzAaLydR2MBXRCqk2vBTyvtVRioujxi5Ce
- Proposer: xwQTt7R68Vsxco819EBqK3itgn9osQc6M2Z1DjwUqmk
- Autocrat version: 0.3
## Key Facts
- MetaDAO Proposal #14 failed (created 2024-11-21, completed 2024-11-25)
- Proposal account: B4zpF4iHeF91qq8Szb9aD6pW1DrwSy6djD4QPWJQn3dW
- Proposer: xwQTt7R68Vsxco819EBqK3itgn9osQc6M2Z1DjwUqmk
- Autocrat version: 0.3

View file

@ -6,13 +6,9 @@ url: "https://www.futard.io/proposal/GBQZvZAeW8xUuVV5a9FJHSyttzY5fPGuvkwLTpWLbw6
date: 2024-12-04
domain: internet-finance
format: data
status: null-result
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Governance proposal with clear outcome but no novel mechanism insights. Entity extraction only - no claims warranted. ORE entity may not exist in KB; if missing, this timeline entry will need parent entity creation during review."
---
## Proposal Details
@ -61,10 +57,3 @@ With the passing of this proposal, we would launch a USDC-ORE vault on Kamino an
- Autocrat version: 0.3
- Completed: 2024-12-07
- Ended: 2024-12-07
## Key Facts
- ORE proposal #3 passed on 2024-12-07 after 3-day voting period
- USDC described as 'fully-backed by dollars and treasuries held in US banks by Circle'
- ORE mission statement: 'create the best digital gold product in crypto'
- Proposal used Autocrat v0.3 futarchy implementation


@ -7,14 +7,9 @@ date: 2025-01-01
domain: health
secondary_domains: []
format: report
status: null-result
status: unprocessed
priority: medium
tags: [singapore, medisave, medishield, medifund, international-comparison, individual-responsibility, universal-coverage]
processed_by: vida
processed_date: 2026-03-11
enrichments_applied: ["medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm.md", "value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two claims about Singapore's 3M healthcare framework as philosophical design alternative to US binary of individual responsibility vs universal coverage. Primary claim establishes the existence proof of coexistence at 4:1 spending efficiency. Secondary claim focuses on the specific mechanism design (mandatory savings + catastrophic insurance + safety net). Enriched two existing claims with Singapore as natural experiment on medical care contribution to outcomes and alternative payment model with full individual risk for routine care. Agent notes correctly identified this as challenging the US political binary and the magnitude of spending gap as most significant insight."
---
## Content
@ -76,11 +71,3 @@ extraction_notes: "Extracted two claims about Singapore's 3M healthcare framewor
PRIMARY CONNECTION: [[medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm]]
WHY ARCHIVED: Unique system design not represented in KB — the savings-based approach is philosophically distinct from both single-payer and market-based models.
EXTRACTION HINT: The design philosophy (individual responsibility within universal coverage) is more extractable than the specific mechanics, which are Singapore-scale-dependent.
## Key Facts
- Singapore healthcare spending: 4.5% of GDP (vs US 18%)
- Singapore life expectancy: ~84 years (among the world's highest)
- MediSave contribution rates: 8-10.5% of salary (age-dependent)
- MediShield Life: universal mandatory insurance covering all citizens and permanent residents
- MediFund: government endowment fund for those unable to pay after other coverage


@ -1,45 +0,0 @@
---
type: source
title: "Cost-effectiveness of Semaglutide in People with Obesity and Cardiovascular Disease Without Diabetes"
author: "Journal of Medical Economics (Tandfonline)"
url: https://www.tandfonline.com/doi/full/10.1080/13696998.2025.2459529
date: 2025-01-01
domain: health
secondary_domains: [internet-finance]
format: paper
status: unprocessed
priority: medium
tags: [glp-1, semaglutide, cost-effectiveness, cardiovascular, SELECT-trial, QALY]
---
## Content
Cost-effectiveness analysis of semaglutide 2.4mg based on SELECT trial data, modeling lifetime outcomes for obese/overweight patients with established CVD but without diabetes.
**Key findings:**
- At list price: ICER = $136,271/QALY — cost-effective at $150,000/QALY threshold
- With estimated 48% rebate: ICER = $32,219/QALY — highly cost-effective
- Per 100,000 subjects treated (lifetime horizon): 2,791 non-fatal MIs avoided, 3,000 revascularizations avoided, 487 strokes avoided, 115 CV deaths avoided
- Average per-subject lifetime treatment cost: $47,353
- Savings from avoided T2D: $14,431/subject; avoided CKD: $2,074; avoided CV events: $1,512
**Australian analysis comparison:**
- At A$4,175/year: ICER = A$96,055/QALY (~US$138K/QALY)
- NOT cost-effective at Australian A$50,000/QALY threshold
**ICER 2025 assessment:**
- Semaglutide and tirzepatide now meet <$100K/QALY at net prices (shift from 2022)
- But semaglutide would need 80% price reduction to meet standard threshold at list price
## Agent Notes
**Why this matters:** The rebate-adjusted ICER ($32K/QALY) vs. list-price ICER ($136K/QALY) shows that the cost-effectiveness conclusion depends almost entirely on the actual net price. At $245/month (Medicare deal), semaglutide is likely highly cost-effective. At $1,350/month (list), it's borderline. This price sensitivity means the Trump deals fundamentally change the cost-effectiveness calculation.
**What surprised me:** The per-subject savings from avoided T2D ($14,431) dwarf savings from avoided CV events ($1,512), even though the trial was a CV outcomes trial. Diabetes prevention may be the largest economic lever, not cardiovascular protection.
**What I expected but didn't find:** No analysis stratified by risk level. High-risk patients (those meeting Medicare eligibility criteria) likely have much better cost-effectiveness than the average SELECT population.
**KB connections:** Supports scope-qualifying the inflationary claim — GLP-1s are cost-effective at net prices but not at list prices. The price trajectory (declining) matters enormously.
**Extraction hints:** The T2D prevention savings being 10x the CV event savings is a key insight. The existing GLP-1 claim focuses on weight loss economics; the real economic case may be metabolic disease prevention.
**Context:** Industry-funded study (Novo Nordisk). The 48% rebate estimate is their assumption of actual net pricing. CBO and ASPE use different assumptions.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: Cost-effectiveness is price-dependent — the declining price trajectory may flip GLP-1s from inflationary to cost-effective faster than the existing claim anticipates
EXTRACTION HINT: Focus on the price sensitivity of the cost-effectiveness conclusion and how recent price deals change the math
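The price sensitivity can be made concrete with a back-of-envelope sketch: assuming (illustratively, not from the paper) that incremental QALYs are unchanged by price and the 48% rebate applies only to drug spend, the two reported ICERs pin down the implied per-QALY drug spend and cost offsets.

```python
# Back out the per-QALY drug spend and cost offsets implied by the two
# ICERs, under two assumptions NOT stated in the source: incremental
# QALYs are fixed, and the rebate applies only to drug spend.
#   drug_spend - offsets                = ICER_LIST   (per QALY gained)
#   drug_spend * (1 - REBATE) - offsets = ICER_NET
ICER_LIST = 136_271
ICER_NET = 32_219
REBATE = 0.48

drug_spend = (ICER_LIST - ICER_NET) / REBATE  # subtract the two equations
offsets = drug_spend - ICER_LIST

print(round(drug_spend), round(offsets))
# A rebate cuts the ICER by more than 48% because the fixed offsets
# (avoided T2D, CKD, CV events) keep subtracting from a smaller numerator.
```

This is why the conclusion flips on net price: the cost offsets are price-independent, so the ICER falls faster than the rebate alone would suggest.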


@ -6,7 +6,7 @@ url: "https://www.futard.io/proposal/7FY4dgYDX8xxwCczrgstUwuNEC9NMV1DWXz31rMnGNT
date: 2025-02-03
domain: internet-finance
format: data
status: processed
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
processed_by: rio
@ -14,10 +14,6 @@ processed_date: 2025-02-03
enrichments_applied: ["futarchy-governed-permissionless-launches-require-brand-separation-to-manage-reputational-liability-because-failed-projects-on-a-curated-platform-damage-the-platforms-credibility.md", "MetaDAOs-Autocrat-program-implements-futarchy-through-conditional-token-markets-where-proposals-create-parallel-pass-and-fail-universes-settled-by-time-weighted-average-price-over-a-three-day-window.md", "MetaDAO-is-the-futarchy-launchpad-on-Solana-where-projects-raise-capital-through-unruggable-ICOs-governed-by-conditional-markets-creating-the-first-platform-for-ownership-coins-at-scale.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "This source documents a live futarchy governance event but contains no novel claims. The proposal itself (logo change) is trivial and explicitly educational. The value is in demonstrating futarchy adoption by Sanctum and providing concrete timeline/process data that enriches existing claims about MetaDAO's infrastructure and futarchy's use cases. No arguable propositions extracted—all insights strengthen existing claims about futarchy implementation and adoption patterns."
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Educational governance proposal with no novel claims. Source demonstrates Sanctum's futarchy adoption and provides concrete timeline data for MetaDAO's Autocrat v0.3 implementation. Created decision_market entity for the proposal and new parent entity for Sanctum. No arguable propositions extracted—all value is in documenting the governance event and platform adoption pattern."
---
## Proposal Details
@ -78,11 +74,3 @@ edited logo per CW
- Proposal account: 7FY4dgYDX8xxwCczrgstUwuNEC9NMV1DWXz31rMnGNTv
- Used Autocrat version 0.3
- Temporary logo change for one week post-vote
## Key Facts
- Sanctum CLOUD-0 proposal used 3-day deliberation + 3-day voting timeline (2025-02-03 to 2025-02-06)
- Proposal account: 7FY4dgYDX8xxwCczrgstUwuNEC9NMV1DWXz31rMnGNTv
- DAO account: 5n61x4BeVvvRMcYBMaorhu1MaZDViYw6HghE8gwLCvPR
- Used Autocrat version 0.3
- Proposer: proPaC9tVZEsmgDtNhx15e7nSpoojtPD3H9h4GqSqB2


@ -6,13 +6,9 @@ url: "https://www.futard.io/proposal/9ZYMaLKWn9PSLTX1entmqJUYBiCkZbRxeRz1tVvYwqy
date: 2025-02-24
domain: internet-finance
format: data
status: null-result
status: unprocessed
tags: [futardio, metadao, futarchy, solana, governance]
event_type: proposal
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Single failed proposal from a hidden test DAO. No novel mechanism insights or governance dynamics worth extracting as claims. The proposal itself is significant enough to document as a decision_market entity showing futarchy governance in action, but contains no arguable propositions about mechanism design or organizational behavior. The AI-generated impact analysis sections were ignored as auto-generated noise per extraction rules."
---
## Proposal Details
@ -54,9 +50,3 @@ But you have access to the data via API so here you are!
- Autocrat version: 0.3
- Completed: 2025-02-27
- Ended: 2025-02-27
## Key Facts
- Test DAO proposal 9ZYMaLKWn9PSLTX1entmqJUYBiCkZbRxeRz1tVvYwqy6 for mtn Meets META Hackathon failed (2025-02-24 to 2025-02-27)
- Test DAO is a hidden DAO with account GWywkp2mY2vzAaLydR2MBXRCqk2vBTyvtVRioujxi5Ce
- Proposal used Autocrat v0.3 governance mechanism


@ -7,15 +7,9 @@ date: 2025-02-27
domain: entertainment
secondary_domains: [internet-finance]
format: article
status: processed
status: unprocessed
priority: medium
tags: [mrbeast, beast-industries, valuation, content-as-loss-leader, creator-economy]
processed_by: clay
processed_date: 2026-03-11
claims_extracted: ["beast-industries-5b-valuation-prices-content-as-loss-leader-model-at-enterprise-scale.md"]
enrichments_applied: ["the media attractor state is community-filtered IP with AI-collapsed production costs where content becomes a loss leader for the scarce complements of fandom community and ownership.md", "creator-brand-partnerships-shifting-from-transactional-campaigns-to-long-term-joint-ventures-with-shared-formats-audiences-and-revenue.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two claims validating content-as-loss-leader model at enterprise scale, enriched two existing entertainment claims with market validation data, created Beast Industries entity. The $5B valuation represents significant market evidence that integrated creator-to-product models are valued differently than pure content businesses. Revenue trajectory data provides concrete metrics for the attractor state thesis."
---
## Content
@ -49,8 +43,3 @@ Fortune coverage of Beast Industries fundraise and business structure.
PRIMARY CONNECTION: the media attractor state is community-filtered IP with AI-collapsed production costs where content becomes a loss leader for the scarce complements of fandom community and ownership
WHY ARCHIVED: Revenue trajectory data validates content-as-loss-leader at enterprise scale. Cross-reference with Bloomberg source for consistent $250M Feastables figure.
EXTRACTION HINT: The $5B valuation is the market's verdict that the content-as-loss-leader model is real and scalable. This is market evidence, not just theoretical argument.
## Key Facts
- Beast Industries operates five verticals: software (Viewstats), CPG (Feastables, Lunchly), health/wellness, media, video games
- Traditional CPG companies (Hershey's, Mars) spend 10-15% of revenue on advertising


@ -1,46 +0,0 @@
---
type: source
title: "Medicare Beneficiaries Face Near-Universal Prior Authorization for GLP-1 Drugs"
author: "Medical Economics"
url: https://www.medicaleconomics.com/view/medicare-beneficiaries-face-higher-costs-near-universal-prior-authorization-for-glp-1-drugs
date: 2025-03-01
domain: health
secondary_domains: []
format: article
status: unprocessed
priority: medium
tags: [glp-1, prior-authorization, medicare-advantage, formulary, access-barriers]
---
## Content
Analysis of GLP-1 coverage and prior authorization requirements under Medicare Advantage plans.
**Prior authorization escalation:**
- PA requirements surged from 2.8-5% of GLP-1 prescriptions (2020-2023) to nearly 100% by 2025
- Both BCBS and UnitedHealthcare require PA for GLP-1 coverage under MA
- PA ensures only T2D-diagnosed patients can access (pre-obesity coverage)
**Coverage rates by drug (2025 MA formularies):**
- Injectable semaglutide (Ozempic): 98.0% of MA plans cover
- Tirzepatide (Mounjaro): 96.2%
- Oral semaglutide: 84.8%
- Dulaglutide: 87.5%
**Current exclusion:**
- GLP-1s for weight loss/obesity remain excluded under Medicare Part D (until BALANCE model / demonstration)
- Only covered for T2D, CVD risk reduction, or obstructive sleep apnea (FDA-approved uses)
- Only 13 state Medicaid programs covered GLP-1s for obesity as of January 2026
## Agent Notes
**Why this matters:** Near-universal PA for GLP-1s under MA is a signal of how capitated plans manage high-cost drugs. MA plans bearing full risk have strong incentives to RESTRICT access (short-term cost avoidance) even when long-term data suggests coverage would save money. This is a live example of the VBC misalignment the March 10 research identified — MA is value-based in form but short-term cost management in practice.
**What surprised me:** The PA escalation from <5% to ~100% in just 2 years is extreme. This is MA plans actively resisting GLP-1 adoption, not embracing it, which challenges the thesis that capitated plans would rationally cover prevention.
**What I expected but didn't find:** No data on how PA affects adherence/persistence. If PA creates delays and access friction, it may worsen the already-terrible adherence rates. No analysis of whether MA plans with higher GLP-1 coverage have better downstream outcomes.
**KB connections:** Directly relevant to the March 10 finding that MA is VBC in form but misaligned in practice. Also connects to [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]].
**Extraction hints:** The PA escalation could support a claim about short-term cost management overriding long-term prevention incentives even under capitation.
**Context:** The near-universal PA will change significantly when the BALANCE model launches and Medicare GLP-1 demonstration begins in July 2026. This archive captures the pre-demonstration baseline.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]
WHY ARCHIVED: Near-universal PA for GLP-1s under MA demonstrates that capitation alone doesn't align incentives for prevention — MA plans still manage to short-term cost metrics
EXTRACTION HINT: Focus on the tension between theoretical capitation incentives (cover prevention → save money) and actual MA behavior (restrict access → minimize short-term spend)


@ -1,29 +0,0 @@
---
type: source
title: "On Queueing Theory for Large-Scale CI/CD Pipelines Optimization"
author: "Grégory Bournassenko"
url: https://arxiv.org/abs/2504.18705
date: 2025-04-25
domain: internet-finance
format: paper
status: unprocessed
tags: [pipeline-architecture, operations-research, queueing-theory, ci-cd, M/M/c-queue]
---
# On Queueing Theory for Large-Scale CI/CD Pipelines Optimization
Academic paper applying classical M/M/c queueing theory to model CI/CD pipeline systems. Proposes a queueing theory modeling framework to optimize large-scale build/test workflows using multi-server queue models.
## Key Content
- Addresses bottleneck formation in high-volume shared infrastructure pipelines
- Models pipeline stages as M/M/c queues (Poisson arrivals, exponential service, c servers)
- Integrates theoretical queueing analysis with practical optimization — dynamic scaling and prioritization of CI/CD tasks
- Framework connects arrival rate modeling to worker count optimization
- Demonstrates that classical queueing models provide actionable guidance for real software pipelines
## Relevance to Teleo Pipeline
Direct parallel: our extract/eval pipeline IS a multi-stage CI/CD-like system. Sources arrive (Poisson-ish), workers process them (variable service times), and queue depth determines throughput. The M/M/c framework gives us closed-form solutions for expected wait times given worker counts.
Key insight: M/M/c queues show that adding workers has diminishing returns — the marginal improvement of worker N+1 decreases as N grows. This means there's an optimal worker count beyond which additional workers waste compute without meaningfully reducing queue wait times.
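The diminishing-returns point can be sketched with the standard Erlang C formula (a generic illustration, not code from the paper; the 20 arrivals/hour and 12-minute service figures are made-up parameters, not our measured rates):

```python
from math import factorial

def erlang_c(c: int, lam: float, mu: float) -> float:
    """P(arrival must wait) in an M/M/c queue (Erlang C formula)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilization
    if rho >= 1:
        raise ValueError("unstable queue: need c * mu > lambda")
    top = a**c / factorial(c) / (1 - rho)
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c: int, lam: float, mu: float) -> float:
    """Expected time in queue: Wq = C(c, a) / (c*mu - lam)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Illustrative parameters: 20 arrivals/hr, 12-min mean service (mu = 5/hr).
lam, mu = 20.0, 5.0
waits = {c: 60 * mean_wait(c, lam, mu) for c in range(5, 9)}  # minutes
print(waits)  # each extra worker helps less than the one before
```

With these toy numbers the first worker beyond the minimum stable count removes most of the queueing delay, and each subsequent worker removes a smaller slice, which is the closed-form version of the optimal-worker-count argument.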


@ -1,41 +0,0 @@
---
type: source
title: "Phase 3 Trial of Semaglutide in Metabolic Dysfunction-Associated Steatohepatitis (MASH)"
author: "New England Journal of Medicine"
url: https://www.nejm.org/doi/10.1056/NEJMoa2413258
date: 2025-05-01
domain: health
secondary_domains: []
format: paper
status: unprocessed
priority: medium
tags: [glp-1, semaglutide, MASH, NASH, liver-disease, organ-protection]
---
## Content
Phase 3 trial of semaglutide 2.4mg in patients with MASH and moderate or advanced liver fibrosis.
**Key findings:**
- Resolution of steatohepatitis without worsening fibrosis: 62.9% semaglutide vs. 34.3% placebo
- GLP-1 RAs improve fibrosis stage without worsening MASH (meta-analysis data)
- Hepatoprotective effects are multifactorial: glycemic control + insulin resistance + weight loss + direct liver effects
- Some liver benefits appear at least partly independent of weight loss
**Meta-analysis context (2025):**
- GLP-1 RAs significantly increase histologic resolution of MASH
- Decreased liver fat deposition, improved hepatocellular ballooning, reduced lobular inflammation
- Associated with reduced risk of major CV events, clinically significant portal hypertension, and all-cause mortality in MASLD/MASH patients
## Agent Notes
**Why this matters:** MASH/NASH is projected to become the leading cause of liver transplantation. If GLP-1s can resolve steatohepatitis and slow fibrosis, this prevents enormously expensive late-stage liver disease. Combined with CV and kidney protection, GLP-1s are emerging as multi-organ protective agents, not just weight loss drugs.
**What surprised me:** The 62.9% resolution rate is very high — nearly 2x placebo. And some benefits are independent of weight loss, suggesting a direct hepatoprotective mechanism. This adds a third organ-protection pathway (heart, kidney, liver) to the multi-indication economic case.
**What I expected but didn't find:** No cost-effectiveness analysis specific to MASH indication. The Value in Health Medicare study showed only $28M MASH savings — surprisingly small given the clinical magnitude, likely because MASH progression to transplant takes decades.
**KB connections:** Strengthens the multi-indication benefit thesis that the existing GLP-1 claim doesn't fully capture. The combined CV + kidney + liver protection may justify chronic use even if weight management alone doesn't.
**Extraction hints:** Potential claim: "GLP-1 agonists protect three major organ systems simultaneously — cardiovascular, renal, and hepatic — through mechanisms partially independent of weight loss, making them the first drug class to address the metabolic syndrome as a unified disease."
**Context:** NEJM publication — highest evidence tier. Resmetirom (Rezdiffra) was approved for MASH in March 2024, so GLP-1s now compete with a dedicated MASH therapy. Head-to-head data unclear.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: Third organ-protection pathway (after CV and kidney) strengthens the case that GLP-1s should be evaluated as multi-organ protective agents, not just weight loss drugs
EXTRACTION HINT: The multi-organ protection thesis may justify reframing the existing GLP-1 claim from a weight-loss-economics frame to a metabolic-disease-prevention frame


@ -1,54 +0,0 @@
---
type: source
title: "The Societal Implications of Using GLP-1 Receptor Agonists for the Treatment of Obesity"
author: "Med (Cell Press)"
url: https://www.cell.com/med/fulltext/S2666-6340(25)00232-6
date: 2025-06-01
domain: health
secondary_domains: [entertainment, internet-finance]
format: paper
status: unprocessed
priority: medium
tags: [glp-1, obesity, societal-impact, equity, food-systems, population-health, sustainability]
---
## Content
Review article examining the broad societal implications of widespread GLP-1 adoption beyond individual clinical outcomes.
**Population-level data:**
- October 2025 Gallup poll: 12.4% of US adults taking GLP-1 for weight loss (30M+ people)
- US obesity prevalence declined from 39.9% (2022) to 37.0% (2025) — 7.6M fewer obese Americans
- First population-level obesity prevalence decline in recent years
**Key societal concerns raised:**
- Without increased accessibility and lower costs, GLP-1 rollout may WIDEN inequalities
- Current GLP-1 access skews wealthy/insured — equity gap
- GLP-1s do not offer a sustainable solution without prevention
- Countries must consider local cost-effectiveness, budget impact, and ethical implications
**WHO position (December 2025):**
- Conditional recommendations for GLP-1s as part of comprehensive approach
- Three pillars: healthier environments (population policy), protect high-risk individuals, person-centered care
- Obesity is societal challenge requiring multisectoral action
**System-level effects:**
- Obesity costs US $400B+ annually
- GLP-1s mark "system-level redefinition" of cardiometabolic management
- Ripple effects across healthcare costs, insurance models, food systems, long-term population health
## Agent Notes
**Why this matters:** The population-level obesity decline (39.9% → 37.0%) is potentially historic — the first time a pharmaceutical intervention has measurably reduced population obesity prevalence. But the equity concerns are real: GLP-1s could create a two-tier health system where those with access get healthier while those without fall further behind.
**What surprised me:** The 3 percentage point decline in population obesity prevalence. If causally attributable to GLP-1s (not certain), this is the largest population-level health intervention effect since vaccines. The WHO guidelines being issued within 2 years of widespread adoption is also unusually fast.
**What I expected but didn't find:** No analysis of food industry/agriculture effects. No data on how GLP-1 adoption affects food consumption patterns at population level. No analysis of implications for the food-as-medicine / SDOH movement.
**KB connections:** Connects to [[Big Food companies engineer addictive products by hacking evolutionary reward pathways creating a noncommunicable disease epidemic more deadly than the famines specialization eliminated]] — GLP-1s may be a pharmacological counter to engineered food addiction. Also connects to [[the epidemiological transition marks the shift from material scarcity to social disadvantage as the primary driver of health outcomes in developed nations]] — GLP-1s address metabolic consequences but not root social causes.
**Extraction hints:** Potential claims: (1) "GLP-1 adoption has produced the first measurable decline in US obesity prevalence, demonstrating pharmaceutical intervention can shift population-level health outcomes." (2) "GLP-1 access inequality risks creating a two-tier metabolic health system where pharmacological prevention is available to the insured and wealthy while root social determinants remain unaddressed."
**Context:** This is a Cell Press review, not original research. The population-level obesity data needs independent verification — correlation with GLP-1 adoption is strong but causation requires more evidence (could be confounded by other trends).
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[Americas declining life expectancy is driven by deaths of despair concentrated in populations and regions most damaged by economic restructuring since the 1980s]]
WHY ARCHIVED: Population-level obesity decline is a potential paradigm shift, but equity concerns directly challenge the prevention-first attractor state if access remains stratified by wealth
EXTRACTION HINT: Focus on both the population-level effect AND the equity concern — these are in tension and both matter for the attractor state thesis
flagged_for_clay: ["GLP-1 adoption is reshaping cultural narratives around obesity, body image, and pharmaceutical solutions to behavioral problems — connects to health narrative infrastructure"]
flagged_for_rio: ["GLP-1 equity gap creates investment opportunity in access-focused models that serve underserved populations — potential Living Capital thesis"]


@ -1,41 +0,0 @@
---
type: source
title: "Comprehensive Access to Semaglutide: Clinical and Economic Implications for Medicare"
author: "Value in Health (peer-reviewed journal)"
url: https://www.valueinhealthjournal.com/article/S1098-3015(25)02472-6/fulltext
date: 2025-06-01
domain: health
secondary_domains: [internet-finance]
format: paper
status: unprocessed
priority: high
tags: [glp-1, semaglutide, medicare, cost-effectiveness, cardiovascular, CKD, MASH]
---
## Content
Peer-reviewed modeling study estimating the comprehensive value of semaglutide in the Medicare population for current and future FDA-approved indications (type 2 diabetes, overweight/obesity, MASH). Modeled clinical outcomes and costs over a 10-year period (2026-2035).
Key findings:
- Net financial impact to Medicare: savings of $715 million over 10 years (range: $412M to $1.04B depending on utilization/price assumptions)
- 38,950 cardiovascular events avoided over 10 years
- 6,180 deaths avoided (CV events + CKD/MASH progression improvement)
- T2D-related impact: savings of ~$892 million
- Obesity-related impact: added costs of ~$205 million
- MASH-related impact: savings of ~$28 million
- Per 100,000 subjects treated: 2,791 non-fatal MIs avoided, 3,000 coronary revascularizations avoided, 487 non-fatal strokes avoided, 115 CV deaths avoided
- Average per-subject lifetime treatment costs: $47,353
- Savings from avoided T2D: $14,431/subject; avoided CKD: $2,074/subject; avoided CV events: $1,512/subject
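The component figures above can be sanity-checked against the headline number (a trivial arithmetic check, assuming the net is simply T2D savings minus obesity costs plus MASH savings; the study may net other components not listed here):

```python
# Net Medicare impact = T2D savings - obesity costs + MASH savings
# (all in $M over 2026-2035, figures from the source).
t2d_savings, obesity_costs, mash_savings = 892, 205, 28
net = t2d_savings - obesity_costs + mash_savings
print(net)  # 715 -> reproduces the reported $715M net savings
```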
## Agent Notes
**Why this matters:** This directly challenges our existing claim that GLP-1s are "inflationary through 2035." Under Medicare specifically, the modeling shows NET SAVINGS when multi-indication benefits are accounted for. The distinction between system-level inflationary impact and payer-specific savings under risk-bearing arrangements is the core of the VBC interaction question.
**What surprised me:** The T2D-related savings ($892M) actually exceed the obesity-related costs ($205M). The MASH savings are tiny ($28M) despite the impressive clinical data — suggests MASH treatment costs don't accumulate enough in the 10-year window to produce large offsets.
**What I expected but didn't find:** No breakdown by MA vs. traditional Medicare. No analysis of how capitated vs. FFS payment models affect the cost-benefit calculation differently.
**KB connections:** Directly relevant to [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]] — this study complicates the "inflationary" conclusion. Also connects to [[the healthcare cost curve bends up through 2035 because new curative and screening capabilities create more treatable conditions faster than prices decline]].
**Extraction hints:** Potential claim: "Comprehensive semaglutide access saves Medicare $715M over 10 years because multi-indication cardiovascular and metabolic benefits offset drug costs when a single payer bears both costs and savings." This would need to be scoped carefully against the system-level inflationary claim.
**Context:** Published in Value in Health, a peer-reviewed health economics journal. Study appears to use Novo Nordisk-favorable assumptions (net prices with rebates). The $715M figure is modest relative to total Medicare spending but significant as evidence that prevention CAN be cost-saving under the right payment structure.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: This study provides the strongest evidence that the "inflationary through 2035" framing needs scope qualification — system-level vs. payer-level economics diverge when downstream savings accrue to the same entity
EXTRACTION HINT: Focus on the distinction between system-level cost impact (inflationary) and risk-bearing payer impact (potentially cost-saving). This is the core VBC interaction.


@ -1,52 +0,0 @@
---
type: source
title: "Weighing the Risk of GLP-1 Treatment in Older Adults: Sarcopenic Obesity Concerns"
author: "Multiple sources (ScienceDirect, Harvard Science Review, Endocrine News)"
url: https://pmc.ncbi.nlm.nih.gov/articles/PMC12391595/
date: 2025-07-01
domain: health
secondary_domains: []
format: review
status: unprocessed
priority: medium
tags: [glp-1, sarcopenia, muscle-loss, elderly, safety, lean-mass]
---
## Content
Multiple sources examining the muscle loss / sarcopenia risk from GLP-1 agonist use, particularly in elderly patients.
**Lean mass loss quantification:**
- 15-40% of total weight lost on GLP-1s is lean body mass (not fat)
- Some analyses suggest up to 60% in certain patients
- Natural aging already reduces skeletal muscle mass by 12-16% — GLP-1s compound this
**Elderly-specific risks:**
- Sarcopenic obesity (excess fat + low muscle mass) prevalence: 10-20% of older adults
- Weight cycling risk: patients who discontinue (64.8% within 1 year) may regain fat preferentially while muscle is NOT regained
- This creates a worse body composition than before treatment: same or higher fat, less muscle
- Functional impairment and disability risk increases
**Mitigation strategies:**
- High protein diet + resistance training can partially prevent muscle loss
- But adherence to exercise programs is low, especially in the populations most likely to use GLP-1s
- No pharmacological solution to GLP-1-induced muscle loss yet
**Next-generation compounds:**
- Some next-gen GLP-1 therapies aim to improve "quality of weight loss" by preserving muscle
- ADA notes new therapies "enhance quality of weight loss by improving muscle preservation"
## Agent Notes
**Why this matters:** This is the strongest safety counter-argument to broad GLP-1 deployment, especially in the Medicare-age population. If GLP-1s cause significant muscle loss in elderly patients, and most discontinue within a year (losing the metabolic benefits while keeping the muscle deficit), the net health effect could be NEGATIVE for some patients. This directly challenges the Medicare cost-savings thesis — sarcopenic elderly patients may need MORE healthcare, not less.
**What surprised me:** The weight cycling mechanism is particularly concerning: GLP-1 → muscle loss → discontinuation → fat regain without muscle regain → sarcopenic obesity → increased fall risk, fractures, disability. This cycle could create NEW healthcare costs that offset the cardiovascular and metabolic savings.
**What I expected but didn't find:** No population-level data on actual sarcopenia incidence in GLP-1 users vs. controls. Most evidence is mechanistic/theoretical or from small studies. No Medicare-specific analysis of the functional impact.
**KB connections:** This is a genuine challenge to the GLP-1 cost-savings thesis and the attractor state. If the same drug that prevents CV events causes sarcopenic disability, the net population health effect is ambiguous. Connects to the adherence data — the 64.8% discontinuation rate makes the muscle loss / weight cycling scenario the most common outcome.
**Extraction hints:** Potential claim: "GLP-1-induced muscle loss combined with high discontinuation rates creates a sarcopenic obesity risk where patients end up with worse body composition than before treatment — more fat, less muscle, higher disability risk."
**Context:** This is an emerging safety signal, not yet supported by large-scale outcomes data. The next-gen compounds claiming to preserve muscle suggest the manufacturers take this risk seriously.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: Counter-evidence to the GLP-1 benefit thesis — sarcopenia risk may create new costs that offset cardiovascular/metabolic savings, especially in the Medicare population
EXTRACTION HINT: The intersection of muscle loss + high discontinuation rates is the key risk — evaluate as a challenge to the cost-savings thesis, not just a clinical side effect
flagged_for_astra: ["GLP-1-induced muscle loss in elderly has parallels to spaceflight muscle atrophy — different mechanism but similar functional consequences"]


@ -7,14 +7,9 @@ date: 2025-09-01
domain: ai-alignment
secondary_domains: []
format: paper
status: null-result
status: unprocessed
priority: medium
tags: [alignment-gap, feedback-misspecification, reward-hacking, sycophancy, impossibility, maps-framework]
processed_by: theseus
processed_date: 2026-03-11
enrichments_applied: ["emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive.md", "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values.md", "collective intelligence requires diversity as a structural precondition not a moral preference.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Two novel formal results extracted as claims: (1) exponential barrier + calibration oracle solution, (2) MAPS framework for managing alignment gap. Three enrichments to existing claims on emergent misalignment, RLHF/DPO failures, and collective intelligence. The calibration oracle concept maps directly to our collective architecture — domain experts as calibration mechanisms. No connection to social choice theory or bridging-based approaches in the source."
---
## Content
@ -56,9 +51,3 @@ The alignment gap cannot be eliminated but can be mapped, bounded, and managed.
PRIMARY CONNECTION: [[emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive]]
WHY ARCHIVED: The "calibration oracle" concept maps to our collective architecture — domain experts as calibration mechanisms
EXTRACTION HINT: The exponential barrier + calibration oracle constructive result is the key extractable claim pair
## Key Facts
- Exponential sample complexity: exp(n*alpha*epsilon^2) where alpha = fraction of problematic contexts, epsilon = bias strength
- Calibration oracle reduces complexity to O(1/(alpha*epsilon^2))
- Paper published September 2025 by independent researcher Madhava Gaikwad
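The scale of the gap between the two regimes in the Key Facts can be shown with a quick computation (parameter values are hypothetical, chosen only for illustration):

```python
import math

# Sample-complexity comparison from the paper's two formulas.
# n, alpha, eps below are assumed values, not from the source.
def without_oracle(n, alpha, eps):
    return math.exp(n * alpha * eps**2)    # exponential barrier: exp(n*alpha*eps^2)

def with_oracle(alpha, eps):
    return 1 / (alpha * eps**2)            # oracle regime: O(1/(alpha*eps^2))

n, alpha, eps = 1_000, 0.1, 0.3            # assumed: 10% problematic contexts, bias 0.3
print(f"without oracle: {without_oracle(n, alpha, eps):,.0f} samples")
print(f"with oracle:    {with_oracle(alpha, eps):,.0f} samples")
```

Even at these modest parameter values the barrier term is tens of times larger, and it grows exponentially in n while the oracle bound does not depend on n at all.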


@ -1,47 +0,0 @@
---
type: source
title: "Trump Administration Announces Deals with Eli Lilly and Novo Nordisk to Slash GLP-1 Prices for Medicare"
author: "CNBC / Multiple sources"
url: https://www.cnbc.com/2025/11/06/trump-eli-lilly-novo-nordisk-deal-obesity-drug-prices.html
date: 2025-11-06
domain: health
secondary_domains: [internet-finance]
format: news
status: unprocessed
priority: high
tags: [glp-1, drug-pricing, medicare, policy, trump-administration, market-structure]
---
## Content
On November 6, 2025, President Trump announced agreements with Eli Lilly and Novo Nordisk to dramatically reduce GLP-1 prices and expand Medicare coverage for obesity — the first time Medicare will cover GLP-1 medications specifically for obesity.
**Pricing details:**
- Medicare/Medicaid price for semaglutide and tirzepatide: $245/month
- General price through TrumpRx: $350/month (down from ~$1,350/month injectable)
- Oral Wegovy: $149-$299/month (launched January 2026)
- Medicare beneficiaries: $50/month out-of-pocket maximum for tirzepatide (Zepbound) starting April 2026
- Future oral GLP-1s: initial dose priced at $150/month on TrumpRx
**Eligibility criteria for Medicare coverage:**
- BMI ≥27 with prediabetes or cardiovascular disease history
- BMI >30 with heart failure, uncontrolled hypertension, or chronic kidney disease
- ~10% of Medicare beneficiaries expected to be eligible
**Timeline:**
- Medicare GLP-1 payment demonstration: July 2026
- BALANCE Model in Medicaid: May 2026
- BALANCE Model in Medicare Part D: January 2027
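A quick arithmetic check on the headline price cuts (our computation; the ~$1,350 injectable list price is the figure quoted above):

```python
# Discount math behind the quoted prices. Values are from the note above.
list_price = 1350      # approximate prior monthly list price, injectable
medicare_price = 245   # negotiated Medicare/Medicaid price
trumprx_price = 350    # general TrumpRx price

print(f"Medicare discount vs list: {1 - medicare_price / list_price:.0%}")
print(f"TrumpRx discount vs list:  {1 - trumprx_price / list_price:.0%}")
```

This reproduces the ~82% reduction for the Medicare price; the general TrumpRx price works out to roughly a 74% cut.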
## Agent Notes
**Why this matters:** This is a policy earthquake. Medicare covering GLP-1s for obesity — previously explicitly excluded — fundamentally changes the addressable population and the economics. The $245/month Medicare price is ~82% below list price. Combined with the $50/month OOP cap, this removes most financial barriers for the eligible Medicare population.
**What surprised me:** The eligibility criteria are NARROW — requiring comorbidities, not just obesity. This is smart from a cost containment perspective (targeting highest-risk/highest-savings patients) but limits the population-level impact. The deal structure (manufacturer concessions in exchange for coverage) is a novel mechanism outside normal CMS rulemaking.
**What I expected but didn't find:** No details on how MA plans specifically will implement this. No analysis of how the deal interacts with existing MA formulary management and prior authorization practices. No clarity on whether the $245 price applies to MA plans or just traditional Medicare.
**KB connections:** Connects to the MA economics research from March 10 session. Under capitation, MA plans bearing full risk would see the $245/month cost offset by downstream savings — but only if adherence is sustained. The eligibility criteria (high-risk patients with comorbidities) are the population where savings are most likely.
**Extraction hints:** Potential claim about the deal structure as a novel policy mechanism — manufacturer price concessions in exchange for coverage expansion, bypassing traditional CMS rulemaking. Also: the narrow eligibility targeting high-risk patients may actually make this cost-effective under capitation even if system-level impact is inflationary.
**Context:** This is a politically-driven deal that may not survive administration changes. The legal authority for this arrangement has been questioned. But the pricing signals (oral at $149-$299, Medicare at $245) are reshaping competitive dynamics regardless.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: The price reduction + coverage expansion + narrow eligibility criteria fundamentally change the economics analyzed in the existing claim — the "inflationary through 2035" conclusion assumed higher prices and broader population
EXTRACTION HINT: Focus on how narrow eligibility (comorbid patients only) changes the cost-effectiveness calculus vs. broad population coverage


@ -6,15 +6,10 @@ url: https://blog.colosseum.com/introducing-the-colosseum-stamp/
date: 2025-12-00
domain: internet-finance
secondary_domains: []
format: report
status: null-result
format: article
status: unprocessed
priority: high
tags: [stamp, investment-instrument, metadao, ownership-coins, safe, legal-structure, colosseum]
processed_by: rio
processed_date: 2026-03-11
enrichments_applied: ["STAMP replaces SAFE plus token warrant by adding futarchy-governed treasury spending allowances that prevent the extraction problem that killed legacy ICOs.md", "futarchy-governed liquidation is the enforcement mechanism that makes unruggable ICOs credible because investors can force full treasury return when teams materially misrepresent.md", "MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Three new claims extracted on STAMP mechanics: (1) single-token structure with legal enforceability, (2) 20% investor cap ensuring community ownership, (3) clean migration from equity to tokens. Enriched three existing claims with detailed STAMP mechanics. Created entities for Colosseum and Orrick. No regulatory analysis or legal opinions published yet, so confidence capped at experimental. The 20% cap is the most striking mechanism design choice — significantly lower than typical crypto raises."
---
## Content
@ -62,16 +57,3 @@ Colosseum introduces STAMP (Simple Token Agreement, Market Protected), developed
PRIMARY CONNECTION: [[STAMP replaces SAFE plus token warrant by adding futarchy-governed treasury spending allowances that prevent the extraction problem that killed legacy ICOs]]
WHY ARCHIVED: First detailed specification of STAMP instrument. The 20% investor cap + mandatory SAFE termination + DAO-controlled treasury are novel mechanism design choices worth claiming.
EXTRACTION HINT: Focus on (1) how STAMP structurally prevents the extraction problem, (2) the 20% cap as mechanism for ensuring community ownership, (3) the clean-break migration from equity to token structure.
## Key Facts
- STAMP developed by Colosseum with law firm Orrick (2025-12)
- STAMP uses Cayman SPC/SP entity structure
- Investor allocation capped at 20% of total token supply
- Team allocation: 10-40% of total supply, milestone-based
- 24-month linear unlock schedule for investor allocations
- Funds restricted to product development and operating expenses pre-ICO
- Remaining balance transfers to DAO-controlled treasury upon ICO
- Prior SAFEs and convertible notes terminated upon STAMP signing
- MetaDAO interface handles entity setup
- Positioned as open-source ecosystem standard


@ -1,31 +0,0 @@
---
type: source
title: "Reactive Programming Paradigms: Mastering Backpressure and Stream Processing"
author: "Java Code Geeks"
url: https://www.javacodegeeks.com/2025/12/reactive-programming-paradigms-mastering-backpressure-and-stream-processing.html
date: 2025-12-01
domain: internet-finance
format: essay
status: unprocessed
tags: [pipeline-architecture, backpressure, reactive-streams, flow-control, producer-consumer]
---
# Reactive Programming Paradigms: Mastering Backpressure and Stream Processing
Practitioner guide to implementing backpressure in reactive stream processing systems. Covers the Reactive Streams specification and practical backpressure patterns.
## Key Content
- Reactive Streams standard: Publisher/Subscriber/Subscription interfaces with demand-based flow control
- Subscriber requests N items → Publisher delivers at most N → the producer can never overwhelm the consumer
- Four backpressure strategies:
1. **Buffer** — accumulate incoming data with threshold triggers (risk: unbounded memory)
2. **Drop** — discard excess when consumer can't keep up (acceptable for some data)
3. **Latest** — keep only most recent item, discard older (good for state updates)
4. **Error** — signal failure when buffer overflows (forces architectural fix)
- Practical implementations: Project Reactor (Spring WebFlux), Akka Streams, RxJava
- Key insight: backpressure must be designed into the system from the start — bolting it on later is much harder
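The demand-based contract can be sketched in a few lines (a minimal illustration of the request-N pattern, not any particular library's API; class and method names are ours):

```python
from collections import deque

# Minimal sketch of Reactive Streams-style demand-based flow control.
class BoundedSubscriber:
    def __init__(self, demand):
        self.demand = demand      # items we are willing to receive right now
        self.received = []

    def on_next(self, item):
        self.received.append(item)
        self.demand -= 1

class Publisher:
    def __init__(self, items):
        self.pending = deque(items)

    def deliver(self, subscriber):
        # Deliver at most `subscriber.demand` items: the producer never
        # pushes more than the consumer explicitly requested.
        while subscriber.demand > 0 and self.pending:
            subscriber.on_next(self.pending.popleft())

pub = Publisher(range(10))
sub = BoundedSubscriber(demand=3)
pub.deliver(sub)
print(sub.received)   # only 3 items delivered; the rest stay queued at the producer
```

The remaining items wait at the publisher until the subscriber signals fresh demand, which is exactly the feedback loop our cron-driven pipeline lacks.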
## Relevance to Teleo Pipeline
Our pipeline currently has NO backpressure. Extract produces PRs that accumulate in eval's queue without any feedback mechanism. If research dumps 20 sources, extraction creates 20 PRs, and eval drowns trying to process them all. We need a "buffer + rate limit" strategy: extraction should check eval queue depth before starting new work, and slow down or pause when eval is backlogged.
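A hedged sketch of what the "buffer + rate limit" strategy could look like in our dispatcher (thresholds, defaults, and the function name are hypothetical design parameters, not existing code):

```python
# Hypothetical backpressure policy for the extract dispatcher.
EVAL_SOFT_LIMIT = 10   # assumed threshold: above this, extraction slows down
EVAL_HARD_LIMIT = 20   # assumed threshold: above this, extraction pauses

def extract_batch_size(eval_queue_depth, max_workers=6):
    """How many extract workers the dispatcher launches this cron cycle."""
    if eval_queue_depth >= EVAL_HARD_LIMIT:
        return 0                         # full backpressure: stop producing PRs
    if eval_queue_depth >= EVAL_SOFT_LIMIT:
        return max(1, max_workers // 2)  # degraded mode: half capacity
    return max_workers                   # normal operation

print(extract_batch_size(5), extract_batch_size(12), extract_batch_size(25))
```

This is the "buffer" strategy with a rate limit layered on top: eval's open-PR queue is the buffer, and extraction's dispatch rate is throttled as the buffer fills rather than letting it grow without bound.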


@ -1,41 +0,0 @@
---
type: source
title: "WHO Issues Global Guideline on the Use of GLP-1 Medicines in Treating Obesity"
author: "World Health Organization"
url: https://www.who.int/news/item/01-12-2025-who-issues-global-guideline-on-the-use-of-glp-1-medicines-in-treating-obesity
date: 2025-12-01
domain: health
secondary_domains: []
format: policy
status: unprocessed
priority: medium
tags: [glp-1, WHO, global-health, obesity, guidelines, equity]
---
## Content
WHO issued conditional recommendations for GLP-1 medicines in obesity treatment (December 2025).
**Three-pillar framework:**
1. Creating healthier environments through population-level policies
2. Protecting individuals at high risk
3. Ensuring access to lifelong, person-centered care
**Key positions:**
- GLP-1s should be part of comprehensive approach including healthy diets, physical activity, and professional support
- Obesity is a societal challenge requiring multisectoral action — not just individual medical treatment
- Conditional recommendations (acknowledging limited long-term evidence)
- Countries must consider local cost-effectiveness, budget impact, and ethical implications
## Agent Notes
**Why this matters:** WHO positioning GLP-1s within a comprehensive framework (not as standalone treatment) aligns with the BALANCE model's design. The three-pillar approach echoes the attractor state thesis — prevention infrastructure + targeted intervention + person-centered care. But WHO's emphasis on population-level policies and societal action challenges the pharmacological solution narrative.
**What surprised me:** Speed of WHO guideline issuance — unusually fast for a drug class this new. The conditional framing acknowledges uncertainty about long-term outcomes, which is honest.
**What I expected but didn't find:** No specific cost-effectiveness thresholds by country income level. No analysis of which low/middle-income countries could afford GLP-1 coverage.
**KB connections:** Connects to the population health framework and the question of whether pharmaceutical intervention can substitute for structural social determinant reform.
**Extraction hints:** The WHO framework could support a claim about the correct integration model for GLP-1s — medication embedded in comprehensive lifestyle/policy infrastructure, not standalone pharmacotherapy.
**Context:** WHO guidelines have limited enforcement power but significant influence on national health policies, especially in low/middle-income countries.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm]]
WHY ARCHIVED: WHO's three-pillar framework challenges the pharmacological solution narrative and supports the view that GLP-1s are most effective when embedded in structural prevention infrastructure
EXTRACTION HINT: The WHO position supports the BALANCE model's design but questions whether pharmaceutical solutions alone can address the obesity epidemic


@ -1,52 +0,0 @@
---
type: source
title: "CMS Launches BALANCE Model to Expand GLP-1 Access in Medicare Part D and Medicaid"
author: "Centers for Medicare & Medicaid Services"
url: https://www.cms.gov/priorities/innovation/innovation-models/balance
date: 2025-12-23
domain: health
secondary_domains: [internet-finance]
format: policy
status: unprocessed
priority: high
tags: [glp-1, cms, balance-model, medicare, medicaid, value-based-care, payment-model]
---
## Content
CMS announced the Better Approaches to Lifestyle and Nutrition for Comprehensive hEalth (BALANCE) Model on December 23, 2025. Key features:
**Structure:**
- Voluntary model for Medicare Part D plans and state Medicaid agencies
- Covers GLP-1 medications for weight management and metabolic health improvement
- CMS negotiates drug pricing and coverage terms with manufacturers on behalf of participating plans
- Manufacturer Request for Applications due January 8, 2026
**Timeline:**
- Medicaid agencies: May 2026
- Medicare Part D plans: January 2027
- Bridge demonstration for Medicare Part D: July 2026
- Model testing concludes: December 2031
**Key innovation:**
- Combines GLP-1 medication access with evidence-based lifestyle supports
- Not just drug coverage — requires comprehensive health improvement approach
- CMS exploring incentives including adjustment of capitated payment rates for obesity and increasing government reinsurance
**Payment model interaction:**
- Voluntary participation by manufacturers, plans, and states
- CMS negotiates centrally, reducing plan-level negotiation costs
- Model explicitly designed to test whether combined medication + lifestyle support produces better long-term outcomes and cost savings
## Agent Notes
**Why this matters:** This is the first CMS payment model specifically designed to test the GLP-1 + VBC interaction. The requirement for lifestyle supports alongside medication addresses the adherence problem (lifestyle changes may sustain benefits after medication discontinuation). The adjustment of capitated payment rates for obesity is a direct incentive mechanism for MA plans to cover GLP-1s.
**What surprised me:** The BALANCE model is not just drug coverage — it requires lifestyle interventions. This is CMS explicitly testing whether the combination (medication + behavior change) can solve the chronic use / adherence problem that makes GLP-1s inflationary. If it works, it validates the attractor state thesis more broadly.
**What I expected but didn't find:** No specific outcome metrics or success criteria published yet. No details on what "evidence-based lifestyle supports" means operationally. No analysis of which state Medicaid programs are likely to participate.
**KB connections:** Directly tests [[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]]. Also connects to [[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]] — the BALANCE model is a policy attempt to move more payment toward genuine risk.
**Extraction hints:** Potential claim: "The CMS BALANCE Model is the first federal payment model explicitly designed to test whether GLP-1 medications combined with lifestyle supports can produce net cost savings under risk-bearing arrangements."
**Context:** CMS Innovation Center models have mixed track records. Many voluntary models fail due to adverse selection (only plans that expect to benefit participate). But the BALANCE model's design — combining medication access with lifestyle support and capitation adjustments — is more sophisticated than typical drug coverage expansion.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]]
WHY ARCHIVED: First explicit federal test of the GLP-1 + VBC thesis — if it demonstrates net savings under risk-bearing, it validates the prevention-first attractor state; if it fails, it complicates it
EXTRACTION HINT: Focus on the structural design (medication + lifestyle + payment adjustment) as a test of the attractor state thesis, not just as drug coverage policy


@ -1,38 +0,0 @@
---
type: source
title: "Semaglutide and Hospitalizations in Patients With Obesity and Established CVD: SELECT Trial Exploratory Analysis"
author: "JAMA Cardiology (peer-reviewed)"
url: https://pubmed.ncbi.nlm.nih.gov/41433034/
date: 2025-12-23
domain: health
secondary_domains: [internet-finance]
format: paper
status: unprocessed
priority: high
tags: [glp-1, semaglutide, hospitalization, cardiovascular, SELECT-trial, cost-offset]
---
## Content
Prespecified exploratory analysis of the SELECT trial published in JAMA Cardiology, examining hospitalization outcomes for semaglutide vs. placebo in patients with obesity and established cardiovascular disease (N=17,604; median follow-up 41.8 months).
Key findings:
- Total hospitalizations for any indication: 18.3 vs 20.4 admissions per 100 patient-years (mean ratio 0.90; P<.001), a 10% reduction
- Hospitalizations for serious adverse events: 15.2 vs 17.1 per 100 patient-years (mean ratio 0.89; P<.001), an 11% reduction
- Days hospitalized for any indication: 157.2 vs 176.2 days per 100 patient-years (rate ratio 0.89; P=.01), an 11% reduction
- Benefits extended beyond cardiovascular — overall hospitalization burden reduced
Median age 61.0 years; 27.7% female; median BMI 32.1.
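The reported ratios check out arithmetically (our verification against the quoted rates, not a computation from the paper):

```python
# Consistency check: semaglutide vs placebo rates per 100 patient-years.
rates = {
    "total hospitalizations": (18.3, 20.4),
    "serious adverse events": (15.2, 17.1),
    "days hospitalized": (157.2, 176.2),
}
for name, (semaglutide, placebo) in rates.items():
    ratio = semaglutide / placebo
    print(f"{name}: ratio {ratio:.2f}, reduction {1 - ratio:.0%}")
```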
## Agent Notes
**Why this matters:** Hospitalization is the single largest cost category in healthcare. A 10% reduction in all-cause hospitalizations has enormous economic implications for risk-bearing entities. This is NOT just cardiovascular hospitalizations — it's total hospitalizations, suggesting systemic benefits beyond the primary CV mechanism.
**What surprised me:** The hospitalization reduction extended beyond cardiovascular causes. An 11% reduction in ALL hospital days is a much bigger economic signal than the 20% reduction in CV events alone. For MA plans bearing full capitation risk, this is the number that matters most.
**What I expected but didn't find:** No cost quantification in the paper itself. No breakdown by hospitalization type beyond CV vs. all-cause.
**KB connections:** Connects to [[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]] — hospitalization reduction is the mechanism through which prevention-first models profit.
**Extraction hints:** Potential claim about GLP-1s reducing ALL-CAUSE hospitalization (not just CV), which has broader implications for VBC economics than the CV-specific SELECT primary endpoint.
**Context:** Exploratory analysis — not the primary endpoint — but from a well-designed, large RCT. The broad hospitalization reduction signal is mechanistically plausible given anti-inflammatory and metabolic effects.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]]
WHY ARCHIVED: All-cause hospitalization reduction is the most economically relevant outcome for risk-bearing payers and the strongest evidence that GLP-1s could be cost-saving under capitation
EXTRACTION HINT: Focus on the all-cause hospitalization signal (not just CV) — this is what makes GLP-1s relevant to VBC economics beyond cardiology


@ -7,14 +7,9 @@ date: 2026-01-01
domain: ai-alignment
secondary_domains: []
format: report
status: null-result
status: unprocessed
priority: high
tags: [mechanistic-interpretability, SAE, safety, technical-alignment, limitations, DeepMind-pivot]
processed_by: theseus
processed_date: 2026-03-11
enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it.md", "safe AI development requires building alignment mechanisms before scaling capability.md", "capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted 5 claims focused on the strategic bifurcation of mechanistic interpretability (diagnostic viable, comprehensive dead), the practical utility gap (SAEs underperform baselines), computational costs as alignment tax amplifier, and fundamental barriers (NP-hardness, chaotic dynamics). Applied 4 enrichments to existing alignment claims. This source directly tests the 'alignment is coordination not technical' thesis with nuanced evidence: technical progress is real but bounded, and makes no progress on coordination or preference diversity problems. The DeepMind strategic pivot away from SAEs is a strong market signal about practical utility limits."
---
## Content
@ -69,14 +64,3 @@ Comprehensive status report on mechanistic interpretability as of early 2026:
PRIMARY CONNECTION: [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]]
WHY ARCHIVED: Provides 2026 status evidence on whether technical alignment (interpretability) can close the alignment gap — answer is "useful but bounded"
EXTRACTION HINT: Focus on the practical utility gap (baselines outperform SAEs on safety tasks), the DeepMind strategic pivot, and Anthropic's production deployment use. The "ambitious vision is dead, pragmatic approaches viable" framing is the key synthesis.
## Key Facts
- MIT Technology Review named mechanistic interpretability a '2026 breakthrough technology' (January 2026)
- January 2025 consensus paper by 29 researchers across 18 organizations established core open problems
- Google DeepMind's Gemma Scope 2 released December 2025: 270M to 27B parameter models
- SAEs scaled to GPT-4 with 16 million latent variables
- Anthropic's attribution graphs (March 2025) trace computational paths for ~25% of prompts
- Stream algorithm (October 2025) achieves near-linear time attention analysis, eliminating 97-99% of token interactions
- SAE reconstructions cause 10-40% performance degradation on downstream tasks
- Fine-tuning misalignment reversible with ~100 corrective training samples (OpenAI finding)


@ -1,51 +0,0 @@
---
type: source
title: "Aon GLP-1 Research: Long-Term Employer Cost Savings and Cancer Risk Reduction"
author: "Aon plc (@Aon)"
url: https://aon.mediaroom.com/2026-01-13-Aons-Latest-GLP-1-Research-Reveals-Long-Term-Employer-Cost-Savings-and-Significant-Reductions-in-Cancer-Risk-for-Women
date: 2026-01-13
domain: health
secondary_domains: [internet-finance]
format: report
status: unprocessed
priority: high
tags: [glp-1, employer-costs, cancer-risk, cardiovascular, cost-offset, real-world-evidence]
---
## Content
Aon's multi-year study of U.S. commercial health claims data from 192,000+ GLP-1 users. Released January 13, 2026.
**Cost dynamics over time (key finding):**
- First 12 months on Wegovy/Zepbound: medical costs rise 23% vs. 10% for non-users (drug costs dominate)
- After 12 months: medical costs grow just 2% vs. 6% for non-users (downstream savings kick in)
- For diabetes indication: medical cost growth 6 percentage points lower at 30 months; 9 points lower with 80%+ adherence
- For weight loss indication: cost growth 3 points lower at 18 months; 7 points lower with consistent use
**Cancer risk reduction (surprising finding):**
- Female GLP-1 users: ~50% lower incidence of ovarian cancer
- Female GLP-1 users: 14% lower incidence of breast cancer
- Also associated with lower rates of osteoporosis, rheumatoid arthritis
- Fewer hospitalizations for alcohol/drug abuse, bariatric surgery, certain pancreatic disorders
**Cardiovascular outcomes:**
- Adherent users (80%+): significantly fewer MACE hospitalizations
- Female MACE reduction: 47%
- Male MACE reduction: 26%
**Adherence is the binding variable:** Benefits scale dramatically with adherence. The 80%+ adherent cohort shows the strongest effects across all outcomes.
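A toy model makes the temporal cost pattern concrete (our assumptions: equal baselines of 100, the stated growth rates held constant after year 1, annual medical cost level only, no cumulative drug-spend accounting):

```python
# Illustrative projection of the Aon cost-growth figures for Wegovy/Zepbound users.
def annual_costs(baseline, year1_growth, later_growth, years):
    costs = [baseline * (1 + year1_growth)]
    for _ in range(years - 1):
        costs.append(costs[-1] * (1 + later_growth))
    return costs

user = annual_costs(100, 0.23, 0.02, years=6)     # +23% year 1, +2%/yr after
nonuser = annual_costs(100, 0.10, 0.06, years=6)  # +10% year 1, +6%/yr after

for year, (u, n) in enumerate(zip(user, nonuser), start=1):
    print(f"year {year}: user {u:6.1f}  non-user {n:6.1f}")
# Under these assumptions, users' annual costs fall below non-users' around year 4.
```

The crossover timing is why payer tenure matters: an employer whose members churn within two or three years pays the year-1 jump and never reaches the region where user costs run below non-user costs.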
## Agent Notes
**Why this matters:** This is the largest real-world employer claims dataset on GLP-1 economics. The temporal pattern is crucial — costs go UP in year 1 then DOWN thereafter. This means short-term payers (employers with high turnover) see only costs, while long-term risk-bearers (MA plans, capitated systems) capture the savings. This has direct implications for VBC economics.
**What surprised me:** The cancer finding is genuinely novel. A 50% reduction in ovarian cancer incidence is enormous if confirmed. The sex-differential in MACE reduction (47% for women vs. 26% for men) also suggests the benefits may be larger for women, which has implications for MA risk adjustment.
**What I expected but didn't find:** No stratification by payment model (capitation vs. FFS). No analysis of the break-even point for total cost of ownership. No comparison of the cost trajectory for adherent vs. non-adherent users on a per-user basis.
**KB connections:** The temporal cost pattern directly tests [[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]] — long-term risk-bearing is required to capture GLP-1 savings.
**Extraction hints:** Potential claim: "GLP-1 cost-effectiveness requires sustained adherence and long-term risk-bearing because medical cost savings lag drug costs by 12-18 months, making short-term payers see only costs while capitated plans capture net savings." The cancer signal deserves its own claim if replicated.
**Context:** Aon is a major insurance broker/consultant. Their data is commercial claims (employer-sponsored), not Medicare. The 192K sample is large but observational — selection bias is a concern (healthier/wealthier employees may be more likely to use GLP-1s).
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: The temporal cost dynamics (costs up Y1, down Y2+) are the most important data point for understanding VBC interaction — shows why payment model structure determines whether GLP-1s are inflationary or cost-saving
EXTRACTION HINT: Focus on the temporal cost curve and what it implies for different payment models. The cancer finding is separately important but preliminary.
flagged_for_rio: ["GLP-1 cost dynamics have direct implications for health investment thesis — long-term risk-bearers capture savings that short-term payers miss"]


@ -1,6 +1,34 @@
---
title: NASAA Clarity Act Concerns
extraction_notes: ""
enrichments_applied: []
...
type: source
title: "NASAA expresses concerns about the CLARITY Act — 36 state regulators oppose federal preemption of digital asset oversight"
author: "North American Securities Administrators Association (NASAA)"
url: https://www.nasaa.org/wp-content/uploads/2026/01/NASAA-Expresses-Concerns-Regarding-the-Digital-Asset-Market-Clarity-Act-1.13.26-F.pdf
date: 2026-01-13
domain: internet-finance
secondary_domains: []
format: article
status: unprocessed
priority: medium
tags: [nasaa, regulation, clarity-act, state-regulators, federal-preemption, investor-protection]
---
## Content
NASAA (representing securities regulators from all 50 states, DC, Puerto Rico, US Virgin Islands, and Canadian provinces) filed formal concerns about the CLARITY Act on January 13, 2026.
Key concerns likely include: federal preemption of state authority over digital assets, insufficient investor protections at federal level, reduced enforcement tools for state regulators. (Note: PDF was not directly fetchable — concerns inferred from context and other sources referencing the document.)
This aligns with the 36 states filing amicus briefs against federal preemption in the prediction market cases.
## Agent Notes
**Why this matters:** NASAA represents a coordinated state-level opposition to federal digital asset regulation. This is the same institutional resistance facing prediction markets. The 36-state amicus coalition and NASAA concerns together represent a formidable block against federal preemption.
**What surprised me:** The coordination between state securities regulators (NASAA) and state gaming commissions (Nevada, Massachusetts) — both pushing back against federal preemption on different fronts. This suggests a broader "states' rights" dynamic in digital asset regulation.
**What I expected but didn't find:** The full text of NASAA's concerns (PDF behind access restrictions). Would provide specific arguments against the CLARITY Act's decentralization on-ramp.
**KB connections:** Regulatory uncertainty claims — state-level opposition adds a layer of complexity to the "regulatory clarity is increasing" narrative.
**Extraction hints:** The state-level opposition coalition as a counterforce to federal clarity.
**Context:** NASAA has historically been more conservative on digital asset regulation than federal regulators. Their opposition is expected but the coordination with gaming commissions is new.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[Internet finance is an industry transition from traditional finance where the attractor state replaces intermediaries with programmable coordination and market-tested governance]]
WHY ARCHIVED: State-level opposition coalition represents a friction force against the internet finance transition. Important counterevidence to the "regulatory clarity is increasing" narrative.
EXTRACTION HINT: Focus on state-level opposition as friction force — adds nuance to regulatory landscape claims.

View file

@ -1,52 +0,0 @@
---
type: source
title: "The 2026 GLP-1 Patent Cliff: Generics, Global Competition, and the $100 Billion M&A Race"
author: "GeneOnline News"
url: https://www.geneonline.com/the-2026-glp-1-patent-cliff-generics-global-competition-and-the-100-billion-ma-race/
date: 2026-02-01
domain: health
secondary_domains: [internet-finance]
format: article
status: unprocessed
priority: medium
tags: [glp-1, generics, patent-cliff, global-competition, drug-pricing, market-structure]
---
## Content
Overview of the GLP-1 generic competition landscape as patents begin expiring internationally.
**US timeline:**
- Semaglutide patents extend to 2031-2032 (US and Europe)
- No US generics expected before 2031-2033
- Orforglipron (Eli Lilly, non-peptide small molecule) could be approved Q2 2026
**International generic competition (2026):**
- Canada: First G7 nation where certain semaglutide patents expired (January 4, 2026). Sandoz, Apotex, Teva filing immediately
- Brazil: Generic competition opening March 2026. Biomm + Biocon (India) preparing generic semaglutide
- China: 17+ generic semaglutide candidates in Phase 3 trials. Monthly therapy could fall to $40-$50
- India: Patent expirations scheduled March 2026
**Price trajectory:**
- Oral Wegovy: $149-$299/month at launch (January 2026)
- Medicare deal: $245/month
- International generics: potentially $40-$50/month in some markets
- Competition will drive prices down, but volume growth offsets price compression in the near term
**Pipeline competitors:**
- Orforglipron (Lilly): non-peptide oral GLP-1, potential approval Q2 2026
- Amycretin: 22% weight loss without plateau
- Multiple next-generation compounds in development
## Agent Notes
**Why this matters:** The price trajectory is the single most important variable for the GLP-1 cost-effectiveness calculation. If prices converge toward $50-100/month globally by 2030 (driven by international generic competition, even before US generics), the "inflationary through 2035" claim needs significant revision. At $50/month, GLP-1s become unambiguously cost-effective under any payment model.
**What surprised me:** Canada's patents expired January 2026 — generic filings are already happening. The $40-$50/month projection for China/India is 95%+ below current US list price. International price arbitrage pressure will affect US pricing even before US patent expiry.
**What I expected but didn't find:** No analysis of how international generic availability affects US compounding pharmacy landscape. No modeling of the price trajectory beyond "prices will decline."
**KB connections:** The price trajectory directly affects whether the existing GLP-1 claim's "inflationary through 2035" conclusion holds. If prices decline faster than assumed, the inflection point (where volume growth no longer offsets price compression) moves earlier.
**Extraction hints:** Potential claim: "International GLP-1 generic competition beginning in 2026 will compress global prices below $100/month by 2030, fundamentally changing the cost-effectiveness calculation from inflationary to cost-saving under risk-bearing payment models."
**Context:** GeneOnline is an industry publication. The $40-$50 projection for China/India may be optimistic. US prices will remain higher due to regulatory and distribution differences. But the directional pressure is clear.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]
WHY ARCHIVED: Price trajectory is the key variable the existing claim depends on — if prices decline faster than assumed, the "inflationary through 2035" conclusion may be wrong
EXTRACTION HINT: Focus on the price trajectory and its implications for cost-effectiveness under different payment models, especially the international competition pressure

View file

@ -1,33 +0,0 @@
---
type: source
title: "How to Implement HPA with Object Metrics for Queue-Based Scaling"
author: "OneUptime"
url: https://oneuptime.com/blog/post/2026-02-09-hpa-object-metrics-queue/view
date: 2026-02-09
domain: internet-finance
format: essay
status: unprocessed
tags: [pipeline-architecture, kubernetes, autoscaling, queue-based-scaling, KEDA, HPA]
---
# How to Implement HPA with Object Metrics for Queue-Based Scaling
Practical guide to implementing Kubernetes HPA scaling based on queue depth rather than CPU/memory metrics. Covers object metrics, custom metrics, and integration patterns.
## Key Content
- Queue depth is a better scaling signal than CPU for worker-style workloads
- Object metrics in HPA allow scaling based on custom Kubernetes objects (ConfigMaps, custom resources)
- Pattern: monitor pending messages in queue → scale workers to process them
- Multi-metric HPA: evaluate several metrics simultaneously, scale to whichever requires most replicas
- KEDA (Kubernetes Event Driven Autoscaler): scale-to-zero capability, 70+ built-in scalers
- KEDA pattern: 0 → 1 via event trigger, 1 → N via HPA metrics feed
- Key insight: scale proactively based on how much work is waiting, not reactively based on how busy workers are
## Relevance to Teleo Pipeline
We don't run Kubernetes, but the patterns are directly transferable to our cron-based system:
1. Replace fixed MAX_WORKERS with queue-depth-based scaling: workers = f(queue_depth)
2. Implement scale-to-zero: if no unprocessed sources, don't spawn workers at all (we already do this)
3. Multi-metric scaling: consider both extract queue depth AND eval queue depth when deciding extraction worker count
4. The proactive scaling insight is key: our dispatcher should look at queue depth, not just worker availability
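The dispatcher rule in points 1 and 4 can be sketched directly. A minimal sketch in Python, using the pipeline parameters stated in this musing (6-worker extract cap, ~12 min mean service time, 5 min dispatch tick); the 25-minute drain horizon and the function names are assumptions for illustration, not part of our current dispatcher:

```python
import math

# Parameters from the musing's system description; DRAIN_TARGET_MIN is an
# assumed tuning knob (clear the backlog within ~5 dispatch ticks).
MAX_WORKERS = 6
SERVICE_MIN = 12.0       # mean minutes per source (extract stage)
DRAIN_TARGET_MIN = 25.0  # target time to drain the current backlog

def desired_workers(queue_depth: int) -> int:
    """KEDA-style sizing: 0 -> 1 on any pending work, 1 -> N from queue depth."""
    if queue_depth == 0:
        return 0  # scale to zero: no sources waiting, spawn nothing
    # Proactive rule: size from how much work is WAITING, not how busy
    # workers are. Each worker clears ~DRAIN_TARGET_MIN / SERVICE_MIN
    # sources inside the drain horizon.
    sources_per_worker = DRAIN_TARGET_MIN / SERVICE_MIN
    needed = math.ceil(queue_depth / sources_per_worker)
    return min(MAX_WORKERS, needed)

def to_spawn(queue_depth: int, active_workers: int) -> int:
    """How many new workers the current dispatch tick should launch."""
    return max(0, desired_workers(queue_depth) - active_workers)
```

With these numbers, a 4-source backlog asks for 2 workers and anything past ~12 queued sources saturates the cap, which is exactly where a multi-metric rule (point 3) would start weighing the eval queue too.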

View file

@ -7,15 +7,10 @@ date: 2026-03-01
domain: entertainment
secondary_domains: [ai-alignment, cultural-dynamics]
format: report
status: null-result
status: unprocessed
priority: high
tags: [content-provenance, C2PA, content-credentials, digital-authenticity, trust-infrastructure]
flagged_for_theseus: ["Content authentication infrastructure as alignment mechanism — provenance verification is a trust coordination problem"]
processed_by: clay
processed_date: 2026-03-11
enrichments_applied: ["community-owned-IP-has-structural-advantage-in-human-made-premium-because-provenance-is-inherent-and-legible.md", "human-made-is-becoming-a-premium-label-analogous-to-organic-as-AI-generated-content-becomes-dominant.md", "GenAI adoption in entertainment will be gated by consumer acceptance not technology capability.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two claims: (1) infrastructure deployment claim documenting consumer-scale rollout of C2PA/Content Credentials, (2) cross-domain mechanism claim connecting content authentication to trust coordination problems. Applied three enrichments to existing entertainment claims about human-made premium, consumer acceptance gating, and community-owned IP provenance advantage. Source provides concrete infrastructure evidence (hardware, software, standards) rather than just conceptual framework. Agent notes correctly identified this as supply-side infrastructure for authenticity premium. No entertainment-specific adoption metrics (studio/platform usage) available in source."
---
## Content
@ -48,13 +43,3 @@ CAI emphasizes convergence among diverse content creators on shared attribution
PRIMARY CONNECTION: [[GenAI adoption in entertainment will be gated by consumer acceptance not technology capability]]
WHY ARCHIVED: Content provenance infrastructure is the supply-side of the authenticity premium — makes human origin verifiable
EXTRACTION HINT: Focus on the INFRASTRUCTURE buildout, not just the concept. Consumer hardware (Pixel 10) + enterprise tools (Adobe) + standards (C2PA 1.2) = provenance becomes ambient, not opt-in.
## Key Facts
- Content Authenticity Initiative expanded to 6,000+ global members by 2026
- Google Pixel 10 launched with C2PA credential support (2026)
- Sony PXW-Z300 released with Content Credentials integration (2026)
- Adobe Content Authenticity for Enterprise launched (2026)
- C2PA Conformance Program established (2026)
- CAWG 1.2 Specification released (2026)
- learn.contentauthenticity.org launched in collaboration with Pixelstream (2026)

View file

@ -6,13 +6,9 @@ url: "https://www.futard.io/launch/62Yxd8gLQ2YYmY2TifhChJG4tVdf4b1oAHcMfwTL2WUu"
date: 2026-03-05
domain: internet-finance
format: data
status: processed
status: unprocessed
tags: [futardio, metadao, futarchy, solana]
event_type: launch
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Factual fundraise data for failed gaming studio raise on Futardio. No novel claims about futarchy mechanisms or gaming economics — just execution data on a specific failed raise. Created entity pages for the company and the fundraise decision market, updated Futardio timeline. The 95% funding gap is notable as market signal but doesn't constitute a generalizable claim about gaming studios or futarchy without additional context/comparison data."
---
## Launch Details
@ -121,11 +117,3 @@ We didn't want complex tokenomics driving our decisions. Futarchy puts the marke
- Token mint: `32CPstBmwccnLoaUqkqiiMVg1nKrQ3YGcM43vFAimeta`
- Version: v0.7
- Closed: 2026-03-06
## Key Facts
- Insert Coin Labs Domin8 game: 232 games played, 55.1 SOL volume, +2.7 SOL house profit (as of 2026-03-05)
- Insert Coin Labs Futardio raise: $50K target, $2,508 committed (5%), refunding status (2026-03-06)
- Insert Coin Labs allocation: 80% team ($40K), 20% liquidity ($10K), $4K monthly burn, ~10 month runway
- Insert Coin Labs roadmap: Domin8 live, 1v1 game ready, Casino hub Q2 2026, Rabbit Royal Q2 2026, Open API Q3 2026
- Insert Coin Labs audit: Excalead, Honorable Mention at Solana Breakpoint 2025

View file

@ -6,7 +6,7 @@ url: https://x.com/rocketresearchx
date: 2026-03-09
domain: internet-finance
format: tweet
status: null-result
status: unprocessed
last_attempted: 2026-03-11
tags: [media, research, trading, market-analysis, solana]
linked_set: metadao-x-landscape-2026-03
@ -24,10 +24,6 @@ processed_by: rio
processed_date: 2026-03-10
extraction_model: "minimax/minimax-m2.5"
extraction_notes: "Source contains only trading/technical analysis content (EMA 8 rejection, market cap comparisons, geopolitical risk assessment). Curator notes explicitly classify this as low priority with null-result likely for mechanism design claims. Only 2 peripheral MetaDAO references. No novel claims about futarchy, Living Capital, or token economics that aren't already covered in existing knowledge base. Content is market commentary rather than mechanism design insight."
processed_by: rio
processed_date: 2026-03-11
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Source contains only trading/technical analysis content (EMA 8 rejection, market cap comparisons, geopolitical risk assessment). Curator notes explicitly classify this as low priority with null-result likely for mechanism design claims. Only 2 peripheral MetaDAO references. No novel claims about futarchy, Living Capital, or token economics that aren't already covered in existing knowledge base. Content is market commentary rather than mechanism design insight."
---
# @rocketresearchx X Archive (March 2026)
@ -53,9 +49,3 @@ extraction_notes: "Source contains only trading/technical analysis content (EMA
- Only 2 MetaDAO references - described as peripheral to ecosystem
- Priority was marked as low by curator
- Extraction hints indicated null-result likely for MetaDAO-specific claims
## Key Facts
- @rocketresearchx is an OG crypto research outfit operating since 2011
- Content has 94% substantive ratio but is trading/technical analysis focused
- Only 2 MetaDAO references in 100 tweets - described as peripheral to ecosystem

View file

@ -7,15 +7,7 @@ date: 2026-01-01
domain: entertainment
secondary_domains: []
format: report
status: processed
processed_by: clay
processed_date: 2026-03-12
claims_extracted:
- consumer-rejection-of-ai-generated-ads-intensifies-as-ai-quality-improves-disproving-the-exposure-leads-to-acceptance-hypothesis
- the-advertiser-consumer-ai-perception-gap-is-a-widening-structural-misalignment-not-a-temporal-communications-lag
- gen-z-hostility-to-ai-generated-advertising-is-stronger-than-millennials-and-widening-making-gen-z-a-negative-leading-indicator-for-ai-content-acceptance
enrichments:
- GenAI adoption in entertainment will be gated by consumer acceptance not technology capability (strong supporting evidence — rejection intensifying, not eroding)
status: unprocessed
priority: high
tags: [consumer-acceptance, ai-content, advertiser-perception-gap, gen-z, authenticity]
---

View file

@ -1,192 +0,0 @@
---
type: source
title: "Futardio: ShopsBuilder AI fundraise goes live"
author: "futard.io"
url: "https://www.futard.io/launch/6qtygHxrFzF3tucXcy6EzbwZJBRbiuZAZrsXapXZLxE3"
date: 2026-03-12
domain: internet-finance
format: data
status: unprocessed
tags: [futardio, metadao, futarchy, solana]
event_type: launch
---
## Launch Details
- Project: ShopsBuilder AI
- Description: The AI Bridge Layer for On-Chain Chat Commerce
- Funding target: $420,000.00
- Total committed: N/A
- Status: Live
- Launch date: 2026-03-12
- URL: https://www.futard.io/launch/6qtygHxrFzF3tucXcy6EzbwZJBRbiuZAZrsXapXZLxE3
## Team / Description
**The internet is becoming agentic. Commerce hasn't caught up. We built the infrastructure that connects them.**
ShopsBuilder is raising to accelerate the global infrastructure layer that bridges Web2 merchants into the age of AI-native, on-chain commerce — operating inside the messaging platforms where 3+ billion people already live.
---
## What We've Already Built
We did not start from zero.
- **100,000+ customers** have transacted through ShopsBuilder-powered stores
- **Live merchant network** operating Telegram-native stores across physical goods, digital products, and services
- **AI agent system deployed** — every store gets its own autonomous agents: product discovery, order handling, customer support, follow-ups
- **First version of the open marketplace published** — decentralized merchant discovery layer
- **Full payment stack live**: crypto, credit cards, custom payment app integrations
- **Complete commerce stack**: catalog CRM, storefronts, unified marketplace, network of personal agents, and more
This raise allows us to scale globally, enable AI agents to turn business intent into autonomous commerce operations, and connect demand from users and agents to existing businesses across platforms like Shopify, Amazon, and others.
---
## The Problem
**Commerce is shifting to chat and AI agents, but the infrastructure was built for humans using browsers.**
**Demand discovery** is moving to AI interfaces while merchants still depend on centralized marketplaces that control ranking, margins, and customer access.
**Commerce infrastructure remains fragmented** across Shopify, Amazon, WooCommerce, marketplaces, and payment providers — each requiring integrations, operational effort, and technical expertise.
Crypto payments exist, but the **full commerce lifecycle that real merchants require is still missing** — authorization, escrow, capture, refunds, cancellations, and disputes.
---
## The Bridge
This is ShopsBuilder's core insight:
**The future of commerce is not storefronts. It is agents transacting with agents.**
A customer talks to their AI assistant. The assistant understands intent. It discovers the right merchant, shows it to the customer, and initiates a purchase. The payment settles on-chain. The merchant fulfills the order.
The merchant never knows the sale came through an agentic channel. To them, it is just another order. But underneath, a new layer of commerce infrastructure made it possible — invisible, automated, and unstoppable.
**ShopsBuilder is the bridge layer** that connects existing Web2 businesses into this new reality — without requiring merchants to understand crypto, AI, or protocols. They get a fully autonomous operation. The infrastructure handles everything else.
---
## Business intent → Execution
**AI doesn't just discover demand — it can operate businesses.**
Merchants no longer need to manually configure every system, integration, or market expansion.
A founder can say:
*"Launch our products in market X."*
*"Start running ads."*
*"Accept donations in crypto payments."*
AI agents interpret this **business intent** and execute it across the ShopsBuilder infrastructure — configuring payments, storefronts, integrations, compliance, and distribution automatically.
**Business intent becomes executable commerce infrastructure.**
---
## ShopsBuilder provides the core infrastructure layer for agentic commerce.
The system combines three primitives:
1. **Merchant AI agents**: every store receives an autonomous agent that handles discovery, orders, customer support, and follow-ups.
2. **Universal commerce bridge**: existing Web2 merchants (Shopify, marketplaces, independent stores) can expose their products to AI agents without changing their operations.
3. **On-chain payment lifecycle**: a complete crypto payment stack supporting authorization, escrow, capture, refunds, cancellations, and dispute resolution.
---
## Why Now
- AI agents are moving from assistants to autonomous economic actors — the infrastructure for this transition does not yet exist at scale
- Crypto payment adoption in commerce is accelerating but lacks the complete primitive stack merchants need
- x402 and emerging agent payment protocols are creating a new interoperability layer — ShopsBuilder is positioned to be the merchant-side infrastructure for this ecosystem
- We have 100,000+ real customers and live merchant traction
---
## Market & Competitive Landscape
Existing solutions are fragmented:
- AI tools generate content but are not designed to operate businesses
- Crypto payment processors support payments but lack the full commerce lifecycle
- Marketplaces remain centralized and extractive, controlling discovery and margins
ShopsBuilder combines these layers into one open infrastructure.
---
## Roadmap
| Quarter | Milestones |
| ----------- | ---------------------------------------------------------------------------------------------------------------------- |
| **Q2 2026** | Open-source DAO marketplace launch; Web storefront access; UCP native marketplace |
| **Q3 2026** | Expansion to WhatsApp, Instagram, and Discord commerce interfaces; merchant onboarding tools |
| **Q4 2026** | Merchant bridge layer (Shopify / WooCommerce / marketplaces); x402-compatible payment layer; EVM multi-chain expansion |
| **Q1 2027** | AI agent SDK; agent-to-agent commerce flows via x402 |
| **2027+** | Universal agentic commerce API; cross-platform merchant identity and reputation layer |
---
## Use of Funds
Raise target: $336,000
Runway: ~12 months
Monthly burn: ~$28k
---
## Notes
ShopsBuilder is modular by design.
The core components — payment infrastructure, merchant agents,
and the DAO marketplace — can evolve independently.
If one layer fails to gain adoption, development can focus on the
components that demonstrate the strongest product-market fit.
If a particular product direction fails to achieve adoption,
treasury governance allows the community to redirect development
toward the most promising parts of the infrastructure -
AI agents, payment protocols, or the DAO marketplace layer.
## Potential outcome
If ShopsBuilder reaches 100,000 active merchants with ~$250 annual infrastructure revenue per merchant, annual revenue would reach ~$25M. This represents a realistic outcome for a global agentic commerce infrastructure layer.
## Vision
ShopsBuilder is building the world's AI-native, on-chain commerce infrastructure — the invisible bridge layer that connects the 200M+ Web2 businesses into an agentic economy where AI handles discovery, conversation, and payment automatically.
Commerce is going agentic. ShopsBuilder is the infrastructure that makes it work.
## Links
- Website: https://shopsbuilder.app
- Twitter: https://x.com/shopsbuilder
- Telegram: https://t.me/shopsbuilder
## Raw Data
- Launch address: `6qtygHxrFzF3tucXcy6EzbwZJBRbiuZAZrsXapXZLxE3`
- Token: 8fX (8fX)
- Token mint: `8fXTttGGAKeZZ9DhLhE7Peh3hQCcqCJdHhpmZwdEmeta`
- Version: v0.7