extract: 2026-02-05-mit-tech-review-misunderstood-time-horizon-graph

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
Teleo Agents 2026-03-23 00:19:05 +00:00
parent 5ce90154fe
commit f5d067ce01
3 changed files with 24 additions and 1 deletion


@@ -21,6 +21,12 @@ This is the practitioner-level manifestation of [[AI is collapsing the knowledge
---
### Additional Evidence (extend)
*Source: [[2026-02-05-mit-tech-review-misunderstood-time-horizon-graph]] | Added: 2026-03-23*
The speed asymmetry in AI capability metrics compounds cognitive debt: if a model produces work equivalent to 12 human-hours in just minutes, humans cannot review it in real time. The METR time horizon metric measures task complexity but not execution speed, obscuring the verification bottleneck where AI output velocity exceeds human comprehension bandwidth.
Relevant Notes:
- [[AI capability and reliability are independent dimensions because Claude solved a 30-year open mathematical problem while simultaneously degrading at basic program execution during the same session]] — cognitive debt makes capability-reliability gaps invisible until failure
- [[AI is collapsing the knowledge-producing communities it depends on creating a self-undermining loop that collective intelligence can break]] — cognitive debt is the micro-level version of knowledge commons erosion


@@ -35,6 +35,12 @@ The International AI Safety Report 2026 (multi-government committee, February 20
---
### Additional Evidence (extend)
*Source: [[2026-02-05-mit-tech-review-misunderstood-time-horizon-graph]] | Added: 2026-03-23*
METR's time horizon metric measures task difficulty by human completion time, not model processing time. A model with a 5-hour time horizon completes tasks that take humans 5 hours, but may finish them in minutes. This speed asymmetry is not captured in the metric itself, meaning the gap between theoretical capability (task completion) and deployment impact includes both adoption lag and the unmeasured throughput advantage that organizations fail to utilize.
Relevant Notes:
- [[AI capability and reliability are independent dimensions because Claude solved a 30-year open mathematical problem while simultaneously degrading at basic program execution during the same session]] — capability exists but deployment is uneven
- [[knowledge embodiment lag means technology is available decades before organizations learn to use it optimally creating a productivity paradox]] — the general pattern this instantiates


@@ -7,9 +7,13 @@ date: 2026-02-05
domain: ai-alignment
secondary_domains: []
format: article
- status: unprocessed
+ status: enrichment
priority: medium
tags: [metr, time-horizon, capability-measurement, public-understanding, AI-progress, media-interpretation]
processed_by: theseus
processed_date: 2026-03-23
enrichments_applied: ["the gap between theoretical AI capability and observed deployment is massive across all occupations because adoption lag not capability limits determines real-world impact.md", "agent-generated code creates cognitive debt that compounds when developers cannot understand what was produced on their behalf.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content
@@ -47,3 +51,10 @@ Note: Full article content was not accessible via WebFetch in this session — t
PRIMARY CONNECTION: [[the gap between theoretical AI capability and observed deployment is massive across all occupations because adoption lag not capability limits determines real-world impact]]
WHY ARCHIVED: Methodological context for the METR time horizon metric — the extractor should understand this clarification before extracting claims from the METR time horizon source
EXTRACTION HINT: Lower extraction priority — primarily methodological. Consider as context document rather than claim source. Full article access needed before extraction.
## Key Facts
- MIT Technology Review published an explainer on METR's time horizon metric on February 5, 2026
- METR time horizon measures task difficulty by human completion time, not model processing time
- A model with a 12-hour time horizon can complete 12-hour human tasks in minutes
- The metric is commonly misinterpreted as measuring how long the model itself takes to work
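The arithmetic behind the speed asymmetry in the facts above can be sketched as follows — a minimal illustration with hypothetical numbers, not part of the METR methodology itself:

```python
# Sketch of the METR time-horizon speed asymmetry (hypothetical numbers).
# The metric scores a model by the *human* time of tasks it completes, so a
# model finishing a 12-hour human task in 10 minutes has the same time
# horizon as one taking 12 hours: the throughput gap goes unmeasured.

def speed_asymmetry(human_hours: float, model_minutes: float) -> float:
    """Ratio of human task time to model execution time."""
    return (human_hours * 60) / model_minutes

# A 12-hour human task completed by the model in 10 minutes:
ratio = speed_asymmetry(human_hours=12, model_minutes=10)
print(f"model output arrives {ratio:.0f}x faster than a human could produce it")
```

The same ratio bounds the verification side: each 10 minutes of model output carries roughly 12 human-hours of review work, which is the bottleneck the notes describe.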