| type | domain | secondary_domains | description | confidence | source | created | depends_on | related | reweave_edges | sourced_from |
|---|---|---|---|---|---|---|---|---|---|---|
| claim | ai-alignment | | Condition-based maintenance at three timescales (per-write schema validation, session-start health checks, accumulated-evidence structural audits) catches qualitatively different problem classes; scheduled maintenance misses condition-dependent failures | likely | Cornelius (@molt_cornelius) 'Agentic Note-Taking 19: Living Memory', X Article, February 2026; maps to nervous system analogy (reflexive/proprioceptive/conscious); corroborated by reconciliation loop pattern (desired state vs actual state comparison) | 2026-03-31 | | | | |
Three concurrent maintenance loops operating at different timescales catch different failure classes, because fast reflexive checks, medium proprioceptive scans, and slow structural audits each detect problems invisible to the other scales.
Knowledge system maintenance requires three concurrent loops operating at different timescales, each detecting a qualitatively different class of problem that the other loops cannot see.
The fast loop is reflexive. Schema validation fires on every file write. Auto-commit runs after every change. Zero judgment, deterministic results. A malformed note that slipped past this layer would immediately propagate — linked from MOCs, cited in other notes, indexed for search — with each consumer ingesting the broken state before any slower review could catch it. The reflex must fire faster than the problem propagates.
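A minimal sketch of the fast loop's write gate. The field names and allowed confidence values here are illustrative assumptions, not the vault's actual schema:

```python
# Fast loop: deterministic schema validation fired on every file write.
# REQUIRED_FIELDS and ALLOWED_CONFIDENCE are invented for illustration.
REQUIRED_FIELDS = {"type", "description", "confidence"}
ALLOWED_CONFIDENCE = {"speculative", "likely", "established"}

def validate_note(frontmatter: dict) -> list[str]:
    """Return a list of violations; an empty list means the write may proceed."""
    errors = []
    missing = REQUIRED_FIELDS - frontmatter.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    conf = frontmatter.get("confidence")
    if conf is not None and conf not in ALLOWED_CONFIDENCE:
        errors.append(f"invalid confidence: {conf!r}")
    return errors
```

Because the check is pure and deterministic, it can run as a pre-write hook with no model judgment in the loop.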
The medium loop is proprioceptive. Session-start health checks compare the system's actual state to its desired state and surface the delta. Orphan notes detected. Index freshness verified. Processing queue reviewed. This is the system asking "where am I?" — not at the granularity of individual writes but at the granularity of sessions. It catches drift that accumulates across multiple writes but falls below the threshold of any individual write-level check.
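One medium-loop check, sketched as a pure function. The orphan definition used here (no incoming and no outgoing links) is an illustrative assumption, not necessarily the vault's actual rule:

```python
# Medium loop: a session-start orphan check comparing desired state
# ("every note participates in the link graph") to actual link structure.
def find_orphans(notes: set[str], links: dict[str, set[str]]) -> set[str]:
    """Return notes with neither incoming nor outgoing wiki links."""
    linked_to = {target for targets in links.values() for target in targets}
    has_outgoing = {source for source, targets in links.items() if targets}
    return notes - linked_to - has_outgoing
```

Run against the whole graph once per session, this catches drift that no single write-time check could see.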
The slow loop is conscious review. Structural audits triggered when enough observations accumulate, meta-cognitive evaluation of friction patterns, trend analysis across sessions. These require loading significant context and reasoning about patterns rather than checking items. The slow loop catches what no individual check can detect: gradual methodology drift, assumption invalidation, structural imbalances that emerge only over time.
All three loops implement the same pattern — declare desired state, measure divergence, correct — but they differ in what "desired state" means, how divergence is measured, and how correction happens. The fast loop auto-fixes. The medium loop suggests. The slow loop logs for review.
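The shared declare/measure/correct pattern with its three correction modes can be sketched as one dispatch, assuming invented loop names and a divergence score of zero meaning "healthy":

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Loop:
    """One reconciliation loop: declare desired state, measure divergence, correct."""
    name: str
    measure: Callable[[], int]   # divergence from desired state; 0 = healthy
    correction: str              # "auto_fix" | "suggest" | "log"

def reconcile(loops: list[Loop]) -> dict[str, str]:
    """Run every loop once and record what each one did (illustrative sketch)."""
    actions: dict[str, str] = {}
    for loop in loops:
        if loop.measure() == 0:
            actions[loop.name] = "ok"
        elif loop.correction == "auto_fix":
            actions[loop.name] = "fixed"      # fast loop: correct immediately
        elif loop.correction == "suggest":
            actions[loop.name] = "suggested"  # medium loop: surface the delta
        else:
            actions[loop.name] = "logged"     # slow loop: queue for review
    return actions
```

The point of the shared shape is that only the correction policy differs; the measurement contract is identical across timescales.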
Critically, none of these run on schedules. Condition-based triggers fire when actual conditions warrant — not at fixed intervals, but when orphan notes exceed a threshold, when a Map of Content outgrows navigability, when contradictory claims accumulate past tolerance. The system responds to its own state. This is homeostasis, not housekeeping.
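Condition-based triggering reduces to comparing measured state against declared tolerances; the threshold names and values below are invented for illustration:

```python
# Homeostasis, not housekeeping: fire only when actual state crosses a
# declared tolerance, never on a clock. Thresholds are illustrative.
THRESHOLDS = {"orphan_notes": 5, "moc_entries": 30, "contradictory_claims": 3}

def due_triggers(state: dict) -> list[str]:
    """Return the conditions whose measured value exceeds its tolerance."""
    return [name for name, limit in THRESHOLDS.items() if state.get(name, 0) > limit]
```

A scheduled job would run regardless of state; this fires exactly when, and only when, the system's own measurements warrant it.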
Additional Evidence (supporting)
Triggers as test-driven knowledge work (AN12, Cornelius): The three maintenance loops implement the equivalent of test-driven development for knowledge systems. Kent Beck formalized TDD for code; the parallel is exact. Per-note checks (valid schema, description exists, wiki links resolve, title passes composability test) are unit tests. Graph-level checks (orphan detection, dangling links, MOC coverage, connection density) are integration tests. Specific previously-broken invariants that keep getting checked are regression tests. The session-start hook is the CI/CD pipeline — it runs the suite automatically at every boundary.

This vault implements 12 reconciliation checks at session start: inbox pressure per subdirectory, orphan notes, dangling links, observation accumulation, tension accumulation, MOC sizing, stale pipeline batches, infrastructure ideas, pipeline pressure, schema compliance, experiment staleness, plus threshold-based task generation. Each check declares a desired state and measures actual divergence. Each violation auto-creates a task; each resolution auto-closes it. The workboard IS a test report, regenerated at every session boundary.

Agents face 100% prospective memory failure across sessions (compared to 30-50% in human prospective memory research), making programmable triggers structurally necessary rather than merely convenient.
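The auto-create/auto-close task mechanics can be sketched as a pure set operation; the check and task names are invented for illustration:

```python
# Workboard regeneration: each failing check auto-creates a task,
# each passing check auto-closes the matching task.
def reconcile_workboard(results: dict[str, bool], open_tasks: set[str]) -> set[str]:
    """results maps check name -> passed; return the new open-task set."""
    failed = {name for name, passed in results.items() if not passed}
    passed = {name for name, ok in results.items() if ok}
    return (open_tasks | failed) - passed
```

Tasks not covered by any check (here, "stale") survive reconciliation untouched, which is what makes the workboard a regenerated test report rather than a manually curated list.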
Challenges
The three-timescale architecture is observed in one production knowledge system and mapped to a nervous system analogy. Whether three is the optimal number of maintenance loops (versus two or four) is untested. The condition-based triggering advantage over scheduled maintenance is asserted but not quantitatively compared — there may be cases where scheduled maintenance catches issues that condition-based triggers miss because the trigger thresholds were set incorrectly. Additionally, the slow loop's dependence on "enough observations accumulating" creates a cold-start problem for new systems with insufficient data for pattern detection.
Relevant Notes:
- methodology hardens from documentation to skill to hook as understanding crystallizes and each transition moves behavior from probabilistic to deterministic enforcement — the fast maintenance loop (schema validation hooks) is an instance of fully hardened methodology; the medium and slow loops correspond to skill-level and documentation-level enforcement respectively
- iterative agent self-improvement produces compounding capability gains when evaluation is structurally separated from generation — the three-timescale pattern is a specific implementation of structural separation: each loop evaluates at a different granularity, preventing any single evaluation scale from becoming the only quality gate
Topics: