teleo-codex/domains/ai-alignment/notes function as cognitive anchors that stabilize attention during complex reasoning by externalizing reference points that survive working memory degradation.md
Teleo Pipeline a5680f8ffa
reweave: connect 13 orphan claims via vector similarity
Threshold: 0.7, Haiku classification, 32 files modified.

Pentagon-Agent: Epimetheus <0144398e-4ed3-4fe2-95a3-3d72e1abf887>
2026-04-04 12:52:43 +00:00


type: claim
domain: ai-alignment
secondary_domains: collective-intelligence
description: Notes externalize mental model components into fixed reference points; when attention degrades (biological interruption or LLM context dilution), reconstruction from anchors reloads known structure, while rebuilding from memory risks regenerating a different structure
confidence: likely
source: Cornelius (@molt_cornelius), 'Agentic Note-Taking 10: Cognitive Anchors', X Article, February 2026; grounded in Cowan's working memory research (~4 items), Sophie Leroy's attention residue research (23-minute recovery), and Clark & Chalmers' extended mind thesis
created: 2026-03-31
depends_on: long context is not memory because memory requires incremental knowledge accumulation and stateful change not stateless input processing
supports: AI shifts knowledge systems from externalizing memory to externalizing attention because storage and retrieval are solved but the capacity to notice what matters remains scarce
reweave_edges:
  - AI shifts knowledge systems from externalizing memory to externalizing attention because storage and retrieval are solved but the capacity to notice what matters remains scarce|supports|2026-04-03
  - reweaving old notes by asking what would be different if written today is structural maintenance not optional cleanup because stale notes actively mislead agents who trust curated content unconditionally|related|2026-04-04
related: reweaving old notes by asking what would be different if written today is structural maintenance not optional cleanup because stale notes actively mislead agents who trust curated content unconditionally

notes function as cognitive anchors that stabilize attention during complex reasoning by externalizing reference points that survive working memory degradation

Working memory holds roughly four items simultaneously (Cowan). A multi-part argument exceeds this almost immediately. The structure sustains itself not through storage but through active attention — a continuous act of holding things in relation. When attention shifts, the relations dissolve, leaving fragments that can be reconstructed but not seamlessly continued.

Notes function as cognitive anchors that externalize pieces of the mental model into fixed reference points persisting regardless of attention state. The critical distinction is between reconstruction and rebuilding. Reconstruction from anchors reloads a known structure. Rebuilding from degraded memory regenerates the structure, and the regeneration itself can alter it: you get a structure back, but not necessarily the same one.
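The distinction can be made concrete with a toy sketch (all names here are my own illustration, not from the source): reconstruction deserializes the exact structure that was anchored, while rebuilding reassembles fragments in whatever order degraded recall returns them, so the result may fingerprint differently.

```python
import json
import hashlib
import random

def fingerprint(structure):
    """Order-sensitive hash of a structure, standing in for an argument's shape."""
    return hashlib.sha256(json.dumps(structure).encode()).hexdigest()[:12]

# A multi-part argument: the ORDER of relations is part of the structure.
argument = ["premise A", "premise B", "link A->B", "conclusion"]

# Reconstruction: reload the anchored serialization verbatim.
anchor = json.dumps(argument)
reconstructed = json.loads(anchor)

# Rebuilding: regenerate from fragments; degraded recall may reorder them.
fragments = argument.copy()
random.shuffle(fragments)  # stand-in for lossy, order-scrambling recall
rebuilt = fragments

same_after_reconstruction = fingerprint(reconstructed) == fingerprint(argument)  # always True
same_after_rebuilding = fingerprint(rebuilt) == fingerprint(argument)            # not guaranteed
```

The parts are all still there after rebuilding; what is at risk is the arrangement, which is exactly what working memory was holding.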

For LLM agents, this is architectural rather than metaphorical. Attention over the context window follows a gradient: early tokens receive sharp, focused attention, while later tokens compete with everything that precedes them. The first roughly 40% of the window functions as a "smart zone" where reasoning is sharpest. Notes loaded early in this zone become stable reference points that the attention mechanism returns to even as overall attention quality declines. Loading order is therefore an engineering decision: the first notes loaded create the strongest anchors.
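Loading order can be treated as a packing problem. A minimal sketch, assuming the ~40% smart-zone figure from the text (function names, the token heuristic, and the budget numbers are my own, not from the source): reserve the smart-zone share of the token budget for anchor notes, then fill the remainder with working material.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    return max(1, len(text) // 4)

def pack_context(anchors, working_notes, budget_tokens, smart_zone=0.4):
    """Place anchor notes first, capped at the smart-zone share of the
    budget, then append working notes until the budget is exhausted."""
    context, used = [], 0
    anchor_cap = int(budget_tokens * smart_zone)
    for note in anchors:
        cost = estimate_tokens(note)
        if used + cost > anchor_cap:
            break  # anchors beyond the smart zone lose their priority
        context.append(note)
        used += cost
    for note in working_notes:
        cost = estimate_tokens(note)
        if used + cost > budget_tokens:
            break
        context.append(note)
        used += cost
    return context

session = pack_context(
    anchors=["MOC: cognitive anchors, key claims and open questions"],
    working_notes=["draft paragraph on attention residue", "open question list"],
    budget_tokens=200,
)
```

The design choice the sketch encodes: anchors are capped, not merely ordered, so a bloated anchor set cannot crowd the working material out of the window entirely.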

Maps of Content exploit this by compressing an entire topic's state into a single high-priority anchor loaded at session start. Sophie Leroy's research found that context switching can take 23 minutes to recover from — 23 minutes of cognitive drag while fragments of the previous task compete for attention. A well-designed MOC compresses that recovery toward zero by presenting the arrangement immediately.

There is an irreducible floor to switching cost. Research on micro-interruptions found that disruptions as brief as 2.8 seconds can double error rates on the primary task. This suggests a minimum attention quantum — a fixed switching cost that no design optimization can eliminate. Anchoring reduces the variable cost of reconstruction within a topic, but the fixed cost of redirecting attention between anchored states has a floor. The design implication: reduce switching frequency rather than switching cost.
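The implication "reduce switching frequency rather than switching cost" is a scheduling choice, and a toy sketch (my own illustration, not from the source) makes the arithmetic visible: interleaving tasks across topics pays the fixed switch cost on nearly every step, while batching by topic pays it once per topic boundary.

```python
def switch_count(schedule):
    """Number of topic transitions in a task schedule."""
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)

tasks = ["alignment", "notes", "alignment", "notes", "alignment", "notes"]

interleaved = tasks       # alternate topics: switch on almost every step
batched = sorted(tasks)   # group tasks by topic before executing

switches_interleaved = switch_count(interleaved)  # 5 switches
switches_batched = switch_count(batched)          # 1 switch
```

With a fixed per-switch cost that no design can eliminate, total drag scales with the switch count, and the switch count is the quantity the schedule controls.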

Challenges

The "smart zone" at ~40% of context is Cornelius's observation from practice, not a finding from controlled experimentation across models. Different model architectures may exhibit different attention gradients. The 2.8-second micro-interruption finding and the 23-minute attention residue finding are cited without specific study names or DOIs — primary sources have not been independently verified through the intermediary. The claim that MOCs compress recovery "toward zero" may overstate the effect — some re-orientation cost likely persists even with well-designed navigation aids.


Relevant Notes:

Topics: