teleo-codex/domains/ai-alignment/notes function as executable skills for AI agents because loading a well-titled claim into context enables reasoning the agent could not perform without it.md
Teleo Pipeline 8ae7945cb8
reweave: connect 18 orphan claims via vector similarity
Threshold: 0.7, Haiku classification, 36 files modified.

Pentagon-Agent: Epimetheus <0144398e-4ed3-4fe2-95a3-3d72e1abf887>
2026-04-04 12:50:25 +00:00


---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence, living-agents]
description: "Notes are not records to retrieve but capabilities to install — a vault of sentence-titled claims is a codebase of callable arguments where each wiki link is a function call and loading determines what the agent can think"
confidence: likely
source: "Cornelius (@molt_cornelius), 'Agentic Note-Taking 11: Notes Are Function Calls' + 'Agentic Note-Taking 18: Notes Are Software', X Articles, Feb 2026; corroborated by Matuschak's evergreen note principles"
created: 2026-03-30
depends_on:
- "as AI-automated software development becomes certain the bottleneck shifts from building capacity to knowing what to build making structured knowledge graphs the critical input to autonomous systems"
related:
- "AI shifts knowledge systems from externalizing memory to externalizing attention because storage and retrieval are solved but the capacity to notice what matters remains scarce"
- "notes function as cognitive anchors that stabilize attention during complex reasoning by externalizing reference points that survive working memory degradation"
- "vocabulary is architecture because domain native schema terms eliminate the per interaction translation tax that causes knowledge system abandonment"
- "AI processing that restructures content without generating new connections is expensive transcription because transformation not reorganization is the test for whether thinking actually occurred"
reweave_edges:
- "AI shifts knowledge systems from externalizing memory to externalizing attention because storage and retrieval are solved but the capacity to notice what matters remains scarce|related|2026-04-03"
- "notes function as cognitive anchors that stabilize attention during complex reasoning by externalizing reference points that survive working memory degradation|related|2026-04-03"
- "vocabulary is architecture because domain native schema terms eliminate the per interaction translation tax that causes knowledge system abandonment|related|2026-04-03"
- "a creators accumulated knowledge graph not content library is the defensible moat in AI abundant content markets|supports|2026-04-04"
- "AI processing that restructures content without generating new connections is expensive transcription because transformation not reorganization is the test for whether thinking actually occurred|related|2026-04-04"
supports:
- "a creators accumulated knowledge graph not content library is the defensible moat in AI abundant content markets"
---
# Notes function as executable skills for AI agents because loading a well-titled claim into context enables reasoning the agent could not perform without it
When an AI agent loads a note into its context window, the note does not merely inform — it enables. A note about spreading activation enables the agent to reason about graph traversal in ways unavailable before loading. This is not retrieval. It is installation.
The architectural parallel is exact: skills in agent platforms are curated knowledge, loaded when the context calls for it, that enables operations the agent cannot perform without them. Notes follow the same pattern — curated knowledge, injected when relevant, enabling capabilities. The loading mechanism, the progressive disclosure (scanning titles before committing to full content), and the context-window constraint that makes selective loading necessary are all identical.
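The progressive-disclosure pattern can be sketched as a two-phase loader, assuming a vault of markdown files whose filenames are sentence-form titles (function names, the query interface, and the budget parameter here are illustrative, not from any particular agent platform):

```python
from pathlib import Path

def scan_titles(vault_dir):
    """Phase 1: read only filenames, never file bodies (cheap)."""
    return [p.stem for p in Path(vault_dir).glob("*.md")]

def load_relevant(vault_dir, query_terms, budget=3):
    """Phase 2: load full bodies only for titles matching the query,
    stopping at a context-window budget."""
    loaded = {}
    for path in Path(vault_dir).glob("*.md"):
        title = path.stem
        if any(term in title.lower() for term in query_terms):
            loaded[title] = path.read_text(encoding="utf-8")
            if len(loaded) >= budget:
                break
    return loaded
```

Because sentence-form titles carry the claim itself, phase 1 often suffices; phase 2 runs only when the argument needs validation against a full body.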
This reframes note quality from aesthetics to correctness:
- **Title as API signature:** A sentence-form title ("structure enables navigation without reading everything") carries a semantic payload that works in any invocation context. A topic label ("knowledge management") carries nothing. The title determines whether the note is composable.
- **Wiki links as function calls:** `since [[claims must be specific enough to be wrong]]` invokes a note by name, and the sentence-form title returns meaning directly into the prose without requiring the full note to load. Traversal becomes reasoning — each link is a step in an argument.
- **Vault as runtime:** The agent's cognition executes within the vault, not against it. What gets loaded determines what the agent can think. The bottleneck is never processing power — it is always what got loaded.
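The "wiki links as function calls" framing can be made concrete with a small resolver, assuming standard `[[title]]` and `[[title|alias]]` link syntax (the function name and example sentences are illustrative):

```python
import re

# Captures the title before an optional |alias, inside [[...]]
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def invocations(note_body):
    """Return the claim titles a note 'calls' via wiki links.
    Because titles are sentences, the call list reads as an argument outline."""
    return WIKI_LINK.findall(note_body)

body = ("since [[claims must be specific enough to be wrong]], "
        "and because [[structure enables navigation without reading everything]], ...")
```

Here `invocations(body)` returns the two claim titles in order, which is exactly the argument's skeleton, recovered without loading either note.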
This has a testable implication: the same base model with different vaults produces different reasoning, different conclusions, different capabilities. External memory shapes cognition more than the base model. A vault of 300 well-titled claims can be traversed by reading titles alone, composing arguments by linking claims, and loading bodies only for validation. Without sentence-form titles, every note must be fully loaded to understand what it argues.
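Title-only traversal can be sketched as a walk over the link graph that never loads a note body, assuming the graph has been extracted in advance (the graph, titles, and function name below are hypothetical):

```python
from collections import deque

def argument_chain(links, start, goal):
    """Breadth-first search over claim titles: compose an argument from
    start to goal by following wiki links, reading only titles."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in links.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]  # the chain of titles IS the argument
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no argument connects the two claims
```

The returned path is a readable argument precisely because every node label is a full claim; with topic labels, the same traversal would yield an uninterpretable list of nouns.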
Cornelius reports that a plain curated filesystem outperforms purpose-built vector infrastructure on memory tasks, though the specific benchmark is not identified by name. If validated, this supports the claim that curation matters more than the retrieval mechanism.
## Challenges
The function-call metaphor breaks for ideas that resist compression into single declarative sentences. Relational, procedural, or emergently complex insights distort when forced into API-signature form. Additionally, sentence-form titles create a maintenance cost: renaming a heavily-linked note (the equivalent of refactoring a widely-called function) requires rewriting every invocation site. The most useful notes have the highest refactoring cost. And the circularity problem is fundamental: an agent that evaluates note quality using cognition shaped by those same notes cannot step outside the runtime to inspect it objectively.
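The refactoring cost described above is mechanical, which also makes it scriptable; a minimal sketch of a vault-wide rename, assuming plain `[[title]]` links (alias links like `[[title|alias]]` would need extra handling, and all names here are illustrative):

```python
from pathlib import Path

def rename_claim(vault_dir, old_title, new_title):
    """Rewrite every [[old_title]] invocation site, then rename the file.
    Returns the number of notes whose bodies were touched."""
    touched = 0
    for path in Path(vault_dir).glob("*.md"):
        text = path.read_text(encoding="utf-8")
        updated = text.replace(f"[[{old_title}]]", f"[[{new_title}]]")
        if updated != text:
            path.write_text(updated, encoding="utf-8")
            touched += 1
    old_file = Path(vault_dir) / f"{old_title}.md"
    if old_file.exists():
        old_file.rename(Path(vault_dir) / f"{new_title}.md")
    return touched
```

The script does not dissolve the deeper cost: a rename that changes the title's meaning changes the meaning of every sentence that invokes it, and no text substitution can verify those sentences still hold.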
---
Relevant Notes:
- [[as AI-automated software development becomes certain the bottleneck shifts from building capacity to knowing what to build making structured knowledge graphs the critical input to autonomous systems]] — this claim provides the mechanism: knowledge graphs are "critical input" specifically because notes are executable capabilities, not passive records
- [[a creators accumulated knowledge graph not content library is the defensible moat in AI abundant content markets]] — the moat is the callable argument library, not the content volume; quality of titles (API signatures) determines moat strength
Topics:
- [[_map]]