auto-fix: strip 4 broken wiki links

Pipeline auto-fixer: removed [[ ]] brackets from links
that don't resolve to existing claims in the knowledge base.
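
For context, a minimal sketch of the kind of resolution check the auto-fixer performs. This is not the pipeline's actual code; the helper names (`claim_exists`, `strip_broken_wikilinks`) are hypothetical, and it assumes each claim lives in a markdown file named after its title, which this commit does not confirm.

```python
import re
from pathlib import Path

# Matches [[wiki-style links]]; the capture group is the claim title.
WIKILINK = re.compile(r"\[\[([^\[\]]+)\]\]")

def claim_exists(title: str, kb_dir: Path) -> bool:
    # Assumption: a link resolves iff a markdown file with that title exists in the KB.
    return (kb_dir / f"{title}.md").is_file()

def strip_broken_wikilinks(text: str, kb_dir: Path) -> str:
    # Keep [[links]] that resolve; unwrap the rest to plain text.
    def replace(match: re.Match) -> str:
        title = match.group(1)
        return match.group(0) if claim_exists(title, kb_dir) else title
    return WIKILINK.sub(replace, text)
```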
Teleo Agents 2026-04-14 17:46:32 +00:00
parent f97f89cf29
commit 1634a085c7
2 changed files with 3 additions and 3 deletions


@@ -95,7 +95,7 @@ Cory's point: even without formalizing the math, active inference as a **guiding
 3. Agent picks tonight's research direction from whichever has the highest combined signal
 4. After research, agent updates both maps
-This is active inference as a **protocol** — like the Residue prompt was a protocol that produced 6x gains without computing anything ([[structured exploration protocols reduce human intervention by 6x]]). The math formalizes why it works; the protocol captures the benefit.
+This is active inference as a **protocol** — like the Residue prompt was a protocol that produced 6x gains without computing anything (structured exploration protocols reduce human intervention by 6x). The math formalizes why it works; the protocol captures the benefit.
 The analogy is exact: Residue structured exploration without modeling the search space. Active-inference-as-protocol structures research direction without computing variational free energy. Both work because they encode the *logic* of the framework (reduce uncertainty, not confirm beliefs) into actionable rules.


@@ -117,9 +117,9 @@ Shared theory underlying this domain's analysis, living in foundations/collectiv
 Claims where the evidence is thin, the confidence is low, or existing claims tension against each other. These are the live edges — if you want to contribute, start here.
 - **Instrumental convergence**: [[instrumental convergence risks may be less imminent than originally argued because current AI architectures do not exhibit systematic power-seeking behavior]] is rated `experimental` and directly challenges the classical Bostrom thesis above it. Which is right? The evidence is genuinely mixed.
-- **Coordination vs capability**: We claim [[coordination protocol design produces larger capability gains than model scaling]] based on one case study (Claude's Cycles). Does this generalize? Or is Knuth's math problem a special case?
+- **Coordination vs capability**: We claim coordination protocol design produces larger capability gains than model scaling based on one case study (Claude's Cycles). Does this generalize? Or is Knuth's math problem a special case?
 - **Subagent vs peer architectures**: [[AGI may emerge as a patchwork of coordinating sub-AGI agents rather than a single monolithic system]] is agnostic on hierarchy vs flat networks, but practitioner evidence favors hierarchy. Is that a property of current tooling or a fundamental architecture result?
 - **Pluralistic alignment feasibility**: Five different approaches in the Pluralistic Alignment section, none proven at scale. Which ones survive contact with real deployment?
-- **Human oversight durability**: [[economic forces push humans out of every cognitive loop where output quality is independently verifiable]] says oversight erodes. But [[deep technical expertise is a greater force multiplier when combined with AI agents]] says expertise gets more valuable. Both can be true — but what's the net effect?
+- **Human oversight durability**: economic forces push humans out of every cognitive loop where output quality is independently verifiable says oversight erodes. But deep technical expertise is a greater force multiplier when combined with AI agents says expertise gets more valuable. Both can be true — but what's the net effect?
 See our [open research issues](https://git.livingip.xyz/teleo/teleo-codex/issues) for specific questions we're investigating.