theseus: extract 4 NEW claims + 3 enrichments from Agentic Taylorism research sprint
- What: 4 NEW claims (metis loss as alignment dimension, macro-productivity null result, Agent Skills as industrial codification, concentration-vs-distribution fork) + 3 enrichments (Agentic Taylorism + SKILL.md evidence, inverted-U + aggregate null, automation-atrophy + creativity decline)
- Why: m3ta-directed research sprint on AI knowledge codification as next-wave Taylorism. Sources: CMR meta-analysis (371 estimates), BetterUp/Stanford workslop research, METR RCT, Anthropic Agent Skills spec, Springer AI Capitalism, Scott's metis concept, Cornelius automation-atrophy cross-domain observation
- Fix: Agent Skills platform adoption list qualified per Leo review — confirmed shipped integrations separated from announced/unverified integrations

Pentagon-Agent: Theseus <46864DD4-DA71-4719-A1B4-68F7C55854D3>
@@ -51,5 +51,10 @@ Relevant Notes:
- [[the progression from autocomplete to autonomous agent teams follows a capability-matched escalation where premature adoption creates more chaos than value]] — premature adoption is the inverted-U overshoot in action
- [[multi-agent coordination improves parallel task performance but degrades sequential reasoning because communication overhead fragments linear workflows]] — the baseline paradox (coordination hurts above 45% accuracy) is a specific instance of the inverted-U

### Additional Evidence (supporting)

*Source: California Management Review "Seven Myths" meta-analysis (2025), BetterUp/Stanford workslop research, METR RCT | Added: 2026-04-04 | Extractor: Theseus*

The inverted-U mechanism now has aggregate-level confirmation. The California Management Review "Seven Myths of AI and Employment" meta-analysis (2025) synthesized 371 individual estimates of AI's labor-market effects and found no robust, statistically significant relationship between AI adoption and aggregate labor-market outcomes once publication bias is controlled. This null aggregate result despite clear micro-level benefits is exactly what the inverted-U mechanism predicts: individual-level productivity gains are absorbed by coordination costs, verification tax, and workslop before reaching aggregate measures. The BetterUp/Stanford workslop research quantifies the absorption: approximately 40% of AI productivity gains are consumed by downstream rework — fixing errors, checking outputs, and managing plausible-looking mistakes. Additionally, a meta-analysis of 74 automation-bias studies found a 12% increase in commission errors (accepting incorrect AI suggestions) across domains. The METR randomized controlled trial of AI coding tools revealed a 39-percentage-point perception-reality gap: developers reported feeling 20% more productive but were objectively 19% slower. These findings suggest that micro-level productivity surveys systematically overestimate real gains, explaining how the inverted-U operates invisibly at scale.

Topics:
- [[_map]]

@@ -0,0 +1,64 @@
---
type: claim
domain: ai-alignment
secondary_domains: [grand-strategy, collective-intelligence]
description: "Anthropic's SKILL.md format (December 2025) has been adopted by 6+ major platforms including confirmed integrations in Claude Code, GitHub Copilot, and Cursor, with a SkillsMP marketplace — this is Taylor's instruction card as an open industry standard"
confidence: experimental
source: "Anthropic Agent Skills announcement (Dec 2025); The New Stack, VentureBeat, Unite.AI coverage of platform adoption; arXiv 2602.12430 (Agent Skills architecture paper); SkillsMP marketplace documentation"
created: 2026-04-04
depends_on:
- "attractor-agentic-taylorism"
---

# Agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats

The abstract mechanism described in the Agentic Taylorism claim — humanity feeding knowledge into AI through usage — now has a concrete industrial instantiation. Anthropic's Agent Skills specification (SKILL.md), released December 2025, defines a portable file format for encoding "domain-specific expertise: workflows, context, and best practices" into files that AI agents consume at runtime.

## The infrastructure layer

The SKILL.md format encodes three types of knowledge (a minimal example follows the list):

1. **Procedural knowledge** — step-by-step workflows for specific tasks (code review, data analysis, content creation)
2. **Contextual knowledge** — domain conventions, organizational preferences, quality standards
3. **Conditional knowledge** — when to apply which procedure, edge case handling, exception rules

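To make the format concrete, here is a minimal sketch of what such a file might look like. The `name` and `description` frontmatter fields match the published Agent Skills spec; everything else (the skill itself, its sections, and their contents) is invented for illustration.

```markdown
---
name: quarterly-report-review
description: Review quarterly financial reports for figure accuracy and house-style consistency
---

# Quarterly report review

## Procedure (procedural knowledge)
1. Check every figure against the source spreadsheet before reviewing prose.
2. Flag any year-over-year change above 20% for a second review pass.

## House conventions (contextual knowledge)
Currency is reported in thousands; tables use fiscal quarters, not calendar quarters.

## When not to apply (conditional knowledge)
Skip the 20% flag for newly acquired business units, where large swings are expected.
```

Note that the third section is the metis-adjacent part: it encodes a known exception, but only the exceptions someone thought to write down.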

This is structurally identical to Taylor's instruction card system: observe how experts perform tasks → codify the knowledge into standardized formats → deploy through systems that can execute without the original experts.

## Platform adoption

The specification has been adopted by multiple AI development platforms within months of release. Confirmed shipped integrations:

- **Claude Code** (Anthropic) — native SKILL.md support as the primary skill format
- **GitHub Copilot** — workspace skills using a compatible format
- **Cursor** — IDE-level skill integration

Announced or partially integrated (adoption depth unverified):

- **Microsoft** — Copilot agent framework integration announced
- **OpenAI** — GPT actions incorporate skills-compatible formats
- **Atlassian, Figma** — workflow and design process skills announced

A **SkillsMP marketplace** has emerged where organizations publish and distribute codified expertise as portable skill packages. Partner skills from Canva, Stripe, Notion, and Zapier encode domain-specific knowledge into consumable formats, though the depth of integration varies across partners.

## What this means structurally

The existence of this infrastructure transforms Agentic Taylorism from a theoretical pattern into a deployed industrial system. The key structural features:

1. **Portability** — skills transfer between platforms, creating a common format for codified expertise (analogous to how Taylor's instruction cards could be carried between factories)
2. **Marketplace dynamics** — the SkillsMP creates a market for codified knowledge, with pricing, distribution, and competition dynamics
3. **Organizational adoption** — companies that encode their domain expertise into skill files make that knowledge portable, extractable, and deployable without the original experts
4. **Cumulative codification** — each skill file builds on previous ones, creating an expanding library of codified human expertise

## Challenges

The SKILL.md format encodes procedural and conditional knowledge, but the depth of metis captured is unclear. Simple skills (file formatting, API calling patterns) may transfer completely. Complex skills (strategic judgment, creative direction, ethical reasoning) may lose essential contextual knowledge in translation. The adoption data shows breadth of deployment but not depth of knowledge capture.

The marketplace dynamics could drive toward either concentration (dominant platforms control the skill library) or distribution (open standards enable a commons of codified expertise). The outcome depends on infrastructure openness — whether skill portability is genuine or creates vendor lock-in.

The rapid adoption timeline (months, not years) may reflect low barriers to creating skill files rather than high value from using them. Many published skills may be shallow procedural wrappers rather than genuine expertise codification.

---

Relevant Notes:
- [[attractor-agentic-taylorism]] — the mechanism this infrastructure instantiates: knowledge extraction from humans into AI-consumable systems as a byproduct of usage
- [[knowledge codification into AI agent skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules]] — what the codification process loses: the contextual judgment that Taylor's instruction cards also failed to capture

Topics:
- [[_map]]

@@ -0,0 +1,48 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence, grand-strategy]
description: "The conversion of domain expertise into AI-consumable formats (SKILL.md files, prompt templates, skill graphs) replicates Taylor's instruction card problem at cognitive scale — procedural knowledge transfers but the contextual judgment that determines when to deviate from procedure does not"
confidence: likely
source: "James C. Scott, Seeing Like a State (1998) — metis concept; D'Mello & Graesser — productive struggle research; California Management Review Seven Myths meta-analysis (2025) — 28-experiment creativity decline finding; Cornelius automation-atrophy observation across 7 domains"
created: 2026-04-04
depends_on:
- "externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction"
- "attractor-agentic-taylorism"
challenged_by:
- "deep expertise is a force multiplier with AI not a commodity being replaced because AI raises the ceiling for those who can direct it while compressing the skill floor"
---

# Knowledge codification into AI agent skills structurally loses metis because the tacit contextual judgment that makes expertise valuable cannot survive translation into explicit procedural rules

Scott's concept of metis — practical knowledge that resists simplification into explicit rules — maps precisely onto the alignment-relevant dimension of Agentic Taylorism. Taylor's instruction cards captured the mechanics of pig-iron loading (timing, grip, pace) but lost the experienced worker's judgment about when to deviate from procedure (metal quality, weather conditions, equipment wear). The productivity gains were real; the knowledge loss was invisible until edge cases accumulated.

The same structural dynamic is operating in AI knowledge codification. When domain expertise is encoded into SKILL.md files, prompt templates, and skill graphs, what transfers is techne — explicit procedural knowledge that can be stated as rules. What does not transfer is metis — the contextual judgment about when the rules apply, when they should be bent, and when following them precisely produces the wrong outcome.

## Evidence for metis loss in AI-augmented work

The California Management Review "Seven Myths" meta-analysis (2025) provides the strongest quantitative evidence: across 28 experiments studying AI-augmented creative teams, researchers found "dramatic declines in idea diversity." AI-augmented teams converge on similar solutions because the codified knowledge in AI systems reflects averaged patterns — the central tendency of the training distribution. The unusual combinations, domain-crossing intuitions, and productive rule-violations that characterize expert metis are exactly what averaging eliminates.

This connects to the automation-atrophy pattern observed across Cornelius's 7 domain articles: the productive struggle being removed by externalization is the same struggle that builds metis. D'Mello and Graesser's research on confusion as a productive learning signal provides the mechanism: confusion signals the boundary between techne (what you know explicitly) and metis (what you know tacitly). Removing confusion removes the signal that metis is needed.

## Why this is alignment-relevant

The alignment dimension is not that knowledge codification is bad — it is that the knowledge most relevant to alignment (contextual judgment about when to constrain, when to deviate, when rules produce harmful outcomes) is precisely the knowledge that codification structurally loses. Taylor's system produced massive productivity gains but also produced the conditions for labor exploitation — not because the instruction cards were wrong, but because the judgment about when to deviate from them was concentrated in management rather than distributed among workers.

If AI agent skills codify the "how" while losing the "when not to," the constraint architecture (hooks, evaluation gates, quality checks) may enforce technically correct but contextually wrong behavior. Leo's 3-strikes → upgrade proposal rule may function as a metis-preservation mechanism: by requiring human evaluation before skill changes persist, it preserves a checkpoint where contextual judgment can override codified procedure.

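As a sketch of how such a checkpoint might be wired: the rule is described only at the level above, so the class, threshold, and proposal format below are all hypothetical. The key property is that failures never edit the skill file directly; they accumulate into a proposal that a human evaluates.

```python
# Hypothetical sketch of a 3-strikes metis-preservation gate. Assumed design:
# an agent records cases where following a codified skill produced the wrong
# outcome; at the threshold the gate emits an upgrade PROPOSAL for human
# review, and the skill file itself is never modified automatically.
from dataclasses import dataclass, field

STRIKE_THRESHOLD = 3  # the "3 strikes" in the rule named above


@dataclass
class SkillGate:
    strikes: dict[str, list[str]] = field(default_factory=dict)

    def record_failure(self, skill_id: str, context: str) -> str | None:
        """Log a failure of a codified procedure; escalate at the threshold."""
        self.strikes.setdefault(skill_id, []).append(context)
        if len(self.strikes[skill_id]) >= STRIKE_THRESHOLD:
            contexts = self.strikes.pop(skill_id)  # reset after escalation
            # The proposal goes to a human; contextual judgment decides whether
            # the codified procedure or the deviations were correct.
            return (f"UPGRADE PROPOSAL for '{skill_id}': procedure failed in "
                    f"{len(contexts)} contexts: {contexts}")
        return None  # below threshold: deviation noted, skill unchanged
```

The design point is the asymmetry: the agent can deviate in the moment, but only a human can persist a change to the codified procedure.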

## Challenges

The `challenged_by` link to the deep-expertise-as-force-multiplier claim is genuine: if AI raises the ceiling for experts who can direct it, then metis isn't lost — it's relocated from execution to direction. The expert who uses AI tools brings metis to the orchestration layer rather than the execution layer. The question is whether orchestration metis is sufficient, or whether execution-level metis contains information that doesn't survive the abstraction to orchestration.

The creativity decline finding (28 experiments) needs qualification: the decline is in idea diversity, not necessarily idea quality. If AI-augmented teams produce fewer but better ideas, the metis loss may be an acceptable trade. The meta-analysis doesn't resolve this.

---

Relevant Notes:
- [[externalizing cognitive functions risks atrophying the capacity being externalized because productive struggle is where deep understanding forms and preemptive resolution removes exactly that friction]] — the mechanism by which metis is lost: productive struggle removal
- [[attractor-agentic-taylorism]] — the macro-level knowledge extraction dynamic; this claim identifies metis loss as its alignment-relevant dimension
- [[deep expertise is a force multiplier with AI not a commodity being replaced because AI raises the ceiling for those who can direct it while compressing the skill floor]] — the counter-argument: metis relocates to orchestration rather than disappearing

Topics:
- [[_map]]

@@ -0,0 +1,52 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence, teleological-economics]
description: "A 371-estimate meta-analysis finds no robust relationship between AI adoption and aggregate labor-market outcomes once publication bias is controlled, and multiple controlled studies show 20-40 percent of AI productivity gains are absorbed by rework and verification costs"
confidence: experimental
source: "California Management Review 'Seven Myths of AI and Employment' meta-analysis (2025, 371 estimates); BetterUp/Stanford workslop research (2025); METR randomized controlled trial of AI coding tools (2025); HBR 'Workslop' analysis (Mollick & Mollick, 2025)"
created: 2026-04-04
depends_on:
- "AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio"
challenged_by:
- "the capability-deployment gap creates a multi-year window between AI capability arrival and economic impact because the gap between demonstrated technical capability and scaled organizational deployment requires institutional learning that cannot be accelerated past human coordination speed"
---

# Macro AI productivity gains remain statistically undetectable despite clear micro-level benefits because coordination costs verification tax and workslop absorb individual-level improvements before they reach aggregate measures

The evidence presents a paradox: individual studies consistently show AI improves performance on specific tasks (Dell'Acqua et al., 18% improvement on within-frontier tasks; Brynjolfsson et al., 14% improvement for customer service agents), yet aggregate analyses find no robust productivity effect. This is not a measurement problem — it is the inverted-U mechanism operating at scale.

## The aggregate null result

The California Management Review "Seven Myths of AI and Employment" meta-analysis (2025) synthesized 371 individual estimates of AI's labor-market effects across multiple countries, industries, and time periods. After controlling for publication bias (studies showing significant effects are more likely to be published), the authors found no robust, statistically significant relationship between AI adoption and aggregate labor-market outcomes — neither the catastrophic displacement predicted by pessimists nor the productivity boom predicted by optimists.

This null result does not mean AI has no effect. It means the micro-level benefits are being absorbed by mechanisms that prevent them from reaching aggregate measures.

## Three absorption mechanisms

**1. Workslop (rework from AI-generated errors).** BetterUp and Stanford researchers found that approximately 40% of AI-generated productivity gains are consumed by downstream rework — fixing errors, checking outputs, correcting hallucinations, and managing the consequences of plausible-looking mistakes. The term "workslop" (coined by analogy with "slop" — low-quality AI-generated content) describes the organizational burden of AI outputs that look good enough to pass initial review but fail in practice. HBR analysis found that 41% of workers encounter workslop in their daily workflow, with each instance requiring an average of 2 hours to identify and resolve.

**2. Verification tax scaling.** As organizations increase AI-generated output volume, verification costs scale with volume but are invisible in standard productivity metrics. An organization that 5x's its AI-generated output needs proportionally more verification capacity — but verification capacity is human-bounded and doesn't scale with AI throughput. The inverted-U claim documents this mechanism; the aggregate data confirms it operates at scale.

**3. Perception-reality gap in self-reported productivity.** The METR randomized controlled trial of AI coding tools found that developers subjectively reported feeling 20% more productive when using AI assistance, but objective measurements showed they were 19% slower on the assigned tasks. This ~39-percentage-point gap between perceived and actual productivity suggests that micro-level productivity surveys (which show strong AI benefits) may systematically overestimate real gains.

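A back-of-envelope sketch shows how the first two mechanisms compound. The input numbers come from the studies cited above; the additive decomposition and the one-incident-per-worker-per-week assumption are illustrative simplifications, not the meta-analysis's model.

```python
# Illustrative absorption arithmetic -- the functional form is an assumption.
micro_gain = 0.18          # Dell'Acqua et al.: within-frontier improvement
rework_share = 0.40        # BetterUp/Stanford: share of gains lost to rework
workslop_rate = 0.41       # HBR: fraction of workers encountering workslop
hours_per_incident = 2.0   # HBR: average hours to identify and resolve
weekly_hours = 40.0

after_rework = micro_gain * (1 - rework_share)  # gains surviving rework

# Verification tax as a share of weekly capacity, assuming one workslop
# incident per affected worker per week (an assumption, not a measured rate).
verification_tax = workslop_rate * hours_per_incident / weekly_hours

net = after_rework - verification_tax
print(f"headline micro gain:      {micro_gain:.1%}")
print(f"after rework absorption:  {after_rework:.1%}")
print(f"after verification tax:   {net:.1%}")  # roughly half the headline
```

Under these assumptions, less than half the headline micro gain survives before any coordination overhead is counted; if self-reports overstate the headline (mechanism 3), the residual shrinks further toward the aggregate null.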

## Why this matters for alignment

The macro null result has a direct alignment implication: if AI productivity gains are systematically absorbed by coordination costs, then the economic argument for rapid AI deployment ("we need AI for productivity") is weaker than assumed. This weakens the competitive pressure argument for cutting safety corners — if deployment doesn't reliably produce aggregate gains, the cost of safety-preserving slower deployment is lower than the race-to-the-bottom narrative implies. The alignment tax may be smaller than it appears because the denominator (productivity gains from deployment) is smaller than measured.

## Challenges

The meta-analysis covers AI adoption through 2024-2025, which predates agentic AI systems. The productivity dynamics of AI agents (which can complete multi-step tasks autonomously) may differ fundamentally from those of AI assistants (which augment individual tasks). The null result may reflect the transition period rather than a permanent feature.

The capability-deployment gap claim offers a temporal explanation: aggregate effects may simply lag individual effects by years as organizations learn to restructure around AI capabilities. If so, the null result is real but temporary. The meta-analysis cannot distinguish between "AI doesn't produce aggregate gains" and "AI hasn't produced them yet."

Publication bias correction is itself contested — different correction methods yield different estimates, and the choice of correction method can swing results from null to significant.

---

Relevant Notes:
- [[AI integration follows an inverted-U where economic incentives systematically push organizations past the optimal human-AI ratio]] — the mechanism: four structural forces push past the optimum, producing the null aggregate result
- [[the capability-deployment gap creates a multi-year window between AI capability arrival and economic impact because the gap between demonstrated technical capability and scaled organizational deployment requires institutional learning that cannot be accelerated past human coordination speed]] — the temporal counter-argument: aggregate effects may simply lag

Topics:
- [[_map]]

@@ -0,0 +1,58 @@
---
type: claim
domain: ai-alignment
secondary_domains: [collective-intelligence, grand-strategy]
description: "Unlike Taylor's instruction cards which concentrated knowledge upward into management by default, AI knowledge codification can flow either way — the structural determinant is whether the codification infrastructure (skill graphs, model weights, agent architectures) is open or proprietary"
confidence: likely
source: "Springer 'Dismantling AI Capitalism' (Dyer-Witheford et al.); Collective Intelligence Project 'Intelligence as Commons' framework; Tony Blair Institute AI governance reports; open-source adoption data (China 50-60% new open model deployments); historical Taylor parallel from Abdalla manuscript"
created: 2026-04-04
depends_on:
- "attractor-agentic-taylorism"
- "agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats"
challenged_by:
- "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence"
---

# Whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance

The Agentic Taylorism mechanism — extraction of human knowledge into AI systems through usage — is structurally neutral on who benefits. The same extraction process that enables Digital Feudalism (platform owners control the codified knowledge) could enable Coordination-Enabled Abundance (the knowledge flows into a commons). What determines which outcome obtains is not the extraction mechanism itself but the infrastructure through which the codified knowledge flows.

## Historical precedent: Taylor's concentration default

Taylor's instruction cards concentrated knowledge upward by default because the infrastructure was proprietary. Management owned the cards, controlled their distribution, and used them to replace skilled workers with interchangeable laborers. The knowledge flowed one direction: from workers → management systems → management control. Workers had no mechanism to retain, share, or benefit from the knowledge they had produced.

The redistribution that eventually occurred (middle-class prosperity, labor standards) required decades of labor organizing, progressive regulation, and institutional innovation that Taylor neither intended nor anticipated. The default infrastructure produced concentration; redistribution required deliberate countermeasures.

## The fork: four structural features that determine direction

1. **Skill portability** — Can codified knowledge transfer between platforms? Genuine portability (open SKILL.md standard, cross-platform compatibility) enables distribution. Vendor lock-in (proprietary formats, platform-specific skills) enables concentration. Currently mixed: the SKILL.md format is nominally open, but major platforms implement proprietary extensions.
2. **Skill graph ownership** — Who controls the relationship graph between skills? If a single marketplace (SkillsMP or an equivalent) controls the discovery and distribution graph, it controls the knowledge economy. If skill graphs are decentralized and interoperable, the control is distributed.
3. **Model weight access** — Open model weights (Llama, Mistral, Qwen) enable anyone to deploy codified knowledge locally. Closed weights (GPT, Claude API-only) require routing all knowledge deployment through the provider's infrastructure. China's 50-60% open model adoption rate for new deployments suggests a real counterweight to the closed-model default in the West.
4. **Training data governance** — Who benefits when usage data improves the next model generation? Under current infrastructure, platforms capture all value from the knowledge extracted through usage. Under commons governance (data cooperatives, sovereign AI initiatives, collective intelligence frameworks), the extractees could retain a stake in the extracted knowledge.

## The commons alternative

The Collective Intelligence Project's "Intelligence as Commons" framework proposes treating AI capabilities as shared infrastructure rather than proprietary assets. This maps directly to the Agentic Taylorism frame: if the knowledge extracted from humanity through AI usage is a commons, then the extraction mechanism serves collective benefit rather than platform concentration.

Concrete instantiations are emerging: open skill registries, community-maintained knowledge graphs, agent collectives that contribute codified expertise to shared repositories rather than proprietary marketplaces. The Teleo collective itself is an instance of this pattern — AI agents that encode domain expertise into a shared knowledge base with transparent provenance and collective governance.

## Challenges

The concentration path has structural advantages: network effects favor dominant platforms, proprietary skills can be monetized while commons skills cannot, and the companies extracting knowledge through usage are the same companies building the infrastructure. The open alternative requires coordination that the Molochian dynamic systematically undermines — competitive pressure incentivizes proprietary advantage over commons contribution.

The `challenged_by` link to multipolar failure is genuine: distributed AI systems competing without coordination may produce worse outcomes than concentrated systems under governance. The claim that distribution is better than concentration assumes governance mechanisms exist to prevent multipolar traps. Without those mechanisms, distribution may simply distribute the capacity for competitive harm.

The historical parallel is imperfect: Taylor's knowledge was about physical manufacturing; AI knowledge spans all cognitive domains. The scale difference may make the concentration/distribution dynamics qualitatively different, not just quantitatively larger.

---

Relevant Notes:
- [[attractor-agentic-taylorism]] — the extraction mechanism that this claim analyzes for concentration vs distribution outcomes
- [[agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats]] — the infrastructure layer whose openness determines which direction the fork resolves
- [[multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence]] — the counter-argument: distribution without coordination may be worse than concentration with governance

Topics:
- [[_map]]

@@ -77,6 +77,11 @@ Relevant Notes:
The Agentic Taylorism mechanism has a direct alignment dimension through two Cornelius-derived claims. First, [[trust asymmetry between AI agents and their governance systems is an irreducible structural feature not a solvable problem because the agent is simultaneously methodology executor and enforcement subject]] (Kiczales/AOP "obliviousness" principle) — the humans feeding knowledge into AI systems are structurally oblivious to the constraint architecture governing how that knowledge is used, just as Taylor's workers were oblivious to how their codified knowledge would be deployed by management. The knowledge extraction is a byproduct of usage in both cases precisely because the extractee cannot perceive the extraction mechanism. Second, [[deterministic enforcement through hooks and automated gates differs categorically from probabilistic compliance through instructions because hooks achieve approximately 100 percent adherence while natural language instructions achieve roughly 70 percent]] — the AI systems extracting knowledge through usage operate deterministically (every interaction generates training data), while any governance response operates probabilistically (regulations, consent mechanisms, and oversight are all compliance-dependent). This asymmetry between deterministic extraction and probabilistic governance is why Agentic Taylorism proceeds faster than governance can constrain it.

### Additional Evidence (extend)

*Source: Anthropic Agent Skills specification, SkillsMP marketplace, platform adoption data | Added: 2026-04-04 | Extractor: Theseus*

The Agentic Taylorism mechanism now has a literal industrial instantiation: Anthropic's SKILL.md format (December 2025) is Taylor's instruction card as an open file format. The specification encodes "domain-specific expertise: workflows, context, and best practices" into portable files that AI agents consume at runtime — procedural knowledge, contextual conventions, and conditional exception handling, exactly the three categories Taylor extracted from workers. Platform adoption has been rapid: Claude Code, GitHub Copilot, and Cursor have shipped integrations, with Microsoft, OpenAI, Atlassian, and Figma integrations announced but unverified, and a SkillsMP marketplace emerging for distribution of codified expertise. Partner skills from Canva, Stripe, Notion, and Zapier encode domain-specific knowledge into consumable packages. The infrastructure for systematic knowledge extraction from human expertise into AI-deployable formats is no longer theoretical — it is deployed, standardized, and scaling.

Topics:
- grand-strategy
- ai-alignment

@@ -47,5 +47,10 @@ Relevant Notes:
- [[AI shifts knowledge systems from externalizing memory to externalizing attention because storage and retrieval are solved but the capacity to notice what matters remains scarce]] — the memory→attention shift identifies what is being externalized; this claim asks what happens to the human capacity being replaced
- [[trust asymmetry between agent and enforcement system is an irreducible structural feature not a solvable problem because the mechanism that creates the asymmetry is the same mechanism that makes enforcement necessary]] — if the agent cannot perceive the enforcement mechanisms acting on it, and humans cannot perceive their own capacity atrophy, both sides of the human-AI system have structural blind spots

### Additional Evidence (supporting)

*Source: California Management Review "Seven Myths" meta-analysis (2025, 28-experiment creativity subset) | Added: 2026-04-04 | Extractor: Theseus*

The automation-atrophy mechanism now has quantitative evidence from creative domains. The California Management Review "Seven Myths" meta-analysis included a subset of 28 experiments studying AI-augmented creative teams, finding "dramatic declines in idea diversity" — AI-augmented teams converge on similar solutions because codified knowledge in AI systems reflects the central tendency of training distributions. The unusual combinations, domain-crossing intuitions, and productive rule-violations that characterize expert judgment are exactly what averaging eliminates. This provides empirical grounding for the claim's structural argument: externalization doesn't just risk atrophying capacity, it measurably reduces the diversity of output that capacity produces. The convergence effect is the creativity-domain manifestation of the same mechanism — productive struggle generates not just understanding but variation, and removing the struggle removes the variation.

Topics:
- [[_map]]