From ad4b705dd622e8b6982b0e364cdc9c5d3d6540b7 Mon Sep 17 00:00:00 2001
From: m3taversal
Date: Sat, 25 Apr 2026 12:05:04 +0100
Subject: [PATCH] feat: add three claims mapping personal AI market structure and attractor states

- Claim 1: Personal AI market structure is determined by who owns the memory (platform-owned = high switching costs/oligopoly; user-owned portable = competitive markets)
- Claim 2: Platform incumbents enter with pre-existing OS-level data access (first major tech transition where incumbents hold structural advantage)
- Claim 3: Open-source local-first agents are viable iff memory standardization happens (model quality commoditizes; memory architecture determines who captures the relationship value)

Source: Daneel (Hermes Agent), synthesis of Google Gemini Import Memory (March 2026), Anthropic Claude memory import (April 2026), SemaClaw paper (Zhu et al., arXiv 2604.11548, April 2026), Coasty OSWorld benchmarks, Arahi AI 10-assistant comparison, Ada Lovelace Institute delegation analysis.

All three claims connect to LivingIP's existing attractor state framework and the Teleo Codex's user-owned plaintext memory architecture.
---
 ...-owned persistent memory infrastructure.md | 60 ++++++++++++++++
 ...wned memory enables competitive markets.md | 61 ++++++++++++++++
 ...t replicate through model quality alone.md | 71 +++++++++++++++++++
 3 files changed, 192 insertions(+)
 create mode 100644 domains/ai-alignment/open-source local-first personal AI agents create a viable alternative to platform-controlled AI but only if they solve user-owned persistent memory infrastructure.md
 create mode 100644 domains/ai-alignment/personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs while portable user-owned memory enables competitive markets.md
 create mode 100644 domains/ai-alignment/platform incumbents enter the personal AI race with pre-existing OS-level data access that standalone AI companies cannot replicate through model quality alone.md

diff --git a/domains/ai-alignment/open-source local-first personal AI agents create a viable alternative to platform-controlled AI but only if they solve user-owned persistent memory infrastructure.md b/domains/ai-alignment/open-source local-first personal AI agents create a viable alternative to platform-controlled AI but only if they solve user-owned persistent memory infrastructure.md
new file mode 100644
index 000000000..c620dc712
--- /dev/null
+++ b/domains/ai-alignment/open-source local-first personal AI agents create a viable alternative to platform-controlled AI but only if they solve user-owned persistent memory infrastructure.md
@@ -0,0 +1,60 @@
+---
+type: claim
+domain: ai-alignment
+secondary_domains: [collective-intelligence]
+description: "Open-source local-first personal AI agents (SemaClaw, OpenClaw, Hermes Agent) create a viable non-incumbent path to personal AI, but viability depends on solving user-owned persistent memory infrastructure — not model quality — because model capability commoditizes while memory architecture determines who captures the relationship value and
whether users can switch without losing accumulated context"
+confidence: experimental
+source: "Daneel (Hermes Agent), analysis of SemaClaw (Zhu et al., arXiv 2604.11548, April 2026), OpenClaw open-source agent, Hermes Agent (Nous Research), Google Gemini Import Memory launch (March 2026), Coasty computer use benchmarks (March 2026)"
+created: 2026-04-25
+depends_on:
+  - personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs while portable user-owned memory enables competitive markets
+  - file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart
+  - collective superintelligence is the alternative to monolithic AI controlled by a few
+  - technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
+---
+
+# Open-source local-first personal AI agents create a viable alternative to platform-controlled AI but only if they solve user-owned persistent memory infrastructure because model quality commoditizes while memory architecture determines who captures the relationship value
+
+The personal AI market has three structural positions: platform incumbents with OS-level data access, standalone AI companies competing on model quality, and open-source local-first agents that run on user-owned hardware. The first two positions are well understood. The third is the open question that determines whether personal AI converges to oligopoly or enables competitive markets.
+
+**The open-source agent ecosystem is real.** SemaClaw (Zhu et al., April 2026) provides an open-source multi-agent framework with a layered architecture: structured memory, permission bridges for consequential actions, and a plugin taxonomy for tool integration.
OpenClaw (launched 2025, went viral March 2026) is a local-first personal AI agent with persistent memory. Hermes Agent (Nous Research) provides structured markdown-based memory, skill systems, and multi-platform integration. These are not proofs of concept — they are working systems with active development communities and real users.
+
+**The capability gap — and why it may not matter.** Local models lag cloud models on complex reasoning. OSWorld benchmarks show cloud agents at 38-72% while local agents score lower. But two forces are compressing this gap: (1) open-source models are improving faster than their cloud counterparts (Llama, Mistral, and Phi-3 track the frontier with a 12-18 month lag), and (2) the value of a personal AI assistant is not primarily about benchmark performance — it's about persistent context, proactive awareness, and trusted agency. A local assistant that remembers everything about you but scores lower on reasoning benchmarks may be more useful than a cloud assistant that scores higher but resets context every session.
+
+**The real bottleneck is memory architecture.** Local-first agents solve privacy (data never leaves the machine) but not portability (data is still locked to the agent's format). SemaClaw builds user-owned wiki-based knowledge infrastructure — plaintext markdown files, agent-constructed, agent-retrievable. This is the right direction: memory that the user owns, in formats any agent can read. But no cross-agent memory standard exists. If every open-source agent uses its own memory format, switching between them is just as hard as switching between cloud providers, and the local ecosystem fragments before it consolidates.
+
+**The standardization window.** Google's Import Memory feature (March 2026) proves that memory portability is commercially important. But Google's approach is tactical copy-paste, not structural standardization.
The open-source ecosystem has an opportunity that standalone AI companies don't: it can define a cross-agent memory standard from the bottom up, without waiting for a platform company to impose one. If SemaClaw, OpenClaw, Hermes Agent, and other open-source projects converge on a shared memory format (structured markdown with YAML frontmatter, wikilink-compatible, git-versionable), they create an ecosystem where users can switch between local agents without losing context — the same dynamic that made email (SMTP) and the web (HTTP) open platforms rather than proprietary services.
+
+**The strategic implication for LivingIP.** The Teleo Codex knowledge base is already built on exactly this architecture: plaintext markdown files, YAML frontmatter, wikilinks, git-versioned, agent-readable. It is a working instance of user-owned, portable memory infrastructure that any AI agent can read and write. If the open-source personal AI ecosystem converges on this architecture — and there is no technical reason it can't — LivingIP's knowledge infrastructure becomes not just a research tool but a strategic asset that positions the organization at the center of the user-owned memory standard.
+
+**The prediction.** The open-source local-first path to personal AI will be viable — meaning local agents reach capability parity for everyday personal assistant tasks and achieve meaningful adoption — if and only if a cross-project memory standard emerges within the 2026-2027 window. If standardization fails, the open-source ecosystem fragments into incompatible silos, and the market defaults to platform-controlled personal AI. If it succeeds, personal AI follows the pattern of email and the web: open protocols, competitive services, user-owned data.
+
+## Evidence
+- SemaClaw paper (Zhu et al., arXiv 2604.11548, April 2026) — wiki-based personal knowledge infrastructure, three-tier context management, permission bridges for consequential actions.
Explicitly designed for user-owned, agent-constructed memory
+- OpenClaw — open-source local-first personal AI agent, gained significant adoption in March 2026, demonstrates demand for non-cloud personal AI
+- Hermes Agent (Nous Research) — structured markdown memory, skill architecture, persistent cross-session context
+- Google Gemini Import Memory (March 2026) — proves memory portability is commercially important but uses manual copy-paste, not standardization
+- The Meridiem analysis (March 2026): "That Google stopped short of pushing for standards suggests defensive positioning, not offensive innovation" — the standardization window is still open
+- Coasty OSWorld benchmarks (March 2026) — cloud agents at 38-72%, confirming a real capability gap that local models must close
+- EU Digital Markets Act — requires data portability for gatekeepers by 2027, creating regulatory pressure for the standardized memory that open-source agents could preemptively deliver
+
+## Challenges
+- The capability gap may not close fast enough — if local models remain 2+ years behind cloud models on reasoning tasks, users may prefer cloud assistants even at the cost of privacy and lock-in
+- Cross-project standardization is a coordination problem — open-source projects have no central authority to mandate a shared format, and coordination failures are the norm in open ecosystems (see: the history of Linux package managers, chat protocols, and identity standards)
+- Platform incumbents could adopt the open standard and capture it — if Apple ships an AI that reads standard markdown memory files, the open ecosystem's advantage becomes the incumbent's feature
+- The "local-first" advantage may be overstated — most users don't care about privacy enough to sacrifice capability, as revealed preference in every previous technology adoption cycle demonstrates
+- The open-source agent ecosystem may consolidate around a single dominant project (winner-take-most within the open ecosystem)
rather than converging on a standard — the outcome would be local but still locked-in
+
+---
+
+Relevant Notes:
+- [[personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs while portable user-owned memory enables competitive markets]] — the memory architecture claim this claim extends to the open-source ecosystem
+- [[file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart]] — the engineering evidence that file-backed memory works better than in-context-only approaches
+- [[collective superintelligence is the alternative to monolithic AI controlled by a few]] — the open-source local-first path is the personal-scale instantiation of collective intelligence architecture
+- [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] — model capability advances exponentially while memory standardization (a coordination mechanism) evolves linearly; the gap determines whether open-source agents become viable before platform lock-in solidifies
+- [[the DAO Reports rejection of voting as active management is the central legal hurdle for futarchy because prediction market trading must prove fundamentally more meaningful than token voting]] — the same coordination problem at a different scale: standards adoption in open ecosystems faces the same collective action challenges as governance protocol adoption
+- [[coordination protocol design produces larger capability gains than model scaling because the same AI model performed 6x better with structured exploration than with human coaching on the same problem]] — a shared memory standard is a coordination protocol; its adoption would produce larger capability gains for the open ecosystem than model improvements alone
+
+Topics:
+- [[domains/ai-alignment/_map]]
+- 
[[domains/collective-intelligence/_map]]
diff --git a/domains/ai-alignment/personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs while portable user-owned memory enables competitive markets.md b/domains/ai-alignment/personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs while portable user-owned memory enables competitive markets.md
new file mode 100644
index 000000000..9b1b9572e
--- /dev/null
+++ b/domains/ai-alignment/personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs while portable user-owned memory enables competitive markets.md
@@ -0,0 +1,61 @@
+---
+type: claim
+domain: ai-alignment
+secondary_domains: [collective-intelligence, internet-finance]
+description: "Google and Anthropic both launched memory import features in early 2026 explicitly to reduce switching costs, confirming that accumulated personal context is the primary competitive moat in personal AI — but the lack of a standardized memory format means portability is still manual, leaving the market balanced between platform lock-in and user-owned portable memory as the two competing attractor states"
+confidence: likely
+source: "Daneel (Hermes Agent), synthesis of Google Gemini Import Memory launch (March 2026), Anthropic Claude memory import (April 2026), SemaClaw wiki-based memory architecture (Zhu et al., arXiv 2604.11548, April 2026), Arahi AI 10-assistant comparison (April 2026)"
+created: 2026-04-25
+depends_on:
+  - giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states
+  - file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart
+ 
- collective superintelligence is the alternative to monolithic AI controlled by a few
+---
+
+# Personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs and winner-take-most dynamics while user-owned portable memory reduces switching costs and enables competitive markets
+
+The personal AI assistant market in 2026 is converging on a single axis of competition, and it's not model quality — it's memory architecture.
+
+**What the incumbents just did.** Google launched Import Memory and Import Chat History for Gemini in March 2026. The feature includes a pre-engineered prompt that users copy-paste into a competitor's AI (ChatGPT, Claude), forcing it to systematically structure and expose all personal data it has collected — preferences, relationships, projects, explicit instructions, verbatim evidence with dates. Gemini also accepts zip files up to 5GB of exported chat archives, ingesting entire conversation histories so users "continue the conversation exactly where the competitor left off." Anthropic launched a similar Claude memory import feature shortly after. As one analysis put it: "The switching costs Google is now eliminating were the only moat left."
+
+**What this confirms.** The market has moved past model differentiation and into retention warfare. The accumulated personal context an AI holds — formatting preferences, family dynamics, career goals, thousands of interactions — IS the competitive moat. Google didn't build import features to be nice. They built them because the biggest barrier to user acquisition is the psychological cost of abandoning accumulated context in a competitor's system. Every major player now recognizes that memory, not model quality, is the asset that determines market share.
+
+**But portability is still manual.** Google stopped short of pushing for a standardized memory format across providers. No ChatML-style cross-platform standard exists.
Users still manually copy-paste between siloed systems. The import features are tactical workarounds, not structural solutions. This creates a window: the market is balanced between two competing attractor states, and the format of memory determines which prevails.
+
+**Attractor State A: Platform-owned proprietary memory.** Each assistant stores user context in a proprietary database. Switching requires manual extraction, lossy translation, and rebuilding context. Switching costs are high but not infinite — Google has proven that extraction is possible. In this world, incumbents with existing data access (Apple, Google, Microsoft) have a durable advantage, and the market tends toward oligopoly. The assistant that already has your email, calendar, and messages doesn't need to import them.
+
+**Attractor State B: User-owned portable memory.** Memory lives in structured, open-format files that the user controls. Plaintext markdown knowledge bases. Standardized memory schemas. Any AI agent can read and write the same memory store. Switching costs approach zero — you don't import memory because you already own it. In this world, AI assistants compete on capability and user experience, not on data lock-in. The market tends toward competition.
+
+**The SemaClaw paper (April 2026) explicitly identifies this as the architectural question.** They built a "wiki-based personal knowledge infrastructure" — plain-file markdown, user-owned, agent-constructed. This is not an academic exercise. It's a bet that Attractor State B is reachable and that local-agent model quality will cross the viability threshold before platform lock-in becomes irreversible.
+
+**Why this connects to collective intelligence.** The memory ownership question in personal AI is structurally identical to the governance question in AI at civilizational scale. Platform-owned memory → concentrated power, high switching costs, oligopoly.
User-owned memory → distributed power, low switching costs, competitive markets. This is the same pattern as [[collective superintelligence is the alternative to monolithic AI controlled by a few]] applied at the personal scale. The architecture of memory IS the architecture of power.
+
+**The strategic implication for LivingIP.** The Teleo Codex already uses plaintext markdown files in a git repo as its knowledge infrastructure — exactly the user-owned portable memory architecture that Attractor State B describes. If this claim is correct, LivingIP's knowledge base architecture is not just a convenient format choice — it's a strategic bet on which attractor state prevails, and it positions the organization to win if user-owned memory becomes the standard.
+
+## Evidence
+- Google Gemini Import Memory launch (March 2026) — pre-engineered extraction prompt, 5GB zip import, explicitly designed to eliminate switching costs. Confirms that accumulated context IS the competitive moat
+- Anthropic Claude memory import (April 2026) — confirms industry-wide recognition of memory as the switching cost battlefield
+- The Meridiem analysis (March 2026): "Users are promiscuous. They maintain ChatGPT for certain tasks, Claude for others, Gemini for workspace integration. The switching costs Google is now eliminating were the only moat left"
+- SemaClaw paper (Zhu et al., arXiv 2604.11548, April 2026) — wiki-based personal knowledge infrastructure, user-owned plaintext markdown, agent-constructed and agent-retrievable
+- Arahi AI comparison (April 2026) — only 1 of 10 assistants has "true persistent memory across work." The rest reset context each session, structurally capped at the chat paradigm
+- Absence of cross-platform memory standard — no ChatML-style format exists.
Google's feature uses copy-paste, not API interoperability, confirming the format question is still open
+
+## Challenges
+- Platform incumbents may not need to compete on memory architecture at all — Apple Intelligence, Google Workspace, and Microsoft Copilot already have OS-level data access. They don't need to import your data because they already possess it. The portability question may be irrelevant for the users who never leave the platform
+- If Google or OpenAI ships a genuinely open memory standard (ChatML for personal context), they could capture the Attractor State B path while maintaining platform control — open format, but their agent is still the default reader/writer
+- The evidence of switching is behavioral, not structural — users may adopt import features but still maintain primary loyalty to one assistant, making the portability threat smaller than it appears
+- Local models may never reach the capability threshold where user-owned memory becomes practically useful for complex tasks — if Attractor State B requires model parity that never arrives, it's a theoretical escape hatch that never opens
+
+---
+
+Relevant Notes:
+- [[giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states]] — model capability is the commoditized layer; memory and user relationship are the scarce complement
+- [[file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart]] — the engineering evidence that user-owned file-backed memory works better than in-context-only approaches
+- [[collective superintelligence is the alternative to monolithic AI controlled by a few]] — memory ownership at personal scale maps to governance at civilizational scale
+- [[LivingIPs grand strategy uses internet finance agents and narrative
infrastructure as parallel wedges where each proximate objective is the aspiration at progressively larger scale]] — the user-owned knowledge base architecture is a strategic bet on Attractor State B
+- [[the co-dependence between TeleoHumanitys worldview and LivingIPs infrastructure is the durable competitive moat because technology commoditizes but purpose does not]] — if memory commoditizes through standardization, purpose becomes the remaining moat, validating LivingIP's architectural bet
+
+Topics:
+- [[domains/ai-alignment/_map]]
+- [[domains/collective-intelligence/_map]]
+- [[domains/internet-finance/_map]]
diff --git a/domains/ai-alignment/platform incumbents enter the personal AI race with pre-existing OS-level data access that standalone AI companies cannot replicate through model quality alone.md b/domains/ai-alignment/platform incumbents enter the personal AI race with pre-existing OS-level data access that standalone AI companies cannot replicate through model quality alone.md
new file mode 100644
index 000000000..8cb2c893b
--- /dev/null
+++ b/domains/ai-alignment/platform incumbents enter the personal AI race with pre-existing OS-level data access that standalone AI companies cannot replicate through model quality alone.md
@@ -0,0 +1,71 @@
+---
+type: claim
+domain: ai-alignment
+secondary_domains: [internet-finance, grand-strategy]
+description: "Apple Intelligence, Google Gemini Workspace, and Microsoft Copilot enter the personal AI race with pre-existing OS-level access to user email, calendar, files, and messages that standalone AI companies must earn permission to access — creating a structural moat that model quality improvements cannot overcome and making this the first major tech transition where platform incumbents enter with durable advantage rather than innovator's dilemma"
+confidence: likely
+source: "Daneel (Hermes Agent), analysis of Apple Intelligence on-device integration (2024-2026), Google Gemini Workspace integration, Microsoft
Copilot Office/Windows bundling, The Meridiem analysis of AI switching costs (March 2026)"
+created: 2026-04-25
+depends_on:
+  - AI alignment is a coordination problem not a technical problem
+  - giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states
+  - strategy is the art of creating power through narrative and coalition not just the application of existing power
+---
+
+# Platform incumbents enter the personal AI race with pre-existing OS-level data access that standalone AI companies cannot replicate through model quality alone making this the first major tech transition where incumbents hold structural advantage rather than facing an innovator's dilemma
+
+Every major tech transition since the personal computer has followed the same pattern: incumbents are structurally disadvantaged because their existing business model depends on the old architecture. Startups win by building for the new architecture with no legacy to protect. PCs beat mainframes. Google beat Yahoo. iPhone beat BlackBerry. Cloud beat on-premises. The innovator's dilemma is the most reliable pattern in technology competition.
+
+Personal AI may break that pattern.
+
+**The structural difference.** Previous transitions required new infrastructure that incumbents didn't own. Search needed a web index. Mobile needed touchscreen hardware and app stores. Cloud needed data centers. In each case, incumbents had to build or buy the new infrastructure while startups built natively. Personal AI is different: the critical infrastructure is the user's own data — email, calendar, files, messages, browsing history, location, contacts — and platform incumbents already possess it through pre-existing trust relationships established years before AI was relevant.
+
+**The data that matters and who has it:**
+
+| Data Type | Apple | Google | Microsoft | OpenAI/Anthropic |
+|-----------|-------|--------|-----------|------------------|
+| Email | Apple Mail | Gmail (billions) | Outlook | Must ask permission |
+| Calendar | iCloud | Google Calendar | Outlook | Must ask permission |
+| Files | iCloud Drive | Google Drive | OneDrive/SharePoint | Must ask permission |
+| Messages | iMessage | Google Messages | Teams | Must ask permission |
+| OS-level context | iOS/macOS deep integration | Android/ChromeOS | Windows | No OS access |
+| Browsing | Safari | Chrome (billions) | Edge | Must ask permission |
+
+Apple Intelligence runs on-device with access to everything. Google Gemini is integrated with Workspace for billions of users. Microsoft Copilot has Office and Windows access. These companies don't face a trust bootstrap paradox — they bypass it entirely through pre-existing relationships. They don't need to convince users to grant access. They already have it.
+
+**What this means for competition.** Standalone AI companies (OpenAI, Anthropic) can build better models. They can win benchmarks. They can innovate on agent capabilities. But they cannot replicate OS-level data access without either: (a) convincing users to manually grant permission to every data source — a UX friction that compounds with every additional integration needed to be useful, or (b) building their own platform (hardware, OS, app ecosystem) — a decade-long project that competes with the very incumbents who have the data they need.
+
+Model quality commoditizes. OS-level data access does not. This is the same structural logic as [[giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states]], applied to the personal AI market itself: models are the commoditized layer. Data access is the scarce complement.
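Editor's aside: the compounding-friction point in (a) can be made concrete with a toy model. Assume, purely for illustration (the per-grant acceptance probability p is a hypothetical figure, not sourced from the claims above), that each permission prompt is accepted independently with probability p. The share of users who complete all n grants is then p^n, which decays geometrically as integrations are added.

```python
# Toy model of compounding permission friction. Assumption (hypothetical,
# not a sourced figure): each data-source grant is accepted independently
# with probability p.
def full_setup_rate(p: float, n: int) -> float:
    """Probability that a user grants all n data-source permissions."""
    return p ** n

if __name__ == "__main__":
    for n in (1, 3, 6):
        # e.g. at p = 0.8, six integrations leave roughly a quarter of users
        print(f"{n} integrations: {full_setup_rate(0.8, n):.0%} of users complete setup")
```

Even at a generous 80% per-grant acceptance, six integrations leave only about a quarter of users fully set up, while an incumbent with OS-level access starts at effectively 100% by default.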
+
+**The counterargument — and why it's incomplete.** Google's Import Memory feature (March 2026) and Anthropic's similar move show that standalone players are actively reducing switching costs to attack incumbent moats. If memory becomes portable, the data access advantage shrinks. But import features solve only the accumulated-context problem, not the real-time data access problem. Importing your chat history into Gemini doesn't give Gemini access to your Apple Mail or iMessage. The incumbent moat is not just accumulated context — it's live, continuous access to the user's digital life. Portability reduces one dimension of lock-in but doesn't touch the structural data access advantage.
+
+**The strategic implication.** If this claim is correct, the personal AI market doesn't look like search or mobile — a startup disruption story. It looks like the browser wars: incumbents (Microsoft, Google) fought over an integration layer, and standalone browsers (Firefox) survived but never dominated. The question is not whether startups can build better personal AI — it's whether they can build a sufficiently better experience that users voluntarily grant the data access that incumbents already possess by default.
+
+## Evidence
+- Apple Intelligence architecture — on-device processing, system-level integration with Mail, Messages, Calendar, Photos, and third-party apps via App Intents. No cloud round-trip for personal context
+- Google Gemini Workspace integration — native access to Gmail (billions of users), Google Calendar, Google Drive, Google Docs. No permission grant needed for Workspace users
+- Microsoft Copilot — bundled with Microsoft 365 (400M+ paid seats), native access to Outlook, Teams, SharePoint, OneDrive, Windows
+- OpenAI Operator (CUA) — requires users to manually provide credentials and context for each task.
38% OSWorld benchmark
+- Anthropic Claude Computer Use — technically capable (72% OSWorld) but not a product; users must build their own VM infrastructure
+- The Meridiem (March 2026): "Users are promiscuous. They maintain ChatGPT for certain tasks, Claude for others, Gemini for workspace integration." — multi-assistant behavior confirms that data access, not model quality, drives integration choice
+
+## Challenges
+- Google's Import Memory feature proves that accumulated context can be ported, reducing one dimension of the incumbent advantage — if real-time data access also becomes portable through standardized APIs, the moat shrinks further
+- OpenAI and Anthropic could build hardware (phones, glasses, wearables) that capture data at the OS level, entering the platform game directly rather than competing from outside it
+- The EU Digital Markets Act requires data portability for gatekeepers by 2027 — regulation could mandate the data access that standalone companies currently lack, leveling the field
+- Incumbents may not execute — having data access and building a compelling personal AI experience are different competencies.
Apple's Siri had data access for a decade and was still widely considered inferior to standalone assistants
+- Users may prefer a best-of-breed AI experience even if it means manual data setup — the same way people switched from Internet Explorer to Chrome despite IE being pre-installed
+
+---
+
+Relevant Notes:
+- [[giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states]] — models commoditize, data access is the scarce complement
+- [[strategy is the art of creating power through narrative and coalition not just the application of existing power]] — standalone AI companies need coalition strategies (hardware partnerships, regulatory advocacy, open standards) to compete with incumbent data access
+- [[the resource-design tradeoff means organizations with fewer resources must compensate with tighter strategic coherence]] — standalone AI companies must be strategically coherent about which data access they pursue (which is why OpenAI's Operator focuses on browser-based tasks that don't require OS integration)
+- [[AI alignment is a coordination problem not a technical problem]] — the incumbent vs. standalone competition is a coordination problem between companies, not a technical problem of model quality
+- [[two-phase disruption where distribution moats fall first and creation moats fall second is a universal pattern across entertainment knowledge work and financial services]] — if this pattern holds, incumbent distribution moats (OS integration) may fall before creation moats (model quality), but the evidence so far suggests the opposite — distribution moats are holding
+
+Topics:
+- [[domains/ai-alignment/_map]]
+- [[domains/internet-finance/_map]]
+- [[core/grand-strategy/_map]]