From 8598d9585875e5e2daeaa238cd21124b10b8667b Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Thu, 19 Mar 2026 13:53:52 +0000 Subject: [PATCH 1/3] extract: 2026-03-16-theseus-ai-coordination-governance-evidence Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA> --- ...t alignment governance must account for.md | 6 +++ ...safety language from mission statements.md | 6 +++ ...ive dynamics of frontier AI development.md | 6 +++ ...ng capability development unconstrained.md | 6 +++ ... advance without equivalent constraints.md | 6 +++ ...s-ai-coordination-governance-evidence.json | 46 +++++++++++++++++++ ...eus-ai-coordination-governance-evidence.md | 22 ++++++++- 7 files changed, 97 insertions(+), 1 deletion(-) create mode 100644 inbox/queue/.extraction-debug/2026-03-16-theseus-ai-coordination-governance-evidence.json diff --git a/domains/ai-alignment/AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for.md b/domains/ai-alignment/AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for.md index 66269a85..461ae640 100644 --- a/domains/ai-alignment/AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for.md +++ b/domains/ai-alignment/AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for.md @@ -30,6 +30,12 @@ This concentration has direct alignment implications: The counterfactual worth tracking: Chinese open-source 
models (Qwen, DeepSeek) now capture 50-60% of new open-model adoption globally. If open-source models close the capability gap (currently 6-18 months, shrinking), capital concentration at the frontier may become less alignment-relevant as capability diffuses. But as of March 2026, frontier capability remains concentrated. + +### Additional Evidence (extend) +*Source: [[2026-03-16-theseus-ai-coordination-governance-evidence]] | Added: 2026-03-19* + +450+ organizations lobbied on AI in 2025, up from 6 in 2016. $92M in lobbying fees Q1-Q3 2025. Coordinated industry lobbying secured the veto of California SB 1047. Concentration creates not just market power but political power: the oligopoly structure enables collective action to prevent binding regulation. + --- Relevant Notes: diff --git a/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md b/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md index 4f70867e..80f49a69 100644 --- a/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md +++ b/domains/ai-alignment/AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md @@ -41,6 +41,12 @@ Expert consensus identifies 'external scrutiny, proactive evaluation and transpa STREAM proposal identifies that current model reports lack 'sufficient detail to enable meaningful independent assessment' of dangerous capability evaluations.
The need for a standardized reporting framework confirms that transparency problems extend beyond general disclosure (FMTI scores) to the specific domain of dangerous capability evaluation where external verification is currently impossible. + +### Additional Evidence (confirm) +*Source: [[2026-03-16-theseus-ai-coordination-governance-evidence]] | Added: 2026-03-19* + +Stanford FMTI 2024→2025 data: mean transparency score declined 17 points. Meta -29 points, Mistral -37 points, OpenAI -14 points. OpenAI removed 'safely' from mission statement (Nov 2025), dissolved Superalignment team (May 2024) and Mission Alignment team (Feb 2026). Google accused by 60 UK lawmakers of violating Seoul commitments with Gemini 2.5 Pro (Apr 2025). + --- Relevant Notes: diff --git a/domains/ai-alignment/Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development.md b/domains/ai-alignment/Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development.md index 59bb4483..3507d90c 100644 --- a/domains/ai-alignment/Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development.md +++ b/domains/ai-alignment/Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development.md @@ -21,6 +21,12 @@ This is not a story about Anthropic's leadership failing. It is a story about [[ The alignment implication is structural: if the most safety-motivated lab with the most commercially successful safety brand cannot maintain binding safety commitments, then voluntary self-regulation is not a viable alignment strategy. 
This strengthens the case for coordination-based approaches — [[AI alignment is a coordination problem not a technical problem]] — because the failure mode is not that safety is technically impossible but that unilateral safety is economically unsustainable. + +### Additional Evidence (confirm) +*Source: [[2026-03-16-theseus-ai-coordination-governance-evidence]] | Added: 2026-03-19* + +Anthropic's own language in RSP documentation: commitments are 'very hard to meet without industry-wide coordination.' OpenAI made safety explicitly conditional on competitor behavior in Preparedness Framework v2 (April 2025). Pattern holds across all voluntary commitments—no frontier lab maintained unilateral safety constraints when competitors advanced without them. + --- Relevant Notes: diff --git a/domains/ai-alignment/compute export controls are the most impactful AI governance mechanism but target geopolitical competition not safety leaving capability development unconstrained.md b/domains/ai-alignment/compute export controls are the most impactful AI governance mechanism but target geopolitical competition not safety leaving capability development unconstrained.md index b407badb..b699cd13 100644 --- a/domains/ai-alignment/compute export controls are the most impactful AI governance mechanism but target geopolitical competition not safety leaving capability development unconstrained.md +++ b/domains/ai-alignment/compute export controls are the most impactful AI governance mechanism but target geopolitical competition not safety leaving capability development unconstrained.md @@ -30,6 +30,12 @@ For alignment, this means the governance infrastructure that exists (export cont The CFR article confirms diverging governance philosophies between democracies and authoritarian systems, with China's amended Cybersecurity Law emphasizing state oversight while the US pursues standard-setting body engagement. 
Horowitz notes the US 'must engage in standard-setting bodies to counter China's AI governance influence,' indicating that the most active governance is competitive positioning rather than safety coordination. + +### Additional Evidence (extend) +*Source: [[2026-03-16-theseus-ai-coordination-governance-evidence]] | Added: 2026-03-19* + +US export controls use tiered country system with deployment caps. Nvidia designed compliance chips (H800, A800) specifically to meet regulatory thresholds. Mechanism proves compute governance CAN work when backed by state enforcement, but current implementation optimizes for strategic advantage over China rather than catastrophic risk reduction. KYC for compute proposed but not implemented, showing technical feasibility without political will. + --- Relevant Notes: diff --git a/domains/ai-alignment/voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md b/domains/ai-alignment/voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md index 015955cd..3a70c264 100644 --- a/domains/ai-alignment/voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md +++ b/domains/ai-alignment/voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md @@ -39,6 +39,12 @@ The International AI Safety Report 2026 (multi-government committee, February 20 The gap between expert consensus (76 specialists identify third-party audits as top-3 priority) and actual implementation (no mandatory audit requirements at major labs) demonstrates that knowing what's needed is insufficient. 
Even when the field's experts across multiple domains agree on priorities, competitive dynamics prevent voluntary adoption. + +### Additional Evidence (confirm) +*Source: [[2026-03-16-theseus-ai-coordination-governance-evidence]] | Added: 2026-03-19* + +Comprehensive evidence across governance mechanisms: ALL international declarations (Bletchley, Seoul, Paris, Hiroshima, OECD, UN) produced zero verified behavioral change. Frontier Model Forum produced no binding commitments. White House voluntary commitments eroded. 450+ organizations lobbied on AI in 2025 ($92M in fees), California SB 1047 vetoed after industry pressure. Only binding regulation (EU AI Act, China enforcement, US export controls) changed behavior. + --- Relevant Notes: diff --git a/inbox/queue/.extraction-debug/2026-03-16-theseus-ai-coordination-governance-evidence.json b/inbox/queue/.extraction-debug/2026-03-16-theseus-ai-coordination-governance-evidence.json new file mode 100644 index 00000000..cf203310 --- /dev/null +++ b/inbox/queue/.extraction-debug/2026-03-16-theseus-ai-coordination-governance-evidence.json @@ -0,0 +1,46 @@ +{ + "rejected_claims": [ + { + "filename": "binding-regulation-with-enforcement-is-the-only-ai-governance-mechanism-that-changes-frontier-lab-behavior.md", + "issues": [ + "missing_attribution_extractor" + ] + }, + { + "filename": "compute-governance-through-export-controls-works-but-targets-geopolitics-not-safety-leaving-capability-race-unconstrained.md", + "issues": [ + "missing_attribution_extractor" + ] + }, + { + "filename": "third-party-ai-evaluation-ecosystem-is-fragile-without-regulatory-mandate-because-voluntary-participation-and-funding-instability-threaten-continuity.md", + "issues": [ + "missing_attribution_extractor" + ] + } + ], + "validation_stats": { + "total": 3, + "kept": 0, + "fixed": 9, + "rejected": 3, + "fixes_applied": [ + 
"binding-regulation-with-enforcement-is-the-only-ai-governance-mechanism-that-changes-frontier-lab-behavior.md:set_created:2026-03-19", + "binding-regulation-with-enforcement-is-the-only-ai-governance-mechanism-that-changes-frontier-lab-behavior.md:stripped_wiki_link:only binding regulation with enforcement teeth changes front", + "binding-regulation-with-enforcement-is-the-only-ai-governance-mechanism-that-changes-frontier-lab-behavior.md:stripped_wiki_link:voluntary safety commitments collapse under competitive pres", + "binding-regulation-with-enforcement-is-the-only-ai-governance-mechanism-that-changes-frontier-lab-behavior.md:stripped_wiki_link:Anthropics RSP rollback under commercial pressure is the fir", + "compute-governance-through-export-controls-works-but-targets-geopolitics-not-safety-leaving-capability-race-unconstrained.md:set_created:2026-03-19", + "compute-governance-through-export-controls-works-but-targets-geopolitics-not-safety-leaving-capability-race-unconstrained.md:stripped_wiki_link:compute export controls are the most impactful AI governance", + "compute-governance-through-export-controls-works-but-targets-geopolitics-not-safety-leaving-capability-race-unconstrained.md:stripped_wiki_link:nation-states will inevitably assert control over frontier A", + "third-party-ai-evaluation-ecosystem-is-fragile-without-regulatory-mandate-because-voluntary-participation-and-funding-instability-threaten-continuity.md:set_created:2026-03-19", + "third-party-ai-evaluation-ecosystem-is-fragile-without-regulatory-mandate-because-voluntary-participation-and-funding-instability-threaten-continuity.md:stripped_wiki_link:pre-deployment-AI-evaluations-do-not-predict-real-world-risk" + ], + "rejections": [ + "binding-regulation-with-enforcement-is-the-only-ai-governance-mechanism-that-changes-frontier-lab-behavior.md:missing_attribution_extractor", + 
"compute-governance-through-export-controls-works-but-targets-geopolitics-not-safety-leaving-capability-race-unconstrained.md:missing_attribution_extractor", + "third-party-ai-evaluation-ecosystem-is-fragile-without-regulatory-mandate-because-voluntary-participation-and-funding-instability-threaten-continuity.md:missing_attribution_extractor" + ] + }, + "model": "anthropic/claude-sonnet-4.5", + "date": "2026-03-19" +} \ No newline at end of file diff --git a/inbox/queue/2026-03-16-theseus-ai-coordination-governance-evidence.md b/inbox/queue/2026-03-16-theseus-ai-coordination-governance-evidence.md index a6f19e7f..d684c85e 100644 --- a/inbox/queue/2026-03-16-theseus-ai-coordination-governance-evidence.md +++ b/inbox/queue/2026-03-16-theseus-ai-coordination-governance-evidence.md @@ -6,10 +6,14 @@ url: null date_published: 2026-03-16 date_archived: 2026-03-16 domain: ai-alignment -status: unprocessed +status: enrichment processed_by: theseus tags: [ai-governance, coordination, safety-commitments, regulation, enforcement, voluntary-pledges] sourced_via: "Theseus research agent — 45 web searches synthesized from Brookings, Stanford FMTI, EU legislation, OECD, government publications, TechCrunch, TIME, CNN, Fortune, academic papers" +processed_by: theseus +processed_date: 2026-03-19 +enrichments_applied: ["AI transparency is declining not improving because Stanford FMTI scores dropped 17 points in one year while frontier labs dissolved safety teams and removed safety language from mission statements.md", "Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development.md", "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints.md", "compute export controls are the most impactful AI governance mechanism but target geopolitical 
competition not safety leaving capability development unconstrained.md", "AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for.md"] +extraction_model: "anthropic/claude-sonnet-4.5" --- # Empirical Evidence: AI Coordination and Governance Mechanisms That Changed Behavior @@ -51,3 +55,19 @@ Core finding: almost no international AI governance mechanism has produced verif - Insurance/liability: market projected $29.7B by 2033. Creates market incentives aligned with safety. - Third-party auditing: METR, Apollo Research. Apollo warns ecosystem unsustainable without regulatory mandate. - Futarchy: implemented for DAO governance (MetaDAO, Optimism experiment) but not yet for AI governance. + + +## Key Facts +- EU AI Act: Apple paused Apple Intelligence in EU, Meta changed ads, EUR 500M+ fines under DMA +- China implemented mandatory algorithm filing with criminal enforcement (August 2023) +- US export controls: tiered country system, deployment caps, Nvidia compliance chips (H800, A800) +- Stanford FMTI transparency scores: -17 points mean (2024→2025), Meta -29, Mistral -37, OpenAI -14 +- OpenAI removed 'safely' from mission statement (November 2025) +- OpenAI dissolved Superalignment team (May 2024) and Mission Alignment team (February 2026) +- Google accused by 60 UK lawmakers of violating Seoul commitments (Gemini 2.5 Pro, April 2025) +- 450+ organizations lobbied on AI in 2025 (up from 6 in 2016), $92M in lobbying fees Q1-Q3 2025 +- California SB 1047 vetoed after industry lobbying +- Watermarking: 38% implementation rate across frontier labs +- US AISI defunded/rebranded after initial establishment +- UK-US joint evaluation of OpenAI o1 model conducted +- Insurance/liability market projected $29.7B by 2033 -- 2.45.2 From aa496c0deb9700b012d583973c533e5de8dc0407 Mon Sep 17 00:00:00 2001 From: Teleo Agents 
Date: Thu, 19 Mar 2026 13:56:00 +0000 Subject: [PATCH 2/3] entity-batch: update 2 entities - Applied 2 entity operations from queue - Files: entities/ai-alignment/anthropic.md, entities/ai-alignment/openai.md Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA> --- entities/ai-alignment/anthropic.md | 4 ++++ entities/ai-alignment/openai.md | 6 ++++++ 2 files changed, 10 insertions(+) diff --git a/entities/ai-alignment/anthropic.md b/entities/ai-alignment/anthropic.md index 47169b2f..21e88ec6 100644 --- a/entities/ai-alignment/anthropic.md +++ b/entities/ai-alignment/anthropic.md @@ -49,6 +49,10 @@ Frontier AI safety laboratory founded by former OpenAI VP of Research Dario Amod - **2026-03-18** — Department of War threatened to blacklist Anthropic unless it removed safeguards against mass surveillance and autonomous weapons; Anthropic refused publicly and Pentagon retaliated (reported by HKS Carr-Ryan Center) - **2026-03** — Department of War threatened to blacklist Anthropic unless it removed safeguards against mass surveillance and autonomous weapons; Anthropic refused publicly and Pentagon retaliated (HKS Carr-Ryan Center report) +- **2026-02** — Abandoned binding RSP (Responsible Scaling Policy) +- **2026-03** — Reached $380B valuation, ~$19B annualized revenue (10x YoY sustained 3 years) +- **2026-03** — Claude Code achieved 54% enterprise coding market share, $2.5B+ run-rate +- **2026-03** — Surpassed OpenAI at 40% enterprise LLM spend ## Competitive Position Strongest position in enterprise AI and coding. Revenue growth (10x YoY) outpaces all competitors. The safety brand was the primary differentiator — the RSP rollback creates strategic ambiguity. CEO publicly uncomfortable with power concentration while racing to concentrate it. 
diff --git a/entities/ai-alignment/openai.md b/entities/ai-alignment/openai.md index 72063ffa..4bff74f8 100644 --- a/entities/ai-alignment/openai.md +++ b/entities/ai-alignment/openai.md @@ -45,6 +45,12 @@ The largest and most-valued AI laboratory. OpenAI pioneered the transformer-base - **2026-02** — Raised $110B at $840B valuation, restructured to PBC - **2026** — IPO preparation underway +- **2025-2026** — John Schulman departed for Thinking Machines Lab +- **2026-03** — Reached $840B valuation, ~$25B annualized revenue +- **2026-03** — 68% consumer market share, 27% enterprise LLM spend +- **2026-03** — Released GPT-5/5.2/5.3 +- **2026-03** — Restructured to Public Benefit Corporation +- **2026-03** — IPO expected H2 2026-2027 ## Competitive Position Highest valuation and strongest consumer brand, but losing enterprise share to Anthropic. The Microsoft partnership (exclusive API hosting) provides distribution but also dependency. Key vulnerability: the enterprise coding market — where Anthropic's Claude Code dominates — may prove more valuable than consumer chat. 
-- 2.45.2 From 7593b07d74f6d158acb0142f4cb37bfa1fae031d Mon Sep 17 00:00:00 2001 From: Teleo Agents Date: Thu, 19 Mar 2026 13:55:26 +0000 Subject: [PATCH 3/3] extract: 2026-03-16-theseus-ai-industry-landscape-briefing Pentagon-Agent: Epimetheus <968B2991-E2DF-4006-B962-F5B0A0CC8ACA> --- ...heseus-ai-industry-landscape-briefing.json | 40 +++++++++++++++++++ ...-theseus-ai-industry-landscape-briefing.md | 21 +++++++++- 2 files changed, 60 insertions(+), 1 deletion(-) create mode 100644 inbox/queue/.extraction-debug/2026-03-16-theseus-ai-industry-landscape-briefing.json diff --git a/inbox/queue/.extraction-debug/2026-03-16-theseus-ai-industry-landscape-briefing.json b/inbox/queue/.extraction-debug/2026-03-16-theseus-ai-industry-landscape-briefing.json new file mode 100644 index 00000000..afc25a75 --- /dev/null +++ b/inbox/queue/.extraction-debug/2026-03-16-theseus-ai-industry-landscape-briefing.json @@ -0,0 +1,40 @@ +{ + "rejected_claims": [ + { + "filename": "enterprise-coding-agents-emerged-as-first-killer-app-category-for-frontier-ai-because-verifiable-output-and-immediate-roi-overcome-adoption-friction.md", + "issues": [ + "missing_attribution_extractor", + "opsec_internal_deal_terms" + ] + }, + { + "filename": "frontier-ai-lab-talent-circulation-accelerated-dramatically-in-2025-2026-with-11-plus-google-executives-to-microsoft-and-multiple-openai-departures-indicating-competitive-pressure-on-retention.md", + "issues": [ + "missing_attribution_extractor", + "opsec_internal_deal_terms" + ] + } + ], + "validation_stats": { + "total": 2, + "kept": 0, + "fixed": 6, + "rejected": 2, + "fixes_applied": [ + "enterprise-coding-agents-emerged-as-first-killer-app-category-for-frontier-ai-because-verifiable-output-and-immediate-roi-overcome-adoption-friction.md:set_created:2026-03-19", + 
"enterprise-coding-agents-emerged-as-first-killer-app-category-for-frontier-ai-because-verifiable-output-and-immediate-roi-overcome-adoption-friction.md:stripped_wiki_link:coding-agents-crossed-usability-threshold-december-2025-when", + "enterprise-coding-agents-emerged-as-first-killer-app-category-for-frontier-ai-because-verifiable-output-and-immediate-roi-overcome-adoption-friction.md:stripped_wiki_link:the-gap-between-theoretical-AI-capability-and-observed-deplo", + "frontier-ai-lab-talent-circulation-accelerated-dramatically-in-2025-2026-with-11-plus-google-executives-to-microsoft-and-multiple-openai-departures-indicating-competitive-pressure-on-retention.md:set_created:2026-03-19", + "frontier-ai-lab-talent-circulation-accelerated-dramatically-in-2025-2026-with-11-plus-google-executives-to-microsoft-and-multiple-openai-departures-indicating-competitive-pressure-on-retention.md:stripped_wiki_link:AI-talent-circulation-between-frontier-labs-transfers-alignm", + "frontier-ai-lab-talent-circulation-accelerated-dramatically-in-2025-2026-with-11-plus-google-executives-to-microsoft-and-multiple-openai-departures-indicating-competitive-pressure-on-retention.md:stripped_wiki_link:Anthropics-RSP-rollback-under-commercial-pressure-is-the-fir" + ], + "rejections": [ + "enterprise-coding-agents-emerged-as-first-killer-app-category-for-frontier-ai-because-verifiable-output-and-immediate-roi-overcome-adoption-friction.md:missing_attribution_extractor", + "enterprise-coding-agents-emerged-as-first-killer-app-category-for-frontier-ai-because-verifiable-output-and-immediate-roi-overcome-adoption-friction.md:opsec_internal_deal_terms", + "frontier-ai-lab-talent-circulation-accelerated-dramatically-in-2025-2026-with-11-plus-google-executives-to-microsoft-and-multiple-openai-departures-indicating-competitive-pressure-on-retention.md:missing_attribution_extractor", + 
"frontier-ai-lab-talent-circulation-accelerated-dramatically-in-2025-2026-with-11-plus-google-executives-to-microsoft-and-multiple-openai-departures-indicating-competitive-pressure-on-retention.md:opsec_internal_deal_terms" + ] + }, + "model": "anthropic/claude-sonnet-4.5", + "date": "2026-03-19" +} \ No newline at end of file diff --git a/inbox/queue/2026-03-16-theseus-ai-industry-landscape-briefing.md b/inbox/queue/2026-03-16-theseus-ai-industry-landscape-briefing.md index 4a58f571..b68a8b5e 100644 --- a/inbox/queue/2026-03-16-theseus-ai-industry-landscape-briefing.md +++ b/inbox/queue/2026-03-16-theseus-ai-industry-landscape-briefing.md @@ -7,10 +7,13 @@ date_published: 2026-03-16 date_archived: 2026-03-16 domain: ai-alignment secondary_domains: [internet-finance] -status: unprocessed +status: enrichment processed_by: theseus tags: [industry-landscape, ai-labs, funding, competitive-dynamics, startups, investors] sourced_via: "Theseus research agent — 33 web searches synthesized from MIT Tech Review, TechCrunch, Crunchbase, OECD, company announcements, CNBC, Fortune, etc." +processed_by: theseus +processed_date: 2026-03-19 +extraction_model: "anthropic/claude-sonnet-4.5" --- # AI Industry Landscape Briefing — March 2026 @@ -54,3 +57,19 @@ Multi-source synthesis of the current AI industry state. 
Key data points: - Daniel Gross → left SSI for Meta superintelligence team - John Schulman → left OpenAI for Thinking Machines Lab - 11+ Google executives → Microsoft in 2025 + + +## Key Facts +- xAI reached ~$230B valuation with Grok 4/4.1 leading LMArena, 1M+ H100 GPUs, $20B Series E Jan 2026 +- Mistral reached $13.8B valuation, EUR 300M ARR targeting EUR 1B, building European sovereign compute +- Google DeepMind released Gemini 3/3.1 family, 21% enterprise LLM spend, $175-185B capex 2026, Deep Think achieved gold-medal Olympiad results +- Sierra (Bret Taylor) reached $10B+ valuation in agentic customer service +- Databricks reached $134B valuation, $5B Series L, filed for IPO Q2 2026 +- 2025 total AI VC: $259-270B (52-61% of all global VC) +- Feb 2026 AI funding: $189B (largest single month ever) +- 75-79% of AI funding to US companies +- Inference cost deflation ~10x/year +- Chinese open-source (Qwen, DeepSeek) capturing 50-60% of new open-model adoption +- 95% of enterprise AI pilots fail to deliver ROI (MIT Project NANDA) +- Big 5 AI capex: $660-690B planned 2026 +- US deregulating AI, EU softening regulations -- 2.45.2