diff --git a/domains/space-development/orbital-data-centers-activate-bottom-up-from-small-satellite-proof-of-concept-with-tier-specific-launch-cost-gates.md b/domains/space-development/orbital-data-centers-activate-bottom-up-from-small-satellite-proof-of-concept-with-tier-specific-launch-cost-gates.md
new file mode 100644
index 00000000..065b45f3
--- /dev/null
+++ b/domains/space-development/orbital-data-centers-activate-bottom-up-from-small-satellite-proof-of-concept-with-tier-specific-launch-cost-gates.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: space-development
+description: Starcloud-1 demonstrated that ODC Gate 1 cleared at Falcon 9 rideshare economics ($6K-10K/kg) for 60kg satellites, not at Starship-class costs, revealing a multi-tier activation pattern
+confidence: experimental
+source: Starcloud-1 mission (Nov 2025), Data Center Dynamics/CNBC coverage
+created: 2026-04-04
+title: Orbital data centers are activating bottom-up from small-satellite proof-of-concept toward megaconstellation scale, with each tier requiring different launch cost gates rather than a single sector-wide threshold
+agent: astra
+scope: structural
+sourcer: Data Center Dynamics / CNBC
+related_claims: ["[[launch cost reduction is the keystone variable that unlocks every downstream space industry at specific price thresholds]]", "[[Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy]]"]
+---
+
+# Orbital data centers are activating bottom-up from small-satellite proof-of-concept toward megaconstellation scale, with each tier requiring different launch cost gates rather than a single sector-wide threshold
+
+The Two-Gate Model predicted orbital data centers would require Starship-class launch economics to clear Gate 1 (proof-of-concept viability). However, Starcloud-1's November 2025 launch demonstrated successful AI model training and inference in orbit using a 60kg satellite deployed via SpaceX Falcon 9 rideshare at approximately $360K-600K total launch cost. The satellite trained NanoGPT on Shakespeare's complete works and ran Google's Gemma LLM with no modification to Earth-side ML frameworks, delivering ~100x more compute than any prior space-based system. This demonstrates that proof-of-concept ODCs cleared Gate 1 at *current* Falcon 9 rideshare economics, not future Starship economics. The pattern suggests ODC is activating in tiers: small-satellite proof-of-concept (already viable at rideshare rates) → medium constellations (requiring dedicated Falcon 9 launches) → megaconstellations (requiring Starship-class economics). Each tier has its own launch cost gate, rather than the sector waiting for a single threshold. This mirrors how remote sensing activated in stages: CubeSats first, then Planet Labs' constellation, with hyperspectral megaconstellations still ahead. The tier-specific gate pattern means a sector can begin generating revenue and operational data at earlier, higher-cost tiers while waiting for cheaper launch to unlock the later tiers.
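
The Gate 1 cost figure in the claim above is simple per-kilogram arithmetic, sketched here as a back-of-envelope check. It assumes only the $6K-10K/kg Falcon 9 rideshare rate range and the 60 kg Starcloud-1 mass stated in the claim; no other figures are sourced.

```python
# Back-of-envelope check of the Gate 1 launch cost cited in the claim.
# Assumed inputs (from the claim text): $6K-10K/kg rideshare rates, 60 kg mass.
SAT_MASS_KG = 60
RIDESHARE_USD_PER_KG = (6_000, 10_000)  # (low, high) rate estimates

low, high = (rate * SAT_MASS_KG for rate in RIDESHARE_USD_PER_KG)
print(f"Gate 1 launch cost: ${low:,}-${high:,}")  # Gate 1 launch cost: $360,000-$600,000
```

This reproduces the "approximately $360K-600K total launch cost" figure and makes the sensitivity explicit: the gate scales linearly with both satellite mass and the per-kilogram rate.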
diff --git a/entities/space-development/starcloud.md b/entities/space-development/starcloud.md
index d06f3e47..978daf77 100644
--- a/entities/space-development/starcloud.md
+++ b/entities/space-development/starcloud.md
@@ -2,51 +2,54 @@
 type: entity
 entity_type: company
 name: Starcloud
-domain: space-development
 founded: ~2024
-headquarters: San Francisco, CA
+headquarters: Seattle area, USA
 status: active
-tags: [orbital-data-center, ODC, AI-compute, thermal-management, YC-backed]
-supports:
-  - "Starcloud is the first company to operate a datacenter grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million satellite constellation"
-  - "Orbital data center deployment follows a three-tier launch vehicle activation sequence (rideshare → dedicated → constellation) where each tier unlocks an order-of-magnitude increase in compute scale"
-reweave_edges:
-  - "Starcloud is the first company to operate a datacenter grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million satellite constellation|supports|2026-04-04"
-  - "Orbital data center deployment follows a three-tier launch vehicle activation sequence (rideshare → dedicated → constellation) where each tier unlocks an order-of-magnitude increase in compute scale|supports|2026-04-04"
+industry: orbital data centers, space-based AI compute
+key_people: []
+website: []
+tags: [orbital-data-center, AI-compute, small-satellite, NVIDIA-partnership, SpaceX-rideshare]
 ---
 
 # Starcloud
 
-**Type:** Orbital data center provider
-**Status:** Active (Series A, March 2026)
-**Headquarters:** San Francisco, CA
-**Backing:** Y Combinator
+**Industry:** Orbital data centers / space-based AI compute
+**Status:** Active, post-Series A
+**Key Technology:** Space-qualified NVIDIA H100 GPUs for AI training and inference in low Earth orbit
 
 ## Overview
 
-Starcloud develops orbital data centers (ODCs) for AI compute workloads, positioning space as offering superior economics through unlimited solar power (>95% capacity factor) and free radiative cooling. Company slogan: "demand for compute outpaces Earth's limits."
-
-## Three-Tier Roadmap
-
-| Satellite | Launch Vehicle | Launch Date | Capability |
-|-----------|---------------|-------------|------------|
-| Starcloud-1 | Falcon 9 rideshare | November 2025 | 60 kg SmallSat, NVIDIA H100, first AI workload in orbit (trained NanoGPT on Shakespeare, ran Gemma) |
-| Starcloud-2 | Falcon 9 dedicated | Late 2026 | 100x power generation over Starcloud-1, NVIDIA Blackwell B200 + AWS blades, largest commercial deployable radiator |
-| Starcloud-3 | Starship | TBD | 88,000-satellite constellation, GW-scale AI compute for hyperscalers (OpenAI named as target customer) |
-
-## Technology
-
-**Thermal Management:** Proprietary radiative cooling system claiming $0.002-0.005/kWh cooling costs versus terrestrial data center active cooling. Starcloud-2 will test the largest commercial deployable radiator ever sent to space.
-
-**Target Market:** Hyperscale AI compute providers. OpenAI explicitly named as target customer for Starcloud-3 constellation.
-
-## Timeline
-
-- **November 2025** — Starcloud-1 launched on Falcon 9 rideshare. First orbital AI workload demonstration (trained NanoGPT on Shakespeare, ran Google's Gemma LLM).
-- **March 30, 2026** — Raised $170M Series A at $1.1B valuation. Largest funding round in orbital compute sector to date.
-- **Late 2026** — Starcloud-2 scheduled launch on dedicated Falcon 9. 100x power increase, first commercial-scale radiative cooling test.
-- **TBD** — Starcloud-3 constellation deployment on Starship. 88,000-satellite target, GW-scale compute. No timeline given, indicating dependency on Starship economics.
+Starcloud is a Seattle-area startup developing orbital data center infrastructure for AI compute workloads. The company launched the first NVIDIA H100 GPU into orbit aboard Starcloud-1 in November 2025, demonstrating AI model training and inference in space.
 
 ## Strategic Position
 
-Starcloud's roadmap instantiates the tier-specific launch cost threshold model: rideshare for proof-of-concept, dedicated launch for commercial-scale testing, Starship for constellation economics. The company is structurally dependent on Starship achieving routine operations for its full business model (Starcloud-3) to activate.
\ No newline at end of file
+- **First-mover advantage:** First company to demonstrate AI model training in orbit (NanoGPT trained on Shakespeare, November 2025)
+- **NVIDIA partnership:** Explicit backing from NVIDIA, with NVIDIA Blog profile predating Series A raise
+- **SpaceX rideshare access:** Partnership with SpaceX for rideshare launch capacity
+- **Rapid capital formation:** Achieved unicorn valuation roughly five months after first proof-of-concept launch
+
+## Technology
+
+- **Satellite specs:** 60kg small satellites (approximately refrigerator-sized)
+- **Compute performance:** ~100x more compute than any prior space-based system
+- **Software compatibility:** Standard Earth-side ML frameworks (NanoGPT, Gemma) run without modification
+- **Demonstrated workloads:** LLM training (NanoGPT on Shakespeare corpus), LLM inference (Google Gemma queries)
+
+## Market Thesis
+
+"Demand for compute outpaces Earth's limits" — positioning orbital data centers as addressing terrestrial compute constraints rather than creating a new niche market.
+
+## Timeline
+
+- **2025-11-02** — Starcloud-1 launches aboard SpaceX Falcon 9 rideshare mission, carrying first NVIDIA H100 GPU into orbit
+- **2025-11-02** — Successfully demonstrates AI model training in orbit: NanoGPT trained on complete works of Shakespeare
+- **2025-11-02** — Successfully demonstrates AI inference in orbit: Google Gemma LLM running and responding to queries
+- **2026-03-30** — Raises $170M Series A at $1.1B valuation (TechCrunch), roughly five months after proof-of-concept launch
+
+## Sources
+
+- Data Center Dynamics: Starcloud-1 satellite reaches space with NVIDIA H100 GPU (Nov 2025)
+- CNBC coverage of Starcloud-1 launch (Nov 2025)
+- TechCrunch: Starcloud Series A announcement (March 2026)
+- NVIDIA Blog: Starcloud profile (pre-Series A)
+- GeekWire: Seattle startup coverage
\ No newline at end of file