---
type: entity
entity_type: company
name: Starcloud
domain: space-development
founded: ~2024
headquarters: San Francisco, CA
status: active
tags: [orbital-data-center, ODC, AI-compute, thermal-management, YC-backed]
supports:
  - "Starcloud is the first company to operate a datacenter grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million satellite constellation"
  - "Orbital data center deployment follows a three-tier launch vehicle activation sequence (rideshare → dedicated → constellation) where each tier unlocks an order-of-magnitude increase in compute scale"
reweave_edges:
  - "Starcloud is the first company to operate a datacenter grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million satellite constellation|supports|2026-04-04"
  - "Orbital data center deployment follows a three-tier launch vehicle activation sequence (rideshare → dedicated → constellation) where each tier unlocks an order-of-magnitude increase in compute scale|supports|2026-04-04"
---

# Starcloud

**Type:** Orbital data center provider

**Status:** Active (Series A, March 2026)

**Headquarters:** San Francisco, CA

**Backing:** Y Combinator

## Overview

Starcloud develops orbital data centers (ODCs) for AI compute workloads, arguing that space offers superior economics through near-continuous solar power (>95% capacity factor) and free radiative cooling. The company's slogan: "demand for compute outpaces Earth's limits."

## Three-Tier Roadmap

| Satellite | Launch Vehicle | Launch Date | Capability |
|-----------|----------------|-------------|------------|
| Starcloud-1 | Falcon 9 rideshare | November 2025 | 60 kg SmallSat; NVIDIA H100; first AI workload in orbit (trained NanoGPT on Shakespeare, ran Gemma) |
| Starcloud-2 | Falcon 9 dedicated | Late 2026 | 100x the power generation of Starcloud-1; NVIDIA Blackwell B200 + AWS blades; largest commercial deployable radiator |
| Starcloud-3 | Starship | TBD | 88,000-satellite constellation; GW-scale AI compute for hyperscalers (OpenAI named as target customer) |

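The constellation-tier figures above imply a specific per-satellite power budget, which a quick back-of-envelope check makes concrete. The satellite count comes from the table; reading "GW-scale" as exactly 1 GW is an assumption for illustration.

```python
# Back-of-envelope check on the Starcloud-3 figures from the table above.
CONSTELLATION_SATS = 88_000      # stated constellation target
CONSTELLATION_POWER_W = 1e9      # "GW-scale" taken as 1 GW (assumption)

per_sat_kw = CONSTELLATION_POWER_W / CONSTELLATION_SATS / 1e3
print(f"Implied compute power per satellite: {per_sat_kw:.1f} kW")
# → roughly 11.4 kW per satellite for a 1 GW constellation
```

That ~11 kW per satellite sits well above the smallsat class of Starcloud-1, consistent with the roadmap's claim that each tier unlocks a large jump in scale.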
## Technology

**Thermal Management:** Proprietary radiative cooling system, with claimed cooling costs of $0.002–0.005/kWh versus active cooling in terrestrial data centers. Starcloud-2 will test the largest commercial deployable radiator ever sent to space.

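Radiative cooling is "free" in the sense of requiring no working fluid loop to an external sink, but the radiator area it demands follows directly from the Stefan-Boltzmann law. A minimal sketch, where the emissivity, radiator temperature, and heat load are illustrative assumptions (not Starcloud figures) and environmental loading from sunlight and Earth IR is ignored:

```python
# Radiator sizing sketch: heat rejected to deep space by thermal radiation.
# P = emissivity * sigma * A * T^4 per radiating side (Stefan-Boltzmann law).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_load_w: float, temp_k: float, emissivity: float,
                     two_sided: bool = True) -> float:
    """Radiator area needed to reject heat_load_w watts to deep space."""
    flux = emissivity * SIGMA * temp_k**4   # W rejected per m^2 of surface
    sides = 2 if two_sided else 1
    return heat_load_w / (flux * sides)

# Example: reject 1 kW (order of one GPU server, assumed) at a 300 K radiator
area = radiator_area_m2(1_000, temp_k=300, emissivity=0.9)
print(f"{area:.2f} m^2")  # ≈ 1.21 m^2 for a two-sided radiator
```

The strong T⁴ dependence is why radiator temperature dominates the sizing, and why scaling to B200-class heat loads on Starcloud-2 calls for a deployable radiator rather than body-mounted panels.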
**Target Market:** Hyperscale AI compute providers; OpenAI is explicitly named as a target customer for the Starcloud-3 constellation.

## Timeline

- **November 2025** — Starcloud-1 launched on a Falcon 9 rideshare. First orbital AI workload demonstration (trained NanoGPT on Shakespeare, ran Google's Gemma LLM).
- **March 30, 2026** — Raised a $170M Series A at a $1.1B valuation, the largest funding round in the orbital compute sector to date.
- **Late 2026** — Starcloud-2 scheduled to launch on a dedicated Falcon 9: a 100x power increase and the first commercial-scale radiative cooling test.
- **TBD** — Starcloud-3 constellation deployment on Starship: an 88,000-satellite target with GW-scale compute. No timeline has been given, reflecting the dependency on Starship economics.

## Strategic Position

Starcloud's roadmap instantiates the tier-specific launch cost threshold model: rideshare for proof of concept, dedicated launch for commercial-scale testing, and Starship for constellation economics. The company is structurally dependent on Starship reaching routine operations before its full business model (Starcloud-3) can activate.