| type | domain | description | confidence | source | created | secondary_domains | depends_on | related | reweave_edges |
|---|---|---|---|---|---|---|---|---|---|
| claim | space-development | A 100 MW orbital facility needs 500,000 kg of radiators — space is a thermos not a freezer so only on-orbit satellite data processing and edge inference are viable near-term | likely | Astra, space data centers feasibility analysis February 2026 | 2026-02-17 | | | | |
Space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density
The pitch for orbital data centers rests on a seductive premise: AI compute demand is growing exponentially, terrestrial data centers are hitting power and cooling constraints, and space offers unlimited solar energy plus passive cooling. The demand side is real -- the US data center pipeline will add 140 GW of new load against a current draw under 15 GW. But the supply-side physics are brutal.

Space is not a freezer; it is a thermos. With no convective medium, all heat must be radiated according to the Stefan-Boltzmann law: radiated power scales with the fourth power of temperature and linearly with surface area. At 320 K (a reasonable chip operating temperature), a perfect blackbody radiates roughly 600 watts per square meter from each face. The smallest useful AI data center runs approximately 100 MW. An orbital version would need about 100,000 square meters of panel area even with both faces radiating -- a 316-meter-by-316-meter array -- weighing over 500,000 kg at realistic radiator areal densities of 5 to 10 kg per square meter.
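A minimal sizing sketch of that arithmetic, assuming two-sided panels with an effective emissivity of about 0.85 (my assumptions, not from the source; together they reproduce the ~100,000 m² and ~315 m figures above):

```python
# Radiator sizing for a 100 MW orbital facility via the Stefan-Boltzmann law.
# Assumptions (mine): two-sided panels, effective emissivity ~0.85.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 320.0             # radiator temperature, K (chip-friendly)
EPSILON = 0.85        # effective emissivity (assumed)
P_WASTE = 100e6       # waste heat to reject, W (100 MW facility)

flux = 2 * EPSILON * SIGMA * T**4   # W/m^2 with both faces radiating (~1,010)
area = P_WASTE / flux               # ~99,000 m^2
side = area ** 0.5                  # ~315 m for a square array

for rho in (5.0, 10.0):             # areal density range from the note, kg/m^2
    print(f"{area:,.0f} m^2 ({side:.0f} m square), "
          f"{area * rho:,.0f} kg at {rho:g} kg/m^2")
```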
The bandwidth constraint is equally fatal for the highest-value workload. Large-scale AI training requires hundreds of terabits per second of aggregate inter-node bandwidth. Current satellite links top out at 200 Gbps (Starlink) to 6 Tbps (Blue Origin TeraWave). The gap is orders of magnitude.
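A quick scale check on that gap, treating "hundreds of terabits per second" as a ~400 Tbps placeholder (my assumption) against the link figures above:

```python
# Order-of-magnitude gap between training bandwidth needs and satellite links.
import math

NEED_TBPS = 400.0   # placeholder for "hundreds of Tbps" (assumed midpoint)
links_tbps = {"Starlink": 0.2, "Blue Origin TeraWave": 6.0}   # from the note

for name, tbps in links_tbps.items():
    gap = NEED_TBPS / tbps
    print(f"{name}: {gap:,.0f}x short, ~{math.log10(gap):.1f} orders of magnitude")
```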
What does work is on-orbit processing of satellite-generated data (kilowatt-scale, data already in orbit) and distributed LEO inference (independent nodes, acceptable latency). Terrestrial alternatives -- arctic data centers with 70%+ cooling cost reductions, nuclear-powered facilities -- beat orbital compute on every metric for the next decade. Google's Project Suncatcher analysis projects cost-competitiveness around 2035, contingent on launch costs falling to roughly $200/kg.
Evidence
- Stefan-Boltzmann law: ~600 W/m² radiative capacity at 320 K
- 100 MW facility requires ~100,000 m² of radiators weighing 500,000+ kg
- Solar input (1,366 W/m²) further reduces net radiative capacity (quantified in the sketch after this list)
- Google Project Suncatcher feasibility analysis (2035 projection)
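To quantify the solar-input bullet: a minimal sketch of net rejection on the sunlit face, assuming a solar absorptance of 0.2 (typical of white radiator coatings; my assumption) and the same emissivity as above:

```python
# Net heat rejection on a sunlit radiator face: emitted minus absorbed solar.
SIGMA = 5.670e-8                  # Stefan-Boltzmann constant, W/(m^2 K^4)
T, EPSILON = 320.0, 0.85          # radiator temp (K), emissivity (assumed)
ALPHA, SOLAR = 0.2, 1366.0        # solar absorptance (assumed), solar constant W/m^2

emitted = EPSILON * SIGMA * T**4  # ~505 W/m^2 per face
absorbed = ALPHA * SOLAR          # ~273 W/m^2 while illuminated
print(f"net sunlit rejection: {emitted - absorbed:.0f} W/m^2 "
      f"({(emitted - absorbed) / emitted:.0%} of the dark-side figure)")
```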
Challenges
Novel cooling technologies (droplet radiators, phase-change systems) could improve radiative efficiency, but none have been demonstrated at scale in space environments.
Relevant Notes:
- orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players — this note provides the detailed physics showing why the convergence thesis fails at datacenter scale
- on-orbit processing of satellite data is the proven near-term use case for space compute because it avoids bandwidth and thermal bottlenecks simultaneously — the viable near-term use case
- distributed LEO inference networks could serve global AI requests at 4-20ms latency competitive with centralized terrestrial data centers for latency-tolerant workloads — the viable long-term use case (light-time floor sanity-checked in the sketch below)
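A sanity check on the 4-20 ms figure, assuming a 550 km constellation altitude (my assumption; actual shells vary):

```python
# Light-time floor for a ground-to-LEO round trip at an assumed 550 km altitude.
C_KM_S = 299_792.458   # speed of light, km/s
ALT_KM = 550.0         # LEO altitude (assumed)

one_way_ms = ALT_KM / C_KM_S * 1e3
print(f"one-way: {one_way_ms:.2f} ms, round trip: {2 * one_way_ms:.2f} ms")
# ~1.8 ms one-way, ~3.7 ms round trip: consistent with a 4 ms lower bound
```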
Topics: