astra: batch 6 — 10 orbital compute & space data center claims
Migrated from seed package:

- Distributed LEO inference networks (4-20ms latency)
- AI accelerator radiation tolerance (Google TPU 15 krad test)
- On-orbit satellite data processing (proven near-term use case)
- Orbital AI training incompatibility (bandwidth gap)
- Orbital compute servicing impossibility (trilemma)
- Orbital data centers overview (speculative but serious players)
- Five enabling technologies requirement (none at readiness)
- Solar irradiance advantage (8-10x ground-based)
- Thermal physics blocker (space is thermos not freezer)
- Starcloud company analysis (first GPU in orbit, SpaceX dependency)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
parent 1678c6cb08
commit b53c2015ff
10 changed files with 415 additions and 0 deletions

@@ -0,0 +1,55 @@

---
type: claim
domain: space-development
description: "YC S24 startup launched an H100 in orbit 21 months after founding and trained the first LLM in space but has raised only $34M against an 88,000-satellite vision while depending on SpaceX which filed for 1M competing satellites"
confidence: experimental
source: "Astra, web research compilation including CNBC, GeekWire, DCD, IEEE Spectrum, TechCrunch February 2026"
created: 2026-02-17
depends_on:
- "orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players"
- "on-orbit processing of satellite data is the proven near-term use case for space compute because it avoids bandwidth and thermal bottlenecks simultaneously"
- "SpaceX vertical integration across launch broadband and manufacturing creates compounding cost advantages that no competitor can replicate piecemeal"
---

# Starcloud is the first company to operate a datacenter-grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million-satellite constellation

## Company Overview

Starcloud was founded in January 2024 as Lumen Orbit (Y Combinator Summer 2024 batch) and rebranded to Starcloud in February 2025. Team of approximately 5 people as of late 2025.

**Key team:** Philip Johnston (CEO) — former McKinsey, Harvard/Wharton/Columbia. Ezra Feilden (CTO) — decade of satellite engineering, former Airbus, PhD in deployable structures. Adi Oltean (Chief Engineer) — former SpaceX Starlink network team, former Microsoft, 25+ patents. Bailey Montano (Lead Mechanical) — former SpaceX Raptor/Merlin, former Helion Energy.

## Funding & Backers

Total raised: approximately $27-34M across 8 rounds. Key investors: NFX, Y Combinator, In-Q-Tel (CIA-backed — signals national security interest), NVIDIA Inception Program, 468 Capital, scout funds from a16z and Sequoia.

## What They Have Built

**Starcloud-1** (launched November 2, 2025 on Falcon 9): ~60 kg satellite at 325 km carrying a single NVIDIA H100 — the first datacenter-grade GPU in space, 100x more powerful than any GPU previously operated in orbit. Demonstrated: trained NanoGPT on Shakespeare, ran Google Gemma, processed Capella Space SAR data as a customer workload.

**Starcloud-2** (planned October 2026): Multiple H100s plus NVIDIA Blackwell B200, ~100x the power generation of Starcloud-1, running Crusoe Cloud for public cloud workloads, reportedly the first satellite with AWS Outposts hardware.

**FCC filing** (February 2026): Up to 88,000 satellites for orbital AI compute.

## The SpaceX Dependency

This is the most interesting strategic risk. SpaceX controls Starcloud's access to orbit (launch pricing), its data routing infrastructure (Starlink), and is building a directly competing product (a million-satellite compute constellation). This mirrors the classic platform-as-competitor dynamic from cloud computing — except the platform literally decides whether your satellites reach space.

## Economics

Starcloud projects a 40 MW orbital data center costing $8.2M over ten years versus $167M terrestrial. This comparison is accurate for power and cooling operational costs but deeply misleading as a measure of total cost: 25,000 Blackwell servers alone would cost ~$12-13B. The power savings represent 0.007% of total system cost. The real question is whether launch costs drop enough to make orbital deployment competitive on total cost.

## Challenges

The capital gap between $34M raised and 88,000 satellites is astronomical. Consumer GPUs are not designed for space radiation. Scaling from one 60 kg satellite to gigawatt-scale arrays is a jump of multiple orders of magnitude.

---

Relevant Notes:
- [[orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players]] — Starcloud is the company most concretely advancing this thesis
- [[space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density]] — the physics constraint Starcloud must solve at scale
- [[on-orbit processing of satellite data is the proven near-term use case for space compute because it avoids bandwidth and thermal bottlenecks simultaneously]] — Starcloud's Capella workload validates the near-term use case
- [[SpaceX vertical integration across launch broadband and manufacturing creates compounding cost advantages that no competitor can replicate piecemeal]] — SpaceX controls launch, networking, and is building a competing product

Topics:
- [[space exploration and development]]

@@ -0,0 +1,41 @@

---
type: claim
domain: space-development
description: "LEO at 500-2000 km gives 4-20ms round-trip latency — acceptable for many AI inference applications and potentially lower than routing to a distant terrestrial hyperscaler"
confidence: experimental
source: "Astra, space data centers feasibility analysis February 2026; SpaceX FCC filing January 2026"
created: 2026-02-17
secondary_domains:
- critical-systems
depends_on:
- "Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy"
- "LEO satellite internet is the defining battleground of the space economy with Starlink 5 years ahead and only 3-4 mega-constellations viable"
---

# Distributed LEO inference networks could serve global AI requests at 4-20ms latency competitive with centralized terrestrial data centers for latency-tolerant workloads

Low Earth orbit at 500 to 2,000 km altitude produces approximately 4 to 20 milliseconds of round-trip latency to ground stations. This is not competitive with sub-millisecond latency available within a terrestrial data center, but it is acceptable for many AI inference use cases -- including content recommendation, search ranking, translation, summarization, and conversational AI. For users geographically distant from hyperscale data centers, orbital inference could actually deliver lower latency than routing through multiple terrestrial network hops to a distant facility.
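
A quick sanity check on the quoted range, assuming straight-line propagation at the speed of light with no processing or queuing delay; the slant factor is a rough stand-in for satellites that are not directly overhead:

```python
# Rough round-trip latency to a LEO compute node (illustrative assumptions only:
# line-of-sight propagation, no processing/queuing delay, simple slant factor).
C_KM_PER_MS = 299_792.458 / 1000  # speed of light in km per millisecond

def round_trip_ms(altitude_km: float, slant_factor: float = 1.0) -> float:
    """Round-trip time for one up-and-down hop to a satellite at this altitude."""
    return 2 * altitude_km * slant_factor / C_KM_PER_MS

for alt_km in (500, 1000, 2000):
    print(f"{alt_km:>4} km: overhead {round_trip_ms(alt_km):4.1f} ms, "
          f"slanted path {round_trip_ms(alt_km, 1.5):4.1f} ms")
# ~3.3 ms overhead at 500 km up to ~20 ms on a slanted path at 2,000 km,
# matching the 4-20 ms range above.
```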

Inference workloads are architecturally suited to distributed orbital deployment. Unlike training, which requires constant high-bandwidth all-to-all communication between thousands of GPUs for gradient synchronization, inference runs are relatively independent -- each request can be served by a single node or small cluster without tight coordination with other nodes. Bandwidth demands per node are manageable (the model is loaded once; each request involves kilobytes to megabytes of input/output, not the terabytes of parameter gradients that training demands).

SpaceX's January 2026 FCC filing for up to one million satellites at 500-2,000 km altitudes specifically targets this architecture -- distributed processing nodes harnessing near-constant solar power, leveraging Starlink's existing laser-mesh inter-satellite network for routing. The potential SpaceX-xAI merger would vertically integrate this network infrastructure with Grok inference demand. Google's Project Suncatcher envisions 81-satellite clusters in 1 km formations, also targeting inference and Earth observation processing.

The critical dependencies are launch cost (Google pins cost-competitiveness at $200/kg, projected around 2035), thermal management (each node must dissipate its compute heat radiatively), and bandwidth (sufficient to deliver inference results but not for the massive data transfers training requires).

## Evidence
- SpaceX FCC filing (January 2026) for up to 1 million satellites optimized for AI inference
- Google Project Suncatcher — 81-satellite clusters targeting inference workloads
- LEO orbital mechanics — 4-20ms round-trip latency at 500-2,000 km altitude

## Challenges
Terrestrial edge computing and CDN expansion may close the latency gap for most users before orbital inference becomes cost-competitive. The 2035 timeline assumes Starship cost curves materialize.

---

Relevant Notes:
- [[orbital AI training is fundamentally incompatible with space communication links because distributed training requires hundreds of Tbps aggregate bandwidth while orbital links top out at single-digit Tbps]] — inference works because it does not require all-to-all bandwidth
- [[space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density]] — thermal management remains the binding constraint even for distributed inference
- [[SpaceX vertical integration across launch broadband and manufacturing creates compounding cost advantages that no competitor can replicate piecemeal]] — SpaceX uniquely controls both launch and the networking infrastructure

Topics:
- [[space exploration and development]]

@@ -0,0 +1,37 @@

---
type: claim
domain: space-development
description: "Google tested Trillium v6e TPUs in a 67 MeV proton beam with no hard failures up to 15 krad total ionizing dose — challenging the assumption that AI compute requires expensive radiation-hardened hardware"
confidence: experimental
source: "Astra, Google Project Suncatcher feasibility study late 2025"
created: 2026-02-17
depends_on:
- "space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density"
---

# Modern AI accelerators are more radiation-tolerant than expected because Google TPU testing showed no hard failures up to 15 krad suggesting consumer chips may survive LEO environments

Google's Project Suncatcher feasibility study included proton beam testing of their Trillium (v6e) TPU accelerators at 67 MeV. The result was surprising: no hard failures up to 15 krad(Si) total ionizing dose. This is a genuinely important data point because the conventional assumption in space systems engineering is that commercial-grade semiconductors require expensive radiation hardening (or radiation-hardened by design alternatives that are generations behind in performance) to survive in orbit.

Space radiation damages electronics through three mechanisms. Single Event Upsets (SEUs) are bit flips from high-energy particle strikes -- correctable with error-correcting code memory but they increase compute overhead. Total Ionizing Dose (TID) is cumulative degradation that shifts threshold voltages and increases leakage current over the satellite's operational lifetime. Single Event Latchup can cause destructive overcurrent conditions requiring power cycling or permanently damaging circuits.

The Google result addresses TID specifically and suggests that modern process nodes (5nm and below) may be inherently more radiation-tolerant than older process generations. If confirmed across other chip architectures, this significantly de-risks the hardware side of orbital compute. It does not eliminate the SEU problem -- bit flips will still occur at elevated rates compared to terrestrial operation -- but ECC memory and algorithmic redundancy can manage this for inference workloads where occasional soft errors are tolerable.
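
To put a rough number on the soft-error overhead, a toy estimate of daily bit flips in accelerator memory; the per-bit upset rates are illustrative placeholders spanning commonly quoted orders of magnitude, not measured figures for any specific part or orbit:

```python
# Illustrative-only SEU estimate for accelerator memory in LEO.
# The upset rates below are hypothetical placeholders; real rates depend on
# orbit, shielding, process node, memory type, and solar activity.
MEMORY_GB = 80                          # e.g. HBM capacity of a datacenter-class GPU
TOTAL_BITS = MEMORY_GB * 8 * 1024**3

for upsets_per_bit_day in (1e-10, 1e-8):    # assumed low / high LEO upset rates
    upsets_per_day = TOTAL_BITS * upsets_per_bit_day
    print(f"assumed rate {upsets_per_bit_day:.0e}/bit-day -> ~{upsets_per_day:,.0f} flips/day")
# Tens to thousands of correctable flips per day at these assumed rates:
# tolerable with ECC and retries for inference, but a growing throughput tax.
```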

Critical caveats: Starcloud operating an H100 in orbit for a demonstration is fundamentally different from operating thousands of H100s reliably for years. Long-duration LEO operation accumulates dose over years, and the South Atlantic Anomaly creates radiation hotspots that elevate local dose rates. Still, the Google result shifts the prior: radiation hardening may be less of a showstopper than thermal management for orbital compute viability.

## Evidence
- Google Trillium v6e TPU proton beam testing — no hard failures to 15 krad(Si)
- Modern 5nm process node characteristics suggesting inherent radiation tolerance
- Starcloud H100 orbital demonstration (single GPU, short duration)

## Challenges
Long-duration operation over years with cumulative dose, SAA transits, and solar particle events remains uncharacterized for commercial AI hardware. The TPU result may not generalize to GPU architectures.

---

Relevant Notes:
- [[space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density]] — if radiation is less of a problem than expected, thermal management becomes even more clearly the binding constraint
- [[orbital data centers require five enabling technologies to mature simultaneously and none currently exist at required readiness]] — radiation tolerance is one of the five enabling conditions

Topics:
- [[space exploration and development]]

@@ -0,0 +1,39 @@

---
type: claim
domain: space-development
description: "Earth observation satellites generate 10 GB per second of raw data and processing in orbit transmits only results — Planet Labs and Google Suncatcher target this workload first"
confidence: likely
source: "Astra, space data centers feasibility analysis February 2026; Google Project Suncatcher partnership with Planet Labs"
created: 2026-02-17
depends_on:
- "space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density"
- "the space manufacturing killer app sequence is pharmaceuticals now ZBLAN fiber in 3-5 years and bioprinted organs in 15-25 years each catalyzing the next tier of orbital infrastructure"
---

# On-orbit processing of satellite data is the proven near-term use case for space compute because it avoids bandwidth and thermal bottlenecks simultaneously

The cleanest near-term use case for orbital compute is processing satellite-generated data where it is collected rather than downlinking raw data to terrestrial facilities. Earth observation satellites generate approximately 10 GB/s of synthetic aperture radar data. Transmitting this raw data to ground stations faces severe bandwidth constraints -- satellite-to-ground links are limited, ground station pass windows are brief, and the data volume is enormous. Processing in orbit and transmitting only the results (classifications, detected changes, compressed features) dramatically reduces both the bandwidth requirement and the end-to-end latency from observation to actionable intelligence.
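
A rough sketch of why downlinking the raw take does not close; only the 10 GB/s collection figure comes from this note, and the duty cycle, pass length, and link rate are assumptions chosen for illustration:

```python
# Illustrative comparison: raw SAR data generated per orbit vs. what one
# ground-station pass can downlink. Duty cycle, pass length, and link rate
# are assumptions, not sourced figures.
GEN_RATE_GBYTES_PER_S = 10     # raw SAR generation while imaging (from the note)
ORBIT_MINUTES = 90             # typical LEO orbital period
IMAGING_DUTY_CYCLE = 0.1       # assume the sensor collects 10% of each orbit
PASS_MINUTES = 10              # assumed usable ground-station pass
DOWNLINK_GBITS_PER_S = 5       # assumed RF downlink rate

raw_gb = GEN_RATE_GBYTES_PER_S * ORBIT_MINUTES * 60 * IMAGING_DUTY_CYCLE
downlinked_gb = DOWNLINK_GBITS_PER_S / 8 * PASS_MINUTES * 60
print(f"raw data per orbit:     ~{raw_gb:,.0f} GB")
print(f"downlinked in one pass: ~{downlinked_gb:,.0f} GB ({downlinked_gb / raw_gb:.0%})")
# ~5,400 GB collected vs ~375 GB downlinked under these assumptions -- shipping
# classifications and detected changes instead of raw pixels closes the gap.
```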

This use case sidesteps every major objection to orbital compute. The thermal problem dissolves because on-orbit processing loads are measured in kilowatts, not megawatts -- a single compute node per satellite or small cluster, well within the thermal management capabilities of current satellite bus designs. The bandwidth problem inverts from constraint to advantage -- instead of needing to move data up to orbit for processing, the data is already there. The latency problem disappears because the alternative (downlink, terrestrial process, uplink results) takes hours, making even modest orbital processing a dramatic improvement.

Planet Labs' partnership with Google for Project Suncatcher explicitly targets this workload first. Axiom Space's orbital data center concept similarly focuses on satellite-proximate processing. This is also the workload that SpaceX's FCC filing implicitly supports through Starlink's optical inter-satellite link mesh.

The strategic importance of this use case goes beyond its direct market size. It establishes orbital compute as a real business with real revenue, validates hardware in the orbital environment, and builds operational experience that de-risks the harder use cases that follow.

## Evidence
- Earth observation satellites generating ~10 GB/s of SAR data
- Planet Labs + Google Project Suncatcher partnership targeting on-orbit processing
- Axiom Space orbital data center concept focused on satellite-proximate processing
- Starcloud Capella Space customer workload demonstrating viable business model

## Challenges
Improved ground station networks and higher-bandwidth satellite-to-ground links may reduce the advantage of on-orbit processing by making raw data downlink more feasible.

---

Relevant Notes:
- [[space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density]] — on-orbit processing sidesteps this because compute loads per satellite are kilowatts not megawatts
- [[LEO satellite internet is the defining battleground of the space economy with Starlink 5 years ahead and only 3-4 mega-constellations viable]] — Starlink's optical mesh provides the inter-satellite networking for distributed on-orbit processing

Topics:
- [[space exploration and development]]

@@ -0,0 +1,41 @@

---
type: claim
domain: space-development
description: "A large training run on tens of thousands of GPUs needs constant all-to-all gradient exchange at hundreds of Tbps — current satellite links deliver 200 Gbps per node with next-gen targeting 1 Tbps making orbital training likely never viable"
confidence: likely
source: "Astra, space data centers feasibility analysis February 2026; Google Project Suncatcher analysis"
created: 2026-02-17
depends_on:
- "distributed LEO inference networks could serve global AI requests at 4-20ms latency competitive with centralized terrestrial data centers for latency-tolerant workloads"
- "space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density"
---

# Orbital AI training is fundamentally incompatible with space communication links because distributed training requires hundreds of Tbps aggregate bandwidth while orbital links top out at single-digit Tbps

Large-scale AI training is the one workload that virtually every serious analysis concludes will never move to orbit. The reason is bandwidth, and the gap is not marginal -- it is orders of magnitude.

Training a frontier model involves distributing computation across tens of thousands of GPUs that must constantly exchange gradient updates during backpropagation. This requires aggregate inter-node bandwidth measured in hundreds of terabits per second with tight synchronization (microsecond-scale consistency across nodes). A single terrestrial data center typically has 100-plus Tbps of aggregate internal bandwidth, with individual node interconnects running at 400 Gbps to 800 Gbps (moving toward 1.6 Tbps with next-generation InfiniBand and Ethernet standards).

Current state-of-the-art satellite communication links deliver far less: Starlink at roughly 200 Gbps per satellite (next generation targeting 1 Tbps), Blue Origin TeraWave at up to 6 Tbps, and Axiom optical inter-satellite links at 10 Gbps. Even Blue Origin's most ambitious specification falls two orders of magnitude short of the aggregate bandwidth a terrestrial training cluster provides.
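
A minimal sketch of the gradient-synchronization arithmetic behind the gap; model size, precision, step time, and the ring all-reduce pattern are assumptions chosen only to show the scale, not figures from the source:

```python
# Illustrative data-parallel gradient-sync bandwidth per GPU, compared with a
# per-satellite link budget. All model/step parameters are assumptions.
PARAMS = 1e12              # assumed 1-trillion-parameter model
BYTES_PER_GRADIENT = 2     # bf16 gradients
STEP_SECONDS = 10          # assumed wall-clock time per training step

# A ring all-reduce moves roughly 2x the gradient volume per participant per step.
bits_per_step = 2 * PARAMS * BYTES_PER_GRADIENT * 8
per_gpu_tbps = bits_per_step / STEP_SECONDS / 1e12

print(f"gradient sync per GPU:       ~{per_gpu_tbps:.1f} Tbps sustained")
print("per-satellite orbital links:  ~0.2-1 Tbps (Starlink class), 6 Tbps (TeraWave)")
# Even one GPU's sync traffic rivals or exceeds an entire satellite's link budget
# under these assumptions, before multiplying by thousands of GPUs per training run.
```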

The bandwidth constraint is compounded by latency jitter. Distributed training algorithms (data parallelism, model parallelism, pipeline parallelism) all require deterministic communication timing to maintain training efficiency. Orbital link latency varies with satellite position, atmospheric conditions on ground links, and inter-satellite hop count -- introducing jitter that degrades training throughput even when average bandwidth is sufficient.

Starcloud's demonstration of "training an LLM in space" almost certainly involved a small model on a single GPU -- a valid proof of concept for orbital hardware operation but not evidence that distributed training at frontier scale is feasible. This constraint shapes the entire orbital compute opportunity: inference yes (eventually), on-orbit satellite processing yes (now), training no (likely never).

## Evidence
- Terrestrial data center aggregate bandwidth: 100+ Tbps with 400-800 Gbps per node
- Starlink satellite links: 200 Gbps current, 1 Tbps next-gen target
- Blue Origin TeraWave: up to 6 Tbps (most ambitious orbital link)
- Gap: 2+ orders of magnitude between orbital and terrestrial bandwidth

## Challenges
Novel training algorithms that reduce communication requirements (local SGD, federated learning approaches) could narrow the gap, but the fundamental bandwidth asymmetry makes orbital training uncompetitive for frontier-scale models.

---

Relevant Notes:
- [[distributed LEO inference networks could serve global AI requests at 4-20ms latency competitive with centralized terrestrial data centers for latency-tolerant workloads]] — inference works because it does not require all-to-all bandwidth
- [[on-orbit processing of satellite data is the proven near-term use case for space compute because it avoids bandwidth and thermal bottlenecks simultaneously]] — the viable alternative to moving training to orbit

Topics:
- [[space exploration and development]]

@@ -0,0 +1,39 @@

---
type: claim
domain: space-development
description: "No technician can swap a failed drive in orbit — every failure is permanent without servicing infrastructure that does not exist at scale creating a reliability-cost tradeoff that favors disposable architecture"
confidence: likely
source: "Astra, space data centers feasibility analysis February 2026; Microsoft Project Natick comparison"
created: 2026-02-17
depends_on:
- "space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density"
- "orbital debris is a classic commons tragedy where individual launch incentives are private but collision risk is externalized to all operators"
---

# Orbital compute hardware cannot be serviced making every component either radiation-hardened redundant or disposable with failed hardware becoming debris or requiring expensive deorbit

The impossibility of on-orbit maintenance creates a fundamental reliability-cost tradeoff that terrestrial data centers never face. In a ground facility, a failed drive is swapped in minutes. A failed GPU is replaced by next-day delivery. In orbit, every failure is permanent for the life of that satellite.

This forces a trilemma. First, radiation-hardened components -- but radiation-hardened processors are generations behind commercial silicon in performance and orders of magnitude more expensive, negating the economic case for orbital compute. Second, massive redundancy -- but every redundant component adds mass that must be launched, and the cost of launching mass is the critical economic variable. Third, disposable architecture -- accept failures and replace entire satellites, but this requires a launch cadence and cost structure that does not yet exist and creates space debris from deorbiting failed units.

Microsoft's Project Natick provides an instructive comparison. Their sealed underwater data centers achieved a 0.7 percent server failure rate versus 5.9 percent on land over two years -- demonstrating that controlled environments without human access can actually improve reliability. But underwater is retrievable at modest cost. Orbit is not. Microsoft ultimately killed Project Natick in 2024 because the deployment model was impractical at scale despite the reliability improvement.

The maintenance constraint also limits hardware refresh cycles. Terrestrial data centers upgrade GPUs every 3 to 5 years. Orbital hardware has a fixed capability at launch for its entire 5 to 10 year operational lifetime. A satellite launched in 2027 with H100-class GPUs will be running 2027-era hardware in 2032, by which time terrestrial facilities will have cycled through one or two generations of dramatically more powerful accelerators.

## Evidence
- Microsoft Project Natick — 0.7% vs 5.9% failure rate but killed in 2024 due to deployment impracticality
- Astroscale 15 m closest commercial approach to debris (single-mission demonstrations only)
- Northrop Grumman MEV life-extension docking (single-mission scale)
- GPU refresh cycles: 3-5 years terrestrial vs fixed capability for orbital lifetime

## Challenges
Autonomous satellite servicing and modular hardware architectures could change this equation, but require a servicing fleet that does not exist and would add significant cost overhead.

---

Relevant Notes:
- [[orbital debris is a classic commons tragedy where individual launch incentives are private but collision risk is externalized to all operators]] — failed orbital compute nodes add to the debris problem
- [[reusability without rapid turnaround and minimal refurbishment does not reduce launch costs as the Space Shuttle proved over 30 years]] — the Shuttle lesson applies: servicing in orbit may cost more than replacement

Topics:
- [[space exploration and development]]

@@ -0,0 +1,40 @@

---
type: claim
domain: space-development
description: "Starcloud trained an LLM in space, Axiom launched orbital nodes, SpaceX filed for millions of satellites, Google plans Suncatcher — economics do not close yet but FCC filings signal conviction from major players"
confidence: speculative
source: "Astra, web research compilation February 2026"
created: 2026-02-17
secondary_domains:
- critical-systems
depends_on:
- "space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density"
- "Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy"
---

# Orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players

Space-based data centers have exploded in activity despite being the most speculative sector in the space economy. Axiom Space launched its first two orbital data center nodes to LEO on January 11, 2026. Starcloud (NVIDIA-backed, a Y Combinator company) deployed NVIDIA H100-class systems in orbit, trained an LLM in space, ran Google Gemma in orbit, and filed an FCC proposal for up to 88,000 satellites. SpaceX filed FCC plans for millions of satellites leveraging Starlink integration for orbital computing. Google's Project Suncatcher plans solar-powered satellite constellations carrying specialty AI chips for a 2027 demonstration.

The theoretical advantages are real: unlimited solar power in certain orbits, radiative cooling in vacuum, and escape from the terrestrial power and cooling constraints hitting AI data centers. LEO data centers at 550 km have approximately 3.7 ms round-trip latency -- comparable to many terrestrial connections. But the challenges are formidable: radiation-hardened hardware requirements, cooling limitations (radiative only, no convection), extremely high cost of launching power-dense compute, maintenance and upgradeability constraints, and bandwidth limitations for data transfer.

The economics do not currently close for general cloud computing. But the convergence of insatiable AI compute demand, falling launch costs, and advancing in-space solar power could make orbital data centers viable for specific workloads before general computing moves to orbit. The concept is real but overhyped on timeline. Google projects cost-competitiveness around 2035 contingent on $200/kg launch costs. Terrestrial alternatives -- arctic data centers, nuclear-powered facilities, on-site generation -- beat orbital compute on every metric for the next decade.

## Evidence
- Axiom Space orbital data center nodes launched January 2026
- Starcloud H100 in orbit, LLM trained in space (November 2025)
- SpaceX FCC filing for millions of satellites (January 2026)
- Google Project Suncatcher 2027 demonstration planned
- Google feasibility analysis projecting cost-competitiveness ~2035 at $200/kg

## Challenges
Thermal management is the showstopper at scale. A 100 MW orbital data center would need ~100,000 m² of radiators weighing 500,000+ kg. Space is a thermos, not a freezer.

---

Relevant Notes:
- [[space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density]] — the physics deep-dive on why datacenter-scale orbital compute fails
- [[Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy]] — orbital data centers require Starship-era launch costs

Topics:
- [[space exploration and development]]

@@ -0,0 +1,45 @@

---
type: claim
domain: space-development
description: "Starship-class launch at sub-$100/kg plus advanced radiative thermal management plus Tbps optical links plus radiation-tolerant AI accelerators plus autonomous servicing — all five needed and none proven at scale"
confidence: likely
source: "Astra, space data centers feasibility analysis February 2026; Google Project Suncatcher analysis"
created: 2026-02-17
depends_on:
- "space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density"
- "Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy"
---

# Orbital data centers require five enabling technologies to mature simultaneously and none currently exist at required readiness

The viability of orbital data centers at commercially meaningful scale depends on the simultaneous maturation of five independent enabling technologies. The failure of any single one is sufficient to block the entire concept. As of early 2026, none of the five exist at the required readiness level.

**1. Starship-class launch at $100/kg or less.** Google's feasibility analysis pins orbital compute cost-competitiveness at $200/kg launch costs, projected around 2035 if Starship achieves 180 flights per year at full reusability. Current Falcon 9 customer pricing is approximately $2,720/kg. Status: TRL 7-8 for the vehicle, but the cost target depends on operational tempo that is TRL 4-5.

**2. Advanced radiative thermal management at data center scale.** A 100 MW orbital facility needs approximately 100,000 square meters of radiator surface weighing over 500,000 kg. No design, prototype, or credible roadmap exists for megawatt-scale radiative cooling in orbit. Status: TRL 2-3 at megawatt scale.

**3. High-bandwidth optical inter-satellite links at Tbps-plus.** Distributed orbital compute requires inter-node communication far beyond current capability. Starlink at 200 Gbps, next gen targeting 1 Tbps. Blue Origin TeraWave at up to 6 Tbps. Terrestrial data center aggregate bandwidth exceeds 100 Tbps. Status: TRL 6-7 for current generation, TRL 3-4 for the 10-100 Tbps links orbital compute at scale would require.

**4. Radiation-tolerant or radiation-hardened AI accelerators.** Google's TPU testing (no hard failures to 15 krad) is encouraging but represents one chip architecture in short-duration exposure. Long-duration operation remains uncharacterized for commercial AI hardware. Status: TRL 4-5 for commercial chips in LEO.

**5. Autonomous satellite servicing or reliable disposable architecture.** Without maintenance capability, every satellite has a fixed operational lifetime of 5-10 years. Status: TRL 3-4 for commercial servicing, with single-mission demonstrations only.

The probability of all five maturing on compatible timelines is the product of their individual probabilities -- substantially lower than any single probability.
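
A trivial illustration of the compounding, using made-up per-technology probabilities purely to show the arithmetic -- the note itself assigns no numerical estimates:

```python
# Hypothetical maturation probabilities for each enabling technology.
# The values are illustrative placeholders; the point is the structure
# (a product of roughly independent terms), not any specific estimate.
p_mature = {
    "launch at $100-200/kg":             0.5,
    "MW-scale radiative cooling":        0.3,
    "10-100 Tbps optical links":         0.4,
    "rad-tolerant AI accelerators":      0.6,
    "servicing or disposable economics": 0.4,
}

joint = 1.0
for tech, p in p_mature.items():
    joint *= p
print(f"joint probability, assuming independence: {joint:.4f}")
# 0.5 * 0.3 * 0.4 * 0.6 * 0.4 = 0.0144 -- well below the most pessimistic
# single estimate, which is the structural point of the claim.
```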

## Evidence
- Google Project Suncatcher feasibility analysis (2035 cost-competitiveness projection)
- Current TRL assessments across all five technology areas
- Falcon 9 pricing at ~$2,720/kg vs required $100-200/kg

## Challenges
Distributed architecture (thousands of small satellites) changes the thermal and servicing math but multiplies launch costs and introduces distributed computing challenges that compound the bandwidth requirement.

---

Relevant Notes:
- [[space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density]] — technology #2 is the hardest with no credible roadmap
- [[Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy]] — technology #1 is the keystone that gates all others economically
- [[modern AI accelerators are more radiation-tolerant than expected because Google TPU testing showed no hard failures up to 15 krad suggesting consumer chips may survive LEO environments]] — technology #4 showing promising early results

Topics:
- [[space exploration and development]]

@@ -0,0 +1,38 @@

---
type: claim
domain: space-development
description: "At 1366 W/m² with no atmosphere, clouds, or night cycle in sun-synchronous orbits, space solar eliminates the power constraint that gates terrestrial data center expansion"
confidence: proven
source: "Astra, space data centers feasibility analysis February 2026; Google Project Suncatcher feasibility study"
created: 2026-02-17
secondary_domains:
- energy
depends_on:
- "space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density"
- "power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited"
---

# Solar irradiance in LEO delivers 8-10x ground-based solar power with near-continuous availability in sun-synchronous orbits making orbital compute power-abundant where terrestrial facilities are power-starved

Solar irradiance in low Earth orbit is approximately 1,366 watts per square meter -- the full output of the sun unattenuated by atmosphere. After accounting for atmospheric absorption, weather, day/night cycles, and panel orientation losses, ground-based solar panels achieve roughly 150-200 W/m² of average output. The orbital advantage is therefore 7-10x in raw power density per unit area.
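
The ratio follows directly from the two figures above:

```python
# Orbital vs. time-averaged ground solar power density, using this note's figures.
SOLAR_CONSTANT_W_M2 = 1366
GROUND_AVG_W_M2 = (150, 200)

low = SOLAR_CONSTANT_W_M2 / max(GROUND_AVG_W_M2)   # vs the better ground average
high = SOLAR_CONSTANT_W_M2 / min(GROUND_AVG_W_M2)  # vs the worse ground average
print(f"orbital advantage: ~{low:.1f}x to ~{high:.1f}x")   # ~6.8x to ~9.1x
```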

In dawn-dusk sun-synchronous orbits (approximately 600-800 km altitude), satellites maintain a nearly constant angle to the sun and can achieve near-continuous illumination. In other LEO orbits eclipse periods still occur, but they are short (roughly 30 minutes per 90-minute orbit) and manageable with battery buffering. There are no grid interconnection queues, no utility contracts, no transmission losses, no permitting delays, and no competition with other users for the same electrical infrastructure.
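
What "manageable with battery buffering" means for a single node, with node power draw and battery figures assumed purely for illustration:

```python
# Illustrative eclipse-buffering estimate for one orbital compute node.
# Node power, depth of discharge, and pack energy density are assumptions.
NODE_POWER_KW = 5            # assumed compute + avionics draw
ECLIPSE_MINUTES = 30         # worst-case eclipse per orbit (from the note)
DEPTH_OF_DISCHARGE = 0.3     # shallow cycling for multi-year battery life
PACK_WH_PER_KG = 200         # assumed Li-ion pack-level energy density

eclipse_wh = NODE_POWER_KW * 1000 * ECLIPSE_MINUTES / 60
pack_wh = eclipse_wh / DEPTH_OF_DISCHARGE
pack_kg = pack_wh / PACK_WH_PER_KG
print(f"eclipse energy ~{eclipse_wh:.0f} Wh -> pack ~{pack_wh:.0f} Wh, ~{pack_kg:.0f} kg")
# ~2.5 kWh per eclipse needs roughly an 8.3 kWh pack (~42 kg) at these assumptions
# -- real but modest next to solar array and radiator mass.
```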

This is the strongest genuine advantage of orbital compute. Power generation in space is not a speculative technology -- it is mature, well-characterized physics exploited by every satellite in orbit since the dawn of the space age. The solar panels themselves are the most cost-effective component of the orbital compute stack. The irony is that while power generation is essentially solved in orbit, dissipating the waste heat from using that power is the unsolved showstopper. Power-abundant and cooling-constrained is the exact inverse of the terrestrial situation (cooling-abundant, power-constrained), which is why the orbital data center thesis is seductive but the physics do not cooperate at scale.

## Evidence
- Solar constant: 1,366 W/m² in LEO vs 150-200 W/m² average ground-based
- Sun-synchronous orbit mechanics providing near-continuous illumination
- Every satellite in orbit validates space solar power generation

## Challenges
[[space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density]] — the fatal irony: orbital power is abundant but dissipating waste heat is the binding constraint.

---

Relevant Notes:
- [[space-based solar power economics depend almost entirely on launch cost reduction with viability threshold near 10 dollars per kg to orbit]] — the alternative: beam orbital solar to terrestrial data centers
- [[power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited]] — for compute, the constraint shifts from power to thermal management

Topics:
- [[space exploration and development]]

@@ -0,0 +1,40 @@

---
type: claim
domain: space-development
description: "A 100 MW orbital facility needs 500,000 kg of radiators — space is a thermos not a freezer so only on-orbit satellite data processing and edge inference are viable near-term"
confidence: likely
source: "Astra, space data centers feasibility analysis February 2026"
created: 2026-02-17
secondary_domains:
- critical-systems
depends_on:
- "Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy"
- "power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited"
---

# Space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density

The pitch for orbital data centers rests on a seductive premise: AI compute demand is growing exponentially, terrestrial data centers are hitting power and cooling constraints, and space offers unlimited solar energy plus passive cooling. The demand side is real -- the US data center pipeline will add 140 GW of new load against current draw under 15 GW.

But the supply-side physics are brutal. Space is not a freezer; it is a thermos. With no convective medium, all heat must be radiated according to the Stefan-Boltzmann law, where power radiated scales with the fourth power of temperature and linearly with surface area. At 320 K (a reasonable chip operating temperature), a perfect blackbody radiates roughly 600 watts per square meter. The smallest useful AI data center runs approximately 100 MW. An orbital version would need about 100,000 square meters of radiator surface -- a 316-meter-by-316-meter array -- weighing over 500,000 kg at realistic radiator mass of 5 to 10 kg per square meter.
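
A sketch of the radiator sizing arithmetic, assuming double-sided radiation, emissivity near 0.9, and neglecting absorbed sunlight and Earth infrared (which make the real requirement worse):

```python
# Stefan-Boltzmann radiator sizing for a 100 MW orbital data center.
# Emissivity, double-sided radiation, and specific mass are assumptions;
# absorbed solar and Earth IR loads are ignored, so this is optimistic.
SIGMA = 5.670e-8             # Stefan-Boltzmann constant, W m^-2 K^-4
T_RADIATOR_K = 320           # assumed radiating temperature
EMISSIVITY = 0.9
SIDES = 2                    # a flat panel radiates from both faces
WASTE_HEAT_MW = 100
RADIATOR_KG_PER_M2 = 7.5     # mid-range of the 5-10 kg/m² figure above

rejection_w_per_m2 = SIDES * EMISSIVITY * SIGMA * T_RADIATOR_K**4
area_m2 = WASTE_HEAT_MW * 1e6 / rejection_w_per_m2
side_m = area_m2 ** 0.5
mass_t = area_m2 * RADIATOR_KG_PER_M2 / 1000

print(f"rejection: ~{rejection_w_per_m2:.0f} W per m² of panel (both faces)")
print(f"area: ~{area_m2:,.0f} m² (a ~{side_m:.0f} m square), mass ~{mass_t:,.0f} t")
# ~93,000 m², a ~300 m square, ~700 t of radiator -- the same order as the
# ~100,000 m² and 500,000+ kg figures in the paragraph above.
```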

The bandwidth constraint is equally fatal for the highest-value workload. Large-scale AI training requires hundreds of terabits per second of aggregate inter-node bandwidth. Current satellite links top out at 200 Gbps (Starlink) to 6 Tbps (Blue Origin TeraWave). The gap is orders of magnitude.

What does work is on-orbit processing of satellite-generated data (kilowatt-scale, data already in orbit) and distributed LEO inference (independent nodes, acceptable latency). Terrestrial alternatives -- arctic data centers with 70%+ cooling cost reduction, nuclear-powered facilities -- beat orbital compute on every metric for the next decade. Google projects cost-competitiveness around 2035 contingent on $200/kg launch costs.

## Evidence
- Stefan-Boltzmann law: ~600 W/m² radiative capacity at 320 K
- 100 MW facility requires ~100,000 m² radiators weighing 500,000+ kg
- Solar input (1,366 W/m²) further reduces net radiative capacity
- Google Project Suncatcher feasibility analysis (2035 projection)

## Challenges
Novel cooling technologies (droplet radiators, phase-change systems) could improve radiative efficiency, but none have been demonstrated at scale in space environments.

---

Relevant Notes:
- [[orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players]] — this note provides the detailed physics showing why the convergence thesis fails at datacenter scale
- [[on-orbit processing of satellite data is the proven near-term use case for space compute because it avoids bandwidth and thermal bottlenecks simultaneously]] — the viable near-term use case
- [[distributed LEO inference networks could serve global AI requests at 4-20ms latency competitive with centralized terrestrial data centers for latency-tolerant workloads]] — the viable long-term use case

Topics:
- [[space exploration and development]]