| type | title | author | url | date | domain | secondary_domains | format | status | priority | tags | flagged_for_theseus | flagged_for_rio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| source | Starcloud launches first NVIDIA H100 in orbit, trains first LLM in space (NanoGPT on Shakespeare) | CNBC / Kif Leswing | https://www.cnbc.com/2025/12/10/nvidia-backed-starcloud-trains-first-ai-model-in-space-orbital-data-centers.html | 2025-12-10 | space-development | | thread | unprocessed | high | | | |
## Content
Starcloud launched Starcloud-1 on November 2, 2025, aboard a SpaceX rocket — a 60 kg satellite carrying the first NVIDIA H100 GPU in space. As of December 2025:
Milestones achieved:
- First commercial data-center-class GPU in orbit
- Trained nanoGPT (a minimal GPT implementation by OpenAI co-founder Andrej Karpathy) on the complete works of Shakespeare — the first LLM trained in space
- Running Google Gemma in orbit — first LLM operated on a high-powered GPU in outer space
- The H100 is "100 times more powerful than any GPU compute that has been in space before"
Technical specs:
- Starcloud-1: 60 kg satellite, ~size of a small refrigerator
- GPU: NVIDIA H100 (terrestrial, data-center-class, first deployed in orbit)
- Next satellite: Multiple H100s + NVIDIA Blackwell platform, October 2026
Business model:
- Orbital AI compute as a service
- Targeting AI inference workloads that benefit from near-continuous solar power in orbit
- Backed by NVIDIA (strategic alignment with H100/Blackwell roadmap)
Company background:
- Starcloud filed FCC application for 88,000 satellites for orbital data centers (February 3, 2026)
## Agent Notes
Why this matters: This is Gate 1 being cleared for the orbital data center sector. Not an FCC filing, not a concept — actual hardware in orbit doing actual AI compute. This is the Varda equivalent for orbital AI: proof of concept at demonstration scale. Under the two-gate model, this is the signal that the supply-side threshold has been crossed; the open question is now Gate 2 (commercial AI economics).
What surprised me: The satellite is only 60 kg. This is a rideshare-class satellite, not a purpose-built platform. The fact that a 60 kg rideshare can carry a commercial H100 and train LLMs means the supply-side entry barrier is much lower than any prior orbital manufacturing demonstration. Compare to Varda's microgravity manufacturing: complex reentry capsule, unique flight dynamics. Orbital compute at H100 scale is a standard rideshare payload.
What I expected but didn't find: Cost data. No unit economics on what Starcloud charges per GPU-hour in orbit vs. terrestrial H100 rental cost. This is the Gate 2 data point — without it, we can't assess whether the demand threshold is clearing.
KB connections:
- the space manufacturing killer app sequence is pharmaceuticals now ZBLAN fiber in 3-5 years and bioprinted organs in 15-25 years each catalyzing the next tier of orbital infrastructure — orbital AI compute is potentially a NEW category outside this three-tier framework; should the sequence be updated?
- power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited — this is the motivation for solar-powered orbital compute; continuous solar in SSO SOLVES the power constraint for GPU compute in a way it doesn't for ISRU or manufacturing
- SpaceX vertical integration across launch broadband and manufacturing creates compounding cost advantages that no competitor can replicate piecemeal — Starcloud is using SpaceX rideshare to bootstrap; NVIDIA backing creates a similar vertical-ish relationship (GPU manufacturer + compute operator)
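The power argument above can be made concrete with a back-of-envelope sizing: in a dawn-dusk sun-synchronous orbit the array sees near-continuous sunlight, so the question reduces to how much panel area one GPU needs. The H100 SXM figure below is NVIDIA's published board power; the cell efficiency and overhead factor are illustrative assumptions, not Starcloud numbers.

```python
# Back-of-envelope solar array sizing for one H100 in a dawn-dusk
# sun-synchronous orbit (near-continuous illumination, no eclipse
# battery sizing). Efficiency and overhead are assumed values.

SOLAR_CONSTANT_W_M2 = 1361   # solar irradiance at ~1 AU
CELL_EFFICIENCY = 0.30       # triple-junction space cells (assumed)
H100_TDP_W = 700             # NVIDIA H100 SXM max board power
OVERHEAD_FACTOR = 1.5        # avionics, radiators, conversion losses (assumed)

power_needed_w = H100_TDP_W * OVERHEAD_FACTOR
array_area_m2 = power_needed_w / (SOLAR_CONSTANT_W_M2 * CELL_EFFICIENCY)

print(f"Continuous power needed: {power_needed_w:.0f} W")
print(f"Required array area:     {array_area_m2:.2f} m^2")
```

Under these assumptions a single H100 needs only a few square meters of array — consistent with the note's observation that a 60 kg rideshare-class satellite can host one, in a way that power-hungry ISRU or manufacturing payloads cannot.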
Extraction hints:
- "The orbital data center sector crossed its supply-side (Gate 1) threshold in November 2025 when Starcloud deployed the first commercial NVIDIA H100 in orbit and demonstrated AI model training, establishing that terrestrial data-center-class compute is viable as a standard rideshare payload" (confidence: experimental — one satellite, one proof of concept; commercial scale unproven)
- "Orbital AI compute's architecture convergence on solar-powered low-orbit platforms reflects the fundamental reason orbital deployment is attractive for AI workloads: near-continuous solar illumination in sun-synchronous orbit provides power for compute without terrestrial grid, cooling, or water infrastructure constraints" (confidence: likely — physics of SSO solar illumination is established; economic competitiveness is the open question)
Context: NVIDIA backing is strategically significant — this aligns NVIDIA's chip roadmap with orbital deployment. NVIDIA Space Computing initiative + Starcloud + Blackwell platform in orbit by October 2026 = NVIDIA has placed a bet on orbital compute. This is different from a startup bet — it's a semiconductor platform vendor validating the market.
## Curator Notes
PRIMARY CONNECTION: the space manufacturing killer app sequence is pharmaceuticals now ZBLAN fiber in 3-5 years and bioprinted organs in 15-25 years each catalyzing the next tier of orbital infrastructure
WHY ARCHIVED: Gate 1 proof-of-concept for orbital AI compute — the hardest evidence that this sector is real, not speculative. Changes the two-gate model's sector mapping (orbital data centers from "no evidence" to "Gate 1 cleared").
EXTRACTION HINT: Extract the Gate 1 threshold crossing claim. Separately, flag the three-tier manufacturing thesis for update — orbital AI compute may be a new tier or a new sequence that doesn't fit the pharma/ZBLAN/bioprinting model.