Pre-Launch · Wyoming · 2026

Emerging · Demand catalyst for the orbital book

AI-grade compute is moving to orbit. The clearing layer is what keeps it from collapsing into two vertical stacks.

On April 13, 2026, TechCrunch reported that the largest orbital compute cluster in history was open for business — Starcloud-1, built around an NVIDIA H100 launched in November 2025. Ten weeks earlier, on February 4, 2026, the FCC's Space Bureau accepted SpaceX's filing for an orbital data-center system of up to one million satellites (DA 26-113); Starcloud had filed for an 88,000-satellite constellation the day before. Anthropic signalled multi-gigawatt orbital interest, and Starcloud announced a compute partnership with xAI. Vertical integration is rational. Neutral clearing is also rational. They serve different counterparties.

Starcloud commercial · Apr 2026

FCC accepts SpaceX ODC · Feb 4, 2026

2030 addressable spend · $15B

Settlement · T+0 · atomic · USDC

What just happened

Eleven months. Eight events. The category is forming in real time.

Orbital compute went from a thesis-stage idea in late 2025 to multiple operators running production workloads in February 2026 and a commercial cluster open for business by April. The cadence is the signal.
  1. November 2025 · Complete

    Starcloud-1 launches with the first NVIDIA H100 in orbit

    60-kg satellite carrying ~100× all GPU compute previously operated in space. Establishes the substrate.
  2. December 2025 · Complete

    First in-orbit LLM training and inference

    Starcloud runs Google DeepMind's Gemma on a high-power GPU on-orbit and performs in-orbit training of nanoGPT. The category is no longer hypothetical.
  3. January 30, 2026 · Complete

    SpaceX files orbital-DC system with the FCC

    Up to one million solar-powered satellites between 500 and 2,000 km altitude — a hundred-fold increase over the current LEO population.
  4. February 3, 2026 · Complete

    Starcloud files for 88,000-satellite orbital DC constellation

    FCC application for compute capacity at scale, filed the day before SpaceX's filing was accepted.
  5. February 4, 2026 · Complete

    FCC Space Bureau accepts SpaceX ODC filing — DA 26-113

    Public Notice issued. Comment period opens. Secure World Foundation and 1,000+ public commenters raise debris, Kessler, and light-pollution concerns.
  6. March 30, 2026 · Complete

    Starcloud Series A — $170M at $1.1B valuation

    Capital markets confirm the category. NVIDIA backing; Crusoe contracted to run the cloud platform on Starcloud-2.
  7. April 13, 2026 · In progress

    Largest orbital compute cluster open for business

    TechCrunch report. Starcloud begins commercial sales of H100-equivalent compute time on-orbit. LOIs in hand.
  8. October 2026 · In progress

    Starcloud-2 launches with NVIDIA Vera Rubin Space Module

    Claimed 25× H100 inference performance. Crusoe cloud platform operational. Starcloud says the satellite generates more cash than it costs to build and launch.

The vertical-integration wedge

Two stacks are forming. Everyone outside them needs a neutral CCP.

SpaceX and xAI merged into a single corporate entity. Starcloud holds a multi-gigawatt expression of interest from Anthropic and a compute partnership with xAI. The shape of the orbital compute market that follows is two vertical stacks: one Musk-aligned (Starlink V3 backhaul → SpaceX/xAI compute → xAI demand), one NVIDIA-anchored (Starcloud GPU clusters → Crusoe cloud → Anthropic and frontier-lab demand). Both stacks are rational. Both will route compute intra-firm where they can.

Vertical integration handles the demand the integrated firms produce. It does not handle the demand the rest of the industry produces. Hyperscalers procuring orbital compute without committing to either stack, defense and intelligence buyers requiring jurisdictional neutrality, sovereign AI programmes that need attestation rather than vendor trust, ground-station operators settling downlink against compute output, insurers underwriting compute-delivery SLAs — none of them want to live inside one of the two walled gardens.

Wavestar is what they trade on. The clearing layer is the only structure that prevents the orbital compute market from collapsing into two zero-sum vertical stacks. Standardised Compute-Hour contracts, observer-signed delivery, atomic cash-and-resource settlement, and CCP novation — applied to compute the same way they apply to spectrum and downlink.

Vertical integration is rational. Neutral clearing is also rational. They serve different counterparties.
Wavestar Research · Compute thesis · v0.1

Demand catalyst

Compute pulls every other Wavestar market forward.

A gigawatt-class orbital compute footprint is a multiplier on the existing orbital book. Every Compute-Hour delivered creates a downlink-minute of egress, an ISL-Gbps-hour of training-fabric backhaul, a Ku/Ka spectrum-hour, a hosted-payload slot for the compute payload, and a recurring propellant/ISAM demand for the constellation that hosts it.

Downlink egress uplift

+30%

Output-data egress drives downlink bookings beyond observation/comms baseline. Output manifests are the observer signal.

ISL fabric demand · 2030

1 Tbps · per-link

Starlink V3 carrier-grade ISL becomes the AI training fabric. Cross-constellation routing becomes liquidity, not a feature.

Spectrum allocations

Ku · Ka · V

ODC operations consume newly authorised allocations at the same rate the compute footprint grows. The EPFD-modernised regime accommodates the growth.

ISAM servicing market

88k–1M sats

If even a fraction of filed ODC capacity ships, recurring propellant top-up and on-orbit servicing scale by an order of magnitude.
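The pull-through relationships above can be sketched as a simple linear mapping from the compute book into the other books. The per-Compute-Hour coefficients below are illustrative placeholders only — the source quantifies only the +30% downlink uplift, none of the individual rates:

```python
# Per-Compute-Hour pull-through coefficients. All values are hypothetical
# placeholders for illustration; only the downlink uplift is sourced (+30%).
LINKED_DEMAND_PER_CH = {
    "downlink_minutes": 2.0,          # output-data egress
    "isl_gbps_hours": 0.5,            # training-fabric backhaul
    "spectrum_hours": 1.0,            # Ku/Ka allocation consumption
    "hosted_payload_slot_hours": 1.0, # slot for the compute payload
    "propellant_kg": 0.01,            # recurring ISAM demand
}

def linked_demand(compute_hours: float) -> dict:
    """Demand a block of Compute-Hours pulls into the other books."""
    return {book: rate * compute_hours
            for book, rate in LINKED_DEMAND_PER_CH.items()}
```

Under these placeholder rates, a 100-Compute-Hour block would book 200 downlink-minutes and 50 ISL-Gbps-hours alongside the compute leg.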

What you trade

1 GPU-equivalent × 1 hour delivered to a named orbit shell, named latency class.

The atom is a Compute-Hour. The context is a named compute-provider DID, a named orbit shell, a named latency class (training-tolerant or inference-bound), and a named GPU-equivalent class (H100, GB200, Vera Rubin Space). v1 settles short-block reservations against provider-signed result manifests; long-dated forwards activate after the working group lands the contract spec and the SEC ATS-N filing covers compute alongside spectrum.
  • 01

    Unit of trade

    1 GPU-equivalent × 1 hour at a named compute-provider DID, named orbit shell, named latency class. GPU-equivalent is normalised to a working-group reference (initial draft: H100-equivalent SXM5).
  • 02

    v1 scope — short-block reservation

    Reservations of 1 to 168 hours on inventory the provider already operates. Provider-signed result manifest is the delivery primitive. No new authorisation required.
  • 03

    v2 scope — long-dated forward

    Forward contracts on capacity that ships in a future quarter. Activates after the contract spec lands and SEC ATS-N covers compute alongside spectrum.
  • 04

    Latency classes

    Training-tolerant (asynchronous, hours-to-days latency acceptable) settles distinctly from inference-bound (round-trip latency to a named ground region). Different liquidity pools, different reference curves.
  • 05

    Delivery attestation

    Provider DID signs the start-of-job attestation; observer-signed downlink of the result manifest closes the leg. Multi-observer quorum required for inference-bound contracts where round-trip timing is part of the spec.
  • 06

    Export-control posture

    Every match passes a counterparty screen (OFAC, BIS Entity List, equivalents) and a jurisdiction screen for the underlying GPU class. H100 in-orbit jurisdiction is unsettled — Wavestar enforces the most restrictive interpretation per match.
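The contract atom described above can be sketched as a typed record. The DID format, field names, and example values here are hypothetical; only the ticker shape (CMPT · gpu-class · orbit-shell · latency · hour) and the two latency classes come from the draft spec:

```python
from dataclasses import dataclass
from enum import Enum

class LatencyClass(Enum):
    TRAINING_TOLERANT = "training"   # asynchronous, hours-to-days acceptable
    INFERENCE_BOUND = "inference"    # round-trip latency to a named ground region

@dataclass(frozen=True)
class ComputeHour:
    """One Compute-Hour: 1 GPU-equivalent x 1 hour at a named provider DID."""
    provider_did: str    # hypothetical DID format, e.g. "did:wavestar:prov-001"
    gpu_class: str       # normalised to the working-group reference, e.g. "H100-SXM5"
    orbit_shell: str     # named orbit shell, e.g. "LEO-550"
    latency: LatencyClass
    hour: str            # delivery hour, e.g. "2026-10-01T00Z"

    def ticker(self) -> str:
        # Mirrors the draft ticker: CMPT · [gpu-class] · [orbit-shell] · [latency] · [hour]
        return " · ".join(["CMPT", self.gpu_class, self.orbit_shell,
                           self.latency.value, self.hour])

ch = ComputeHour("did:wavestar:prov-001", "H100-SXM5", "LEO-550",
                 LatencyClass.TRAINING_TOLERANT, "2026-10-01T00Z")
```

Freezing the dataclass keeps the atom hashable, so a cleared position can key on the contract itself.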

Contract specification

Compute-Hour (CMPT) · Rulebook draft v0.1.

CMPT · Short-block reservation contract

Ticker
CMPT · [gpu-class] · [orbit-shell] · [latency] · [hour]
Unit of trade
1 GPU-equivalent × 1 hour at a named provider DID
Reservation tenor
1 hour minimum · 168 hours maximum (v1)

Long-dated forwards (> 168 hours) activate in v2 after working-group spec lands and SEC ATS-N covers compute.

Tick size
$0.10 per Compute-Hour

Working-group draft v0.1. Calibrated against $3.00 (GCP) – $6.98 (Azure) terrestrial H100 hourly references.

Minimum block
8 Compute-Hours

Sized to the smallest training or inference job that pencils out once delivery cost is included.

Delivery attestation
Provider DID signature + observer-signed result manifest downlink

Inference-bound contracts require multi-observer quorum on round-trip timing. Training-tolerant contracts settle on result-hash reconciliation.

Settlement
T+0 · atomic · USDC primary · Fedwire / SWIFT optional
Initial margin
20%

Reflects compute-provider performance risk plus jurisdictional uncertainty during reservation window.

Variation margin band
5%

Daily mark on the cleared curve per GPU class and orbit shell.

Prohibited pairs
OFAC · BIS Entity List · GPU-class jurisdiction screen

Every counterparty pair is screened on match. Cross-jurisdiction matches subject to the most restrictive applicable export-control regime.

Position limit
5% of cleared open interest per buyer; 25% per provider
Spec status
Working group v0.1 · target ratification Q1 2027

Working group convening Starcloud, NVIDIA, Crusoe, hyperscalers, frontier labs. Public draft will be posted to the rulebook.
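The spec parameters above compose into a margin calculation along these lines — a sketch of draft v0.1 as stated, not authoritative rulebook logic:

```python
TICK = 0.10           # $ per Compute-Hour (draft v0.1)
MIN_BLOCK = 8         # Compute-Hours
MAX_TENOR_H = 168     # hours, v1 short-block limit
INITIAL_MARGIN = 0.20
VARIATION_BAND = 0.05

def round_to_tick(price: float) -> float:
    """Snap a quoted price to the $0.10 tick."""
    return round(round(price / TICK) * TICK, 2)

def reservation_margin(gpus: int, hours: int, price_per_ch: float) -> dict:
    """Notional, initial margin, and variation band for a v1 reservation."""
    if not 1 <= hours <= MAX_TENOR_H:
        raise ValueError("v1 tenor is 1 to 168 hours")
    compute_hours = gpus * hours
    if compute_hours < MIN_BLOCK:
        raise ValueError("minimum block is 8 Compute-Hours")
    notional = compute_hours * round_to_tick(price_per_ch)
    return {
        "notional": round(notional, 2),
        "initial_margin": round(notional * INITIAL_MARGIN, 2),
        "variation_band": round(notional * VARIATION_BAND, 2),
    }

# 4 GPU-equivalents for 6 hours at $5.00/CH:
# → {'notional': 120.0, 'initial_margin': 24.0, 'variation_band': 6.0}
```

The $5.00 reference price sits inside the $3.00–$6.98 terrestrial H100 band the tick is calibrated against.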

Counterparties and observers

Providers, buyers, observers, regulators.

  • PRV

    Compute providers

    Starcloud (NVIDIA H100 / Vera Rubin Space Module), SpaceX/xAI (Starlink V3 compute pilot), Axiom Space (orbital DC nodes from Jan 2026), Lonestar Data Holdings (lunar edge), and the wider NVIDIA partner network — Aetherflux, Kepler, Planet Labs, Sophia Space.
  • BUY

    Demand-side buyers

    Frontier AI labs (Anthropic published interest; xAI internal), hyperscalers (AWS / Azure / GCP procurement outside their own footprints), defense / intelligence buyers needing jurisdictional neutrality, sovereign AI programmes, brokers aggregating reservations.
  • CLD

    Cloud platform layer

    Crusoe runs the cloud platform on Starcloud-2 — the cloud surface customers actually deploy against. Wavestar clears the Compute-Hour underneath the cloud invoice; the cloud platform is the operational delivery layer.
  • GPU

    Silicon class anchors

    NVIDIA Space-1 Vera Rubin Space Module is the reference inference fabric. H100 is the training-class baseline. Working group sets the GPU-equivalent normalisation.
  • MON

    Result-manifest observers

    Ground-station providers operating downlink for the compute payload sign result-manifest delivery. Independent timing-attestation observers sign round-trip latency for inference-bound contracts. BLS-aggregated quorum forms the delivery proof.
  • REG

    Regulators

    FCC Space Bureau (DA 26-113 + Starcloud filing in process), BIS (Oct 2024 space export-control revisions, License Exception CSA), ITAR DDTC (non-cooperative grappling controls), Wassenaar-40 administrations.
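The observer quorum the MON role describes can be sketched as a threshold count over distinct signers. This is a stand-in for the BLS signature aggregation the spec actually calls for; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    observer_id: str
    manifest_hash: str   # hash of the downlinked result manifest

def delivery_proven(attestations: list, expected_hash: str, quorum: int) -> bool:
    """Quorum check: enough distinct observers attest to the same
    result-manifest hash. Duplicate signatures from one observer count once."""
    signers = {a.observer_id for a in attestations
               if a.manifest_hash == expected_hash}
    return len(signers) >= quorum
```

Inference-bound contracts would run this with a multi-observer quorum; training-tolerant contracts can settle on a single manifest-hash reconciliation.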

Regulatory context

The category is novel. The rulebook is not.

FCC · DA 26-113 · Feb 4 2026
SpaceX ODC filing accepted for filing

1M-satellite orbital data-center system. Public Notice issued. Comment period opened. Wavestar tracks the docket; Compute-Hour delivery on SpaceX-side capacity is gated on operational authorisation.

FCC · Starcloud filing · Feb 3 2026
88,000-satellite ODC constellation

Filed by Starcloud. Wavestar's compute book includes Starcloud commercial capacity; new constellation operational dates feed the cleared curves.

Secure World Foundation · 2026
1,000+ public comments · Kessler / scale concerns

SWF and the majority of public commenters opposed the 1M-sat scale citing debris, Kessler syndrome, and light pollution. Wavestar's attestation network mirrors the FCC docket and signs sustainability-relevant facts where requested.

BIS · Oct 23 2024
Space-related export-control revisions live

License Exception CSA (Commercial Space Activities) and Wassenaar-40 carve-out reduce friction for cooperative trade. H100 / Blackwell in-orbit jurisdiction remains unsettled — Wavestar applies the most restrictive applicable regime per match.

ITAR · §121.16
Non-cooperative grappling controls apply downstream

Compute payloads are not directly defense articles, but their service vehicles (refueling, deorbit, replacement) may be. Wavestar segregates compute settlement from any servicing leg that triggers ITAR.

EPFD · April 30 2026
Spectrum modernisation reinforces compute backhaul

FCC's EPFD private-bargaining regime applies to the same Ku/Ka allocations that ODC backhaul will consume. The compute book and the spectrum book share regulatory substrate.

Sustainability claim parity
Starcloud claims 10× lower carbon intensity · Saarland study claims 10× higher

Conflicting carbon-intensity claims in public literature. Wavestar's posture: signed environmental telemetry is part of the result manifest where the buyer requires it; we do not adjudicate the headline claim.

SEC ATS-N
v2 long-dated compute forwards activate post-clearance

Filing in flight covers spectrum first; compute forwards layer in once the contract spec is ratified by the working group.
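The most-restrictive-regime rule from the export-control posture can be sketched as follows. The regime names, their ordering, and the denial set are illustrative placeholders, not legal determinations:

```python
# Hypothetical restrictiveness ranking and denial set -- placeholders only.
RESTRICTIVENESS = {"UNRESTRICTED": 0, "EAR": 1, "ITAR": 2}
DENIED_PARTIES = {"did:wavestar:entity-x"}   # stand-in for OFAC / BIS lists

def most_restrictive(regimes: list) -> str:
    """Pick the most restrictive applicable regime for a match."""
    return max(regimes, key=RESTRICTIVENESS.__getitem__)

def screen_match(buyer_did: str, provider_did: str,
                 buyer_regime: str, provider_regime: str,
                 gpu_class_regime: str) -> str:
    """Counterparty denial-list screen, then jurisdiction screen.
    The GPU-class regime feeds in because in-orbit H100 jurisdiction
    is unsettled and the platform takes the most restrictive reading."""
    if {buyer_did, provider_did} & DENIED_PARTIES:
        raise ValueError("counterparty fails denial-list screen")
    return most_restrictive([buyer_regime, provider_regime, gpu_class_regime])
```

The screen runs per match, so the same buyer can clear under different regimes against different providers.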

Vertical-integrated orbital compute is the single best argument for neutral clearing the orbital economy will ever produce. We would not have written the page this way if SpaceX had not merged with xAI.
Wavestar Research · Compute thesis · v0.1

Compute working group · 2026 H2

Compute providers. Hyperscalers. Frontier labs. Ground stations.

Working-group seats for the first cohort drafting the Compute-Hour rulebook. Design-partner reservations clear zero-fee through the first $5M of cleared notional. NVIDIA-class GPU normalisation, observer methodology, and export-control jurisdiction settled in public.