
Regulatory · Operational resilience

Four nines, five. Tested quarterly.

Clearing houses are critical market infrastructure. We voluntarily align with Regulation SCI from day one — covering technology systems, cybersecurity, incident response, business continuity, and disaster recovery. Multi-region active-active topology, quarterly failover drills, and a 99.995% availability target that ratchets up, never down.
Uptime target
99.995%
Recovery time (RTO)
≤ 2 hours — clearing critical
Recovery point (RPO)
Near-zero (synchronous)
DR exercises
Quarterly
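As a back-of-the-envelope check, the headline availability target translates into a small "error budget" of permitted downtime. A minimal sketch of that arithmetic, using the stated 99.995% figure:

```python
# Error-budget arithmetic behind the headline target: an availability
# percentage implies a fixed allowance of downtime per period.

def downtime_budget_minutes(target: float, period_minutes: float) -> float:
    """Permitted downtime (minutes) for an availability target over a period."""
    return (1.0 - target) * period_minutes

MINUTES_PER_YEAR = 365 * 24 * 60    # 525,600
MINUTES_PER_MONTH = 30 * 24 * 60    # 43,200 (30-day month)

yearly = downtime_budget_minutes(0.99995, MINUTES_PER_YEAR)    # ~26.3 min/year
monthly = downtime_budget_minutes(0.99995, MINUTES_PER_MONTH)  # ~2.2 min/month
```

At 99.995%, roughly twenty-six minutes of downtime a year consumes the entire budget, which is why deviations are treated as incidents.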

Availability targets

Measured, published, defended.

Each system class has a tiered availability target. We publish performance monthly; deviations are incidents and are reported in the annual transparency report with full root-cause analysis.
Clearing engine (ORCH)
99.995%

Novation and matching availability, business-hours weighted.

Registry / transparency log
99.995%

Read and write availability; append-only guarantee.

Attestation quorum
99.99%

Observer availability and quorum achievement.

Terminal (operator UI)
99.95%

Authenticated session availability.

Market (discovery UI)
99.9%

Read-side availability for listings and participant search.

Public API
99.95%

REST and gRPC; authenticated and anonymous read endpoints.

Anchoring service
99.99%

Hourly Ethereum + Bitcoin anchor service, with a documented fallback cadence.

Documentation site
99.9%

This surface.
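The registry's append-only guarantee can be illustrated with a minimal hash-chained log: each record commits to the hash of its predecessor, so any retroactive edit breaks the chain. This is a hypothetical sketch, not the production registry.

```python
# Illustrative append-only log (assumed shapes, not the real registry):
# each entry stores the hash of the previous entry, so verification
# detects any tampering with history.
import hashlib

GENESIS = "0" * 64

def _entry_hash(prev_hash: str, payload: bytes) -> str:
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

class AppendOnlyLog:
    def __init__(self):
        self.entries = []   # list of (prev_hash, payload, entry_hash)
        self.head = GENESIS

    def append(self, payload: bytes) -> str:
        h = _entry_hash(self.head, payload)
        self.entries.append((self.head, payload, h))
        self.head = h
        return h

    def verify(self) -> bool:
        """Recompute the chain from genesis; any edit breaks it."""
        prev = GENESIS
        for stored_prev, payload, h in self.entries:
            if stored_prev != prev or _entry_hash(prev, payload) != h:
                return False
            prev = h
        return prev == self.head
```

The same chained-hash structure is what makes periodic external anchoring meaningful: anchoring only the head hash commits to the entire history.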

Resilience topology

Multi-region active-active. Deterministic failover.

  • 01

    Multi-region active-active

    Clearing engine, registry, and attestation quorum deployed active-active across at least three geographically separated regions. Synchronous replication on the write path; asynchronous for read-only archives.
  • 02

    Deterministic leader election

    HotStuff-derived consensus for leader election ensures failover is deterministic and latency-bounded. Leader transitions are signed, logged, and auditable after the fact.
  • 03

    Commodity-cloud independence

    No single-cloud dependency. Workloads distributed across independently operated cloud providers. Control plane and data plane separated; cross-cloud telemetry aggregated at an independent observability layer.
  • 04

    Cold-start capability

    Full cold-start drill performed at least annually. All persistent state reconstructible from the transparency log plus periodic signed snapshots. No single-person or single-system dependency for system revival.
  • 05

    Third-party dependency inventory

    Every third-party dependency categorised as Critical, Important or Supporting. Critical dependencies have documented replacement pathways with tested cut-over procedures.
  • 06

    Change-management discipline

    Changes to production graduated through canary and progressive-rollout phases. Emergency-change procedures require named approvers and are reviewed weekly. Full change log retained for regulatory inspection.
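The deterministic leader election above can be sketched as a pure function of a monotonically increasing view number, in the spirit of a HotStuff-style pacemaker. Replica names and details here are illustrative assumptions, not the production consensus.

```python
# Hedged sketch of view-based leader rotation: because the leader for
# a view is a pure function of the view number, every replica that
# advances to the same view converges on the same leader without any
# extra coordination, so failover latency is bounded by the view timeout.

REPLICAS = ["region-a", "region-b", "region-c"]  # hypothetical regions

def leader_for_view(view: int, replicas: list[str]) -> str:
    """Deterministic round-robin: the view number alone picks the leader."""
    return replicas[view % len(replicas)]
```

On a leader failure, each replica times out, increments its view, and independently derives the same successor; signing and logging those view transitions is what makes failover auditable after the fact.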

Testing cadence

A drill is a rehearsal. Rehearsal builds muscle memory.

  1. Monthly · Complete

    Region-failover rehearsal

    Scheduled region-failover of a non-critical workload during a low-activity window. Measures failover latency and confirms alarm flow. Summary published internally.
  2. Quarterly · Complete

    Full DR exercise

    Failover of clearing-critical workloads during a pre-announced regulator-observed window. Participating members perform conformance checks against the failover endpoints.
  3. Semi-annually · Complete

    Cybersecurity tabletop

    Scenario-based tabletop exercise with Legal, Risk, Compliance, Comms and Executive participation. Scenarios rotate through ransomware, supply-chain compromise, insider abuse, and third-party impact.
  4. Annually · Complete

    Cold-start drill

    Complete reconstruction of the registry and clearing engine from transparency-log + snapshot state, in an isolated environment, against a recovery-time objective.
  5. Annually · Complete

    Third-party penetration test

    External penetration testing by an independent firm. Findings triaged and tracked to remediation. Summary published in the transparency report.
  6. Annually · Complete

    Business-impact-analysis refresh

    BIA refreshed against updated volumes, new contract classes, and dependency changes. Recovery priorities re-validated with the Risk Committee.
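The cold-start drill's reconstruction step (latest signed snapshot plus replay of subsequent transparency-log entries) can be sketched as follows; the entry shapes and function names are assumptions for illustration, not the real recovery tooling.

```python
# Minimal sketch of snapshot-plus-replay recovery: state is the
# snapshot with every post-snapshot log entry applied in order.

def rebuild_state(snapshot: dict, snapshot_seq: int, log: list) -> dict:
    """log is an ordered list of (seq, key, value) entries."""
    state = dict(snapshot)
    for seq, key, value in log:
        if seq <= snapshot_seq:
            continue            # already folded into the snapshot
        state[key] = value      # apply each post-snapshot entry
    return state
```

Because every entry is in the transparency log, the drill can be run in a fully isolated environment and its output compared byte-for-byte against production state.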

Incident response

When things break, incidents are triaged by severity, communicated, and closed with a published post-incident review.

Public documents

Resilience artefacts, signed and dated.

Operational resilience register

Availability performance

Minute-by-minute status for each system class; monthly performance summary.

Incident register

Every Sev-1 and Sev-2 incident with summary and link to post-incident review.

Penetration-test summary

Latest independent pen-test executive summary; detailed findings under NDA.

SOC 2 Type II report

Latest SOC 2 Type II attestation available under NDA in the Trust Center.

Cold-start drill attestation

Independently witnessed cold-start drill completion attestation.

Companion disclosures

Resilience works in the baseline. Recovery handles the tail.

Operational resilience keeps the system running day to day. Recovery and resolution are the framework for the day it doesn't. Both are published and tested.