OXIDE CLOUD COMPUTER  •  RACK-SCALE DESIGN  •  CO-DESIGNED HARDWARE + SOFTWARE  •  ZERO LICENSING FEES  •  DEPLOY IN UNDER 2 HOURS  •  AMD MILAN EPYC 64-CORE  •  UP TO 1PB RAW STORAGE  •  BLIND-MATED SLEDS  •  100GbE DUAL-PATH  •  GARTNER HCI 2024
┌───────────┬──────────────────────────────────────────────────┐
│ 0xIDE-R3  │ OXIDE CLOUD COMPUTER // REV 3.1                  │
├───────────┼──────────────────────────────────────────────────┤
│ U01-U02   │ [SLD-00] ████████████████████░░░░ AMD MILAN OK   │
│ U03-U04   │ [SLD-01] ██████████████████████░░ AMD MILAN OK   │
│ U05-U06   │ [SLD-02] ██████████████░░░░░░░░░░ AMD MILAN &    │
│ U07-U08   │ [SLD-03] ██████████████████████░░ AMD MILAN OK   │
│ //////    │ ░░░░░░░░░░░░░░░░░░░░░░░░ [CUBBIES 05-16 AVAIL]   │
│ SWP-A     │ [NET-A] ●●●●●●●●●●●●●●●● 100GbE PRIMARY          │
│ SWP-B     │ [NET-B] ●●●●●●●●●●●●●●●● 100GbE STANDBY          │
│ PSU-SHELF │ [PWR] ████████████ 18kW DC-BUSBAR OK             │
└───────────┴──────────────────────────────────────────────────┘
STATUS: NOMINAL // 24°C // UPTIME: 847d 14h 22m

Servers as they should be

THE CLOUD COMPUTER

Rack-scale co-design. One-time purchase. No licensing fees. Hyperscaler-class infrastructure you own — deployable in under two hours.

  • 32 sleds per rack
  • 64 cores per sled
  • 1PB max raw storage
  • 100GbE network per sled
  • < 2h deploy time
  • 33% smaller footprint than a traditional rack

// RACK_EXPLORER

// Component Map — live status, specifications, and operational telemetry for every component in the rack

// WHAT_IS_OXIDE

[HW]
Co-Designed Hardware
Every component — PCB layout, switch ASIC integration, power distribution — is custom-built by Oxide. Blind-mated sleds snap in without a single cable. The rack ships fully assembled and tested.
AMD Milan EPYC // NVMe U.2 // DC Busbar
[SW]
Integrated Software Stack
The entire stack — firmware to cloud control plane — ships with the rack. Virtual compute, elastic block storage, VPC networking, and a full REST API. No VMware. No per-core licensing fees. Ever.
Console // API // CLI // SDK
[NET]
100GbE High-Availability Networking
Dual redundant switches running Delay Driven Multipath. OPTE handles firewalling, routing, NAT, and VPC encapsulation at line rate. Failover in under 100ms — transparent to workloads.
DDM // Geneve Encap // OPTE Engine
[STR]
Resilient Storage
OpenZFS checksums and scrubs all data continuously. Automated rebalancing preserves redundancy when drives or sleds are removed. Up to 10x U.2 NVMe per sled — up to 1PB raw per rack.
OpenZFS // EBS-equivalent // Auto-Rebalance
[OPS]
Front-Serviceable Design
Full serviceability from the cold aisle. Hot-plug cubby bays mean every FRU is reachable without downtime. Power shelf OOB management via RJ-45 on the front panel. No rear access required.
Hot-Plug // Cold Aisle // Zero-Downtime FRU
[API]
Cloud-Parity Developer UX
Provision VMs, block storage, and VPCs through a REST API, CLI, or web console — identical to AWS or GCP, but on iron you own. Multi-tenant, self-service. No ticket, no queue.
REST // Terraform // Python SDK // Rust SDK
OXIDE CONSOLE — OPERATOR SESSION
oxide@rack-01:~$ oxide instance list --project ai-workloads
NAME              STATE    CPUS   MEMORY    DISK      SLED
────────────────────────────────────────────────────────────
gpu-trainer-01    running  128    512GiB    3.2TiB    sled-04
gpu-trainer-02    running  128    512GiB    3.2TiB    sled-05
db-primary        running  32     256GiB    6.4TiB    sled-02  ⚠ high-iops
api-server-01     running  16     128GiB    1.6TiB    sled-01
 
oxide@rack-01:~$ oxide disk snapshot create --disk db-primary-data --name snap-2026-05-12
✓ snapshot created: snap-2026-05-12 (6.4TiB, elapsed: 2.1s)
oxide@rack-01:~$
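
The session above only lists and snapshots existing resources. As a hedged sketch of provisioning from the same shell, assuming the CLI also exposes an instance create subcommand alongside the list and snapshot create commands shown here (the flag names below are illustrative assumptions, not confirmed syntax):

# Hypothetical provisioning sketch: subcommand and flag names are assumptions.
oxide instance create \
  --project ai-workloads \
  --name gpu-trainer-03 \
  --ncpus 128 \
  --memory 512GiB

The same provisioning operation is exposed through the REST API, the web console, and the Terraform and SDK integrations listed under the [API] card above.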

// WHY_OXIDE

Traditional racks force separate procurement of servers, switches, storage, and software — then weeks of wiring, testing, and licensing each layer. Hyperscalers solved this by building it all themselves. Oxide brings that same advantage to enterprises who buy rather than rent.

Capability              Oxide                      Traditional
─────────────────────────────────────────────────────────────────
Installation time       < 2 hours                  Weeks–Months
Software licensing      Included ($0)              Per-core fees
Cabling                 Zero (blind-mated)         Extensive
Space efficiency        33% smaller footprint      Baseline
Cloud-style API         Native, built-in           Add-on / Optional
Data sovereignty        Full ownership             Vendor-dependent
Max rack power          < 18kW DC bus              Variable / higher
Firmware openness       Open source (Hubris)       Proprietary blob

// Ideal Workloads

AI / ML Training
Data-intensive training where your data belongs — on hardware you control at a fixed, predictable cost. No per-GPU-hour surprises.
CI/CD Infrastructure
Dedicated build runners with security baked in. Consistent performance, no noisy neighbors, fixed monthly cost.
Data Engineering
Batch compute, staging, and orchestration on multi-tenant infrastructure you own. Data stays under your control.
Regulated Industries
Finance, healthcare, and defense workloads requiring hardware verification, air-gap capability, and data residency.

// FRU_REPLACEMENT_GUIDE

Field-replaceable unit tutorials for on-site operators. All procedures are designed for single-technician execution from the cold aisle without scheduled downtime windows.

01
Compute Sled Replacement
DIFFICULTY: LOW  |  TIME: ~15 MIN  |  TOOLS: NONE  |  DOWNTIME: NONE (LIVE MIGRATION)

Compute sleds are hot-pluggable with no tools required. The blind-mate connector self-aligns on insertion. Live migration moves all running VMs before extraction begins.

  • Open Oxide Console → Sleds → Select target sled
  • Run: oxide sled evacuate --sled-id $SLED_ID
  • Confirm zero running instances on sled
  • Press release lever on sled front panel
  • Slide sled straight out — no cables to disconnect
  • Insert replacement sled until lever clicks
  • Verify LED turns solid green within 90s
  • Run: oxide sled commission --sled-id $NEW_ID
SLED EVACUATION
$ oxide sled evacuate --sled-id sled-02
Migrating 4 instances...
✓ db-primary → sled-01
✓ api-server → sled-03
✓ cache-01 → sled-04
✓ cache-02 → sled-05
Sled clear. Safe to remove.
ESD wrist strap required. Do not touch PCB contacts. Handle sled by chassis handles only.
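
For repeatable maintenance, the evacuation steps above can be wrapped in a small script. A minimal sketch, using only the oxide sled evacuate and oxide instance list commands shown in this guide; filtering the output with grep is an assumption about the table format, and the check only covers the listed project:

#!/bin/sh
# Sketch: evacuate a sled, then wait until no instance in the project
# reports it as home. Commands come from the steps above; parsing the
# table output with grep is an assumption about its format.
SLED_ID="${1:-sled-02}"
oxide sled evacuate --sled-id "$SLED_ID"
while oxide instance list --project ai-workloads | grep -q "$SLED_ID"; do
  echo "instances still on $SLED_ID, waiting..."
  sleep 10
done
echo "$SLED_ID is clear. Safe to remove."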
02
NVMe Drive Replacement (U.2)
DIFFICULTY: LOW  |  TIME: ~10 MIN  |  TOOLS: NONE  |  DOWNTIME: NONE (OpenZFS AUTO-HEAL)

NVMe drives sit in hot-swap U.2 bays on the sled front face. OpenZFS detects failure, marks the drive FAULTED, and immediately begins rebuilding from parity data. No operator action required until physical swap.

  • Oxide Console alert: DRIVE FAULT — sled-03, bay-07
  • Confirm: oxide storage disk list --sled sled-03
  • Locate bay-07 LED (solid red = faulted)
  • Press bay eject button — drive unlocks
  • Pull drive by handle, set aside
  • Insert replacement drive until audible click
  • ZFS pool rebuild starts automatically
  • Monitor: oxide storage pool status --watch
STORAGE STATUS
$ oxide storage pool status
pool:  data
state: DEGRADED
drive: sled-03/bay-07  FAULTED
 
↻ resilvering: 847GiB / 3.2TiB (26%)
est. remaining: 38 minutes
Wait for the drive LED to extinguish before removal. Drives quiesce 10s after the last I/O to prevent data corruption.
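
The final monitoring step can also run unattended. A minimal sketch that polls oxide storage pool status (the command from the last step above) until resilvering no longer appears in the output; the grep pattern is an assumption based on the STORAGE STATUS format shown:

#!/bin/sh
# Sketch: poll the pool until the resilver line disappears from the output.
# Uses the documented `oxide storage pool status`; text matching is an assumption.
while oxide storage pool status | grep -q "resilvering"; do
  oxide storage pool status | grep "resilvering"
  sleep 60
done
echo "resilver complete, pool no longer reports a rebuild in progress"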
03
Network Switch Replacement
DIFFICULTY: MEDIUM  |  TIME: ~30 MIN  |  TOOLS: NONE  |  DOWNTIME: NONE (DUAL-SWITCH HA)

Each rack ships with two switches running simultaneously via DDM. Removing one switch shifts all traffic to the peer in milliseconds. No service interruption for running VMs or storage.

  • Verify peer switch ACTIVE: Console → Network → Switches
  • Set failing switch to STANDBY via OOB port
  • Confirm all sled links fail over to peer
  • Disconnect OOB management cable (front RJ-45)
  • Press release levers simultaneously on both switch ears
  • Slide switch out — no other cables connected
  • Insert replacement, both ears click into position
  • Reconnect OOB cable, wait for auto-config (<5 min)
  • Promote: oxide network switch set-active --id swp-01
SWITCH STATUS
$ oxide network switch list
ID       ROLE      STATE     UPTIME
swp-01   primary   active    14d 06h
swp-02   standby   faulted   —
 
$ oxide network failover --to swp-01
✓ All 32 sleds on swp-01. Safe to remove swp-02.
Never remove both switches simultaneously. Always verify full HA failover before extracting the faulted unit.
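
Because both switches must never be out at once, the failover check is worth scripting. A minimal sketch built on the oxide network switch list and oxide network failover commands shown above; parsing the table with grep is an assumption about its format:

#!/bin/sh
# Sketch: fail traffic over to the surviving switch and refuse to proceed
# unless it reports active. Commands come from the guide above; output
# parsing is an assumption.
SURVIVOR="${1:-swp-01}"
oxide network failover --to "$SURVIVOR"
if oxide network switch list | grep "$SURVIVOR" | grep -q "active"; then
  echo "$SURVIVOR is active. Safe to extract the faulted peer."
else
  echo "ABORT: $SURVIVOR is not active. Do not remove the peer switch." >&2
  exit 1
fi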
04
Power Shelf PSU Replacement
DIFFICULTY: MEDIUM  |  TIME: ~20 MIN  |  TOOLS: TORQUE DRIVER  |  DOWNTIME: NONE (N+1)

Power is distributed via a low-voltage DC bus bar pair. The shelf runs N+1 redundant PSUs — a single failure triggers an alert but remaining units carry full rack load.

  • Identify faulted PSU via amber LED on power shelf
  • Confirm remaining PSUs show green LEDs
  • Connect to power shelf OOB: front RJ-45
  • Run: oxide power psu status — confirm N+1 sufficient
  • Loosen 2x captive screws (torque driver, front access)
  • Pull PSU handle — unit slides free
  • Insert replacement PSU, tighten captive screws
  • Verify PSU LED green, shelf shows NOMINAL
POWER STATUS
$ oxide power psu status
PSU   STATE     OUTPUT   TEMP
A     nominal   4.2kW    41°C
B     nominal   4.2kW    39°C
C     faulted   0.0kW    —
D     nominal   4.1kW    40°C
 
⚠ N+1 maintained. Replace PSU-C within 72h.
DC bus bar carries high current. Do not bypass safety interlocks. Ensure PSU is fully seated before re-energizing shelf.
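
The redundancy check from the steps above is easy to fold into routine health checks. A minimal sketch using only the oxide power psu status command shown in this procedure; counting faulted units with grep assumes the POWER STATUS table format:

#!/bin/sh
# Sketch: count faulted PSUs and report whether redundancy still holds.
# Uses the documented `oxide power psu status`; parsing is an assumption.
FAULTED=$(oxide power psu status | grep -c "faulted")
if [ "$FAULTED" -eq 0 ]; then
  echo "power shelf nominal"
elif [ "$FAULTED" -eq 1 ]; then
  echo "single PSU faulted: remaining units carry the load, replace within 72h"
else
  echo "WARNING: $FAULTED PSUs faulted, remaining capacity may be insufficient" >&2
  exit 1
fi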

// SUPPORT_CHANNELS

[DOC]
Documentation
Full operator and developer docs including architecture guides, API reference, and hardware specifications. Updated with every software release at docs.oxide.computer.
[OPS]
On-Call Engineering
24/7 engineering support for production incidents. Direct line to the same engineers that built the rack. No tier-1 gatekeeping, no ticket queues.
[FRU]
FRU Dispatch
Replacement parts dispatched same business day. Full inventory of sleds, drives, switches, and PSUs maintained at regional depots.
[UPD]
Software Updates
Rolling zero-downtime updates. Firmware, OS, and control plane updated atomically through the Oxide update service — no maintenance windows.
[TRN]
Operator Training
On-site training covering rack operations, all FRU procedures, console management, and API integration patterns for your team.
[COM]
RFD Community
Open Request for Discussion process. Participate in roadmap decisions, browse design documents, and discuss directly with Oxide engineers.