A Kubernetes cluster on Oxide is a set of Oxide instances running a standard Kubernetes distribution. You provision the cluster using a management platform that creates Oxide instances as Kubernetes nodes through the Oxide API. Oxide maintains integrations for two platforms, Omni and Rancher, that automate instance creation and scaling. Other provisioning tools work as well, but these two have the most complete integration.
Once the cluster is running, you deploy the
Oxide Cloud Controller Manager (CCM)
into the cluster to integrate Kubernetes with the Oxide API, providing node
lifecycle management and LoadBalancer service support. We recommend the CCM
for all Kubernetes clusters on Oxide regardless of how they were provisioned.
Provisioning Options
Oxide maintains provisioning integrations for Omni and Rancher. Both automate the full lifecycle of Oxide instances as Kubernetes nodes — creating, scaling, and destroying them through the Oxide API — but they differ in how they run and what they provide.
Omni is a Kubernetes lifecycle management
platform from Sidero Labs that runs as a SaaS or on-premises; the Omni guide
uses SaaS. You run an Oxide-specific infrastructure provider that bridges Omni
and the Oxide API, then use omnictl to define machine classes and create
clusters. Omni provisions Talos Linux instances on Oxide automatically as
clusters are created and scaled. This option is well-suited for teams that
want declarative cluster lifecycle management and prefer Talos Linux’s minimal,
immutable, API-driven operating system.
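As a sketch of what this looks like in practice (the field names and values below are illustrative assumptions, not the authoritative Omni schema; see the Omni guide and Omni documentation for the exact resource definitions), a machine class might be declared as a YAML resource:

```yaml
# Hypothetical machine class definition; field names and values are
# illustrative assumptions, not the authoritative Omni schema.
metadata:
  namespace: default
  type: MachineClasses.omni.sidero.dev
  id: oxide-workers
spec:
  # Ask the Oxide infrastructure provider to provision machines on demand.
  autoprovision:
    providerid: oxide
```

Applying a definition like this with `omnictl` makes the class available when creating clusters; cluster definitions can then reference the class so that Omni creates and scales Oxide instances automatically.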
Rancher is a Kubernetes management platform from SUSE that runs as a SaaS or on-premises; the Rancher guide runs it on an Oxide instance. You install the Oxide node driver into Rancher, then define clusters and machine pools via Kubernetes manifests or the Rancher UI. Rancher provisions Oxide instances using any RKE2-supported operating system. This option is well-suited for teams already operating Rancher or those who prefer a traditional Linux operating system.
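As an illustration of the manifest-based workflow (the `OxideConfig` kind and the names below are hypothetical placeholders for this sketch; the Rancher guide documents the actual machine config resource installed by the node driver), a cluster with one machine pool might be declared as:

```yaml
# Sketch of a Rancher provisioning cluster. The machineConfigRef kind
# "OxideConfig" is a hypothetical name for the node driver's config resource.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-cluster
  namespace: fleet-default
spec:
  rkeConfig:
    machinePools:
      - name: pool1
        quantity: 3
        etcdRole: true
        controlPlaneRole: true
        workerRole: true
        machineConfigRef:
          kind: OxideConfig   # hypothetical kind supplied by the Oxide node driver
          name: pool1-config
```

Scaling the pool is then a matter of changing `quantity`; Rancher reconciles the pool by creating or destroying Oxide instances through the node driver.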
Comparison
| | Omni | Rancher |
|---|---|---|
| Management Platform | Hosted or self-hosted; from Sidero Labs | Hosted or self-hosted; from SUSE |
| Kubernetes Node Operating System | Talos Linux | Any RKE2-supported operating system (e.g., Ubuntu 24.04) |
| Primary Tooling | `omnictl` | Kubernetes manifests or the Rancher UI |
| Oxide Integration | Infrastructure provider runs with access to both Omni and the Oxide API | Node driver installed into Rancher with access to the Oxide API |
| Cluster Lifecycle | Define machine classes; Omni provisions and deprovisions Oxide instances automatically | Define node pools; Rancher provisions and deprovisions Oxide instances automatically |
| Oxide CCM Support | Configured via Omni | Configured via Rancher |
| Source Code | | |
Getting Started
Regardless of which provisioning option you choose, we recommend the following setup for Kubernetes clusters running on Oxide.
1. **Provision a cluster with CCM support enabled.** Follow the Omni or Rancher guide to create a cluster. Both guides include instructions for setting `--cloud-provider=external` on the `kubelet`, `kube-apiserver`, and `kube-controller-manager`, which configures Kubernetes to use an external controller (e.g., the Oxide CCM) for cloud-provider-specific logic. Retrofitting an existing cluster with CCM support requires restarting these components.
2. **Deploy the Oxide Cloud Controller Manager.** Follow the Cloud Controller Manager guide to deploy the CCM into the cluster via Helm. This enables node lifecycle management and `LoadBalancer` service support.
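As a rough sketch of that deployment (the repository URL, chart name, and release name below are placeholders; the Cloud Controller Manager guide has the authoritative commands and chart values):

```sh
# Placeholder repository and chart name; substitute the real ones from the CCM guide.
helm repo add oxide https://charts.example.com
helm repo update

# Install the CCM into kube-system; Oxide API credentials are supplied via chart values.
helm install oxide-ccm oxide/oxide-cloud-controller-manager \
  --namespace kube-system

# Confirm the controller pods are running.
kubectl --namespace kube-system get pods
```

Once the CCM is running, it removes the `node.cloudprovider.kubernetes.io/uninitialized` taint that `--cloud-provider=external` places on new nodes, allowing workloads to schedule.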