The Oxide cloud controller manager (CCM) is a Kubernetes control plane component that embeds Oxide-specific control logic, allowing Kubernetes clusters running on Oxide to integrate with the Oxide API via the cloud controller manager architecture.
This guide demonstrates how to create a Kubernetes cluster configured to use an external cloud provider and deploy the Oxide cloud controller manager to the cluster to run Oxide-specific cloud controllers.
We recommend running the cloud controller manager on all Kubernetes clusters running on Oxide.
Overview
The cloud controller manager is the bridge between Kubernetes and the underlying cloud provider, turning a generic Kubernetes cluster into a cluster that integrates natively with the cloud provider it’s running on.
The cloud controller manager runs controllers to manage nodes and services, as well as custom controllers for any additional logic that may be required.
The Oxide cloud controller manager runs the following controllers.
Node Controller
The node controller updates Node resources when nodes are created, removed, or
become unhealthy. The node controller performs the following functions.
- Remove the node’s node.cloudprovider.kubernetes.io/uninitialized taint.
- Set the node’s spec.providerID with its instance ID retrieved from Oxide.
- Configure the node’s labels with Oxide-specific information (e.g., node.kubernetes.io/instance-type).
- Set the node’s status.addresses with its hostname and internal and external IP addresses.
- Continuously monitor node health using the Oxide API, updating the node status to mitigate false node evictions (e.g., during a network partition).
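For illustration, a Node resource that the node controller has initialized might look roughly like the following sketch. The node name, instance ID, instance type, and address are hypothetical values, not output from a real cluster.

```yaml
apiVersion: v1
kind: Node
metadata:
  name: oxide-k8s-cluster-workers-example
  labels:
    # Hypothetical instance type, populated from Oxide instance metadata.
    node.kubernetes.io/instance-type: general-xlarge
spec:
  # Oxide instance ID, prefixed with the provider scheme.
  providerID: oxide://00000000-0000-0000-0000-000000000000
status:
  addresses:
    - type: InternalIP
      address: 172.30.0.22
    - type: Hostname
      address: oxide-k8s-cluster-workers-example
```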
Service Controller
The service controller watches Service resources of type LoadBalancer
and ensures an Oxide load balancer is correctly provisioned based on the
specification. The service controller performs the following functions.
- Allocates a floating IP and attaches it to the first node ordered by name.
- Updates status.loadBalancer.ingress on the Service resource.
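As a sketch, the status fragment the service controller writes to a provisioned Service looks roughly like this (the addresses are illustrative; see Limitations for why both IPs appear):

```yaml
status:
  loadBalancer:
    ingress:
      # The node's internal IP, so Kubernetes opens the per-node
      # firewall rules needed to deliver traffic on the service port.
      - ip: 172.30.0.22
      # The floating IP allocated from the Oxide IP pool.
      - ip: 45.154.216.217
```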
Limitations
Oxide does not currently have a native load balancer service, so floating IPs are used instead. Please note the following limitations with this approach, each of which will be resolved when Oxide has a native load balancer service.
- There can only be one LoadBalancer service per unique spec.ports[].port. Since floating IPs are transparent to Oxide instances, traffic will arrive on the instance’s internal IP address using spec.ports[].port. The service controller adds both the floating IP and the instance’s internal IP to the LoadBalancer status so that Kubernetes creates the per-node firewall rules needed to allow the traffic. Put another way, a load balancer configured to use port 443 will conflict with another load balancer configured to use port 443.
- The LoadBalancer service must set spec.externalTrafficPolicy to Cluster. This allows traffic to enter any node in the cluster and be delivered to a node that’s running a backend service pod.
- The service controller does not modify Oxide VPC firewall rules to allow traffic to the external service address and port. Users must update VPC firewall rules themselves. This may change in a future update to the cloud controller manager.
We recommend creating a single LoadBalancer resource that sends traffic to a
Gateway or Ingress resource running within the Kubernetes cluster that will
handle path-based routing of traffic.
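For example, a single hypothetical Ingress behind the one LoadBalancer service could fan traffic out to multiple backend services by path. The service names, ports, and paths below are illustrative, and an ingress controller must be installed in the cluster to act on the resource.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routes
spec:
  rules:
    - http:
        paths:
          # Requests under /api go to a hypothetical "api" service.
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
          # Everything else goes to a hypothetical "web" service.
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
```

With this pattern, only the ingress controller is exposed through the single LoadBalancer service, sidestepping the one-service-per-port limitation.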
Prerequisites
Kubernetes Cluster
The Kubernetes cluster running the Oxide CCM must have all its nodes running in the same Oxide silo and project.
The kubelet, kube-apiserver, and kube-controller-manager must run with
--cloud-provider=external to configure the Kubernetes cluster to use an
external cloud provider. This process differs depending on your Kubernetes
distribution of choice.
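For instance, if you bootstrap with kubeadm rather than one of the integrations below, the flag can be set through the kubeadm configuration API. This is a sketch assuming the v1beta3 config version; adapt it to your kubeadm release.

```yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # Run the kubelet with --cloud-provider=external.
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # Run kube-apiserver with --cloud-provider=external.
    cloud-provider: external
controllerManager:
  extraArgs:
    # Run kube-controller-manager with --cloud-provider=external.
    cloud-provider: external
```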
See the upstream Kubernetes documentation for the kubelet, kube-apiserver, and kube-controller-manager for details on this flag. You can use our Omni and Rancher integrations to create a Kubernetes cluster that meets these requirements, noting the following instructions.
Omni
When creating the Kubernetes cluster with Omni, configure the control plane
nodes to use an external cloud provider using the following ConfigPatches
manifest.
---
metadata:
  namespace: default
  type: ConfigPatches.omni.sidero.dev
  id: external-cloud-provider
  labels:
    omni.sidero.dev/system-patch:
    omni.sidero.dev/cluster: oxide-k8s-cluster
    omni.sidero.dev/machine-set: control-plane
spec:
  data: |
    cluster:
      externalCloudProvider:
        enabled: true
Rancher
When creating the Kubernetes cluster with Rancher, configure the control plane
nodes to use an external cloud provider using the following Cluster manifest.
---
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: oxide-k8s-cluster
  namespace: fleet-default
spec:
  # ...
  rkeConfig:
    # ...
    machineGlobalConfig:
      # 1. Disable the embedded RKE2 cloud controller.
      disable-cloud-controller: true
      # 2. Tell the core components to look for an external provider.
      kubelet-arg:
        - "cloud-provider=external"
      kube-apiserver-arg:
        - "cloud-provider=external"
      kube-controller-manager-arg:
        - "cloud-provider=external"
Install kubectl
Follow the instructions at
Install Tools to download
and install kubectl on your machine. Verify your installation once finished.
$ kubectl version --client
Client Version: v1.35.2
Kustomize Version: v5.7.1
Install Helm
Follow the instructions at Installing Helm
to download and install helm on your machine. Verify your installation once
finished.
$ helm version --short
v4.1.1+g5caf004
API Credentials
The cloud controller manager must be configured with Oxide API credentials,
specifically the host, API token, and project. The API credentials must be
generated by a user with the collaborator role.
Follow the instructions in the CLI introduction to configure Oxide API credentials. Take note of the host, API token, and project.
Host: https://oxide.sys.example.com
API Token: oxide-token-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Project: example
IP Pool
The service controller will allocate and attach a floating IP to provision
LoadBalancer services. The Oxide silo must have a linked IP pool. Follow the
instructions in the IP Pools and Subnet Pools
guide to verify or create one.
Deploy the Cloud Controller Manager
This section assumes you have deployed a new Kubernetes cluster that’s
configured to use an external cloud provider as covered in Prerequisites. Ensure your kubeconfig is configured to access the target
cluster before proceeding.
Verify that the Kubernetes cluster is ready.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
oxide-k8s-cluster-control-planes-pn9mnk Ready control-plane 7m17s v1.35.2
oxide-k8s-cluster-control-planes-qxsmrc Ready control-plane 7m32s v1.35.2
oxide-k8s-cluster-control-planes-tzrscl Ready control-plane 7m32s v1.35.2
oxide-k8s-cluster-workers-rvh5k5 Ready <none> 7m32s v1.35.2
Verify that each node is tainted with node.cloudprovider.kubernetes.io/uninitialized. This taint will be removed by the node controller.
$ kubectl get node --output custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
NAME TAINTS
oxide-k8s-cluster-control-planes-pn9mnk node-role.kubernetes.io/control-plane,node.cloudprovider.kubernetes.io/uninitialized
oxide-k8s-cluster-control-planes-qxsmrc node-role.kubernetes.io/control-plane,node.cloudprovider.kubernetes.io/uninitialized
oxide-k8s-cluster-control-planes-tzrscl node-role.kubernetes.io/control-plane,node.cloudprovider.kubernetes.io/uninitialized
oxide-k8s-cluster-workers-rvh5k5 node.cloudprovider.kubernetes.io/uninitialized
Gather the Oxide host, API token, and project that were set aside earlier and create a Secret resource, replacing $NAME with a name that’ll help you uniquely identify this secret later (e.g., oxide-k8s-cluster).
kubectl create secret generic $NAME-oxide-cloud-controller-manager \
--namespace kube-system \
--from-literal=oxide-host=https://oxide.sys.example.com \
--from-literal=oxide-token=oxide-token-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
--from-literal=oxide-project=example
Install the Oxide cloud controller manager using Helm, replacing $NAME with the same value you used for the secret and $VERSION with the desired version of the Oxide cloud controller manager, which can be found on the Helm chart registry.
helm install $NAME \
oci://ghcr.io/oxidecomputer/helm-charts/oxide-cloud-controller-manager \
--version $VERSION \
--namespace kube-system \
--create-namespace
You’ll see the following output if the deployment succeeded.
Pulled: ghcr.io/oxidecomputer/helm-charts/oxide-cloud-controller-manager:0.4.0
Digest: sha256:850f5dcfb1af610a08a9a134b3078953658608c0637aa1580cdb956c124c98a7
NAME: oxide-k8s-cluster
LAST DEPLOYED: Thu Mar 19 15:00:27 2026
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None
Verify the Oxide cloud controller manager deployment is running and ready, replacing $NAME with the same value you used for the secret.
$ kubectl get deployment \
--namespace kube-system \
$NAME-oxide-cloud-controller-manager
NAME READY UP-TO-DATE AVAILABLE AGE
oxide-k8s-cluster-oxide-cloud-controller-manager 1/1 1 1 6m31s
Verify the Node Controller
Let’s verify the node controller is successfully running.
Verify the node.cloudprovider.kubernetes.io/uninitialized taint was removed from the nodes.
$ kubectl get node --output custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
NAME TAINTS
oxide-k8s-cluster-control-planes-pn9mnk node-role.kubernetes.io/control-plane
oxide-k8s-cluster-control-planes-qxsmrc node-role.kubernetes.io/control-plane
oxide-k8s-cluster-control-planes-tzrscl node-role.kubernetes.io/control-plane
oxide-k8s-cluster-workers-rvh5k5 <none>
Verify that each node has a provider ID set that matches its respective Oxide instance ID.
$ kubectl get node --output custom-columns='NAME:.metadata.name,PROVIDER_ID:.spec.providerID'
NAME PROVIDER_ID
oxide-k8s-cluster-control-planes-pn9mnk oxide://3a3ae06b-efd2-4762-839f-c2b8d720b8fb
oxide-k8s-cluster-control-planes-qxsmrc oxide://032db83a-05ce-4eb3-af29-db2dec387f96
oxide-k8s-cluster-control-planes-tzrscl oxide://c229a420-7d3d-4872-bb13-472f3424ccd3
oxide-k8s-cluster-workers-rvh5k5 oxide://02141805-48c2-43cf-b02f-f406a338f36a
Verify that each node has addresses set.
$ kubectl get node --output custom-columns='NAME:.metadata.name,ADDRESSES:.status.addresses[*]'
NAME ADDRESSES
oxide-k8s-cluster-control-planes-pn9mnk map[address:172.30.0.20 type:InternalIP],map[address:oxide-k8s-cluster-control-planes-pn9mnk type:Hostname]
oxide-k8s-cluster-control-planes-qxsmrc map[address:172.30.0.21 type:InternalIP],map[address:oxide-k8s-cluster-control-planes-qxsmrc type:Hostname]
oxide-k8s-cluster-control-planes-tzrscl map[address:172.30.0.19 type:InternalIP],map[address:oxide-k8s-cluster-control-planes-tzrscl type:Hostname]
oxide-k8s-cluster-workers-rvh5k5 map[address:172.30.0.22 type:InternalIP],map[address:oxide-k8s-cluster-workers-rvh5k5 type:Hostname]
Verify the Service Controller
The following steps deploy a sample workload to verify the service controller is running, then clean up afterward.
Verify that there are no LoadBalancer services running.
$ kubectl get services \
--all-namespaces \
--field-selector spec.type=LoadBalancer
No resources found
Create a Kubernetes manifest named oxide-ccm-service.yaml with a Deployment and Service of type LoadBalancer. Replace $IP_POOL with the name of the IP pool you’d like to allocate a floating IP from.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    oxide.computer/floating-ip-pool: "$IP_POOL"
  name: nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
Apply the Kubernetes manifest.
$ kubectl apply -f oxide-ccm-service.yaml
deployment.apps/nginx created
service/nginx created
Verify that the nginx LoadBalancer service is running and has a populated EXTERNAL-IP column. The EXTERNAL-IP column will contain both the node’s internal IP and the floating IP. Use the non-RFC 1918 address (the floating IP) for external connectivity. See Limitations for details on why this is necessary.
$ kubectl get services \
--all-namespaces \
--field-selector spec.type=LoadBalancer
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default nginx LoadBalancer 10.111.159.169 172.30.0.22,45.154.216.217 80:31351/TCP 3m9s
Ensure your VPC firewall rules allow inbound traffic on the service port to the floating IP address. The service controller does not modify VPC firewall rules automatically.
Connect to the external IP address to verify service connectivity.
$ curl --silent --head http://45.154.216.217
HTTP/1.1 200 OK
Server: nginx/1.29.6
Date: Thu, 19 Mar 2026 20:33:23 GMT
Content-Type: text/html
Content-Length: 896
Last-Modified: Tue, 10 Mar 2026 15:29:07 GMT
Connection: keep-alive
ETag: "69b038c3-380"
Accept-Ranges: bytes
Delete the Kubernetes manifest to clean up.
$ kubectl delete -f oxide-ccm-service.yaml
deployment.apps "nginx" deleted from default namespace
service "nginx" deleted from default namespace
Troubleshooting
If the cloud controller manager is not running or nodes remain tainted, check the cloud controller manager logs for errors.
$ kubectl logs \
--namespace kube-system \
deployment/$NAME-oxide-cloud-controller-manager