Self-Host OpenClaw on Kubernetes

Deploy OpenClaw on Kubernetes using Kustomize with persistent storage, secure gateway access, and production-ready configuration. Step-by-step guide for K8s clusters.

Difficulty: advanced · Time: ~30 min · Cost: varies

This guide walks you through deploying OpenClaw on a Kubernetes cluster using Kustomize. The deployment creates a single-pod setup with an init container, gateway service, persistent storage, and configurable agent instructions. It works on any conformant Kubernetes cluster, whether managed (EKS, GKE, AKS) or self-hosted.

Quick Path

For experienced Kubernetes operators who want to get running fast:

  1. Clone the OpenClaw repository and cd into deploy/kubernetes/
  2. Export your LLM API key: export ANTHROPIC_API_KEY=sk-...
  3. Generate secrets: ./scripts/generate-secrets.sh
  4. Review and edit config/openclaw.json and config/AGENTS.md
  5. Deploy: kubectl apply -k .
  6. Port-forward: kubectl port-forward -n openclaw svc/openclaw-gateway 18789:18789
  7. Open http://localhost:18789 in your browser

Prerequisites

Before you begin, make sure you have:

  - A Kubernetes cluster, managed (EKS, GKE, AKS) or self-hosted
  - kubectl installed and configured to talk to that cluster (Kustomize is built into kubectl)
  - An API key for your LLM provider (e.g., Anthropic)
  - git, to clone the OpenClaw repository

If you want to test locally before deploying to a real cluster, see the "Local Development with Kind" section at the end of this guide.

Architecture Overview

The Kustomize deployment creates the following resources in a dedicated openclaw namespace:

  - A Namespace (openclaw)
  - A Deployment with an init container that copies configuration onto the data volume, plus the gateway container
  - A ClusterIP Service exposing the gateway on port 18789
  - A PersistentVolumeClaim (openclaw-data, 10Gi) for session and workspace data
  - A ConfigMap (openclaw-config) generated from config/openclaw.json and config/AGENTS.md
  - A Secret (openclaw-secrets) holding the gateway token and provider API keys

Step-by-Step Deployment

Clone the Repository

git clone https://github.com/openclaw/openclaw.git
cd openclaw/deploy/kubernetes

Configure Your API Keys

Generate a gateway token and set your LLM provider key:

export OPENCLAW_GATEWAY_TOKEN=$(openssl rand -hex 32)
export ANTHROPIC_API_KEY="sk-ant-your-key-here"

Create the Kubernetes secret from these values:

kubectl create namespace openclaw

kubectl create secret generic openclaw-secrets \
  -n openclaw \
  --from-literal=gateway-token="$OPENCLAW_GATEWAY_TOKEN" \
  --from-literal=anthropic-api-key="$ANTHROPIC_API_KEY"

If you use a different provider, substitute the appropriate key name (e.g., openai-api-key, openrouter-api-key).

Review the Configuration

The ConfigMap sources live in the config/ directory:

ls config/
# openclaw.json    AGENTS.md

Edit config/openclaw.json to set your preferred model and runtime options:

{
  "model": "claude-sonnet-4-20250514",
  "gateway": {
    "bind": "0.0.0.0",
    "port": 18789
  },
  "permissions": {
    "allow_network": true,
    "allow_file_write": true
  }
}

Edit config/AGENTS.md to customize the system prompt and agent behavior:

# Agent Instructions

You are a coding assistant deployed on Kubernetes.
Follow the project's coding standards. Run tests before committing.

Review the Kustomize Manifests

The kustomization.yaml ties everything together:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: openclaw

resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - pvc.yaml

configMapGenerator:
  - name: openclaw-config
    files:
      - config/openclaw.json
      - config/AGENTS.md

secretGenerator:
  - name: openclaw-secrets
    type: Opaque
    literals:
      - gateway-token=REPLACE_ME

Note: if you created openclaw-secrets manually with kubectl in the previous step, remove this secretGenerator block. Kustomize appends a content hash to generated names and rewrites the Deployment's secretKeyRef to point at the generated secret, so the one you created by hand would be ignored.

The deployment manifest defines the pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: openclaw
  namespace: openclaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openclaw
  template:
    metadata:
      labels:
        app: openclaw
    spec:
      initContainers:
        - name: init-config
          image: openclaw/openclaw:latest
          command: ["sh", "-c", "mkdir -p /data/config && cp /config/* /data/config/"]
          volumeMounts:
            - name: config
              mountPath: /config
            - name: data
              mountPath: /data
      containers:
        - name: gateway
          image: openclaw/openclaw:latest
          ports:
            - containerPort: 18789
          env:
            - name: OPENCLAW_GATEWAY_TOKEN
              valueFrom:
                secretKeyRef:
                  name: openclaw-secrets
                  key: gateway-token
            - name: ANTHROPIC_API_KEY
              valueFrom:
                secretKeyRef:
                  name: openclaw-secrets
                  key: anthropic-api-key
            - name: XDG_CONFIG_HOME
              value: /home/node/.openclaw
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
      volumes:
        - name: config
          configMap:
            name: openclaw-config
        - name: data
          persistentVolumeClaim:
            claimName: openclaw-data

The PVC requests 10Gi of storage:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openclaw-data
  namespace: openclaw
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The Service exposes the gateway inside the cluster as a ClusterIP service:

apiVersion: v1
kind: Service
metadata:
  name: openclaw-gateway
  namespace: openclaw
spec:
  type: ClusterIP
  selector:
    app: openclaw
  ports:
    - port: 18789
      targetPort: 18789
      protocol: TCP

Deploy

Apply the full stack with a single command:

kubectl apply -k .

Verify the pod is running:

kubectl get pods -n openclaw -w

Wait until the pod shows Running with 1/1 ready containers.

Access the Gateway

Port-forward the service to your local machine:

kubectl port-forward -n openclaw svc/openclaw-gateway 18789:18789

Open http://localhost:18789 in your browser. Use the gateway token you generated earlier to authenticate.

Customization

Pinning the Container Image

For reproducible deployments, pin to a specific image tag instead of latest:

image: openclaw/openclaw:v0.12.3

Check the OpenClaw releases page for available tags.
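Alternatively, Kustomize can override the tag without touching deployment.yaml, using its built-in images transformer (the tag below is an example — pick one from the releases page):

```yaml
# kustomization.yaml — pin the image tag for every manifest at once
images:
  - name: openclaw/openclaw
    newTag: v0.12.3   # example tag; substitute a real release
```

This keeps version bumps confined to one line in kustomization.yaml.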

Adding Model Providers

To add OpenAI or OpenRouter alongside Anthropic, extend the secret:

kubectl create secret generic openclaw-secrets \
  -n openclaw \
  --from-literal=gateway-token="$OPENCLAW_GATEWAY_TOKEN" \
  --from-literal=anthropic-api-key="$ANTHROPIC_API_KEY" \
  --from-literal=openai-api-key="$OPENAI_API_KEY" \
  --from-literal=openrouter-api-key="$OPENROUTER_API_KEY" \
  --dry-run=client -o yaml | kubectl apply -f -

Then add the corresponding env entries in the deployment manifest.
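For example, the OpenAI entry would mirror the Anthropic one (the secret key name matches the secret created above; confirm the exact environment variable names OpenClaw expects against its documentation):

```yaml
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: openclaw-secrets
                  key: openai-api-key
```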

Custom Namespace

Change the namespace in kustomization.yaml:

namespace: my-team-openclaw

All resources will be created in the new namespace automatically.

Ingress for Remote Access

To expose OpenClaw outside the cluster, create an Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openclaw-ingress
  namespace: openclaw
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - openclaw.yourdomain.com
      secretName: openclaw-tls
  rules:
    - host: openclaw.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: openclaw-gateway
                port:
                  number: 18789

This requires an Ingress controller (nginx, Traefik, etc.) and cert-manager for automatic TLS certificates.
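If you expose the gateway this way, consider also restricting in-cluster traffic with a NetworkPolicy. A minimal sketch, assuming your CNI enforces NetworkPolicy and your Ingress controller runs in an ingress-nginx namespace (adjust the selector to your environment):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: openclaw-gateway-policy
  namespace: openclaw
spec:
  podSelector:
    matchLabels:
      app: openclaw
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx  # assumption: controller namespace
      ports:
        - protocol: TCP
          port: 18789
```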

Security Best Practices

A few baseline protections for a deployment that holds API keys:

  - Keep the gateway token strong and secret; rotate it by regenerating with openssl rand -hex 32 and updating the Secret.
  - Do not expose the gateway outside the cluster without TLS; terminate HTTPS at the Ingress with cert-manager.
  - Pin the container image to a specific version tag rather than latest.
  - Restrict read access to the openclaw namespace with RBAC — Kubernetes Secrets are only base64-encoded, not encrypted by default.

Local Development with Kind

To test the deployment locally before pushing to a production cluster:

# Install Kind
brew install kind   # macOS
# or: go install sigs.k8s.io/kind@latest

# Create a local cluster
kind create cluster --name openclaw-dev

# Load a locally built image (optional)
docker build -t openclaw/openclaw:latest .
kind load docker-image openclaw/openclaw:latest --name openclaw-dev

# Deploy
kubectl apply -k deploy/kubernetes/

# Port-forward and test
kubectl port-forward -n openclaw svc/openclaw-gateway 18789:18789

Delete the cluster when done:

kind delete cluster --name openclaw-dev

Troubleshooting

Pod stuck in Pending

Check if the PVC is bound:

kubectl get pvc -n openclaw

If the PVC is stuck in Pending, your cluster may not have a default StorageClass. Create one or specify a storageClassName in the PVC manifest.
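For example, targeting a StorageClass explicitly (the class name here is illustrative — list yours with kubectl get storageclass):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openclaw-data
  namespace: openclaw
spec:
  storageClassName: gp3   # example; substitute a class from your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```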

Init container fails

Inspect the init container logs:

kubectl logs -n openclaw deployment/openclaw -c init-config

Common cause: the ConfigMap was not created because kustomize was not used (plain kubectl apply -f skips configMapGenerator).

Gateway returns 401 Unauthorized

Verify the token matches what you generated:

kubectl get secret openclaw-secrets -n openclaw -o jsonpath='{.data.gateway-token}' | base64 -d

Container OOMKilled

Increase the memory limit in the deployment manifest. If you are running large context windows or multiple concurrent sessions, bump to 2Gi or higher.
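One way to raise the limit without editing the base manifest is a Kustomize patch — a sketch, assuming the Deployment name from the manifests above:

```yaml
# kustomization.yaml — raise the gateway container's memory limit
patches:
  - target:
      kind: Deployment
      name: openclaw
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: 2Gi
```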

Persistent data missing after pod restart

Confirm the PVC is attached and the volume mount paths are correct. Data should survive pod restarts as long as the PVC is not deleted.

Updating OpenClaw

To update to a new version:

# Pull the latest manifests
cd openclaw/deploy/kubernetes
git pull origin main

# Update the image tag in deployment.yaml (if pinned)
# Then re-apply
kubectl apply -k .

# Watch the rollout
kubectl rollout status deployment/openclaw -n openclaw

Kubernetes performs a rolling update by default. With a single replica backed by a ReadWriteOnce PVC, however, the rollout can stall: the new pod cannot mount the volume while the old pod still holds it. If that happens, switch the Deployment's update strategy to Recreate and accept a brief gap during updates.
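If the rollout stalls on the ReadWriteOnce volume, one fix is the Recreate strategy, which terminates the old pod before starting the new one — a sketch of the relevant Deployment fields:

```yaml
# deployment.yaml — release the volume before the replacement pod starts
spec:
  replicas: 1
  strategy:
    type: Recreate
```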

Frequently Asked Questions

Does OpenClaw provide a Helm chart?

OpenClaw uses Kustomize-based manifests rather than Helm. Kustomize is built into kubectl and requires no additional tooling. You can wrap the provided manifests in a Helm chart if your organization standardizes on Helm.

How much CPU and memory does OpenClaw need on Kubernetes?

A single OpenClaw pod runs comfortably with 512Mi-1Gi of memory and 0.5-1 CPU core. The agent itself is lightweight — most compute happens on the LLM provider side. Increase resources if you run many concurrent sessions.

Can I run multiple OpenClaw replicas for high availability?

OpenClaw stores session state on a persistent volume, so horizontal scaling requires sticky sessions or a shared filesystem like NFS or EFS. For most teams, a single replica with a PVC is sufficient.

Is it safe to expose the OpenClaw gateway to the internet?

Not without authentication and TLS. In this deployment the gateway is reachable only from inside the cluster through the ClusterIP Service. If you add an Ingress, enforce HTTPS with a cert-manager certificate and keep the OPENCLAW_GATEWAY_TOKEN strong and secret.

Can I use OpenClaw on managed Kubernetes services like EKS, GKE, or AKS?

Yes. The Kustomize manifests are standard Kubernetes resources and work on any conformant cluster including EKS, GKE, AKS, and self-managed clusters.
