Kubernetes Operator
The Semantic Router Operator provides a Kubernetes-native way to deploy and manage vLLM Semantic Router instances using Custom Resource Definitions (CRDs). It simplifies deployment, configuration, and lifecycle management across Kubernetes and OpenShift platforms.
Features
- 🚀 Declarative Deployment: Define semantic router instances using Kubernetes CRDs
- 🔄 Automatic Configuration: Generates and manages ConfigMaps for semantic router configuration
- 📦 Persistent Storage: Manages PVCs for ML model storage with automatic lifecycle
- 🔐 Platform Detection: Automatically detects and configures for OpenShift or standard Kubernetes
- 📊 Built-in Observability: Metrics, tracing, and monitoring support out of the box
- 🎯 Production Features: HPA, ingress, service mesh integration, and pod disruption budgets
- 🛡️ Secure by Default: Drops all capabilities, prevents privilege escalation
Quick Start
Prerequisites
- Kubernetes 1.24+ or OpenShift 4.12+
- `kubectl` or `oc` CLI configured
- Cluster admin access (for CRD installation)
Installation
Option 1: Using Kustomize (Standard Kubernetes)
# Clone the repository
git clone https://github.com/vllm-project/semantic-router
cd semantic-router/deploy/operator
# Install CRDs
make install
# Deploy the operator
make deploy IMG=ghcr.io/vllm-project/semantic-router-operator:latest
Verify the operator is running:
kubectl get pods -n semantic-router-operator-system
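You can also confirm that the CRDs registered correctly by querying them directly. The CRD name below is inferred from the vllm.ai API group and the SemanticRouter kind, so treat it as an assumption and adjust it if your cluster lists a different name:
# List the SemanticRouter CRD (name inferred from group/kind; confirm with `kubectl get crd | grep vllm.ai`)
kubectl get crd semanticrouters.vllm.ai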
Option 2: Using OLM (OpenShift)
For OpenShift deployments using Operator Lifecycle Manager:
cd semantic-router/deploy/operator
# Build and push to your registry (Quay, internal registry, etc.)
podman login quay.io
make podman-build IMG=quay.io/<your-org>/semantic-router-operator:latest
make podman-push IMG=quay.io/<your-org>/semantic-router-operator:latest
# Deploy using OLM
make openshift-deploy
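Once `make openshift-deploy` completes, the operator's ClusterServiceVersion should reach the Succeeded phase. The namespace below is an assumption; point the commands at whichever namespace your OLM subscription targets:
# Inspect the operator's ClusterServiceVersion and pods (namespace is an assumption)
oc get csv -n openshift-operators
oc get pods -n openshift-operators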
See the OpenShift Quick Start Guide for detailed instructions.
Deploy Your First Router
Create a my-router.yaml file:
apiVersion: vllm.ai/v1alpha1
kind: SemanticRouter
metadata:
  name: my-router
  namespace: default
spec:
  replicas: 2
  image:
    repository: ghcr.io/vllm-project/semantic-router/extproc
    tag: latest
  resources:
    limits:
      memory: "7Gi"
      cpu: "2"
    requests:
      memory: "3Gi"
      cpu: "1"
  persistence:
    enabled: true
    size: 10Gi
    storageClassName: "standard"
  config:
    bert_model:
      model_id: "models/mom-embedding-light"
      threshold: 0.6
      use_cpu: true
    semantic_cache:
      enabled: true
      backend_type: "memory"
      max_entries: 1000
      ttl_seconds: 3600
    tools:
      enabled: true
      top_k: 3
      similarity_threshold: 0.2
    prompt_guard:
      enabled: true
      threshold: 0.7
  toolsDb:
    - tool:
        type: "function"
        function:
          name: "get_weather"
          description: "Get weather information for a location"
          parameters:
            type: "object"
            properties:
              location:
                type: "string"
                description: "City and state, e.g. San Francisco, CA"
            required: ["location"]
      description: "Weather information tool"
      category: "weather"
      tags: ["weather", "temperature"]
Apply the configuration:
kubectl apply -f my-router.yaml
Verify Deployment
# Check the SemanticRouter resource
kubectl get semanticrouter my-router
# Check created resources
kubectl get deployment,service,configmap -l app.kubernetes.io/instance=my-router
# View status
kubectl describe semanticrouter my-router
# View logs
kubectl logs -f deployment/my-router
Expected output:
NAME                               PHASE     REPLICAS   READY   AGE
semanticrouter.vllm.ai/my-router   Running   2          2       5m
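As a quick smoke test, you can port-forward the Service and scrape the metrics endpoint. The Service name is assumed to match the CR name, and the /metrics path follows the usual Prometheus convention; verify both with `kubectl get service` if the calls fail:
# Forward the metrics port (9190) from the router Service (name assumed to match the CR)
kubectl port-forward svc/my-router 9190:9190
# In a second terminal, fetch the Prometheus-style metrics (path assumed)
curl -s http://localhost:9190/metrics | head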
Architecture
The operator manages a complete stack of resources for each SemanticRouter:
┌─────────────────────────────────────────────────────┐
│                  SemanticRouter CR                  │
│            apiVersion: vllm.ai/v1alpha1             │
│                kind: SemanticRouter                 │
└──────────────────────┬──────────────────────────────┘
                       │
                       ▼
           ┌───────────────────────┐
           │  Operator Controller  │
           │ - Watches CR          │
           │ - Reconciles state    │
           │ - Platform detection  │
           └───────────┬───────────┘
                       │
      ┌────────────────┼────────────────┬────────────────┐
      ▼                ▼                ▼                ▼
┌────────────┐   ┌────────────┐   ┌────────────┐   ┌────────────┐
│ Deployment │   │  Service   │   │ ConfigMap  │   │    PVC     │
│            │   │  - gRPC    │   │  - config  │   │  - models  │
│            │   │  - API     │   │  - tools   │   │            │
│            │   │  - metrics │   │            │   │            │
└────────────┘   └────────────┘   └────────────┘   └────────────┘
Managed Resources:
- Deployment: Runs semantic router pods with configurable replicas
- Service: Exposes gRPC (50051), HTTP API (8080), and metrics (9190)
- ConfigMap: Contains semantic router configuration and tools database
- ServiceAccount: For RBAC (optional, created when specified)
- PersistentVolumeClaim: For ML model storage (optional, when persistence enabled)
- HorizontalPodAutoscaler: For auto-scaling (optional, when autoscaling enabled)
- Ingress: For external access (optional, when ingress enabled)
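Because every managed object carries the app.kubernetes.io/instance label used earlier, the optional resources can be listed the same way; empty output simply means the corresponding feature is disabled in the CR:
# Optional resources only exist when the matching spec section is enabled
kubectl get pvc,hpa,ingress -l app.kubernetes.io/instance=my-router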
Platform Detection and Security
The operator automatically detects the platform and configures security contexts appropriately.
OpenShift Platform
When running on OpenShift, the operator:
- Detects: Checks for `route.openshift.io` API resources; you can run the same check yourself (see below)
- Security Context: Does NOT set `runAsUser`, `runAsGroup`, or `fsGroup`
- Rationale: Lets OpenShift SCCs assign UIDs/GIDs from the namespace's allowed range
- Compatible with: `restricted` SCC (default) and custom SCCs
- Log Message: "Detected OpenShift platform - will use OpenShift-compatible security contexts"
Standard Kubernetes
When running on standard Kubernetes, the operator:
- Security Context: Sets `runAsUser: 1000`, `fsGroup: 1000`, and `runAsNonRoot: true` (see the check below)
- Rationale: Provides secure defaults for pod security policies/standards
- Log Message: "Detected standard Kubernetes platform - will use standard security contexts"
Both Platforms
Regardless of platform, the operator:
- Drops ALL capabilities (`drop: [ALL]`); see the check below
- Prevents privilege escalation (`allowPrivilegeEscalation: false`)
- Requires no special permissions or SCCs beyond the defaults
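The container-level hardening can be verified the same way on either platform:
# Expect allowPrivilegeEscalation: false and capabilities.drop: ["ALL"] on both platforms
kubectl get pod -l app.kubernetes.io/instance=my-router \
  -o jsonpath='{.items[0].spec.containers[0].securityContext}'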
Override Security Context
You can override automatic security contexts in your CR:
spec:
  # Container security context
  securityContext:
    runAsNonRoot: true
    runAsUser: 2000
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
  # Pod security context
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 2000
    fsGroup: 2000
When running on OpenShift, it's recommended to omit runAsUser and fsGroup and let SCCs handle UID/GID assignment automatically.
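For example, a minimal SCC-friendly override might keep the hardening flags while leaving UID/GID assignment to the platform; this is a sketch rather than a required configuration:
spec:
  securityContext:
    runAsNonRoot: true
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
  podSecurityContext:
    runAsNonRoot: true
    # runAsUser and fsGroup intentionally omitted so the SCC assigns them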