Part 7: Kubernetes Deployment
Learning Objective: Deploy Open5GS on Kubernetes using Helm charts for cloud-native telecom infrastructure.
Why Kubernetes for Telecom?
| Benefit | Description |
|---|---|
| Scalability | Auto-scale NFs based on load (e.g., scale UPF at edge) |
| High Availability | Automatic failover and pod restart on crash |
| Multi-tenancy | Isolate network slices via namespaces |
| CI/CD | GitOps deployment workflows (ArgoCD, Flux) |
| Edge Computing | Deploy UPF at edge locations closer to users |
| Observability | Built-in Prometheus metrics, Grafana dashboards |
Prerequisites
Tools Required
# Install kubectl
brew install kubectl
# Install Helm
brew install helm
# Install kind (Kubernetes in Docker) — lightweight local clusters
brew install kind
# Verify installations
kubectl version --client
helm version
kind version
Cluster Setup with kind
# Create a multi-node kind cluster
cat > kind-config.yaml << 'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 30000
hostPort: 30000
protocol: TCP
- containerPort: 31999
hostPort: 31999
protocol: TCP
- role: worker
labels:
node-role: ran
- role: worker
labels:
node-role: core
EOF
kind create cluster --name open5gs --config kind-config.yaml
# Verify
kubectl get nodes
Expected:
NAME STATUS ROLES AGE VERSION
open5gs-control-plane Ready control-plane 1m v1.29.2
open5gs-worker Ready <none> 1m v1.29.2
open5gs-worker2 Ready <none> 1m v1.29.2
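The two `extraPortMappings` above forward the AMF's NGAP NodePort (30000) and the WebUI NodePort (31999) from the kind node to your host. A quick sanity check (a sketch, not part of the kind config) is that both fall inside Kubernetes' default NodePort range:

```python
# Kubernetes' default NodePort range is 30000-32767
# (configurable via the API server's --service-node-port-range flag).
DEFAULT_NODEPORT_RANGE = range(30000, 32768)

for port in (30000, 31999):  # NGAP and WebUI NodePorts from kind-config.yaml
    assert port in DEFAULT_NODEPORT_RANGE, f"{port} outside NodePort range"
print("port mappings valid")
```

Ports outside this range would be accepted by kind's port mapping but rejected when the Service is created.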
Architecture: Docker vs Kubernetes
graph LR
subgraph "Docker Compose (Part 4)"
D1[Single Host]
D2[Static IPs]
D3[Manual restart]
D4[docker-compose.yml]
end
subgraph "Kubernetes (Part 7)"
K1[Multi-node cluster]
K2[Service DNS]
K3[Auto-healing]
K4[Helm charts + K8s manifests]
end
D1 -->|Migration| K1
D2 -->|Migration| K2
D3 -->|Migration| K3
D4 -->|Migration| K4

| Feature | Docker Compose | Kubernetes |
|---|---|---|
| Service discovery | Static IPs | DNS (nrf.open5gs.svc.cluster.local) |
| Config management | Volume mounts | ConfigMaps / Secrets |
| Scaling | Manual | kubectl scale / HPA |
| Networking | Docker bridge | CNI plugins (Calico, Multus for multi-homing) |
| Restart policy | restart: always | restartPolicy + liveness probes |
| Secrets | Environment vars | K8s Secrets (encrypted at rest) |
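The service-discovery row is the biggest mental shift: instead of hard-coding container IPs, every NF is reachable at a DNS name derived from its Service and namespace. A tiny sketch of the naming scheme:

```python
def svc_dns(service: str, namespace: str) -> str:
    """Build the fully qualified in-cluster DNS name for a Kubernetes Service."""
    return f"{service}.{namespace}.svc.cluster.local"

# Each NF reaches the NRF by name instead of a static IP:
print(svc_dns("nrf", "open5gs"))  # nrf.open5gs.svc.cluster.local
```

This is why the same Open5GS config works regardless of which node (or IP) a pod lands on.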
Deployment Strategy
Option A: Helm Chart (Recommended)
The community Helm chart from gradiant/openverso-charts provides a tested deployment:
# Add Helm repository
helm repo add openverso https://gradiant.github.io/openverso-charts/
helm repo update
# Search for available charts
helm search repo openverso/open5gs
Option B: Manual Manifests
For learning purposes, you can create your own K8s manifests. We'll show both approaches.
Helm Deployment
Step 1: Create Namespace
kubectl create namespace open5gs
kubectl label namespace open5gs app=open5gs
Step 2: values.yaml Configuration
Create values.yaml with your lab settings:
# Open5GS Helm Chart values
global:
image:
registry: docker.io
tag: "v2.7.6"
plmn:
mcc: "001"
mnc: "01"
sbi:
port: 7777
mongodb:
uri: "mongodb://mongodb:27017/open5gs"
# MongoDB
mongodb:
enabled: true
persistence:
enabled: true
size: 5Gi
# WebUI
webui:
enabled: true
service:
# NodePort so we can access from host
type: NodePort
nodePort: 31999
# NRF — Service registry
nrf:
enabled: true
replicaCount: 1
config:
sbi:
server:
- address: 0.0.0.0
port: 7777
# AMF — Access and Mobility
amf:
enabled: true
replicaCount: 1
config:
ngap:
server:
- address: 0.0.0.0
guami:
- plmn_id:
mcc: "001"
mnc: "01"
amf_id:
region: 2
set: 1
tai:
- plmn_id:
mcc: "001"
mnc: "01"
tac: 1
plmn_support:
- plmn_id:
mcc: "001"
mnc: "01"
s_nssai:
- sst: 1
security:
integrity_order: [NIA2, NIA1, NIA0]
ciphering_order: [NEA2, NEA1, NEA0]
service:
ngap:
type: NodePort
nodePort: 30000
protocol: SCTP
# SMF — Session Management
smf:
enabled: true
replicaCount: 1
config:
subnet:
- addr: 10.45.0.1/16
dnn: internet
dns:
- 8.8.8.8
# UPF — User Plane
upf:
enabled: true
replicaCount: 1
config:
gtpu:
server:
- address: 0.0.0.0
subnet:
- addr: 10.45.0.1/16
dnn: internet
securityContext:
privileged: true
capabilities:
add: ["NET_ADMIN"]
# Supporting NFs
ausf:
enabled: true
udm:
enabled: true
udr:
enabled: true
pcf:
enabled: true
bsf:
enabled: true
nssf:
enabled: true
scp:
enabled: true
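Before installing, it's worth knowing how many UE addresses the `subnet: 10.45.0.1/16` pool actually provides. Python's stdlib `ipaddress` module can derive this (a sketch for sizing, not part of the chart):

```python
import ipaddress

# The values.yaml pool is written gateway-style (10.45.0.1/16);
# strict=False lets ipaddress derive the enclosing network.
ue_pool = ipaddress.ip_network("10.45.0.1/16", strict=False)

print(ue_pool)                # 10.45.0.0/16
print(ue_pool.num_addresses)  # 65536 addresses available for UE allocation
```

A /16 is far more than a lab needs, but in production the pool size caps how many PDU sessions the SMF can serve per DNN.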
Step 3: Install
helm install open5gs openverso/open5gs \
--namespace open5gs \
--values values.yaml \
--wait --timeout 5m
# Check deployment status
kubectl get pods -n open5gs -w
Step 4: Verify All Pods Running
kubectl get pods -n open5gs
Expected:
NAME READY STATUS RESTARTS AGE
open5gs-amf-0 1/1 Running 0 2m
open5gs-ausf-0 1/1 Running 0 2m
open5gs-bsf-0 1/1 Running 0 2m
open5gs-mongodb-0 1/1 Running 0 2m
open5gs-nrf-0 1/1 Running 0 2m
open5gs-nssf-0 1/1 Running 0 2m
open5gs-pcf-0 1/1 Running 0 2m
open5gs-scp-0 1/1 Running 0 2m
open5gs-smf-0 1/1 Running 0 2m
open5gs-udm-0 1/1 Running 0 2m
open5gs-udr-0 1/1 Running 0 2m
open5gs-upf-0 1/1 Running 0 2m
open5gs-webui-0 1/1 Running 0 2m
Step 5: Access WebUI
# Port-forward (alternative to NodePort)
kubectl port-forward -n open5gs svc/open5gs-webui 9999:9999
# Open http://localhost:9999
# Login: admin / 1423
# Register subscribers same as Part 4
Manual Manifests (Learning Path)
If you prefer to learn by building manifests yourself, here's the pattern:
Namespace
apiVersion: v1
kind: Namespace
metadata:
name: open5gs
labels:
app: open5gs
ConfigMap (AMF Example)
apiVersion: v1
kind: ConfigMap
metadata:
name: amf-config
namespace: open5gs
data:
amf.yaml: |
logger:
level: info
amf:
sbi:
server:
- address: 0.0.0.0
port: 7777
client:
nrf:
- uri: http://nrf.open5gs.svc.cluster.local:7777
scp:
- uri: http://scp.open5gs.svc.cluster.local:7777
ngap:
server:
- address: 0.0.0.0
guami:
- plmn_id:
mcc: "001"
mnc: "01"
amf_id:
region: 2
set: 1
tai:
- plmn_id:
mcc: "001"
mnc: "01"
tac: 1
plmn_support:
- plmn_id:
mcc: "001"
mnc: "01"
s_nssai:
- sst: 1
security:
integrity_order: [NIA2, NIA1, NIA0]
ciphering_order: [NEA2, NEA1, NEA0]
Tip
Key K8s Difference: Notice the NRF URI uses DNS (nrf.open5gs.svc.cluster.local) instead of static IPs. Kubernetes Services provide stable DNS names for each NF.
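The `guami` block in the ConfigMap packs into a 24-bit AMF Identifier on the wire: AMF Region ID (8 bits), AMF Set ID (10 bits), AMF Pointer (6 bits), per 3GPP TS 23.003. A sketch of that bit layout (the helper name is mine, not Open5GS code):

```python
def amf_id(region: int, amf_set: int, pointer: int = 0) -> int:
    """Pack the 24-bit AMF Identifier: Region (8b) | Set (10b) | Pointer (6b),
    per the GUAMI structure in 3GPP TS 23.003."""
    assert 0 <= region < 256 and 0 <= amf_set < 1024 and 0 <= pointer < 64
    return (region << 16) | (amf_set << 6) | pointer

# region: 2, set: 1 from the ConfigMap above
print(f"{amf_id(2, 1):06x}")  # 020040
```

The GUAMI a UE sees is this 24-bit value appended to the PLMN ID, which is why region/set must match between config and any multi-AMF planning.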
Deployment + Service (AMF Example)
apiVersion: apps/v1
kind: Deployment
metadata:
name: amf
namespace: open5gs
spec:
replicas: 1
selector:
matchLabels:
app: amf
template:
metadata:
labels:
app: amf
spec:
containers:
- name: amf
image: borieher/open5gs-amf:v2.7.6
ports:
- containerPort: 7777 # SBI
protocol: TCP
- containerPort: 38412 # NGAP
protocol: SCTP
volumeMounts:
- name: config
mountPath: /etc/open5gs/amf.yaml
subPath: amf.yaml
readinessProbe:
tcpSocket:
port: 7777
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 7777
initialDelaySeconds: 15
periodSeconds: 20
volumes:
- name: config
configMap:
name: amf-config
---
apiVersion: v1
kind: Service
metadata:
name: amf
namespace: open5gs
spec:
selector:
app: amf
ports:
- name: sbi
port: 7777
protocol: TCP
- name: ngap
port: 38412
protocol: SCTP
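The liveness probe above determines how quickly Kubernetes notices a hung AMF. With the kubelet's default `failureThreshold` of 3, a rough upper bound (ignoring probe timeouts) is one full period before the first failed check plus three consecutive failures:

```python
def worst_case_detection(period: int, failure_threshold: int = 3) -> int:
    """Rough upper bound, in seconds, from the container hanging to the kubelet
    restarting it: up to one period before the first failed probe, then
    failure_threshold consecutive failures (kubelet default: 3)."""
    return period * (failure_threshold + 1)

# livenessProbe above: periodSeconds=20, default failureThreshold=3
print(worst_case_detection(20))  # 80 seconds
```

If ~80 seconds of AMF downtime is too long for your lab, lower `periodSeconds` or set `failureThreshold` explicitly.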
Debugging Guide
Pod Not Starting?
# Check pod events (most common: image pull errors, config errors)
kubectl describe pod -n open5gs <pod-name>
# Check container logs
kubectl logs -n open5gs <pod-name> --previous # if pod is in CrashLoopBackOff
# Common issues:
# 1. ImagePullBackOff → wrong image tag, no internet
# 2. CrashLoopBackOff → config error (check YAML syntax)
# 3. Pending → insufficient resources or node affinity issues
NFs Can't Find NRF?
# Verify NRF service exists
kubectl get svc -n open5gs nrf
# Test DNS resolution from another pod
kubectl exec -n open5gs deploy/amf -- \
nslookup nrf.open5gs.svc.cluster.local
# Check NRF logs
kubectl logs -n open5gs deploy/nrf | grep -i "error\|listen"
SCTP Issues (AMF ↔ gNB)
# SCTP is required for NGAP but not always supported by the default CNI
# Check if SCTP module is loaded
kubectl exec -n open5gs deploy/amf -- cat /proc/net/sctp/eps
# If SCTP is blocked, you may need to:
# 1. Use Multus for a secondary network with SCTP support
# 2. Or use a CNI that supports SCTP (Calico, Cilium)
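You can also test SCTP availability from any pod (or your host) without kubectl. A minimal Python sketch that tries to open an SCTP socket:

```python
import socket

def sctp_available() -> bool:
    """Probe whether this kernel will open an SCTP socket (NGAP requires SCTP)."""
    if not hasattr(socket, "IPPROTO_SCTP"):
        return False  # platform's socket module doesn't expose SCTP at all
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
        s.close()
        return True
    except OSError:
        return False  # constant exists but the kernel module isn't loaded

print("SCTP sockets:", "available" if sctp_available() else "unavailable")
```

Note this only checks the local kernel; the CNI can still drop SCTP between pods, so an end-to-end NGAP test against the AMF is the real confirmation.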
Network Policy for Isolation
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-sbi-only
namespace: open5gs
spec:
podSelector:
matchLabels:
app: nrf
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
app: open5gs
ports:
- protocol: TCP
port: 7777
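The policy's effect can be modeled as a simple predicate (a sketch of the matching logic, not how the CNI actually evaluates it): traffic reaches NRF only from namespaces labelled `app=open5gs`, and only on TCP/7777.

```python
def sbi_allowed(src_ns_labels: dict, port: int, protocol: str = "TCP") -> bool:
    """Model the allow-sbi-only policy: ingress to NRF pods is permitted only
    from namespaces labelled app=open5gs, and only on TCP port 7777."""
    return (src_ns_labels.get("app") == "open5gs"
            and port == 7777
            and protocol == "TCP")

print(sbi_allowed({"app": "open5gs"}, 7777))  # True  — another NF's SBI call
print(sbi_allowed({}, 7777))                  # False — namespace lacks the label
print(sbi_allowed({"app": "open5gs"}, 9090))  # False — wrong port
```

Remember that NetworkPolicies are only enforced if your CNI supports them; kind's default CNI (kindnetd) does not, so you'd need Calico or Cilium to see this take effect.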
Production Considerations
Multus CNI (Multi-Homing for UPF)
In production, UPF needs multiple network interfaces:
graph LR
subgraph "UPF Pod"
eth0[eth0
SBI/PFCP
Management]
n3[net1
N3 GTP-U
RAN data]
n6[net2
N6
Internet]
end
eth0 --> SBI[SBI Network]
n3 --> RAN[RAN Network]
n6 --> Internet[Internet/DN]
style eth0 fill:#ff9
style n3 fill:#9cf
style n6 fill:#9f9

# NetworkAttachmentDefinition for Multus
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: n3-network
namespace: open5gs
spec:
config: |
{
"cniVersion": "0.3.1",
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "static",
"addresses": [{"address": "10.100.50.233/24"}]
}
}
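Because `spec.config` is a JSON string embedded in YAML, quoting mistakes are easy to make and only surface when the pod attaches. A quick pre-apply sanity check (a sketch; the config string mirrors the manifest above):

```python
import json

# The NetworkAttachmentDefinition's spec.config, copied verbatim.
cni_config = """
{
  "cniVersion": "0.3.1",
  "type": "macvlan",
  "master": "eth1",
  "mode": "bridge",
  "ipam": {
    "type": "static",
    "addresses": [{"address": "10.100.50.233/24"}]
  }
}
"""

parsed = json.loads(cni_config)  # raises ValueError on malformed JSON
assert parsed["type"] == "macvlan"
print(parsed["ipam"]["addresses"][0]["address"])  # 10.100.50.233/24
```

Running this before `kubectl apply` catches the most common failure mode (invalid JSON) without touching the cluster.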
Resource Limits
# Always set resource limits in production
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
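Requests are what the scheduler reserves, so they drive cluster sizing. A back-of-the-envelope sketch assuming all 13 pods from the Step 4 listing get the requests above:

```python
# Rough cluster sizing: 13 pods (the Step 4 listing) at the requests above.
pods = 13
cpu_request_m = 100   # millicores per pod
mem_request_mi = 128  # MiB per pod

print(f"CPU requests:    {pods * cpu_request_m} m")    # 1300 m (~1.3 cores)
print(f"Memory requests: {pods * mem_request_mi} Mi")  # 1664 Mi
```

In practice the UPF and MongoDB warrant larger requests than the control-plane NFs, so treat uniform values as a floor, not a recommendation.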
Horizontal Pod Autoscaling
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: amf-hpa
namespace: open5gs
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: amf
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
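The HPA controller computes the target replica count as `ceil(currentReplicas * currentMetricValue / desiredMetricValue)` (the formula documented for autoscaling/v2). A sketch of what the `amf-hpa` above would do:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float) -> int:
    """HPA scaling formula: ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current * current_util / target_util)

# amf-hpa targets 70% average CPU; at 90% utilization across 2 replicas:
print(desired_replicas(2, 90, 70))  # 3
```

Note that scaling the AMF statelessly assumes gNBs can re-establish NGAP to any replica; real deployments pair this with an SCTP-aware load-balancing strategy.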
Cleanup
# Remove Helm release
helm uninstall open5gs -n open5gs
# Delete namespace
kubectl delete namespace open5gs
# Delete kind cluster
kind delete cluster --name open5gs
🔬 Exercises
- Scaling: Scale the AMF to 2 replicas with kubectl scale deployment amf --replicas=2 -n open5gs. What happens? Does NRF register both?
- Chaos Engineering: Delete a random pod with kubectl delete pod <pod-name> -n open5gs. How quickly does K8s restart it?
- Network Policy: Apply the NetworkPolicy above. Verify that pods outside the open5gs namespace cannot reach NRF's SBI port.
- Config Update: Change the PLMN to 999/99 in the AMF ConfigMap and restart the pod. What error does the UE see?
Summary
- ✅ Set up a local Kubernetes cluster with kind
- ✅ Deployed Open5GS via Helm chart with custom values.yaml
- ✅ Understood K8s concepts: ConfigMaps, Services, DNS, Probes, NetworkPolicies
- ✅ Learned debugging workflows for common K8s + telecom issues
- ✅ Explored production patterns: Multus, HPA, resource limits
Next: Part 8: 4G Threat Model →