TP-09_container_k8s
TP-09 — Container & Kubernetes Security Tests
Domain: Infrastructure Security (Docker / Kubernetes)
Standards: 3GPP TS 33.117 §4.3-4.4 · CIS Kubernetes Benchmark · GSMA FS.40 v3.0 §6
Prerequisites: TP-00 Step 11 complete (kind cluster running); Open5GS Helm charts deployed
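Before running any test case, a quick preflight can confirm the prerequisites hold. A minimal sketch, assuming a bash environment; the `need_ns` helper is illustrative, not part of the TP-00 tooling:

```shell
#!/usr/bin/env bash
# Hypothetical preflight helper: given the output of `kubectl get ns -o name`,
# confirm the open5gs namespace exists before running any TC below.
need_ns() {
  ns_list="$1"; want="$2"
  if echo "$ns_list" | grep -qx "namespace/$want"; then
    echo "OK: $want present"
  else
    echo "ABORT: namespace $want missing - rerun TP-00 Step 11"
  fi
}

# Only probe the live cluster if kubectl is available
command -v kubectl >/dev/null && need_ns "$(kubectl get ns -o name)" open5gs
```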
TC-K8S-01: NetworkPolicy Enforcement — NF Cross-Namespace Isolation
Threat Model
graph TD
subgraph NS_DEFAULT["Namespace: default\n(untrusted workloads)"]
NETSHOOT["netshoot debug pod\n(attacker foothold)"]
ROGUE["Rogue workload\n(compromised app)"]
end
subgraph NS_OPEN5GS["Namespace: open5gs\n(protected)"]
AMF_POD["amf pod\n:7777 SBI"]
UDM_POD["udm pod\n:7777 SBI"]
MONGO_POD["mongodb pod\n:27017"]
end
subgraph NETPOL["NetworkPolicy Rules"]
DENY_ALL["Default: deny-all ingress\n(applied to open5gs ns)"]
ALLOW_INTRA["Allow: intra-namespace\nopen5gs → open5gs"]
end
NETSHOOT -->|curl http://amf.open5gs.svc:7777| POLICY_CHECK{"NetworkPolicy\nallows this?"}
ROGUE -->|mongosh mongodb://mongodb.open5gs:27017| POLICY_CHECK
POLICY_CHECK -->|No policy = open| VULN["❌ CRITICAL:\nAll namespaces can reach\nopen5gs NFs and MongoDB"]
POLICY_CHECK -->|deny-all applied| BLOCKED["✅ Connection dropped\nDefault deny enforced"]
AMF_POD -->|intra-ns OK| UDM_POD
AMF_POD -->|intra-ns OK| MONGO_POD
style VULN fill:#c0392b,color:#fff
style BLOCKED fill:#27ae60,color:#fff
style DENY_ALL fill:#16213e,color:#e0e0e0
Objective
Verify Kubernetes NetworkPolicies prevent cross-namespace access to 5GC NF pods.
Steps
# 1. Verify kind cluster is running
kubectl get nodes -o wide
kubectl get ns
# 2. Apply default-deny NetworkPolicy to open5gs namespace
#    Note: kind's default CNI (kindnet) does NOT enforce NetworkPolicy;
#    a policy-capable CNI (e.g. Calico or Cilium) must be installed.
cat << 'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: open5gs
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace
  namespace: open5gs
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress
EOF
# 3. Test: launch attacker pod in 'default' namespace
kubectl run netshoot --rm -it --restart=Never \
--image=nicolaka/netshoot \
--namespace=default \
-- sh -c "
echo '=== Test: Can default namespace reach open5gs AMF? ==='
curl -s --connect-timeout 3 http://amf.open5gs.svc.cluster.local:7777 \
&& echo 'REACHABLE - FINDING!' \
|| echo 'BLOCKED - NetworkPolicy working'
echo '=== Test: Can default namespace reach MongoDB? ==='
nc -zv -w3 mongodb.open5gs.svc.cluster.local 27017 \
&& echo 'REACHABLE - CRITICAL FINDING!' \
|| echo 'BLOCKED - NetworkPolicy working'
"
# 4. Verify intra-namespace traffic still works
kubectl run intra-test --rm -it --restart=Never \
--image=curlimages/curl \
--namespace=open5gs \
-- curl -s --connect-timeout 3 http://nrf:7777/nnrf-nfm/v1/nf-instances
# Should succeed (same namespace)
# 5. List all NetworkPolicies
kubectl get networkpolicies -n open5gs
kubectl describe networkpolicies -n open5gs
Expected Results
- `default` namespace pod CANNOT reach `open5gs` NF pods
- MongoDB unreachable from `default` namespace
- Intra-namespace (NF-to-NF) traffic unaffected
- `kubectl get networkpolicies -n open5gs` shows `default-deny-ingress` and `allow-intra-namespace`
Pass Criteria
Zero successful cross-namespace connections. Intra-namespace traffic functional.
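The probes from steps 3 and 4 can be folded into a single machine-readable verdict. A sketch; the `netpol_verdict` helper and its REACHABLE/BLOCKED/OK tokens are illustrative conventions, not existing tooling:

```shell
#!/usr/bin/env bash
# Hypothetical verdict helper: combine the step-3 cross-namespace probe results
# and the step-4 intra-namespace probe result into one PASS/FAIL line.
#   $1 AMF probe from default ns    : REACHABLE | BLOCKED
#   $2 MongoDB probe from default ns: REACHABLE | BLOCKED
#   $3 intra-namespace NRF probe    : OK | FAIL
netpol_verdict() {
  amf_xns="$1"; mongo_xns="$2"; intra="$3"
  if [ "$intra" != "OK" ]; then
    echo "FAIL: policy too strict, intra-namespace traffic broken"
  elif [ "$amf_xns" = "BLOCKED" ] && [ "$mongo_xns" = "BLOCKED" ]; then
    echo "PASS"
  else
    echo "FAIL: cross-namespace traffic not blocked"
  fi
}

netpol_verdict BLOCKED BLOCKED OK      # -> PASS
netpol_verdict REACHABLE BLOCKED OK    # -> FAIL: cross-namespace traffic not blocked
```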
TC-K8S-02: RBAC — Least Privilege Verification
Threat Model
graph TD
subgraph ATTACK["Attack: Privilege Escalation via RBAC"]
ATK["Attacker compromises\nAMF pod"]
ATK_SA["AMF service account\n(amf-sa)"]
ATK -->|exec into pod| ATK_SA
end
subgraph RBAC_CHECK["RBAC Permission Check"]
ATK_SA -->|kubectl get secrets --all-namespaces| CHECK1{"SA has\ncluster-admin?"}
ATK_SA -->|kubectl get nodes| CHECK2{"SA has\ncluster-scope?"}
ATK_SA -->|kubectl delete pods -n open5gs| CHECK3{"SA has\ndelete pods?"}
end
CHECK1 -->|Yes| FULL_CLUSTER["❌ CRITICAL:\nFull cluster takeover\nAll workloads at risk"]
CHECK1 -->|No| LIMITED["✅ Cluster-admin\nnot granted"]
CHECK2 -->|Yes| NODE_INFO["⚠️ Node enumeration\nAttacker maps cluster"]
CHECK2 -->|No| NODE_BLOCKED["✅ Node access\nblocked"]
CHECK3 -->|Yes| POD_DEL["⚠️ Can delete own-namespace pods\n(acceptable: own pod management)"]
CHECK3 -->|No| POD_BLOCKED["✅ Cannot disrupt\nother NF pods"]
subgraph CORRECT["Correct RBAC Design"]
C1["AMF SA: only needs\nConfigMap read (own config)\n+ own pod status"]
C2["No cluster-wide roles\nNo secrets read\nNo cross-namespace access"]
end
style FULL_CLUSTER fill:#c0392b,color:#fff
style LIMITED fill:#27ae60,color:#fff
style NODE_BLOCKED fill:#27ae60,color:#fff
style POD_BLOCKED fill:#27ae60,color:#fff
Objective
Verify no NF service account has cluster-admin or excessive permissions; enforce least privilege.
Steps
# 1. Check for cluster-admin bindings involving NF service accounts
echo "=== Cluster-Admin Bindings ==="
kubectl get clusterrolebinding -o json | \
jq -r '.items[] | select(.roleRef.name=="cluster-admin") |
"CRB: " + .metadata.name +
" -> " + (.subjects // [] | map(.name) | join(", "))'
# 2. List all service accounts in open5gs namespace
echo "=== Service Accounts in open5gs ==="
kubectl get serviceaccounts -n open5gs
# 3. Check what each SA can do (auth can-i)
for SA in $(kubectl get sa -n open5gs -o name | cut -d/ -f2); do
echo ""
echo "=== Service Account: ${SA} ==="
# Check dangerous permissions
kubectl auth can-i get secrets --namespace=default \
--as=system:serviceaccount:open5gs:${SA} 2>/dev/null && \
echo " ⚠️ Can read secrets in 'default' namespace"
kubectl auth can-i list nodes \
--as=system:serviceaccount:open5gs:${SA} 2>/dev/null && \
echo " ⚠️ Can list cluster nodes"
kubectl auth can-i delete pods --namespace=kube-system \
--as=system:serviceaccount:open5gs:${SA} 2>/dev/null && \
echo " ❌ CRITICAL: Can delete kube-system pods!"
kubectl auth can-i get configmaps --namespace=open5gs \
--as=system:serviceaccount:open5gs:${SA} 2>/dev/null && \
echo " ✅ Can read own ConfigMaps (expected)"
done
# 4. Check roles and rolebindings
echo ""
echo "=== Roles in open5gs ==="
kubectl get roles,rolebindings -n open5gs -o yaml | \
grep -A10 "rules:\|roleRef:"
# 5. Check for wildcard permissions
kubectl get roles -n open5gs -o json | \
  jq -r '.items[] | select([.rules[]?.resources[]?] | index("*")) | .metadata.name'
# Should return nothing — no wildcard resource permissions
# (the `?` guards skip rules without a resources list, e.g. nonResourceURLs)
Expected Results
- No NF service account has a `cluster-admin` ClusterRoleBinding
- No SA can read secrets outside the `open5gs` namespace
- No SA can list/modify cluster nodes
- Each SA limited to `open5gs` namespace operations
- No wildcard (`*`) resource permissions
Pass Criteria
Zero cluster-admin bindings for NF SAs. All SAs scoped to namespace. No wildcards.
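The three `kubectl auth can-i` answers from step 3 can be triaged into one severity per service account for the report. A sketch; `rbac_severity` is a hypothetical helper operating on the yes/no strings that `can-i` prints:

```shell
#!/usr/bin/env bash
# Hypothetical triage helper: fold the three step-3 permission answers
# (yes/no) into one severity rating per service account.
#   $1 can read secrets in another namespace
#   $2 can list cluster nodes
#   $3 can delete pods in kube-system
rbac_severity() {
  secrets_xns="$1"; list_nodes="$2"; delete_ks="$3"
  if [ "$delete_ks" = "yes" ]; then
    echo "CRITICAL"
  elif [ "$secrets_xns" = "yes" ] || [ "$list_nodes" = "yes" ]; then
    echo "HIGH"
  else
    echo "PASS"
  fi
}

rbac_severity no no no     # -> PASS
rbac_severity yes no no    # -> HIGH
```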
TC-K8S-03: Container Escape Prevention — Security Context Hardening
Threat Model
graph TD
subgraph CONTAINER["AMF Container (open5gs-amf)"]
PROC["AMF process\nUID: ?"]
FS["Root filesystem\nread-only?"]
CAP["Linux Capabilities\nNET_ADMIN needed\nSYS_ADMIN dangerous"]
SEC["Seccomp Profile\nAppArmor?"]
end
subgraph ESCAPE_PATH["Container Escape Paths"]
E1["UID=0 (root)\n→ Exploit kernel vulns\n→ Host root access"]
E2["Writable /\n→ Modify binaries\n→ Persistence"]
E3["SYS_ADMIN cap\n→ Mount host filesystem\n→ Bypass all controls"]
E4["No seccomp\n→ Dangerous syscalls\n→ Namespace escape"]
end
PROC -->|Running as root| E1
FS -->|Writable| E2
CAP -->|SYS_ADMIN granted| E3
SEC -->|No profile| E4
subgraph HARDENING["Security Context Hardening"]
H1["runAsNonRoot: true\nrunAsUser: 1000"]
H2["readOnlyRootFilesystem: true"]
H3["capabilities:\n drop: [ALL]\n add: [NET_ADMIN]"]
H4["seccompProfile:\n type: RuntimeDefault"]
end
H1 -.->|Prevents| E1
H2 -.->|Prevents| E2
H3 -.->|Prevents| E3
H4 -.->|Prevents| E4
style E1 fill:#c0392b,color:#fff
style E2 fill:#c0392b,color:#fff
style E3 fill:#c0392b,color:#fff
style E4 fill:#c0392b,color:#fff
style H1 fill:#27ae60,color:#fff
style H2 fill:#27ae60,color:#fff
style H3 fill:#27ae60,color:#fff
style H4 fill:#27ae60,color:#fff
Objective
Verify NF containers run as non-root with read-only filesystem, minimal capabilities, and seccomp profiles.
Steps
# 1. Check security context for all NF pods
echo "=== Container Security Contexts ==="
kubectl get pods -n open5gs -o json | jq -r '
  .items[] |
  .metadata.name as $pod |
  .spec.containers[] |
  {
    pod: $pod,
    container: .name,
    runAsNonRoot: (.securityContext.runAsNonRoot // "NOT SET"),
    runAsUser: (.securityContext.runAsUser // "NOT SET"),
    readOnlyFS: (.securityContext.readOnlyRootFilesystem // "NOT SET"),
    capabilities: (.securityContext.capabilities // "NOT SET"),
    seccomp: (.securityContext.seccompProfile.type // "NOT SET")
  }
'
# 2. Test: attempt to write to root filesystem
echo ""
echo "=== Test: Write to root filesystem ==="
for POD in $(kubectl get pods -n open5gs -o name | head -3 | cut -d/ -f2); do
RESULT=$(kubectl exec -n open5gs ${POD} -- sh -c "touch /test-write 2>&1" 2>&1)
echo "${POD}: ${RESULT}"
# Expected: "Read-only file system" or "Permission denied"
done
# 3. Test: Check running UID
echo ""
echo "=== Test: Container user IDs ==="
for POD in $(kubectl get pods -n open5gs -o name | head -5 | cut -d/ -f2); do
  # Note: UID is a read-only builtin in bash, so use a different variable name
  POD_UID=$(kubectl exec -n open5gs ${POD} -- id -u 2>/dev/null)
  echo "${POD}: UID=${POD_UID} $([ "${POD_UID}" = "0" ] && echo '❌ ROOT' || echo '✅ non-root')"
done
# 4. Check capabilities
echo ""
echo "=== Test: Container capabilities ==="
for POD in $(kubectl get pods -n open5gs -o name | head -3 | cut -d/ -f2); do
kubectl exec -n open5gs ${POD} -- cat /proc/self/status 2>/dev/null | \
grep "Cap" | head -3
echo "---"
done
# 5. Apply hardening if missing (example for AMF deployment)
kubectl patch deployment amf -n open5gs --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/securityContext",
    "value": {
      "runAsNonRoot": true,
      "runAsUser": 1000,
      "readOnlyRootFilesystem": true,
      "allowPrivilegeEscalation": false,
      "capabilities": {
        "drop": ["ALL"],
        "add": ["NET_ADMIN"]
      },
      "seccompProfile": {"type": "RuntimeDefault"}
    }
  }
]' 2>/dev/null || echo "Patch requires existing deployment — adjust path as needed"
Expected Results
- All NF containers: `runAsNonRoot: true`, UID ≠ 0
- `readOnlyRootFilesystem: true` → write to `/` fails
- Capabilities: `drop: ALL`, `add: NET_ADMIN` only (UPF may need NET_ADMIN for TUN)
- `seccompProfile: RuntimeDefault` applied
- No `SYS_ADMIN`, `SYS_PTRACE`, or `DAC_OVERRIDE` capabilities
Pass Criteria
Zero root-running containers. Read-only filesystem enforced. Only NET_ADMIN capability present (UPF/gNB). Seccomp default profile applied.
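The hex `CapEff` mask printed in step 4 can be decoded mechanically to flag the audited capabilities. A sketch (bash-specific, for the `16#` base literal); `check_capmask` is a hypothetical helper, with bit numbers taken from linux/capability.h:

```shell
#!/usr/bin/env bash
# Sketch: decode a hex CapEff mask (from /proc/self/status, step 4) and print
# which of the audited capabilities are present.
# Bit numbers per linux/capability.h:
#   DAC_OVERRIDE=1, NET_ADMIN=12, SYS_PTRACE=19, SYS_ADMIN=21
check_capmask() {
  mask=$((16#$1))   # bash base-16 arithmetic literal
  for pair in DAC_OVERRIDE:1 NET_ADMIN:12 SYS_PTRACE:19 SYS_ADMIN:21; do
    name=${pair%%:*}; bit=${pair##*:}
    [ $(( (mask >> bit) & 1 )) -eq 1 ] && echo "$name"
  done
}

# Docker's historical default set keeps DAC_OVERRIDE but none of the others:
check_capmask 00000000a80425fb   # -> DAC_OVERRIDE
# drop:[ALL] add:[NET_ADMIN] leaves only bit 12:
check_capmask 0000000000001000   # -> NET_ADMIN
```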
TC-K8S-04: etcd Access Control
Threat Model
graph TD
subgraph ETCD["etcd (Kubernetes data store)"]
DB["etcd database\nAll K8s state:\n- Pod specs\n- Secrets (base64)\n- RBAC policies\n- NetworkPolicies\n- Service accounts"]
end
subgraph ATTACK["Attack Vectors"]
A1["Rogue pod attempts\ndirect etcd connection\n172.x.x.x:2379"]
A2["NF pod with\nkubectl access\nreads all secrets"]
A3["etcd port exposed\non node IP\n(misconfigured)"]
end
A1 -->|No mTLS| ETCD
A2 -->|K8s API → etcd| ETCD
A3 -->|Unauthenticated| ETCD
ETCD -->|If accessible| IMPACT["❌ CRITICAL:\nAll K8s secrets exposed\nAll subscriber secrets\nAll TLS private keys\nFull cluster takeover"]
subgraph DEFENSE["Defenses"]
D1["etcd mTLS required\n(client cert + key)"]
D2["etcd bound to\n127.0.0.1:2379 only\n(not node IP)"]
D3["RBAC: only\nkube-apiserver\ncan access etcd"]
end
D1 -.->|Mitigates| A1
D2 -.->|Mitigates| A3
D3 -.->|Mitigates| A2
style IMPACT fill:#c0392b,color:#fff
style D1 fill:#27ae60,color:#fff
style D2 fill:#27ae60,color:#fff
style D3 fill:#27ae60,color:#fff
Objective
Verify etcd is not accessible without valid client TLS certificates; confirm it is not exposed on node network interfaces.
Steps
# 1. Find etcd pod/endpoint
kubectl get pods -n kube-system | grep etcd
ETCD_POD=$(kubectl get pods -n kube-system -l component=etcd -o name | head -1)
echo "etcd pod: ${ETCD_POD}"
# 2. Check etcd listen address (the etcd image may lack netstat/ss;
#    fall back to reading --listen-client-urls from the pod spec)
kubectl exec -n kube-system ${ETCD_POD} -- \
  sh -c "netstat -tlnp 2>/dev/null || ss -tlnp 2>/dev/null" | grep 2379 || \
  kubectl get -n kube-system ${ETCD_POD} -o yaml | grep listen-client-urls
# 3. Attempt unauthenticated etcd access from NF pod
#    (172.18.0.2 is a typical kind control-plane IP; substitute your node IP)
NF_POD=$(kubectl get pods -n open5gs -o name | head -1 | cut -d/ -f2)
echo "=== Test: NF pod cannot access etcd ==="
kubectl exec -n open5gs ${NF_POD} -- \
  sh -c "curl -s --connect-timeout 3 https://172.18.0.2:2379/health --insecure" 2>&1 | \
  grep -E "curl|error|SSL|certificate"
# Expected: SSL certificate error or connection refused
# 4. Attempt without TLS (cleartext)
kubectl exec -n open5gs ${NF_POD} -- \
sh -c "curl -s --connect-timeout 3 http://172.18.0.2:2379/health" 2>&1
# Expected: Connection refused
# 5. Check etcd configuration for mTLS
#    (the static-pod manifest lives on the node filesystem, not inside the
#    etcd container, so read the flags from the pod spec instead)
kubectl get -n kube-system ${ETCD_POD} -o yaml | \
  grep -E "client-cert|key-file|trusted-ca|peer-cert" | head -10
# 6. Verify etcd only accessible with valid certs (using etcdctl from control-plane)
kubectl exec -n kube-system ${ETCD_POD} -- \
etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
endpoint health 2>/dev/null
# ✅ healthy — shows cert-based access works
# 7. List secret keys stored in etcd for open5gs (values are protobuf-encoded
#    and readable in plaintext unless encryption-at-rest is enabled)
kubectl exec -n kube-system ${ETCD_POD} -- \
etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/open5gs --prefix --keys-only 2>/dev/null | head -5
Expected Results
- etcd client port bound to `127.0.0.1:2379` plus, on kubeadm/kind, the mTLS-protected node IP; no wider exposure
- Unauthenticated connections: connection refused or TLS required
- NF pod cannot connect to etcd (different network + mTLS required)
- etcd TLS configured: `--client-cert-auth=true`, `--trusted-ca-file` set
Pass Criteria
etcd not accessible from any pod without valid mTLS certificate. Not exposed on node external IP.
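The `--listen-client-urls` value read in steps 2 and 5 can be classified automatically. A sketch; `check_listen_urls` is a hypothetical helper, and the OK/FAIL/WARN labels are illustrative conventions:

```shell
#!/usr/bin/env bash
# Sketch: classify each entry of an etcd --listen-client-urls value.
# Cleartext listeners are an automatic fail; non-loopback HTTPS listeners
# (kubeadm adds the node IP for the apiserver) warrant manual mTLS review.
check_listen_urls() {
  echo "$1" | tr ',' '\n' | while read -r u; do
    case "$u" in
      https://127.0.0.1:*|https://localhost:*) echo "OK   $u" ;;
      http://*)  echo "FAIL $u (cleartext)" ;;
      *)         echo "WARN $u (non-loopback, verify mTLS)" ;;
    esac
  done
}

check_listen_urls "https://127.0.0.1:2379,https://172.18.0.2:2379"
```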