Published: January 2, 2025 • Updated: January 2, 2025
The CIS Kubernetes Benchmark is the industry-standard security configuration guideline for Kubernetes clusters. It contains more than 100 recommendations covering control plane configuration, worker node security, pod security standards, and network controls.
The challenge: Most organizations know they should follow CIS recommendations, but manual compliance checking is impossible at scale. A single misconfigured pod can expose your entire cluster. Traditional security audits happen quarterly, but deployments happen hundreds of times per day.
This guide shows you how to enforce CIS compliance automatically using admission policies (OPA/Gatekeeper), CI/CD testing (Conftest), and runtime security (Falco). Your cluster will reject non-compliant workloads before they even start.
The CIS Kubernetes Benchmark (v1.8, latest as of 2024) is organized into 5 sections:
1. Control Plane Components: API server, controller manager, scheduler, and etcd security configurations. Scope: managed by your cloud provider (EKS/GKE/AKS) or your cluster admin.
2. etcd: Encryption at rest, TLS, and access controls for the etcd datastore. Scope: infrastructure-level (often automated by managed Kubernetes).
3. Control Plane Configuration: RBAC, service account tokens, audit logging, admission controllers. Scope: YOUR RESPONSIBILITY (this is where OPA comes in).
4. Worker Nodes: Kubelet configuration, file permissions, certificate rotation. Scope: node-level (use kube-bench for scanning).
5. Policies: Pod Security Standards, Network Policies, seccomp, AppArmor. Scope: YOUR RESPONSIBILITY (enforce with admission policies).
This guide focuses on Sections 3 and 5—the areas where you have the most control and where OPA/Gatekeeper provides the most value.
OPA Gatekeeper is an admission controller that enforces policies written in Rego (OPA's policy language). It intercepts all API requests to Kubernetes and allows/denies them based on your policies.
```bash
# Install Gatekeeper using Helm
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper/gatekeeper --name-template=gatekeeper \
  --namespace gatekeeper-system --create-namespace

# Verify installation
kubectl get pods -n gatekeeper-system

# Expected output:
# NAME                                 READY   STATUS    RESTARTS   AGE
# gatekeeper-audit-...                 1/1     Running   0          1m
# gatekeeper-controller-manager-...    1/1     Running   0          1m
# gatekeeper-controller-manager-...    1/1     Running   0          1m
```
Gatekeeper uses ConstraintTemplates (reusable policy definitions written in Rego) and Constraints (instances of a template with concrete parameters and match rules). The K8sPSP* constraint kinds used in the rest of this section (K8sPSPPrivilegedContainer, K8sPSPHostNamespace, and so on) are provided by ConstraintTemplates from the community Gatekeeper policy library (https://github.com/open-policy-agent/gatekeeper-library), so install those templates before applying the Constraints. A trimmed-down template sketch follows.
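For orientation only, here is a heavily simplified sketch of such a ConstraintTemplate. The library's real privileged-containers template also covers init and ephemeral containers and honors the exemptImages parameter used below, so prefer it over this sketch in production:

```yaml
# Simplified sketch of a ConstraintTemplate (use the Gatekeeper policy library version in production)
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8spspprivilegedcontainer
spec:
  crd:
    spec:
      names:
        kind: K8sPSPPrivilegedContainer   # the kind referenced by the Constraints below
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspprivileged

        # Reject any container that sets securityContext.privileged
        violation[{"msg": msg}] {
          c := input_containers[_]
          c.securityContext.privileged
          msg := sprintf("Privileged container is not allowed: %v", [c.name])
        }

        input_containers[c] { c := input.review.object.spec.containers[_] }
        input_containers[c] { c := input.review.object.spec.initContainers[_] }
```

With the templates in place, the Constraints below implement the most critical CIS controls.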
Requirement: Containers should not run as privileged (escalated permissions).
```yaml
# policies/cis-5-2-1-deny-privileged.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system        # System pods may need privilege
      - gatekeeper-system
  parameters:
    exemptImages: []       # No exemptions for user workloads
```
```bash
# Apply the policy
kubectl apply -f policies/cis-5-2-1-deny-privileged.yaml

# Test it (should be rejected)
kubectl run test --image=nginx --privileged=true

# Output:
# Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request:
# [deny-privileged-containers] Privileged container is not allowed: nginx
```
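Admission denials only cover new requests. Gatekeeper's audit controller also re-scans existing resources (every 60 seconds by default) and records violations on each Constraint's status, which is handy for finding workloads created before the policy existed. A quick check, using the constraint created above:

```bash
# Total violations found by the audit controller for this constraint
kubectl get k8spspprivilegedcontainer deny-privileged-containers \
  -o jsonpath='{.status.totalViolations}'

# Detailed list of violating objects (namespace/name and message)
kubectl get k8spspprivilegedcontainer deny-privileged-containers \
  -o jsonpath='{.status.violations}'
```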
```yaml
# policies/cis-5-2-3-deny-root.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowedUsers
metadata:
  name: deny-running-as-root
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system
  parameters:
    runAsUser:
      rule: MustRunAsNonRoot   # Enforces runAsNonRoot: true
    supplementalGroups:
      rule: MustRunAs
      ranges:
        - min: 1
          max: 65535
    fsGroup:
      rule: MustRunAs
      ranges:
        - min: 1
          max: 65535
```
```yaml
# policies/cis-5-2-6-deny-host-namespaces.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostNamespace
metadata:
  name: deny-host-namespaces
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system
  parameters:
    allowHostPID: false      # Blocks hostPID: true
    allowHostIPC: false      # Blocks hostIPC: true
    allowHostNetwork: false  # Blocks hostNetwork: true
```
```yaml
# policies/cis-5-2-7-8-deny-host-access.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostFilesystem
metadata:
  name: deny-host-filesystem
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system
  parameters:
    allowedHostPaths: []   # No direct host path mounting allowed
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostNetworkingPorts
metadata:
  name: deny-host-ports
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    # Block hostPort entirely or allow only specific ranges
    min: 0
    max: 0   # Setting both to 0 blocks all hostPort usage
```
```yaml
# policies/cis-5-2-9-drop-capabilities.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPCapabilities
metadata:
  name: drop-all-capabilities
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    requiredDropCapabilities:
      - ALL                   # Force dropping all capabilities
    allowedCapabilities: []   # Then allow specific ones if needed (e.g., NET_BIND_SERVICE)
```
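For reference, here is what a pod that passes all of the constraints above might look like. The names, image, and user/group IDs are placeholders, not values from the benchmark:

```yaml
# Example of a compliant pod: non-root, no host namespaces, no privilege, all capabilities dropped
apiVersion: v1
kind: Pod
metadata:
  name: compliant-app          # placeholder name
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    fsGroup: 10001
    supplementalGroups: [10001]
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # placeholder image
      ports:
        - containerPort: 8080                 # no hostPort
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
```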
Waiting until admission time to find policy violations is too late—it breaks deployments in production. Use Conftest to test Kubernetes manifests in CI/CD before they reach the cluster.
```bash
# Install Conftest
brew install conftest   # macOS
# or
curl -L https://github.com/open-policy-agent/conftest/releases/latest/download/conftest_linux_amd64.tar.gz | tar xz
sudo mv conftest /usr/local/bin/

# Create a policy directory
mkdir -p policy
```
```rego
# policy/kubernetes.rego
package main

import future.keywords

# CIS 5.2.1: Deny privileged containers
deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    container.securityContext.privileged == true
    msg := sprintf("Container '%s' is running as privileged (CIS 5.2.1)", [container.name])
}

# CIS 5.2.3: Enforce non-root
deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    not container.securityContext.runAsNonRoot
    msg := sprintf("Container '%s' must set runAsNonRoot: true (CIS 5.2.3)", [container.name])
}

# CIS 5.2.6: Deny host namespaces
deny[msg] {
    input.kind == "Pod"
    input.spec.hostPID == true
    msg := "Pod cannot use hostPID (CIS 5.2.6)"
}

deny[msg] {
    input.kind == "Pod"
    input.spec.hostIPC == true
    msg := "Pod cannot use hostIPC (CIS 5.2.6)"
}

deny[msg] {
    input.kind == "Pod"
    input.spec.hostNetwork == true
    msg := "Pod cannot use hostNetwork (CIS 5.2.6)"
}

# CIS 5.2.7: Deny host ports
deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    port := container.ports[_]
    port.hostPort
    msg := sprintf("Container '%s' cannot use hostPort (CIS 5.2.7)", [container.name])
}

# CIS 5.2.9: Require dropping ALL capabilities
deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    not container.securityContext.capabilities.drop
    msg := sprintf("Container '%s' must drop ALL capabilities (CIS 5.2.9)", [container.name])
}

warn[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    not includes_all(container.securityContext.capabilities.drop)
    msg := sprintf("Container '%s' should drop ALL capabilities (CIS 5.2.9)", [container.name])
}

includes_all(drops) {
    "ALL" in drops
}
```
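The rules themselves can be unit-tested with `conftest verify`, which runs Rego test rules (names prefixed with `test_`) against the policy package. A minimal sketch, placed in a file such as policy/kubernetes_test.rego (the file name is just a convention):

```rego
# policy/kubernetes_test.rego
package main

# A privileged container should produce at least one deny message
test_denies_privileged_container {
    deny[_] with input as {
        "kind": "Pod",
        "spec": {"containers": [{"name": "app", "securityContext": {"privileged": true}}]}
    }
}

# A hardened pod should produce no deny messages
test_allows_hardened_pod {
    count(deny) == 0 with input as {
        "kind": "Pod",
        "spec": {"containers": [{
            "name": "app",
            "securityContext": {
                "runAsNonRoot": true,
                "capabilities": {"drop": ["ALL"]}
            }
        }]}
    }
}
```

Run the tests with `conftest verify --policy policy/` so policy changes are themselves gated in CI.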
```bash
# Test a single manifest
conftest test deployment.yaml

# Test all manifests in a directory
conftest test k8s/manifests/

# Output format for CI
conftest test k8s/manifests/ --output json > conftest-results.json
```

```yaml
# Example GitHub Actions workflow
# .github/workflows/conftest.yml
name: Policy Validation
on:
  pull_request:
    paths:
      - 'k8s/**'

jobs:
  conftest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Conftest
        run: |
          curl -L https://github.com/open-policy-agent/conftest/releases/latest/download/conftest_linux_amd64.tar.gz | tar xz
          sudo mv conftest /usr/local/bin/
      - name: Run Conftest
        run: conftest test k8s/ --policy policy/ --output github
      - name: Comment on PR
        if: failure()
        uses: actions/github-script@v6
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '❌ **CIS policy violations detected**. Fix before merging. See checks for details.'
            })
```
Admission policies prevent non-compliant workloads from starting. But what about runtime behavior? Falco monitors system calls and detects suspicious activity in real time.
```bash
# Install Falco using Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco --namespace falco --create-namespace \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true

# Verify installation
kubectl get pods -n falco
```
```yaml
# Create custom Falco rules
# falco-rules-cis.yaml

# Images exempt from the rules below (fill in your own trusted registries/images)
- list: trusted_images
  items: []

- rule: Detect Privileged Container Launch
  desc: Alert when a privileged container is launched (CIS 5.2.1)
  condition: >
    container.privileged=true and
    not container.image.repository in (trusted_images)
  output: >
    Privileged container launched (user=%user.name container_id=%container.id
    image=%container.image.repository)
  priority: WARNING
  tags: [cis, container]

- rule: Detect Container Running as Root
  desc: Alert when a container runs as root (CIS 5.2.3)
  condition: >
    spawned_process and proc.pname=containerd-shim and user.uid=0 and
    not container.image.repository in (trusted_images)
  output: >
    Container running as root (user=%user.name container=%container.name
    image=%container.image.repository)
  priority: WARNING
  tags: [cis, container]

- rule: Detect Host Mount in Container
  desc: Alert when sensitive host paths are mounted (CIS 5.2.8)
  condition: >
    container and container.mount.dest in (/etc, /var/run/docker.sock, /root, /boot)
  output: >
    Sensitive host path mounted in container (container=%container.name
    path=%container.mount.dest)
  priority: ERROR
  tags: [cis, filesystem]

- rule: Detect Shell Execution in Container
  desc: Alert on unexpected shell execution (potential compromise)
  condition: >
    spawned_process and container and
    proc.name in (sh, bash, zsh) and
    not proc.pname in (supervisord, entrypoint.sh)
  output: >
    Shell spawned in container (user=%user.name container=%container.name
    shell=%proc.name parent=%proc.pname)
  priority: WARNING
  tags: [shell, runtime]
```
```bash
# Apply custom rules
kubectl create configmap falco-rules-cis --from-file=falco-rules-cis.yaml -n falco

# Update the Falco release to load the custom rules file
# (--reuse-values keeps the Falcosidekick settings from the install step)
helm upgrade falco falcosecurity/falco --namespace falco --reuse-values \
  --set-file 'customRules.rules-cis\.yaml'=falco-rules-cis.yaml
```
```bash
# Falcosidekick routes alerts to external systems
# (--reuse-values preserves previously set values such as falcosidekick.enabled=true)
helm upgrade falco falcosecurity/falco --namespace falco --reuse-values \
  --set falcosidekick.config.slack.webhookurl="https://hooks.slack.com/services/YOUR/WEBHOOK/URL" \
  --set falcosidekick.config.slack.minimumpriority="warning" \
  --set falcosidekick.config.pagerduty.routingkey="YOUR_PD_KEY" \
  --set falcosidekick.config.pagerduty.minimumpriority="error"

# Alerts will now appear in Slack/PagerDuty in real time
```
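It is worth triggering a rule deliberately to confirm the pipeline end to end. Exec-ing a shell into any running pod should match the shell-execution rule above; the pod name placeholder and the label selector (the chart's default labels) are assumptions about your environment:

```bash
# Spawn a shell in an existing pod to trigger "Detect Shell Execution in Container"
kubectl exec -it <some-pod> -- sh -c "echo falco-test"

# Look for the alert in Falco's logs
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=50 | grep "Shell spawned"
```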
kube-bench runs the CIS Kubernetes Benchmark checks directly on your cluster and generates a compliance report.
```bash
# Run kube-bench as a Kubernetes Job
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# Wait for completion
kubectl wait --for=condition=complete job/kube-bench

# View results
kubectl logs job/kube-bench

# Example output:
# [INFO] 5 Kubernetes Policies
# [INFO] 5.2 Pod Security Standards
# [PASS] 5.2.1 Minimize the admission of privileged containers (Manual)
# [PASS] 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace
# [PASS] 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace
# [FAIL] 5.2.6 Minimize the admission of root containers (Manual)
# [INFO] Remediation: Create Pod Security Policy that sets runAsNonRoot: true
```
```yaml
# .github/workflows/cis-compliance.yml
name: Weekly CIS Compliance Scan
on:
  schedule:
    - cron: '0 0 * * 0'   # Every Sunday
  workflow_dispatch:

jobs:
  kube-bench:
    runs-on: ubuntu-latest
    steps:
      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBECONFIG }}
      - name: Run kube-bench
        run: |
          kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
          kubectl wait --for=condition=complete job/kube-bench --timeout=5m
          kubectl logs job/kube-bench > kube-bench-report.txt
      - name: Parse results
        run: |
          FAIL_COUNT=$(grep -c "\[FAIL\]" kube-bench-report.txt || true)
          echo "CIS_FAIL_COUNT=$FAIL_COUNT" >> $GITHUB_ENV
      - name: Upload report
        uses: actions/upload-artifact@v3
        with:
          name: cis-compliance-report
          path: kube-bench-report.txt
      - name: Fail if critical issues
        if: env.CIS_FAIL_COUNT > 5
        run: |
          echo "❌ CIS Benchmark: $CIS_FAIL_COUNT failures detected"
          exit 1
```
CIS 5.3.2 recommends network segmentation. Kubernetes NetworkPolicies enforce which pods can communicate.
```yaml
# Default deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow only specific ingress (e.g., from ingress controller)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # default namespace label
      ports:
        - protocol: TCP
          port: 8080
---
# Deny egress except to DNS and internal services
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-and-internal
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}   # Allow to all namespaces (internal)
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 80
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53   # DNS
        - protocol: TCP
          port: 53   # DNS over TCP
```
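NetworkPolicies fail silently if your CNI plugin does not enforce them, so verify behavior after applying. A quick smoke test from outside the namespace, assuming a Service named web listening on port 8080 exists in production (that name is an assumption, adjust to your workload):

```bash
# From a throwaway pod in another namespace, traffic into production should now be blocked
kubectl run netpol-test --rm -it --restart=Never --image=busybox:1.36 -n default -- \
  wget -qO- -T 3 http://web.production.svc.cluster.local:8080 \
  || echo "connection blocked (expected)"
```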
| CIS Section | Control | Implementation |
|-------------|---------|----------------|
| 5.2.1 | Minimize privileged containers | Gatekeeper + Conftest |
| 5.2.2 | Minimize hostPID/hostIPC | Gatekeeper |
| 5.2.3 | Minimize running as root | Gatekeeper + Conftest |
| 5.2.4 | Minimize NET_RAW capability | Gatekeeper capabilities |
| 5.2.5 | Minimize adding capabilities | Gatekeeper (drop ALL) |
| 5.2.6 | Minimize host namespaces | Gatekeeper |
| 5.2.7 | Minimize hostPort | Gatekeeper |
| 5.2.8 | Minimize host path volumes | Gatekeeper + Falco |
| 5.2.9 | Minimize capabilities | Gatekeeper (drop ALL) |
| 5.2.13 | Encrypt secrets at rest | EKS KMS / GKE CMEK |
| 5.3.2 | Network segmentation | NetworkPolicies |
| 5.7.3 | Apply security context | Conftest in CI/CD |
| Runtime | Detect anomalous behavior | Falco monitoring |
| Audit | Continuous compliance scanning | kube-bench weekly |
Set up Prometheus metrics to track policy violations:
```bash
# Gatekeeper exposes Prometheus metrics on port 8888 of its pods;
# the audit pod reports the gatekeeper_violations gauge
kubectl port-forward -n gatekeeper-system deployment/gatekeeper-audit 8888:8888

# Metrics available at http://localhost:8888/metrics:
# gatekeeper_violations{enforcement_action="deny"}   - audit-detected violations of enforced constraints
# gatekeeper_violations{enforcement_action="dryrun"} - violations of constraints still in audit mode
```

```yaml
# Create Prometheus alerts
groups:
  - name: gatekeeper
    rules:
      - alert: HighPolicyViolationRate
        expr: rate(gatekeeper_violations{enforcement_action="deny"}[5m]) > 10
        for: 5m
        annotations:
          summary: "High rate of policy violations detected"
          description: "{{ $value }} policy violations per second in the last 5 minutes"
      - alert: PrivilegedContainerAttempt
        expr: increase(gatekeeper_violations{constraint_name="deny-privileged-containers"}[1h]) > 0
        annotations:
          summary: "Attempted to launch privileged container"
          description: "Someone tried to deploy a privileged container (blocked by policy)"
```
Don't enable enforcement on Day 1—you'll break everything. Use this phased approach:
1. Start every constraint with enforcementAction: dryrun so Gatekeeper audits existing workloads without blocking anything.
2. Review the reported violations with the owning teams and fix or explicitly exempt them.
3. Switch to enforcementAction: deny for critical policies first (privileged containers, host namespaces), then tighten the rest as violations reach zero.

A dry-run constraint looks like the sketch below.
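As a minimal illustration, here is the privileged-container constraint from earlier reduced to dry-run mode; only the enforcementAction field differs from the enforcing version:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged-containers
spec:
  enforcementAction: dryrun   # audit and report only; switch to "deny" in the final phase
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

Violations found while in dry-run appear on the constraint's status and in the gatekeeper_violations{enforcement_action="dryrun"} metric, so you can burn the backlog down before flipping to deny.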
This setup also provides audit-ready evidence for compliance reviews. For auditors: provide Gatekeeper audit logs, Conftest CI/CD results, Falco alert history, and quarterly kube-bench reports. Together these demonstrate continuous compliance, not just point-in-time assessments.
CIS Kubernetes hardening isn't a single tool or checklist; it's a layered approach:
- Conftest in CI/CD catches violations before manifests ever reach the cluster.
- Gatekeeper admission policies block non-compliant workloads at deploy time.
- Falco watches runtime behavior for anything that slips through.
- kube-bench scans the cluster itself against the benchmark on a schedule.
- NetworkPolicies segment traffic so a compromised pod can't move freely.
Start today: Deploy Gatekeeper in audit mode, run kube-bench, fix the top 5 violations. Then gradually enforce stricter policies. Your cluster will be demonstrably more secure, and you'll have the audit evidence to prove it.
We implement end-to-end CIS compliance: OPA policies, Conftest CI/CD integration, Falco runtime security, kube-bench automation, and audit-ready documentation.