Compute | Advanced | 22 min read

    AWS EKS Security Best Practices

    Tarek Cheikh

    Founder & AWS Security Expert


    Amazon Elastic Kubernetes Service (EKS) is the most popular managed Kubernetes platform on AWS, running mission-critical workloads across thousands of organizations. But Kubernetes is complex -- with its own identity system, networking model, and API surface, it introduces an entirely new attack plane layered on top of AWS IAM. A single misconfigured ClusterRoleBinding, an exposed API server endpoint, or an unencrypted secret can give an attacker full cluster compromise.

    In 2025, Wiz Research disclosed that over 14% of EKS clusters they analyzed had publicly accessible API endpoints with overly permissive RBAC bindings. The OWASP Kubernetes Top 10 lists insecure workload configurations, lack of network segmentation, and secrets management failures as the most exploited weaknesses. Meanwhile, AWS has been rapidly evolving EKS security -- launching Pod Identity as the successor to IRSA, EKS Auto Mode for fully managed infrastructure, enhanced VPC CNI network policies, and GuardDuty Extended Threat Detection for EKS.

    This guide covers 12 advanced EKS security best practices, each with real CLI commands, policy examples, and references to the latest AWS and CIS controls.

    1. Restrict API Server Endpoint Access

    By default, EKS creates a publicly accessible Kubernetes API server endpoint. This means anyone on the internet can attempt to authenticate against your cluster. Even with strong authentication, exposing the endpoint increases the attack surface and enables reconnaissance. Security Hub control [EKS.1] flags clusters with public endpoints.

    Implementation

    • Enable private endpoint access so nodes and pods communicate with the API server over the VPC.
    • Disable public endpoint access entirely if all access originates from within the VPC or via VPN/Direct Connect.
    • If public access is required (e.g., CI/CD pipelines outside the VPC), restrict it to specific CIDR blocks.
    # Make the API endpoint private-only
    aws eks update-cluster-config   --name my-cluster   --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
    
    # If public access is needed, restrict to specific CIDRs
    aws eks update-cluster-config   --name my-cluster   --resources-vpc-config     endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24,198.51.100.0/24"
    
    # Verify current endpoint configuration
    aws eks describe-cluster --name my-cluster   --query "cluster.resourcesVpcConfig.{Public:endpointPublicAccess,Private:endpointPrivateAccess,CIDRs:publicAccessCidrs}"

    Security Hub: [EKS.1] EKS cluster endpoints should not be publicly accessible. Enable private endpoint access and disable or restrict public access to pass this control.
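    To turn the describe-cluster output into a quick pass/fail check, a small jq sketch can flag the worst case. The sample JSON below stands in for live API output, and the open-CIDR test is illustrative:

```shell
# Sketch: flag a risky endpoint config. The sample stands in for the
# resourcesVpcConfig portion of `aws eks describe-cluster` output.
vpc_cfg='{"endpointPublicAccess":true,"endpointPrivateAccess":true,"publicAccessCidrs":["0.0.0.0/0"]}'
verdict=$(echo "$vpc_cfg" | jq -r '
  if .endpointPublicAccess and (.publicAccessCidrs | index("0.0.0.0/0") != null)
  then "FAIL: public endpoint open to the internet"
  else "PASS: endpoint private or CIDR-restricted"
  end')
echo "$verdict"   # → FAIL: public endpoint open to the internet
```

    The same filter works unchanged when piped from the real describe-cluster query shown above.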

    2. Use Pod Identity for Least-Privilege Pod-Level IAM

    Pods often need access to AWS services -- S3, DynamoDB, SQS, Secrets Manager. The wrong approach is attaching broad IAM policies to the node instance profile, which grants every pod on that node the same permissions. EKS Pod Identity is the recommended mechanism for granting fine-grained, per-pod AWS permissions, replacing the older IAM Roles for Service Accounts (IRSA) approach.

    Pod Identity vs. IRSA

    • Pod Identity (recommended): No OIDC provider setup required. Simpler trust policies. Centralized visibility via ListPodIdentityAssociations. Faster role changes without pod restarts. Supports session tags for attribute-based access control (ABAC).
    • IRSA (still supported): Requires an OIDC provider per cluster. More complex trust policy management. Still valid for non-EKS Kubernetes (EKS Anywhere, self-managed, OpenShift on AWS).
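    The difference shows up in the ServiceAccount manifest itself. A sketch (names are illustrative):

```yaml
# IRSA: the role binding lives on the ServiceAccount as an annotation
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-pod-role
---
# Pod Identity: a plain ServiceAccount; the role is attached out-of-band
# via create-pod-identity-association, so manifests stay AWS-agnostic
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
```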

    Implementation

    # Step 1: Install the EKS Pod Identity Agent add-on
    aws eks create-addon   --cluster-name my-cluster   --addon-name eks-pod-identity-agent
    
    # Step 2: Create an IAM role with Pod Identity trust policy
    cat <<'EOF' > pod-identity-trust.json
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Service": "pods.eks.amazonaws.com"
        },
        "Action": ["sts:AssumeRole", "sts:TagSession"]
      }]
    }
    EOF
    
    aws iam create-role   --role-name my-app-pod-role   --assume-role-policy-document file://pod-identity-trust.json
    
    # Attach only the permissions your pod needs (least privilege)
    aws iam attach-role-policy   --role-name my-app-pod-role   --policy-arn arn:aws:iam::123456789012:policy/MyAppMinimalPolicy
    
    # Step 3: Create the pod identity association
    aws eks create-pod-identity-association   --cluster-name my-cluster   --namespace my-namespace   --service-account my-service-account   --role-arn arn:aws:iam::123456789012:role/my-app-pod-role
    
    # List all pod identity associations for audit
    aws eks list-pod-identity-associations --cluster-name my-cluster

    Block IMDS Access

    When using Pod Identity or IRSA, block access to the EC2 Instance Metadata Service (IMDS) to prevent pods from inheriting node-level permissions:

    # Limit IMDS reachability so containers cannot inherit node permissions.
    # With hop limit 1, packets from containers (an extra network hop) never
    # reach IMDS; requiring tokens also enforces IMDSv2
    aws ec2 modify-instance-metadata-options   --instance-id i-1234567890abcdef0   --http-put-response-hop-limit 1   --http-tokens required

    Best practice: Migrate from IRSA to Pod Identity for all new workloads. Pod Identity provides better auditability and simpler management at scale. IRSA is not deprecated but Pod Identity is the forward-looking standard.
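    A quick way to find migration candidates is to list ServiceAccounts that still carry the IRSA role-arn annotation. A sketch using the same jq style as the RBAC audit later in this guide; the sample JSON stands in for live `kubectl get serviceaccounts -A -o json` output:

```shell
# Sketch: find ServiceAccounts still using IRSA (the role-arn annotation)
# as candidates for migration to Pod Identity. Sample data for illustration.
sa_json='{"items":[
  {"metadata":{"namespace":"legacy","name":"old-app",
   "annotations":{"eks.amazonaws.com/role-arn":"arn:aws:iam::123456789012:role/old-app-irsa"}}},
  {"metadata":{"namespace":"modern","name":"new-app"}}]}'
irsa=$(echo "$sa_json" | jq -r '.items[]
  | select(.metadata.annotations["eks.amazonaws.com/role-arn"] != null)
  | "\(.metadata.namespace)/\(.metadata.name)"')
echo "$irsa"   # → legacy/old-app
```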

    3. Enforce Pod Security Standards

    Pods can request dangerous capabilities -- running as root, mounting the host filesystem, using host networking, escalating privileges. Pod Security Standards (PSS), enforced by the built-in Pod Security Admission (PSA) controller, provide three profiles: Privileged (unrestricted), Baseline (prevents known privilege escalations), and Restricted (heavily hardened, current best practice).

    Implementation

    Apply PSS at the namespace level using labels:

    # Enforce the restricted profile on a namespace
    kubectl label namespace my-app-namespace   pod-security.kubernetes.io/enforce=restricted   pod-security.kubernetes.io/audit=restricted   pod-security.kubernetes.io/warn=restricted
    
    # For system namespaces that need elevated privileges, use baseline
    kubectl label namespace kube-system   pod-security.kubernetes.io/enforce=baseline   pod-security.kubernetes.io/audit=restricted   pod-security.kubernetes.io/warn=restricted
    
    # Verify labels on all namespaces
    kubectl get namespaces -L pod-security.kubernetes.io/enforce

    Restricted Profile Requirements

    Pods in a restricted namespace must:

    • Run as non-root (runAsNonRoot: true)
    • Drop all capabilities (drop: ["ALL"])
    • Use a read-only root filesystem where possible
    • Disallow privilege escalation (allowPrivilegeEscalation: false)
    • Use seccomp profile RuntimeDefault or Localhost
    # Example pod spec compliant with restricted profile
    apiVersion: v1
    kind: Pod
    metadata:
      name: secure-app
      namespace: my-app-namespace
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: app
        image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v1.2.3
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"

    Migration tip: Start by applying PSS in warn and audit modes first. Review warnings and audit logs to identify non-compliant workloads before switching to enforce.
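    Declaratively, the warn/audit-first rollout can live in the namespace manifest itself, which keeps the migration state visible in Git. A sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-namespace
  labels:
    # Step 1 of the migration: observe only. Flip enforce to
    # "restricted" once warnings and audit logs come back clean.
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```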

    4. RBAC Least Privilege and Namespace Isolation

    Kubernetes RBAC controls who can do what within the cluster. Overly permissive ClusterRoleBindings -- especially granting cluster-admin to service accounts or groups -- are one of the most common EKS misconfigurations. The system:masters group in EKS bypasses all RBAC and cannot be audited.

    Implementation

    # Audit existing ClusterRoleBindings for overly broad access
    kubectl get clusterrolebindings -o json |   jq '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name'
    
    # Use EKS access entries instead of aws-auth ConfigMap (recommended)
    aws eks create-access-entry   --cluster-name my-cluster   --principal-arn arn:aws:iam::123456789012:role/DevTeamRole   --type STANDARD
    
    # Associate a scoped access policy (namespace-level)
    aws eks associate-access-policy   --cluster-name my-cluster   --principal-arn arn:aws:iam::123456789012:role/DevTeamRole   --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy   --access-scope type=namespace,namespaces=dev-team

    RBAC Best Practices

    • Use EKS access entries (API mode) instead of the legacy aws-auth ConfigMap. Access entries are managed via the AWS API and are auditable in CloudTrail.
    • Prefer namespace-scoped Roles over ClusterRoles. Developers should only access their own namespaces.
    • Never bind to system:masters for day-to-day operations. Use it only for break-glass emergency access.
    • Audit regularly: Run kubectl auth can-i --list --as=system:serviceaccount:ns:sa to verify effective permissions.
    • Disable anonymous access: EKS disables it by default, but verify with kubectl auth can-i --list --as=system:anonymous.
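    As a concrete sketch of namespace-scoped least privilege (names are illustrative), a Role that lets a team deploy and inspect workloads only in its own namespace, bound to the group mapped through the EKS access entry above:

```yaml
# Namespace-scoped Role: deploy and inspect workloads in dev-team only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-team-deployer
  namespace: dev-team
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "pods/log", "services", "deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Bind the Role to the team's group -- no ClusterRoleBinding involved
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-deployer-binding
  namespace: dev-team
subjects:
- kind: Group
  name: dev-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-team-deployer
  apiGroup: rbac.authorization.k8s.io
```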

    CIS EKS Benchmark: Section 4 covers RBAC and Service Accounts with 15 controls, including minimizing wildcard use in Roles (4.1.3) and ensuring default service account tokens are not auto-mounted (4.1.6).

    5. Encrypt Secrets with KMS Envelope Encryption

    Kubernetes secrets are stored in etcd, and by default EKS encrypts the underlying EBS volumes. However, this is storage-level encryption only -- anyone with etcd access or API server access can read secret values in plaintext. Envelope encryption with AWS KMS adds an additional layer: secrets are encrypted with a data encryption key (DEK), and the DEK itself is encrypted with your KMS customer managed key (CMK). Security Hub control [EKS.3] checks for this.

    Implementation

    # Create a KMS key for EKS secrets encryption
    aws kms create-key   --description "EKS Secrets Encryption Key"   --key-usage ENCRYPT_DECRYPT   --origin AWS_KMS
    
    # Enable secrets encryption on an existing cluster
    aws eks associate-encryption-config   --cluster-name my-cluster   --encryption-config '[{
        "resources": ["secrets"],
        "provider": {
          "keyArn": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
        }
      }]'
    
    # For a new cluster, include encryption at creation time
    aws eks create-cluster   --name new-cluster   --role-arn arn:aws:iam::123456789012:role/EKSClusterRole   --resources-vpc-config subnetIds=subnet-abc123,subnet-def456   --encryption-config '[{
        "resources": ["secrets"],
        "provider": {
          "keyArn": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
        }
      }]'
    
    # Verify encryption configuration
    aws eks describe-cluster --name my-cluster   --query "cluster.encryptionConfig"

    Important: Once envelope encryption is enabled, it cannot be disabled. All new secrets are encrypted immediately, and existing secrets are encrypted upon their next update. Use kubectl get secrets --all-namespaces -o json | kubectl replace -f - to force re-encryption of all existing secrets.

    Security Hub: [EKS.3] EKS clusters should use encrypted Kubernetes secrets.

    6. Enforce Network Policies for Pod Isolation

    By default, all pods in a Kubernetes cluster can communicate with every other pod -- there is no network segmentation. This means a compromised pod can freely reach databases, internal APIs, and the metadata service. Network policies provide firewall rules at the pod level.

    Implementation with VPC CNI

    Amazon VPC CNI now natively supports Kubernetes Network Policies using eBPF, removing the need for third-party CNI plugins like Calico for basic policy enforcement:

    # Enable network policy support on VPC CNI
    aws eks create-addon   --cluster-name my-cluster   --addon-name vpc-cni   --addon-version v1.19.0-eksbuild.1   --configuration-values '{"enableNetworkPolicy": "true"}'
    
    # Or update existing VPC CNI add-on
    kubectl set env daemonset aws-node   -n kube-system   ENABLE_NETWORK_POLICY="true"

    Example Network Policies

    # Default deny all ingress traffic in a namespace
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: production
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    ---
    # Allow traffic only from specific pods
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: production
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: frontend
        ports:
        - protocol: TCP
          port: 8080
    ---
    # Restrict egress to only required services
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-egress
      namespace: production
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
      - Egress
      egress:
      - to:
        - podSelector:
            matchLabels:
              app: database
        ports:
        - protocol: TCP
          port: 5432
      - to:  # Allow DNS resolution
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
        ports:
        - protocol: UDP
          port: 53

    2025 Update: Amazon EKS introduced enhanced network security policies in late 2025, adding DNS-based egress policies and cluster-wide network access filters. For advanced use cases (global policies, host endpoint protection, policy tiers), Calico Enterprise remains the most feature-rich option.

    7. Container Image Security and Admission Control

    Running unvetted container images is one of the fastest paths to cluster compromise. Supply chain attacks, vulnerable base images, and embedded malware are all common vectors. A defense-in-depth approach combines image scanning, trusted registries, and admission controllers.

    ECR Image Scanning

    # Enable enhanced scanning (powered by Amazon Inspector)
    aws ecr put-registry-scanning-configuration   --scan-type ENHANCED   --rules '[{
        "repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
        "scanFrequency": "CONTINUOUS_SCAN"
      }]'
    
    # Check scan findings for a specific image
    aws ecr describe-image-scan-findings   --repository-name my-app   --image-id imageTag=latest   --query "imageScanFindings.findingSeverityCounts"
    
    # Enable image tag immutability to prevent tag overwriting
    aws ecr put-image-tag-mutability   --repository-name my-app   --image-tag-mutability IMMUTABLE
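    Alongside scanning and immutability, a lifecycle policy keeps stale, unscanned images from accumulating in the registry. A hedged sketch (the 14-day window is an arbitrary example) of the JSON passed to aws ecr put-lifecycle-policy:

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images after 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }
  ]
}
```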

    Admission Controllers

    Use Kyverno or OPA Gatekeeper to enforce image policies at deploy time:

    # Kyverno policy: only allow images from your ECR registry
    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: restrict-image-registries
    spec:
      validationFailureAction: Enforce
      rules:
      - name: validate-registries
        match:
          any:
          - resources:
              kinds:
              - Pod
        validate:
          message: "Images must come from the approved ECR registry."
          pattern:
            spec:
              containers:
              - image: "123456789012.dkr.ecr.*.amazonaws.com/*"
    ---
    # Kyverno policy: require image digest (no mutable tags)
    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: require-image-digest
    spec:
      validationFailureAction: Enforce
      rules:
      - name: check-digest
        match:
          any:
          - resources:
              kinds:
              - Pod
        validate:
          message: "Images must use a digest, not a tag."
          pattern:
            spec:
              containers:
              - image: "*@sha256:*"

    2025 Update: Amazon Inspector now maps ECR images to specific EKS pods, correlating vulnerabilities with the exact running workloads. Enhanced scanning supports scratch, distroless, and Chainguard base images.

    8. Enable GuardDuty EKS Protection and Runtime Monitoring

    Amazon GuardDuty provides two layers of EKS threat detection: EKS Audit Log Monitoring (analyzes Kubernetes API activity) and EKS Runtime Monitoring (uses a managed eBPF agent to detect container-level threats like reverse shells, crypto miners, and privilege escalation).

    Implementation

    # Enable GuardDuty EKS Protection
    aws guardduty update-detector   --detector-id abcdef1234567890   --features '[{
        "Name": "EKS_AUDIT_LOGS",
        "Status": "ENABLED"
      }, {
        "Name": "RUNTIME_MONITORING",
        "Status": "ENABLED",
        "AdditionalConfiguration": [{
          "Name": "EKS_ADDON_MANAGEMENT",
          "Status": "ENABLED"
        }]
      }]'
    
    # Verify the GuardDuty agent is running on EKS nodes
    kubectl get pods -n amazon-guardduty -l app=guardduty-agent
    
    # List EKS-specific findings
    aws guardduty list-findings   --detector-id abcdef1234567890   --finding-criteria '{
        "Criterion": {
          "resource.resourceType": {
            "Eq": ["EKSCluster"]
          }
        }
      }'

    Key EKS Threat Types Detected

    • Kubernetes API threats: Anonymous API access, exposed dashboards, suspicious exec into pods, Tor network access
    • Runtime threats: Reverse shells, crypto mining processes, container escape attempts, suspicious DNS queries, binary execution from unusual locations
    • Extended Threat Detection (2025): Multi-stage attack correlation across EKS audit logs, runtime behavior, malware execution, and AWS API activity

    Best practice: Enable EKS_ADDON_MANAGEMENT to let GuardDuty automatically manage the runtime agent add-on. Use an SCP to prevent disabling GuardDuty in member accounts. Route findings to Security Hub for centralized visibility.
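    A sketch of such an SCP is below; the action list is a common pattern, not an exhaustive one, and may need tailoring for your organization:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyGuardDutyTampering",
      "Effect": "Deny",
      "Action": [
        "guardduty:DeleteDetector",
        "guardduty:UpdateDetector",
        "guardduty:DisassociateFromAdministratorAccount",
        "guardduty:StopMonitoringMembers"
      ],
      "Resource": "*"
    }
  ]
}
```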

    9. Enable Comprehensive Control Plane Logging

    EKS control plane logs are disabled by default. Without them, you have no visibility into API server requests, authentication events, or authorization decisions -- making incident response effectively impossible.

    Implementation

    # Enable all five log types
    aws eks update-cluster-config   --name my-cluster   --logging '{
        "clusterLogging": [{
          "types": ["api", "audit", "authenticator", "controllerManager", "scheduler"],
          "enabled": true
        }]
      }'
    
    # Verify logging configuration
    aws eks describe-cluster --name my-cluster   --query "cluster.logging.clusterLogging"
    
    # Query audit logs in CloudWatch Logs Insights
    # Find all failed authentication attempts
    aws logs start-query   --log-group-name /aws/eks/my-cluster/cluster   --start-time $(date -d '24 hours ago' +%s)   --end-time $(date +%s)   --query-string 'fields @timestamp, user.username, responseStatus.code
        | filter @logStream like /kube-apiserver-audit/
        | filter responseStatus.code >= 400
        | sort @timestamp desc
        | limit 50'

    Critical Log Types

    • audit: Records all API requests and responses. Essential for security investigations and compliance. The most important log type.
    • authenticator: Records authentication decisions (IAM to Kubernetes user mapping). Critical for detecting unauthorized access attempts.
    • api: API server logs. Useful for troubleshooting but can be high volume.
    • controllerManager / scheduler: Lower priority for security but useful for operational visibility.

    Cost optimization: At minimum, enable audit and authenticator logs. Set CloudWatch Logs retention to 90-365 days based on compliance requirements. Export to S3 for long-term retention at lower cost.
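    The retention guidance above can be scripted per compliance tier. A hedged sketch -- the tier-to-days mapping is a made-up example, and the aws command (shown commented) does the actual work:

```shell
# Sketch: pick CloudWatch Logs retention from a compliance tier.
# The "pci"/"internal" mapping is a hypothetical example.
tier="pci"
case "$tier" in
  pci)      days=365 ;;
  internal) days=90  ;;
  *)        days=180 ;;
esac
echo "retention: $days days"   # → retention: 365 days

# Apply to the cluster's audit log group (requires AWS credentials):
# aws logs put-retention-policy \
#   --log-group-name /aws/eks/my-cluster/cluster \
#   --retention-in-days "$days"
```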

    10. Harden Worker Nodes

    Worker nodes run your containers and are the primary compute surface. A compromised node means access to all pods on that node, their secrets, service account tokens, and potentially the ability to pivot to other nodes or the control plane.

    Use Managed Node Groups

    # Create a managed node group with Bottlerocket AMI
    aws eks create-nodegroup   --cluster-name my-cluster   --nodegroup-name secure-nodes   --node-role arn:aws:iam::123456789012:role/EKSNodeRole   --subnets subnet-abc123 subnet-def456   --ami-type BOTTLEROCKET_x86_64   --instance-types m6i.large   --scaling-config minSize=2,maxSize=10,desiredSize=3   --update-config maxUnavailable=1
    
    # Enable automatic node group updates
    aws eks update-nodegroup-config   --cluster-name my-cluster   --nodegroup-name secure-nodes   --update-config maxUnavailable=1

    Why Bottlerocket

    • Minimal attack surface: No shell, no package manager, no SSH by default. The OS includes only what is needed to run containers.
    • Immutable root filesystem: Verified at boot using dm-verity. Tampering triggers a reboot to a known-good state.
    • Automatic updates: Bottlerocket uses an A/B partition scheme for safe, atomic OS updates with automatic rollback on failure.
    • SELinux enforcing: Mandatory access control policies are applied by default.

    Additional Node Hardening

    • Use IMDSv2 only: Set HttpTokens=required in the launch template to block IMDSv1 (vulnerable to SSRF attacks).
    • Restrict instance profile permissions: The node IAM role should have only the minimum policies: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.
    • Keep Kubernetes versions current: Security Hub control [EKS.2] checks that clusters run on supported versions. EKS supports three minor versions at a time.
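    The IMDSv2 requirement from the first bullet maps to the MetadataOptions block of the launch template. A fragment, not a full template:

```json
"MetadataOptions": {
  "HttpEndpoint": "enabled",
  "HttpTokens": "required",
  "HttpPutResponseHopLimit": 1
}
```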

    Security Hub: [EKS.2] EKS clusters should run on a supported Kubernetes version.

    11. Leverage EKS Auto Mode for AWS-Managed Security

    EKS Auto Mode, launched at re:Invent 2024 and enhanced throughout 2025, shifts the operational and security burden of worker node management to AWS. It eliminates the need to configure managed node groups, auto-scaling groups, or CNI plugins -- AWS manages everything from node provisioning to OS patching.

    Security Benefits

    • Automatic patching: AWS applies security patches to managed instances without cluster downtime. Nodes are automatically replaced.
    • 21-day maximum node lifetime: Nodes are automatically recycled every 21 days, ensuring no node runs stale software for extended periods.
    • Bottlerocket-based AMIs: Auto Mode runs on Bottlerocket with its minimal, immutable, SELinux-enforcing architecture.
    • Built-in encryption: Envelope encryption for all Kubernetes API data using KMS with Kubernetes KMS provider v2.
    • Pre-installed add-ons: EBS CSI driver, CoreDNS, kube-proxy, and VPC CNI are all managed and kept up to date by AWS.

    Implementation

    # Create a cluster with EKS Auto Mode enabled
    aws eks create-cluster   --name auto-mode-cluster   --role-arn arn:aws:iam::123456789012:role/EKSAutoClusterRole   --resources-vpc-config subnetIds=subnet-abc123,subnet-def456   --compute-config enabled=true,nodePools=general-purpose,nodeRoleArn=arn:aws:iam::123456789012:role/EKSAutoNodeRole   --kubernetes-network-config elasticLoadBalancing=enabled   --storage-config blockStorage=enabled
    
    # Check Auto Mode node pool status
    aws eks describe-cluster --name auto-mode-cluster   --query "cluster.computeConfig"
    
    # Run CIS compliance checks on Auto Mode nodes using kubectl debug
    kubectl debug node/auto-mode-node-xyz -it   --image=aquasec/kube-bench:latest   -- kube-bench run --targets node

    2025 Update: EKS Auto Mode now supports FIPS-validated cryptographic modules for FedRAMP compliance and is available in AWS GovCloud regions. AWS published a security whitepaper detailing the shared responsibility model for Auto Mode clusters.

    12. Continuous Compliance with CIS EKS Benchmark

    The CIS Amazon EKS Benchmark v1.7.0 contains 46 security recommendations across five areas: Control Plane Configuration (2 controls), Worker Node Security (13 controls), RBAC and Service Accounts (15 controls), Pod Security Standards (9 controls), and Managed Services (7 controls). Continuous compliance ensures your cluster does not drift from these baselines.

    Automated Scanning with kube-bench

    # Run kube-bench as a Kubernetes Job
    kubectl apply -f - <<'EOF'
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kube-bench
      namespace: kube-system
    spec:
      template:
        spec:
          hostPID: true
          containers:
          - name: kube-bench
            image: aquasec/kube-bench:latest
            command: ["kube-bench", "run", "--targets", "node", "--benchmark", "eks-1.7.0"]
            volumeMounts:
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
              readOnly: true
            - name: etc-systemd
              mountPath: /etc/systemd
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
          restartPolicy: Never
          volumes:
          - name: var-lib-kubelet
            hostPath:
              path: /var/lib/kubelet
          - name: etc-systemd
            hostPath:
              path: /etc/systemd
          - name: etc-kubernetes
            hostPath:
              path: /etc/kubernetes
      backoffLimit: 0
    EOF
    
    # View results
    kubectl logs job/kube-bench -n kube-system

    Security Hub Integration

    # Enable the AWS Foundational Security Best Practices standard
    aws securityhub batch-enable-standards   --standards-subscription-requests '[{
        "StandardsArn": "arn:aws:securityhub:::standards/aws-foundational-security-best-practices/v/1.0.0"
      }]'
    
    # Check EKS-specific Security Hub findings
    aws securityhub get-findings   --filters '{
        "ResourceType": [{"Value": "AwsEksCluster", "Comparison": "EQUALS"}],
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}]
      }'   --query "Findings[].{Title:Title,Status:Compliance.Status,Id:ProductFields.ControlId}"

    Automated Compliance with Kyverno

    Kyverno can enforce CIS controls as Kubernetes-native policies, providing real-time admission control rather than periodic scanning. The open-source cis-eks-kyverno project implements 62 CIS controls as Kyverno policies.

    # Install Kyverno
    helm repo add kyverno https://kyverno.github.io/kyverno/
    helm install kyverno kyverno/kyverno -n kyverno --create-namespace
    
    # Apply CIS EKS policies
    # Apply CIS-aligned policies. Note: raw GitHub URLs point at files, not
    # directories, so apply each policy individually, e.g.:
    kubectl apply -f https://raw.githubusercontent.com/kyverno/policies/main/pod-security/restricted/disallow-privilege-escalation/disallow-privilege-escalation.yaml

    Best practice: Combine kube-bench (periodic node-level scanning), Security Hub (continuous AWS-level controls), and Kyverno/Gatekeeper (real-time admission control) for comprehensive coverage across all CIS Benchmark sections.


    Common EKS Misconfigurations

    • Public API endpoint with no CIDR restrictions -- Risk: cluster reconnaissance and brute-force attacks. Detection: Security Hub [EKS.1]
    • Broad node IAM role with S3/DynamoDB full access -- Risk: all pods inherit excessive permissions. Detection: IAM Access Analyzer
    • Unencrypted Kubernetes secrets -- Risk: secrets exposed in etcd backups or API responses. Detection: Security Hub [EKS.3]
    • cluster-admin bound to default service accounts -- Risk: any pod can fully control the cluster. Detection: kubectl get clusterrolebindings
    • No network policies (flat pod network) -- Risk: lateral movement from any compromised pod. Detection: kube-bench Section 5, manual audit
    • Containers running as root with all capabilities -- Risk: container escape to host node. Detection: PSA warnings, Kyverno/Gatekeeper
    • Outdated Kubernetes version -- Risk: known CVEs exploitable in unpatched clusters. Detection: Security Hub [EKS.2]

    Quick Reference Checklist

    1. Restrict API server endpoint access -- Critical
    2. Use Pod Identity for pod-level IAM -- Critical
    3. Enforce Pod Security Standards -- High
    4. RBAC least privilege and namespace isolation -- Critical
    5. Encrypt secrets with KMS envelope encryption -- High
    6. Enforce network policies for pod isolation -- High
    7. Container image security and admission control -- High
    8. Enable GuardDuty EKS protection -- Critical
    9. Enable control plane logging -- Critical
    10. Harden worker nodes -- High
    11. Leverage EKS Auto Mode -- Medium
    12. Continuous CIS Benchmark compliance -- High

    Related Resources

    Go Deeper: The State of AWS Security 2026

    This article is just the start. Get the full picture with our free whitepaper -- 8 chapters covering IAM, S3, VPC, monitoring, agentic AI security, compliance, and a prioritized action plan with 50+ CLI commands.
