Tarek Cheikh
Founder & AWS Security Expert
Amazon Elastic Kubernetes Service (EKS) is the most popular managed Kubernetes platform on AWS, running mission-critical workloads across thousands of organizations. But Kubernetes is complex -- with its own identity system, networking model, and API surface, it introduces an entirely new attack surface layered on top of AWS IAM. A single misconfigured ClusterRoleBinding, an exposed API server endpoint, or an unencrypted secret can give an attacker full cluster compromise.
In 2025, Wiz Research disclosed that over 14% of EKS clusters they analyzed had publicly accessible API endpoints with overly permissive RBAC bindings. The OWASP Kubernetes Top 10 lists insecure workload configurations, lack of network segmentation, and secrets management failures as the most exploited weaknesses. Meanwhile, AWS has been rapidly evolving EKS security -- launching Pod Identity as the successor to IRSA, EKS Auto Mode for fully managed infrastructure, enhanced VPC CNI network policies, and GuardDuty Extended Threat Detection for EKS.
This guide covers 12 advanced EKS security best practices, each with real CLI commands, policy examples, and references to the latest AWS and CIS controls.
By default, EKS creates a publicly accessible Kubernetes API server endpoint. This means anyone on the internet can attempt to authenticate against your cluster. Even with strong authentication, exposing the endpoint increases the attack surface and enables reconnaissance. Security Hub control [EKS.1] flags clusters with public endpoints.
# Make the API endpoint private-only
aws eks update-cluster-config --name my-cluster --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
# If public access is needed, restrict to specific CIDRs
aws eks update-cluster-config --name my-cluster --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24,198.51.100.0/24"
# Verify current endpoint configuration
aws eks describe-cluster --name my-cluster --query "cluster.resourcesVpcConfig.{Public:endpointPublicAccess,Private:endpointPrivateAccess,CIDRs:publicAccessCidrs}"
Security Hub: [EKS.1] EKS cluster endpoints should not be publicly accessible. Enable private endpoint access and disable or restrict public access to pass this control.
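The describe-cluster output above can also be checked programmatically, for example in a nightly audit job. A minimal sketch (the function name and finding strings are illustrative, not part of any AWS SDK):

```python
import json

def audit_endpoint_config(describe_cluster_json: str) -> list:
    """Flag risky API endpoint settings (Security Hub [EKS.1]).

    Expects the raw JSON output of `aws eks describe-cluster`.
    """
    vpc = json.loads(describe_cluster_json)["cluster"]["resourcesVpcConfig"]
    findings = []
    if vpc.get("endpointPublicAccess"):
        cidrs = vpc.get("publicAccessCidrs", [])
        # An empty CIDR list or 0.0.0.0/0 means the endpoint is open to the internet.
        if not cidrs or "0.0.0.0/0" in cidrs:
            findings.append("FAIL: public endpoint open to the internet")
        else:
            findings.append("WARN: public endpoint restricted to " + ",".join(cidrs))
    if not vpc.get("endpointPrivateAccess"):
        findings.append("WARN: private endpoint access disabled")
    return findings
```

Run it against every cluster returned by `aws eks list-clusters` to catch drift back to a public endpoint.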
Pods often need access to AWS services -- S3, DynamoDB, SQS, Secrets Manager. The wrong approach is attaching broad IAM policies to the node instance profile, which grants every pod on that node the same permissions. EKS Pod Identity is the recommended mechanism for granting fine-grained, per-pod AWS permissions, replacing the older IAM Roles for Service Accounts (IRSA) approach.
Advantages of Pod Identity over IRSA:
- Every association is auditable with a single API call: ListPodIdentityAssociations.
- Faster role changes without pod restarts.
- Supports session tags for attribute-based access control (ABAC).

# Step 1: Install the EKS Pod Identity Agent add-on
aws eks create-addon --cluster-name my-cluster --addon-name eks-pod-identity-agent
# Step 2: Create an IAM role with Pod Identity trust policy
cat <<'EOF' > pod-identity-trust.json
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "pods.eks.amazonaws.com"
},
"Action": ["sts:AssumeRole", "sts:TagSession"]
}]
}
EOF
aws iam create-role --role-name my-app-pod-role --assume-role-policy-document file://pod-identity-trust.json
# Attach only the permissions your pod needs (least privilege)
aws iam attach-role-policy --role-name my-app-pod-role --policy-arn arn:aws:iam::123456789012:policy/MyAppMinimalPolicy
# Step 3: Create the pod identity association
aws eks create-pod-identity-association --cluster-name my-cluster --namespace my-namespace --service-account my-service-account --role-arn arn:aws:iam::123456789012:role/my-app-pod-role
# List all pod identity associations for audit
aws eks list-pod-identity-associations --cluster-name my-cluster
When using Pod Identity or IRSA, block access to the EC2 Instance Metadata Service (IMDS) to prevent pods from inheriting node-level permissions:
# Block IMDS access at the node level (launch template user data)
# Set HttpPutResponseHopLimit=1 so containers cannot reach IMDS
aws ec2 modify-instance-metadata-options --instance-id i-1234567890abcdef0 --http-put-response-hop-limit 1 --http-tokens required
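The IMDS settings above can be verified fleet-wide by scanning describe-instances output. A minimal sketch (the function name and message strings are illustrative):

```python
import json

def audit_imds(describe_instances_json: str) -> list:
    """Flag worker nodes whose metadata options still expose IMDS to pods.

    Expects the raw JSON output of `aws ec2 describe-instances`.
    """
    findings = []
    for reservation in json.loads(describe_instances_json)["Reservations"]:
        for inst in reservation["Instances"]:
            opts = inst.get("MetadataOptions", {})
            iid = inst["InstanceId"]
            if opts.get("HttpTokens") != "required":
                findings.append(iid + ": IMDSv1 allowed (set HttpTokens=required)")
            if opts.get("HttpPutResponseHopLimit", 1) > 1:
                findings.append(iid + ": hop limit > 1 lets containers reach IMDS")
    return findings
```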
Best practice: Migrate from IRSA to Pod Identity for all new workloads. Pod Identity provides better auditability and simpler management at scale. IRSA is not deprecated but Pod Identity is the forward-looking standard.
Pods can request dangerous capabilities -- running as root, mounting the host filesystem, using host networking, escalating privileges. Pod Security Standards (PSS), enforced by the built-in Pod Security Admission (PSA) controller, provide three profiles: Privileged (unrestricted), Baseline (prevents known privilege escalations), and Restricted (heavily hardened, current best practice).
Apply PSS at the namespace level using labels:
# Enforce the restricted profile on a namespace
kubectl label namespace my-app-namespace pod-security.kubernetes.io/enforce=restricted pod-security.kubernetes.io/audit=restricted pod-security.kubernetes.io/warn=restricted
# For system namespaces that need elevated privileges, use baseline
kubectl label namespace kube-system pod-security.kubernetes.io/enforce=baseline pod-security.kubernetes.io/audit=restricted pod-security.kubernetes.io/warn=restricted
# Verify labels on all namespaces
kubectl get namespaces -L pod-security.kubernetes.io/enforce
Pods in a restricted namespace must:
- Run as a non-root user (runAsNonRoot: true)
- Drop all Linux capabilities (drop: ["ALL"])
- Disallow privilege escalation (allowPrivilegeEscalation: false)

# Example pod spec compliant with restricted profile
apiVersion: v1
kind: Pod
metadata:
name: secure-app
namespace: my-app-namespace
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
seccompProfile:
type: RuntimeDefault
containers:
- name: app
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v1.2.3
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: ["ALL"]
resources:
limits:
memory: "256Mi"
cpu: "500m"
Migration tip: Start by applying PSS in warn and audit modes first. Review warnings and audit logs to identify non-compliant workloads before switching to enforce.
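Before flipping enforce on, it can help to pre-screen manifests in CI. A minimal sketch covering a subset of the restricted-profile rules listed above (the function name is illustrative, and a real gate should use PSA itself or Kyverno):

```python
def check_restricted(pod: dict) -> list:
    """Check a pod manifest (parsed into a dict) against a few core
    rules of the restricted Pod Security Standard."""
    issues = []
    spec = pod.get("spec", {})
    psc = spec.get("securityContext", {})
    if not psc.get("runAsNonRoot"):
        issues.append("pod: runAsNonRoot must be true")
    # PSA also accepts seccomp set per container; this sketch only checks pod level.
    if psc.get("seccompProfile", {}).get("type") not in ("RuntimeDefault", "Localhost"):
        issues.append("pod: seccompProfile must be RuntimeDefault or Localhost")
    for c in spec.get("containers", []):
        csc = c.get("securityContext", {})
        if csc.get("allowPrivilegeEscalation") is not False:
            issues.append(c["name"] + ": allowPrivilegeEscalation must be false")
        if "ALL" not in csc.get("capabilities", {}).get("drop", []):
            issues.append(c["name"] + ": capabilities must drop ALL")
    return issues
```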
Kubernetes RBAC controls who can do what within the cluster. Overly permissive ClusterRoleBindings -- especially granting cluster-admin to service accounts or groups -- are one of the most common EKS misconfigurations. The system:masters group in EKS bypasses all RBAC and cannot be audited.
# Audit existing ClusterRoleBindings for overly broad access
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name'
# Use EKS access entries instead of aws-auth ConfigMap (recommended)
aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::123456789012:role/DevTeamRole --type STANDARD
# Associate a scoped access policy (namespace-level)
aws eks associate-access-policy --cluster-name my-cluster --principal-arn arn:aws:iam::123456789012:role/DevTeamRole --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy --access-scope type=namespace,namespaces=dev-team
- Prefer EKS access entries over the legacy aws-auth ConfigMap. Access entries are managed via the AWS API and are auditable in CloudTrail.
- Avoid system:masters for day-to-day operations. Use it only for break-glass emergency access.
- Run kubectl auth can-i --list --as=system:serviceaccount:ns:sa to verify effective permissions.
- Check what anonymous users can do with kubectl auth can-i --list --as=system:anonymous.

CIS EKS Benchmark: Section 4 covers RBAC and Service Accounts with 15 controls, including minimizing wildcard use in Roles (4.1.3) and ensuring default service account tokens are not auto-mounted (4.1.6).
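The jq audit shown earlier reports only binding names; the sketch below (illustrative function name) extends it to list the subjects behind each cluster-admin grant, which is usually what you need to triage:

```python
import json

def find_cluster_admin_subjects(crb_json: str) -> list:
    """List (binding, subject) pairs granted cluster-admin.

    Expects the output of `kubectl get clusterrolebindings -o json`.
    """
    hits = []
    for item in json.loads(crb_json)["items"]:
        if item["roleRef"]["name"] != "cluster-admin":
            continue
        for subject in item.get("subjects", []):
            hits.append((item["metadata"]["name"],
                         subject["kind"] + "/" + subject["name"]))
    return hits
```

Any ServiceAccount subject in the result deserves immediate attention: it means a pod token can control the whole cluster.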
Kubernetes secrets are stored in etcd, and by default EKS encrypts the underlying EBS volumes. However, this is storage-level encryption only -- anyone with etcd access or API server access can read secret values in plaintext. Envelope encryption with AWS KMS adds an additional layer: secrets are encrypted with a data encryption key (DEK), and the DEK itself is encrypted with your KMS customer managed key (CMK). Security Hub control [EKS.3] checks for this.
# Create a KMS key for EKS secrets encryption
aws kms create-key --description "EKS Secrets Encryption Key" --key-usage ENCRYPT_DECRYPT --origin AWS_KMS
# Enable secrets encryption on an existing cluster
aws eks associate-encryption-config --cluster-name my-cluster --encryption-config '[{
"resources": ["secrets"],
"provider": {
"keyArn": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
}
}]'
# For a new cluster, include encryption at creation time
aws eks create-cluster --name new-cluster --role-arn arn:aws:iam::123456789012:role/EKSClusterRole --resources-vpc-config subnetIds=subnet-abc123,subnet-def456 --encryption-config '[{
"resources": ["secrets"],
"provider": {
"keyArn": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
}
}]'
# Verify encryption configuration
aws eks describe-cluster --name my-cluster --query "cluster.encryptionConfig"
Important: Once envelope encryption is enabled, it cannot be disabled. All new secrets are encrypted immediately, and existing secrets are encrypted upon their next update. Use kubectl get secrets --all-namespaces -o json | kubectl replace -f - to force re-encryption of all existing secrets.
Security Hub: [EKS.3] EKS clusters should use encrypted Kubernetes secrets.
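The verification step above can be folded into an account-wide audit. A minimal sketch (illustrative function name) that evaluates the describe-cluster output:

```python
import json

def secrets_encrypted(describe_cluster_json: str) -> bool:
    """True if the cluster envelope-encrypts Kubernetes secrets with a KMS key.

    Expects the raw JSON output of `aws eks describe-cluster`.
    """
    cluster = json.loads(describe_cluster_json)["cluster"]
    # encryptionConfig is absent/null on clusters without envelope encryption.
    for cfg in cluster.get("encryptionConfig") or []:
        if "secrets" in cfg.get("resources", []) and cfg.get("provider", {}).get("keyArn"):
            return True
    return False
```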
By default, all pods in a Kubernetes cluster can communicate with every other pod -- there is no network segmentation. This means a compromised pod can freely reach databases, internal APIs, and the metadata service. Network policies provide firewall rules at the pod level.
Amazon VPC CNI now natively supports Kubernetes Network Policies using eBPF, removing the need for third-party CNI plugins like Calico for basic policy enforcement:
# Enable network policy support on VPC CNI
aws eks create-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.19.0-eksbuild.1 --configuration-values '{"enableNetworkPolicy": "true"}'
# Or update existing VPC CNI add-on
kubectl set env daemonset aws-node -n kube-system ENABLE_NETWORK_POLICY="true"
# Default deny all ingress traffic in a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
---
# Allow traffic only from specific pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-frontend-to-backend
namespace: production
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8080
---
# Restrict egress to only required services
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backend-egress
namespace: production
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: database
ports:
- protocol: TCP
port: 5432
- to: # Allow DNS resolution
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
2025 Update: Amazon EKS introduced enhanced network security policies in late 2025, adding DNS-based egress policies and cluster-wide network access filters. For advanced use cases (global policies, host endpoint protection, policy tiers), Calico Enterprise remains the most feature-rich option.
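When rolling default-deny out across many namespaces, generating the manifests beats hand-editing YAML. A minimal sketch (illustrative helper; kubectl accepts JSON manifests as well as YAML):

```python
import json

def default_deny_manifest(namespace: str) -> str:
    """Render a default-deny-ingress NetworkPolicy for one namespace.

    Emitted as JSON, which `kubectl apply -f -` accepts just like YAML.
    """
    policy = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-ingress", "namespace": namespace},
        # An empty podSelector matches every pod in the namespace.
        "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
    }
    return json.dumps(policy, indent=2)
```

Pipe the output per namespace into `kubectl apply -f -`, then layer allow rules like the frontend-to-backend example above on top.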
Running unvetted container images is one of the fastest paths to cluster compromise. Supply chain attacks, vulnerable base images, and embedded malware are all common vectors. A defense-in-depth approach combines image scanning, trusted registries, and admission controllers.
# Enable enhanced scanning (powered by Amazon Inspector)
aws ecr put-registry-scanning-configuration --scan-type ENHANCED --rules '[{
"repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
"scanFrequency": "CONTINUOUS_SCAN"
}]'
# Check scan findings for a specific image
aws ecr describe-image-scan-findings --repository-name my-app --image-id imageTag=latest --query "imageScanFindings.findingSeverityCounts"
# Enable image tag immutability to prevent tag overwriting
aws ecr put-image-tag-mutability --repository-name my-app --image-tag-mutability IMMUTABLE
Use Kyverno or OPA Gatekeeper to enforce image policies at deploy time:
# Kyverno policy: only allow images from your ECR registry
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: restrict-image-registries
spec:
validationFailureAction: Enforce
rules:
- name: validate-registries
match:
any:
- resources:
kinds:
- Pod
validate:
message: "Images must come from the approved ECR registry."
pattern:
spec:
containers:
- image: "123456789012.dkr.ecr.*.amazonaws.com/*"
---
# Kyverno policy: require image digest (no mutable tags)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-image-digest
spec:
validationFailureAction: Enforce
rules:
- name: check-digest
match:
any:
- resources:
kinds:
- Pod
validate:
message: "Images must use a digest, not a tag."
pattern:
spec:
containers:
- image: "*@sha256:*"
2025 Update: Amazon Inspector now maps ECR images to specific EKS pods, correlating vulnerabilities with the exact running workloads. Enhanced scanning supports scratch, distroless, and Chainguard base images.
Amazon GuardDuty provides two layers of EKS threat detection: EKS Audit Log Monitoring (analyzes Kubernetes API activity) and EKS Runtime Monitoring (uses a managed eBPF agent to detect container-level threats like reverse shells, crypto miners, and privilege escalation).
# Enable GuardDuty EKS Protection
aws guardduty update-detector --detector-id abcdef1234567890 --features '[{
"Name": "EKS_AUDIT_LOGS",
"Status": "ENABLED"
}, {
"Name": "RUNTIME_MONITORING",
"Status": "ENABLED",
"AdditionalConfiguration": [{
"Name": "EKS_ADDON_MANAGEMENT",
"Status": "ENABLED"
}]
}]'
# Verify the GuardDuty agent is running on EKS nodes
kubectl get pods -n amazon-guardduty -l app=guardduty-agent
# List EKS-specific findings
aws guardduty list-findings --detector-id abcdef1234567890 --finding-criteria '{
"Criterion": {
"resource.resourceType": {
"Eq": ["EKSCluster"]
}
}
}'
Example findings include suspicious exec into pods and communication with the Tor network.

Best practice: Enable EKS_ADDON_MANAGEMENT to let GuardDuty automatically manage the runtime agent add-on. Use an SCP to prevent disabling GuardDuty in member accounts. Route findings to Security Hub for centralized visibility.
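Findings returned by list-findings/get-findings carry a 0-10 severity score, which is useful for triage ordering. A minimal sketch (illustrative helper, assuming finding dicts with the `Severity` and `Type` fields from GuardDuty output):

```python
def triage_findings(findings: list) -> list:
    """Order GuardDuty findings by severity, highest first.

    GuardDuty severity is a 0-10 score; each finding dict is assumed to
    carry the Severity and Type fields from get-findings output.
    """
    ranked = sorted(findings, key=lambda f: f.get("Severity", 0), reverse=True)
    return ["%.1f %s" % (f["Severity"], f["Type"]) for f in ranked]
```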
EKS control plane logs are disabled by default. Without them, you have no visibility into API server requests, authentication events, or authorization decisions -- making incident response effectively impossible.
# Enable all five log types
aws eks update-cluster-config --name my-cluster --logging '{
"clusterLogging": [{
"types": ["api", "audit", "authenticator", "controllerManager", "scheduler"],
"enabled": true
}]
}'
# Verify logging configuration
aws eks describe-cluster --name my-cluster --query "cluster.logging.clusterLogging"
# Query audit logs in CloudWatch Logs Insights
# Find all failed authentication attempts
aws logs start-query --log-group-name /aws/eks/my-cluster/cluster --start-time $(date -d '24 hours ago' +%s) --end-time $(date +%s) --query-string 'fields @timestamp, user.username, responseStatus.code
| filter @logStream like /kube-apiserver-audit/
| filter responseStatus.code >= 400
| sort @timestamp desc
| limit 50'
Cost optimization: At minimum, enable audit and authenticator logs. Set CloudWatch Logs retention to 90-365 days based on compliance requirements. Export to S3 for long-term retention at lower cost.
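The same failed-authentication hunt shown in the Logs Insights query can be run offline against exported audit events. A minimal sketch (illustrative function name, assuming a list of audit event dicts parsed from a CloudWatch Logs export):

```python
from collections import Counter

def failed_auth_by_user(audit_events: list) -> Counter:
    """Tally Kubernetes audit events with an HTTP status >= 400 per username.

    Takes a list of audit event dicts, e.g. parsed line-by-line from an
    exported kube-apiserver-audit log stream.
    """
    counts = Counter()
    for event in audit_events:
        if event.get("responseStatus", {}).get("code", 0) >= 400:
            user = event.get("user", {}).get("username", "unknown")
            counts[user] += 1
    return counts
```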
Worker nodes run your containers and are the primary compute surface. A compromised node means access to all pods on that node, their secrets, service account tokens, and potentially the ability to pivot to other nodes or the control plane.
# Create a managed node group with Bottlerocket AMI
aws eks create-nodegroup --cluster-name my-cluster --nodegroup-name secure-nodes --node-role arn:aws:iam::123456789012:role/EKSNodeRole --subnets subnet-abc123 subnet-def456 --ami-type BOTTLEROCKET_x86_64 --instance-types m6i.large --scaling-config minSize=2,maxSize=10,desiredSize=3 --update-config maxUnavailable=1
# Enable automatic node group updates
aws eks update-nodegroup-config --cluster-name my-cluster --nodegroup-name secure-nodes --update-config maxUnavailable=1
- Require IMDSv2 by setting HttpTokens=required in the launch template to block IMDSv1 (vulnerable to SSRF attacks).
- Scope the node IAM role to only the required managed policies: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.

Security Hub: [EKS.2] EKS clusters should run on a supported Kubernetes version.
EKS Auto Mode, launched at re:Invent 2024 and enhanced throughout 2025, shifts the operational and security burden of worker node management to AWS. It eliminates the need to configure managed node groups, auto-scaling groups, or CNI plugins -- AWS manages everything from node provisioning to OS patching.
# Create a cluster with EKS Auto Mode enabled
aws eks create-cluster --name auto-mode-cluster --role-arn arn:aws:iam::123456789012:role/EKSAutoClusterRole --resources-vpc-config subnetIds=subnet-abc123,subnet-def456 --compute-config enabled=true,nodePools=general-purpose,nodeRoleArn=arn:aws:iam::123456789012:role/EKSAutoNodeRole --kubernetes-network-config elasticLoadBalancing=enabled --storage-config blockStorage=enabled
# Check Auto Mode node pool status
aws eks describe-cluster --name auto-mode-cluster --query "cluster.computeConfig"
# Run CIS compliance checks on Auto Mode nodes using kubectl debug
kubectl debug node/auto-mode-node-xyz -it --image=aquasec/kube-bench:latest -- kube-bench run --targets node
2025 Update: EKS Auto Mode now supports FIPS-validated cryptographic modules for FedRAMP compliance and is available in AWS GovCloud regions. AWS published a security whitepaper detailing the shared responsibility model for Auto Mode clusters.
The CIS Amazon EKS Benchmark v1.7.0 contains 46 security recommendations across five areas: Control Plane Configuration (2 controls), Worker Node Security (13 controls), RBAC and Service Accounts (15 controls), Pod Security Standards (9 controls), and Managed Services (7 controls). Continuous compliance ensures your cluster does not drift from these baselines.
# Run kube-bench as a Kubernetes Job
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
name: kube-bench
namespace: kube-system
spec:
template:
spec:
hostPID: true
containers:
- name: kube-bench
image: aquasec/kube-bench:latest
command: ["kube-bench", "run", "--targets", "node", "--benchmark", "eks-1.7.0"]
volumeMounts:
- name: var-lib-kubelet
mountPath: /var/lib/kubelet
readOnly: true
- name: etc-systemd
mountPath: /etc/systemd
readOnly: true
- name: etc-kubernetes
mountPath: /etc/kubernetes
readOnly: true
restartPolicy: Never
volumes:
- name: var-lib-kubelet
hostPath:
path: /var/lib/kubelet
- name: etc-systemd
hostPath:
path: /etc/systemd
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
backoffLimit: 0
EOF
# View results
kubectl logs job/kube-bench -n kube-system
# Enable the AWS Foundational Security Best Practices standard
aws securityhub batch-enable-standards --standards-subscription-requests '[{
"StandardsArn": "arn:aws:securityhub:::standards/aws-foundational-security-best-practices/v/1.0.0"
}]'
# Check EKS-specific Security Hub findings
aws securityhub get-findings --filters '{
"ResourceType": [{"Value": "AwsEksCluster", "Comparison": "EQUALS"}],
"ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}]
}' --query "Findings[].{Title:Title,Status:Compliance.Status,Id:ProductFields.ControlId}"
Kyverno can enforce CIS controls as Kubernetes-native policies, providing real-time admission control rather than periodic scanning. The open-source cis-eks-kyverno project implements 62 CIS controls as Kyverno policies.
# Install Kyverno
helm repo add kyverno https://kyverno.github.io/kyverno/
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
# Apply CIS EKS policies
kubectl apply -f https://raw.githubusercontent.com/kyverno/policies/main/pod-security/restricted/
Best practice: Combine kube-bench (periodic node-level scanning), Security Hub (continuous AWS-level controls), and Kyverno/Gatekeeper (real-time admission control) for comprehensive coverage across all CIS Benchmark sections.
| Misconfiguration | Risk | Detection |
|---|---|---|
| Public API endpoint with no CIDR restrictions | Cluster reconnaissance and brute-force attacks | Security Hub [EKS.1] |
| Broad node IAM role with S3/DynamoDB full access | All pods inherit excessive permissions | IAM Access Analyzer |
| Unencrypted Kubernetes secrets | Secrets exposed in etcd backups or API responses | Security Hub [EKS.3] |
| cluster-admin bound to default service accounts | Any pod can fully control the cluster | kubectl get clusterrolebindings |
| No network policies (flat pod network) | Lateral movement from any compromised pod | kube-bench Section 5, manual audit |
| Containers running as root with all capabilities | Container escape to host node | PSA warnings, Kyverno/Gatekeeper |
| Outdated Kubernetes version | Known CVEs exploitable in unpatched clusters | Security Hub [EKS.2] |
| # | Practice | Priority |
|---|---|---|
| 1 | Restrict API server endpoint access | Critical |
| 2 | Use Pod Identity for pod-level IAM | Critical |
| 3 | Enforce Pod Security Standards | High |
| 4 | RBAC least privilege and namespace isolation | Critical |
| 5 | Encrypt secrets with KMS envelope encryption | High |
| 6 | Enforce network policies for pod isolation | High |
| 7 | Container image security and admission control | High |
| 8 | Enable GuardDuty EKS protection | Critical |
| 9 | Enable control plane logging | Critical |
| 10 | Harden worker nodes | High |
| 11 | Leverage EKS Auto Mode | Medium |
| 12 | Continuous CIS Benchmark compliance | High |
This article is just the start. Get the full picture with our free whitepaper - 8 chapters covering IAM, S3, VPC, monitoring, agentic AI security, compliance, and a prioritized action plan with 50+ CLI commands.