
    AWS S3 Security Best Practices

    Tarek Cheikh

    Founder & AWS Security Expert


    Amazon S3 stores trillions of objects and handles millions of requests per second globally. It is the default storage layer for backups, data lakes, application assets, and log archives. Because S3 is so ubiquitous, it is also one of the most frequently misconfigured and attacked AWS services. A single public bucket can expose millions of records in minutes.

    The consequences of S3 misconfigurations are not theoretical. In January 2025, the Codefinger ransomware campaign exploited stolen AWS credentials paired with SSE-C (customer-provided encryption keys) to encrypt S3 objects with attacker-controlled keys, then set 7-day lifecycle deletion policies to pressure victims into paying. AWS responded by allowing organizations to disable SSE-C at the account level. In August 2025, an Indian banking platform left an S3 bucket publicly accessible, exposing 273,000 bank transfer PDFs containing names, account numbers, and transaction details. In February 2025, watchTowr Labs demonstrated that abandoned S3 buckets -- where the original account was deleted but the bucket name remained referenced in code, templates, or DNS -- could be re-registered by attackers who created new accounts in the same region, hijacking the namespace to serve malicious content.

    This guide covers 12 battle-tested S3 best practices, each with real AWS CLI commands, audit procedures, and the latest 2025-2026 updates from AWS.

    1. Enable S3 Block Public Access

    S3 Block Public Access is your first and most critical line of defense. It overrides any bucket policy or ACL that would grant public access. AWS enables it by default on new buckets since April 2023, but legacy buckets and accounts created before that date may still lack it.

    Implementation

    Enable at both the account level (covers all buckets) and individual bucket level (defense in depth):

    # Enable Block Public Access at the ACCOUNT level (all four settings)
    aws s3control put-public-access-block   --account-id 123456789012   --public-access-block-configuration     BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
    
    # Verify account-level settings
    aws s3control get-public-access-block --account-id 123456789012
    
    # Enable Block Public Access on a specific bucket
    aws s3api put-public-access-block   --bucket my-bucket   --public-access-block-configuration     BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
    
    # Audit: find buckets with public access
    aws s3api list-buckets --query "Buckets[].Name" --output text | tr '\t' '\n' | while read bucket; do
      echo "Checking $bucket..."
      aws s3api get-public-access-block --bucket "$bucket" 2>/dev/null || echo "  NO Block Public Access configured!"
    done

    CIS Benchmark: Control 2.1.4 (S3 Block Public Access enabled at account level). This is a Level 1 control -- every AWS account must have it.

    Exception handling: If you genuinely need a public bucket (e.g., static website hosting), use CloudFront with an Origin Access Control (OAC) instead. The bucket stays private; CloudFront serves the content publicly.

    2. Disable ACLs with Bucket Owner Enforced

    S3 ACLs are a legacy access control mechanism from before bucket policies existed. They are a frequent source of accidental public exposure because they operate independently of bucket policies and are harder to audit. AWS recommends disabling ACLs entirely.

    # Set bucket ownership to BucketOwnerEnforced (disables all ACLs)
    aws s3api put-bucket-ownership-controls   --bucket my-bucket   --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'
    
    # Verify ownership controls
    aws s3api get-bucket-ownership-controls --bucket my-bucket
    
    # Audit: check all buckets for ACL status
    aws s3api list-buckets --query "Buckets[].Name" --output text | tr '\t' '\n' | while read bucket; do
      ownership=$(aws s3api get-bucket-ownership-controls --bucket "$bucket" 2>/dev/null     --query "OwnershipControls.Rules[0].ObjectOwnership" --output text)
      if [ "$ownership" != "BucketOwnerEnforced" ]; then
        echo "WARNING: $bucket still uses ACLs (ownership: $ownership)"
      fi
    done

    When BucketOwnerEnforced is set, all objects in the bucket are owned by the bucket owner regardless of who uploaded them, and all ACL-based access grants are ignored. Access is controlled exclusively through IAM policies and bucket policies.

    AWS default: all buckets created since April 2023 have ACLs disabled (BucketOwnerEnforced) out of the box; the commands above bring older buckets in line.

    3. Enforce TLS/HTTPS via Bucket Policy

    By default, S3 accepts both HTTP and HTTPS requests. Unencrypted HTTP requests expose data in transit to network-level interception. You must explicitly deny HTTP access via a bucket policy condition.

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::my-bucket",
          "arn:aws:s3:::my-bucket/*"
        ],
        "Condition": {
          "Bool": {
            "aws:SecureTransport": "false"
          }
        }
      }]
    }
    # Apply the HTTPS-only bucket policy
    aws s3api put-bucket-policy   --bucket my-bucket   --policy file://https-only-policy.json
    
    # Verify the policy
    aws s3api get-bucket-policy --bucket my-bucket --query "Policy" --output text | python3 -m json.tool

    CIS Benchmark: Control 2.1.1 (deny HTTP access to S3 buckets). This applies to every bucket in your account without exception.

    Best practice: Enforce a minimum TLS version of 1.2 by adding a separate Deny statement with the condition "NumericLessThan": {"s3:TlsVersion": "1.2"}. Note that adding it as a second condition inside the same statement would require both conditions to match before denying, so the TLS check needs its own statement. AWS retired TLS 1.0 and 1.1 across its API endpoints, including S3, at the end of 2023.
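    To make the audit step concrete, here is a minimal Python sketch; the `enforces_https` helper and the bucket name are illustrative, not AWS tooling. It inspects a policy document (such as the decoded output of get-bucket-policy) for the HTTPS-only deny, and the sample policy also shows the minimum-TLS statement as its own Deny:

```python
# Illustrative audit helper and sample policy -- not part of any AWS SDK.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            # Separate statement: deny any request negotiated below TLS 1.2.
            "Sid": "DenyOldTls",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
            "Condition": {"NumericLessThan": {"s3:TlsVersion": "1.2"}},
        },
    ],
}


def enforces_https(doc: dict) -> bool:
    """True if any statement denies requests made over plain HTTP."""
    for stmt in doc.get("Statement", []):
        cond = stmt.get("Condition", {}).get("Bool", {})
        if stmt.get("Effect") == "Deny" and cond.get("aws:SecureTransport") == "false":
            return True
    return False


print(enforces_https(policy))  # True
```

    Run the helper over every bucket's policy in your audit loop; any bucket where it returns False is missing the HTTPS-only deny.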

    4. Configure Encryption Properly

    S3 offers multiple server-side encryption options. Choosing the right one -- and staying current with AWS changes -- is essential for both security and compliance.

    SSE-KMS with Bucket Keys (Recommended)

    SSE-KMS provides envelope encryption with AWS KMS, giving you key management, rotation, and audit via CloudTrail. Bucket Keys reduce KMS API calls (and costs) by up to 99% by generating a short-lived bucket-level key.

    # Create a KMS key for S3 encryption
    aws kms create-key --description "S3 encryption key"   --key-usage ENCRYPT_DECRYPT --key-spec SYMMETRIC_DEFAULT
    
    # Set default bucket encryption to SSE-KMS with Bucket Keys enabled
    aws s3api put-bucket-encryption   --bucket my-bucket   --server-side-encryption-configuration '{
        "Rules": [{
          "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/key-id"
          },
          "BucketKeyEnabled": true
        }]
      }'
    
    # Verify encryption configuration
    aws s3api get-bucket-encryption --bucket my-bucket

    SSE-C Deprecation (Critical Update)

    Following the Codefinger ransomware attack in January 2025 -- where attackers used stolen credentials to encrypt objects with their own SSE-C keys, making recovery impossible without paying ransom -- AWS announced that SSE-C will be disabled by default starting April 2026. Organizations should proactively disable SSE-C now using the s3:x-amz-server-side-encryption-customer-algorithm condition key in bucket policies.

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "DenySSEC",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Condition": {
          "StringNotEquals": {
            "s3:x-amz-server-side-encryption": ["aws:kms", "AES256"]
          }
        }
      }]
    }
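    The statement above blocks SSE-C indirectly by allowing only aws:kms and AES256. To target SSE-C explicitly with the condition key named above, a statement along these lines (a sketch -- verify against your own upload paths before relying on it) denies any PutObject that carries a customer-provided key algorithm:

```json
{
  "Sid": "DenySSECKeyHeader",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "Null": {
      "s3:x-amz-server-side-encryption-customer-algorithm": "false"
    }
  }
}
```

    "Null": "false" means the header is present -- i.e., the request is using SSE-C -- so the upload is denied.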

    UpdateObjectEncryption API (January 2026)

    AWS introduced the UpdateObjectEncryption API in January 2026, allowing you to change the encryption configuration of existing objects in place without re-uploading them. This significantly simplifies migration from SSE-S3 or SSE-C to SSE-KMS.

    # Re-encrypt an existing object to SSE-KMS (new API)
    aws s3api update-object-encryption   --bucket my-bucket   --key my-object   --object-encryption '{"SSEKMS": {"KMSKeyArn": "arn:aws:kms:us-east-1:123456789012:key/key-id", "BucketKeyEnabled": true}}'

    Note: Since January 2023, S3 applies SSE-S3 encryption to all new objects by default, so nothing is written unencrypted; upgrading the default to SSE-KMS with Bucket Keys still requires the explicit configuration above.

    5. Enable Versioning

    Versioning protects against accidental deletion and overwrites by keeping multiple variants of every object. It is a prerequisite for S3 Object Lock, cross-region replication, and MFA Delete.

    # Enable versioning on a bucket
    aws s3api put-bucket-versioning   --bucket my-bucket   --versioning-configuration Status=Enabled
    
    # Verify versioning status
    aws s3api get-bucket-versioning --bucket my-bucket
    
    # List object versions (useful during incident response)
    aws s3api list-object-versions --bucket my-bucket --prefix important-file.txt
    
    # Recover a deleted object by removing the delete marker
    aws s3api delete-object   --bucket my-bucket   --key important-file.txt   --version-id "delete-marker-version-id"

    Cost management: Versioning increases storage costs because all versions are retained. Use S3 Lifecycle policies to expire non-current versions after a defined period (e.g., 90 days) while keeping the current version indefinitely.

    # Set lifecycle policy to expire non-current versions after 90 days
    aws s3api put-bucket-lifecycle-configuration   --bucket my-bucket   --lifecycle-configuration '{
        "Rules": [{
          "ID": "ExpireOldVersions",
          "Status": "Enabled",
          "NoncurrentVersionExpiration": {
            "NoncurrentDays": 90,
            "NewerNoncurrentVersions": 3
          },
          "Filter": {"Prefix": ""}
        }]
      }'

    Versioning is also the foundation for CIS Control 2.1.2 (MFA Delete), covered next.

    6. Enable MFA Delete

    MFA Delete adds a second layer of protection by requiring multi-factor authentication to permanently delete object versions or change the versioning state of a bucket. Even if an attacker compromises IAM credentials, they cannot permanently destroy data without the MFA device.

    Important Constraints

    • Root account only: MFA Delete can only be enabled by the root account -- no IAM user or role can do it.
    • CLI only: It cannot be enabled via the console; you must use the AWS CLI or SDK.
    • Versioning required: The bucket must have versioning enabled first.
    # Enable MFA Delete (MUST be run as root with MFA)
    aws s3api put-bucket-versioning   --bucket my-bucket   --versioning-configuration Status=Enabled,MFADelete=Enabled   --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
    
    # Verify MFA Delete status
    aws s3api get-bucket-versioning --bucket my-bucket
    # Should show: "Status": "Enabled", "MFADelete": "Enabled"

    Operational consideration: Once MFA Delete is enabled, every permanent deletion of an object version requires the MFA serial number and token code. Plan your operational workflows accordingly -- automated cleanup scripts will not work without MFA integration.

    7. Use S3 Object Lock for WORM Compliance

    S3 Object Lock provides write-once-read-many (WORM) protection, meeting regulatory requirements for SEC Rule 17a-4, FINRA, HIPAA, and other frameworks that mandate immutable data retention.

    # Create a bucket with Object Lock enabled (must be set at creation time)
    aws s3api create-bucket   --bucket compliance-bucket   --region us-east-1   --object-lock-enabled-for-bucket
    
    # Set a default retention policy (Compliance mode - cannot be shortened by anyone)
    aws s3api put-object-lock-configuration   --bucket compliance-bucket   --object-lock-configuration '{
        "ObjectLockEnabled": "Enabled",
        "Rule": {
          "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Years": 7
          }
        }
      }'
    
    # Apply a Legal Hold to a specific object (prevents deletion until removed)
    aws s3api put-object-legal-hold   --bucket compliance-bucket   --key financial-report-2025.pdf   --legal-hold Status=ON

    Retention Modes

    • Compliance mode: No one -- including the root account -- can delete or shorten the retention period. Use for regulatory requirements.
    • Governance mode: Users with the s3:BypassGovernanceRetention permission can override. Use for internal data governance.
    • Legal Hold: Independent of retention period. Prevents deletion until explicitly removed. Use for litigation holds.

    Ransomware defense: Object Lock in Compliance mode is one of the strongest defenses against ransomware. Even with full admin access, an attacker cannot delete or encrypt locked objects.

    8. Enable Server Access Logging and CloudTrail Data Events

    Without logging, you have no visibility into who accessed what data, when, or from where. S3 offers two complementary logging mechanisms.

    Server Access Logging

    Provides detailed records of individual requests to a bucket. Delivered to a target bucket in a best-effort manner.

    # Create a logging target bucket
    aws s3api create-bucket --bucket my-access-logs --region us-east-1
    
    # Grant S3 log delivery permissions via bucket policy (ACL-free approach)
    aws s3api put-bucket-policy --bucket my-access-logs --policy '{
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "S3ServerAccessLogsPolicy",
        "Effect": "Allow",
        "Principal": {"Service": "logging.s3.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-access-logs/*",
        "Condition": {"StringEquals": {"aws:SourceAccount": "123456789012"}}
      }]
    }'
    
    # Enable server access logging
    aws s3api put-bucket-logging   --bucket my-bucket   --bucket-logging-status '{
        "LoggingEnabled": {
          "TargetBucket": "my-access-logs",
          "TargetPrefix": "my-bucket-logs/"
        }
      }'

    CloudTrail Data Events

    Provides a complete, ordered audit trail of S3 API calls with IAM identity, source IP, and request parameters. Essential for compliance and incident response.

    # Enable CloudTrail S3 data events for all buckets
    aws cloudtrail put-event-selectors   --trail-name my-trail   --event-selectors '[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": true,
        "DataResources": [{
          "Type": "AWS::S3::Object",
          "Values": ["arn:aws:s3"]
        }]
      }]'
    
    # Or for specific buckets only (lower cost)
    aws cloudtrail put-event-selectors   --trail-name my-trail   --event-selectors '[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": true,
        "DataResources": [{
          "Type": "AWS::S3::Object",
          "Values": ["arn:aws:s3:::sensitive-bucket/"]
        }]
      }]'

    CIS Benchmark: Control 3.6 (CloudTrail S3 data events enabled). CloudTrail data events have a cost per 100,000 events -- use server access logging for high-volume buckets and CloudTrail for sensitive buckets.

    9. Use VPC Endpoints for S3

    By default, requests from EC2 instances resolve to S3's public endpoints, so the traffic needs a route out of the VPC through an internet gateway or NAT device (even though same-region traffic stays on the AWS network). A VPC Gateway Endpoint gives S3 a private route inside the VPC, removing the internet-facing path and enabling bucket policies that restrict access to specific VPCs.

    # Create a Gateway VPC Endpoint for S3
    aws ec2 create-vpc-endpoint   --vpc-id vpc-abc123   --service-name com.amazonaws.us-east-1.s3   --route-table-ids rtb-abc123 rtb-def456
    
    # Verify the endpoint
    aws ec2 describe-vpc-endpoints   --filters "Name=service-name,Values=com.amazonaws.us-east-1.s3"

    Restrict Bucket Access to VPC Endpoint Only

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "DenyAccessOutsideVPCEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::internal-data-bucket",
          "arn:aws:s3:::internal-data-bucket/*"
        ],
        "Condition": {
          "StringNotEquals": {
            "aws:sourceVpce": "vpce-abc123"
          }
        }
      }]
    }

    Gateway vs. Interface endpoints: Gateway endpoints are free and work for S3 and DynamoDB. Interface endpoints (PrivateLink) cost money but support DNS resolution from on-premises via Direct Connect or VPN. Use Gateway endpoints unless you need on-premises access.

    Orphaned bucket warning: In February 2025, watchTowr Labs demonstrated that attackers could hijack abandoned S3 bucket names by creating new AWS accounts in the same region and claiming the namespace. Applications, CloudFormation templates, and CI/CD pipelines that referenced the old bucket name would then download attacker-controlled content. Always use VPC endpoint policies and bucket policies together to restrict which buckets your VPC can access.
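    On the endpoint side of that pairing, an endpoint policy limits which buckets workloads in the VPC can reach at all, so a hijacked bucket name outside the allow list is simply unreachable. A sketch (bucket name illustrative), applied with aws ec2 modify-vpc-endpoint --policy-document:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowOnlyOwnedBuckets",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::internal-data-bucket",
      "arn:aws:s3:::internal-data-bucket/*"
    ]
  }]
}
```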

    10. Use S3 Access Points for Multi-Team Access

    Bucket policies become unmanageable at scale. A single bucket shared by five teams, three applications, and two partner accounts can result in a policy that exceeds the 20 KB size limit. S3 Access Points solve this by providing named network endpoints, each with its own access policy.

    # Create an access point for the analytics team
    aws s3control create-access-point   --account-id 123456789012   --name analytics-ap   --bucket my-data-lake   --vpc-configuration VpcId=vpc-abc123
    
    # Set the access point policy
    aws s3control put-access-point-policy   --account-id 123456789012   --name analytics-ap   --policy '{
        "Version": "2012-10-17",
        "Statement": [{
          "Sid": "AnalyticsReadOnly",
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::123456789012:role/AnalyticsTeamRole"},
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": [
            "arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-ap",
            "arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-ap/object/*"
          ]
        }]
      }'
    
    # Delegate bucket access control to access points
    aws s3api put-bucket-policy   --bucket my-data-lake   --policy '{
        "Version": "2012-10-17",
        "Statement": [{
          "Sid": "DelegateToAccessPoints",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::my-data-lake",
            "arn:aws:s3:::my-data-lake/*"
          ],
          "Condition": {
            "StringEquals": {
              "s3:DataAccessPointAccount": "123456789012"
            }
          }
        }]
      }'

    Best practice: Restrict access points to a specific VPC using the VpcConfiguration parameter. This ensures that the access point can only be used from within your VPC, not from the public internet.

    11. Secure Presigned URLs

    Presigned URLs grant temporary access to private S3 objects without requiring AWS credentials. They are widely used for file uploads, downloads, and sharing. However, they inherit the permissions of the signer and can be abused if not properly constrained.

    Best Practices for Presigned URLs

    • Short expiry: Set expiration to the minimum necessary (minutes, not hours). The SigV4 hard limit is 7 days for URLs signed with long-lived IAM user credentials; URLs signed with role credentials stop working when the STS session expires (at most 12 hours).
    • Use IAM roles, not IAM users: Role-based presigned URLs are inherently time-limited by the STS session duration.
    • IP restrictions: Add aws:SourceIp conditions to the bucket policy to restrict which IPs can use presigned URLs.
    • Separate signing roles: Create a dedicated IAM role with minimal permissions for generating presigned URLs.
    # Generate a presigned URL with 5-minute expiry
    aws s3 presign s3://my-bucket/reports/quarterly.pdf --expires-in 300
    
    # Note: 'aws s3 presign' generates GET URLs only.
    # For PUT presigned URLs, use the AWS SDK (boto3, JS SDK, etc.)

    Bucket Policy to Restrict Presigned URL Usage by IP

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "RestrictPresignedURLByIP",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/shared/*",
        "Condition": {
          "NotIpAddress": {
            "aws:SourceIp": ["203.0.113.0/24", "198.51.100.0/24"]
          }
        }
      }]
    }

    Security risk: A presigned URL generated by an IAM user with s3:* permissions grants the URL holder those same permissions for the specified object. Always follow least privilege for the signing identity.
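    When auditing leaked or logged presigned URLs, the lifetime is recoverable from the URL itself. A small illustrative helper (not an AWS API) computes the expiry instant from the X-Amz-Date and X-Amz-Expires query parameters:

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timedelta, timezone

def presigned_expiry(url: str) -> datetime:
    """Return the UTC instant at which a SigV4 presigned URL stops working."""
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(
        qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))

# Example URL with placeholder signature.
url = ("https://my-bucket.s3.amazonaws.com/reports/quarterly.pdf"
       "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20250101T120000Z"
       "&X-Amz-Expires=300&X-Amz-Signature=deadbeef")
print(presigned_expiry(url))  # 2025-01-01 12:05:00+00:00
```

    Flag any URL whose computed window exceeds your policy (say, one hour) for follow-up in CloudTrail.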

    12. Enable GuardDuty S3 Protection and Macie

    Proactive threat detection and data classification are essential for identifying attacks and sensitive data exposure before they become breaches.

    GuardDuty S3 Protection

    GuardDuty S3 Protection monitors CloudTrail S3 data events and detects suspicious activity such as unusual data access patterns, S3 API calls from known malicious IPs, and attempts to disable S3 logging or Block Public Access.

    # Enable GuardDuty with S3 Protection
    aws guardduty create-detector --enable   --features '[{"Name": "S3_DATA_EVENTS", "Status": "ENABLED"}]'
    
    # Check if S3 Protection is enabled
    aws guardduty get-detector --detector-id DETECTOR_ID   --query "DataSources.S3Logs.Status"
    
    # List S3-related findings
    aws guardduty list-findings --detector-id DETECTOR_ID   --finding-criteria '{
        "Criterion": {
          "type": {"Eq": [
            "Discovery:S3/MaliciousIPCaller",
            "Exfiltration:S3/MaliciousIPCaller",
            "Impact:S3/MaliciousIPCaller",
            "UnauthorizedAccess:S3/MaliciousIPCaller.Custom",
            "Policy:S3/BucketBlockPublicAccessDisabled",
            "Policy:S3/BucketPublicAccessGranted"
          ]}
        }
      }'

    Amazon Macie for Sensitive Data Discovery

    Macie uses machine learning and pattern matching to discover and classify sensitive data (PII, PHI, financial data, credentials) stored in S3 buckets.

    # Enable Macie
    aws macie2 enable-macie
    
    # Create a classification job for specific buckets
    aws macie2 create-classification-job   --job-type SCHEDULED   --name "weekly-pii-scan"   --schedule-frequency '{"weeklySchedule": {"dayOfWeek": "SUNDAY"}}'   --s3-job-definition '{
        "bucketDefinitions": [{
          "accountId": "123456789012",
          "buckets": ["customer-data-bucket", "uploads-bucket"]
        }]
      }'   --managed-data-identifier-selector ALL
    
    # Review findings
    aws macie2 list-findings --finding-criteria '{
      "criterion": {
        "severity.description": {"eq": ["High", "Critical"]}
      }
    }'

    Indian banking data exposure (August 2025): The exposure of 273,000 bank transfer PDFs could have been flagged by Macie's automated PII scanning. Macie identifies financial data patterns including bank account numbers, routing numbers, and transaction details across document types including PDFs.


    Common Misconfigurations

    • Block Public Access disabled -- Risk: data exposure to the entire internet. Detection: AWS Config rule s3-account-level-public-access-blocks.
    • ACLs granting public read/write -- Risk: unauthorized data access or upload of malicious content. Detection: IAM Access Analyzer external access findings.
    • No HTTPS enforcement -- Risk: data in transit exposed to interception. Detection: AWS Config rule s3-bucket-ssl-requests-only.
    • SSE-C enabled (no deny policy) -- Risk: ransomware via attacker-supplied encryption keys (Codefinger). Detection: bucket policy audit for an SSE-C deny statement.
    • No versioning or Object Lock -- Risk: permanent data loss from accidental or malicious deletion. Detection: AWS Config rule s3-bucket-versioning-enabled.
    • Overly permissive bucket policy with Principal: "*" -- Risk: unintended cross-account or anonymous access. Detection: IAM Access Analyzer external access findings.
    • Orphaned bucket names in code/templates -- Risk: namespace hijacking by attackers (watchTowr). Detection: manual code review; search for hardcoded S3 URIs.
    • Long-lived presigned URLs -- Risk: shared or leaked URLs grant extended access. Detection: CloudTrail data events; check the X-Amz-Expires parameter.

    Quick Reference Checklist

    1. Enable S3 Block Public Access (account + bucket) -- Critical
    2. Disable ACLs with BucketOwnerEnforced -- Critical
    3. Enforce TLS/HTTPS via bucket policy -- Critical
    4. Configure SSE-KMS with Bucket Keys; disable SSE-C -- Critical
    5. Enable versioning -- High
    6. Enable MFA Delete -- High
    7. Use Object Lock for WORM compliance -- High
    8. Enable server access logging + CloudTrail data events -- High
    9. Use VPC endpoints for S3 -- Medium
    10. Use S3 Access Points for multi-team access -- Medium
    11. Secure presigned URLs (short expiry, IAM roles) -- High
    12. Enable GuardDuty S3 Protection + Macie -- High

    Related Resources

    Go Deeper: The State of AWS Security 2026

    This article is just the start. Get the full picture with our free whitepaper - 8 chapters covering IAM, S3, VPC, monitoring, agentic AI security, compliance, and a prioritized action plan with 50+ CLI commands.
