Tarek Cheikh
Founder & AWS Security Expert
Amazon DynamoDB is a fully managed, serverless NoSQL database that powers some of the world's most demanding workloads. Unlike RDS, there is no OS to patch, no network interface to misconfigure on the instance itself -- but that does not mean DynamoDB is secure by default. The shared responsibility model is alive and well: AWS manages the hardware, network, and underlying service availability; you are responsible for encryption key choice, access control, backup configuration, and network isolation.
The DynamoDB US-East-1 outage of October 2025 was a stark reminder of the stakes. A DNS automation bug cascaded across the region, causing an extended service disruption with estimated insurance losses of $581 million. While that incident was an availability event rather than a security breach, it underscored how critical it is to have Point-in-Time Recovery, multi-region replication, and deletion protection in place before disaster strikes -- not after.
Cloud misconfiguration trends reinforce this urgency: 27% of organizations experienced a cloud misconfiguration incident in 2024, with an average of 43 misconfigurations per AWS account. DynamoDB tables are frequently misconfigured -- left with AWS-owned encryption keys, no PITR, no deletion protection, and wildcard IAM policies. AWS Security Hub now includes dedicated DynamoDB controls (DynamoDB.1 through DynamoDB.7) to automate detection of the most common failures. This guide covers 12 battle-tested best practices with real CLI commands and audit procedures.
DynamoDB encrypts all data at rest by default, but the default uses AWS-owned keys -- keys that are managed entirely by AWS, invisible in your KMS console, and provide no audit trail in CloudTrail. For any table containing sensitive data, you must use a customer-managed key (CMK) that you control.
# Create a CMK for DynamoDB encryption
aws kms create-key \
--description "DynamoDB encryption key - production tables" \
--key-usage ENCRYPT_DECRYPT \
--origin AWS_KMS \
--tags TagKey=Environment,TagValue=production TagKey=Service,TagValue=dynamodb
# Enable automatic key rotation
aws kms enable-key-rotation \
--key-id arn:aws:kms:us-east-1:123456789012:key/KEY-ID
# Create a table with a CMK
aws dynamodb create-table \
--table-name prod-orders \
--attribute-definitions AttributeName=orderId,AttributeType=S \
--key-schema AttributeName=orderId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=arn:aws:kms:us-east-1:123456789012:key/KEY-ID
# Update an existing table to use CMK
aws dynamodb update-table \
--table-name prod-orders \
--sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=arn:aws:kms:us-east-1:123456789012:key/KEY-ID
# Audit: Find tables NOT using CMKs
aws dynamodb list-tables --query "TableNames" --output text | tr '\t' '\n' | while read table; do
result=$(aws dynamodb describe-table --table-name "$table" \
--query "Table.SSEDescription.SSEType" --output text 2>/dev/null)
if [ "$result" != "KMS" ]; then
echo "No CMK: $table (SSEType: $result)"
fi
done
Security Hub Controls: No dedicated control for CMK vs AWS-owned key distinction -- this is a best practice that goes beyond the baseline. Pair CMK encryption with a KMS key policy that restricts usage to your DynamoDB service role and specific IAM principals.
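To illustrate that pairing, a key policy along these lines restricts use of the CMK to DynamoDB requests made by a specific application role. The account ID and role names are placeholders to adapt; the kms:ViaService condition ties key usage to the DynamoDB service endpoint:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowKeyAdministration",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/KMSAdminRole"},
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "AllowDynamoDBUseByAppRole",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/AppServiceRole"},
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "dynamodb.us-east-1.amazonaws.com"
        }
      }
    }
  ]
}
```

Tighten the admin statement to specific kms: actions in production; a blanket kms:* is shown here only to keep the sketch short.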
Point-in-Time Recovery continuously backs up your DynamoDB table data (to storage managed entirely by AWS) and allows you to restore to any second within the past 35 days. Without PITR, your only recovery option for accidental deletion or corruption is a manual on-demand backup -- which captures a single point in time, not a continuous window.
The DynamoDB US-East-1 October 2025 incident affected thousands of tables. Organizations with PITR enabled were able to restore their data to the exact state before the disruption. Those relying solely on manual backups lost hours or days of data.
# Enable PITR on an existing table
aws dynamodb update-continuous-backups \
--table-name prod-orders \
--point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
# Verify PITR status
aws dynamodb describe-continuous-backups \
--table-name prod-orders \
--query "ContinuousBackupsDescription.PointInTimeRecoveryDescription"
# Restore a table to a specific point in time (creates a new table)
aws dynamodb restore-table-to-point-in-time \
--source-table-name prod-orders \
--target-table-name prod-orders-restored-20260316 \
--restore-date-time 2026-03-16T10:00:00Z \
--sse-specification-override Enabled=true,SSEType=KMS,KMSMasterKeyId=arn:aws:kms:us-east-1:123456789012:key/KEY-ID
# Audit: Find tables without PITR enabled
aws dynamodb list-tables --query "TableNames" --output text | tr '\t' '\n' | while read table; do
status=$(aws dynamodb describe-continuous-backups --table-name "$table" \
--query "ContinuousBackupsDescription.PointInTimeRecoveryDescription.PointInTimeRecoveryStatus" \
--output text 2>/dev/null)
if [ "$status" != "ENABLED" ]; then
echo "PITR DISABLED: $table"
fi
done
PITR costs $0.20 per GB-month based on the table's current size (including local secondary indexes). DynamoDB monitors table size continuously to determine backup charges. Restores create a new table -- the original table is not modified. You can run a restore and verify data integrity before promoting the restored table to production.
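Before triggering a restore, it is worth checking that the requested timestamp actually falls inside the recoverable window reported by describe-continuous-backups (EarliestRestorableDateTime and LatestRestorableDateTime). A minimal local sketch -- the function name and sample timestamps are illustrative:

```python
from datetime import datetime, timezone

def is_restorable(requested: datetime, earliest: datetime, latest: datetime) -> bool:
    """True if the requested restore point lies within the PITR window.

    earliest/latest correspond to EarliestRestorableDateTime and
    LatestRestorableDateTime from describe-continuous-backups.
    """
    return earliest <= requested <= latest

# Example values mirroring the CLI output shape (illustrative only)
earliest = datetime(2026, 2, 9, 10, 0, tzinfo=timezone.utc)   # ~35 days back
latest = datetime(2026, 3, 16, 12, 0, tzinfo=timezone.utc)    # near-current time

print(is_restorable(datetime(2026, 3, 16, 10, 0, tzinfo=timezone.utc), earliest, latest))  # True
print(is_restorable(datetime(2026, 1, 1, 0, 0, tzinfo=timezone.utc), earliest, latest))    # False
```

Running a check like this in your restore runbook avoids a failed restore-table-to-point-in-time call against a timestamp that has aged out of the window.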
Security Hub Controls: DynamoDB.2 (DynamoDB tables should have PITR enabled). This is one of the highest-priority DynamoDB controls.
Deletion protection prevents a DynamoDB table from being deleted by any API call, even from an administrator. It requires explicitly disabling protection before deletion can proceed -- creating deliberate friction that protects against both accidental deletion and compromised credentials that attempt to destroy data.
After the October 2025 DynamoDB outage exposed the fragility of single-region deployments, engineering teams scrambled to enable deletion protection across all critical tables. The feature had been available since 2023 but was widely neglected. A compromised CI/CD pipeline with dynamodb:DeleteTable permissions can destroy a table in milliseconds -- deletion protection adds a mandatory two-step process.
# Enable deletion protection on an existing table
aws dynamodb update-table \
--table-name prod-orders \
--deletion-protection-enabled
# Enable at creation time
aws dynamodb create-table \
--table-name prod-customers \
--attribute-definitions AttributeName=customerId,AttributeType=S \
--key-schema AttributeName=customerId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--deletion-protection-enabled
# Verify deletion protection status
aws dynamodb describe-table \
--table-name prod-orders \
--query "Table.DeletionProtectionEnabled"
# Audit: Find tables without deletion protection
aws dynamodb list-tables --query "TableNames" --output text | tr '\t' '\n' | while read table; do
protected=$(aws dynamodb describe-table --table-name "$table" \
--query "Table.DeletionProtectionEnabled" --output text 2>/dev/null)
if [ "$protected" != "True" ]; then
echo "NOT PROTECTED: $table"
fi
done
# Use an SCP to block table deletion (and risky table updates) outside a break-glass role
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Sid": "DenyTableDeletionExceptBreakGlass",
#     "Effect": "Deny",
#     "Action": ["dynamodb:DeleteTable", "dynamodb:UpdateTable"],
#     "Resource": "arn:aws:dynamodb:*:*:table/prod-*",
#     "Condition": {
#       "StringNotEquals": {
#         "aws:PrincipalArn": "arn:aws:iam::123456789012:role/DatabaseAdminBreakGlass"
#       }
#     }
#   }]
# }
# Note: denying UpdateTable also blocks capacity and index changes for everyone
# except the break-glass role, so scope the Resource pattern to your most critical tables.
Security Hub Controls: DynamoDB.6 (DynamoDB tables should have deletion protection enabled). Pair this with an SCP that restricts dynamodb:UpdateTable and dynamodb:DeleteTable on critical tables to a dedicated break-glass role.
DynamoDB Fine-Grained Access Control allows IAM policies to restrict access at the item level (specific rows) and attribute level (specific columns) within a table. Without FGAC, any principal with dynamodb:GetItem on a table can read every item -- including items belonging to other users. This is particularly dangerous in multi-tenant applications.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessToOwnItems",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:Query"
],
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders",
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": ["${aws:PrincipalTag/userId}"]
}
}
},
{
"Sid": "RestrictSensitiveAttributes",
"Effect": "Allow",
"Action": ["dynamodb:GetItem", "dynamodb:Query"],
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders",
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:Attributes": ["orderId", "status", "createdAt", "amount"]
},
"StringEqualsIfExists": {
"dynamodb:Select": "SPECIFIC_ATTRIBUTES"
}
}
}
]
}
# Tag IAM principals with their userId for LeadingKeys conditions
aws iam tag-role \
--role-name AppServiceRole \
--tags Key=userId,Value=user-123
# Simulate access to verify FGAC is working correctly
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::123456789012:role/AppServiceRole \
--action-names dynamodb:GetItem \
--resource-arns arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders \
--context-entries ContextKeyName=dynamodb:LeadingKeys,ContextKeyValues=user-123,ContextKeyType=stringList
Use aws:PrincipalTag to dynamically bind the LeadingKeys condition to the authenticated user's attributes from IAM Identity Center or Cognito. This eliminates the need to hardcode user IDs in policies and scales to millions of users.
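The effect of the ForAllValues:StringEquals condition on dynamodb:LeadingKeys can be reasoned about locally: a request is allowed only when every partition key it touches equals the caller's userId tag. The helper below is a simplified mental model, not the real IAM evaluator:

```python
def leading_keys_allows(principal_user_id: str, requested_partition_keys: list) -> bool:
    """Model of ForAllValues:StringEquals on dynamodb:LeadingKeys:
    every partition key in the request must match the principal's userId tag."""
    return all(pk == principal_user_id for pk in requested_partition_keys)

print(leading_keys_allows("user-123", ["user-123"]))              # True: own item
print(leading_keys_allows("user-123", ["user-123", "user-456"]))  # False: touches another tenant
```

This is why Query with a KeyConditionExpression on the caller's own partition key succeeds, while a BatchGetItem that mixes in another tenant's keys is denied outright.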
Security Hub Controls: No dedicated Security Hub control for FGAC -- this is a design-time best practice. Enforce it through IAM policy reviews and access analyzer findings.
By default, your Lambda functions, EC2 instances, and ECS tasks reach DynamoDB through its public service endpoints -- even if they are inside a VPC. A VPC gateway endpoint routes all DynamoDB API traffic through the AWS private network, ensuring requests never traverse the public internet and cannot be intercepted or redirected to a spoofed DynamoDB endpoint.
Gateway endpoints for DynamoDB are free. There is no per-hour charge and no data processing fee. There is no reason not to use them.
# Create a VPC gateway endpoint for DynamoDB
aws ec2 create-vpc-endpoint \
--vpc-id vpc-0a1b2c3d4e5f \
--service-name com.amazonaws.us-east-1.dynamodb \
--route-table-ids rtb-0a1b2c3d rtb-0b2c3d4e \
--vpc-endpoint-type Gateway
# Verify the endpoint exists and is available
aws ec2 describe-vpc-endpoints \
--filters "Name=service-name,Values=com.amazonaws.us-east-1.dynamodb" \
--query "VpcEndpoints[].[VpcEndpointId,State,VpcId]" \
--output table
# Add an endpoint policy to restrict access to specific tables
# (Attach this policy to the VPC endpoint)
# {
# "Version": "2012-10-17",
# "Statement": [{
# "Effect": "Allow",
# "Principal": "*",
# "Action": ["dynamodb:GetItem","dynamodb:PutItem","dynamodb:Query","dynamodb:UpdateItem"],
# "Resource": [
# "arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders",
# "arn:aws:dynamodb:us-east-1:123456789012:table/prod-customers"
# ]
# }]
# }
# Enforce VPC endpoint usage in IAM policies
# Attach this condition to an explicit Deny statement (Effect: "Deny") so that
# requests not arriving through the endpoint are rejected:
# "Condition": {
#   "StringNotEquals": {
#     "aws:sourceVpce": "vpce-0a1b2c3d4e5f67890"
#   }
# }
Combine the VPC endpoint policy with an IAM condition key aws:sourceVpce to create a two-layer enforcement: the IAM policy denies requests not coming through the endpoint, and the endpoint policy restricts which tables and actions are accessible through the endpoint.
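Putting the IAM half of that two-layer model together, a deny statement of the following shape blocks any access to the production tables that does not arrive through the gateway endpoint. The endpoint ID and table name pattern are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideVpcEndpoint",
      "Effect": "Deny",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/prod-*",
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-0a1b2c3d4e5f67890"
        }
      }
    }
  ]
}
```

Because this is an explicit deny, it wins even if another statement grants broad DynamoDB access to the same principal.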
Security Hub Controls: No dedicated control for VPC endpoints, but this directly supports network isolation requirements in the CIS AWS Foundations Benchmark. Use custom AWS Config rules to verify that every VPC hosting DynamoDB clients has a gateway endpoint and that IAM policies enforce the aws:sourceVpce condition.
AWS CloudTrail logs management events (table creation, deletion, policy changes) by default -- but data events (GetItem, PutItem, DeleteItem, Query, Scan) are not logged unless explicitly enabled. Without data event logging, you have no visibility into who read or modified data in your DynamoDB tables.
Data event logging is essential for compliance (PCI-DSS requires logging all access to cardholder data), forensics (reconstructing what an attacker accessed), and anomaly detection (identifying unusual scan patterns that indicate data exfiltration).
# Enable data events for specific DynamoDB tables in an existing trail
aws cloudtrail put-event-selectors \
--trail-name prod-security-trail \
--event-selectors '[
{
"ReadWriteType": "All",
"IncludeManagementEvents": true,
"DataResources": [
{
"Type": "AWS::DynamoDB::Table",
"Values": [
"arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders",
"arn:aws:dynamodb:us-east-1:123456789012:table/prod-customers"
]
}
]
}
]'
# Enable data events for ALL DynamoDB tables (use for highest-sensitivity environments)
aws cloudtrail put-event-selectors \
--trail-name prod-security-trail \
--event-selectors '[
{
"ReadWriteType": "All",
"IncludeManagementEvents": true,
"DataResources": [
{
"Type": "AWS::DynamoDB::Table",
"Values": ["arn:aws:dynamodb"]
}
]
}
]'
# Use Advanced Event Selectors for fine-grained filtering (reduces cost)
aws cloudtrail put-event-selectors \
--trail-name prod-security-trail \
--advanced-event-selectors '[
{
"Name": "DynamoDB-Write-Events",
"FieldSelectors": [
{"Field": "eventCategory", "Equals": ["Data"]},
{"Field": "resources.type", "Equals": ["AWS::DynamoDB::Table"]},
{"Field": "readOnly", "Equals": ["false"]}
]
}
]'
# Verify data events are configured
aws cloudtrail get-event-selectors \
--trail-name prod-security-trail \
--query "EventSelectors[].DataResources"
Data event logging costs $0.10 per 100,000 events. For high-throughput tables processing millions of requests per minute, use Advanced Event Selectors to log only write events (readOnly: false) or specific operations to manage cost. Store CloudTrail logs in an S3 bucket with Object Lock enabled to prevent log tampering.
Security Hub Controls: No dedicated DynamoDB control for CloudTrail data events, but this is required by CIS AWS Foundations Benchmark 3.x and PCI-DSS Requirement 10. Use a CloudWatch Metric Filter on CloudTrail logs to alert on DeleteTable or mass DeleteItem events.
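The mass-delete metric-filter idea can be prototyped offline: parse CloudTrail data-event records, count DeleteItem calls per principal, and flag anything above a threshold. A minimal sketch over the CloudTrail record shape -- the threshold and function name are assumptions to adapt:

```python
from collections import Counter

def flag_mass_deletes(records: list, threshold: int = 100) -> dict:
    """Count DynamoDB DeleteItem events per IAM principal; return those at/over threshold."""
    deletes = Counter(
        r.get("userIdentity", {}).get("arn", "unknown")
        for r in records
        if r.get("eventSource") == "dynamodb.amazonaws.com" and r.get("eventName") == "DeleteItem"
    )
    return {arn: n for arn, n in deletes.items() if n >= threshold}

# Synthetic records mimicking CloudTrail's JSON structure
sample = [
    {"eventSource": "dynamodb.amazonaws.com", "eventName": "DeleteItem",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/CompromisedRole"}}
] * 150 + [
    {"eventSource": "dynamodb.amazonaws.com", "eventName": "GetItem",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:role/AppServiceRole"}}
]
print(flag_mass_deletes(sample))  # flags CompromisedRole with 150 deletes
```

The same logic translates directly into a CloudWatch metric filter or an Athena query over the CloudTrail S3 bucket.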
DynamoDB resource-based policies, generally available since early 2024, allow you to attach an IAM policy directly to a table or stream -- similar to S3 bucket policies. Combined with DynamoDB Block Public Access (BPA), which became generally available in March 2024, you can enforce account-level and organization-level controls that prevent any DynamoDB table from being made publicly accessible.
In February 2026, AWS added Resource Control Policies (RCPs) support for DynamoDB, enabling organization-wide guardrails through AWS Organizations. RCPs complement SCPs by restricting what external principals can do in your account, regardless of what IAM policies allow.
# Attach a resource-based policy that restricts table access to specific roles
aws dynamodb put-resource-policy \
--resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders \
--policy '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSpecificRolesOnly",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::123456789012:role/AppServiceRole",
"arn:aws:iam::123456789012:role/DataPipelineRole"
]
},
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"dynamodb:Query",
"dynamodb:BatchGetItem",
"dynamodb:BatchWriteItem"
],
"Resource": [
"arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders",
"arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders/index/*"
]
}
]
}'
# Retrieve the current resource policy
aws dynamodb get-resource-policy \
--resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders
# Delete a resource policy (if needed for remediation)
aws dynamodb delete-resource-policy \
--resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "RCP-DenyExternalPrincipals",
"Effect": "Deny",
"Principal": "*",
"Action": "dynamodb:*",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"aws:PrincipalOrgID": "o-exampleorgid"
},
"BoolIfExists": {
"aws:PrincipalIsAWSService": "false"
}
}
}
]
}
The RCP above, deployed as an Organization-level Resource Control Policy, prevents any principal outside your AWS Organization from accessing any DynamoDB table in any member account. This is a powerful defense-in-depth measure that works even if an individual account's IAM policies are misconfigured.
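A quick structural check that a policy document contains an org-scoped deny of this shape can run in CI before deployment. This is a simplified linter over the policy JSON, not a full IAM evaluator; the function name is illustrative:

```python
def has_org_guardrail(policy: dict, org_id: str) -> bool:
    """Check for a Deny statement conditioned on aws:PrincipalOrgID != org_id."""
    for stmt in policy.get("Statement", []):
        cond = stmt.get("Condition", {}).get("StringNotEquals", {})
        if stmt.get("Effect") == "Deny" and cond.get("aws:PrincipalOrgID") == org_id:
            return True
    return False

rcp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "dynamodb:*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}
print(has_org_guardrail(rcp, "o-exampleorgid"))        # True: guardrail present
print(has_org_guardrail({"Statement": []}, "o-exampleorgid"))  # False: missing
```

Failing the pipeline when the guardrail statement is absent prevents a well-meaning policy refactor from silently dropping the external-principal deny.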
Security Hub Controls: Resource-based policies and BPA are not yet reflected in a dedicated Security Hub control, but they directly support the principle of least privilege and comply with NIST SP 800-53 AC-3 controls.
DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB that provides microsecond latency. DAX caches the plaintext values of your DynamoDB items -- if DAX is unencrypted, an attacker who gains access to the cluster's memory or storage can read your data even if the underlying DynamoDB table is encrypted with a CMK.
Two distinct encryption requirements exist for DAX: encryption at rest (protecting the node's SSDs and snapshots) and encryption in transit (TLS between your application and the DAX cluster, and between DAX nodes in a cluster).
# Create a DAX cluster with both at-rest and in-transit encryption
aws dax create-cluster \
--cluster-name prod-dax-cluster \
--node-type dax.r5.xlarge \
--replication-factor 3 \
--iam-role-arn arn:aws:iam::123456789012:role/DAXRole \
--subnet-group-name prod-dax-subnet-group \
--security-group-ids sg-0dax1234567890 \
--sse-specification Enabled=true \
--cluster-endpoint-encryption-type TLS
# Verify encryption settings
aws dax describe-clusters \
--cluster-names prod-dax-cluster \
--query "Clusters[].[ClusterName,SSEDescription.Status,ClusterEndpointEncryptionType]" \
--output table
# Create a DAX subnet group in private subnets
aws dax create-subnet-group \
--subnet-group-name prod-dax-subnet-group \
--description "Private subnets for DAX clusters" \
--subnet-ids subnet-0a1b2c3d4e5f6a7b8 subnet-0b2c3d4e5f6a7b8c9
# Audit: Find DAX clusters without encryption at rest
aws dax describe-clusters \
--query "Clusters[?SSEDescription.Status!='ENABLED'].[ClusterName,SSEDescription.Status]" \
--output table
# Audit: Find DAX clusters without TLS
aws dax describe-clusters \
--query "Clusters[?ClusterEndpointEncryptionType!='TLS'].[ClusterName,ClusterEndpointEncryptionType]" \
--output table
Note that ClusterEndpointEncryptionType cannot be changed after cluster creation -- encryption in transit must be enabled at creation time. To add TLS to an existing plaintext cluster, you must create a new cluster with TLS enabled and migrate traffic to it. Plan encryption from the start.
Security Hub Controls: DynamoDB.3 (DAX clusters should be encrypted at rest), DynamoDB.7 (DAX clusters should have cluster encryption enabled for in-transit encryption).
While PITR provides continuous recovery within a 35-day window, AWS Backup provides additional capabilities: cross-region copies, cross-account copies, long-term retention beyond 35 days, and centralized backup management across multiple tables and services. Using both PITR and AWS Backup provides defense in depth for data protection.
Cross-account backup copies are particularly important: if your primary account is compromised and an attacker deletes all tables and disables PITR, backups stored in a separate, locked-down backup account remain intact. This is the cloud equivalent of offline backups.
# Create a backup vault with CMK encryption in a separate backup account
aws backup create-backup-vault \
--backup-vault-name prod-dynamodb-vault \
--encryption-key-arn arn:aws:kms:us-east-1:999888777666:key/BACKUP-KEY-ID \
--region us-east-1
# Create a backup plan with daily backups and cross-region DR copy
aws backup create-backup-plan --backup-plan '{
"BackupPlanName": "dynamodb-daily-backup",
"Rules": [
{
"RuleName": "DailyBackup",
"TargetBackupVaultName": "prod-dynamodb-vault",
"ScheduleExpression": "cron(0 2 * * ? *)",
"StartWindowMinutes": 60,
"CompletionWindowMinutes": 180,
"Lifecycle": {
"DeleteAfterDays": 35
},
"CopyActions": [
{
"DestinationBackupVaultArn": "arn:aws:backup:eu-west-1:999888777666:backup-vault:dr-dynamodb-vault",
"Lifecycle": {
"DeleteAfterDays": 90
}
}
]
}
]
}'
# Assign DynamoDB tables to the backup plan by tag
aws backup create-backup-selection \
--backup-plan-id PLAN-ID \
--backup-selection '{
"SelectionName": "dynamodb-tagged-tables",
"IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",
"ListOfTags": [
{
"ConditionType": "STRINGEQUALS",
"ConditionKey": "backup",
"ConditionValue": "required"
}
]
}'
# Tag tables to include them in the backup plan
aws dynamodb tag-resource \
--resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders \
--tags Key=backup,Value=required
# Verify backup jobs
aws backup list-backup-jobs \
--by-resource-type DynamoDB \
--query "BackupJobs[].[BackupJobId,State,ResourceArn]" \
--output table
Security Hub Controls: DynamoDB.4 (DynamoDB tables should be present in a backup plan). This control checks that your tables are covered by an AWS Backup plan -- PITR alone does not satisfy this control.
A DynamoDB table that exhausts its provisioned throughput returns ProvisionedThroughputExceededException errors, causing application failures. Insufficient capacity is not just an availability issue -- it can be triggered intentionally as a denial-of-service attack if an adversary can craft high-volume requests to a specific table.
On-demand capacity mode automatically scales to handle any request volume, eliminating throttling. Provisioned capacity mode with auto-scaling adjusts capacity based on utilization targets. Both approaches prevent capacity-based availability attacks, but on-demand provides stronger guarantees for unpredictable workloads.
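Even with auto-scaling or on-demand mode, clients should treat throttling as retryable rather than fatal. The AWS SDKs do this internally; a hand-rolled version of the same capped exponential-backoff-with-jitter idea looks roughly like this (the exception class and limits here are illustrative stand-ins, not SDK types):

```python
import random
import time

class ProvisionedThroughputExceeded(Exception):
    """Stand-in for the SDK's ProvisionedThroughputExceededException."""

def with_backoff(operation, max_attempts: int = 5, base_delay: float = 0.05):
    """Retry a throttled operation with capped exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ProvisionedThroughputExceeded:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the throttle to the caller
            delay = min(2.0, base_delay * (2 ** attempt)) * random.random()
            time.sleep(delay)

# Simulate an operation that is throttled twice, then succeeds
calls = {"n": 0}
def flaky_put():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ProvisionedThroughputExceeded()
    return "ok"

print(with_backoff(flaky_put), calls["n"])  # ok 3
```

The jitter matters: without it, many throttled clients retry in lockstep and re-trigger the same capacity spike they are backing off from.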
# Switch a table to on-demand capacity (recommended for variable workloads)
aws dynamodb update-table \
--table-name prod-orders \
--billing-mode PAY_PER_REQUEST
# For provisioned capacity: configure auto-scaling on read capacity
aws application-autoscaling register-scalable-target \
--service-namespace dynamodb \
--resource-id "table/prod-orders" \
--scalable-dimension "dynamodb:table:ReadCapacityUnits" \
--min-capacity 5 \
--max-capacity 1000
# Configure the scaling policy (target 70% utilization)
aws application-autoscaling put-scaling-policy \
--service-namespace dynamodb \
--resource-id "table/prod-orders" \
--scalable-dimension "dynamodb:table:ReadCapacityUnits" \
--policy-name "DynamoDBReadAutoScaling" \
--policy-type "TargetTrackingScaling" \
--target-tracking-scaling-policy-configuration '{
"TargetValue": 70.0,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "DynamoDBReadCapacityUtilization"
},
"ScaleInCooldown": 60,
"ScaleOutCooldown": 60
}'
# Audit: Find tables still on provisioned capacity without auto-scaling
aws dynamodb list-tables --query "TableNames" --output text | tr '\t' '\n' | while read table; do
mode=$(aws dynamodb describe-table --table-name "$table" \
--query "Table.BillingModeSummary.BillingMode" --output text 2>/dev/null)
# Tables with no BillingModeSummary ("None" in text output) predate on-demand mode and are provisioned
if [ "$mode" = "PROVISIONED" ] || [ "$mode" = "None" ]; then
echo "CHECK AUTO-SCALING: $table (mode: $mode)"
fi
done
Security Hub Controls: DynamoDB.1 (DynamoDB tables should automatically scale capacity with demand). This control checks that auto-scaling is configured for tables using provisioned capacity mode.
DynamoDB Streams capture a time-ordered sequence of item-level modifications. They are commonly used to trigger Lambda functions for event-driven architectures, replicate data to OpenSearch, or audit changes. A stream can contain the full before-and-after images of every item modification -- making it a high-value target for data exfiltration.
The stream shard iterator API (GetShardIterator, GetRecords) provides access to raw item data without going through table-level access controls. An IAM principal with broad DynamoDB permissions but no access to the table itself could still read all table data through the stream if stream permissions are not restricted separately.
# Enable a stream with NEW_AND_OLD_IMAGES (only when both images are needed)
aws dynamodb update-table \
--table-name prod-orders \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
# For audit-only use cases, use KEYS_ONLY to minimize data exposure
aws dynamodb update-table \
--table-name prod-orders \
--stream-specification StreamEnabled=true,StreamViewType=KEYS_ONLY
# Verify stream is enabled and note the stream ARN
aws dynamodb describe-table \
--table-name prod-orders \
--query "Table.[StreamSpecification,LatestStreamArn]"
# List all DynamoDB streams
aws dynamodbstreams list-streams \
--query "Streams[].[StreamArn,TableName,StreamLabel]" \
--output table
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowLambdaStreamAccess",
"Effect": "Allow",
"Action": [
"dynamodb:GetShardIterator",
"dynamodb:DescribeStream",
"dynamodb:GetRecords",
"dynamodb:ListStreams"
],
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders/stream/*"
}
]
}
# Create a Lambda event source mapping for the stream (least-privilege)
aws lambda create-event-source-mapping \
--event-source-arn arn:aws:dynamodb:us-east-1:123456789012:table/prod-orders/stream/STREAM-ID \
--function-name prod-orders-stream-processor \
--batch-size 100 \
--starting-position LATEST \
--bisect-batch-on-function-error \
--destination-config '{
"OnFailure": {
"Destination": "arn:aws:sqs:us-east-1:123456789012:dynamodb-stream-dlq"
}
}'
# Disable streams if not actively used
aws dynamodb update-table \
--table-name prod-orders \
--stream-specification StreamEnabled=false
Use KEYS_ONLY stream view type when your consumer only needs to know which items changed, not what they changed to. This minimizes the data exposed in the stream. Use NEW_IMAGE only when you need the current state. Reserve NEW_AND_OLD_IMAGES for cases where both before-and-after states are genuinely required (e.g., change data capture for auditing).
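The exposure difference between view types can be made concrete by projecting a full stream record down to each type. The field names below follow the DynamoDB Streams record shape; the helper itself is an illustrative sketch:

```python
def project_record(dynamodb_payload: dict, view_type: str) -> dict:
    """Reduce a stream record's 'dynamodb' payload to what a StreamViewType exposes."""
    keep = {
        "KEYS_ONLY": {"Keys"},
        "NEW_IMAGE": {"Keys", "NewImage"},
        "OLD_IMAGE": {"Keys", "OldImage"},
        "NEW_AND_OLD_IMAGES": {"Keys", "NewImage", "OldImage"},
    }[view_type]
    return {k: v for k, v in dynamodb_payload.items() if k in keep}

record = {
    "Keys": {"orderId": {"S": "order-1"}},
    "NewImage": {"orderId": {"S": "order-1"}, "amount": {"N": "100"}},
    "OldImage": {"orderId": {"S": "order-1"}, "amount": {"N": "90"}},
}
print(sorted(project_record(record, "KEYS_ONLY")))           # ['Keys']
print(sorted(project_record(record, "NEW_AND_OLD_IMAGES")))  # ['Keys', 'NewImage', 'OldImage']
```

With KEYS_ONLY, a compromised stream consumer learns only which orderIds changed; with NEW_AND_OLD_IMAGES it reads every attribute of every modified item.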
Security Hub Controls: No dedicated Security Hub control for stream security. Use AWS Access Analyzer to verify that stream IAM policies are not overly permissive and that no cross-account access to streams is unintended.
DynamoDB Global Tables provide multi-region, active-active replication. As of June 2025, Multi-Region Strong Consistency (MRSC) is generally available, enabling strongly consistent reads across regions. In February 2026, AWS extended Global Tables to support multi-account configurations, allowing replicas to exist in separate AWS accounts for compliance and isolation requirements.
Global Tables significantly expand your attack surface: a misconfiguration in any replica region affects all regions. A table deleted in one region is deleted in all regions. IAM policies and KMS key permissions must be consistent across all replicas, and cross-region replication traffic must be secured.
# Create a global table with replicas in multiple regions
aws dynamodb create-table \
--table-name prod-orders \
--attribute-definitions AttributeName=orderId,AttributeType=S \
--key-schema AttributeName=orderId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=arn:aws:kms:us-east-1:123456789012:key/KEY-ID \
--deletion-protection-enabled \
--region us-east-1
# Add a replica in eu-west-1 (CMK must exist in the target region)
aws dynamodb update-table \
--table-name prod-orders \
--replica-updates '[
{
"Create": {
"RegionName": "eu-west-1",
"KMSMasterKeyId": "arn:aws:kms:eu-west-1:123456789012:key/EU-KEY-ID"
}
}
]' \
--region us-east-1
# Verify all replicas are ACTIVE and using CMKs
aws dynamodb describe-table \
--table-name prod-orders \
--query "Table.Replicas[].[RegionName,ReplicaStatus,KMSMasterKeyId]" \
--output table \
--region us-east-1
# Enable PITR on each replica region independently
aws dynamodb update-continuous-backups \
--table-name prod-orders \
--point-in-time-recovery-specification PointInTimeRecoveryEnabled=true \
--region eu-west-1
# Enable deletion protection on each replica (must be set per region)
aws dynamodb update-table \
--table-name prod-orders \
--deletion-protection-enabled \
--region eu-west-1
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyDeleteTableGlobal",
"Effect": "Deny",
"Action": [
"dynamodb:DeleteTable",
"dynamodb:UpdateGlobalTable"
],
"Resource": "arn:aws:dynamodb:*:*:table/prod-*",
"Condition": {
"StringNotEquals": {
"aws:PrincipalArn": "arn:aws:iam::123456789012:role/DatabaseAdminBreakGlass"
}
}
}
]
}
# Audit Global Tables: verify all replicas have consistent security settings
for region in us-east-1 eu-west-1 ap-southeast-1; do
echo "=== Region: $region ==="
aws dynamodb describe-table \
--table-name prod-orders \
--query "Table.[DeletionProtectionEnabled,SSEDescription.SSEType,BillingModeSummary.BillingMode]" \
--output table \
--region "$region"
aws dynamodb describe-continuous-backups \
--table-name prod-orders \
--query "ContinuousBackupsDescription.PointInTimeRecoveryDescription.PointInTimeRecoveryStatus" \
--output text \
--region "$region"
done
Each region's replica requires its own CMK in that region -- a single-Region KMS key cannot serve all replicas because key ARNs are region-scoped (KMS multi-Region keys can ease this: each replica key shares key material while keeping a region-local ARN). Ensure the CMK in each replica region has the same key policy structure, and that the KMS grants allowing DynamoDB to use the key are applied consistently. Use an AWS Config aggregator to detect configuration drift across replica regions.
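The per-region audit loop above yields one settings snapshot per replica; comparing those snapshots programmatically surfaces drift immediately. A minimal sketch -- the settings dicts mirror the fields the loop queries, and the function name is illustrative:

```python
def find_drift(settings_by_region: dict) -> dict:
    """Compare each replica's security settings against the first region listed;
    return {region: {setting: (baseline, actual)}} for any mismatches."""
    regions = list(settings_by_region)
    baseline = settings_by_region[regions[0]]
    drift = {}
    for region in regions[1:]:
        diffs = {
            key: (baseline[key], settings_by_region[region].get(key))
            for key in baseline
            if settings_by_region[region].get(key) != baseline[key]
        }
        if diffs:
            drift[region] = diffs
    return drift

snapshots = {
    "us-east-1": {"DeletionProtectionEnabled": True, "SSEType": "KMS", "PITR": "ENABLED"},
    "eu-west-1": {"DeletionProtectionEnabled": True, "SSEType": "KMS", "PITR": "DISABLED"},
}
print(find_drift(snapshots))  # {'eu-west-1': {'PITR': ('ENABLED', 'DISABLED')}}
```

Feed it the describe-table and describe-continuous-backups output per region and fail the audit whenever the returned dict is non-empty.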
Security Hub Controls: All DynamoDB Security Hub controls (DynamoDB.1 through DynamoDB.7) must pass in every replica region independently. A table that is compliant in us-east-1 may be non-compliant in eu-west-1 if PITR was not explicitly enabled in that region. Use a Security Hub cross-region aggregator to get a unified compliance view.
| Misconfiguration | Risk | Detection |
|---|---|---|
| AWS-owned key (not CMK) | No audit trail, no key rotation control, no access revocation capability | Check SSEDescription.SSEType; KMS key ARN absent means AWS-owned key |
| PITR disabled | No continuous recovery window; limited to manual on-demand backups | Security Hub DynamoDB.2; aws dynamodb describe-continuous-backups |
| Deletion protection disabled | Table can be destroyed by compromised credentials or CI/CD pipeline in a single API call | Security Hub DynamoDB.6; aws dynamodb describe-table --query "Table.DeletionProtectionEnabled" |
| Wildcard IAM policies (dynamodb:*) | Over-privileged principals can read, write, or delete any item in the table | IAM Access Analyzer; policy review; no LeadingKeys conditions |
| No VPC endpoint configured | DynamoDB traffic traverses the public internet, exposing it to interception | AWS Config rule; check for com.amazonaws.REGION.dynamodb gateway endpoint |
| CloudTrail data events disabled | No visibility into who read or modified table data; forensic blind spot | aws cloudtrail get-event-selectors; check for DynamoDB DataResources |
| DAX cluster unencrypted at rest | Cache node SSDs store plaintext item data accessible without table-level encryption | Security Hub DynamoDB.3; aws dax describe-clusters --query "Clusters[].SSEDescription" |
| DAX cluster without TLS | Application-to-DAX traffic in plaintext; data exposed to network sniffing within VPC | Security Hub DynamoDB.7; aws dax describe-clusters --query "Clusters[].ClusterEndpointEncryptionType" |
| Table not in AWS Backup plan | No cross-region or cross-account backup; PITR alone insufficient for compliance | Security Hub DynamoDB.4; aws backup list-protected-resources --by-resource-type DynamoDB |
| Provisioned capacity without auto-scaling | Throttling under load; potential DoS via targeted high-volume requests | Security Hub DynamoDB.1; check Application Auto Scaling for DynamoDB targets |
| Streams with NEW_AND_OLD_IMAGES unnecessarily | Full item data exposed in stream; higher data exfiltration surface | Review stream view type; use KEYS_ONLY or NEW_IMAGE when full history not needed |
| Global table replicas with inconsistent security settings | Compliant in primary region, non-compliant in replicas; PITR or deletion protection missing per region | Security Hub cross-region aggregator; run describe-table and describe-continuous-backups per region |
| # | Practice | Security Hub Control | Priority |
|---|---|---|---|
| 1 | Encrypt at rest with customer-managed KMS keys | -- | Critical |
| 2 | Enable Point-in-Time Recovery (PITR) | DynamoDB.2 | Critical |
| 3 | Enable table deletion protection | DynamoDB.6 | Critical |
| 4 | Implement fine-grained access control (FGAC) | -- | Critical |
| 5 | Use VPC gateway endpoints, enforce network isolation | -- | High |
| 6 | Enable CloudTrail data events for sensitive tables | -- | High |
| 7 | Apply resource-based policies with Block Public Access | -- | High |
| 8 | Encrypt DAX clusters at rest and in transit (TLS) | DynamoDB.3, DynamoDB.7 | Critical |
| 9 | Include tables in AWS Backup plans | DynamoDB.4 | High |
| 10 | Configure auto-scaling or on-demand capacity | DynamoDB.1 | Medium |
| 11 | Secure DynamoDB Streams with least-privilege IAM | -- | High |
| 12 | Secure global tables with cross-region/cross-account controls | All controls per region | High |
This article is just the start. Get the full picture with our free whitepaper - 8 chapters covering IAM, S3, VPC, monitoring, agentic AI security, compliance, and a prioritized action plan with 50+ CLI commands.