Severity: CRITICAL · Category: Malware · Containment: 30-60 min · 15 steps across 5 phases

    Compromised EC2 Instance

    An EC2 instance shows signs of compromise: unexpected outbound connections, unusual processes, modified system files, or GuardDuty alerts. The instance may be used as a pivot point for lateral movement within your VPC. Isolate first, investigate second.

    Phase 1: Detection

    $ tail -f /var/log/cloudtrail/events.log

Step 1: Review GuardDuty findings for the instance

    GuardDuty detects command-and-control communication, port scanning, DNS exfiltration, and other indicators of compromise.

Key GuardDuty finding types:
- Backdoor:EC2/C&CActivity.B
- Trojan:EC2/BlackholeTraffic
- UnauthorizedAccess:EC2/SSHBruteForce
- Recon:EC2/PortProbeUnprotectedPort
- Behavior:EC2/TrafficVolumeUnusual
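
Findings can also be pulled for the suspect instance from the CLI. A sketch; the detector, instance, and finding IDs are placeholders:

```shell
# Find the GuardDuty detector for this region
aws guardduty list-detectors

# List findings scoped to the suspect instance
aws guardduty list-findings \
  --detector-id <detector-id> \
  --finding-criteria '{"Criterion":{"resource.instanceDetails.instanceId":{"Eq":["<instance-id>"]}}}'

# Fetch full finding details for triage
aws guardduty get-findings \
  --detector-id <detector-id> \
  --finding-ids <finding-id>
```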

Step 2: Check VPC Flow Logs for suspicious connections

    Look for unexpected outbound connections, especially to known malicious IPs or unusual ports.

    aws ec2 describe-flow-logs \
      --filter "Name=resource-id,Values=<vpc-id>"
    # Query flow logs in CloudWatch Logs Insights
    # fields @timestamp, srcAddr, dstAddr, dstPort, action
    # | filter srcAddr = "<instance-private-ip>"
    # | filter action = "ACCEPT"
    # | sort @timestamp desc
    # | limit 100
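
The commented query can be run non-interactively too. A sketch that assumes the flow logs land in a log group named /vpc/flowlogs (adjust to yours) and GNU date for the time window:

```shell
# Start the Logs Insights query over the last 24 hours
QUERY_ID=$(aws logs start-query \
  --log-group-name "/vpc/flowlogs" \
  --start-time $(date -d '24 hours ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, srcAddr, dstAddr, dstPort, action | filter srcAddr = "<instance-private-ip>" | filter action = "ACCEPT" | sort @timestamp desc | limit 100' \
  --query 'queryId' --output text)

# Poll for results once the query completes
aws logs get-query-results --query-id "$QUERY_ID"
```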

Step 3: Use SSM to inspect the instance (if agent is running)

    Run commands on the instance to check for suspicious processes, network connections, and recently modified files.

    # List unusual network connections
    aws ssm send-command \
      --instance-ids <instance-id> \
      --document-name "AWS-RunShellScript" \
      --parameters 'commands=["netstat -tlnp","ss -tlnp","cat /etc/crontab","ls -la /tmp"]'
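
send-command is asynchronous, so the output has to be fetched separately using the CommandId it returns. A sketch:

```shell
# send-command returns a CommandId; use it to fetch stdout/stderr
aws ssm get-command-invocation \
  --command-id <command-id> \
  --instance-id <instance-id>

# Or list every invocation with its output inline
aws ssm list-command-invocations \
  --command-id <command-id> --details
```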

    Phase 2: Containment

    $ ./containment.sh --isolate --immediate

Step 1: Isolate the instance with a quarantine security group

    Replace all security groups with a quarantine SG that has no inbound or outbound rules. Create it first if needed (see Crypto Mining playbook for creation steps including removing the default outbound rule).

    aws ec2 modify-instance-attribute \
      --instance-id <instance-id> \
      --groups <isolation-sg-id>

    Do NOT stop or terminate the instance yet. Live memory and running processes are crucial for forensics. Ensure your isolation SG has no outbound rules (new SGs have a default allow-all outbound rule that must be removed).
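
If no quarantine SG exists yet, one can be created inline; removing the default egress rule is the step that is easy to miss. A sketch (the group name is arbitrary):

```shell
# Create an empty quarantine security group in the affected VPC
QSG_ID=$(aws ec2 create-security-group \
  --group-name ir-quarantine \
  --description "IR quarantine - no traffic" \
  --vpc-id <vpc-id> \
  --query 'GroupId' --output text)

# New SGs allow all outbound by default; remove that rule
aws ec2 revoke-security-group-egress \
  --group-id "$QSG_ID" \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
```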

Step 2: Create a forensic snapshot of all volumes

    Snapshot all EBS volumes attached to the instance for offline analysis.

    # List all volumes attached to the instance
    aws ec2 describe-volumes \
      --filters "Name=attachment.instance-id,Values=<instance-id>" \
      --query 'Volumes[].VolumeId' --output text
    # Create snapshots
    aws ec2 create-snapshot \
      --volume-id <vol-id> \
      --description "IR-forensic-snapshot-$(date +%Y%m%d)" \
      --tag-specifications 'ResourceType=snapshot,Tags=[{Key=IncidentResponse,Value=true}]'
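
The two commands combine into a loop that snapshots every attached volume in one pass. A sketch:

```shell
# Snapshot every volume attached to the compromised instance
for VOL in $(aws ec2 describe-volumes \
    --filters "Name=attachment.instance-id,Values=<instance-id>" \
    --query 'Volumes[].VolumeId' --output text); do
  aws ec2 create-snapshot \
    --volume-id "$VOL" \
    --description "IR-forensic-snapshot-$(date +%Y%m%d)" \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=IncidentResponse,Value=true}]'
done
```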

Step 3: Capture instance metadata and memory

Record the instance metadata, console output, and console screenshot for evidence. Note that these API calls do not capture RAM; a full memory image requires an agent-based acquisition tool (for example LiME on Linux) run over SSM while the instance is still powered on.

    aws ec2 get-console-output --instance-id <instance-id>
    aws ec2 get-console-screenshot --instance-id <instance-id>

    Phase 3: Eradication

    $ ./eradicate.sh --purge --verify

Step 1: Terminate the compromised instance

    After forensic evidence is preserved, terminate the instance. Do NOT try to clean it - rebuild from a known-good AMI.

    aws ec2 terminate-instances --instance-ids <instance-id>

    Never attempt to "clean" a compromised instance. You cannot trust anything on it. Always rebuild from scratch.

Step 2: Revoke the instance role credentials

    If the instance had an IAM role, the temporary credentials may have been exfiltrated. Revoke them.

    aws iam put-role-policy \
      --role-name <instance-role> \
      --policy-name RevokeOldSessions \
      --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"*","Resource":"*","Condition":{"DateLessThan":{"aws:TokenIssueTime":"<current-timestamp>"}}}]}'
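
The `<current-timestamp>` placeholder must be the current UTC time in ISO-8601 form. A locally runnable sketch that builds the policy document around it (the put-role-policy call itself is shown commented, since it needs credentials and the real role name):

```shell
# Current UTC time in the format the aws:TokenIssueTime key expects
CUTOFF=$(date -u +%Y-%m-%dT%H:%M:%SZ)

# Deny-all policy for sessions issued before the cutoff
POLICY=$(printf '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"*","Resource":"*","Condition":{"DateLessThan":{"aws:TokenIssueTime":"%s"}}}]}' "$CUTOFF")
echo "$POLICY"

# Attach it to the compromised instance's role:
# aws iam put-role-policy --role-name <instance-role> \
#   --policy-name RevokeOldSessions --policy-document "$POLICY"
```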

Step 3: Scan other instances in the same VPC

    The attacker may have moved laterally. Check other instances in the VPC for similar indicators.

    aws ec2 describe-instances \
      --filters "Name=vpc-id,Values=<vpc-id>" "Name=instance-state-name,Values=running" \
      --query 'Reservations[].Instances[].{ID:InstanceId,IP:PrivateIpAddress,Role:IamInstanceProfile.Arn}' \
      --output table
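
The same SSM inspection from Phase 1 can be fanned out across every running instance in the VPC. A sketch, assuming the SSM agent is installed on each:

```shell
# Run the IOC checks on every running instance in the VPC
for ID in $(aws ec2 describe-instances \
    --filters "Name=vpc-id,Values=<vpc-id>" "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].InstanceId' --output text); do
  aws ssm send-command \
    --instance-ids "$ID" \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["netstat -tlnp","cat /etc/crontab","ls -la /tmp"]'
done
```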

    Phase 4: Recovery

    $ ./recovery.sh --restore --validate

Step 1: Launch a replacement instance from a clean AMI

    Deploy a new instance from a known-good AMI with updated security patches.

    Use a golden AMI from your pipeline. Never reuse AMIs that were running during the incident.
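
A launch sketch that also bakes in the IMDSv2 hardening from Phase 5; the AMI, subnet, SG, and profile values are placeholders:

```shell
# Launch the replacement from a known-good AMI with IMDSv2 enforced
aws ec2 run-instances \
  --image-id <golden-ami-id> \
  --instance-type <instance-type> \
  --subnet-id <subnet-id> \
  --security-group-ids <sg-id> \
  --iam-instance-profile Name=<instance-profile> \
  --metadata-options "HttpTokens=required,HttpPutResponseHopLimit=1" \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=replacement}]'
```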

Step 2: Restore data from pre-compromise backups

    If application data was on the instance, restore from backups taken before the compromise.
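
If the data lived on EBS, a pre-compromise snapshot can be turned back into a volume and attached to the replacement. A sketch; the availability zone must match the new instance:

```shell
# Create a volume from a pre-compromise snapshot
NEW_VOL=$(aws ec2 create-volume \
  --snapshot-id <pre-compromise-snapshot-id> \
  --availability-zone <az> \
  --query 'VolumeId' --output text)

# Attach it to the replacement instance
aws ec2 attach-volume \
  --volume-id "$NEW_VOL" \
  --instance-id <replacement-instance-id> \
  --device /dev/sdf
```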

Step 3: Verify the replacement instance is clean

    Monitor the new instance for 24-48 hours and run vulnerability scans before returning to production.

    Phase 5: Lessons Learned

    $ cat POST_INCIDENT_REVIEW.md

Step 1: Enable SSM Session Manager instead of SSH

    Eliminate exposed SSH ports. Use SSM Session Manager for shell access with CloudTrail logging.
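
With the agent and an instance profile in place, shell access goes through SSM instead of port 22. A sketch:

```shell
# Open an audited shell session (no SSH port, no key pair needed)
aws ssm start-session --target <instance-id>

# Inbound port 22 can then be dropped from the security group
aws ec2 revoke-security-group-ingress \
  --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
```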

Step 2: Implement IMDSv2 enforcement

    Require IMDSv2 (hop limit = 1) to prevent SSRF-based credential theft from the metadata service.

    aws ec2 modify-instance-metadata-options \
      --instance-id <instance-id> \
      --http-tokens required \
      --http-put-response-hop-limit 1

Step 3: Deploy Amazon Inspector for continuous vulnerability scanning

    Enable Inspector to continuously scan EC2 instances for software vulnerabilities and network exposure.
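
Inspector v2 is enabled per account and resource type from the CLI. A sketch:

```shell
# Turn on continuous EC2 scanning in this account/region
aws inspector2 enable --resource-types EC2

# Confirm activation status
aws inspector2 batch-get-account-status
```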

Tags: ec2, malware, backdoor, lateral-movement, forensics
