A developer spins up a test environment. The test completes. They terminate the EC2 instance. The EBS volume stays, the Elastic IP stays, the load balancer stays, and the RDS snapshot stays. Six months later, nobody remembers any of it. All of it is still on the bill.
This is the zombie resource problem: cloud resources created for a purpose that no longer exists, which nobody explicitly deleted. They cause no errors. They trigger no alerts. They just sit there, billing silently every month.
Wiz and PerfectScale both flag this as the number one source of cloud waste in 2025-2026. On most accounts I audit that have been running for more than a year, zombie cleanup alone saves 10-20% of the monthly bill.
The most common zombie resources
Unattached EBS volumes
When you terminate an EC2 instance, the attached EBS volumes are not always deleted automatically. The behavior depends on whether DeleteOnTermination was enabled at launch (it is by default for the root volume, but not for secondary volumes). A 100GB EBS volume costs about $8-10/month in most regions, and they accumulate fast.
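You can check the flag on a running instance before terminating it; a quick sketch (the instance ID is a placeholder):

```shell
# Show each attached volume and whether it will be deleted when the instance terminates
aws ec2 describe-instances --instance-ids i-0123456789abcdef --query 'Reservations[*].Instances[*].BlockDeviceMappings[*].[DeviceName,Ebs.VolumeId,Ebs.DeleteOnTermination]' --output table
```

Any secondary volume that shows `False` here will outlive the instance unless you delete it yourself.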
# Find all unattached EBS volumes
aws ec2 describe-volumes --filters Name=status,Values=available --query 'Volumes[*].[VolumeId,Size,CreateTime,Tags]' --output table
# Delete a specific volume (irreversible - make sure you don't need it)
aws ec2 delete-volume --volume-id vol-0123456789abcdef
Orphaned load balancers
Application Load Balancers and Network Load Balancers charge by the hour whether or not any targets are registered. An ALB with no registered targets has been billing roughly $0.016/hour, continuously, since the day it was created; an NLB is similar. If the backend it pointed to no longer exists, the load balancer is pure waste.
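The deciding check is whether a target group actually has anything registered; `describe-target-health` returns an empty list for an orphaned one (the ARN below is a placeholder):

```shell
# An empty TargetHealthDescriptions list means nothing is registered behind this group
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef --query 'TargetHealthDescriptions[*].[Target.Id,TargetHealth.State]' --output table
```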
# List all ALBs and their target groups
aws elbv2 describe-load-balancers --query 'LoadBalancers[*].[LoadBalancerArn,LoadBalancerName,State.Code,CreatedTime]' --output table
# Check if a specific load balancer has targets
aws elbv2 describe-target-groups --load-balancer-arn arn:aws:elasticloadbalancing:... --query 'TargetGroups[*].[TargetGroupName,TargetType]'
Forgotten RDS instances and snapshots
RDS instances are expensive to run. A db.t3.medium costs about $0.068/hour, or roughly $49/month. A database created for a project that ended is still billing at that rate unless someone explicitly deleted it. RDS snapshots accumulate too, at $0.095/GB-month.
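Once a manual snapshot is confirmed unneeded, deletion is a single command. It is irreversible, and the identifier below is a placeholder:

```shell
# Delete a stale manual snapshot (cannot be undone)
aws rds delete-db-snapshot --db-snapshot-identifier my-old-project-snapshot
```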
# List all RDS instances
aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceStatus,DBInstanceClass,Engine,MultiAZ]' --output table
# List all manual RDS snapshots older than 90 days
aws rds describe-db-snapshots --snapshot-type manual --query 'DBSnapshots[?SnapshotCreateTime<`2025-10-30`].[DBSnapshotIdentifier,SnapshotCreateTime,AllocatedStorage]' --output table
Abandoned S3 buckets with data in them
S3 storage is cheap per GB, but there are two ways abandoned buckets cost more than expected: old versioned objects and storage class charges. If versioning is enabled on a bucket, every deleted object creates a delete marker and the old versions accumulate. On a bucket with years of activity, the versioned objects can be many times larger than the current content.
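One way to see whether noncurrent versions are the problem — a sketch, with the bucket name as a placeholder:

```shell
# Count noncurrent object versions in a bucket (can be slow on very large buckets)
aws s3api list-object-versions --bucket your-bucket-name --query 'length(Versions[?IsLatest==`false`])'
```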
# Check total storage used by each S3 bucket (from CloudWatch, last 1 day)
aws cloudwatch get-metric-statistics --namespace AWS/S3 --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=your-bucket-name Name=StorageType,Value=StandardStorage --start-time 2026-03-01T00:00:00Z --end-time 2026-03-02T00:00:00Z --period 86400 --statistics Average
Set lifecycle policies on any bucket that retains versioned objects. Transition old versions to Glacier after 30 days, delete them after 90.
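A minimal lifecycle configuration along those lines might look like this (bucket name is a placeholder; note that Glacier transitions carry minimum storage-duration charges):

```shell
# Move noncurrent versions to Glacier after 30 days, expire them after 90
aws s3api put-bucket-lifecycle-configuration --bucket your-bucket-name --lifecycle-configuration '{
  "Rules": [{
    "ID": "expire-old-versions",
    "Status": "Enabled",
    "Filter": {},
    "NoncurrentVersionTransitions": [{"NoncurrentDays": 30, "StorageClass": "GLACIER"}],
    "NoncurrentVersionExpiration": {"NoncurrentDays": 90}
  }]
}'
```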
Idle Elastic IPs
Covered in more detail elsewhere, but they belong on this list too. Any Elastic IP not attached to a running instance charges $3.60/month. They are invisible in the bill unless you know to look.
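Finding and releasing them is two commands (the allocation ID is a placeholder):

```shell
# List Elastic IPs not associated with any instance or network interface
aws ec2 describe-addresses --query 'Addresses[?AssociationId==null].[PublicIp,AllocationId]' --output table
# Release one you no longer need
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef
```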
Old CloudWatch Log Groups
Lambda functions and ECS tasks automatically create CloudWatch log groups. These groups retain logs indefinitely by default. Log storage costs $0.03/GB-month. A log group for a Lambda that ran heavily a year ago and has not run since still holds all those logs.
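Since these groups accumulate one by one, retention can also be applied in bulk; a sketch assuming a 30-day policy suits every Lambda log group in the account:

```shell
# Apply 30-day retention to every /aws/lambda/ log group (review the list before running)
for lg in $(aws logs describe-log-groups --log-group-name-prefix /aws/lambda/ --query 'logGroups[*].logGroupName' --output text); do
  aws logs put-retention-policy --log-group-name "$lg" --retention-in-days 30
done
```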
# List log groups with no retention policy set
aws logs describe-log-groups --query 'logGroups[?retentionInDays==null].[logGroupName,storedBytes,creationTime]' --output table
# Set a 30-day retention policy on a log group
aws logs put-retention-policy --log-group-name /aws/lambda/my-function --retention-in-days 30
The monthly cleanup habit
The best defense against zombie resources is a monthly 30-minute review. Add it to a calendar. Check: unattached volumes, unattached Elastic IPs, load balancers with no targets, RDS instances that are not in use, and any new services that appeared in Cost Explorer that you do not recognize.
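The checklist can be scripted so the monthly review starts from a single report; a sketch stitching together the commands from this article:

```shell
#!/usr/bin/env bash
# Monthly zombie-resource report: run it, then investigate anything that prints
echo "== Unattached EBS volumes =="
aws ec2 describe-volumes --filters Name=status,Values=available --query 'Volumes[*].[VolumeId,Size]' --output table
echo "== Unassociated Elastic IPs =="
aws ec2 describe-addresses --query 'Addresses[?AssociationId==null].[PublicIp]' --output table
echo "== Load balancers (check each for registered targets) =="
aws elbv2 describe-load-balancers --query 'LoadBalancers[*].[LoadBalancerName,CreatedTime]' --output table
echo "== RDS instances =="
aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier,DBInstanceClass]' --output table
```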
Longer term: use tagging. Every resource should have an Owner tag, an Environment tag, and a Project tag. When a project ends, searching for resources with that Project tag tells you exactly what to delete.
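That tag search can use the Resource Groups Tagging API, which works across services; the Project value below matches this article's examples:

```shell
# List every tagged resource belonging to a finished project
aws resourcegroupstaggingapi get-resources --tag-filters Key=Project,Values=client-portal --query 'ResourceTagMappingList[*].ResourceARN' --output text
```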
# Tag resources at creation to make cleanup easier later
aws ec2 create-tags --resources i-0123456789abcdef vol-0123456789abcdef --tags Key=Project,Value=client-portal Key=Environment,Value=staging Key=Owner,Value=yourname
$ cleanup --cloud-account
If your account has been running for a while and nobody's done a cleanup, there's almost certainly money sitting there. I can find it.
$ ./start-cloud-cleanup.sh →