23% of all cloud security incidents in 2025 were caused by misconfiguration. Not exploits. Not zero-days. Just wrong settings. An S3 bucket that should be private set to public. A security group with port 5432 open to the world. A storage account with no access key required. These are the settings that cause breaches, and most of them take under five minutes to fix once you find them.
The Wiz 2025 cloud security report found that 72% of cloud environments had at least one publicly exposed database with no access controls. Not 72% of insecure companies: 72% of cloud environments. Including organizations that consider themselves security-conscious. The average cost of a misconfiguration breach is $4.3 million.
I run through these checks when I look at a new AWS or Azure account. In my experience, most accounts that have been running for more than a year have at least two of the five issues below. None of them are difficult to fix once you find them.
Public S3 buckets
The S3 bucket misconfiguration is the most documented cloud security failure in history. Capital One, Facebook, Twitch, and hundreds of other organizations have had data exposed through public S3 buckets. AWS added "Block Public Access" at the account level specifically because it kept happening.
# Check if Block Public Access is enabled at the account level
aws s3control get-public-access-block --account-id $(aws sts get-caller-identity --query Account --output text)
# Check each bucket individually
aws s3api get-public-access-block --bucket your-bucket-name
# List buckets whose ACL grants public read to AllUsers
# (--output text separates names with tabs, so split on tabs)
aws s3api list-buckets --query 'Buckets[*].Name' --output text | tr '\t' '\n' | while read bucket; do
  acl=$(aws s3api get-bucket-acl --bucket "$bucket" 2>/dev/null)
  echo "$bucket: $(echo "$acl" | grep -c AllUsers) public grants"
done
Enable Block Public Access at the account level unless you have a specific bucket that genuinely needs to be public (static website hosting, public file downloads). Those buckets should be the exception, not the default.
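Turning the account-level guard on is a single call. A minimal sketch, assuming you want all four Block Public Access settings enabled for the whole account:

```shell
# Enable all four Block Public Access settings account-wide
aws s3control put-public-access-block \
  --account-id "$(aws sts get-caller-identity --query Account --output text)" \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

Any bucket that genuinely needs to be public then has to be moved to a separate account, which makes the exception explicit.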
Security groups with ports open to the world
Security groups are AWS's virtual firewall. The rule that causes the most breaches: inbound 0.0.0.0/0 on database ports. Ports 3306 (MySQL), 5432 (PostgreSQL), 27017 (MongoDB), and 1433 (SQL Server) should never be reachable from the open internet.
# Find security groups that allow inbound access from 0.0.0.0/0
aws ec2 describe-security-groups --query "SecurityGroups[?IpPermissions[?IpRanges[?CidrIp=='0.0.0.0/0']]].{Name:GroupName,ID:GroupId,Ports:IpPermissions[*].FromPort}" --output table
# Also check ::/0, the IPv6 equivalent
aws ec2 describe-security-groups --query "SecurityGroups[?IpPermissions[?Ipv6Ranges[?CidrIpv6=='::/0']]].{Name:GroupName,ID:GroupId}" --output table
Databases should only accept connections from the application servers or subnets that need them. Restrict the source to a specific security group or CIDR range, not 0.0.0.0/0.
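Fixing an open rule is two calls: revoke the world-open entry, then re-authorize the same port from a narrower source. A sketch, with placeholder group IDs and PostgreSQL as the example port:

```shell
# Remove the world-open rule (group IDs here are placeholders)
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 --cidr 0.0.0.0/0

# Re-allow the same port, but only from the app servers' security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5432 \
  --source-group sg-0fedcba9876543210
```

Referencing a source security group rather than a CIDR range means the rule keeps working as application instances are replaced.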
IAM users with more access than they need
The principle of least privilege: every IAM user, role, and service should have only the permissions it needs and nothing more. The common violation: an application uses an IAM user with AdministratorAccess or FullAccess because it was easier to set up that way.
# List all IAM users with their attached policies
# (--output text is tab-separated, so split on tabs)
aws iam list-users --query 'Users[*].UserName' --output text | tr '\t' '\n' | while read user; do
policies=$(aws iam list-attached-user-policies --user-name "$user" --query 'AttachedPolicies[*].PolicyName' --output text)
echo "$user: $policies"
done
# Find users with AdministratorAccess attached directly
aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --query 'PolicyUsers[*].UserName'
Application service accounts should use IAM roles where possible, not long-lived access keys. If access keys are necessary, scope them to only the specific S3 buckets, DynamoDB tables, or other resources that the application actually touches.
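A scoped policy is not much more work than attaching AdministratorAccess. A sketch of what replacing it might look like, with a hypothetical user and bucket name:

```shell
# Attach an inline policy limited to read/write on one bucket
# (user name and bucket ARN are placeholders)
aws iam put-user-policy \
  --user-name app-uploader \
  --policy-name s3-app-bucket-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::app-bucket/*"
    }]
  }'
```

Once the scoped policy is in place and verified, detach AdministratorAccess from the user.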
RDS instances without encryption at rest
RDS encryption at rest is not enabled by default. If your database is not encrypted, a breach of the underlying storage exposes all data. For RDS, encryption must be set at creation time: you cannot encrypt an existing instance in place. The migration path is to snapshot it, copy the snapshot with encryption enabled, and restore the encrypted copy as a new instance.
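The snapshot-copy-restore migration can be sketched as follows; the instance and snapshot identifiers are placeholders, and you would cut application traffic over to the new instance afterwards:

```shell
# 1. Snapshot the unencrypted instance
aws rds create-db-snapshot \
  --db-instance-identifier mydb \
  --db-snapshot-identifier mydb-pre-encrypt

# 2. Copy the snapshot with encryption enabled (default RDS KMS key here)
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier mydb-pre-encrypt \
  --target-db-snapshot-identifier mydb-encrypted \
  --kms-key-id alias/aws/rds

# 3. Restore the encrypted copy as a new instance
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier mydb-encrypted \
  --db-snapshot-identifier mydb-encrypted
```

Expect a maintenance window: writes made to the old instance after the snapshot are not carried over.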
# Check which RDS instances are not encrypted
aws rds describe-db-instances --query 'DBInstances[?StorageEncrypted==`false`].[DBInstanceIdentifier,Engine,DBInstanceClass]' --output table
No CloudTrail: you have no audit trail
CloudTrail logs every API call made to your AWS account. Without it, if a breach occurs, you have no audit trail. You cannot tell when an attacker entered, what they accessed, or what they changed. CloudTrail should be enabled in every region, with logs stored in an S3 bucket that has write-once protection.
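If no trail exists, creating one is quick. A minimal sketch, assuming the destination S3 bucket already exists with a CloudTrail bucket policy; trail and bucket names are placeholders:

```shell
# Create a multi-region trail with log file integrity validation
aws cloudtrail create-trail \
  --name org-audit-trail \
  --s3-bucket-name my-cloudtrail-logs \
  --is-multi-region-trail \
  --enable-log-file-validation

# A new trail does not record anything until logging is started
aws cloudtrail start-logging --name org-audit-trail
```

Log file validation gives you signed digests, so you can later prove the logs were not tampered with.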
# List the trails that exist (describe-trails reports configuration, not logging status)
aws cloudtrail describe-trails --query 'trailList[*].[Name,HomeRegion,IsMultiRegionTrail]'
# Verify logging is actually enabled for each trail
aws cloudtrail get-trail-status --name your-trail-name --query '[IsLogging,LatestDeliveryTime]'
AWS Security Hub and Azure Defender
Both AWS and Azure have first-party security posture tools that automate these checks and many more. AWS Security Hub consolidates findings from GuardDuty, Inspector, and Config, and benchmarks your account against the CIS AWS Foundations Benchmark. Azure Defender, now Microsoft Defender for Cloud, does the equivalent on the Azure side.
AWS Security Hub has a 30-day free trial. Azure Defender is charged per resource. For a small team, running the trial and looking at the findings is worth the time even if you do not continue the subscription: you will find the issues that matter.
$ scan --your-aws-account
I run through these checks on every account I look at. If you want them run on yours, I can do it and tell you the specific settings that need to change.
$ ./request-cloud-audit.sh →