Docker makes it easy to ship applications. It does not make them secure by default. Containers running as root, outdated base images, secrets baked into layers, and exposed Docker sockets are things I find regularly in small-team production setups.
Mistake 1: Running containers as root
Docker containers run as root by default unless you tell them not to. That means if an attacker finds a vulnerability in your application and gets code execution inside the container, they are running as root inside that container. With a kernel exploit or a Docker socket misconfiguration, that escalates to root on the host.
CVE-2025-31133 and related runc vulnerabilities disclosed in 2025 required root inside the container as part of the exploit chain. Non-root containers are not immune, but they remove one layer of the attack path.
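A quick way to check an existing deployment, assuming a running container named api (substitute your own container name):

```shell
# Show the configured user for the container (empty output means root)
docker inspect api --format 'User={{.Config.User}}'

# Show the effective UID of the main process inside the container (0 = root)
docker exec api id -u
```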
Fix it in your Dockerfile:
# Create a non-root user in your Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Create a non-root user and switch to it
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 3000
CMD ["node", "server.js"]

Or in docker-compose.yml:
services:
  api:
    image: your-api:latest
    user: "1001:1001" # UID:GID of the non-root user

Mistake 2: Outdated base images
Pulling node:18 or python:3.11 and pinning it there forever means your image drifts further behind on security patches with every week that passes. The base image contains the OS layer, and OS packages get CVEs too.
Use specific version tags and rebuild regularly. Prefer Alpine or Distroless base images to minimize the attack surface: smaller images carry fewer packages, and fewer packages mean fewer potential vulnerabilities.
# Instead of:
FROM node:18
# Use a specific patch version, prefer Alpine:
FROM node:20.18-alpine3.20
# Or for production APIs, consider Distroless (no shell, minimal OS):
FROM gcr.io/distroless/nodejs20-debian12

Scan your images before pushing to production. Trivy is free and takes about 30 seconds to run:
# Scan a local image for vulnerabilities. The socket mount is required so
# Trivy can read images from the local daemon - this runs on your build
# machine, not in production.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:latest image your-app:latest

# Fail the build if HIGH or CRITICAL vulnerabilities exist
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:latest image --exit-code 1 --severity HIGH,CRITICAL your-app:latest

Mistake 3: Secrets in the image layers
Every RUN, COPY, and ENV instruction in a Dockerfile creates a layer. Those layers persist in the image. If you ever passed a secret through a build argument, set an API key in an ENV instruction, or COPY-ed a file containing credentials, that information is in the image history.
# This is wrong - the key is burned into the image layer:
ARG API_KEY
ENV API_KEY=$API_KEY
# Check if secrets are in your current images:
docker history your-app:latest --no-trunc | grep -iE "key|secret|password|token"

Secrets should be injected at runtime, not at build time. Use Docker Secrets (for Swarm), Kubernetes Secrets, or environment variables injected by your orchestrator at startup. The image itself should contain no secrets.
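For the rare case where a secret is genuinely needed during the build itself (say, a token for a private npm registry), BuildKit secret mounts expose it to a single RUN step without writing it into any layer. A sketch; the secret id and source file are placeholders:

```dockerfile
# syntax=docker/dockerfile:1
# The secret is mounted at /run/secrets/<id> only for this RUN step
# and is never stored in a layer or in the image history
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN=$(cat /run/secrets/npm_token) npm ci --omit=dev
```

Build it with `docker build --secret id=npm_token,src=./npm-token.txt .`, where npm-token.txt lives outside the build context's version control.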
# Inject at runtime via docker run:
docker run -e DATABASE_URL="$DATABASE_URL" your-app:latest
# Or via docker-compose using an .env file (never commit the .env file):
services:
  api:
    env_file: .env

Mistake 4: Mounting the Docker socket
Mounting /var/run/docker.sock into a container gives that container full control over the Docker daemon. It can create new containers, modify existing ones, and escape to the host. It is root on the host with extra steps.
CVE-2025-9074 was a Docker Desktop vulnerability with a CVSS score of 9.3 that exploited API access to the Docker Engine from within a container. The pattern of socket mounting dramatically increases the blast radius of any container compromise.
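When a tool genuinely needs Docker API access (a monitoring agent, for instance), a socket proxy can expose only the endpoints it requires. A sketch based on the Tecnativa docker-socket-proxy project; the environment flags follow that project's convention, and the service names are placeholders:

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1   # allow read-only access to the containers endpoint
      POST: 0         # deny all mutating requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  monitoring-agent:
    image: your-agent:latest
    environment:
      DOCKER_HOST: tcp://socket-proxy:2375
```

The agent now talks to the proxy over TCP and never touches the raw socket, so a compromise of the agent cannot create or modify containers.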
# Never do this unless you have a very specific reason and understand the risk:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
# If you genuinely need it (e.g., a monitoring agent), consider using
# a Docker socket proxy that restricts which API endpoints are accessible:
# https://github.com/Tecnativa/docker-socket-proxy

Mistake 5: No resource limits
Without resource limits, a single misbehaving or compromised container can consume all CPU and memory on the host, taking down every other container with it. This is a denial of service at the infrastructure level and it is trivial to trigger with a fork bomb or a bug that creates an infinite loop.
# docker-compose.yml: set memory and CPU limits
services:
  api:
    image: your-api:latest
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.5"
        reservations:
          memory: 256M
          cpus: "0.25"
# docker run equivalent:
docker run --memory="512m" --memory-reservation="256m" --cpus="0.5" your-api:latest

The reservation is a soft floor Docker tries to maintain under memory pressure; the limit is the hard cap. Set both.
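The fork-bomb case specifically is better handled with a PID limit, which caps how many processes a container can spawn. A sketch; 256 is an arbitrary starting point, tune it to your workload:

```shell
# Cap the number of processes the container can create
docker run --pids-limit=256 --memory="512m" --cpus="0.5" your-api:latest
```

In compose, the equivalent is `pids: 256` under `deploy.resources.limits`.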
The quick audit
Run these against your running containers to get an immediate picture:
# Check what user each container runs as (an empty User field means root, UID 0)
docker ps -q | xargs -I{} docker inspect {} --format '{{.Name}}: User={{.Config.User}}'
# Check if any containers have the Docker socket mounted
docker ps -q | xargs -I{} docker inspect {} --format '{{.Name}}: {{.HostConfig.Binds}}' | grep docker.sock
# Check resource limits on running containers
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"
If any of these landed for you and you want someone to go through the actual Dockerfile, compose config, and image scanning setup, I can do that as part of a DevOps review.