Every web application audit I have done has found something. Not once have I reviewed a production app and come back with a clean report. The issues are different in severity, but the categories repeat: credentials in the wrong place, an auth check that can be bypassed, storage that is open to the internet, inputs that are not validated, and libraries with known vulnerabilities that nobody updated. Here is what each of those looks like in practice.
This is not meant to scare you. It is meant to give you a realistic picture of what a security review actually covers and what it finds, so you can decide whether the app your team built, or the one an agency delivered to you, needs one.
Finding 1: Credentials in places they should not be
The most common finding across every audit: secrets in the wrong place. This takes several forms.
The worst is credentials committed to git. A developer adds a database connection string or an API key directly to a config file and pushes it. Even if they realize the mistake and delete the file later, git history preserves it. Anyone with read access to the repository can run git log -p and see it.
```shell
# How to check your own git history for committed secrets
# Run this from your repository root
git log -p | grep -iE "(password|secret|api_key|token|database_url)" | head -50

# Or use a tool built for this:
# truffleHog scans git history for high-entropy strings and known secret patterns
npx trufflehog git file://. --only-verified
```

The second form: secrets in environment variables prefixed with NEXT_PUBLIC_ or equivalent client-side exposure mechanisms. In Next.js, anything prefixed NEXT_PUBLIC_ is bundled into the JavaScript that ships to the browser. Every visitor to your site can read it by opening DevTools and looking at the source.
The third form: API keys hardcoded into mobile app binaries or JavaScript files served to the browser. A Stripe secret key embedded in a React component. A Twilio account SID and auth token in a frontend file. These are readable by anyone who views page source.
The fix for all of these: secrets belong in server-side environment variables only, set through your deployment platform (Vercel, Railway, AWS Parameter Store, etc.), never in code, never in public-facing bundles. Add a .gitignore entry for .env files and use a tool like truffleHog or GitLeaks in CI to catch violations before they are pushed.
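Beyond CI scanning, a cheap runtime guard is to refuse to boot when a secret-looking variable carries a client-exposed prefix. A minimal sketch, assuming a Node server; the prefix list and name patterns here are illustrative choices, not a complete rule set:

```typescript
// Sketch: flag env vars whose names look secret but carry a prefix
// that bundlers expose to the browser. Patterns are illustrative.
const SECRET_PATTERN = /(SECRET|PASSWORD|TOKEN|API_KEY|DATABASE_URL)/i;
const CLIENT_PREFIXES = ["NEXT_PUBLIC_", "VITE_", "REACT_APP_"]; // shipped to the client

function findExposedSecrets(env: Record<string, string | undefined>): string[] {
  return Object.keys(env).filter(
    (name) =>
      CLIENT_PREFIXES.some((p) => name.startsWith(p)) && SECRET_PATTERN.test(name)
  );
}

// Example: run once at server startup against process.env and fail the boot on a hit.
const exposed = findExposedSecrets({
  NEXT_PUBLIC_API_URL: "https://example.com",          // fine: public, not a secret
  NEXT_PUBLIC_STRIPE_SECRET_KEY: "sk_live_placeholder", // flagged: secret with client prefix
  DATABASE_URL: "postgres://user:pass@host/db",         // fine: server-side only
});
// exposed → ["NEXT_PUBLIC_STRIPE_SECRET_KEY"]
```

A name-based check like this cannot catch every leak, but it catches the most common one: someone copying a server secret into a client-prefixed variable to "make it work."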
Finding 2: Authentication logic that can be bypassed
Auth issues are the category with the widest range of severity. The worst ones allow full account takeover with no credentials. The milder ones allow access to data a user should not be able to see.
The most common pattern I find: authorization checks happen in the frontend but not in the API. The UI hides a button or a page from users who should not have access, but the API route behind it does not verify the caller has permission. An attacker does not use the UI. They call the API directly.
```tsx
// Common vulnerable pattern: auth check only in the component
// components/AdminPanel.tsx
export function AdminPanel() {
  const { user } = useAuth();
  if (user.role !== 'admin') return null; // UI check only
  return <div>...</div>;
}

// The API route has no check:
// app/api/admin/users/route.ts
export async function GET() {
  const users = await db.user.findMany(); // No auth check!
  return Response.json(users);
}

// Anyone can call: GET /api/admin/users
// and get the full user list regardless of their role
```

Other patterns that come up often: password reset tokens that do not expire, session tokens that are not invalidated on logout, no rate limiting on login endpoints (allowing brute-force attacks), and JWT validation that checks the signature but not whether the user still has the claimed role or still exists in the database.
Finding 3: Cloud storage open to the public
S3 buckets and equivalent storage in other clouds are private by default, but they can be made public easily and the setting is not always obvious. Common scenarios where this goes wrong:
- A developer enables public access to host images for a product catalog. The bucket that was meant to hold only product images also has user-uploaded documents in a different prefix because the application writes everything to the same bucket.
- A staging environment bucket gets public access enabled for testing and the setting is never reverted. The staging bucket has real customer data imported for testing.
- A backup bucket with a predictable name has public read enabled. Backups include database dumps with full customer records.
```shell
# Check all S3 buckets in your AWS account for public access
aws s3api list-buckets --query 'Buckets[].Name' --output text | tr '\t' '\n' | while read bucket; do
  echo -n "$bucket: "
  aws s3api get-bucket-acl --bucket "$bucket" 2>/dev/null | grep -q "AllUsers" && echo "PUBLIC" || echo "private"
done

# Also check Block Public Access settings per bucket
aws s3api get-public-access-block --bucket your-bucket-name
```

The correct posture: enable S3 Block Public Access at the account level, not just the bucket level. If you need to serve public files, use CloudFront in front of a private bucket. For user uploads, generate pre-signed URLs with short expiry rather than making the bucket or objects public.
Finding 4: Unsanitized inputs
Input validation issues are the category most closely associated with OWASP Top 10 vulnerabilities. The two that show up most in web apps built today:
SQL injection: User input is interpolated directly into a database query rather than being parameterized. Rarer in applications using ORMs like Prisma or Drizzle (which parameterize by default), but common in applications using raw SQL or query builders incorrectly.
```typescript
// SQL injection vulnerability
const users = await db.query(
  `SELECT * FROM users WHERE email = '${req.body.email}'`
  // req.body.email = "' OR '1'='1" returns all users
  // req.body.email = "admin'--" may bypass authentication
);

// Correct: parameterized query
const users = await db.query(
  'SELECT * FROM users WHERE email = $1',
  [req.body.email] // Driver handles escaping
);

// With Prisma (safe by default):
const user = await prisma.user.findUnique({ where: { email: req.body.email } });
```

Cross-site scripting (XSS): User-supplied content is rendered as HTML without escaping. In React applications this is largely mitigated by JSX's automatic escaping, but it appears in applications that use dangerouslySetInnerHTML with unsanitized content, or in server-rendered templates outside React.
Beyond SQL injection and XSS, the broader issue is absence of input validation entirely. API endpoints that accept a user ID should verify the ID is a valid format before querying. Endpoints that accept a file upload should validate the MIME type and size before processing. None of this requires complex code, but it requires someone to have thought about it.
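What "someone thought about it" looks like in code is usually a handful of small checks that run before any query or file write. A sketch; the UUID format, the allowed MIME types, and the 5 MB limit are illustrative choices for a hypothetical app:

```typescript
// Sketch: validate inputs before they reach the database or disk.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Reject anything that is not a well-formed ID before it ever hits a query.
function isValidUserId(id: unknown): id is string {
  return typeof id === "string" && UUID_RE.test(id);
}

// Illustrative upload policy: a small allowlist and a hard size cap.
const ALLOWED_MIME = new Set(["image/png", "image/jpeg", "application/pdf"]);
const MAX_UPLOAD_BYTES = 5 * 1024 * 1024;

function isAcceptableUpload(mimeType: string, sizeBytes: number): boolean {
  return ALLOWED_MIME.has(mimeType) && sizeBytes > 0 && sizeBytes <= MAX_UPLOAD_BYTES;
}

// Usage in a handler: reject early, before any query or processing runs.
// if (!isValidUserId(req.params.id)) return new Response("Bad Request", { status: 400 });
```

Note the allowlist direction: the code states what is acceptable and rejects everything else, rather than trying to enumerate every dangerous input.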
Finding 5: Outdated dependencies with known CVEs
This finding is almost guaranteed in any application that has not had a dependency review in the past few months. Packages get security patches regularly. Applications do not always update them.
```shell
# Run npm audit to see known vulnerabilities in your dependencies
npm audit

# Sample output:
# 3 vulnerabilities (1 moderate, 2 high)
#
# high: ReDoS in semver package via malicious version string
# Package: semver  Patched in: >=7.5.2  Dependency of: @google-cloud/storage
#
# high: SSRF in axios via malicious redirects
# Package: axios  Patched in: >=1.7.0  Dependency of: stripe (dev)

# To auto-fix non-breaking updates:
npm audit fix

# See the full dependency tree for a specific package:
npm ls semver
```

The severity varies widely. Some audit findings are theoretical vulnerabilities in rarely-used code paths that require unusual conditions to exploit. Others are actively exploited in the wild. The audit output tells you the severity level and whether a patch is available.
The preventive measure is running npm audit as part of your CI pipeline so new vulnerabilities are flagged automatically when they are published. If you are not running CI, schedule a monthly audit run manually. It takes two minutes and the findings can be significant.
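To gate CI on the result rather than eyeball it, you can parse `npm audit --json` and fail the build above a severity threshold. A sketch assuming the npm v7+ report shape, where `metadata.vulnerabilities` holds per-severity counts; the threshold choice is illustrative:

```typescript
// Sketch: fail CI when `npm audit --json` reports high or critical findings.
// Assumes the npm v7+ report shape (metadata.vulnerabilities severity counts).
type AuditReport = {
  metadata: {
    vulnerabilities: { low: number; moderate: number; high: number; critical: number };
  };
};

function shouldFailBuild(report: AuditReport): boolean {
  const v = report.metadata.vulnerabilities;
  return v.high > 0 || v.critical > 0; // illustrative threshold: block on high+
}

// In CI: `npm audit --json > audit.json`, then run this over the parsed file
// and exit non-zero when shouldFailBuild returns true.
const sample: AuditReport = {
  metadata: { vulnerabilities: { low: 0, moderate: 1, high: 2, critical: 0 } },
};
// shouldFailBuild(sample) → true: the two high-severity findings block the build
```

Blocking only on high and critical keeps the gate useful: a build that fails on every low-severity advisory gets ignored within a month.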
What happens after: how findings get prioritized
Not every finding needs to be fixed the same week. A real audit report categorizes findings by severity and gives remediation guidance with a rough priority order. How I typically rank them:
Critical: fix immediately
Exposed secrets in code or git history, public S3 buckets containing sensitive data, authentication bypass that allows access to any account, SQL injection on a publicly accessible endpoint.
High: fix within the week
Missing authorization checks on API routes, stored XSS vulnerabilities, high-severity CVEs in production dependencies, missing rate limiting on auth endpoints.
Medium: fix in the next sprint
Missing security headers (CSP, HSTS, X-Frame-Options), moderate-severity dependency CVEs, overly permissive CORS configuration, missing input length validation.
Low: schedule and track
Low-severity CVEs in non-production dependencies, verbose error messages that expose stack traces, outdated TLS configuration that still functions but is not best practice.
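The missing security headers in the medium bucket are usually a few lines of middleware to fix. A sketch of one baseline set; the CSP value is a deliberate placeholder, since a real policy has to enumerate your application's actual script and asset sources:

```typescript
// Sketch: a baseline set of security response headers.
// The CSP below is a placeholder; a real policy must list your actual sources.
function securityHeaders(): Record<string, string> {
  return {
    "Content-Security-Policy": "default-src 'self'", // placeholder policy
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
  };
}

// Usage sketch (framework-agnostic):
// for (const [name, value] of Object.entries(securityHeaders())) {
//   res.setHeader(name, value);
// }
```

Ship the CSP in report-only mode first if the app has never had one, so a too-strict policy surfaces as reports instead of a broken site.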
What an audit does not cover
A web application security audit covers the application layer: the code, the API, the configuration, the dependencies. It does not cover physical security, employee phishing susceptibility, or social engineering vectors. A separate penetration test might include those, but a code and configuration review does not.
It also does not guarantee that no vulnerabilities exist after the audit. Security is not a one-time state. New vulnerabilities are published in packages every week. New code introduces new paths. An audit gives you a point-in-time picture of what is wrong now. What you do with that picture, and how you maintain security after, is the ongoing part.
$ audit --your-web-app
If you want your app looked at, not just a scanner report but an actual review of the code and configuration, I offer security audits as a standalone engagement.
$ ./request-security-audit.sh →