Someone built an entire social platform using AI prompts, launched it, and within days researchers had access to 1.5 million API keys and 30,000 email addresses. No exploit needed. The keys were just sitting there in the JavaScript. This is where "vibe coding" is landing people right now.
What happened with Moltbook
In early 2026, a founder launched Moltbook: an AI social network. He built the entire thing by prompting an AI assistant. Did not write a single line of code himself. Shipped it. Got users.
Wiz Security researchers looked at it and found a Supabase API key sitting in plain client-side JavaScript. Not buried. Not obfuscated. Right there, readable by anyone who opened DevTools. That key had full database access because Row Level Security had never been configured. The AI did not set it up. The founder did not know to check.
Researchers were able to read the entire database without authenticating. 1.5 million API authentication tokens. 30,000 email addresses. Private messages. All of it. The founder fixed it within hours once notified. But the damage window was days.
The ICAEW has now issued formal warnings about vibe-coded platforms handling any kind of user data. This is not a one-off story.
The actual numbers
Veracode tested over 100 AI models on code generation tasks and found that 45% of AI-generated code contained confirmed security vulnerabilities. A separate analysis found AI code introduces 2.74x more flaws than human-written code. Not because AI is incompetent but because it has no fear. It writes working code fast and moves on. It does not pause to ask: is the API key exposed? Is auth actually enforced here? Does this endpoint validate input?
CVEs directly attributed to AI-generated code: 6 in January 2026, 15 in February, 35 in March. That trajectory is not slowing down.
AI coding assistants hit 90% enterprise adoption by end of 2025. The gap between "we used AI to build this" and "we checked if AI built it securely" is where breaches are living right now.
What AI consistently gets wrong
I have reviewed a handful of AI-generated codebases over the past year. The same patterns show up repeatedly.
1. Secrets in client-side code
AI generates API calls and hardcodes keys inline. It does not default to environment variables. It does not separate server-side from client-side. You end up with something like this shipped to production:
// What AI generates by default
const supabase = createClient(
  "https://yourproject.supabase.co",
  "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." // service_role key
);

The service_role key bypasses Row Level Security entirely. Anyone with your bundle can make themselves admin. The fix is to use the anon key in the browser and put any privileged calls behind server functions or Edge Functions. But AI will not tell you that unless you ask.
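One way to structure that split, as a sketch: the service_role key lives only in server-side environment variables, and the config shipped to the browser carries only the public anon key. The variable names here (SUPABASE_SERVICE_ROLE_KEY and so on) are illustrative conventions, not requirements.

```javascript
// Sketch: keep the privileged key strictly server-side.
// Env variable names are illustrative.

// Server-only config -- reads the secret from the environment,
// fails loudly if it is missing instead of shipping a fallback.
function serverConfig() {
  const key = process.env.SUPABASE_SERVICE_ROLE_KEY;
  if (!key) throw new Error("SUPABASE_SERVICE_ROLE_KEY is not set");
  return { url: process.env.SUPABASE_URL, key };
}

// Client config -- safe to bundle; contains only the public anon key,
// which is designed to be exposed and is constrained by RLS.
function clientConfig() {
  return {
    url: "https://yourproject.supabase.co",
    key: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY, // anon key only
  };
}
```

The useful property is that the service key cannot end up in the client bundle by accident, because no client-side module ever reads it.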
2. No Row Level Security
RLS is Supabase's mechanism for ensuring a user can only read their own rows. If you have a users table and a documents table, RLS policies enforce at the database level that user A cannot read user B's documents even if your API has a bug.
AI does not enable RLS by default. It builds the schema, wires up the queries, and gets the feature working. You have to explicitly tell it to add RLS, and even then, you need to verify the policies actually do what you think. This is what bit Moltbook.
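For illustration, a minimal pair of policies looks like this in Supabase's SQL editor. The table and column names (documents, owner_id) are hypothetical; auth.uid() is Supabase's helper for the current user's ID.

```sql
-- Sketch: enable RLS, then grant back only owner access.
-- Table and column names are illustrative.
alter table documents enable row level security;

-- Users can read only their own documents
create policy "read own documents" on documents
  for select using (auth.uid() = owner_id);

-- Users can insert documents only as themselves
create policy "insert own documents" on documents
  for insert with check (auth.uid() = owner_id);
```

Note that enabling RLS with no policies at all blocks every request made with the anon key, which is the safe failure mode: each policy then grants back exactly what you intend, and nothing more.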
3. Auth checks happen in the frontend only
AI will often put auth logic in React components. If a user is not logged in, redirect them away. That looks correct. The problem is the API endpoints it also generated have no corresponding server-side auth check. The frontend redirects. But someone can just call the endpoint directly.
// What AI generates: auth check in the component
if (!user) return redirect("/login");

// What it forgets: the API route that feeds this component
// app/api/documents/route.ts
export async function GET() {
  // No auth check here. Anyone can call this directly.
  const docs = await db.query("SELECT * FROM documents");
  return Response.json(docs);
}

Frontend auth is a UX layer. Backend auth is the actual security layer. You need both. AI regularly gives you only the first one.
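The missing half is a check inside the handler itself, before any data is touched. A framework-agnostic sketch: getUserFromRequest here is a hypothetical stand-in for real session or token verification, and the query is scoped to the authenticated user rather than the whole table.

```javascript
// Sketch: server-side auth enforcement inside the handler.
// getUserFromRequest is a stand-in for real token/session verification.
function getUserFromRequest(req) {
  const token = (req.headers["authorization"] || "").replace("Bearer ", "");
  // Hypothetical check; a real app would verify a JWT or session here.
  return token === "valid-token" ? { id: "user-1" } : null;
}

async function getDocuments(req, db) {
  const user = getUserFromRequest(req);
  if (!user) {
    // Reject before touching the database -- this is the real gate.
    return { status: 401, body: { error: "unauthorized" } };
  }
  // Scope the query to the authenticated user, not the whole table.
  const docs = await db.query(
    "SELECT * FROM documents WHERE owner_id = $1",
    [user.id]
  );
  return { status: 200, body: docs };
}
```

Even if the frontend redirect is deleted tomorrow, this endpoint still refuses unauthenticated callers.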
4. No input validation
AI generates code that trusts request bodies completely. No schema validation. No type checking at runtime. You get endpoints that take whatever is posted, pass it directly into a database query, and return the result. SQL injection and NoSQL injection are real outcomes of this.
// Common AI-generated pattern
app.post("/search", async (req, res) => {
  const { query } = req.body;
  // Request body goes straight into the database
  const results = await db.collection("products").find({ name: query }).toArray();
  res.json(results);
});

Even when AI uses an ORM like Prisma, it will sometimes build raw queries for complex lookups. Those raw queries have no sanitization applied.
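The minimal fix is to validate the body against an explicit schema before it reaches the database, and to pass values as bound parameters or exact-match filters rather than splicing them in. A dependency-free sketch of the validation step (libraries like zod or joi do this more thoroughly):

```javascript
// Sketch: reject anything that doesn't match the expected shape.
function parseSearchBody(body) {
  if (typeof body !== "object" || body === null) {
    return { ok: false, error: "body must be an object" };
  }
  const { query } = body;
  // Require a plain, bounded string. This rejects operator objects
  // like { "$gt": "" }, the payload shape behind NoSQL injection.
  if (typeof query !== "string" || query.length === 0 || query.length > 200) {
    return { ok: false, error: "query must be a non-empty string" };
  }
  return { ok: true, value: { query } };
}
```

In the route above, a failed parse would return 400 before any database call; only the validated string ever reaches the query.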
5. Passwords stored incorrectly or not at all
If you ask AI to "add a login system" without specifying bcrypt or argon2, it sometimes stores passwords in plain text or uses a weak hash like MD5. I have seen this in codebases. The developer tested login, it worked, and they moved on. They never looked at what was actually stored in the database.
How to check your own app right now
If your app was built by AI or by someone using AI heavily, run through this list before you have any more users on it.
$ cat security-checklist.txt
1. Open DevTools on your live site
Go to Sources or Network. Search for anything that looks like an API key, a token, or a connection string. If you can find it in the browser, so can anyone else.
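You can also scan a downloaded bundle with a few regexes. This sketch covers a handful of common key shapes; the patterns are illustrative and far from exhaustive (dedicated scanners like gitleaks or trufflehog do this properly):

```javascript
// Sketch: flag strings in a JS bundle that look like leaked credentials.
// Patterns are illustrative, not exhaustive.
const SECRET_PATTERNS = [
  { name: "JWT (e.g. Supabase keys)",
    re: /eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}/g },
  { name: "Stripe live secret key", re: /sk_live_[A-Za-z0-9]{10,}/g },
  { name: "AWS access key ID", re: /AKIA[0-9A-Z]{16}/g },
  { name: "Postgres connection string",
    re: /postgres(?:ql)?:\/\/\S+:\S+@\S+/g },
];

function scanBundle(source) {
  const findings = [];
  for (const { name, re } of SECRET_PATTERNS) {
    for (const match of source.matchAll(re)) {
      // Truncate the match so the report itself doesn't leak the key.
      findings.push({ name, sample: match[0].slice(0, 20) + "..." });
    }
  }
  return findings;
}
```

Run it over each file in your build output; any finding means the key is public and needs rotating, not just deleting.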
2. Check your .env.example and git history
Run git log --all --full-history -- .env to see if a real .env file was ever committed. Keys committed and deleted are still in the git history. They need to be rotated immediately.
3. Hit your API endpoints directly without logging in
Use curl or Postman. Make the same requests your app makes but remove the auth header. If you still get data back, your backend has no auth enforcement.
4. Check your database for RLS
If you use Supabase, open the dashboard and check which tables have RLS enabled. If none of them do and your anon key is exposed anywhere, that is a critical issue.
5. Look at how passwords are stored
Run a query and look at one password hash. A bcrypt hash starts with $2b$. An argon2 hash starts with $argon2. MD5 is 32 hex characters. Plain text is plain text. If it is not one of the first two, you have a problem.
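That spot check can be automated with a small classifier based on the formats above. It is a heuristic sketch: it identifies the shape of the stored value, not whether the hash is valid.

```javascript
// Sketch: guess how a stored password value was produced.
// Heuristic only -- checks the shape of the string, not its validity.
function classifyStoredPassword(value) {
  if (/^\$2[aby]\$/.test(value)) return "bcrypt";       // e.g. $2b$12$...
  if (value.startsWith("$argon2")) return "argon2";     // e.g. $argon2id$...
  if (/^[a-f0-9]{32}$/i.test(value)) return "md5-like (weak)";
  if (/^[a-f0-9]{40}$/i.test(value)) return "sha1-like (weak)";
  if (/^[a-f0-9]{64}$/i.test(value)) return "sha256-like (unsalted fast hash?)";
  return "unrecognized -- possibly plain text";
}
```

Anything outside the first two answers deserves a closer look before you take on more users.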
6. Check what third-party packages were installed
AI picks packages freely. Run npm audit or pip audit and look at the output. Pay attention to high and critical severity findings.
"But the AI said it was secure"
I hear this. The problem is that AI evaluates security at the level of the code it just wrote. It does not have visibility into your infrastructure, your Supabase settings, your environment variable handling, your deployment pipeline, or your server configuration. It will tell you the code looks fine because the code often does look fine in isolation. The vulnerability is in the gap between the code and how it is actually deployed.
Moltbook's code probably worked exactly as the AI intended. The problem was a configuration setting that no prompt ever addressed.
What to do if you are not sure
The practical options are:
- Run the checklist above yourself. If you find issues, address them before onboarding more users.
- Use a tool like Semgrep or CodeQL to scan your codebase for common vulnerability patterns. Both have free tiers and work on most languages.
- Get a security review before launch or before handling real user data. Not a full pen test necessarily. Even a focused audit of your auth flow, your API endpoints, and your environment handling will catch most critical issues.
The founder who built Moltbook fixed his issue in hours. But his users' data was already exposed. That window is the cost of not checking before launch.
$ audit --ai-generated-app
If your app was built with AI tools and you want to know what's actually wrong before your users find out, I can go through it: auth, API endpoints, environment handling, deployment config.
$ ./request-security-review.sh →