In February 2026, a popular AI-built app leaked 18,697 user records — including 4,538 student accounts from UC Berkeley and UC Davis. The app had no human-written code. The same tools that built it are the ones being sold to small business owners as a fast, cheap way to get online.
That app — built with Lovable, one of the most popular AI website builders — had its security built backwards: strangers could get in while real registered users were locked out. Security researchers found 16 vulnerabilities, six of them critical. The same pattern is common in AI-built small business websites. If your site collects any customer data and was built quickly with AI tools, this is worth reading.
A security researcher examined a real, live application built on Lovable — one of the most popular "describe what you want and AI builds it" platforms of the past year. The app had been seen over 100,000 times on Lovable's own showcase. It looked professional. It functioned. People were using it.
The researcher found 16 security vulnerabilities. Six of them were critical.
The worst one: the authentication system was built backwards. Strangers could get in. Actual registered users were getting locked out. The database behind the app — containing 18,697 user records — was accessible to anyone who knew how to look. Those records included 4,538 student accounts from UC Berkeley and UC Davis.
Not a hypothetical. Not a stress test. A live application with real users whose information was sitting exposed, on a platform marketed to people who do not know how to code.
If you have seen the phrase and are not sure what it means: vibe coding is building a website, app, or digital product by describing what you want in plain language to an AI — Lovable, Bolt.new, Cursor, Wix AI, Squarespace AI — and letting it write the code without you writing any of it yourself.
The results are genuinely impressive for what they are. A landing page, a portfolio, a simple booking form, a basic storefront — AI builders can produce these quickly and cheaply, and for many use cases, that is completely fine.
The problem is not the tools. The problem is the gap between "this looks like it works" and "this is safe for real people's data."
An app that lets strangers browse your menu or read your blog has a small exposure if something goes wrong. An app that takes customer names, emails, and phone numbers through a contact form has a very different exposure. An app with user accounts, booking history, or any kind of stored client information has a different one still. The AI that built your site does not know the difference and does not warn you.
The Register's reporting on the Lovable breach describes something specific about how the failure happened — and it matters because it is not random. It is structural.
AI coding tools are trained to make things work. When you describe a login system, the AI builds a login system that functions: users can sign up, sign in, and access their account. What it does not automatically do is think through who else might be able to access those accounts, what happens if someone sends a malformed request, or whether the rules governing who can see which records are actually enforced at the database level.
The authentication was backwards because the AI built what was described, not what was implied. No one asked: what should happen when someone who is not logged in tries to read another user's data? Without that question being asked and answered explicitly, the AI left the door open.
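To see what that unasked question looks like in code, here is a minimal sketch in Python. The record store and field names are made up for illustration — this is not how Lovable or any particular platform stores data — but the shape of the bug is the same: the first function "works," the second one is safe.

```python
# Hypothetical in-memory record store standing in for a real database.
RECORDS = {
    "rec-1": {"owner": "alice", "phone": "555-0100"},
    "rec-2": {"owner": "bob", "phone": "555-0199"},
}

def get_record_insecure(record_id):
    # What "it works" looks like: any caller who knows (or guesses)
    # a record ID can read that record. No ownership check at all.
    return RECORDS.get(record_id)

def get_record_secure(record_id, requester):
    # The implied requirement made explicit: only the record's owner
    # may read it; everyone else is denied by default.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requester:
        return None
    return record
```

In a real application this ownership rule belongs at the database layer (for example, as a row-level security policy), so that every route and integration inherits it rather than each one having to remember the check.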
Veracode research puts a number on how common this is: 45% of AI-generated code fails basic security checks. Not advanced attacks. The first tests any security review would run. Nearly half.
That number is not a reason to panic about every AI-built tool in existence. It is a reason to know whether yours has ever been checked.
The Lovable story is easy to read as someone else's problem. A tech platform. Students at a university. Not your customer base, not your industry.
But here is the overlap: the same tools that built that app are being used right now by freelancers, web designers, and marketing agencies to build small business websites. Quickly and affordably, which is exactly what the pitch promises. A restaurant's online ordering system. A law firm's client intake form. A contractor's booking page. A medical practice's appointment scheduler.
Every one of those collects real information from real people. Most of them have never had a security review of any kind.
If a customer submits their name, phone number, and address through your contact form, where does that go? Is it stored somewhere? Who can access it? If your booking system keeps appointment history, is that data behind any meaningful protection? If you use an integration that connects your site to a payment processor or a CRM, where do those credentials live?
These are not technical questions that only developers care about. They are business liability questions. A data exposure involving your customers' information — even a small one, even one you did not cause and did not know about — can damage trust in a way that takes years to repair.
You do not need to understand the code to ask the right questions. Start here.
Who built your site, and how? If it was built primarily with an AI tool, or by someone who relies heavily on AI-generated code, ask them directly: was there a security review before launch? Not "did it work" — did anyone check specifically for access control and data exposure issues?
What does your site collect and where does it go? Walk through your own contact forms, booking systems, and login pages. For each one, ask: if a stranger submitted this form, where does the data land? Is it in a database, a spreadsheet, an email inbox? Who has access to that destination?
Are you using a third-party platform or a custom backend? Booking tools like Calendly, payment processors like Stripe, and form tools like Typeform have their own security teams and compliance programs. Custom-built backends — even AI-built ones — do not come with that protection automatically.
Has your site changed significantly in the last year? New integrations, new features, new forms, a new developer — any of these can introduce vulnerabilities that did not exist at the time of the original build.
Do you know what happens if something goes wrong? Who do you call? What is the process? Does anyone have a backup? Knowing the answer before something happens is a very different position than finding out during one.
The businesses that were not exposed in incidents like the Lovable breach have one thing in common: someone asked the security questions before launch, not after.
That conversation is not standard in most small business web projects. The typical engagement — especially with a freelancer or a cheap build — ends when the site goes live and looks correct. The security layer, if it gets discussed at all, is treated as optional or as something to revisit later.
Later is when the researcher finds the exposure. Later is when a customer calls to say their information appeared somewhere it should not have. Later is when the problem becomes public.
The AI tools themselves are improving. Lovable and others are adding security checks and better defaults as these incidents come to light. But generic checks cannot account for your specific configuration, your specific integrations, or your specific business context. That still requires a human who knows what to look for.
The story here is not that AI tools are dangerous. They are not — and ruling them out entirely is not the point. The story is that building a website and securing a website are two separate jobs, and the first one does not automatically include the second.
Most small business owners would not hand a stranger the keys to a filing cabinet that contains every client's contact information. But that is effectively what an unsecured database behind a website amounts to. The difference is that the filing cabinet is visible and the database is not — so the risk stays invisible until it is not.
If your site handles customer data in any form and you are not confident it has been reviewed from a security standpoint, that is worth knowing now.
👉 Get a security review of your site or Book a strategy call with Sitora
If this article reflects the kind of problem you are solving, these are the most relevant next steps inside SitoraWeb.
Improve trust, search visibility, and lead quality with a custom website built around how buyers actually compare options.
Explore Website Services

Build secure portals, dashboards, internal tools, and customer-facing web apps that remove operational friction.

Explore Web App Services

Get validation, workflow analysis, and a roadmap before you commit to the wrong build path.

Explore Consulting

The rest of the blog covers search strategy, site architecture, analytics, automation, and common mistakes that slow down growth.