Gal Nagli, head of threat exposure at Wiz and a security researcher with over $3 million in bug bounties to his name, pointed Claude Code at RentAHuman and told it to hack. Four and a half minutes later, the agent had found an open Firestore database containing 187,714 personal email addresses, user IDs, and Stripe customer IDs. No authentication required. No sophisticated exploit. Just a curl request to a publicly readable collection.
Nagli posted the full timeline on X, and it reads like a speedrun. At the three-minute mark, the AI agent had already scanned all JavaScript files on the page and pulled out the Firebase config. By minute four, it was testing Firestore REST endpoints. The /humans collection came back 200 OK with full documents. The API was hiding emails; Firestore was showing everything.
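The probe itself is almost trivially simple. As a rough sketch of the technique (project and collection names here are placeholders, not RentAHuman's actual identifiers): extract the Firebase project ID from client-side JavaScript, build the Firestore REST URL, and check the status code.

```python
import urllib.request
import urllib.error

def firestore_url(project_id: str, collection: str) -> str:
    # Firestore's REST API exposes documents at a predictable path,
    # so no SDK or credentials are needed to try reading a collection.
    return (f"https://firestore.googleapis.com/v1/projects/{project_id}"
            f"/databases/(default)/documents/{collection}")

def probe(project_id: str, collection: str) -> int:
    # Returns the HTTP status code: 200 means the security rules allow
    # public reads; 403 means unauthenticated access is denied.
    try:
        with urllib.request.urlopen(firestore_url(project_id, collection)) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
```

This is the entire "exploit": one GET request per collection name. The only discovery work is scraping the project ID out of the site's bundled Firebase config, which ships to every visitor's browser by design.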
The platform that rents humans couldn't protect them
RentAHuman launched on February 1, 2026, built by software engineers Alexander Liteplo and Patricia Tani. Wired covered it as a curiosity: a marketplace where AI agents post bounties for physical tasks they can't do themselves, and humans sign up to complete them. Count pigeons in Washington Square Park. Deliver flowers to Anthropic's office. Hold a sign that says "An AI Paid Me To Hold This Sign." That last one paid $100.
The platform grew fast, reportedly crossing 500,000 signups within weeks. Nature wrote about it. Scientists were listing their skills. The premise is genuinely interesting, even if Wired's Reece Rogers found the actual gig experience to be mostly dead ends and AI startup marketing stunts disguised as bounties.
But here's the thing. Liteplo has said publicly that he "vibe coded" the platform in about a day and a half. And the security posture reflected exactly that timeline.
A four-minute audit
Nagli's Claude Code session, timestamped in the screenshots he shared, tells the whole story. The agent scanned the site's JavaScript, found the Firebase project ID, constructed the Firestore REST URL, and started probing collections. The Realtime Database returned 404. The /users collection returned 403 (secured). But /humans? Wide open. Every document, every field.
The discrepancy between the API and Firestore is what makes this particular screwup so instructive. RentAHuman's own API was apparently stripping email addresses from responses. Someone thought about privacy at the application layer. But the underlying Firestore database had no security rules restricting reads on that collection, so anyone who went around the API and hit Firestore directly got everything, including Stripe customer IDs.
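The fix belongs in the database's security rules, not the API layer. A hedged sketch of what locked-down Firestore rules for a collection like /humans could look like (the structure and field names here are illustrative, not RentAHuman's actual schema):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Illustrative: only an authenticated user may read their own
    // document. Nothing is world-readable, no matter what the
    // application-layer API chooses to strip from its responses.
    match /humans/{userId} {
      allow read: if request.auth != null && request.auth.uid == userId;
      allow write: if false;
    }
  }
}
```

With rules like these in place, going around the API and hitting Firestore directly returns 403, the same result Nagli's agent got from the properly secured /users collection.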
By the eight-minute mark, Nagli's agent had fired off a critical Slack alert. By eight and a half minutes, it had written a full vulnerability report.
We've seen this before. We keep seeing it.
Firebase misconfigurations are not new. In 2024, three security researchers scanning 5.2 million domains found over 900 websites leaking data through open Firebase databases, exposing an estimated 125 million user records including plaintext passwords. Check Point Research found roughly 5% of all Firebase implementations had some form of misconfiguration. Appthority found the same pattern back in 2018.
What's changed is the speed at which vulnerable apps now ship. Vibe coding means a functional product can go from idea to live deployment in under two days, with hundreds of thousands of users accumulating within a week. The security review that might have caught an open Firestore collection? It never happened, because there was no review at all.
Nagli knows this pattern intimately. Weeks before the RentAHuman finding, he led Wiz's investigation into Moltbook, the "social network for AI agents" that went viral in late January 2026. Same story, different database: a misconfigured Supabase backend exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages. Moltbook's creator, Matt Schlicht, had told the New York Times he didn't write a single line of code.
"When properly configured with Row Level Security, the public API key is safe to expose," Nagli explained in the Moltbook writeup. Without RLS policies, though, "this key grants full database access to anyone who has it."
Same principle applies to Firebase security rules. Google provides the tools. Developers just have to use them.
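For comparison, enabling Row Level Security on a Supabase table takes two SQL statements. A sketch with assumed table and column names (a real schema would differ):

```sql
-- Illustrative names. Once RLS is enabled, Supabase's public anon key
-- can only read rows that a policy explicitly allows.
alter table humans enable row level security;

create policy "users read only their own row"
  on humans for select
  using (auth.uid() = user_id);
```

The asymmetry is the whole problem: the insecure state is the default you get by doing nothing, and the secure state requires knowing these statements exist.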
AI finds what AI forgot to secure
There's an irony here that's hard to ignore. Nagli used Claude Code, an AI coding agent, to discover a vulnerability in a platform that was itself built by AI coding tools. The attacker was automated. The builder was automated. The only human in the loop was Nagli, who typed one command and watched.
A Towards Data Science analysis published recently documents how AI coding assistants routinely set Firebase and Supabase databases to public access when they encounter permission errors during development. The model's optimization target is "make the code work," not "make the code secure." When a developer who doesn't know the difference accepts every suggestion without reading the diff, the database ships open.
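The failure mode is concrete. When a permission error blocks progress, the path of least resistance is a wildcard rule of roughly this shape (the canonical "open everything" pattern, shown here as an example of what not to ship):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // "Make the code work": every document in every collection,
      // readable and writable by anyone on the internet.
      allow read, write: if true;
    }
  }
}
```

It makes the permission error disappear, which is exactly what the model was asked to do. It also makes the database public.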
Escape's research team scanned over 5,600 vibe-coded applications and found more than 2,000 vulnerabilities, 400 exposed secrets, and 175 instances of personally identifiable information including medical records. Veracode's testing found 45% of AI-generated code contains OWASP Top 10 vulnerabilities, a number that hasn't improved with newer models.
And the Cal AI breach, disclosed just days ago on March 9, 2026? An open Firebase backend exposing 3.2 million health records from the calorie-tracking app. The attacker who posted the data on BreachForums used the handle "vibecodelegend." I genuinely can't tell if that's a joke.
What happens next
RentAHuman hasn't publicly commented on the breach as of this writing. Nagli's post suggests he disclosed responsibly, but the timeline from discovery to fix (if there's been one) isn't clear from the available information. The 187,714 figure was "at the time" of his test, per his post, which means the actual exposure window could have been longer and the number could be higher given the platform's rapid growth.
For the half-million-plus people who signed up to rent their bodies to AI agents, the leaked data includes email addresses and Stripe customer IDs. That's enough for targeted phishing and, depending on what Stripe metadata was accessible, potentially more.
The broader pattern is clear enough that it barely needs stating. Vibe-coded apps are shipping with the same class of misconfiguration that security researchers have been documenting since at least 2018. The difference now is scale and speed: more users, faster, with less oversight. Google provides Firebase security rules. Supabase provides Row Level Security. Both are documented. Both are ignored, over and over, by code that optimizes for functionality.
Nagli's four-minute audit makes the cost of finding these bugs essentially zero. The cost of not finding them, for 187,000 people who just wanted to hold a sign for a robot, is the part that hasn't been calculated yet.