Cloudflare, the company that processes roughly 20% of global web traffic and has built one of the most aggressive anti-bot empires on the internet, announced a new /crawl endpoint for its Browser Rendering service on Monday. It lets anyone scrape an entire website with a single API call. The intended use cases, per Cloudflare's own changelog post: training AI models, building RAG pipelines, and monitoring content.
Read that list again. These are the exact activities Cloudflare has spent the last two years convincing its customers to block.
The timeline is something else
In mid-2024, Cloudflare rolled out a one-click AI bot blocker that let any site owner shut out scrapers. Over a million customers flipped that switch. The company published dramatic traffic graphs showing the scale of the bot invasion, naming Bytespider and GPTBot as the worst offenders.
By mid-2025, Cloudflare went further: new domains would block AI crawlers by default. CEO Matthew Prince framed it as putting power back in creators' hands. Publishers lined up to praise the move. Wired called it the end of the AI scraping free-for-all.
Nine months later, Cloudflare is selling a scraper.
So what does the /crawl endpoint actually do?
You POST a URL to the endpoint, get a job ID back, and poll for results. The service spins up headless browsers on Cloudflare's edge network, follows links, and returns content as HTML, Markdown, or structured JSON (that last one powered by Workers AI). You can set crawl depth, page limits, and URL patterns to include or exclude.
Jobs run asynchronously with a maximum runtime of seven days. There's a static mode that skips the browser render for faster crawls on simpler sites. It's available on both free and paid Workers plans, though free accounts cap out at 10 minutes of browser time per day. Paid plans get 10 hours per month free, then $0.09 per browser hour after that.
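The submit-and-poll flow described above can be sketched in a few lines. To be clear, the endpoint path, field names, and response shape below are assumptions for illustration based on this description, not Cloudflare's documented API; check the Browser Rendering docs before using any of it.

```python
# Sketch of the submit-and-poll flow: POST a URL, get a job ID, poll
# until the crawl finishes. Endpoint paths and payload fields are
# hypothetical, inferred from the article's description.
import json
import time
import urllib.request

# Hypothetical base URL; the real path lives in Cloudflare's docs.
API_BASE = "https://api.cloudflare.com/client/v4/accounts/{account_id}/browser-rendering"


def build_crawl_request(url: str, depth: int = 2, page_limit: int = 100,
                        fmt: str = "markdown") -> dict:
    """Assemble a crawl-job payload: seed URL, crawl depth, page cap, format."""
    return {
        "url": url,
        "depth": depth,
        "limit": page_limit,
        "format": fmt,  # "html", "markdown", or structured "json"
    }


def submit_and_poll(account_id: str, token: str, payload: dict,
                    interval: float = 5.0) -> dict:
    """POST the job, then poll the returned job ID until it completes."""
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    base = API_BASE.format(account_id=account_id)
    req = urllib.request.Request(f"{base}/crawl",
                                 data=json.dumps(payload).encode(),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        job_id = json.load(resp)["result"]["id"]
    while True:
        poll = urllib.request.Request(f"{base}/crawl/{job_id}", headers=headers)
        with urllib.request.urlopen(poll) as resp:
            result = json.load(resp)
        if result["result"]["status"] in ("completed", "failed"):
            return result
        time.sleep(interval)  # jobs can run up to seven days, so poll patiently
```

The async job model matters here: a seven-day maximum runtime means clients have to treat this as a long-running batch job, not a request-response call.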
Cloudflare's defense of its own crawler is predictable: it honors robots.txt, including crawl-delay directives. "Well-behaved bot" is how the changelog puts it, and the phrasing feels deliberately pointed at competitors who don't bother with such niceties. Disallowed URLs still show up in your results, tagged with a "disallowed" status, which is a nice touch for auditing purposes if nothing else.
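That auditing use is worth a concrete sketch: since disallowed URLs come back tagged rather than silently dropped, you can report exactly what robots.txt kept you out of. The result shape below (a list of dicts with "url" and "status" keys) is an assumed schema for illustration, not Cloudflare's documented one.

```python
# Sketch: separating fetched pages from robots.txt-disallowed ones in a
# crawl result. The "url"/"status" dict shape is an assumption made for
# this example, not Cloudflare's documented response schema.

def audit_crawl(pages: list[dict]) -> tuple[list[str], list[str]]:
    """Split a crawl result into (fetched URLs, disallowed URLs)."""
    fetched = [p["url"] for p in pages if p.get("status") != "disallowed"]
    blocked = [p["url"] for p in pages if p.get("status") == "disallowed"]
    return fetched, blocked


# Example with a made-up result payload:
pages = [
    {"url": "https://example.com/", "status": "ok"},
    {"url": "https://example.com/private", "status": "disallowed"},
]
fetched, blocked = audit_crawl(pages)
# blocked -> ["https://example.com/private"]
```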
The gatekeeper problem
The developer community noticed the contradiction immediately. On Hacker News, the top comment laid out the accusation bluntly: Cloudflare spent years convincing sites to wall themselves off behind its bot protection, and now it's selling the key to get back in.
"It's hard to see how this isn't extorting folks by offering a working solution that, oh, Cloudflare doesn't block," wrote one commenter who builds crawlers professionally. That suspicion, that Cloudflare's bot protection might treat its own crawling service more favorably, is going to follow this product.
To be fair, the documentation states that Bot Management, WAF, and Turnstile rules apply to Browser Rendering's crawler the same way they apply to any other bot. The docs even note that users need to create a WAF skip rule if they want their own Cloudflare crawl jobs to get through their own bot protection. Whether that's reassuring or just evidence of how tangled this gets depends on your level of cynicism.
Both sides of every transaction
The business logic makes more sense when you remember Cloudflare also launched a pay-per-crawl system last year, letting publishers charge bots for access. Cloudflare is positioning itself as the tollbooth operator for the entire AI data supply chain: charge publishers to protect their content, charge AI companies to access it, take a cut both ways. It's a strategy that only works if you sit in the middle of 20% of the web's traffic. Which they do.
The /crawl endpoint is in open beta now. No word on when it graduates to general availability.