On November 13, 2025, Google rolled out Code Wiki, an AI service that crawls public GitHub repositories and generates a continuously updated, structured wiki for each one. The official announcement calls reading existing code "one of the biggest, most expensive bottlenecks in software development." The public preview at codewiki.google is free for open-source repos; a Gemini CLI extension for private codebases is gated behind a waitlist with no announced timeline or pricing.
Not exactly a new product
The thing the announcement doesn't quite spell out: Code Wiki isn't original work. It's a rebuild of Auto Wiki, a product from a startup called Mutable.ai. Omar Shams, Mutable.ai's founder, is now a Google research engineer and is listed as a co-author on the launch post. The Register spotted his Hacker News comment confirming the lineage: Mutable.ai was acquired by Google, and Code Wiki is essentially Auto Wiki rebuilt at Google scale, with Gemini swapped in for whatever was running underneath.
This isn't disqualifying. Lots of products start as acquisitions and get better with bigger compute. But it does change the framing. Code Wiki is less "Google invented this" and more "Google bought it, scaled it, and shipped it as a preview." That distinction matters when you're trying to figure out how mature the underlying tech actually is.
What it actually does
Three things, mostly. Code Wiki scans an entire repository after every commit and regenerates the documentation, so the docs and the code stay in sync (in theory). Every wiki page is hyperlinked back to specific files and definitions, which is the part that does actually feel useful when you're spelunking through an unfamiliar project. And there's a Gemini chat that uses the current wiki as context, so you can ask about a codebase in plain English instead of grepping for half an hour.
It also auto-generates architecture, class, and sequence diagrams. These regenerate when the code changes, which sounds great until you wonder whether the diagrams are actually useful or just look useful. (More on that in a second.)
The accuracy problem
The Register tested it on Microsoft's ASP.NET Core repository. Tim Anderson asked the Gemini chat whether Postgres could be used as a distributed cache backend. Code Wiki said there was "no mention of direct support or an out-of-the-box implementation for PostgreSQL." Microsoft's own documentation says the opposite: Azure Database for PostgreSQL works as a distributed cache via the IDistributedCache interface.
Half right, really. The repo itself doesn't ship a Postgres implementation, so the answer is technically scoped to the codebase, which is what Code Wiki claims to do. But this is exactly the kind of "scoped" answer that misleads anyone who isn't already clear on the difference between what's in the repo and what the framework supports. If you're reaching for Code Wiki because you don't know the codebase, you probably also don't know that distinction.
Code Wiki's answer to this sits footer-style on every page: "Gemini can make mistakes, so double-check it."
The classic AI cop-out.
The whole pitch of the product is that you don't have to read the code yourself; the disclaimer is that you should probably read the code yourself.
Other people did this already
Devin's DeepWiki has been doing essentially the same thing for months, and it already supports private repositories, which Code Wiki doesn't. Code Wiki's differentiation, beyond the Google brand, is mostly Gemini's context window and the diagrams. That might matter for very large codebases. It might not.
Technical writer Fabrizio Ferri Benedetti, who tested both Code Wiki and DeepWiki on real projects, was less polite. He called the genre "documentation theater" and described the output for complex projects as "at best dry reference, at worst a dumpster fire of hallucinations, Escherian information architecture, and subtly wrong facts and diagrams." That's a strong take. But his underlying complaint, that AI-generated docs paper over a real labor problem instead of solving it, is harder to dismiss once you actually try to use one of these wikis to learn something specific.
Does anyone want docs that change every commit?
One question that keeps coming up in the discussion threads is whether you actually want the documentation to change every time someone pushes. "If I could be in the middle of reading it, and the next day it's completely different, that's a huge waste of my time," one Hacker News commenter wrote. The whole appeal of documentation is that it's a stable reference. Regenerating it on every push trades stability for currency, which may or may not be the trade developers actually want, and which Code Wiki has decided for them.
And the indexing isn't quite there yet. In the days after launch, multiple users on LinkedIn reported that mid-size and even very popular repositories had not been processed; the Node.js main repo, with over 114,000 stars, was one of those named. Coverage will presumably broaden over time, but Code Wiki's value proposition lives or dies on whether it can ingest the repo you care about, on demand. For now, that's an open question.
What's next
The public preview is open. The Gemini CLI extension for private repos remains on a waitlist, which is the version most enterprise teams will care about; running an AI documentation pipeline against a proprietary codebase requires very different guarantees about data handling than scraping public GitHub. If I had to guess where the actual money is in Code Wiki, it's there, in the eventual private-repo offering, and the blog post doesn't address pricing for any of it. Watch that line.




