Security researcher Chaofan Shou posted a single sentence to X on March 31, 2026, that Anthropic probably would have paid good money to prevent: "Claude code source code has been leaked via a map file in their npm registry." Within hours, the post had crossed 4.5 million views. Multiple GitHub mirrors appeared with the full, neatly extracted codebase. And Anthropic, the AI company that has built its entire brand around being the careful ones, was scrambling to unpublish a package version for the second time in thirteen months.
The mechanism was almost insultingly simple. Version 2.1.88 of the npm package shipped with a cli.js.map file, 59.8 megabytes, that contained the complete, unobfuscated TypeScript source for Claude Code. The source map's internal references pointed to an R2 cloud storage bucket on Anthropic's own infrastructure, where the entire src/ directory sat as a downloadable zip. Anyone who ran npm install could find it in their node_modules folder.
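The extraction required no special tooling. A source map is ordinary JSON: a sources array of file paths and, when the bundler embeds them, a sourcesContent array holding the full original text of each file. A minimal sketch of what recovery looks like (the map object below is an illustrative stand-in, not Anthropic's actual file):

```javascript
// A .map file is plain JSON. When "sourcesContent" is present, the
// complete original source ships alongside the compiled bundle.
const map = {
  version: 3,
  file: "cli.js",
  sources: ["src/index.ts", "src/tools/bash.ts"], // original file paths
  sourcesContent: [
    "export function main() {}",   // full original text, one entry per file
    "export function runBash() {}",
  ],
  mappings: "AAAA",
};

// "Extracting the codebase" is just pairing each path with its content.
function extractSources(sourceMap) {
  return (sourceMap.sources || []).map((path, i) => ({
    path,
    content: sourceMap.sourcesContent ? sourceMap.sourcesContent[i] : null,
  }));
}

for (const { path, content } of extractSources(map)) {
  console.log(`${path}: ${content ? content.length + " chars" : "(not embedded)"}`);
}
```

That is the whole trick: walk the two arrays, write each entry to disk, and the src/ tree reassembles itself.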
This happened before
On February 24, 2025, the day Claude Code launched, developer Dave Shoemaker opened the bundled cli.mjs file in Sublime Text and found an 18-million-character inline source map encoded in base64. Anthropic quietly pulled the file, scrubbed it from npm caches, and moved on. Shoemaker only recovered the original because Sublime Text's undo history still had it.
Between that first incident and this one, Anthropic published 363 versions of the package. At some point, the source map came back. Not inline this time, but as a separate .map file sitting right next to cli.js. Nobody at Anthropic caught it.
The Bun bundler, which Claude Code uses, generates source maps by default. Excluding them requires either configuring the bundler to skip generation or adding *.map to the .npmignore file. Neither happened. And here is the part that's genuinely hard to process: the leaked codebase contains a subsystem called "Undercover Mode," specifically designed to scrub internal Anthropic information from git commits so it doesn't leak into public repositories. They built a system to prevent leaks. Then they shipped the source in a .map file.
What's actually in there
The extracted codebase spans roughly 1,900 TypeScript files and 512,000 lines of code. Claude Code runs on Bun (not Node), uses React with Ink for terminal rendering, and is architecturally far more elaborate than the polished CLI it appears to be from the outside. The main entry point alone is 785KB.
The tool system, around 40 discrete tools spanning 29,000 lines, handles everything from file reads to bash execution to sub-agent spawning. Each tool is permission-gated. The query engine, at 46,000 lines, manages all LLM API calls, streaming, caching, and orchestration. It is by far the largest module in the codebase.
But the unreleased features are what's getting attention on Hacker News and Reddit. The code contains 44 feature flags covering capabilities that are fully built but hidden behind compile-time toggles that evaluate to false in external builds.
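The irony of the toggles is that dead-code elimination only protects the bundle, not the map: a minifier will strip an `if (false)` branch from cli.js, but the original source, gated code and all, still rides along in sourcesContent. The pattern presumably looks something like this (the flag names match the reporting on the leak; the shape of the code is a guess):

```javascript
// Illustrative sketch of compile-time feature gating; not Anthropic's code.
// In an external build the bundler inlines these constants as `false` and
// the minifier drops every gated branch from the shipped cli.js -- but the
// unbundled source, flags included, survives in the source map.
const FEATURE_FLAGS = {
  KAIROS: false,    // persistent "Always-On Claude" mode
  BUDDY: false,     // Tamagotchi-style pet
  ULTRAPLAN: false, // remote cloud planning
};

// List the flags that are switched on for this build.
function enabledFeatures(flags) {
  return Object.entries(flags)
    .filter(([, on]) => on)
    .map(([name]) => name);
}

console.log(enabledFeatures(FEATURE_FLAGS)); // empty in an external build
```

Which is why "hidden behind a flag" and "hidden from anyone reading node_modules" turned out to be very different things.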
KAIROS, BUDDY, and other things Anthropic wasn't ready to talk about
KAIROS is a persistent assistant mode, described in the code as "Always-On Claude." It maintains context across sessions, stores memory logs in a private directory, and includes a nightly "dreaming" process that consolidates information while the user sleeps. The code handles midnight boundary conditions so the dream cycle doesn't break. It gets access to tools that regular Claude Code doesn't have, and uses a special "Brief" output mode designed for a persistent assistant that shouldn't flood your terminal.
ULTRAPLAN offloads complex planning tasks to a remote cloud container running Opus 4.6, gives it up to 30 minutes to think, and lets users approve the result from a browser. There is a sentinel value called __ULTRAPLAN_TELEPORT_LOCAL__ that teleports the result back to the local terminal. I don't know what to make of that name.
And then there's BUDDY. A Tamagotchi-style AI pet that sits in a speech bubble next to your input. Eighteen species (duck, dragon, axolotl, capybara, mushroom, and ghost among them), rarity tiers from common to 1% legendary, cosmetics including hats and shiny variants, five stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK. Claude generates a name and personality on first hatch. The plan, according to the leaked code: a teaser rollout from April 1 through 7, then a full launch in May, starting with Anthropic employees.
Someone at Anthropic is having fun.
The pattern problem
This is Anthropic's second leak in five days. On March 26, Fortune reported that a CMS misconfiguration had exposed nearly 3,000 unpublished assets, including draft blog posts describing an unreleased model called Claude Mythos (internal product name: Capybara, which explains the pet species). That leak revealed details about a model Anthropic itself warned poses "unprecedented cybersecurity risks." The company blamed "human error in the CMS configuration."
Five days later: human error in the npm configuration.
Statements from Anthropic spokespeople after the CMS incident were careful to note that AI tools were not at fault, and that the exposed material consisted of "early drafts" that didn't involve "core infrastructure, AI systems, customer data, or security architecture." The npm leak is harder to frame that way. This is the actual source code of a production tool with filesystem, terminal, and codebase access on developer machines. The permissions model, authentication flows, and tool approval logic are all now public. As one Hacker News commenter put it, "A company you're trusting with filesystem access is failing to properly secure its own software."
Others are less concerned. The source map has technically been extractable since launch, and the argument goes that client-side CLI code is not the same as model weights or training data. This is the slot machine, not the house edge. Fair enough. But the slot machine's permission system is what stands between Claude and your filesystem.
What happens now
Anthropic has pulled the affected package version and pushed an update without the source map. But the code is already mirrored across multiple GitHub repositories, forked hundreds of times, and being actively studied. Whether Anthropic pursues takedowns remains unclear. The architectural patterns, the tool system, the multi-agent orchestration, all of that is now part of the public conversation about how to build agentic AI applications.
The timing compounds the embarrassment. Anthropic is positioning for an IPO. Its competitor OpenAI just finished pretraining its own model, codenamed Spud. And Anthropic's brand, the thing that differentiates it from everyone else in the AI race, is that it thinks harder about safety and risk than anyone. Two configuration errors in five days doesn't destroy that argument, but it does make the next safety briefing a little awkward.
One of the replies on Shou's original post summed it up: "Nothing says 'agentic future' like shipping the source by accident."