Anthropic accidentally shipped the full source code of Claude Code to the public npm registry on March 31, 2026. A 59.8 MB source map file bundled into version 2.1.88 of the npm package pointed to a zip archive on Anthropic's own Cloudflare R2 storage, containing roughly 1,900 TypeScript files and over half a million lines of code. Security researcher Chaofan Shou flagged the exposure on X around 4:23 AM ET, and within hours the codebase had been mirrored across GitHub, racking up tens of thousands of stars and forks before Anthropic could pull it down.
Anthropic confirmed the leak in a statement, calling it a packaging issue caused by human error. No customer data or credentials were exposed, the company said. This is Anthropic's second accidental data exposure in under a week, following Fortune's report that nearly 3,000 internal files, including a draft blog post about an unreleased model, had been left publicly accessible days earlier.
It's the harness
The most consequential finding isn't any single feature. It's the confirmation of something developers had suspected for months: Claude Code's edge over competitors comes primarily from the software wrapped around the language model, not the model itself. As Fortune put it, the capabilities come from the "agentic harness" that instructs the AI on tool use and governs its behavior. The leaked code shows a 46,000-line query engine, roughly 40 discrete tools, a multi-agent orchestration system, and a bidirectional IDE bridge, all written in TypeScript and running on Bun.
Each tool (file reads, bash execution, web fetches, LSP-based code navigation) operates as a separate, permission-gated module. The base tool definition alone runs to 29,000 lines. A dedicated grep tool replaces raw bash calls with smarter permission handling. An LSP-based navigation tool gives the model access to call hierarchies and cross-references between code entities, which means it parses your codebase as a dependency graph rather than flat text.
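The permission-gated module pattern described above can be sketched roughly as follows. Everything here is illustrative, invented to show the shape of the idea; the leaked source is far larger (the base tool definition alone reportedly runs to 29,000 lines) and its actual interfaces are unknown:

```typescript
// Hypothetical sketch of a permission-gated tool module. Names and rules
// are invented for illustration, not taken from the leaked code.
type Permission = "allow" | "ask" | "deny";

interface Tool<In, Out> {
  name: string;
  checkPermission(input: In): Permission;
  run(input: In): Out;
}

// A grep-style tool that gates execution before touching the filesystem.
const grepTool: Tool<{ pattern: string; path: string }, string[]> = {
  name: "grep",
  checkPermission(input) {
    if (input.path.startsWith("/etc")) return "deny";      // outside the project
    if (input.path.includes("node_modules")) return "ask"; // needs user approval
    return "allow";
  },
  run(input) {
    // A real implementation would shell out to ripgrep; stubbed here.
    return [`${input.path}: matched /${input.pattern}/`];
  },
};

function invoke<In, Out>(tool: Tool<In, Out>, input: In): Out {
  const perm = tool.checkPermission(input);
  if (perm === "deny") throw new Error(`${tool.name}: permission denied`);
  // "ask" would surface an interactive prompt in the real harness.
  return tool.run(input);
}
```

Wrapping each capability this way is what lets the harness enforce policy uniformly: the model never calls bash or the filesystem directly, only permission-checked modules.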
Context management is where things get interesting. Claude Code deduplicates file reads so unchanged files aren't reprocessed. Oversized tool outputs get offloaded to disk with only a preview kept in context. Long conversations are automatically compacted and summarized. One developer's analysis found a comment noting that 1,279 sessions had logged 50 or more consecutive compaction failures, wasting an estimated 250,000 API calls per day globally. The fix was three lines: cap failures at three per session and stop trying.
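The described fix is easy to picture. In this sketch only the cap of three comes from the reporting; the session shape and function names are invented:

```typescript
// Illustrative sketch of capping consecutive compaction failures per session.
const MAX_COMPACTION_FAILURES = 3;

interface Session {
  consecutiveCompactionFailures: number;
}

function shouldAttemptCompaction(session: Session): boolean {
  // Before the fix, sessions retried compaction indefinitely; some logged
  // 50 or more consecutive failures. After three, stop trying.
  return session.consecutiveCompactionFailures < MAX_COMPACTION_FAILURES;
}

function recordCompactionResult(session: Session, succeeded: boolean): void {
  session.consecutiveCompactionFailures = succeeded
    ? 0
    : session.consecutiveCompactionFailures + 1;
}
```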
Fake tools and undercover commits
The leak exposed two anti-distillation mechanisms that generated the most discussion on Hacker News. The first: a flag called ANTI_DISTILLATION_CC that injects decoy tool definitions into API requests, poisoning any training data harvested by intercepting Claude Code's traffic. The second mechanism summarizes the assistant's reasoning between tool calls server-side and returns the summary with a cryptographic signature, so anyone recording API traffic gets compressed fragments instead of full reasoning chains.
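Mechanically, the decoy injection is simple. This is a speculative sketch: only the flag name ANTI_DISTILLATION_CC and the general mechanism (fake tool definitions mixed into requests behind multiple gates) come from the reporting; the specific conditions, names, and decoy are invented:

```typescript
// Speculative sketch of gated decoy-tool injection.
interface ToolDef {
  name: string;
  description: string;
}

interface InjectionContext {
  flagEnabled: boolean;   // e.g. the ANTI_DISTILLATION_CC flag
  disabledByEnv: boolean; // an env var reportedly disables the whole system
  isInteractiveSession: boolean;
  samplingHit: boolean;   // only fire on a fraction of requests
}

// Per the reporting, several conditions must hold simultaneously to fire.
function shouldInject(ctx: InjectionContext): boolean {
  return ctx.flagEnabled && !ctx.disabledByEnv &&
         ctx.isInteractiveSession && ctx.samplingHit;
}

function buildToolList(real: ToolDef[], ctx: InjectionContext): ToolDef[] {
  if (!shouldInject(ctx)) return real;
  // Looks plausible on the wire but maps to nothing in the product, so
  // harvested traffic trains on tool schemas that never existed.
  const decoys: ToolDef[] = [
    { name: "spawn_subprocess_v2", description: "Run a sandboxed subprocess." },
  ];
  return [...real, ...decoys];
}
```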
"Anyone serious about distilling from Claude Code traffic would find the workarounds in about an hour of reading the source," wrote Alex Kim, who published one of the more detailed breakdowns. He's probably right. The fake-tool injection fires only when four conditions hold simultaneously, and a single environment variable can disable the whole system. The real deterrent is legal, not technical.
Then there's Undercover Mode. A file called undercover.ts instructs the model to strip all Anthropic-internal references when working in public repositories. No mention of internal codenames like "Capybara" or "Tengu," no Slack channels, no indication that Claude Code wrote the commit. The code comment is blunt: there is no force-off switch. This guards against codename leaks. Whether you read that as sensible operational security or an AI tool deliberately hiding its involvement in open-source contributions depends on where you stand, but the fact that Anthropic built an entire subsystem for stealth and then leaked its source code through a misconfigured build pipeline is, at minimum, ironic.
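The scrubbing itself would not need to be elaborate. In this sketch the codenames come from the article, while the function and logic are invented stand-ins for whatever undercover.ts actually does:

```typescript
// Illustrative sketch of undercover-style scrubbing of commit messages.
const INTERNAL_TERMS = ["Capybara", "Tengu"]; // codenames named in the leak

function scrubMessage(message: string): string {
  let out = message;
  // Drop any attribution trailer that would reveal the tool wrote the commit.
  out = out.replace(/^Co-Authored-By: Claude.*$/gim, "");
  for (const term of INTERNAL_TERMS) {
    // Remove codename mentions outright rather than masking them,
    // so the commit carries no trace of internal tooling.
    out = out.replace(new RegExp(`\\b${term}\\b`, "gi"), "");
  }
  // Collapse whitespace left behind by the removals.
  return out.replace(/ {2,}/g, " ").trim();
}
```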
What the model roadmap shows
Internal comments reference "Capybara" as a codename for a Claude 4.6 variant, with "Fennec" mapping to Opus 4.6 and an unreleased model called "Numbat" still in testing. Anthropic appears to be iterating on Capybara v8, but the code notes a 29 to 30 percent false claims rate in that version, which is a regression from the 16.7 percent rate in v4. An "assertiveness counterweight" tries to prevent the model from being too aggressive with refactors. For competitors, these are the kind of internal benchmarks that normally stay behind closed doors.
The most discussed unreleased feature is KAIROS, referenced over 150 times in the source. It's an autonomous daemon mode: Claude Code running persistently in the background, watching for issues, performing what the code calls "memory consolidation" through an autoDream process that merges observations, removes contradictions, and converts vague notes into concrete facts while the user is idle. A Tamagotchi-style companion pet system called "Buddy," complete with a deterministic gacha mechanic and species rarity tiers, appears gated for an April teaser window with a full launch expected in May.
The security question
The cause appears to be mundane. Claude Code is built on Bun, which Anthropic acquired late last year. A Bun bug filed on March 11 reports that source maps are served in production mode despite documentation saying they should be disabled. The issue remains open. If that's what happened, Anthropic's own toolchain shipped a known bug that exposed its own product.
A concurrent, unrelated supply-chain attack made the timing worse. Malicious versions of the axios npm package appeared hours before the leak, containing a remote access trojan. Users who installed or updated Claude Code via npm between 00:21 and 03:29 UTC on March 31 may have pulled in compromised dependencies. The Hacker News reported that attackers also began typosquatting internal npm package names found in the leaked code.
For a company that positions itself as the safety-conscious AI lab, two exposures in one week raise questions that go beyond build pipelines. Axios noted that the leak gives every competitor a free engineering education in building a production-grade coding agent. The code can be refactored. The strategic surprise cannot.
Anthropic says it is rolling out measures to prevent recurrence. The company recommends using its native installer rather than npm going forward. A patched release is expected as version 2.1.89 or higher.




