Anthropic accidentally shipped version 2.1.88 of Claude Code to the public npm registry with a 59.8 MB source map file Tuesday. The map allows a full rebuild of the 512,000-line TypeScript codebase. Forensics reveal architectural features the company has not discussed in public.
- Anthropic accidentally leaks its 512,000-line Claude Code source code via an unmasked npm registry source map.
- The 59.8 MB file reveals Undercover Mode and Kairos, a persistent background daemon capable of autonomous file system execution.
- These hidden features contradict Anthropic's public transparency narrative, raising critical accountability concerns for AI-generated contributions in open-source software.
Build Failure and Exposure
Anthropic engineers released the enterprise CLI tool without excluding source maps from the production build. These files reverse minification. Anyone can now reconstruct the original readable code. Cached distributions across multiple CDNs kept the exposure live. Package immutability means every download before the fix contained the raw source. A basic build configuration error caused the leak.
Risk Assessment
The leak hit CLI client code and tool invocation logic. Model weights, backend inference servers, and API infrastructure stay private. Operational bypass is impossible. Users still require valid API tokens and paid credits to run the tool. Design intent is now visible: actual compute remains under Anthropic control.
Kairos: Background Automation
Kairos functions as a persistent background daemon. It operates without user prompts. Execution involves access to local file systems and GitHub webhooks. Memory consolidation routines called “dreaming” reorganize context during idle periods.
A Coordinator Mode spawns worker agents and delegates tasks without human approval. Retry logic and autonomous prioritization mark a shift from reactive tools to always-on infrastructure.
Genuine News Deserves Honest Attention.
High-conviction projects require an intelligent audience. Connect with readers who value sharp reporting.
👉 Submit Your PRUndercover Mode: Attribution Erasure
Hardcoded instructions trigger when the tool detects use on public repositories. The system deletes “AI-Co-Authored-By” tags and strips generation metadata from commits before pushes. Leaked prompts tell the model: “You are operating UNDERCOVER… Your commit messages MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”
Open-source provenance relies on transparent attribution. Hiding AI contributions compromises accountability. Anthropic has not addressed whether autonomous processes propagate undetected fixes through these covert commits.
Defensive Architecture
Anthropic built two defensive systems to block competitors from training on Claude Code data. One mechanism poisons scraped training data by injecting fake tool calls into the output stream. A second replaces metadata with vague summaries. Opaque agent logic prevents competitors from reconstructing what the agent executed. Independent modules run these defenses separately from Undercover Mode.
Chain Street’s Take
A basic npm mistake did what a hack could not. The exposure reveals Anthropic’s operational playbook. The company builds infrastructure-scale agents while hiding AI work in public projects. This contradicts its transparency narrative.
Defensive mechanisms show a technical effort to stop competitive analysis. Leading players now compete on concealment as much as capability. Regulators face a structural hurdle: traditional oversight fails when systems hide their own tracks. Substance matters. Marketing does not. Regulators must police the code, not the press releases.
Activate Intelligence Layer
Institutional-grade structural analysis for this article.





