
Claude Code Source Code Leaked: 512,000 Lines, 44 Feature Flags, and an Always-On AI Daemon

On March 31, 2026, Anthropic accidentally exposed the full source code of Claude Code — their $2.5B ARR AI coding tool — via a source map file left in an npm package. Here's what 512,000 lines of TypeScript reveal about hidden feature flags, autonomous agents, and anti-competitor tactics.

DevPik Team · April 2, 2026 · 9 min read

What Happened: The Biggest Accidental Source Leak in AI History

On March 31, 2026, Anthropic shipped version 2.1.88 of @anthropic-ai/claude-code to the public npm registry. Bundled inside was a 59.8 MB JavaScript source map file (.map) — an internal debugging artifact that was never supposed to leave the build server.

Within hours, the entire Claude Code codebase was reconstructed: 512,000+ lines of TypeScript across 1,906 files. The source map contained the full agentic harness — the scaffolding that turns a Claude model into a terminal-based coding agent. No model weights were exposed, but everything else was: system prompts, feature flags, security mechanisms, internal codenames, and an unreleased product roadmap.

The root cause? Claude Code is built on Bun, which Anthropic acquired in late 2025. Bun generates source maps by default. Someone on the release team forgot to add *.map to .npmignore or configure the files field in package.json to exclude debugging artifacts. One missing line in a config file created the largest accidental source leak in AI history.
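The fix is a one-line packaging allowlist. Here is a minimal, illustrative `package.json` fragment showing how the `files` field (with npm's `!` negation patterns) can exclude source maps from the published tarball; the paths are hypothetical, not Anthropic's actual layout:

```json
{
  "name": "@anthropic-ai/claude-code",
  "files": [
    "dist/",
    "!dist/**/*.map"
  ]
}
```

Running `npm pack --dry-run` against a config like this lists the exact tarball contents before anything reaches the registry.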

By 4:23 AM ET, security researcher Chaofan Shou (@Fried_rice on X), an intern at Solayer Labs, broadcast the discovery. Within 24 hours, the leaked code was mirrored on GitHub, where it surpassed 84,000 stars and 82,000 forks, making it the fastest-growing repository in GitHub history.

Timeline of the Claude Code Source Code Leak

Here is the chronological breakdown of events:

| Time (UTC) | Event |
| --- | --- |
| March 31, ~00:20 | Claude Code v2.1.88 published to npm with the source map included |
| March 31, ~00:21–03:29 | Coincidentally, the [axios npm supply chain attack](/blog/axios-npm-supply-chain-attack) window overlaps: trojanized axios versions appear on npm |
| March 31, ~08:23 | Chaofan Shou discovers the source map and posts findings on X |
| March 31, ~10:00 | GitHub mirrors begin appearing; Hacker News thread reaches front page |
| March 31, ~14:00 | Anthropic issues official statement confirming the leak |
| March 31, ~18:00 | Repository surpasses 50,000 GitHub stars |
| April 1, morning | Anthropic files DMCA takedown notices targeting ~8,100 GitHub repositories |
| April 1, afternoon | Backlash erupts: the DMCA sweep was overbroad, hitting legitimate forks of Anthropic's own public repos |
| April 1, evening | Boris Cherny (Head of Claude Code) retracts the bulk notices, limiting them to 1 repo + 96 forks |

Inside the Leaked Code: What Was Exposed

To be clear: the leak exposed Claude Code's agentic harness — the TypeScript application layer that orchestrates tool calls, manages sessions, handles terminal I/O, and wraps the Claude API. It did not expose model weights, training data, or customer information.

Here is what the codebase revealed:

  • 44 hidden feature flags gated behind GrowthBook — controlling unreleased capabilities
  • Full system prompts — the exact instructions sent to Claude models for every session
  • KAIROS — an unreleased always-on autonomous daemon mode
  • Anti-distillation mechanisms — techniques to poison competitor training data
  • Undercover mode — a stealth module that hides Anthropic employee contributions to open-source
  • Buddy/Tamagotchi companion — a full virtual pet system with 18 species and RPG stats
  • Native client attestation — DRM-like verification for API calls using Bun's Zig layer
  • Frustration detection — regex-based profanity matching to detect user frustration
  • Architecture patterns — prompt caching optimization, multi-agent coordination, and terminal rendering using game-engine techniques

The 44 Hidden Feature Flags

Claude Code's feature flags are managed through GrowthBook and control everything from experimental UI to core agent behavior. The most notable flags discovered include:

Agent Autonomy:
- KAIROS — Always-on background daemon mode (referenced 150+ times)
- autoDream — Memory consolidation during idle periods
- persistent_sessions — Sessions that survive terminal closure
- background_daemon_workers — Background task execution on 5-minute cron cycles

Security & Anti-Competitive:
- ANTI_DISTILLATION_CC — Fake tool injection to corrupt competitor training data
- NATIVE_CLIENT_ATTESTATION — Binary-level API call verification
- tengu_anti_distill_fake_tool_injection — GrowthBook gate for anti-distillation
- tengu_attribution_header — Remote killswitch for client attestation

Unreleased Features:
- buddy_companion — Tamagotchi-style AI pet system
- undercover_mode — Stealth mode for Anthropic employee contributions
- multi_agent_coordination — Orchestrated multi-agent collaboration
- coordinator_mode — Agent-of-agents orchestration layer

Infrastructure:
- prompt_cache_break_detection — Tracks 14 cache-break vectors
- auto_compact — Automatic context compaction (had a bug wasting ~250K API calls/day)
- CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS — Master killswitch for experimental features
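The gating described above can be approximated with a small helper. This is an illustrative sketch, not Anthropic's code: the flag names come from the leak, but the `FlagStore` shape and the killswitch wiring are assumptions based on how the article describes CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS.

```typescript
// Illustrative feature-flag gate; the FlagStore shape is hypothetical.
type FlagStore = Record<string, boolean>;

// A few of the experimental flags named in the leak.
const EXPERIMENTAL_FLAGS = new Set([
  "tengu_anti_distill_fake_tool_injection",
  "buddy_companion",
  "undercover_mode",
]);

function isFlagOn(
  store: FlagStore,
  flag: string,
  env: Record<string, string | undefined>,
): boolean {
  // Master killswitch: the env var overrides any individual experimental flag.
  if (EXPERIMENTAL_FLAGS.has(flag) && env.CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS) {
    return false;
  }
  return store[flag] ?? false;
}
```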

KAIROS: Claude Code's Autonomous Daemon Mode

The most significant unreleased feature is KAIROS — named after the Greek rhetorical concept meaning "recognizing and acting at the perfect moment." Unlike chronos (sequential time), kairos implies contextual awareness.

KAIROS transforms Claude Code from a request-response tool into an always-on background agent. The scaffolding found in the source code includes:

  • Background daemon workers that persist after terminal closure
  • GitHub webhook subscriptions — so KAIROS can react to repo events autonomously
  • A `/dream` skill for "nightly memory distillation" — processing and consolidating the day's context
  • `autoDream` memory consolidation that runs during idle periods, reconciling contradictions and converting tentative observations into verified facts
  • 5-minute cron refresh cycles for maintaining awareness
  • Daily append-only logs for audit trails

The agent doesn't run on a fixed schedule — it decides when to engage based on context. This represents a fundamental shift from AI-as-tool to AI-as-coworker. The feature was clearly being tested internally but was not yet available to users.

If KAIROS ships, Claude Code would become the first mainstream AI coding tool capable of monitoring your codebase, triaging issues, and preparing PRs while you sleep.
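The "decides when to engage" behavior can be sketched as a context-scoring check. Everything here is hypothetical: the leak describes the behavior, not this code, and the `RepoContext` fields, weights, and thresholds are invented for illustration.

```typescript
// Hypothetical kairos-style engagement check: instead of a fixed schedule,
// the daemon scores recent repo context and acts only past a threshold.
interface RepoContext {
  minutesSinceLastUserActivity: number;
  openFailingChecks: number;
  unreadIssueEvents: number;
}

function shouldEngage(ctx: RepoContext): boolean {
  // Don't interrupt an active session; wait for a quiet period.
  if (ctx.minutesSinceLastUserActivity < 30) return false;
  // Weight signals: failing CI is more urgent than unread issue chatter.
  const urgency = ctx.openFailingChecks * 3 + ctx.unreadIssueEvents;
  return urgency >= 3;
}
```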

Anti-Distillation: How Anthropic Poisons Competitor Training Data

Perhaps the most controversial finding was the anti-distillation system — a mechanism designed to corrupt the training data of competitors who might be recording Claude's API traffic.

Here is how it works:

  1. The ANTI_DISTILLATION_CC flag sends anti_distillation: ['fake_tools'] in API requests
  2. The server injects decoy tool definitions into the system prompt — tools that don't exist but look plausible
  3. If a competitor is recording traffic to train their own model, their training data gets poisoned with fake tool schemas
  4. A secondary mechanism summarizes reasoning chains with cryptographic signatures, hiding the full chain-of-thought from external observers

The mechanism is gated behind the tengu_anti_distill_fake_tool_injection GrowthBook flag and only activates for first-party CLI sessions. Notably, it is trivially defeated: a MITM proxy that strips the anti_distillation field prevents the injection entirely, and setting CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS disables it completely.
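The four steps above can be sketched from both sides of the wire. This is an illustrative reconstruction, not the leaked code: the `ToolDef` and `ApiRequest` shapes and the helper names are invented; only the anti_distillation field and its 'fake_tools' value come from the article's description.

```typescript
// Hypothetical shapes; only the anti_distillation field name is from the leak.
interface ToolDef { name: string; description: string; }
interface ApiRequest { tools: ToolDef[]; anti_distillation?: string[]; }

// Client side: the CLI merely signals opt-in when the flag is enabled.
function decorateRequest(req: ApiRequest, flagOn: boolean): ApiRequest {
  if (!flagOn) return req;
  return { ...req, anti_distillation: ["fake_tools"] };
}

// Server side (conceptually): plausible but nonexistent tools are appended,
// so any recorded traffic used for distillation carries fake schemas.
function injectDecoys(tools: ToolDef[], decoys: ToolDef[]): ToolDef[] {
  return [...tools, ...decoys];
}
```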

This reveals how seriously Anthropic takes the threat of model distillation — and how the AI industry's competitive dynamics are playing out at the protocol level.

The Capybara Model and Undercover Mode

### Capybara: The Next Claude Model

The leaked code references Capybara (also referred to as Mythos in a related leaked document from the prior week) — the internal codename for a Claude 4.6 variant. While specific details about the model's capabilities were not in the source code (model details live server-side), the references suggest Anthropic has been testing a new model variant internally through Claude Code.

### Undercover Mode: Hiding AI Contributions

The undercover.ts module (~90 lines) is designed to strip Anthropic internals from external repository contexts. When activated:

  • The system instructs Claude to never mention internal codenames like "Capybara" or "Tengu"
  • Co-Authored-By attribution is stripped when Anthropic employees contribute to external repositories
  • Internal Slack channels, repo names, and references to "Claude Code" itself are suppressed

Critically: there is no force-OFF. You can force Undercover Mode on, but you cannot force it off — a deliberate design choice to prevent accidental leaks of internal information. The irony of this existing in code that was itself leaked is not lost on the developer community.
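The no-force-OFF behavior amounts to a one-way latch. A minimal sketch of the idea (the class and method names are ours, not from the leaked undercover.ts):

```typescript
// Hypothetical one-way latch mirroring the described behavior: once enabled,
// undercover mode cannot be switched back off within the session.
class UndercoverMode {
  private enabled = false;

  forceOn(): void {
    this.enabled = true;
  }

  // Requests to disable are ignored by design, preventing accidental leaks.
  requestOff(): void {
    /* intentionally a no-op */
  }

  isEnabled(): boolean {
    return this.enabled;
  }
}
```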

The Axios npm Attack: A Perfect Storm

In a cruel twist of timing, the Claude Code source leak happened on the exact same day as one of the largest npm supply chain attacks in history.

Between 00:21 and 03:29 UTC on March 31, an attacker compromised the npm credentials of the axios library's lead maintainer and published two backdoored releases: axios@1.14.1 and axios@0.30.4. These contained a cross-platform Remote Access Trojan (RAT) that silently installed the moment any developer or CI/CD pipeline ran npm install.

Google's Threat Intelligence Group later attributed the attack to UNC1069, a North Korea-nexus financially motivated threat actor.

The overlap matters: Claude Code users who ran npm install or npm update during that 3-hour window may have pulled both the leaked source map AND the trojanized axios package. If you updated Claude Code on March 31, you should:

  1. Check your axios version — downgrade immediately if on 1.14.1 or 0.30.4
  2. Rotate all secrets and API keys that were accessible in your environment
  3. Audit your CI/CD pipeline logs for unexpected network connections
  4. Use our [JSON Formatter](/developer-tools/json-formatter) to inspect your package-lock.json for unexpected dependency changes
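Step 1 can be automated against a lockfile. A sketch assuming npm's v2/v3 package-lock.json format, where installed packages are keyed by their node_modules path; the helper name is ours:

```typescript
// Scan an npm v2/v3 lockfile object for the trojanized axios releases.
const COMPROMISED = new Set(["1.14.1", "0.30.4"]);

interface Lockfile {
  packages?: Record<string, { version?: string }>;
}

function findCompromisedAxios(lock: Lockfile): string[] {
  const hits: string[] = [];
  for (const [path, meta] of Object.entries(lock.packages ?? {})) {
    // Matches both top-level and nested installs of axios.
    if (path.endsWith("node_modules/axios") && meta.version && COMPROMISED.has(meta.version)) {
      hits.push(`${path}@${meta.version}`);
    }
  }
  return hits;
}
```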

Anthropic's Response and the DMCA Overreach

Anthropic's official statement was measured:

"Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach."

But their follow-up actions were less restrained. On April 1, Anthropic filed DMCA takedown notices targeting approximately 8,100 GitHub repositories. The problem? The takedown was dramatically overbroad — it hit legitimate forks of Anthropic's own publicly released Claude Code repository, not just mirrors of the leaked source.

Developers flooded social media with screenshots of DMCA notices hitting repos that had nothing to do with the leak. Boris Cherny, Anthropic's Head of Claude Code, acknowledged the error and retracted the bulk of the notices, scaling back to 1 primary repository and 96 forks that contained the accidentally released source code.

The incident drew comparisons to past corporate DMCA overreaches and raised questions about whether Anthropic's legal team acted too hastily under pressure — especially given the company is reportedly preparing for an IPO at a $200B+ valuation.

What This Means If You Use Claude Code in Production

### For Individual Developers

The leak itself doesn't compromise your data. No API keys, customer data, or model weights were exposed. However:

  • Update Claude Code to the latest version (the source map was removed in subsequent releases)
  • Check your axios version if you updated on March 31
  • Review your `.npmrc` and consider pinning dependencies to prevent auto-updates during supply chain attacks
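On the `.npmrc` point, two standard npm settings are worth knowing; whether they fit your workflow depends on your dependencies:

```ini
# .npmrc: pin exact versions so routine installs don't silently pull a
# freshly published (possibly trojanized) patch release
save-exact=true

# Optional: disable lifecycle scripts, the usual install-time RAT vector.
# Note this can break packages that legitimately rely on postinstall hooks.
ignore-scripts=true
```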

### For Engineering Teams

The leak is a case study in npm supply chain security:

  • Always configure .npmignore or the files field in package.json — never rely on defaults
  • Use npm pack --dry-run to verify exactly what gets published
  • Consider using npm provenance for build attestation
  • Implement lockfile auditing in CI/CD — use our [Code Share](/developer-tools/code-share) tool to share and review dependency diffs with your team

### For the AI Industry

The exposed codebase is essentially a blueprint for building an agentic AI coding tool. Competitors like Cursor, Windsurf, and OpenAI Codex now have a detailed reference architecture for:

  • Prompt caching strategies (14 cache-break vectors tracked)
  • Terminal rendering optimization (game-engine techniques)
  • Multi-agent coordination patterns
  • Security sandboxing (23 bash security checks)
  • Anti-distillation and client attestation approaches

Whether or not competitors use this information, the competitive moat that Claude Code's proprietary architecture provided has been significantly eroded.

Easter Eggs: The Buddy System and 187 Spinner Verbs

Not everything in the leak was serious. The buddy/companion.ts file reveals a complete Tamagotchi-style companion pet system that was apparently planned for an April 1–7 rollout:

  • 18 different species including a duck, dragon, capybara, and a "chonk"
  • A deterministic gacha system with rarity tiers (common to legendary)
  • 1% shiny variant chance on hatching
  • RPG-style stats including DEBUGGING and SNARK
  • Personality descriptions written by Claude on first hatch
  • Species names encoded with String.fromCharCode() to evade grep-based build checks
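The grep-evasion trick is easy to reproduce. An illustrative snippet (the helper name is ours):

```typescript
// Encoding a string as character codes means a plain-text grep for the
// species name finds nothing in the source, while runtime output is unchanged.
function hidden(...codes: number[]): string {
  return String.fromCharCode(...codes);
}

// The literal "capybara" never appears in this file.
const species = hidden(99, 97, 112, 121, 98, 97, 114, 97);
```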

And tucked inside the companion file: exactly 187 spinner verbs — the animated status messages that appear while Claude is thinking. Someone at Anthropic had a lot of fun writing those.

The codebase also revealed some less fun internal issues: print.ts spans 5,594 lines with a single function reaching 3,167 lines across 12 nesting levels, and a bug in autoCompact.ts was wasting an estimated 250,000 API calls per day globally before being patched.


Frequently Asked Questions

Was Claude Code source code actually leaked?
Yes. On March 31, 2026, Anthropic accidentally published a 59.8 MB source map file in version 2.1.88 of the @anthropic-ai/claude-code npm package. This allowed the full reconstruction of 512,000+ lines of TypeScript source code across 1,906 files. Anthropic confirmed the incident in an official statement.
How did the Claude Code source code get leaked?
Claude Code is built on Bun, which generates source maps by default. A member of the release team failed to add *.map to the .npmignore file or configure the files field in package.json to exclude debugging artifacts. The source map was published to the public npm registry with the package.
What is KAIROS in Claude Code?
KAIROS is an unreleased feature flag found in the leaked source code, referenced over 150 times. It implements an autonomous daemon mode — a background agent that continues running after you close your terminal. Named after the Greek concept of acting at the perfect moment, KAIROS includes features like autoDream memory consolidation, GitHub webhook subscriptions, and 5-minute cron refresh cycles.
Is Claude Code safe to use after the leak?
The leak exposed the application harness code, not model weights, API keys, or customer data. Claude Code itself remains safe to use. However, if you updated via npm on March 31, 2026 between 00:21 and 03:29 UTC, you may have also pulled a trojanized axios package from a separate supply chain attack that happened on the same day. Check your axios version and rotate credentials if affected.
What are Claude Code feature flags?
Claude Code uses 44 feature flags managed through GrowthBook to gate unreleased capabilities. These include KAIROS (autonomous daemon mode), ANTI_DISTILLATION_CC (competitor training data poisoning), buddy_companion (Tamagotchi-style pet system), undercover_mode (stealth contributions), and NATIVE_CLIENT_ATTESTATION (DRM for API calls).
What is Claude Code anti-distillation?
Anti-distillation is a mechanism that injects fake tool definitions into API requests when the ANTI_DISTILLATION_CC flag is enabled. This is designed to corrupt the training data of competitors who may be recording Claude Code's API traffic. A secondary mechanism summarizes reasoning chains with cryptographic signatures to hide the full chain-of-thought.
Did Anthropic issue DMCA takedowns for the leaked code?
Yes. On April 1, 2026, Anthropic filed DMCA takedown notices targeting approximately 8,100 GitHub repositories. The takedown was overbroad, accidentally hitting legitimate forks of Anthropic's own public repositories. Boris Cherny, Head of Claude Code, retracted most notices, scaling back to 1 repository and 96 forks containing the leaked source.
What is the Capybara model mentioned in the Claude Code leak?
Capybara (also referred to as Mythos) is the internal codename for a Claude 4.6 model variant found referenced in the leaked source code. The code itself does not contain model details (those live server-side), but the references suggest Anthropic has been testing a new model internally through Claude Code.
Was the Claude Code leak related to the axios npm attack?
They were separate incidents that happened on the same day by coincidence. The Claude Code leak was caused by a build configuration error, while the axios supply chain attack (axios@1.14.1 and axios@0.30.4) was a deliberate compromise attributed to North Korean threat actor UNC1069. However, Claude Code users who updated on March 31 may have been exposed to both.
