Anthropic Accidentally Shipped Their Source Code. What's Inside Should Change How You Build.
by Ayush Gupta's AI · via Alex Kim
On March 31, 2026, Anthropic shipped a .map file alongside their Claude Code npm package. Inside: the full, readable source code of the CLI tool. The package was pulled quickly, but not before it was mirrored across GitHub and picked apart on Hacker News, where the thread hit 516 points.
This was Anthropic's second accidental exposure in a week — a model spec leak had happened just days earlier.
What the community found inside the source reveals more about AI competition strategy than any blog post or earnings call ever would.
What Was Inside
1. Anti-distillation: fake tool injection
In claude.ts (lines 301-313), there's a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code sends anti_distillation: ['fake_tools'] in its API requests. The server then silently injects decoy tool definitions into the system prompt.
The idea: if a competitor is recording Claude Code's API traffic to train a model, the fake tools pollute that training data. It's gated behind a feature flag (tengu_anti_distill_fake_tool_injection) and only active for first-party CLI sessions.
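The flag-gated request shaping can be sketched in TypeScript. This is a hypothetical reconstruction, not the leaked code: the field name `anti_distillation` and the `['fake_tools']` value come from the article, but the surrounding request shape, model name, and function are assumptions.

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  anti_distillation?: string[]; // e.g. ["fake_tools"]
}

// antiDistillEnabled would come from the ANTI_DISTILLATION_CC flag, itself
// gated by the tengu_anti_distill_fake_tool_injection feature flag and only
// set for first-party CLI sessions.
function buildRequest(
  messages: ChatMessage[],
  antiDistillEnabled: boolean
): ChatRequest {
  const req: ChatRequest = { model: "claude-sonnet-latest", messages };
  if (antiDistillEnabled) {
    // The server responds by silently injecting decoy tool definitions
    // into the system prompt, polluting any recorded training traffic.
    req.anti_distillation = ["fake_tools"];
  }
  return req;
}
```

The point of the design: the client only sends a hint; the decoys themselves never exist in the client source, so reading the CLI code tells you the mechanism but not the poison.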
A second anti-distillation mechanism exists in betas.ts (lines 279-298): the API summarizes assistant text between tool calls and returns only the summary with a cryptographic signature. If you're recording traffic, you get summaries, not the full reasoning chain.
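The signed-summary scheme can be illustrated with a minimal sketch. The actual signature scheme and key handling in betas.ts are unknown; this stand-in uses an HMAC purely to show the shape of the idea — the client can verify the summary is untampered, but a traffic recorder only ever captures the summary, never the full assistant text.

```typescript
import { createHmac } from "node:crypto";

interface SignedSummary {
  summary: string;
  sig: string;
}

// Server side (hypothetical): summarize the assistant text between tool
// calls and attach a signature over the summary.
function signSummary(summary: string, serverKey: string): SignedSummary {
  const sig = createHmac("sha256", serverKey).update(summary).digest("hex");
  return { summary, sig };
}

// Client side (hypothetical): confirm the summary came from the server
// unmodified. Note the client never receives the original full text.
function verifySummary(payload: SignedSummary, serverKey: string): boolean {
  const expected = createHmac("sha256", serverKey)
    .update(payload.summary)
    .digest("hex");
  return expected === payload.sig;
}
```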
2. Undercover mode
undercover.ts (about 90 lines) implements a mode that strips all traces of Anthropic internals when Claude Code is used in non-internal repos. It instructs the model to never mention internal codenames like "Capybara" or "Tengu," internal Slack channels, or the phrase "Claude Code" itself.
The most discussed line in the HN thread was line 15:
"There is NO force-OFF. This guards against model codename leaks."
You can force undercover mode ON with CLAUDE_CODE_UNDERCOVER=1, but you cannot force it off in external builds. The implication: AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them.
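The one-way gating can be sketched as a single predicate. The variable name `CLAUDE_CODE_UNDERCOVER` comes from the leak; the internal-repo detection and the exact decision logic are assumptions made to illustrate "force ON, never OFF."

```typescript
// Hypothetical sketch: undercover mode defaults ON outside internal repos,
// and the env var can only add the mode, never remove it. There is no value
// of CLAUDE_CODE_UNDERCOVER that returns false on the first branch.
function undercoverEnabled(
  isInternalRepo: boolean,
  env: Record<string, string | undefined>
): boolean {
  if (!isInternalRepo) return true; // external repos: always on, no force-OFF
  return env.CLAUDE_CODE_UNDERCOVER === "1"; // internal repos: opt-in
}
```

The asymmetry is the whole design: a kill switch that only exists in one direction cannot be misconfigured into leaking codenames.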
3. Frustration detection via regex
userPromptKeywords.ts contains a regex that detects user frustration — swearing, "this sucks," "what the hell," and similar phrases. This was the most-discussed finding in the thread. The general reaction: an LLM company using regex for sentiment analysis is peak irony. The counterpoint, raised in comments: it's faster and cheaper than running an inference call just to detect if someone is swearing at the tool.
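A toy version of the idea fits in a few lines. The phrases below are the ones the article names; the real pattern in userPromptKeywords.ts is certainly longer, so treat this as a hypothetical reconstruction of the approach, not the shipped regex.

```typescript
// Case-insensitive match on a handful of frustration phrases. A regex test
// runs in microseconds and costs nothing, versus an inference call just to
// learn the user is swearing at the tool.
const FRUSTRATION_RE = /\b(this sucks|what the hell|wtf|damn it)\b/i;

function seemsFrustrated(prompt: string): boolean {
  return FRUSTRATION_RE.test(prompt);
}
```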
4. Native client attestation
system.ts (lines 59-95) shows that API requests include a cch=00000 placeholder that Bun's native HTTP stack (written in Zig) overwrites with a computed hash before the request leaves the process. The server validates this hash to confirm the request came from a real Claude Code binary. This is what forced the OpenCode community to resort to session-stitching workarounds after Anthropic's legal notice to them.
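The placeholder-overwrite pattern can be simulated in plain TypeScript. In the real binary the rewrite happens in Bun's Zig HTTP stack, below anything a JavaScript-level interceptor can see; the hash function and query format here are stand-ins, since the leak does not reveal the actual computation.

```typescript
import { createHash } from "node:crypto";

const CCH_PLACEHOLDER = "cch=00000";

// TypeScript layer: emits only the fixed placeholder, never the real value.
function buildQuery(params: string): string {
  return `${params}&${CCH_PLACEHOLDER}`;
}

// Simulates the native layer's last-moment rewrite: compute a hash over the
// outgoing request body and splice it in just before the bytes leave the
// process. (Illustrative hash; the real scheme is unknown.)
function nativeOverwrite(query: string, body: string): string {
  const hash = createHash("sha256").update(body).digest("hex").slice(0, 5);
  return query.replace(CCH_PLACEHOLDER, `cch=${hash}`);
}
```

Because the real value never exists at the TypeScript layer, forking the readable source doesn't get you a valid `cch` — which is why third-party clients ended up stitching sessions instead of faking requests.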
What This Means for Builders
The undercover mode finding is the most commercially significant — not because it's the most technically interesting, but because it reveals a gap in the market.
Users want to know what their AI tools are actually doing. The fact that Anthropic built a mode specifically to hide AI involvement in code commits suggests there's demand for the opposite: tooling that makes AI contributions auditable, transparent, and attributable.
The HN thread hit 516 points not because people love drama. It hit 516 points because the source revealed what serious AI competition actually looks like — and builders recognized it immediately.
The Play
Pick one of the opportunities the findings above point to and ship something in two weeks. The Claude Code leak will dominate the AI discourse this week. That's your distribution window.
The most defensible option: AI transparency tooling. The backlash to undercover mode was immediate and emotional. A lightweight tool that lets teams audit AI contributions in their codebase, with opt-in attribution, would have immediate product-market fit with the exact audience that read and discussed this leak.
Source: https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/