{"id":12915,"date":"2026-04-10T22:30:25","date_gmt":"2026-04-10T14:30:25","guid":{"rendered":"https:\/\/ai-stack.ai\/?p=12915"},"modified":"2026-04-10T22:35:25","modified_gmt":"2026-04-10T14:35:25","slug":"claude-code-leak","status":"publish","type":"post","link":"https:\/\/ai-stack.ai\/en\/claude-code-leak","title":{"rendered":"Claude Code Source Code Leak Explained: KAIROS, 44 Hidden Features &amp; the Post-Prompting Era"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><strong>A Packaging Mistake That Exposed the Full Blueprint of AI Agents<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">On March 31, 2026, security researcher Chaofan Shou discovered something unusual in Anthropic&#8217;s npm registry: version 2.1.88 of the @anthropic-ai\/claude-code package shipped with a 59.8 MB source map file (cli.js.map) that exposed the tool&#8217;s entire, unobfuscated source code.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This was no minor snippet. The exposed codebase comprised <strong>512,000 lines of TypeScript<\/strong> across <strong>1,906 files<\/strong>, containing <strong>44 hidden feature flags<\/strong>\u2014at least 20 pointing to fully built but unreleased capabilities. Within hours, the code was mirrored to GitHub, accumulating over 84,000 stars and 82,000 forks. Anthropic pulled the package, but the code had already entered the public domain permanently.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Anthropic<a href=\"https:\/\/www.theregister.com\/2026\/03\/31\/anthropic_claude_code_source_code\/\" target=\"_blank\" rel=\"noopener\"> called it<\/a> &#8220;a release packaging issue caused by human error, not a security breach,&#8221; and confirmed no customer data or credentials were involved. 
But for the broader AI industry, the leak&#8217;s significance goes far beyond the security incident itself\u2014it provided an unprecedented window into the next generation of AI coding agent architecture.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>KAIROS: An Always-On Autonomous AI Agent<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">The most striking discovery in the leaked code is <strong>KAIROS<\/strong> (from the Ancient Greek word meaning &#8220;the opportune moment&#8221;), a name<a href=\"https:\/\/venturebeat.com\/technology\/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know\" target=\"_blank\" rel=\"noopener\"> appearing over 150 times<\/a> across the source. KAIROS represents a fundamental shift in how developers interact with AI tools: from reactive command-response to a 24\/7 background agent that acts on its own initiative.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How KAIROS Works<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Daemon Mode:<\/strong> KAIROS is designed as a persistent background service. The system sends heartbeat signals at regular intervals, asking the agent: &#8220;Is there anything worth doing right now?&#8221;<\/li>\n\n\n\n<li><strong>Proactive Intervention:<\/strong> It monitors the development environment continuously. If a server crashes overnight, KAIROS can fix the code and restart the service. 
When a GitHub PR is updated, it can review changes and report back automatically.<\/li>\n\n\n\n<li><strong>Exclusive Tool Set:<\/strong> KAIROS has access to capabilities unavailable in standard mode, including push notifications (alerting developers directly on mobile devices) and PR subscriptions (actively tracking code repository changes).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>autoDream: Your AI &#8220;Dreams&#8221; While You Sleep<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Within the KAIROS framework lies <strong>autoDream<\/strong>, a memory consolidation mechanism. When the user is idle, the agent runs a background process that merges scattered observations, eliminates logical contradictions, and converts vague insights into verified factual records.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">A notable design principle: the agent is instructed to treat its own memory as a &#8220;hint&#8221; rather than ground truth, requiring verification against the actual codebase before taking action. This &#8220;skeptical memory&#8221; architecture reveals both the current reliability challenges in AI agents and the strategies being developed to address them.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>44 Feature Flags: An Accidentally Published Product Roadmap<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">The leaked code contained<a href=\"https:\/\/www.infoq.com\/news\/2026\/04\/claude-code-source-leak\/\" target=\"_blank\" rel=\"noopener\"> 44 compiled feature flags<\/a>\u2014features that are fully built but gated behind compile-time switches that evaluate to false in production builds. This is effectively a complete product roadmap laid bare. 
Key discoveries include:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Unreleased Capabilities<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/ai-stack.ai\/en\/mcp-ai-agents\"><strong>Multi-Agent Orchestration<\/strong><\/a><strong>:<\/strong> Full logic for multiple AI agents to collaborate on subtasks, with task delegation and result aggregation workflows.<\/li>\n\n\n\n<li><strong>Memory MD:<\/strong> A lightweight, self-healing memory architecture. Instead of stuffing all data into the context window, Memory MD stores only lightweight index pointers and retrieves original content on demand via identifiers\u2014dramatically<a href=\"https:\/\/ai-stack.ai\/en\/how-to-increase-gpu-utilization\"> reducing token consumption and operational costs<\/a>. Its design philosophy aligns closely with<a href=\"https:\/\/ai-stack.ai\/en\/ai-stack-architecture\"> enterprise-grade AI platform resource management<\/a>.<\/li>\n\n\n\n<li><strong>Undercover Mode:<\/strong> Approximately 90 lines of code designed to strip all traces of Anthropic internals when Claude Code is used on non-internal repositories\u2014suppressing mentions of internal codenames (&#8220;Capybara,&#8221; &#8220;Tengu&#8221;), Slack channels, and repo names.<\/li>\n\n\n\n<li><strong>Native Client Attestation:<\/strong> A verification mechanism to prevent third-party tools from impersonating Claude Code to access subscription-tier APIs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Internal Model Codenames Revealed<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">The leak also exposed Anthropic&#8217;s internal model naming:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Capybara<\/strong> = Claude 4.6 variant<\/li>\n\n\n\n<li><strong>Fennec<\/strong> = Opus 4.6<\/li>\n\n\n\n<li><strong>Numbat<\/strong> = An unreleased model still in testing<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Internal comments show Capybara has reached v8, yet still 
struggles with a 29\u201330% false claims rate (a regression from v4&#8217;s 16.7%). Developers also noted an &#8220;assertiveness counterweight&#8221; to prevent the model from becoming overly aggressive during code refactoring. These internal benchmarks offer a rare, candid look at where even frontier models still fall short, an interesting contrast with<a href=\"https:\/\/ai-stack.ai\/en\/gpt-5-2\"> known limitations of competing models<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Perfect Storm: A Concurrent axios Supply Chain Attack<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Compounding the situation, a separate supply chain attack hit the npm registry on the same day. Between 00:21 and 03:29 UTC on March 31, malicious versions of the widely used axios HTTP library (1.14.1 and 0.30.4) were published, embedding a Remote Access Trojan (RAT).<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Because Claude Code depends on axios, any developer who installed or updated via npm during that window may have pulled in the compromised dependency. Attackers subsequently weaponized the leak as a social engineering lure, creating fake &#8220;official leaked&#8221; repositories on GitHub that<a href=\"https:\/\/thehackernews.com\/2026\/04\/claude-code-tleaked-via-npm-packaging.html\" target=\"_blank\" rel=\"noopener\"> distributed Vidar Stealer and GhostSocks proxy malware<\/a>. 
According to<a href=\"https:\/\/www.zscaler.com\/blogs\/security-research\/anthropic-claude-code-leak\" target=\"_blank\" rel=\"noopener\"> Zscaler ThreatLabz&#8217;s analysis<\/a>, these campaigns together formed an end-to-end malicious supply chain.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This coincidence underscores a serious industry concern: when an AI tool&#8217;s full architecture is exposed, attackers gain the precision needed to design targeted attack vectors that circumvent known security defenses.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Broader Industry Implications: Accelerating the Post-Prompting Era<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">The leak&#8217;s greatest value lies not in the security scandal or competitive intelligence, but in how directly it demonstrates where the ceiling for AI coding tools is being pushed.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>From Chat Box to Invisible Infrastructure<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">KAIROS confirms an industry trajectory: AI is evolving from a &#8220;conversational tool&#8221; that waits for user input into &#8220;<a href=\"https:\/\/ai-stack.ai\/en\/what-is-ai-infrastructure\">invisible infrastructure<\/a>&#8221; that runs continuously in the background. 
In this <strong>Post-Prompting Era<\/strong>, large language models recede behind the scenes, becoming the plumbing of development workflows.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This means developers will transition from &#8220;line-by-line code executors&#8221; to &#8220;curators and decision-makers&#8221;\u2014reviewing and steering AI-generated work rather than producing it manually.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Open-Source Acceleration<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Multiple developers described the leaked codebase as &#8220;the most detailed public documentation of how to build a production-grade AI agent harness that exists.&#8221; This will inevitably accelerate open-source replication of similar architectures, narrowing the gap between proprietary tools and community alternatives\u2014much as<a href=\"https:\/\/ai-stack.ai\/en\/deepseek-open-source\"> DeepSeek&#8217;s open-source strategy<\/a> and<a href=\"https:\/\/ai-stack.ai\/en\/gemini3\"> Gemini 3&#8217;s multimodal breakthroughs<\/a> have already demonstrated.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Enterprise Trust and IPO Timeline<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">For Anthropic, two leaks in five days (a model spec document followed by the full source code) challenge its core brand narrative of AI safety and operational rigor. Market analysts suggest this could push its anticipated IPO timeline from late 2026 to 2027\u2014while its biggest rival<a href=\"https:\/\/ai-stack.ai\/en\/openai-ipo-2026\"> OpenAI&#8217;s own IPO path remains equally turbulent<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Final Thoughts: Are You Ready to Hand Over Control?<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">The Claude Code leak is, at its core, an accidental dress rehearsal for the future of AI development. 
KAIROS&#8217;s autonomous agent mode, autoDream&#8217;s memory consolidation, multi-agent orchestration\u2014these are not proof-of-concept experiments. They are compiled, production-grade features waiting to ship.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">When AI transforms from a chat box waiting for your input into an invisible teammate running 24\/7 behind the scenes, every developer and technology leader will need to reassess: which parts of the workflow are worth delegating, and which must remain firmly in human hands?<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">That is the defining question of the Post-Prompting Era.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anthropic accidentally exposed Claude Code&#8217;s full source code via npm in March 2026\u2014revealing KAIROS autonomous agent mode, autoDream memory consolidation, and 44 unreleased features. Full breakdown of the biggest code leak in AI history.<\/p>\n","protected":false},"author":253372376,"featured_media":12916,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[96987604,96987592],"tags":[96988508],"class_list":["post-12915","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-news","category-featured-articles","tag-claude-code"],"blocksy_meta":[],"acf":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/04\/%E6%A8%A1%E5%9E%8BA-33-7a62dc8d.jpg?fit=1920%2C1080&quality=100&ct=202603031250&ssl=1","jetpack_shortlink":"https:\/\/wp.me\/ph344V-3mj","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\
/wp\/v2\/posts\/12915","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/users\/253372376"}],"replies":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/comments?post=12915"}],"version-history":[{"count":1,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/12915\/revisions"}],"predecessor-version":[{"id":12920,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/12915\/revisions\/12920"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media\/12916"}],"wp:attachment":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media?parent=12915"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/categories?post=12915"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/tags?post=12915"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}