Claw Code: The Open-Source Claude Code Rewrite That Hit 100K Stars in Hours
Deep technical analysis of Claw Code — the clean-room Python and Rust rewrite of Claude Code agent harness architecture, born from the March 2026 source code leak. Architecture comparison, legal implications, and honest assessment.
Tags: Claw Code · Open Source · AI Coding Tools · Claude Code · Agent Framework
1,545 words
2026-04-04

On March 31, 2026, a missing .npmignore entry shipped 512,000 lines of unobfuscated TypeScript to the public npm registry. Within hours, the entire internal architecture of Anthropic’s Claude Code — the agent harness connecting LLMs to tools, file systems, and task workflows — was laid bare for the world to study.
Two days later, Claw Code launched as a clean-room Python and Rust rewrite. It became the fastest-growing repository in GitHub history, surpassing 100,000 stars in its first hours. This article examines what happened, what Claw Code actually is, and whether you should care.
The Claude Code Source Leak: What Actually Happened
Security researcher Chaofan Shou discovered that Anthropic had accidentally published a 59.8 MB source map file in their Claude Code npm package. This single debug artifact contained the complete, unobfuscated TypeScript source across roughly 1,906 files.
The leak was not a security breach. Anthropic confirmed it was a packaging error — human error during a release process. But the damage was immediate and irreversible:
- Snapshots spread within minutes. GitHub mirrors were forked over 41,500 times before anyone could react.
- A concurrent npm supply-chain attack on the `axios` package happened hours before, meaning anyone who installed Claude Code between 00:21 and 03:29 UTC on March 31 may have pulled a malicious dependency containing a Remote Access Trojan.
- The architecture was suddenly public knowledge. One agent loop, 40+ discrete tools, on-demand skill loading, context compression, subagent spawning, task dependency graphs, and worktree isolation — all documented in source form.
The community reaction was split. Many argued the CLI should have been open source from the start, noting that Google’s Gemini CLI and OpenAI’s Codex were already open. Others raised serious concerns about intellectual property and the ethics of building on leaked code.
What Is Claw Code?
Claw Code is an open-source AI coding agent framework created by Sigrid Jin (@instructkr), a Korean developer who had attended Claude Code’s first birthday party in San Francisco just weeks earlier. After the leak, Jin built the initial version overnight using oh-my-codex, an orchestration layer on top of OpenAI’s Codex, with parallel code review and persistent execution loops.
The project positions itself as a clean-room rewrite — reimplementing architectural patterns observed in the leaked source without copying proprietary code. Whether this distinction holds up legally remains untested.
Language and Architecture
The codebase splits across two languages:
| Component | Language | Percentage | Purpose |
|---|---|---|---|
| Agent orchestration | Python | 27.1% | LLM integration, command parsing, tool dispatch |
| Runtime execution | Rust | 72.9% | High-performance execution, memory safety |
The Rust layer is organized as a 6-crate workspace with 16 runtime modules, targeting production-grade performance. A dev/rust branch tracks active migration work.
Core Components
19 permission-gated tools covering:
- File I/O and bash execution
- Git operations
- Web scraping and HTTP requests
- LSP integration for code intelligence
- Notebook editing
- Subagent spawning
15 slash commands for session control, model switching, cost tracking, and session compaction.
Multi-LLM support — provider-agnostic design supporting Claude, OpenAI, and local models. This is the most significant architectural departure from Claude Code, which is locked to Anthropic’s models.
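Provider abstraction of this kind usually boils down to a small interface that every backend adapts to. Here is a minimal sketch of that pattern; the names (`Provider`, `EchoProvider`, `get_provider`, `Completion`) are hypothetical illustrations, not Claw Code's actual API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class Completion:
    """Normalized model output: free text plus any requested tool calls."""
    text: str
    tool_calls: list = field(default_factory=list)


class Provider(ABC):
    """Minimal provider interface; each backend adapts its own API to this shape."""
    @abstractmethod
    def complete(self, messages: list, tools: list) -> Completion: ...


class EchoProvider(Provider):
    """Stand-in backend for local testing: echoes the last user message."""
    def complete(self, messages, tools):
        last = next(m["content"] for m in reversed(messages) if m["role"] == "user")
        return Completion(text=f"echo: {last}")


PROVIDERS = {"echo": EchoProvider}


def get_provider(name: str) -> Provider:
    """Look up a backend by name; this is where multi-LLM dispatch happens."""
    return PROVIDERS[name]()


provider = get_provider("echo")
result = provider.complete([{"role": "user", "content": "hi"}], tools=[])
```

Swapping providers then becomes a one-line registry change rather than a rewrite of the agent loop, which is the practical benefit of decoupling the harness from any single vendor API.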
MCP integration with 6 transport types, automatic name normalization, and OAuth support. If you are familiar with MCP from Claude Code, the integration model will feel familiar. For background on MCP, see my MCP security analysis.
Architecture Comparison: Claw Code vs Claude Code
Having worked extensively with Claude Code’s harness architecture and studied the leaked source, here is how the two systems compare at a structural level.
The Agent Loop
Both systems follow the same fundamental pattern: a central loop that takes user input, constructs prompts with context, calls an LLM, parses tool-use responses, executes tools, and feeds results back into the next iteration.
Claude Code’s implementation is a monolithic TypeScript bundle with tight coupling between the agent loop and Anthropic’s API. Claw Code decomposes this into a Python orchestration layer backed by a Rust runtime, with explicit provider abstraction.
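The loop described above can be sketched in a few lines. This is a toy illustration of the pattern, not either project's actual code; the provider here is a fake that requests one tool call and then answers with its result:

```python
def agent_loop(provider, tools, user_input, max_turns=8):
    """One pass of the harness pattern: prompt -> model -> tool calls -> feed results back."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = provider(messages)               # call the LLM
        if not reply.get("tool_calls"):          # plain answer: we're done
            return reply["content"]
        for call in reply["tool_calls"]:         # execute each requested tool
            output = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"], "content": output})
    return "max turns reached"


# Toy provider: asks for one tool call on the first turn, then answers with its result.
state = {"called": False}

def toy_provider(messages):
    if not state["called"]:
        state["called"] = True
        return {"tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}}]}
    return {"content": f"result is {messages[-1]['content']}", "tool_calls": []}


answer = agent_loop(toy_provider, {"add": lambda a, b: str(a + b)}, "what is 2+3?")
```

Everything else in a harness (permissions, compaction, subagents) hangs off this loop, which is why the pattern was quick to reimplement once the structure was public.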
Tool System
Claude Code ships roughly 40 tools. Claw Code currently implements 19 with a similar permission model — three access modes (allow, deny, ask) with per-tool policy configuration. The reduced tool count is partly by design (avoiding rarely-used tools) and partly because the project is still maturing.
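A three-mode permission gate like the one described can be expressed compactly. This is a hedged sketch of the general allow/deny/ask idea, with hypothetical names (`PermissionGate`, `Mode`), not Claw Code's real policy engine:

```python
from enum import Enum


class Mode(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ASK = "ask"


class PermissionGate:
    """Per-tool policy lookup with a default mode, mirroring an allow/deny/ask scheme."""
    def __init__(self, policy: dict, default: Mode = Mode.ASK):
        self.policy = policy
        self.default = default

    def check(self, tool: str, prompt_user=lambda t: False) -> bool:
        mode = self.policy.get(tool, self.default)
        if mode is Mode.ALLOW:
            return True
        if mode is Mode.DENY:
            return False
        return prompt_user(tool)   # ASK: defer to an interactive confirmation


gate = PermissionGate({"read_file": Mode.ALLOW, "bash": Mode.ASK, "rm": Mode.DENY})
```

Defaulting unknown tools to ASK rather than ALLOW is the conservative choice: a newly added tool cannot silently run until the user or policy explicitly permits it.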
Context Management
Both systems implement context window management through transcript compaction — summarizing older conversation turns to stay within token limits. Claude Code’s implementation is more sophisticated, with multi-layer memory and persistent knowledge graphs. Claw Code has basic session persistence but lacks the depth of Claude Code’s memory strategy.
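Transcript compaction reduces to one decision: when the history exceeds a token budget, summarize the old turns and keep the recent ones verbatim. A minimal sketch, assuming a crude 4-characters-per-token estimate and a caller-supplied `summarize` function (in a real harness that would itself be an LLM call):

```python
def compact_transcript(messages, summarize, max_tokens=4000, keep_recent=4):
    """Replace older turns with a summary once the rough token budget is exceeded."""
    def tokens(msgs):
        return sum(len(m["content"]) // 4 for m in msgs)  # crude estimate

    if tokens(messages) <= max_tokens:
        return messages                      # under budget: nothing to do
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old)                 # stand-in for an LLM summarization call
    return [{"role": "system", "content": f"Summary of earlier turns: {summary}"}] + recent


# One oversized old turn plus four recent turns trips the budget.
msgs = [{"role": "user", "content": "x" * 20000}] + [
    {"role": "assistant", "content": f"turn {i}"} for i in range(4)
]
compacted = compact_transcript(msgs, summarize=lambda old: f"{len(old)} earlier message(s)")
```

The "multi-layer memory" the leak revealed in Claude Code is essentially this idea applied recursively, with summaries of summaries persisted across sessions.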
Multi-Agent Orchestration
Claw Code calls this “swarms” — parallel subtask execution where a primary agent spawns child agents for independent work. Claude Code has a similar concept with subagents, though the spawning model differs. Claude Code’s subagent system is more battle-tested in production workflows.
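The fan-out/fan-in shape of a swarm can be sketched with a thread pool. This is an illustrative stand-in (the `run_agent` callable here is a lambda, not a real child agent), not Claw Code's actual spawning mechanism:

```python
from concurrent.futures import ThreadPoolExecutor


def run_swarm(subtasks, run_agent, max_workers=4):
    """Fan independent subtasks out to child agents and gather results in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_agent, subtasks))


results = run_swarm(
    ["lint src/", "run unit tests", "update docs"],
    run_agent=lambda task: f"done: {task}",   # stand-in for spawning a real child agent
)
```

The hard parts in production (which this sketch omits) are isolating each child's file-system state, the role Claude Code's worktree isolation plays, and merging results when subtasks turn out not to be independent after all.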
Where Claw Code Wins
- Full source visibility. You can read, modify, and understand every line.
- Multi-LLM support. Not locked to one provider.
- Rust performance layer. Potentially faster for I/O-heavy operations.
- MIT license. Use it however you want.
Where Claude Code Wins
- Polish and stability. Claude Code has been in production for over a year with a dedicated team.
- Deep Anthropic integration. Optimized prompt engineering for Claude models.
- Ecosystem maturity. Skills, hooks, CLAUDE.md conventions, worktrees — all battle-tested.
- Official support. Bug reports get fixed. Breaking changes get migration guides.
The Legal and Ethical Question
This is where things get uncomfortable. Claw Code claims clean-room status, but the reality is nuanced.
The clean-room defense requires that developers implementing the new system have never seen the proprietary source code. Given that the leaked source was publicly available and widely analyzed, and that Claw Code was built in direct response to seeing that architecture, the clean-room claim is on shaky ground.
What the audits claim: the project states that code audits confirm no Anthropic proprietary code or model weights are included. This addresses the narrowest legal question (direct copying) but not the broader one: was the architecture itself protectable trade secret information?
The practical reality:
- Copyright protects expression, not ideas. Reimplementing an architecture pattern in a different language is generally permissible.
- Trade secrets lose protection once publicly disclosed, even accidentally. Anthropic’s leak may have inadvertently waived trade secret claims.
- No legal action has been taken as of April 2026. Anthropic has not issued DMCA takedowns against Claw Code or the mirror repositories.
My honest assessment: using Claw Code for personal projects and research carries low risk. Building a commercial product on it carries higher risk until the legal landscape clarifies. Anthropic’s silence is not the same as approval.
Should You Use Claw Code?
This depends on who you are.
Use Claw Code if you:
- Are a researcher studying agent harness architectures
- Are building a custom agent framework for a specific domain
- Need multi-LLM provider support
- Want to learn how production AI coding agents work internally
- Contribute to open-source AI tooling
Stick with Claude Code if you are:
- A professional developer who needs reliable daily tooling
- Working in a corporate environment with legal compliance requirements
- Invested in the Claude Code ecosystem (skills, hooks, CLAUDE.md)
- Prioritizing stability over customizability
For context on how Claude Code fits into the broader landscape, see my AI coding agents comparison.
What This Means for the AI Tooling Ecosystem
The Claw Code phenomenon reveals something important: the agent harness layer is not the moat. The models are.
Google’s Gemini CLI is open source. OpenAI’s Codex is open source. Now Claude Code’s architecture has been independently reimplemented. The pattern is clear — the shell connecting an LLM to your file system and tools is becoming a commodity.
The real value lies in:
- Model quality — how well the LLM reasons about code
- Prompt engineering — how effectively the harness leverages the model
- Ecosystem integration — skills, plugins, MCP servers, community tooling
- Reliability — edge case handling, error recovery, session management
Anthropic’s competitive advantage was never the TypeScript CLI wrapper. It was Claude’s ability to reason about complex codebases. That has not changed.
Getting Started with Claw Code
If you want to explore Claw Code:
```bash
git clone https://github.com/instructkr/claw-code.git
cd claw-code
pip install -r requirements.txt
python src/main.py
```
The repository includes a tests/ directory for validation and CLI utilities for subsystem inspection and parity audits against Claude Code’s feature set.
Important caveats:
- The project is under active development. Expect breaking changes.
- The Rust migration is ongoing. Some modules are Python-only.
- Community documentation is sparse compared to Claude Code.
- MCP server compatibility may vary from Claude Code’s implementation.
Conclusion
Claw Code is a technically impressive project born from unusual circumstances. It demonstrates that the agent harness pattern — the loop connecting LLMs to tools — is well-understood enough to be reimplemented in days. But being technically capable and being production-ready are different things.
For most developers, Claude Code remains the better choice for daily work. But Claw Code serves an important role as a reference implementation, a research platform, and a signal that the AI coding tool landscape is moving toward openness. The 100K+ stars are not just hype — they reflect genuine demand for transparency in how AI agents operate on our codebases.
The question is no longer whether agent harnesses should be open. It is how fast the industry will get there.