Anthropic has issued a takedown notice to a developer who, according to reports, tried to reverse-engineer the company’s agentic coding tool, Claude Code.
While a rivalry has developed between the two main agentic coding tools on the market, Anthropic’s Claude Code and OpenAI’s Codex CLI, OpenAI’s tool appears to be earning more developer goodwill than Anthropic’s.
According to reports, the reason for the notice is that Claude Code is distributed under a more restrictive usage license than OpenAI’s Codex CLI.
Anthropic issues takedown notice to stop Claude Code reverse-engineering
Claude Code and Codex CLI serve much the same purpose, letting developers harness AI models running in the cloud to complete various coding tasks. The two companies released their products within months of each other, racing to capture valuable developer mindshare.
The source code for Codex CLI is available under an Apache 2.0 license, which permits distribution and commercial use. Claude Code, by contrast, is tied to Anthropic’s commercial license, which limits how developers can modify it without approval from the company. According to reports, Anthropic also obfuscated the source code for Claude Code, meaning that the source isn’t readily readable.
In a recent incident, a developer de-obfuscated the source code and released it on GitHub, leading Anthropic to file a DMCA takedown complaint to have the code removed. The move angered developers, many of whom took to social media to voice their displeasure. “LOL @AnthropicAI DMCA’d the repo that contained the claude code source via the sourcemaps in the original release. So pathetic,” one user said.
Developers show displeasure as OpenAI takes the win
Developers noted that the complaint compares unfavorably with OpenAI’s handling of Codex CLI. Within a week of releasing the tool, OpenAI merged dozens of developer suggestions into its codebase, including one that lets Codex CLI tap models from rival providers, Anthropic among them.
According to reports, the lab may be taking precautions because Claude Code is still in beta and somewhat buggy. Reports on GitHub last month mentioned that Claude Code’s auto-update function contained buggy commands that rendered some workstations unstable or broken, getting the coding tool’s launch off to a rocky start. The reports clarified that when the tool was installed at the “root” or “superuser” level, permissions that allow programs to make operating-system-level changes, the buggy commands could let the app modify restricted file directories and, in the worst case, “brick” systems.
Anthropic later announced that it had removed the problematic commands from Claude Code and added a link in the program directing users to a troubleshooting guide. It remains possible that Anthropic will release the source code under a permissive license in the future; companies have many reasons for obfuscating code, with security concerns chief among them.
Meanwhile, the episode looks like a win for OpenAI, which has spent the last few years moving away from open-source releases in favor of proprietary, locked-down products, a stance that has not gone down well with Tesla CEO Elon Musk. The Codex CLI release may be emblematic of a bigger shift in the company’s affairs, with OpenAI CEO Sam Altman stating at the beginning of the year that he feels the company has been on the wrong side of history when it comes to open source.