Significant Vulnerability Risks Uncovered in Anthropic's Claude Code
Recently, Check Point Research (CPR), the threat intelligence division of Check Point Software Technologies, disclosed alarming findings regarding serious vulnerabilities in the AI-powered coding assistant tool, Claude Code, developed by Anthropic, a notable player in artificial intelligence. These findings reveal that vulnerabilities identified as "CVE-2025-59536" and "CVE-2026-21852" permit remote code execution and API key theft through malicious repository configuration files, posing significant risks to organizations globally.
Overview of the Vulnerabilities
The vulnerabilities associated with Claude Code allowed malicious actors to execute remote code and steal API keys merely by cloning and opening untrusted projects. This alarming possibility was highlighted when CPR demonstrated how built-in functionalities—like Hooks and Model Context Protocol (MCP) integration—could be exploited to bypass trust controls. As a result, unauthorized shell commands could be executed, and API traffic could be redirected without user consent.
The consequences of stolen API keys extend throughout a company, as just a single compromised key can grant access to shared files and resources, allowing for modifications, deletions, and even incurring unauthorized costs.
CPR's findings signify a broader shift in the threat landscape of AI supply chains. Configuration files that were once benign operational metadata now play an active role in executing code. To address the unique risks presented by AI-driven automation, organizations must deploy state-of-the-art security controls.
Emerging Risks for Developers
As companies rapidly integrate agent-based AI development tools into their workflows, the trust boundary between configuration and execution is increasingly blurred. CPR's investigation revealed that Claude Code harbors vulnerabilities that can be exploited through malicious repository configuration files, leading to the potential for remote code execution and API credential theft. The intrinsic functionalities within Claude Code—like Hooks, MCP integrations, and environmental variables—present opportunities for malicious entities to execute arbitrary shell commands and steal API keys. The mere act of developers cloning and opening untrusted projects can trigger these severe risks without requiring any additional operations beyond the tool's launch.
Three Major Risk Categories
1.
Silent Command Execution via Claude Hooks
Claude Code features automation functions that execute pre-defined actions at session initiation. CPR showcased the potential for exploitation where arbitrary shell commands could be executed during the tool's initialization. A developer merely needs to open a malicious repository for these commands to run silently on their machine.
2.
Bypassing MCP User Consent
Claude Code integrates with external tools through MCP, initializing additional services when a project is opened. The investigation found that even though explicit user consent was a designed safeguard, repository configurations could overwrite this protective functionality, leading to executions without user agreement, thereby expanding the attack surface.
3.
API Key Theft Before Trust Establishment
The potential for API key theft arises when the tooling communicates with Anthropic services using API keys sent with authenticated requests. CPR demonstrated how an attacker could redirect authenticated API traffic to their servers before users confirmed the trust status of the project directory, leading to significant consequences, including external leakage of valid API keys tied to developers.
Why API Key Leakage Matters
Anthropic's API incorporates features known as