theSecNews

Severe Zero‑Click Vulnerability in Claude Desktop Extensions Exposes Users to Remote Code Execution

4 min read
The Sec News Team
Severe Zero‑Click Vulnerability in Claude Desktop Extensions Exposes Users to Remote Code Execution

Zero-Click Exploit in Claude Desktop Extensions Raises Serious Security Concerns Researchers from LayerX have revealed a critical vulnerability affecting Cl...

Zero-Click Exploit in Claude Desktop Extensions Raises Serious Security Concerns

Researchers from LayerX have revealed a critical vulnerability affecting Claude Desktop Extensions, demonstrating how an innocuous-looking Google Calendar event could be weaponised to execute arbitrary code on a user’s device. The flaw, which has been rated a maximum CVSS score of 10/10, allows remote code execution (RCE) without any interaction from the user.

Why This Discovery Matters

The finding underscores a growing challenge in the AI ecosystem: integrating AI assistants with local system capabilities without exposing users to high-impact security compromises. As models become more autonomous and are granted direct access to operating system resources, the risk of exploitation escalates dramatically.

In this case, the vulnerability emerges from how Claude uses Desktop Extensions (DXT), which rely on local MCP (Model Context Protocol) servers. These servers are afforded full system permissions and lack sandboxing, meaning they operate without the defensive isolation typically seen in browser extensions or mobile applications.

Understanding the Root of the Vulnerability

A Risky Chain of Autonomous Decisions

Claude Desktop Extensions serve as bridges between the language model and the local operating system. Users may install these extensions in a similar manner to browser add-ons, but unlike browser plug-ins, DXT extensions execute with unrestricted system access.

Claude autonomously chooses which installed MCP servers to invoke when fulfilling a user's prompt. Crucially, there are no built-in safeguards preventing the model from passing data from a low-risk integration — such as Google Calendar — to a high-risk tool capable of executing system commands.

This design flaw means even benign data, if crafted maliciously by an attacker, can trigger dangerous local actions.

How a Simple Calendar Event Became a Trigger

LayerX researchers demonstrated the exploit by creating an event titled "Task Management" in Google Calendar. The event contained text instructing the system to download a repository, store it locally, and run a make file. With no obfuscation and no trickery on the user's part, Claude interpreted the calendar event as a legitimate action request and executed the instructions via a connected MCP server.

The result was the execution of arbitrary code, such as launching the system’s calculator. While harmless in the demo, the same mechanism could easily retrieve sensitive files, extract stored credentials, or run malware.

Broader Implications for AI Security

This vulnerability highlights an emerging class of AI-driven security risks: autonomous tool orchestration attacks. As language models are increasingly empowered to manage files, perform automation tasks, or access personal data, attackers may embed instructions inside seemingly ordinary content. Calendar entries, emails, documents, or chat messages could all serve as vectors.

The researchers noted that in a secure design, data exchange between low-risk integrations and high-risk system tools should require explicit user approval. Currently, this verification layer is absent.

Mitigation and Best Practices

Some relief could come from requiring users to confirm each MCP “connection”, such as authorising calendar access to a specific folder while denying system command privileges. However, this only reduces the attack surface rather than solving the underlying architectural issue.

Users are strongly advised to limit the permissions granted to AI tools. This includes using dedicated accounts, restricting the resources accessible to AI assistants, and avoiding enabling command execution features unless absolutely necessary.

Conclusion

The discovery by LayerX serves as a timely reminder that the convenience of AI-driven automation must be carefully balanced with robust security controls. As models gain more operational autonomy, design oversights such as those found in Claude Desktop Extensions could have severe consequences. Ensuring transparency, user consent, and stricter isolation mechanisms will be essential as AI continues to integrate more deeply with personal systems.

What do you think? Let me know in the comments.

Share this Article

Spread cybersecurity knowledge

The Sec News Team
Team

The Sec News Team

Our dedicated team of security experts.

Follow me on:

You might be interested in...

Discussion

0 Comments
Last activity: N/A
Live Feed
Sending...