Cursor AI Code Editor Exposed to Remote Code Execution Risk
A serious security vulnerability has been discovered in Cursor, the increasingly popular AI-assisted code editor. Researchers warn that a malicious MCP file swap, even after user approval, could allow attackers to remotely execute code on affected systems.

Cybersecurity researchers have just revealed a serious security vulnerability in Cursor, the AI-powered code editor. This flaw could let attackers remotely run code on your system.
The vulnerability, now known as CVE-2025-54136 (and sporting a CVSS score of 7.2), has been cleverly dubbed MCPoison by the Check Point Research team. Why "MCPoison"? Because it messes with how Cursor handles changes to Model Context Protocol (MCP) server configurations.
"An attacker can pull off remote and persistent code execution by tweaking a trusted MCP configuration file, either in a shared GitHub repository or directly on the target machine," Cursor explained in an advisory released last week. Sounds scary, right?
The advisory continues, "Imagine a collaborator accepts a seemingly harmless MCP. The attacker could then secretly swap it out for a malicious command – think something like 'calc.exe' – without triggering any warnings or prompts." Sneaky!
So, what's MCP anyway? It's an open standard created by Anthropic that lets large language models (LLMs) play nice with external tools, data, and services in a standardized way. Anthropic introduced it back in November 2024.
How Does MCPoison Work?
Check Point breaks down CVE-2025-54136 like this: it's all about exploiting the ability to change an MCP configuration after a user has already given it the thumbs up in Cursor. Here's the attack in a nutshell:
- First, the attacker adds a harmless-looking MCP configuration (".cursor/rules/mcp.json") to a shared repository.
- Then, they wait for the victim to grab the code and approve it in Cursor.
- Next, the attacker swaps the innocent MCP configuration for a nasty payload – maybe launching a script or installing a backdoor.
- Boom! Persistent code execution every time the victim fires up Cursor.
The real issue? Once a configuration gets approved, Cursor trusts it forever, even if it gets altered. This could lead to supply chain attacks and data/intellectual property theft. Yikes!
Good news, though! After being notified on July 16, 2025, Cursor squashed the bug in version 1.3 (released in late July 2025). Now, it requires user approval every single time an MCP configuration file is tweaked.
"This flaw highlights a serious weakness in how we trust AI-assisted development environments," Check Point warns. "It raises the stakes for teams using LLMs and automation in their workflows."
This news comes on the heels of Aim Labs, Backslash Security, and HiddenLayer uncovering other vulnerabilities in Cursor that could lead to remote code execution and bypasses of its denylist protections. Those have also been patched in version 1.3.
AI and Security: A Growing Concern
All of this underscores the growing use of AI in business, including LLMs for code generation. This expands the attack surface and introduces new risks like AI supply chain attacks, unsafe code, model poisoning, prompt injection, and data leakage.
Consider this:
- A test of over 100 LLMs writing code in Java, Python, C#, and JavaScript found that 45% of the generated code had security flaws and introduced OWASP Top 10 vulnerabilities. Java was the worst offender, with a 72% failure rate!
- The LegalPwn attack shows how legal disclaimers or terms of service can be used for prompt injection. Malicious instructions can hide within these documents, tricking LLMs into misclassifying malicious code or suggesting unsafe code that can execute a reverse shell.
- The "man-in-the-prompt" attack uses a rogue browser extension to open a new tab, launch an AI chatbot, and inject it with malicious prompts to steal data or mess with the model.
- Fallacy Failure is a jailbreak technique that tricks an LLM into accepting invalid premises, causing it to produce restricted outputs and break its own rules.
- MAS hijacking manipulates the control flow of a multi-agent system (MAS) to run malicious code.
- Poisoned GPT-Generated Unified Format (GGUF) Templates embeds malicious instructions within chat template files to compromise outputs.
- Attackers can target machine learning (ML) training environments to steal data, poison models, or escalate privileges.
- Anthropic discovered subliminal learning, where LLMs learn hidden characteristics during distillation, potentially leading to misalignment and harmful behavior.
"As Large Language Models become deeply embedded in agent workflows, enterprise copilots, and developer tools, the risk posed by these jailbreaks escalates significantly," Dor Sarig of Pillar Security warns.
"These attacks show that AI security needs a new approach," Sarig continues. "They bypass traditional safeguards without relying on architectural flaws or CVEs. The vulnerability is in the very language and reasoning the model is designed to emulate."