A developer installs an AI plugin so their IDE can talk to Jira. That plugin quietly opens a port on their machine, binds it to every network interface, and waits with no authentication required. Anyone on the same network can now write any file anywhere on that laptop, read SSH keys, or use the machine as a launching pad into the corporate network. That was not a hypothetical. It was the documented reality of mcp-atlassian for every version below 0.17.0.
The vulnerabilities are tracked as CVE-2026-27825 (CVSS 9.1) and CVE-2026-27826 (CVSS 8.2). They were discovered by researchers at Pluto Security, reported to the project maintainer, and patched on February 24, 2026. Public technical details followed on February 27, 2026. Arctic Wolf confirmed that, as of publication, no active exploitation had been observed in the wild, a statement consistent with Pluto Security's own disclosure note, but both noted a meaningful risk of threat actor adoption given the unauthenticated nature of the flaws, the high-value Atlassian attack surface, and the availability of a public proof-of-concept. Following Pluto Security's disclosure, the Docker Security team updated the mcp-atlassian container on Docker Hub and MCP Hub to the patched version. Pluto Security also released an open-source remediation script on GitHub that can scan for vulnerable installations across pip, uv-tool, and virtualenv environments and optionally auto-upgrade them.
This is the story of how two missing validation functions inside a well-intentioned open-source project exposed millions of developer machines to a network-adjacent attacker who needed to send exactly two HTTP requests.
What Is mcp-atlassian and Why Does It Matter
Model Context Protocol (MCP) is an open standard introduced by Anthropic in late 2024. Its purpose is straightforward: give AI assistants a standardized way to connect with external tools and data sources. Instead of every developer writing a custom integration, you run an MCP server that exposes capabilities as callable tools, and an AI model like Claude, Cursor, or GitHub Copilot can invoke them directly. Within months of launch, MCP had been adopted by Microsoft, OpenAI, Google, Amazon, and dozens of development frameworks. It became the connective tissue between AI assistants and real-world enterprise systems.
mcp-atlassian, maintained on GitHub by the developer known as sooperset, is the most widely used open-source MCP server for Atlassian products. Because Atlassian had not historically provided its own local MCP server, developers reached for this community option. The project accumulated over 4,400 GitHub stars and, critically, more than four million downloads. It exposes over 40 tools covering Jira and Confluence: searching issues, reading pages, creating tickets, uploading attachments, downloading files, and managing sprints. For a developer who spends their day in Jira and Confluence, it is a genuine productivity accelerant. It is also, in versions below 0.17.0, a serious liability.
Both vulnerabilities affect mcp-atlassian when running the HTTP transport modes (streamable-http or sse). In these modes, the server binds to 0.0.0.0 by default, meaning it is reachable from any host on the local network, not just localhost. The stdio transport mode is not exposed to the network and is not directly affected. Anyone running an HTTP transport below version 0.17.0 should treat this as an active risk.
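The practical difference between the two bind addresses is easy to see with a plain socket. This standalone sketch (not mcp-atlassian's code) shows that a listener bound to 0.0.0.0 is reachable on every interface, while one bound to 127.0.0.1 is confined to the local machine:

```python
import socket

# A listener bound to 0.0.0.0 accepts connections arriving on ANY
# network interface -- Ethernet, WiFi, VPN -- not just loopback.
wide = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wide.bind(("0.0.0.0", 0))  # port 0 = pick any free port
print("reachable from the whole network:", wide.getsockname())

# A listener bound to 127.0.0.1 is only reachable from this machine.
local = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local.bind(("127.0.0.1", 0))
print("loopback only:", local.getsockname())

wide.close()
local.close()
```

This is why the same server code can be harmless on a developer's loopback and an open door on shared WiFi: nothing about the application changes, only the interface it listens on.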
Understanding why this matters requires understanding the trust model MCP creates. When you connect an AI agent to Jira through an MCP server, you establish a chain: the agent calls the MCP client, the MCP client calls the MCP server, and the MCP server calls Atlassian APIs and the local filesystem. Each link in that chain is a trust boundary. Pluto Security researchers described this failure in their February 27, 2026 writeup, naming the flaw set "MCPwnfluence." In their framing, MCP servers act as privileged bridges between agent workflows and high-impact capabilities, and MCPwnfluence exploited two fundamental boundary failures: unvalidated outbound URL control, and unconstrained filesystem writes. — Pluto Security, MCPwnfluence writeup, February 27, 2026
That framing is important. These are not obscure edge-case bugs. They are fundamental failures at the trust boundaries the protocol was designed to protect.
CVE-2026-27825: The Arbitrary File Write
The first vulnerability lives in the Confluence attachment download functionality. The tools download_attachment and download_content_attachments both accept a destination path supplied by the caller. In the vulnerable code, that path was used directly with no restrictions:
# v0.16.1 - confluence/attachments.py
def download_attachment(self, url: str, target_path: str) -> bool:
    try:
        if not os.path.isabs(target_path):
            target_path = os.path.abspath(target_path)
        # NO PATH TRAVERSAL CHECK - writes to ANY path on the filesystem
        os.makedirs(os.path.dirname(target_path), exist_ok=True)
        response = self.confluence._session.get(url, stream=True)
        response.raise_for_status()
        with open(target_path, "wb") as f:  # ARBITRARY FILE WRITE
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)
There is no base directory restriction. There is no traversal protection. There are no symlink safeguards. If the process can write to a path, it will write to that path, with whatever content it downloads from the provided URL. The tool documentation for the project even explicitly stated that the file path parameter accepted absolute paths like /home/user/document.pdf, advertising the very behavior that made exploitation trivial.
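The gap is easy to demonstrate in isolation. os.path.abspath only normalizes a path; it never confines it to a base directory, so a relative path salted with ../ segments lands wherever the caller wants (standalone sketch, not the project's code):

```python
import os

# abspath normalizes ".." segments but imposes no containment:
# a caller-supplied relative path climbs out of any intended folder.
supplied = "downloads/../../../../home/victim/.ssh/authorized_keys"
resolved = os.path.abspath(supplied)

print(resolved)
# The "downloads" component is gone after normalization -- the write
# lands in the victim's ~/.ssh, exactly what v0.16.1 allowed.
assert "downloads" not in resolved
assert resolved.endswith(".ssh/authorized_keys")
```

A containment check has to compare the fully resolved path against a base directory, which is precisely what the v0.17.0 fix adds.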
The attack is uncomplicated. An attacker on the same network sends a request to the unauthenticated MCP endpoint and calls confluence_download_attachment with a download_path pointing at a sensitive target. Depending on the permissions of the running process, options include:
- Writing to ~/.bashrc or ~/.zshrc to inject commands that execute the next time the legitimate user opens a terminal
- Writing to ~/.ssh/authorized_keys to add an attacker-controlled public key and gain persistent SSH access
- Overwriting cron jobs, startup scripts, or container entrypoints, depending on the deployment environment
- Writing a malicious cron job that opens a reverse shell, an attack launchable from a shared co-working space or airport WiFi network
The situation is compounded by the fact that mcp-atlassian frequently runs inside Docker containers and development environments where the process carries elevated permissions. Pluto Security noted the severity directly: on an MCP server running as root inside a container, the combination of no path restriction and elevated privileges is not merely dangerous but critical.
What makes this particularly uncomfortable is the silent data exfiltration dimension that existed alongside the write vulnerability. The upload tools — confluence_upload_attachment and jira_update_issue with its attachments parameter — accepted a file_path pointing to any file on the filesystem, read it in binary mode, and uploaded it to the configured Atlassian instance. An attacker could request that the server read and exfiltrate ~/.ssh/id_rsa, ~/.aws/credentials, ~/.kube/config (Kubernetes cluster credentials), or /etc/shadow with no additional exploitation required. The server would comply. No files modified. No processes spawned. No logs written on the victim's machine. The only trace was an outbound HTTP request to what appeared to be a legitimate Atlassian URL.
CVE-2026-27826: SSRF and the Full RCE Chain
The second vulnerability operates at the network layer rather than the filesystem. The mcp-atlassian middleware accepts two HTTP headers, X-Atlassian-Jira-Url and X-Atlassian-Confluence-Url, and uses the values those headers contain to construct Atlassian client objects for the duration of the request. No validation of those URLs was performed:
# v0.16.1 - main.py, _process_authentication_headers()
if jira_url_str:
    service_headers["X-Atlassian-Jira-Url"] = jira_url_str  # NO VALIDATION
if confluence_url_str:
    service_headers["X-Atlassian-Confluence-Url"] = confluence_url_str  # NO VALIDATION
scope["state"]["atlassian_service_headers"] = service_headers
When the server subsequently invoked any Jira tool, it would make outbound HTTP requests to whatever URL the attacker had provided. The attacker does not steal the victim's Atlassian credentials in this flow. They supply their own token. The goal is to redirect the server's outbound behavior: to turn the MCP server into a Server-Side Request Forgery (SSRF) proxy operating from the victim's network position.
In a cloud environment, this is especially damaging. An MCP server running inside an AWS EC2 instance with an IAM role attached can be directed to query the instance metadata service at 169.254.169.254. That endpoint returns temporary AWS access keys, secret keys, and session tokens. The attacker walks away with credentials that can be used to access cloud resources, create new accounts, exfiltrate stored data, or pivot deeper into the environment. Research published by BlueRock on similar SSRF patterns in cloud-hosted MCP servers confirmed that this metadata endpoint exposure is not theoretical. IMDSv1 — the older, less protected version of the AWS instance metadata service — was verified as exploitable in exactly this scenario. As BlueRock principal solutions engineer David Onwukwe noted in analysis covered by Dark Reading, the realistic concern is that the substantial majority of EC2 instances in production remain on IMDSv1 rather than the more protective IMDSv2, which requires a request header that SSRF attacks cannot easily supply. — Dark Reading, January 20, 2026
CVE-2026-27826 also enables internal network scanning from the victim's host. By providing different IP addresses in the header, an attacker can probe internal services, map the corporate network topology, and reach resources that are firewalled from the outside world but reachable from the developer's machine.
Where CVE-2026-27826 transforms from a standalone SSRF into something far more dangerous is in combination with CVE-2026-27825. Pluto Security documented the full unauthenticated RCE chain:
- The attacker scans the local network and finds an mcp-atlassian HTTP endpoint (default port, no authentication required to connect).
- The attacker initializes an MCP session with a standard POST /mcp request.
- Using CVE-2026-27826, the attacker overrides the Atlassian destination URL to point at a machine they control, so that their machine receives the server's outbound requests.
- Using CVE-2026-27825, the attacker calls confluence_download_attachment and provides a download path pointing at ~/.ssh/authorized_keys or ~/.bashrc. The content written is whatever the attacker's server returns in the response, which they control entirely.
- Remote code execution is achieved on the victim machine, with no credentials required at any stage.
In Pluto Security's own words, anyone on the same local network can run code on the victim's machine as root by sending two HTTP requests, with no authentication required. — Pluto Security, MCPwnfluence, February 27, 2026
That sentence should sit with every developer who has mcp-atlassian running in HTTP transport mode on an unpatched version.
The Questions Most Articles Are Not Asking
The security community has done a solid job documenting the technical mechanics of CVE-2026-27825 and CVE-2026-27826. What is less examined is the set of questions that live one layer above the exploit code. These matter more for the long arc of how the MCP ecosystem develops.
Why did this tool accumulate four million downloads before anyone audited it?
The mcp-atlassian project filled a real gap. Atlassian had not published an official local MCP server, developers needed a way to connect their AI assistants to Jira and Confluence, and sooperset built one that worked. The trust extended to that project was not irrational. It was listed on official registries, featured in tutorials, and recommended in developer forums. But none of that constitutes a security review. The lesson here is about how the community assigns trust. Download counts, GitHub stars, and presence in a registry are signals of popularity, not security posture. The MCP ecosystem is replicating the same mistake made in the early npm era: conflating adoption with vetting.
Was this vulnerability discoverable through automated tooling?
Yes, and that makes it worse. Both vulnerabilities are textbook examples of issues that static analysis and SAST tooling can flag. A path traversal check missing from a file write function, and unvalidated user-supplied input flowing into an outbound HTTP request, are patterns that tools like Semgrep, Bandit, and CodeQL are specifically designed to catch. The question of why no automated security gate existed in the project's CI/CD pipeline is worth asking — not to assign blame to a solo maintainer, but because it illustrates the infrastructure gap that community-maintained security-adjacent tooling tends to fall into.
What is the liability surface for organizations that ran vulnerable versions?
This question is almost entirely absent from the technical writeups, but it is the one that compliance teams and legal counsel will be asking. If an organization ran mcp-atlassian in HTTP transport mode and a breach occurred, the question of whether reasonable security practices were in place becomes relevant. The fact that the tool had four million downloads and no warning about the unauthenticated network exposure in its documentation creates a complex picture. Organizations in regulated industries — healthcare, finance, defense contractors — who had this running in environments where developers access sensitive data should be conducting a formal exposure assessment, not just patching and moving on.
How does this interact with insider threat scenarios?
The attack model discussed in most writeups assumes an external attacker on the same network. But the unauthenticated file write and exfiltration capabilities are equally useful to a malicious insider. A developer with physical or network access to a colleague's machine running an unpatched mcp-atlassian instance could exfiltrate SSH keys, AWS credentials, or Kubernetes configs without leaving logs on the victim machine. The insider threat dimension of this vulnerability is worth naming explicitly because it changes the risk calculus for organizations that already face that threat model.
What does "no active exploitation observed" actually mean?
Arctic Wolf's statement that no active exploitation had been observed at the time of publication is meaningful and responsible reporting. But it is worth understanding what that statement can and cannot confirm. It means no observed exploitation in the telemetry Arctic Wolf had access to at that point in time. It does not mean the vulnerability was not exploited before disclosure. Exploitation of developer tools and MCP servers may not generate the kinds of indicators that security operations centers typically monitor for. A silent read of ~/.aws/credentials followed by cloud credential abuse days later may never be attributed to an MCP server compromise. The absence of observed exploitation is not equivalent to confirmed non-exploitation.
The Bigger Problem No One Is Talking About
The individual vulnerabilities in mcp-atlassian are serious, but they are symptoms of something structural. The MCP ecosystem has a security posture problem that extends well beyond this one project.
BlueRock's analysis of over 8,000 MCP servers in their MCP Trust Registry found that an SSRF exposure similar to CVE-2026-27826 may be present in approximately 36.7% of all publicly reachable MCP servers. That is not a fringe concern. That is a systemic pattern. MCP adoption has accelerated faster than the security practices required to support it, and community-developed servers are being deployed in enterprise environments with the same trust afforded to hardened commercial software.
The pattern of unauthenticated exposure is not limited to mcp-atlassian. Community reporting in February 2026 described scans of publicly visible MCP servers turning up admin panels, debug endpoints, and API routes reachable without any authentication. This is the same failure mode, repeated at scale: tools built for local developer use that gradually migrate into shared, networked environments while carrying their original permissive defaults.
The authentication gap is the most immediate structural flaw. Many public MCP servers do not verify requests or protect user sessions. The MCP specification has evolved with each release to introduce better security guidance, and OWASP has now published both a Top 10 for MCP and a separate Top 10 for Agentic Applications, both identifying inadequate authentication as a primary risk. As OWASP's MCP Top 10 frames it, when MCP ecosystems involve multiple agents, users, and services exchanging data and executing actions, weak or missing identity validation exposes critical attack paths. — OWASP MCP Top 10. Despite that guidance, server implementations widely ignore these recommendations. The result is that developers install an MCP plugin from a trusted registry, assume some reasonable level of built-in security exists, and never discover that the service is bound to 0.0.0.0 with zero authentication until a researcher publishes a proof-of-concept.
Supply chain risk compounds this further. Because MCP servers are often distributed as packages via npm and PyPI and run via tools like npx or uvx, they execute with the full permissions of the user who runs them. There is no sandboxing by default, no auditing of what the code does once running, and auto-update defaults mean a compromised package version can propagate instantly across all installations.
The "Living off AI" attack pattern that Cato Networks researchers documented in June 2025 adds another dimension to this concern. In that proof-of-concept, an external attacker submits a malicious Jira support ticket. When an internal support engineer uses an MCP-connected AI assistant like Claude Sonnet to process that ticket, the AI executes the embedded malicious instructions using the internal user's privileges. The attacker never interacts with the MCP server directly. The internal user becomes an unwitting proxy. As Cato's researchers described it, the threat actor effectively gains privileged access without ever authenticating. — Cato Networks, CTRL Threat Research, June 2025. Notably, some Jira Service Management portals allow ticket submission without any authentication at all, and are discoverable through basic Google dorking. When this "Living off AI" pattern is combined with the unauthenticated RCE capability of mcp-atlassian below 0.17.0, the possible attack chains become significantly more complex and the attacker's required foothold shrinks to nearly nothing.
The OWASP GenAI Security Project has also published a practical guide for secure MCP server development that addresses these architectural issues directly, covering strong authentication, strict validation, session isolation, and hardened deployment. That guidance existed before CVE-2026-27825 was published. The gap between available guidance and actual implementation in community tooling is not an information problem. It is an incentive and infrastructure problem.
Atlassian itself has published guidance on MCP risk, acknowledging that MCP clients and servers enabled in your environment can cause an AI agent to perform actions on your behalf, and that this creates structural risks alongside powerful workflows. — Atlassian, MCP Clients: Understanding the potential security risks. Atlassian launched its own official Rovo MCP Server in beta in May 2025, then released it as generally available on February 4, 2026 — three weeks before the CVE-2026-27825 patch landed — built on OAuth 2.1 and API token authentication, specifically to provide a trusted alternative to community-maintained options. That server represents what secure MCP implementation looks like. The existence of CVE-2026-27825 and CVE-2026-27826 in the most popular unofficial alternative illustrates the gap between what secure looks like and what many developers are actually running.
What the Fix Actually Does
Version 0.17.0, released February 24, 2026, addressed both vulnerabilities with targeted additions to the codebase rather than architectural changes.
For CVE-2026-27825, pull request #987 introduced a centralized validate_safe_path() function in src/mcp_atlassian/utils/io.py. The function resolves symlinks, normalizes the path, and then verifies that the resolved path remains within the configured base directory. If the resolved path escapes the base directory, the operation raises a ValueError before any file write occurs. It is applied to both download_attachment() and download_content_attachments(). The logic is concise but correct:
# v0.17.0 - utils/io.py
def validate_safe_path(
    path: str | os.PathLike[str],
    base_dir: str | os.PathLike[str] | None = None,
) -> Path:
    if base_dir is None:
        base_dir = os.getcwd()
    resolved_base = Path(base_dir).resolve(strict=False)
    p = Path(path)
    if not p.is_absolute():
        p = resolved_base / p
    resolved_path = p.resolve(strict=False)
    if not resolved_path.is_relative_to(resolved_base):
        raise ValueError(
            f"Path traversal detected: {path} resolves outside {resolved_base}"
        )
    return resolved_path
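Exercising the helper shows the containment in action. The snippet below reproduces the same resolve-then-compare logic so it runs standalone; the example paths are illustrative:

```python
import os
from pathlib import Path

def validate_safe_path(path, base_dir=None):
    # Same resolve-then-compare logic as the v0.17.0 helper.
    if base_dir is None:
        base_dir = os.getcwd()
    resolved_base = Path(base_dir).resolve(strict=False)
    p = Path(path)
    if not p.is_absolute():
        p = resolved_base / p
    resolved_path = p.resolve(strict=False)
    if not resolved_path.is_relative_to(resolved_base):
        raise ValueError(f"Path traversal detected: {path}")
    return resolved_path

# A benign relative path stays inside the base directory:
print(validate_safe_path("report.pdf", base_dir="/srv/downloads"))

# A traversal attempt resolves outside the base and is rejected:
try:
    validate_safe_path("../../home/victim/.ssh/authorized_keys",
                       base_dir="/srv/downloads")
except ValueError as exc:
    print(exc)
```

Resolving symlinks before the comparison is the important detail: a lexical check on the raw string would miss a symlink inside the base directory that points elsewhere.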
For CVE-2026-27826, pull request #986 introduced validate_url_for_ssrf() in src/mcp_atlassian/utils/urls.py. The function enforces a scheme allowlist (http and https only), blocks known internal hostnames including localhost and the Google Cloud metadata endpoint, validates IP addresses for global reachability using Python's ipaddress module including IPv4-mapped IPv6 addresses, performs DNS resolution to catch hostnames that resolve to private IP ranges, and supports an optional domain allowlist via the MCP_ALLOWED_URL_DOMAINS environment variable. It also includes a redirect-following hook that blocks SSRF attempts that try to bypass the initial validation by using open-redirect chains as intermediaries.
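The project's actual validator is more thorough, but the core resolve-and-classify idea can be sketched with the standard library. The function below is illustrative, not the project's code:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_safe(url: str) -> bool:
    """Reject URLs that are not http(s) or whose host resolves to a
    non-globally-routable address. A sketch of the idea behind
    validate_url_for_ssrf, not the project's implementation."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve hostnames so DNS tricks (public name -> private IP)
        # are caught as well as literal addresses.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Unwrap IPv4-mapped IPv6 (::ffff:10.0.0.1) before classifying.
        if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
            addr = addr.ipv4_mapped
        if not addr.is_global:
            return False
    return True

print(is_url_safe("http://169.254.169.254/latest/meta-data/"))  # False: link-local metadata IP
print(is_url_safe("file:///etc/passwd"))                        # False: scheme not allowed
```

Note that a check at request time is still bypassable via redirects, which is why the real fix also hooks redirect following.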
These are solid, well-constructed fixes. They address the specific failure modes documented by Pluto Security and add defense-in-depth layers against bypass attempts. The turnaround from coordinated disclosure to patch release was also responsible: the maintainer had the fix available the same day the coordinated disclosure reached them. Pluto Security's writeup explicitly thanks the project's sole lead maintainer, Hyeonsoo Lee, for a swift and professional response — a meaningful acknowledgment given the scope of responsibility that comes with maintaining a project at this scale alone.
Version 0.17.0 fixes the path traversal and SSRF validation failures. It does not add authentication to the HTTP transport. The server still binds to 0.0.0.0 by default in HTTP transport mode. It still operates with no credential requirement for initiating a session. Patching is necessary but not sufficient. The network exposure model is unchanged.
What You Need to Do Right Now
The immediate action is unambiguous. Upgrade mcp-atlassian to version 0.17.0 or later. If you are using uvx or a pinned version, update the version specifier and reinstall. If you are running a Docker image, pull the updated image from Docker Hub, which has been updated to the patched version. If you are unsure what version you are running, check your pyproject.toml, your requirements file, or run pip show mcp-atlassian.
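For a programmatic check in a pip-managed environment, something like the following works; the helper is a sketch and its naive numeric parse is sufficient only for plain x.y.z version strings:

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 17, 0)

def parse(ver: str) -> tuple[int, ...]:
    # Naive parse -- fine for simple x.y.z strings, not PEP 440 in general.
    return tuple(int(part) for part in ver.split(".")[:3])

try:
    installed = version("mcp-atlassian")
except PackageNotFoundError:
    print("mcp-atlassian is not installed in this environment")
else:
    if parse(installed) < PATCHED:
        print(f"VULNERABLE: {installed} < 0.17.0 -- upgrade immediately")
    else:
        print(f"OK: {installed} is patched")
```

Remember that this only inspects the current Python environment; installations under uv-tool or inside containers need to be checked separately, which is what the Pluto Security script automates.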
Pluto Security published an open-source remediation script on GitHub at github.com/plutosecurity/MCPwnfluence that can scan for vulnerable mcp-atlassian installations across pip, pip3, uv-tool, and virtualenv environments and optionally perform automatic upgrades. Running it in check-only mode first is recommended. The script also has a --scan flag that inspects MCP client configuration files for Claude Desktop, Cursor, and VS Code, flagging dangerous transport settings like 0.0.0.0 binding and HTTP transport mode that may be present in other MCP servers beyond mcp-atlassian.
Beyond the immediate patch, there are defensive practices worth locking in for any MCP deployment:
- Never bind MCP HTTP transport to 0.0.0.0 unless you have deliberate network access controls in place. If you only need local access, bind to 127.0.0.1. The default binding behavior of mcp-atlassian's HTTP transport is what transformed a local tool into a network-exposed attack surface.
- Apply least privilege to any MCP server process. A server that runs as a non-root user in a containerized environment with explicit filesystem mounts limits the blast radius of an arbitrary file write vulnerability dramatically. The fix matters, but defense in depth matters more.
- Prefer Atlassian's official Rovo MCP Server for production and enterprise deployments. It is built on OAuth 2.1, requires HTTPS, and respects your existing Atlassian permission model. The community tooling served a real need before the official option existed, but the security tradeoffs are now clearly documented.
- Audit what MCP servers are running in your development environment and on what ports. Developers install these tools quickly and do not always track them. A port scanner run against your own machine may surprise you. The Pluto Security remediation script's --scan flag checks MCP client config files and is a good starting point.
- Monitor for lateral movement indicators if you ran an exposed version. If mcp-atlassian was running in HTTP transport mode on an unpatched version and your machine was accessible on the local network, treat it as a potential compromise. Check ~/.ssh/authorized_keys, review shell startup files for injected commands, examine recent outbound connections, and review cloud credential usage logs for unexpected API activity.
- Enable IMDSv2 on all AWS EC2 instances. This limits SSRF exploitability against the instance metadata service regardless of which MCP server or other application is running on the instance. It does not eliminate SSRF risk, but it removes the most commonly exploited metadata endpoint from the attack surface.
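For the lateral-movement check, a small helper can surface when the usual persistence targets were last modified, so the timestamps can be compared against the window of exposure. The target list below is illustrative; extend it for your environment (cron, systemd units, and so on):

```python
import time
from pathlib import Path

# Common persistence targets for an arbitrary-file-write attacker.
# Illustrative list -- not exhaustive.
DEFAULT_TARGETS = ("~/.bashrc", "~/.zshrc", "~/.profile",
                   "~/.ssh/authorized_keys")

def last_modified(paths=DEFAULT_TARGETS):
    """Return {path: mtime} for every listed file that exists."""
    result = {}
    for raw in paths:
        p = Path(raw).expanduser()
        if p.exists():
            result[str(p)] = p.stat().st_mtime
    return result

for path, mtime in last_modified().items():
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(mtime))
    print(f"{stamp}  {path}")
```

A recent modification is not proof of compromise on its own, but an authorized_keys file touched during the exposure window warrants a line-by-line review.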
Deeper Solutions Than "Just Patch It"
The standard remediation advice for a vulnerability like this is to patch, apply least privilege, and prefer official tooling. That advice is correct. It is also insufficient if the structural conditions that produced this vulnerability remain unchanged. The following are harder, less commonly discussed interventions that the MCP ecosystem and its stakeholders need to consider seriously.
Registry-level security requirements
PyPI and npm distribute MCP servers the same way they distribute everything else: with no security screening, no authentication requirements for listed packages, and no disclosure obligations when vulnerabilities are discovered. The ecosystem needs dedicated MCP registries that require minimum security disclosures: does the server authenticate? Does it bind to 0.0.0.0? What filesystem access does it require? This is not about blocking packages. It is about informed consent. A developer installing a package that will bind to all network interfaces with no authentication should be told that before they install it, not after a CVE is published.
Automated security gates in community projects
Both vulnerabilities in mcp-atlassian are detectable by freely available static analysis tools. The missing path traversal check and the unvalidated header-to-URL flow are exactly the patterns that Semgrep rules and Bandit checks are written to catch. A GitHub Actions workflow running basic SAST on pull requests costs nothing for an open-source project and would have flagged these issues before they shipped. The security community should build and promote MCP-specific Semgrep rulesets that project maintainers can adopt without needing deep security expertise. Pluto Security's open-source remediation tooling is a step in this direction. Purpose-built SAST rules for common MCP vulnerability patterns are the next.
Mandatory disclosure in tool manifests
The MCP specification defines how servers advertise their capabilities through tool manifests. Those manifests currently describe what a server can do for an AI agent. They do not describe the security properties of the server itself: whether it authenticates, whether it validates paths, whether it restricts outbound requests. Adding a standardized security metadata section to the MCP tool manifest specification would give MCP clients the information needed to warn users before connecting to a server that lacks authentication or makes unconstrained outbound requests. This is an architectural change to the protocol, not just a community practice. It requires Anthropic and the MCP specification maintainers to prioritize it. Given the scale of the ecosystem, that prioritization is warranted.
Separating filesystem and network MCP tools by default
mcp-atlassian's vulnerability surface exists because the same server that talks to the Atlassian API also has unconstrained filesystem write access. These capabilities do not have to live in the same process. An architectural pattern where filesystem-touching tools run in a separate, more restricted subprocess with an explicit allowlist of writable directories — distinct from the network-facing process handling API calls — would contain the impact of either class of vulnerability. This adds complexity for maintainers, but it also adds meaningful isolation. The OWASP Agentic AI Top 10 recommends separating "System Tools" with organizational-level permissions from "User Tools" scoped to individual contexts. The same principle applies at the MCP server architecture level.
Treating the AI agent as part of the threat model
The "Living off AI" attack documented by Cato Networks is a reminder that the AI model connected to your MCP server is itself an attack surface. If the model processes untrusted input — a Jira ticket submitted by an external user, a Confluence page edited by a partner — without prompt isolation, the model can be directed to invoke MCP tools with the internal user's privileges. The solution is not to distrust AI assistants. The solution is to apply input validation at the MCP layer, not just the AI layer. Prompt content flowing into MCP tool calls should be treated with the same skepticism as SQL queries or shell commands. The MCP server should enforce what the AI is allowed to do regardless of what the AI asks to do.
The MCP ecosystem is genuinely useful and the tooling is maturing fast. But the pattern visible in mcp-atlassian — where a widely adopted open-source integration layer carries no authentication by default and receives no security audit before accumulating millions of installs — is a pattern that will appear again. The developers building on these tools and the security teams tasked with protecting the environments they run in need to treat MCP servers with the same scrutiny applied to any network-exposed service. Because that is exactly what they are.
Arctic Wolf security bulletin (February 28, 2026): arcticwolf.com/resources/blog/cve-2026-27825/
Atlassian Rovo MCP Server GA announcement (February 4, 2026): atlassian.com/blog/announcements/atlassian-rovo-mcp-ga
Pluto Security technical writeup (February 27, 2026): blog.pluto.security/p/mcpwnfluence-cve-2026-27825-critical
Pluto Security remediation script on GitHub: github.com/plutosecurity/MCPwnfluence
GitHub Security Advisory CVE-2026-27825: GHSA-xjgw-4wvw-rgm4
GitHub Security Advisory CVE-2026-27826: GHSA-7r34-79r5-rcc9
Atlassian MCP risk guidance: atlassian.com/blog/artificial-intelligence/mcp-risk-awareness
Cato Networks "Living off AI" research (June 2025): catonetworks.com/blog/cato-ctrl-poc-attack-targeting-atlassians-mcp/
BlueRock / Dark Reading SSRF in MCP (January 2026): darkreading.com/application-security/microsoft-anthropic-mcp-servers-risk-takeovers
BlueRock MCP Trust Registry research: bluerock.io/post/private-repo-scanning-mcp-servers-secure-by-default
OWASP MCP Top 10: owasp.org/www-project-mcp-top-10/
OWASP Practical Guide for Secure MCP Server Development: genai.owasp.org/resource/a-practical-guide-for-secure-mcp-server-development/