No Malware. No Alerts. 216 Servers Gone.
How a threat actor built a free SIEM, triaged stolen data from 34 organizations, and left no signatures behind — and what your detection strategy is still missing.

Security researchers at Huntress have uncovered a campaign in which a threat actor registered a free trial of Elastic Cloud's SIEM platform and repurposed it as a centralized repository for stolen data harvested from more than two hundred compromised systems. The attacker exploited critical vulnerabilities in SolarWinds Web Help Desk and other enterprise software, exfiltrating victim data directly into the Elastic instance, where they used Kibana's built-in search and filtering capabilities to triage victims and identify high-value targets. The campaign affected at least 216 hosts across 34 Active Directory domains spanning government agencies, financial services firms, educational institutions, and manufacturing companies worldwide.

A SIEM platform — short for Security Information and Event Management — is one of the foundational tools in any security operations center. It collects logs, correlates events, and helps analysts identify threats in their environment. It exists to protect organizations. In this case, however, a threat actor set one up to do the exact opposite: systematically catalog stolen data from dozens of organizations and use the platform's own analytical features to decide which victims were worth targeting further. The campaign, documented in detail by Huntress researchers John Hammond, Anna Pham, and Jamie Levy, represents the first publicly documented case of an adversary using Elastic Cloud specifically for data exfiltration and victim triage.

The Discovery: A SIEM Turned Inside Out

The investigation began when Huntress SOC analyst Dipo Rodipe responded to a case of active SolarWinds Web Help Desk exploitation on February 7, 2026. During that investigation, the team identified an unusual exfiltration method: instead of sending stolen data to a traditional command-and-control server or a file-sharing service, the attacker was pushing victim information directly into an Elastic Cloud instance using hardcoded API credentials embedded in encoded PowerShell commands.

This was not a compromised Elastic deployment belonging to a victim organization. The attacker had created their own Elastic Cloud environment using a free trial account, effectively standing up a fully functional SIEM instance at no cost and with minimal friction. The deployment was created on January 28, 2026, running Elasticsearch version 9.2.4 on Google Cloud Platform's us-central1 region. The attacker left the default deployment name — "My deployment" — unchanged, suggesting confidence that the infrastructure would not be easily discovered.

Huntress described this as the first time they had observed an adversary use Elastic Cloud for exfiltration — noting that while DFIR-focused tools like Velociraptor had been seen repurposed for command and control before, weaponizing a SIEM platform to triage stolen victim data represented a meaningful escalation in attacker sophistication. (Huntress, March 2026)

The concept of abusing legitimate security tools is not new. Threat actors have previously co-opted forensic tools like Velociraptor for command and control, and Huntress has documented adversaries abusing free trials of security software before. But this campaign marked a significant escalation: the attacker was not merely using a defensive tool for access. They were using a platform specifically built for threat detection and data analysis to perform threat detection and data analysis — on their own victims.

The Attack Chain: From Exploitation to Exfiltration

The initial access vector centered on the exploitation of SolarWinds Web Help Desk (WHD), a widely deployed IT ticketing and asset management platform. The vulnerabilities involved include CVE-2025-26399, a critical unauthenticated remote code execution flaw with a CVSS score of 9.8, along with related bypass vulnerabilities CVE-2025-40551 and CVE-2025-40536. These flaws stem from insecure deserialization within the AjaxProxy component. Microsoft confirmed active exploitation in a February 6, 2026 advisory, noting that successful exploitation granted attackers unauthenticated remote code execution on internet-facing WHD deployments.

What makes the CVE-2025-26399 vulnerability particularly concerning is its lineage. It represents a patch bypass of CVE-2024-28988, which was itself a patch bypass of CVE-2024-28986 — meaning the same underlying deserialization issue had been patched three separate times. As Ryan Dewhurst, head of threat intelligence at watchTowr, observed in comments to The Hacker News, SolarWinds has had to revisit and re-patch the very same flaw repeatedly, highlighting persistent architectural weaknesses in the application's handling of untrusted data.

The root of that persistence lies in the technology stack itself. SolarWinds Web Help Desk is built on WebObjects, a Java web application framework developed by Apple that has been effectively defunct for well over a decade. As watchTowr noted in its technical analysis, securing an application built on legacy WebObjects code that exposes a component capable of materializing attacker-controlled objects and invoking methods on them is a fundamentally difficult foundation to defend. No amount of patching at the individual vulnerability level fully resolves the structural risk when the underlying framework was not built with modern threat models in mind. The past five SolarWinds vulnerabilities added to CISA's Known Exploited Vulnerabilities catalog have all been in the Web Help Desk product — a pattern that suggests a dedicated threat actor or group has found the product reliably exploitable. (The Stack, March 2026)

It is worth pausing here to think about what that convergence means from a defender's perspective — not just technically, but operationally. CVE-2025-26399 was disclosed in September 2025. Microsoft observed the attacks it enabled in December 2025. The QEMU persistence mechanism that this campaign relied on was first installed on January 16, 2026, according to Huntress telemetry. The Elastic Cloud trial that received stolen data was created on January 28, 2026. That timeline means the window between public vulnerability disclosure and full multi-organization compromise with a working data-triage infrastructure was approximately four months. For organizations that patch on quarterly cycles, that window does not close before the attacker is already inside, exfiltrating, and triaging which of your servers is worth further investment.

The post-exploitation sequence deserves scrutiny beyond a simple list of tools deployed. Each stage of this attacker's toolkit was chosen to solve a specific operational problem while minimizing detection risk, and understanding that logic matters more than cataloging the individual components.

Stage one: Initial access via WHD. The choice of SolarWinds Web Help Desk as the entry point was not random. WHD is frequently internet-facing by design — it exists to receive support tickets from employees and managed clients. It runs as a privileged service. It stores credentials for service accounts, email configurations, and integrated directory services. And it has a documented history of deserialization vulnerabilities that have been added to CISA's KEV catalog. For a threat actor scanning for reliable initial access, it is a high-yield target: widely deployed, privileged, and repeatedly exploitable from the same bug class. The very qualities that make it a useful enterprise tool — broad connectivity, centralized credential storage, always-on availability — are the qualities that make it attractive to an adversary.

Stage two: Immediate tool deployment via RMM. Within seconds of exploitation, the attacker installed a Zoho Assist agent via a silent MSI payload hosted on the file-sharing service Catbox. This is the operational equivalent of kicking in a door and immediately installing a lock of your own choosing before the owner can respond. Traditional malware removal playbooks do not fully apply here: the attacker's persistence mechanism is a legitimate, signed remote management tool. There is no hash to blocklist, no YARA rule that fires on Zoho Assist. The eviction problem is fundamentally different — you are not removing malware, you are de-provisioning a legitimate application from an attacker-controlled account. Incident responders who do not specifically check for unattended RMM configurations registered to external email providers will miss this entirely.
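One practical countermeasure is an inventory sweep for remote-management software that nobody approved. A minimal sketch, assuming a software inventory feed; the product list and data shapes here are illustrative assumptions, not a complete RMM catalog:

```python
# Hunting sketch for stage two: flag RMM software that is not on the
# organization's approved list. KNOWN_RMM and the inventory format are
# illustrative assumptions, not an exhaustive product list.

KNOWN_RMM = {"zoho assist", "anydesk", "atera", "screenconnect", "splashtop"}

def unapproved_rmm(installed_apps, approved):
    """Return installed RMM products absent from the approved set."""
    approved_lower = {a.lower() for a in approved}
    return sorted(app for app in installed_apps
                  if app.lower() in KNOWN_RMM
                  and app.lower() not in approved_lower)

inventory = ["Microsoft Office", "Zoho Assist", "ScreenConnect"]
print(unapproved_rmm(inventory, approved={"ScreenConnect"}))  # → ['Zoho Assist']
```

A sweep like this only surfaces candidates; responders still need to confirm which account each agent is registered to, since the attacker in this campaign used unattended access tied to an external Proton Mail account.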

Stage three: Velociraptor repurposed as C2. Velociraptor is a tool the security community built for rapid endpoint forensics. It is designed to execute remote queries, collect artifacts, and run commands across fleets of endpoints at speed. Repurposing it as a command-and-control framework is a conceptually elegant move: the traffic it generates looks like legitimate DFIR tool activity, the binary is signed and recognized as benign by endpoint protection platforms, and its capabilities — remote command execution, file retrieval, process inspection — are precisely what an attacker needs after initial access. The attacker deployed an outdated build, version 0.73.4, which contains CVE-2025-6264, a privilege escalation flaw affecting Velociraptor versions prior to 0.74.3. It is worth being precise here: Rapid7 confirmed that in observed campaigns using this same version, the vulnerability itself was not exploited — attackers had already obtained elevated access before deploying Velociraptor, and used it purely for persistence and command execution rather than as an escalation path. The deliberate choice of an outdated build likely reflects availability or operational familiarity rather than intent to exploit the CVE specifically. The broader point stands regardless: a signed, legitimate DFIR binary running as a Windows service generates no malware alerts by definition.

Stage four: Data exfiltration to Elastic Cloud. The Get-ComputerInfo PowerShell cmdlet is a built-in Windows tool. The Elasticsearch Bulk API is a documented, legitimate endpoint used by thousands of organizations. The API key that authenticated the connection was hardcoded in the script, meaning it was valid and would pass any destination-reputation check. HTTPS encrypted the traffic in transit. Taken together, every component of the exfiltration path would appear entirely normal to a network monitor that relies on destination reputation, protocol inspection, or known-bad signatures. The only anomaly is contextual: a Java service process spawning PowerShell that pushes structured system metadata to a cloud API at 3 AM is not normal behavior for a help desk application. That contextual signal is only visible to a detection system that has established a behavioral baseline for that specific process on that specific host — which the vast majority of deployed monitoring tools have not done.
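That contextual anomaly can be expressed as a simple behavioral rule. A sketch, assuming a generic process-plus-network event feed; the field names and domain suffixes are illustrative assumptions, not a specific EDR schema:

```python
# Detection sketch: flag a Java service spawning a script host that then
# talks to an external cloud API endpoint. Field names (parent_image,
# child_image, dest_domain) and the suffix list are illustrative.

SUSPICIOUS_PARENTS = {"java.exe", "javaw.exe", "tomcat9.exe"}
SCRIPT_HOSTS = {"powershell.exe", "pwsh.exe"}
CLOUD_API_SUFFIXES = (".cloud.es.io", ".workers.dev")

def flag_events(events):
    """Return events matching the Java -> PowerShell -> cloud API chain."""
    hits = []
    for ev in events:
        parent = ev.get("parent_image", "").lower()
        child = ev.get("child_image", "").lower()
        dest = ev.get("dest_domain", "").lower()
        if (parent in SUSPICIOUS_PARENTS
                and child in SCRIPT_HOSTS
                and dest.endswith(CLOUD_API_SUFFIXES)):
            hits.append(ev)
    return hits

sample = [
    {"parent_image": "java.exe", "child_image": "powershell.exe",
     "dest_domain": "abc123.us-central1.gcp.cloud.es.io"},
    {"parent_image": "explorer.exe", "child_image": "powershell.exe",
     "dest_domain": "login.microsoftonline.com"},
]
print(len(flag_events(sample)))  # → 1
```

The rule fires on the combination, not on any single element: each component is individually benign, which is exactly why signature-based tooling stays silent.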

KEV Urgency: Federal Patch Deadline

CISA added CVE-2025-26399 to the Known Exploited Vulnerabilities catalog on March 9, 2026 — one day before this article's publication — with a remediation due date of March 12, 2026 for federal agencies. The three-day window is among the shortest ever assigned in a KEV entry, reflecting CISA's assessment of active exploitation severity. Organizations outside the federal sector should treat this timeline as a benchmark, not a ceiling. Upgrade to SolarWinds Web Help Desk version 2026.1 immediately.

CVE Details

CVE-2025-26399 (CVSS 9.8): Unauthenticated AjaxProxy deserialization RCE affecting SolarWinds Web Help Desk 12.8.7 and all previous versions. Patch bypass of CVE-2024-28988, which was itself a bypass of CVE-2024-28986. Added to CISA's Known Exploited Vulnerabilities catalog in March 2026. Organizations should update to SolarWinds Web Help Desk version 2026.1 or later.

Once the attacker gained code execution on a victim's WHD server, they launched an encoded PowerShell command that executed Get-ComputerInfo to collect detailed system information, including operating system version, hardware specifications, Active Directory domain membership, installed patches, and general host metadata. This data was then pushed directly to an Elasticsearch index named systeminfo using a hardcoded API key. The approach was efficient and stealthy: outbound HTTPS connections to Elastic Cloud's legitimate infrastructure would appear benign to network monitoring tools, as Elastic is a trusted vendor whose domains are commonly whitelisted.
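To make the exfiltration path concrete, the sketch below reproduces the documented NDJSON framing of the Elasticsearch Bulk API: one action line and one document line per record, newline-terminated. The host records are fabricated; only the index name systeminfo comes from the campaign, and the sketch is shown so defenders recognize what this traffic looks like, not as a reconstruction of the attacker's exact script:

```python
import json

def build_bulk_body(index, records):
    """Serialize records into Elasticsearch Bulk API NDJSON:
    alternating action lines and document lines, newline-terminated."""
    lines = []
    for doc in records:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# Fabricated stand-ins for Get-ComputerInfo-style host metadata.
hosts = [
    {"hostname": "DC01", "os": "Windows Server 2019", "domain": "corp.example"},
    {"hostname": "WEB02", "os": "Windows Server 2022", "domain": "corp.example"},
]
body = build_bulk_body("systeminfo", hosts)
print(body.count("\n"))  # → 4 (two action lines, two documents)
```

On the wire this is an ordinary HTTPS POST to a legitimate Elastic endpoint authenticated with a valid API key, which is why destination-reputation monitoring sees nothing wrong.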

The attacker also deployed additional persistence tools on compromised systems. According to Huntress, the post-exploitation toolkit included Zoho Assist agents (installed via MSI payloads hosted on the file-sharing service Catbox), Cloudflare tunnels for maintaining persistent access, and Velociraptor — a legitimate open-source digital forensics and incident response (DFIR) tool — repurposed as a command-and-control agent. The Velociraptor server was hosted on a Cloudflare Worker subdomain, with the agent executing encoded PowerShell commands through Velociraptor's built-in command execution capabilities.

The Zoho Assist agent was configured specifically for unattended access and was registered to a Zoho account tied to a Proton Mail address — a deliberate choice that places account creation outside the visibility of corporate or organizational email providers. This detail matters operationally: unattended remote access means the attacker could reconnect to a compromised host at any time without requiring an interactive session, making eviction far harder than with traditional malware that can be simply removed. Zoho Assist is a separate product from Zoho ManageEngine's RMM suite; both are legitimate remote access tools, but Zoho Assist in particular is designed for unattended, persistent connections, which is precisely why it was the attacker's choice here.

Persistence went beyond software. Huntress documented that each compromised host had malicious scheduled tasks created to abuse QEMU — the open-source machine emulator — as a mechanism for opening SSH backdoors that would survive reboots. This is a notable tradecraft choice because QEMU is a legitimate virtualization tool that appears benign to software inventories and endpoint telemetry, yet it can be configured to run a virtual machine whose only purpose is to maintain an SSH tunnel back to attacker-controlled infrastructure. The earliest observed installation of this QEMU-based persistence mechanism was January 16, 2026, giving a potential campaign start date that predates the February 7 investigation by more than three weeks. Elastic Security Labs noted that exploitation activity may have begun as early as December 2025, suggesting the campaign ran for weeks before Huntress's first observed incident.
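Defenders can hunt for this persistence pattern with nothing more than scheduled-task output. A minimal sketch that parses `schtasks /query /fo csv /v` results and flags tasks invoking QEMU; the column name follows the English-locale verbose CSV output, and the sample rows are fabricated:

```python
import csv
import io

# Substrings identifying QEMU binaries in a task's command line.
QEMU_MARKERS = ("qemu-system", "qemu.exe", "qemu-img")

def find_qemu_tasks(schtasks_csv):
    """Return names of scheduled tasks whose action invokes QEMU."""
    hits = []
    for row in csv.DictReader(io.StringIO(schtasks_csv)):
        action = (row.get("Task To Run") or "").lower()
        if any(marker in action for marker in QEMU_MARKERS):
            hits.append(row.get("TaskName"))
    return hits

# Fabricated sample output (verbose schtasks CSV, abbreviated columns).
sample = """\
"TaskName","Task To Run"
"\\Updater","C:\\tools\\qemu-system-x86_64.exe -display none"
"\\GoogleUpdate","C:\\Program Files\\update.exe"
"""
print(find_qemu_tasks(sample))  # → ['\\Updater']
```

Any hit deserves scrutiny on a server that has no business running virtual machines; a help desk host with a scheduled QEMU task is almost certainly not benign.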

The Legacy Software Problem Nobody Wants to Pay to Fix

The WebObjects framework at the core of SolarWinds Web Help Desk is not a secret. It is not a recently discovered liability. It is a Java web application framework whose last major release was nearly two decades ago, and which was described by watchTowr researchers as technology that "most of us, born after 1960, have never encountered in real-world security research" — a fitting description of the challenge any security team faces trying to reason about vulnerabilities in code whose design assumptions predate the modern threat landscape. (watchTowr Labs, February 2026) SolarWinds has been patching around it rather than replacing it, and the results are visible in CISA's KEV catalog.

This raises a question that the cybersecurity industry is not yet comfortable asking directly: at what point does a software vendor's continued reliance on a deprecated, structurally unsafe technology stack become a product liability issue rather than merely a patching inconvenience? The answer the industry has provided so far is: never. Vendors patch individual vulnerabilities, issue advisories, and move on. Customers are expected to apply those patches on their own timelines, often weeks or months after disclosure, and to absorb the cost of any compromise that occurs in the gap. The vendor faces no material consequence from the third-patch-bypass-of-the-same-flaw cycle.

That arrangement is increasingly difficult to justify as the economic model of vulnerability exploitation matures. CVE-2025-26399 was disclosed in September 2025. By January 2026, it was being used as the front door to a multi-continent campaign that compromised more than 200 hosts across 34 organizations, including government agencies and financial institutions. The structural root cause — an AjaxProxy component on a defunct framework capable of materializing arbitrary attacker-controlled Java objects — remains in the product. A fourth patch bypass is not a hypothetical.

Microsoft has taken a different approach with its Secure Future Initiative, which includes explicitly committing to rewriting legacy components and removing deprecated infrastructure. That program is imperfect and slow, but it represents a model in which the vendor accepts structural accountability rather than delegating it entirely to customers. watchTowr's three realistic remediation paths for SolarWinds WHD — eliminate AjaxProxy entirely, constrain deserialization to a safe subset, or continue playing whack-a-mole — are not equally viable in practice. (watchTowr Labs, February 2026) The third path has been chosen for four consecutive vulnerabilities. Customers running WHD who are waiting for a fifth patch should be simultaneously evaluating migration timelines to alternative ITSM platforms, not because a migration is easy, but because the alternative is accepting unbounded risk from a known-broken architectural decision that is not theirs to fix.

For organizations in regulated sectors, this conversation has a compliance dimension as well. If your WHD instance was internet-facing during December 2025 through February 2026, and you were running a version prior to 2026.1, you were vulnerable to a vulnerability that was actively being weaponized by at least one threat actor with a documented multi-organization campaign. Whether that constitutes a reportable incident under your sector's breach notification framework depends on whether data was accessed — which, given the QEMU backdoor's persistence and the 80-minute gap between deployment creation and first Kibana login, is not a question you can answer without forensic investigation of the affected hosts.

Infrastructure Unraveled: Disposable Emails, Privacy Networks, and Operational Patterns

The infrastructure behind this campaign reveals an operator who understood operational security but made critical mistakes in compartmentalization. When Huntress coordinated with Elastic to investigate the free trial account, they discovered the registration email used a disposable address from the domain quieresmail[.]com. This domain belongs to what researchers believe is a network of hundreds of throwaway email domains operated by firstmail.ltd, a Russian-registered temporary email service.

The email address followed one of two patterns observed across accounts linked to this network: either a fabricated first and last name followed by four digits resembling a year, or a random string of eight lowercase letters. The Elastic trial in question used the eight-character random format.

This is where the attacker's operational security broke down. Elastic shared with Huntress that multiple other free trial accounts had been created using different disposable email addresses, all following the same random string formatting. More importantly, the randomly generated email prefixes matched the exact subdomains the attacker had configured for Cloudflare Worker pages used to host their Velociraptor callback servers. For example, the subdomain qgtxtebl.workers[.]dev corresponded to an email address with the same qgtxtebl prefix. The adversary had been treating the randomly generated email handle as a reusable identifier across their entire infrastructure — a significant OPSEC failure that allowed researchers to link otherwise separate campaign components together.
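The correlation itself is trivial once the pattern is spotted, which is the point: one reused eight-character handle collapses an entire infrastructure into a single cluster. A sketch, using the qgtxtebl example from the report plus fabricated filler values:

```python
import re

# An eight-lowercase-letter handle, per the pattern described above.
RANDOM_HANDLE = re.compile(r"^[a-z]{8}$")

def link_identifiers(emails, subdomains):
    """Return handles that appear as both an email prefix and a
    workers.dev-style subdomain label."""
    email_handles = {e.split("@")[0] for e in emails
                     if RANDOM_HANDLE.match(e.split("@")[0])}
    sub_handles = {s.split(".")[0] for s in subdomains
                   if RANDOM_HANDLE.match(s.split(".")[0])}
    return email_handles & sub_handles

emails = ["qgtxtebl@quieresmail.com", "john.doe1984@example.com"]
subs = ["qgtxtebl.workers.dev", "support.example.org"]
print(link_identifiers(emails, subs))  # → {'qgtxtebl'}
```

Pivots this cheap are why compartmentalization failures matter more than tooling choices: the operator's privacy network and disposable emails were all undermined by a single reused string.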

Administrative login sessions to the Elastic Cloud instance were traced to two IP addresses: 154.26.156[.]181 and 51.161.152[.]26. In discussions with multiple security teams and law enforcement agencies, there was consensus that these addresses originated from the Safing Privacy Network (SPN), a specialized privacy routing system that positions itself as an alternative to traditional VPNs and Tor. Unlike a standard VPN, which routes all traffic through a single exit node, SPN spreads connections across multiple exit points and applies layered onion-style encryption per connection. The platform accepts payment via credit card, PayPal, and cryptocurrencies including Bitcoin and Monero, making it appealing to actors who want to minimize their attribution footprint while avoiding the friction and community scrutiny of Tor.

Browser telemetry captured from the Elastic Cloud Kibana sessions also provided a fingerprint of the operator's workstation: Windows 10 x64 running Chrome 144 (as reported in the browser user agent string captured by Elastic; whether this reflects an actual pre-release build or a spoofed UA value is not confirmed), with the first Kibana login recorded on January 28, 2026 at 03:05 UTC — approximately 80 minutes after the Elastic deployment was created.

Scope of Compromise: 216 Hosts, 34 Domains, and a Saved Search Named "oooo"

The data stored in the attacker's systeminfo index painted a troubling picture of scale. At the time of analysis, the index contained records for approximately 216 unique victim hosts spanning 34 distinct Active Directory domains, each representing a separate compromised organization. The victims were overwhelmingly servers — 91% of the compromised hosts — with the majority running Windows Server 2019 or 2022. The affected organizations spanned 37 time zones across multiple continents and included government agencies, higher education institutions, financial services firms, religious and nonprofit organizations, global manufacturing and automotive companies, IT service providers, retail businesses, and construction firms.

Kibana usage telemetry revealed that the operator spent significant time actively triaging this victim data. On January 28 alone — the same day the deployment was created — the attacker logged 164 interactions over nearly 150 minutes of active use. Activity continued over the following days, with a notable spike on February 4 when the operator performed 106 field lookups, applied 23 filters, and executed 15 search queries in under 13 minutes. Huntress described this pattern as consistent with systematic triage of high-value targets such as domain controllers.

One particularly revealing artifact was a saved Kibana search query named "oooo." Created on February 4, this saved search had been configured to display columns for victim domain, caption, timezone, and IP addresses, with a filter excluding standalone workstations. The query was focused on a single victim domain where the attacker had already compromised six servers, one of which appeared to be a domain controller. Huntress's investigation indicated that this targeted organization was an AI-powered SaaS platform — suggesting the attacker was selectively prioritizing technology companies with potentially valuable intellectual property or customer data.
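For readers who want to picture what that saved search amounts to, here is an illustrative reconstruction in Elasticsearch query DSL: select the triage columns, pin to one victim domain, exclude workstations. The field names (domain, caption, timezone, ip_addresses, os_product_type) and the domain value are assumptions for the sketch; the attacker's actual index schema has not been published:

```python
import json

# Hypothetical reconstruction of an "oooo"-style saved search as a
# bool query. All field names and values are illustrative stand-ins.
oooo_like_query = {
    "_source": ["domain", "caption", "timezone", "ip_addresses"],
    "query": {
        "bool": {
            "filter": [
                {"term": {"domain": "victim-saas.example"}},
            ],
            "must_not": [
                {"term": {"os_product_type": "workstation"}},
            ],
        }
    },
}
# Serializes to the JSON body a Kibana saved search issues under the hood.
print(json.dumps(oooo_like_query, indent=2)[:60])
```

Nothing here requires attacker skill; it is a few clicks in Kibana's Discover interface, which is precisely the article's concern.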

That detail is worth sitting with. The attacker did not simply harvest whatever was reachable through the WHD vulnerability chain. They sorted. They filtered. They built a saved search specifically oriented toward an organization whose value proposition is AI-generated output — which means customer data, model weights, training pipelines, API credentials, and the kind of proprietary infrastructure that cannot be reconstructed from public sources. Whether this reflects general opportunism or a specific tasking to acquire AI-adjacent intellectual property is unknown, but the priority signal is unambiguous: among 34 compromised organizations, an AI company's domain controllers warranted their own named saved search. Security teams at AI-adjacent companies — not just AI developers but any organization that builds, integrates, or trains on proprietary models — should treat this as a concrete indicator that their asset class has moved up the attacker triage queue.

Across the full campaign, the operator accumulated approximately 249 minutes of active Discover usage with 449 total interactions. Evidence also showed that at least one Elasticsearch index was deleted on February 2, possibly representing an earlier data collection that was purged before the current dataset was populated.

That deleted index deserves more attention than it typically receives in post-incident summaries. An index purged mid-campaign implies the attacker had a reason to start fresh — either the earlier collection was from a different targeting phase that had served its purpose, the data was stale or malformed, or they were deliberately cleaning up a prior wave before investigators could examine it. If the latter, then the 216 hosts and 34 domains represent not the campaign's total scope but the portion the attacker chose to preserve. The organizations whose data preceded February 2 may have no indication they were ever collected.

What the Attacker's SIEM Tells Us About Yours

The most underexamined dimension of this campaign is not what the attacker did with Elastic Cloud — it is what the attacker's use of it reveals about the state of defensive SIEM implementations in most organizations. The attacker built a functional victim-triage system in under 80 minutes using a free trial account, a PowerShell script with a hardcoded API key, and Kibana's default interface. They needed no custom code. They needed no specialized data pipeline. They needed no analyst training. They used the same interface defenders use every day, and they produced operational intelligence from it within hours of deployment.

That efficiency is not a commentary on the attacker's sophistication. It is a commentary on how straightforward SIEM analytics have become, and how asymmetrically that benefit accrues to attackers versus defenders. The attacker was querying 216 hosts across 34 organizations with 449 total interactions in under 250 minutes of active use. Many security operations centers cannot query their own environment that quickly because their SIEM is poorly indexed, their log sources are incomplete, their retention policies have aged out the relevant data, or their analysts are managing alert fatigue from hundreds of low-fidelity signals. The attacker, working with a clean dataset of structured system metadata they had deliberately collected, had better visibility into those victim environments than the victims' own security teams did.

This inversion deserves to sit uncomfortably with anyone responsible for a SIEM deployment. The tool itself is not the problem — Kibana is powerful precisely because it is designed to give analysts rapid, structured visibility into large datasets. The problem is that the attacker's use of it was cleaner and more purposeful than defensive use typically is. The attacker defined exactly what data to collect (operating system, hardware, domain membership, IP configuration, installed patches), structured it as NDJSON, pushed it to a single index, and built a saved search with clear, narrow fields. Most organizational SIEM deployments ingest logs from dozens of sources in inconsistent formats, normalize them imperfectly, and rely on analysts to hand-craft queries against a schema that changes with every agent update.

The structural lesson is not to ingest less data or simplify your SIEM schema. It is to ask whether your SIEM is actually configured to produce the same category of answer the attacker was seeking: which of my hosts would be the highest-value target right now, and why? Most SIEM deployments answer the question "what events occurred that match known-bad signatures?" They are substantially less effective at answering "which of my assets, in their current state, represents the most critical risk?" The attacker's Kibana instance answered the second question. Your SIEM should too, and if it cannot produce an answer within minutes rather than days, that is a gap worth measuring and addressing before someone else measures it for you.
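A toy version of that asset-centric question can be sketched in a few lines. The fields, weights, and fleet records below are all illustrative assumptions; the point is that ranking assets by current risk state becomes a cheap query once the data is structured:

```python
# Illustrative asset-risk scoring: rank hosts by current exposure rather
# than by matches against known-bad signatures. Weights are arbitrary.

def risk_score(host):
    score = 0
    if host.get("role") == "domain_controller":
        score += 5
    if host.get("internet_facing"):
        score += 3
    score += len(host.get("missing_critical_patches", [])) * 2
    return score

fleet = [
    {"name": "DC01", "role": "domain_controller", "internet_facing": False,
     "missing_critical_patches": ["CVE-2025-26399"]},
    {"name": "WKS17", "role": "workstation", "internet_facing": False,
     "missing_critical_patches": []},
    {"name": "WHD01", "role": "app_server", "internet_facing": True,
     "missing_critical_patches": ["CVE-2025-26399", "CVE-2025-40551",
                                  "CVE-2025-40536"]},
]
ranked = sorted(fleet, key=risk_score, reverse=True)
print([h["name"] for h in ranked])  # → ['WHD01', 'DC01', 'WKS17']
```

Note what the toy ranking surfaces: the internet-facing, unpatched help desk server outranks the domain controller, which is exactly the host this campaign walked in through.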

Connections to Broader Campaigns: SharePoint, Warlock, and Shared Exit Nodes

The investigation did not exist in isolation. One of the two IP addresses used to log into the attacker's Elastic instance — 51.161.152[.]26 — had also been observed by Palo Alto Networks Unit 42 in connection with ToolShell exploitation campaigns against Microsoft SharePoint servers. ToolShell refers to a chain of high-impact SharePoint vulnerabilities — CVE-2025-49704 and CVE-2025-49706, along with subsequent patch-bypass variants CVE-2025-53770 and CVE-2025-53771 — that have been exploited to deploy web shells and ransomware on compromised on-premises SharePoint servers.

Hostnames within the victim dataset recovered from the Elastic instance further corroborated that the threat actor was conducting opportunistic exploitation against multiple enterprise software platforms simultaneously. Beyond SolarWinds WHD, evidence pointed to intrusions involving Gladinet CentreStack, SmarterMail (SmarterTools), and Microsoft SharePoint. This pattern of exploiting whatever critical vulnerability provides immediate remote code execution is characteristic of an operationally driven threat actor focused on maximizing access across as many organizations as possible.

Insights shared by Lumen Technologies' Black Lotus Labs further corroborated these findings, and Huntress noted that they facilitated intelligence sharing between Black Lotus Labs and Elastic for broader coordination. The ToolShell exploitation campaigns have been linked in wider industry reporting to Storm-2603, a China-based threat cluster that Microsoft has observed deploying Warlock ransomware (also tracked as X2anylock) against compromised SharePoint servers.

It is worth noting that the ToolShell campaign involved multiple distinct actors simultaneously: Microsoft identified two established Chinese nation-state groups — Linen Typhoon (APT27) and Violet Typhoon (APT31) — alongside Storm-2603, which appears to operate with financial motivation alongside potential espionage objectives. Whether Storm-2603 is a state-sponsored actor, a financially motivated group with Chinese origins, or something in between remains an active area of investigation across multiple research organizations including Check Point, Palo Alto Networks Unit 42, and Trustwave SpiderLabs. While Huntress has not made a direct attribution claim connecting the Elastic Cloud abuse to Storm-2603 or any specific nation-state, the overlapping infrastructure and shared Safing SPN exit nodes suggest at minimum a shared tooling ecosystem.

Test Infrastructure Identified

Among the 216 victim records, Huntress identified four hosts that appeared to be the operator's own test virtual machines. The hosts — named Hajbepfy, Bekpaseb, Hhbhymne, and Vdfyivhy — shared identical hardware fingerprints: a custom SMBIOS build string, fabricated BIOS serial numbers labeled "DELL 1 serial," and QEMU virtualization profiles with 4GB RAM and Realtek RTL8139C+ NICs. Two of the four had the system manufacturer set to "DADY" with the model name "Dady super powerfulPC."

The presence of these test machines inside the operational dataset is not just an amusing artifact. It tells us that the attacker developed and validated the full exfiltration pipeline — the PowerShell script, the Elasticsearch index schema, the API key authentication — against their own controlled environment before deploying it against real victims. The pipeline was tested, not improvised. And because they ran those tests through the same Elastic deployment they later used operationally, they left a fingerprint of their development process in the data. Researchers who can identify test infrastructure mixed into a campaign dataset can use it to infer payload development timelines, environment configurations, and the gap between "this works in the lab" and "we are now active in the wild."
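Researchers doing that separation can start from exactly the fingerprints Huntress published. A minimal sketch, with illustrative record fields, that partitions a victim dataset on the fabricated BIOS serial and manufacturer strings:

```python
# Filter sketch: flag likely attacker test VMs using the published
# fingerprints ("DELL 1 serial" BIOS serial, "DADY" manufacturer).
# Record field names are illustrative, not the attacker's schema.

TEST_VM_SERIALS = {"dell 1 serial"}
TEST_VM_MAKERS = {"dady"}

def looks_like_test_vm(rec):
    if rec.get("bios_serial", "").lower() in TEST_VM_SERIALS:
        return True
    if rec.get("manufacturer", "").lower() in TEST_VM_MAKERS:
        return True
    return False

records = [
    {"hostname": "Hajbepfy", "bios_serial": "DELL 1 serial",
     "manufacturer": "DADY"},
    {"hostname": "FS01", "bios_serial": "7TZ3Q04",
     "manufacturer": "Dell Inc."},
]
print([r["hostname"] for r in records if looks_like_test_vm(r)])  # → ['Hajbepfy']
```

Stripping test machines out first matters for victim notification: a count that includes the operator's own lab VMs overstates the number of real organizations affected.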

Why This Campaign Matters: Lessons for Defenders

This campaign is significant for several reasons that extend beyond the specific vulnerabilities exploited or the number of victims compromised.

First, it demonstrates a maturing approach to offensive data management. Threat actors have long struggled with the "data problem" that comes with compromising hundreds of systems simultaneously: how do you efficiently sort through stolen information to identify which victims merit further exploitation? By co-opting a SIEM platform, this attacker solved that problem using the same tools defenders rely on, complete with structured queries, field-level filtering, and saved searches. This represents an operational sophistication that defenders should take seriously.

Second, the campaign highlights the ongoing challenge of what has been called "living off trusted services" (LoTS). When exfiltrated data travels over HTTPS to Elastic Cloud's legitimate infrastructure, it blends seamlessly with normal enterprise traffic. Network monitoring tools that rely on domain reputation or IP-based blocklists will not flag this activity, because the destination is a legitimate, trusted cloud service. This fits the broader trend identified by Elastic Security Labs, which noted that adversaries increasingly abuse trusted services for their transport encryption and benign reputation.

Third, the abuse of free trial accounts represents a persistent and underaddressed risk in the cloud ecosystem. Free-tier and trial offerings from cloud providers give attackers disposable, low-cost infrastructure that is difficult to attribute and easy to replace. The registration process typically requires only an email address — and when that email comes from a throwaway domain, the barrier to entry is effectively zero. Cloud providers face a difficult balance: reducing friction for legitimate users while preventing abuse by adversaries. What makes this case specifically interesting is that Elastic was not simply a passive bystander. They cooperated actively with Huntress and law enforcement, shared account linkage data across multiple trial registrations, and took the instance offline. That cooperation is meaningful and worth acknowledging. But it happened after 216 hosts across 34 organizations had already been compromised and triaged. The question that cooperation does not yet answer is whether any of the behavioral signals that were detectable in retrospect — the identical email prefix format, the API-only onboarding pattern, the timing from account creation to first data push — can be operationalized as a real-time detection during the trial registration phase rather than a forensic reconstruction after the fact. Elastic has not made public any commitment to that capability.

Fourth — and this is often overlooked — consider what the QEMU persistence mechanism tells us about attacker patience. Using a legitimate hypervisor to spawn a hidden virtual machine that does nothing except maintain an SSH tunnel is not a technique borrowed from commodity malware. It requires knowledge of QEMU's command-line interface, virtual machine configuration, and network stack. The actor who set this up was not running a script-kiddie playbook. They were engineering persistence that would evade endpoint scans, appear as legitimate system activity, and survive reboots. Organizations whose incident response playbooks focus on "remove the malware" without explicitly hunting for unexpected QEMU processes are not fully evicting this class of attacker.

Fifth, the saved Kibana query named "oooo" is worth examining beyond its surface-level absurdity. The attacker had already compromised six servers in a single organization, identified what appeared to be a domain controller among them, and was actively triaging whether that organization justified deeper investment. The target was an AI-powered SaaS platform. This is not random opportunism in the final stage — it is systematic priority assessment. The attacker was making business decisions about which of their victims to exploit further, using the same kind of structured analytical workflow a security team would use to prioritize alerts. Defenders who want to understand where this class of attacker goes next should ask: which of our systems, if their metadata landed in a SIEM query, would flag us as a high-value target? Those systems deserve elevated monitoring.

What the Standard Advice Misses: Deeper Solutions

The typical post-incident guidance after a campaign like this follows a familiar arc: patch the vulnerability, monitor outbound traffic, hunt for unauthorized tools, rotate credentials. That advice is correct. It is also insufficient on its own, because it treats each component of the campaign as a discrete problem rather than as a system that an attacker deliberately assembled. The attacker here did not make one risky move. They made a series of individually low-risk moves — register a free trial, use a throwaway email, route through a privacy network, push data to a legitimate SIEM — each of which would look benign in isolation. Defending against this requires a different frame.

Monitor for anomalous API key usage patterns inside your own SaaS tools. The hardcoded Elastic API key embedded in PowerShell on victim machines is a specific detection opportunity that most organizations are not hunting for. Any PowerShell command that references an Elastic, Splunk, Datadog, or similar SIEM API endpoint and is spawned by a non-security process (java.exe, a web application service wrapper, or a scheduled task) should be an automatic escalation. This is a narrow, high-fidelity signal. Behavioral detection rules targeting this pattern are available from Huntress's Threat Intel GitHub repository and from Elastic Security Labs.
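As a concrete sketch, that escalation logic might look like the following in a detection pipeline. The endpoint patterns and the parent-process list are illustrative placeholders, not an exhaustive rule, and the field names do not map to any specific EDR schema:

```python
import re

# Illustrative SIEM ingestion endpoints; extend with the platforms your
# organization actually uses.
SIEM_ENDPOINT_PATTERN = re.compile(
    r"(elastic-cloud\.com|\.es\.io|splunkcloud\.com|datadoghq\.com|/_bulk)",
    re.IGNORECASE,
)
# Parent processes that have no business invoking a SIEM ingestion API.
NON_SECURITY_PARENTS = {"java.exe", "w3wp.exe", "tomcat9.exe", "svchost.exe"}

def should_escalate(parent_process: str, command_line: str) -> bool:
    """Flag PowerShell that references a SIEM API endpoint when spawned
    by a process that should never be shipping data to one."""
    if parent_process.lower() not in NON_SECURITY_PARENTS:
        return False
    is_powershell = "powershell" in command_line.lower() or "pwsh" in command_line.lower()
    return is_powershell and bool(SIEM_ENDPOINT_PATTERN.search(command_line))
```

Because both conditions must hold at once, the rule stays quiet for analysts querying their own SIEM and for service processes doing ordinary work.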

Apply cloud provider controls, not just perimeter controls. Organizations that use Elastic, Splunk, Datadog, or similar cloud analytics platforms in their own environments should audit their API key policies today. Elastic supports fine-grained API key permissions — a key used for log ingestion from one system should not have read access to other indices. If an attacker compromises a system and finds a hardcoded API key, the blast radius of that key should be minimal. Rotate API keys regularly, scope them narrowly, and set them to expire. None of this prevents the LoTS technique, but it dramatically limits what an attacker can do with a stolen credential.
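Elastic's Create API key endpoint (`POST /_security/api_key`) accepts role descriptors that implement exactly this scoping. A minimal sketch of the request body, with an illustrative index pattern and TTL:

```python
import json

def ingest_only_key_request(index_pattern: str, ttl: str = "30d") -> str:
    """Build the JSON body for Elastic's Create API key endpoint
    (POST /_security/api_key): append-only access to one index pattern,
    no cluster privileges, and a hard expiration."""
    body = {
        "name": f"ingest-{index_pattern}",
        "expiration": ttl,                        # key expires on its own
        "role_descriptors": {
            "ingest_only": {
                "cluster": [],                    # no cluster privileges
                "indices": [{
                    "names": [index_pattern],
                    "privileges": ["create_doc"]  # write new docs only: no read, no delete
                }],
            }
        },
    }
    return json.dumps(body, indent=2)
```

A key minted this way, if stolen from a compromised host, cannot be used to read other tenants' indices or enumerate the cluster — the blast radius is one write-only index.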

Rethink your asset inventory with the attacker's query in mind. The "oooo" saved search filtered for domain controllers and high-server-count victim environments. Your most operationally critical assets are the same ones an attacker would filter for. Most organizations perform asset inventories from a compliance or vulnerability management perspective. Performing one from an attacker-triage perspective — asking "which hosts would score highest in an adversary's Kibana query" — produces a different and more useful priority list for hardening and monitoring investment.
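A toy version of that attacker-triage scoring exercise — the attribute names and weights are purely illustrative, loosely mirroring the filters reported in the campaign (domain controllers, large server footprints):

```python
def triage_score(host: dict) -> int:
    """Score a host the way an adversary's saved search would rank it.
    Weights are invented for illustration; tune against your estate."""
    score = 0
    if host.get("is_domain_controller"):
        score += 50
    if host.get("role") in {"sso", "database", "saas_api"}:
        score += 30
    score += min(host.get("servers_in_domain", 0), 20)  # bigger estates rank higher
    if host.get("internet_facing"):
        score += 10
    return score

def prioritize(inventory: list[dict]) -> list[dict]:
    """Rank the inventory by the attacker's likely priority, highest first."""
    return sorted(inventory, key=triage_score, reverse=True)
```

The output is not a compliance report; it is a hardening and monitoring priority list ordered by adversary interest.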

For cloud providers: behavioral onboarding analysis, not just email verification. The free trial abuse in this campaign cannot be solved by credit card verification alone — many legitimate researchers and small security teams use prepaid cards or privacy-protecting payment methods. But cloud providers can analyze behavioral signals during trial onboarding: Is the deployment created within minutes of account registration? Is the first activity an API-only data push from an external IP rather than a human browsing the interface? Does the ingested data follow a pattern consistent with structured system metadata rather than normal log ingestion? None of these signals is individually conclusive, but a combination can trigger enhanced review without blocking legitimate users. Elastic's ability to correlate the random-string email prefixes across multiple accounts and link them to Cloudflare Worker subdomains shows that this analysis is possible in retrospect — the question is whether it can happen in near-real-time during onboarding.

Invest in egress visibility, not just egress control. Blocking Elastic Cloud at the firewall would be counterproductive for any organization that uses it legitimately. But logging all egress to SaaS analytics endpoints, capturing the data volume and frequency of each connection, and feeding that into a behavioral baseline gives your security team the visibility to spot anomalous data flows. A small server pushing 50MB of structured JSON to an Elastic endpoint at 03:00 UTC on the day after a patch was publicly released is a different signal than your SOC analysts querying Kibana from corporate IPs during business hours. Volume, timing, and source process together form a detection surface that destination reputation alone cannot provide.
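A minimal sketch of that volume-plus-context check, assuming a per-host history of egress volumes to the same endpoint is available (the three-sigma threshold and off-hours window are illustrative choices):

```python
from statistics import mean, stdev

def egress_anomalous(history_mb: list[float], new_mb: float,
                     hour_utc: int, source_is_service: bool) -> bool:
    """Flag an egress event to a SaaS analytics endpoint only when its
    volume is an outlier for that host AND the context is wrong: an
    off-hours push from a service process rather than an analyst session."""
    if len(history_mb) < 5:
        return False                          # not enough baseline to judge
    mu, sigma = mean(history_mb), stdev(history_mb)
    volume_outlier = new_mb > mu + 3 * max(sigma, 1.0)
    off_hours = hour_utc < 6 or hour_utc > 22
    return volume_outlier and off_hours and source_is_service
```

Requiring all three conditions keeps the rule quiet for legitimate Kibana use while still firing on the 50MB-at-03:00 pattern described above.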

Treat QEMU as a high-risk process outside of known virtualization workloads. QEMU should not be running on servers that are not explicitly configured as virtualization hosts. Any instance of qemu-system-x86_64.exe or equivalent spawned by a scheduled task, a web application service, or a script on a server that has no documented virtualization role should trigger an immediate investigation. This is a narrow detection rule with a very low false positive rate in most environments and a potentially very high true positive rate given this campaign's documented tradecraft.
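As a sketch, the rule could be expressed like this — the host inventory and parent-process list are hypothetical placeholders for your own asset data:

```python
# Hypothetical inventory of hosts with a documented virtualization role.
VIRT_HOSTS = {"hv-cluster-01", "hv-cluster-02"}
QEMU_NAMES = {"qemu-system-x86_64.exe", "qemu-system-x86_64", "qemu-kvm"}
TASK_PARENTS = {"taskeng.exe", "svchost.exe", "schtasks.exe", "java.exe"}

def qemu_alert(hostname: str, process_name: str, parent: str):
    """Return an alert severity for a QEMU process, or None if the
    host has a documented virtualization role."""
    if process_name.lower() not in QEMU_NAMES:
        return None
    if hostname in VIRT_HOSTS:
        return None                 # expected workload on a hypervisor host
    if parent.lower() in TASK_PARENTS:
        return "critical"           # spawned by a task/service: matches this tradecraft
    return "high"                   # QEMU anywhere else unexpected still warrants review
```

Everything hinges on the inventory being accurate: a stale `VIRT_HOSTS` list turns the rule into either noise or a blind spot.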

Huntress confirmed they performed victim outreach and notification for organizations identified in the uncovered data, and coordinated with Elastic and law enforcement to investigate and dismantle the attacker's infrastructure. The attacker's cloud instance has since been taken offline. (Huntress, March 2026)

None of that answers the harder question: are the affected organizations actually clean? Taking down the Elastic instance removes the attacker's triage visibility, but it does not remove Zoho Assist agents registered to external accounts, QEMU processes running on scheduled tasks, Velociraptor services installed as Windows daemons, or Cloudflare tunnel configurations that persist independently of any central server. Each of those persistence mechanisms is self-contained and survives the loss of the command infrastructure. An organization notified by Huntress that their data appeared in the compromised index cannot simply rotate credentials and move on. They need to establish whether the QEMU backdoor was installed on their hosts specifically, audit every scheduled task and service for unexpected QEMU invocations, verify that no Zoho Assist or Velociraptor installation exists outside of authorized software inventories, and check for Cloudflare tunnel configurations that were not provisioned by their own teams. That is a non-trivial investigation, and organizations that treat the takedown notification as the end of the incident rather than the beginning of the response are likely still compromised.
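A post-takedown sweep of that kind can start from exported scheduled-task command lines and installed-software inventories. A minimal illustrative sketch, with the artifact names taken from the campaign described above and everything else hypothetical:

```python
# Binaries and software named in the campaign's persistence tradecraft.
SUSPECT_BINARIES = ("qemu-system", "velociraptor", "cloudflared")
SUSPECT_SOFTWARE = ("zoho assist", "velociraptor")

def residual_findings(task_cmdlines: list[str], installed: list[str],
                      authorized: set[str]) -> list[str]:
    """Return persistence artifacts that survive a C2 takedown: suspect
    binaries in scheduled tasks, plus remote-access software absent from
    the authorized inventory."""
    findings = []
    for cmd in task_cmdlines:
        if any(b in cmd.lower() for b in SUSPECT_BINARIES):
            findings.append(f"scheduled task: {cmd}")
    for app in installed:
        if any(s in app.lower() for s in SUSPECT_SOFTWARE) \
                and app.lower() not in authorized:
            findings.append(f"unauthorized software: {app}")
    return findings
```

This is triage, not proof of cleanliness — an empty result narrows the search, while any hit demands the full host-level investigation described above.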

The Detection Theory Gap: Why Behavior Beats Binaries

Every component of this campaign's toolset — QEMU, Velociraptor, Zoho Assist, Cloudflare tunnels, the Elasticsearch Bulk API, PowerShell, Windows BITS — is software that appears on legitimate software inventories in security-conscious organizations. None of it is malware by definition. None of it will trigger a hash-based detection rule. None of it shows up on a domain reputation blocklist. If your threat detection strategy is built around asking "is this a known-bad file or a known-bad destination," this entire campaign is invisible.

The detection theory that does work against this class of attack is fundamentally different, and it is worth being specific about what that means in practice. Intent-based detection asks not what a process is, but what it is doing in a context where it has no documented reason to do that thing. QEMU is not suspicious on a virtualization host. QEMU is highly suspicious when it is spawned by a Windows scheduled task on a server whose only documented role is running SolarWinds Web Help Desk. Zoho Assist is not suspicious when installed by an IT administrator. Zoho Assist is highly suspicious when it is installed silently by java.exe with no user interaction, registered to a Proton Mail account, and configured for unattended access on a server that receives no routine remote administration.

The challenge is that intent-based detection requires behavioral baselines, and behavioral baselines require time, consistency, and investment. You cannot write an intent-based detection rule for a system you have never instrumented. You cannot baseline normal behavior on a host you have never enrolled in an endpoint telemetry platform. Many organizations have critical, internet-facing servers that are not fully enrolled in their endpoint detection and response (EDR) deployment because the agent slows down the application, or because the server predates the EDR rollout, or because the licensing is scoped to endpoints rather than servers. SolarWinds WHD is precisely the kind of server that falls into those gaps — it runs as a service, it predates many EDR deployments, and its Java runtime creates unusual parent-child process relationships that generate noise even under normal operation, creating an incentive to exclude it from aggressive monitoring.

That exclusion is the opportunity. An attacker who understands that monitoring coverage is uneven will deliberately route their activity through the least-monitored process on the least-monitored host. In this campaign, that process was java.exe on a WHD server. The detection gap was not a flaw in the monitoring platform — it was a configuration decision made for operational reasons that the attacker's tradecraft was specifically designed to exploit.

Closing this gap does not require replacing your monitoring platform. It requires auditing your EDR coverage to identify which internet-facing servers are excluded or operating in detect-only mode, establishing explicit process behavior baselines for each of those servers, and writing detection rules that fire not on what the process is but on what parent-child relationships it is forming, what network destinations it is calling, and at what times. The combination of java.exe spawning powershell.exe that calls an external HTTPS endpoint with an encoded command is a sufficiently narrow and specific behavioral signature that it can serve as an automated escalation trigger with a very low false positive rate in any environment where that pattern has been baselined as abnormal. It does not require AI. It requires discipline in instrumentation and specificity in rule authorship.
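That chain can be sketched as a standalone check. Note that PowerShell's `-EncodedCommand` payload is base64 over UTF-16LE, so the HTTPS reference must be searched in the decoded script, not the raw command line; the field names here are generic process-telemetry placeholders, not a specific EDR schema:

```python
import base64
import re

# Match -e / -en / -enc / -encodedcommand followed by a base64 payload.
ENC_FLAG = re.compile(r"-(?:e|en|enc|encodedcommand)\s+([A-Za-z0-9+/=]+)",
                      re.IGNORECASE)

def decode_encoded_command(cmdline: str) -> str:
    """Extract and decode a PowerShell -EncodedCommand payload
    (base64 over UTF-16LE); return "" if none is present or decodable."""
    m = ENC_FLAG.search(cmdline)
    if not m:
        return ""
    try:
        return base64.b64decode(m.group(1)).decode("utf-16-le", errors="ignore")
    except Exception:
        return ""

def chain_matches(parent: str, child: str, child_cmdline: str) -> bool:
    """java.exe spawning powershell.exe whose encoded command references
    an external HTTPS endpoint — the narrow signature described above."""
    if parent.lower() != "java.exe":
        return False
    if child.lower() not in {"powershell.exe", "pwsh.exe"}:
        return False
    return "https://" in decode_encoded_command(child_cmdline).lower()
```

In an environment where this parent-child pattern has been baselined as abnormal, a match is specific enough to serve as an automated escalation trigger rather than a hunting lead.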

There is also a personnel dimension that goes largely undiscussed. Behavioral detection rules require someone to write them, test them against historical telemetry, tune them against false positive rates, and maintain them as the environment changes. That work is different from the work of responding to alerts. Many security teams are structured to do the latter well and the former poorly, because alert response is reactive and measurable while rule development is proactive and produces no immediate visible output. The result is that detection coverage drifts over time: new systems are added, process baselines change, and rules written against an environment from two years ago fire inconsistently against the environment as it exists today. The attacker who spent four months in the wild before Huntress identified this campaign benefited substantially from that drift.

Key Takeaways

  1. Patch SolarWinds Web Help Desk immediately: Update to version 2026.1 or later. CVE-2025-26399 and CVE-2025-40551 are both confirmed under active exploitation. CVE-2025-40536 (a security control bypass) and additional critical flaws CVE-2025-40552, CVE-2025-40553, and CVE-2025-40554 are all addressed in version 2026.1. CISA set a federal remediation due date of March 12, 2026 for CVE-2025-26399. WHD administrative interfaces must not be publicly accessible — place them behind a VPN or firewall and remove direct internet access to admin paths. If you cannot patch immediately, blocking inbound access to the /helpdesk/WebObjects/ path at the perimeter reduces your exposure surface while you remediate.
  2. Hunt for QEMU processes outside of documented virtualization hosts: Any instance of qemu-system-x86_64.exe (or Linux equivalent) spawned by a scheduled task, web service, or script on a non-virtualization server is a strong indicator of compromise. This persistence mechanism is unlikely to trigger traditional malware detections because QEMU is a legitimate tool. Make it part of your weekly threat hunting baseline.
  3. Monitor outbound traffic to legitimate cloud services with behavioral context: Traditional blocklisting will not catch exfiltration to trusted platforms like Elastic Cloud. Implement behavioral analytics and Data Loss Prevention (DLP) controls that detect anomalous data flows based on volume, timing, source process, and frequency rather than destination reputation alone. A service process pushing structured JSON to a SIEM API at 03:00 UTC is categorically different from an analyst querying Kibana at 14:00 local time.
  4. Hunt for unauthorized remote access tools and hardcoded API credentials: This campaign deployed Zoho Assist configured for unattended access, Velociraptor, and Cloudflare tunnels. Audit for unexpected remote management software — specifically Zoho Assist instances registered to external (non-corporate) email accounts — silent MSI installations spawned by service processes, and encoded PowerShell containing SIEM API keys or bearer tokens. Sigma rules for this activity are available from Huntress's Threat Intel GitHub repository.
  5. Rotate credentials after any WHD compromise and narrow API key scope: Reset passwords for all service accounts, administrator accounts, and credentials stored within or accessible through the Web Help Desk application. Review all API keys issued by your own SaaS platforms and ensure they are scoped to the minimum necessary permissions with explicit expiration dates. A hardcoded API key with broad index access is an adversary's second-stage foothold after initial compromise.
  6. Perform an attacker-triage review of your own asset inventory: Identify which of your systems would score highest in a threat actor's Kibana query — domain controllers, single sign-on infrastructure, database servers, SaaS platforms with broad API access. Apply elevated monitoring, tighter egress controls, and shorter patch windows to those assets regardless of whether they are internet-facing.
  7. Cloud providers must advance behavioral onboarding analysis: Email verification is not sufficient as a sole control against free trial abuse. Providers should analyze behavioral signals during onboarding — time-to-first-API-call, data ingestion pattern, source IP diversity — to detect trial accounts being used as attacker infrastructure rather than for evaluation. The infrastructure linkage discovered in this case (email prefixes matching Cloudflare Worker subdomains) shows that cross-service correlation is possible; the challenge is making it happen before data from 34 organizations has already been ingested.

This campaign is a reminder that the tools built to defend organizations can be turned against them with alarming efficiency. When an attacker can register a free SIEM trial with a disposable email, push stolen data to it via legitimate API calls, and use its built-in analytics to triage victims — all while routing their own access through a privacy network and maintaining persistence via a hypervisor-based SSH backdoor — the traditional model of perimeter-based detection fails completely. Defenders need to assume that trusted services can be weaponized, that outbound traffic to reputable domains can be malicious, that legitimate administrative tools can be attacker infrastructure, and that the absence of traditional malware on a compromised host does not mean the attacker is absent. The playbook this actor used was assembled entirely from legitimate software. Defeating it requires detection strategies built around behavior, not binaries.

Sources: Huntress (Part 2, March 2026) | Huntress (SolarWinds WHD, February 2026) | Microsoft Security Blog | Infosecurity Magazine | SC Media | Elastic Security Labs | The Hacker News | The Hacker News (WHD CVEs) | The Stack | CIRCL CVE-2025-26399 | Lumen Black Lotus Labs | watchTowr Labs | Horizon3.ai
