Tampering with RUNNER_TRACKING_ID in GitHub Actions Runners
This rule detects processes spawned by GitHub Actions runners where "RUNNER_TRACKING_ID" is overridden from its default "github_*" value. Such tampering has been associated with attempts to evade runner tracking/cleanup on self-hosted runners, including behavior observed in the Shai-Hulud 2.0 npm worm campaign.
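As a rough, hypothetical re-implementation of the condition the rule checks (not how Elastic Defend actually evaluates events), the match over a process's captured environment variables can be sketched in Python. The `is_tampered` helper and the sample values are illustrative only:

```python
from fnmatch import fnmatchcase

def is_tampered(parent_name: str, env_vars: list[str]) -> bool:
    """Illustrative version of the rule's core condition: a child of
    Runner.Worker/Runner.Listener whose RUNNER_TRACKING_ID is present
    but does not carry the default "github_" prefix."""
    if parent_name not in ("Runner.Worker", "Runner.Listener"):
        return False
    # EQL's `like~` is a case-insensitive wildcard match; lowercase both
    # sides and use fnmatchcase to approximate that behavior.
    has_id = any(fnmatchcase(v.lower(), "runner_tracking_id*") for v in env_vars)
    is_default = any(
        fnmatchcase(v.lower(), "runner_tracking_id=github_*") for v in env_vars
    )
    return has_id and not is_default

# A benign job keeps the github_-prefixed value; a tampered one does not.
print(is_tampered("Runner.Worker", ["RUNNER_TRACKING_ID=github_8f4e"]))  # → False
print(is_tampered("Runner.Worker", ["RUNNER_TRACKING_ID=evade"]))        # → True
```

Note that processes whose captured environment lacks RUNNER_TRACKING_ID entirely do not match, mirroring the rule's requirement that the variable be present but overridden.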
Elastic rule
[metadata]
creation_date = "2025/11/27"
integration = ["endpoint"]
maturity = "production"
updated_date = "2025/11/27"

[rule]
author = ["Elastic"]
description = """
This rule detects processes spawned by GitHub Actions runners where "RUNNER_TRACKING_ID" is overridden from its
default "github_*" value. Such tampering has been associated with attempts to evade runner tracking/cleanup on
self-hosted runners, including behavior observed in the Shai-Hulud 2.0 npm worm campaign.
"""
from = "now-9m"
index = ["logs-endpoint.events.process*"]
language = "eql"
license = "Elastic License v2"
name = "Tampering with RUNNER_TRACKING_ID in GitHub Actions Runners"
note = """## Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Tampering with RUNNER_TRACKING_ID in GitHub Actions Runners

This rule surfaces processes launched by GitHub Actions runners where RUNNER_TRACKING_ID is deliberately set to a non-default value. Attackers do this to break runner job tracking and cleanup on self-hosted runners, enabling long-lived or hidden workloads. A common pattern is a workflow step that exports a custom RUNNER_TRACKING_ID and then spawns bash or node to fetch and execute a script via curl|bash or npm install scripts, keeping the process alive after the job finishes to run cryptomining or exfiltration tasks.

### Possible investigation steps

- Correlate the event to its GitHub Actions run/job and workflow YAML, identify the repository and actor (commit/PR), and verify whether RUNNER_TRACKING_ID was explicitly set in the workflow or injected by a step script.
- On the runner host, determine whether the spawned process persisted beyond job completion by checking for orphaning or reparenting to PID 1, sustained CPU/memory usage, and timestamps relative to the runner process exit.
- Review nearby telemetry for fetch-and-execute patterns (curl|bash, wget, node/npm lifecycle scripts), unexpected file writes under /tmp or actions-runner/_work, and outbound connections to non-GitHub endpoints.
- Enumerate persistence artifacts created during the run, including crontab entries, systemd unit files, pm2 or nohup sessions, and changes to authorized_keys or rc.local, and tie them back to the suspicious process.
- Assess the blast radius by listing secrets and tokens available to the job, checking audit logs for their subsequent use from the runner IP or unusual repositories, and decide whether to revoke or rotate credentials.

### False positive analysis

- A self-hosted runner bootstrap script or base image intentionally sets a fixed RUNNER_TRACKING_ID for internal log correlation or debugging, causing all runner-spawned processes to inherit a non-github_* value.
- A composite action or reusable workflow accidentally overrides RUNNER_TRACKING_ID through env mapping or variable expansion (for example, templating it from the run ID), resulting in benign non-default values during standard jobs.

### Response and remediation

- Quarantine the self-hosted runner by stopping Runner.Listener, removing the runner from the repository/organization, and terminating any Runner.Worker children or orphaned processes (reparented to PID 1) that carry a non-default RUNNER_TRACKING_ID.
- Purge persistence by removing artifacts created during the run, including systemd unit files under /etc/systemd/system, crontab entries in /var/spool/cron, pm2/nohup sessions, edits to ~/.ssh/authorized_keys or /etc/rc.local, and files under /tmp and actions-runner/_work linked to the tampered process.
- Revoke and rotate credentials exposed to the job (GITHUB_TOKEN, personal access tokens, cloud keys), delete leftover containers and caches in actions-runner/_work, invalidate the runner registration, and redeploy the runner from a clean, patched image.
- Escalate to incident response if you observe outbound connections to non-GitHub endpoints, processes persisting after job completion, modifications to ~/.ssh/authorized_keys or /etc/systemd/system, or repeated RUNNER_TRACKING_ID tampering across runners or repositories.
- Harden by restricting self-hosted runners to trusted repositories and actors, enforcing ephemeral per-job runners with egress allowlisting to github.com, setting strict job timeouts, and adding a workflow guard step that exits if RUNNER_TRACKING_ID does not start with github_."""
references = [
    "https://www.elastic.co/blog/shai-hulud-worm-npm-supply-chain-compromise",
    "https://socket.dev/blog/shai-hulud-strikes-again-v2",
    "https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack",
    "https://www.praetorian.com/blog/self-hosted-github-runners-are-backdoors/",
]
risk_score = 47
rule_id = "df0553c8-2296-45ef-b4dc-3b88c4c130a7"
setup = """## Setup

This rule requires data coming in from Elastic Defend.

### Elastic Defend Integration Setup
Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app.

#### Prerequisite Requirements:
- Fleet is required for Elastic Defend.
- To configure Fleet Server refer to the [documentation](https://www.elastic.co/guide/en/fleet/current/fleet-server.html).

#### The following steps should be executed in order to add the Elastic Defend integration on a Linux System:
- Go to the Kibana home page and click "Add integrations".
- In the query bar, search for "Elastic Defend" and select the integration to see more details about it.
- Click "Add Elastic Defend".
- Configure the integration name and optionally add a description.
- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads".
- Select a configuration preset. Each preset comes with different default settings for Elastic Agent; you can further customize these later by configuring the Elastic Defend integration policy. [Helper guide](https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html).
- We suggest selecting the "Complete EDR (Endpoint Detection and Response)" preset, as it provides "All events; all preventions".
- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead.
For more details on Elastic Agent configuration settings, refer to the [helper guide](https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html).
- Click "Save and Continue".
- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts.
For more details on Elastic Defend refer to the [helper guide](https://www.elastic.co/guide/en/security/current/install-endpoint.html).

The Elastic Defend integration does not capture environment variables by default.
To capture this data, this rule requires a specific configuration option set within the advanced settings of the Elastic Defend integration.

#### To set up environment variable capture for an Elastic Agent policy:
- Go to "Security → Manage → Policies".
- Select an "Elastic Agent policy".
- Click "Show advanced settings".
- Scroll down or search for "linux.advanced.capture_env_vars".
- Enter the names of the environment variables you want to capture, separated by commas.
- For Linux, this rule requires the linux.advanced.capture_env_vars variable to be set to "RUNNER_TRACKING_ID".
- For macOS, this rule requires the macos.advanced.capture_env_vars variable to be set to "RUNNER_TRACKING_ID".
- Click "Save".
After saving the integration change, the Elastic Agents running this policy will be updated and the rule will function properly.
For more information on capturing environment variables refer to the [helper guide](https://www.elastic.co/guide/en/security/current/environment-variable-capture.html).
"""
severity = "medium"
tags = [
    "Domain: Endpoint",
    "OS: Linux",
    "OS: macOS",
    "Use Case: Threat Detection",
    "Tactic: Execution",
    "Tactic: Initial Access",
    "Tactic: Defense Evasion",
    "Data Source: Elastic Defend",
    "Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
query = '''
process where host.os.type in ("linux", "macos") and event.type == "start" and event.action == "exec" and
process.parent.name in ("Runner.Worker", "Runner.Listener") and process.env_vars like~ "RUNNER_TRACKING_ID*" and
not process.env_vars like~ "RUNNER_TRACKING_ID=github_*"
'''

[[rule.threat]]
framework = "MITRE ATT&CK"

  [rule.threat.tactic]
  name = "Execution"
  id = "TA0002"
  reference = "https://attack.mitre.org/tactics/TA0002/"

  [[rule.threat.technique]]
  id = "T1059"
  name = "Command and Scripting Interpreter"
  reference = "https://attack.mitre.org/techniques/T1059/"

[[rule.threat]]
framework = "MITRE ATT&CK"

  [rule.threat.tactic]
  name = "Initial Access"
  id = "TA0001"
  reference = "https://attack.mitre.org/tactics/TA0001/"

  [[rule.threat.technique]]
  name = "Supply Chain Compromise"
  id = "T1195"
  reference = "https://attack.mitre.org/techniques/T1195/"

    [[rule.threat.technique.subtechnique]]
    name = "Compromise Software Dependencies and Development Tools"
    id = "T1195.001"
    reference = "https://attack.mitre.org/techniques/T1195/001/"

[[rule.threat]]
framework = "MITRE ATT&CK"

  [rule.threat.tactic]
  name = "Defense Evasion"
  id = "TA0005"
  reference = "https://attack.mitre.org/tactics/TA0005/"

  [[rule.threat.technique]]
  name = "Impair Defenses"
  id = "T1562"
  reference = "https://attack.mitre.org/techniques/T1562/"

    [[rule.threat.technique.subtechnique]]
    name = "Disable or Modify Tools"
    id = "T1562.001"
    reference = "https://attack.mitre.org/techniques/T1562/001/"
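The rule's hardening guidance suggests a workflow guard step that fails the job when RUNNER_TRACKING_ID loses its default prefix. A minimal sketch of such a step follows; the use of Python and the `check_tracking_id` helper name are illustrative, not part of the rule:

```python
import os
import sys

def check_tracking_id(environ=None) -> int:
    """Return a nonzero exit code when RUNNER_TRACKING_ID is missing or
    does not start with the default "github_" prefix."""
    environ = os.environ if environ is None else environ
    tracking_id = environ.get("RUNNER_TRACKING_ID", "")
    if not tracking_id.startswith("github_"):
        print("RUNNER_TRACKING_ID tampered or unset; failing the job",
              file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    # Run as an early workflow step so a tampered runner fails the job fast.
    sys.exit(check_tracking_id())
```

Such a guard complements, rather than replaces, the endpoint detection: it catches tampering only in jobs that run the step, while the EQL rule observes every process the runner spawns.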
Related rules
- Privileged Container Creation with Host Directory Mount
- Curl or Wget Egress Network Connection via LoLBin
- Proxy Shell Execution via Busybox
- Potential Git CVE-2025-48384 Exploitation
- Elastic Agent Service Terminated