Sensitive Files Compression
Identifies the use of a compression utility to collect known files containing sensitive information, such as credentials and system configurations.
Elastic rule (View on GitHub)
[metadata]
creation_date = "2020/12/22"
integration = ["endpoint"]
maturity = "production"
updated_date = "2025/01/15"

[rule]
author = ["Elastic"]
description = """
Identifies the use of a compression utility to collect known files containing sensitive information, such as credentials
and system configurations.
"""
from = "now-9m"
index = ["auditbeat-*", "logs-endpoint.events.*", "endgame-*"]
language = "kuery"
license = "Elastic License v2"
name = "Sensitive Files Compression"
references = [
    "https://www.trendmicro.com/en_ca/research/20/l/teamtnt-now-deploying-ddos-capable-irc-bot-tntbotinger.html",
]
risk_score = 47
rule_id = "6b84d470-9036-4cc0-a27c-6d90bbfe81ab"
setup = """## Setup

This rule requires data coming in from one of the following integrations:
- Elastic Defend
- Auditbeat


### Elastic Defend Integration Setup
Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows
the Elastic Agent to monitor events on your host and send data to the Elastic Security app.

#### Prerequisite Requirements:
- Fleet is required for Elastic Defend.
- To configure Fleet Server refer to the [documentation](https://www.elastic.co/guide/en/fleet/current/fleet-server.html).

#### The following steps should be executed in order to add the Elastic Defend integration on a Linux System:
- Go to the Kibana home page and click "Add integrations".
- In the query bar, search for "Elastic Defend" and select the integration to see more details about it.
- Click "Add Elastic Defend".
- Configure the integration name and optionally add a description.
- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads".
- Select a configuration preset. Each preset comes with different default settings for Elastic Agent; you can further customize these later by configuring the Elastic Defend integration policy. [Helper guide](https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html).
- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as the configuration preset, which provides "All events; all preventions".
- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead.
For more details on Elastic Agent configuration settings, refer to the [helper guide](https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html).
- Click "Save and Continue".
- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts.
For more details on Elastic Defend refer to the [helper guide](https://www.elastic.co/guide/en/security/current/install-endpoint.html).

### Auditbeat Setup
Auditbeat is a lightweight shipper that you can install on your servers to audit the activities of users and processes on your systems. For example, you can use Auditbeat to collect and centralize audit events from the Linux Audit Framework. You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations.
#### The following steps should be executed in order to add Auditbeat on a Linux System:
- Elastic provides repositories available for APT and YUM-based distributions. Note that we provide binary packages, but no source packages.
- To install the APT and YUM repositories follow the setup instructions in this [helper guide](https://www.elastic.co/guide/en/beats/auditbeat/current/setup-repositories.html).
- To run Auditbeat on Docker follow the setup instructions in the [helper guide](https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-docker.html).
- To run Auditbeat on Kubernetes follow the setup instructions in the [helper guide](https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-kubernetes.html).
- For complete "Setup and Run Auditbeat" information refer to the [helper guide](https://www.elastic.co/guide/en/beats/auditbeat/current/setting-up-and-running.html).
"""
severity = "medium"
tags = [
    "Domain: Endpoint",
    "OS: Linux",
    "Use Case: Threat Detection",
    "Tactic: Collection",
    "Tactic: Credential Access",
    "Data Source: Elastic Endgame",
    "Data Source: Elastic Defend",
    "Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "new_terms"

query = '''
event.category:process and host.os.type:linux and event.type:start and
  process.name:(zip or tar or gzip or hdiutil or 7z) and
  process.args:
    (
      /root/.ssh/id_rsa or
      /root/.ssh/id_rsa.pub or
      /root/.ssh/id_ed25519 or
      /root/.ssh/id_ed25519.pub or
      /root/.ssh/authorized_keys or
      /root/.ssh/authorized_keys2 or
      /root/.ssh/known_hosts or
      /root/.bash_history or
      /etc/hosts or
      /home/*/.ssh/id_rsa or
      /home/*/.ssh/id_rsa.pub or
      /home/*/.ssh/id_ed25519 or
      /home/*/.ssh/id_ed25519.pub or
      /home/*/.ssh/authorized_keys or
      /home/*/.ssh/authorized_keys2 or
      /home/*/.ssh/known_hosts or
      /home/*/.bash_history or
      /root/.aws/credentials or
      /root/.aws/config or
      /home/*/.aws/credentials or
      /home/*/.aws/config or
      /root/.docker/config.json or
      /home/*/.docker/config.json or
      /etc/group or
      /etc/passwd or
      /etc/shadow or
      /etc/gshadow
    )
'''
110note = """## Triage and analysis
111
112> **Disclaimer**:
113> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
114
115### Investigating Sensitive Files Compression
116
117Compression utilities like zip, tar, and gzip are essential for efficiently managing and transferring files. However, adversaries can exploit these tools to compress and exfiltrate sensitive data, such as SSH keys and configuration files. The detection rule identifies suspicious compression activities by monitoring process executions involving these utilities and targeting known sensitive file paths, thereby flagging potential data collection and credential access attempts.
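
For example, a matching event would satisfy a simplified KQL fragment like the one below; the utility and file paths here are illustrative picks from the rule's watchlist, not an exhaustive signature:

```
event.category:process and event.type:start and host.os.type:linux and
  process.name:tar and process.args:(/root/.ssh/id_rsa or /etc/shadow)
```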

### Possible investigation steps

- Review the process execution details to identify the user account associated with the compression activity, focusing on the process.name and process.args fields (a pivot-query sketch follows this list).
- Examine the command line arguments (process.args) to determine which specific sensitive files were targeted for compression.
- Check the event.timestamp to establish a timeline and correlate with other potentially suspicious activities on the host.
- Investigate the host's recent login history and user activity to identify any unauthorized access attempts or anomalies.
- Analyze network logs for any outbound connections from the host around the time of the event to detect potential data exfiltration attempts.
- Assess the integrity and permissions of the sensitive files involved to determine if they have been altered or accessed inappropriately.
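
The sketch below is one way to pivot in Discover or Timeline and review all compression activity on the affected host; `<alert-host-id>` is a placeholder for the host.id value carried by the alert:

```
event.category:process and event.type:start and host.os.type:linux and
  host.id:"<alert-host-id>" and process.name:(zip or tar or gzip or hdiutil or 7z)
```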

### False positive analysis

- Routine system backups or administrative tasks may trigger the rule if they involve compressing sensitive files for legitimate purposes. Users can create exceptions for known backup scripts or administrative processes by excluding specific process names or command-line arguments associated with these tasks (see the exception sketch after this list).
- Developers or system administrators might compress configuration files during development or deployment processes. To handle this, users can whitelist specific user accounts or directories commonly used for development activities, ensuring these actions are not flagged as suspicious.
- Automated scripts or cron jobs that regularly archive logs or configuration files could be mistakenly identified as threats. Users should review and exclude these scheduled tasks by identifying their unique process identifiers or execution patterns.
- Security tools or monitoring solutions that periodically compress and transfer logs for analysis might be misinterpreted as malicious. Users can exclude these tools by specifying their process names or paths in the detection rule exceptions.
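
As a sketch of such an exclusion, the clause below would filter out archives created by a known backup job when added as a rule exception; the parent path /usr/local/bin/nightly-backup.sh is a hypothetical example, not a recommendation:

```
not process.parent.executable:"/usr/local/bin/nightly-backup.sh"
```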

### Response and remediation

- Immediately isolate the affected system from the network to prevent further data exfiltration and unauthorized access.
- Terminate any suspicious processes identified by the detection rule to halt ongoing compression and potential data exfiltration activities.
- Conduct a thorough review of the compressed files and their contents to assess the extent of sensitive data exposure and determine if any data has been exfiltrated.
- Change all credentials associated with the compromised files, such as SSH keys and AWS credentials, to prevent unauthorized access using stolen credentials.
- Restore any altered or deleted configuration files from a known good backup to ensure system integrity and functionality.
- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to determine if additional systems are affected.
- Implement enhanced monitoring and logging for compression utilities and sensitive file access to detect and respond to similar threats more effectively in the future."""


[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1552"
name = "Unsecured Credentials"
reference = "https://attack.mitre.org/techniques/T1552/"
[[rule.threat.technique.subtechnique]]
id = "T1552.001"
name = "Credentials In Files"
reference = "https://attack.mitre.org/techniques/T1552/001/"


[rule.threat.tactic]
id = "TA0006"
name = "Credential Access"
reference = "https://attack.mitre.org/tactics/TA0006/"
[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1560"
name = "Archive Collected Data"
reference = "https://attack.mitre.org/techniques/T1560/"
[[rule.threat.technique.subtechnique]]
id = "T1560.001"
name = "Archive via Utility"
reference = "https://attack.mitre.org/techniques/T1560/001/"


[rule.threat.tactic]
id = "TA0009"
name = "Collection"
reference = "https://attack.mitre.org/tactics/TA0009/"

[rule.new_terms]
field = "new_terms_fields"
value = ["host.id", "process.command_line", "process.parent.executable"]
[[rule.new_terms.history_window_start]]
field = "history_window_start"
value = "now-10d"
References
- https://www.trendmicro.com/en_ca/research/20/l/teamtnt-now-deploying-ddos-capable-irc-bot-tntbotinger.html
Related rules
- Linux Clipboard Activity Detected
- Linux Process Hooking via GDB
- Linux init (PID 1) Secret Dump via GDB
- Modification of OpenSSH Binaries
- Pluggable Authentication Module (PAM) Creation in Unusual Directory