Sensitive Files Compression Inside A Container
Identifies the use of a compression utility to collect known files containing sensitive information, such as credentials and system configurations inside a container.
Elastic rule (View on GitHub)
[metadata]
creation_date = "2025/03/12"
integration = ["endpoint"]
maturity = "production"
updated_date = "2025/03/12"

[rule]
author = ["Elastic"]
description = """
Identifies the use of a compression utility to collect known files containing sensitive information, such as credentials
and system configurations inside a container.
"""
from = "now-9m"
index = ["logs-endpoint.events.process*"]
language = "eql"
license = "Elastic License v2"
name = "Sensitive Files Compression Inside A Container"
risk_score = 47
rule_id = "d9faf1ba-a216-4c29-b8e0-a05a9d14b027"
setup = """## Setup

This rule requires data coming in from Elastic Defend.

### Elastic Defend Integration Setup
Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app.

#### Prerequisite Requirements:
- Fleet is required for Elastic Defend.
- To configure Fleet Server refer to the [documentation](https://www.elastic.co/guide/en/fleet/current/fleet-server.html).

#### The following steps should be executed in order to add the Elastic Defend integration on a Linux System:
- Go to the Kibana home page and click "Add integrations".
- In the query bar, search for "Elastic Defend" and select the integration to see more details about it.
- Click "Add Elastic Defend".
- Configure the integration name and optionally add a description.
- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads".
- Select a configuration preset. Each preset comes with different default settings for Elastic Agent; you can further customize these later by configuring the Elastic Defend integration policy. [Helper guide](https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html).
- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as the configuration preset, which provides "All events; all preventions".
- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead.

For more details on Elastic Agent configuration settings, refer to the [helper guide](https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html).
- Click "Save and Continue".
- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts.
For more details on Elastic Defend refer to the [helper guide](https://www.elastic.co/guide/en/security/current/install-endpoint.html).
"""
severity = "medium"
tags = [
    "Domain: Container",
    "OS: Linux",
    "Use Case: Threat Detection",
    "Tactic: Credential Access",
    "Tactic: Collection",
    "Data Source: Elastic Defend",
    "Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "eql"
query = '''
process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and
process.entry_leader.entry_meta.type == "container" and process.name in ("zip", "tar", "gzip", "hdiutil", "7z") and
process.command_line like~ (
  "*/root/.ssh/*", "*/home/*/.ssh/*", "*/root/.bash_history*", "*/etc/hosts*", "*/root/.aws/*", "*/home/*/.aws/*",
  "*/root/.docker/*", "*/home/*/.docker/*", "*/etc/group*", "*/etc/passwd*", "*/etc/shadow*", "*/etc/gshadow*"
)
'''
note = """### Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Sensitive Files Compression Inside A Container

Containers are lightweight, portable environments used to run applications consistently across different systems. Adversaries may exploit compression utilities within containers to gather and exfiltrate sensitive files, such as credentials and configuration files. The detection rule identifies suspicious compression activities by monitoring for specific utilities and file paths, flagging potential unauthorized data collection attempts.

### Possible investigation steps

- Review the process details to confirm the use of compression utilities such as zip, tar, gzip, hdiutil, or 7z within the container environment, focusing on the process.name and process.command_line fields.
- Examine the file paths referenced in process.command_line to determine whether they include sensitive files such as SSH keys, AWS credentials, or Docker configurations, which could indicate unauthorized data collection.
- Check the event.type field for "start" to verify the timing of the process initiation and correlate it with any known legitimate activities or scheduled tasks within the container.
- Investigate the user or service account under which the process was executed to assess whether it has the necessary permissions and if the activity aligns with expected behavior for that account.
- Look for any related alerts or logs that might indicate a broader pattern of suspicious activity within the same container or across other containers in the environment.

### False positive analysis

- Routine backup operations may trigger the rule if they involve compressing sensitive files for storage. To handle this, identify and exclude backup processes or scripts that are known and trusted.
- Automated configuration management tools might compress configuration files as part of their normal operation. Exclude these tools by specifying their process names or paths in the exception list.
- Developers or system administrators might compress sensitive files during legitimate troubleshooting or maintenance activities. Establish a process to log and review these activities, and exclude them if they are verified as non-threatening.
- Continuous integration and deployment pipelines could involve compressing configuration files for deployment purposes. Identify these pipelines and exclude their associated processes to prevent false positives.
- Security tools that perform regular audits or scans might compress files for analysis. Ensure these tools are recognized and excluded from triggering the rule.

### Response and remediation

- Immediately isolate the affected container to prevent further data exfiltration or unauthorized access. This can be done by stopping the container or disconnecting it from the network.
- Conduct a thorough review of the compressed files and their contents to assess the extent of sensitive data exposure. Focus on the specific file paths identified in the alert.
- Change credentials and keys that may have been compromised, including SSH keys, AWS credentials, and Docker configurations. Ensure that new credentials are distributed securely.
- Review and update access controls and permissions for sensitive files within containers to minimize exposure. Ensure that only necessary processes and users have access to these files.
- Implement monitoring and alerting for similar compression activities in other containers to detect potential threats early. Use the identified process names and arguments as indicators.
- Escalate the incident to the security operations team for further investigation and to determine if additional systems or data have been affected.
- Conduct a post-incident review to identify gaps in security controls and update container security policies to prevent recurrence."""

[[rule.threat]]
framework = "MITRE ATT&CK"

[[rule.threat.technique]]
id = "T1552"
name = "Unsecured Credentials"
reference = "https://attack.mitre.org/techniques/T1552/"

[[rule.threat.technique.subtechnique]]
id = "T1552.001"
name = "Credentials In Files"
reference = "https://attack.mitre.org/techniques/T1552/001/"

[rule.threat.tactic]
id = "TA0006"
name = "Credential Access"
reference = "https://attack.mitre.org/tactics/TA0006/"

[[rule.threat]]
framework = "MITRE ATT&CK"

[[rule.threat.technique]]
id = "T1560"
name = "Archive Collected Data"
reference = "https://attack.mitre.org/techniques/T1560/"

[[rule.threat.technique.subtechnique]]
id = "T1560.001"
name = "Archive via Utility"
reference = "https://attack.mitre.org/techniques/T1560/001/"

[rule.threat.tactic]
id = "TA0009"
name = "Collection"
reference = "https://attack.mitre.org/tactics/TA0009/"
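As suggested in the investigation steps above, one way to look for a broader pattern of activity is to pivot on the container's entry leader and review every process started inside the same container around the time of the alert. The EQL below is a minimal hunt sketch, assuming the process.entry_leader.entity_id field is populated by Elastic Defend; the entity ID value is a placeholder you would copy from the alert under investigation.

// Hunt sketch (hypothetical): list process executions in the same container as the alert.
// Replace the placeholder with process.entry_leader.entity_id taken from the alert document.
process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and
  process.entry_leader.entry_meta.type == "container" and
  process.entry_leader.entity_id == "PLACEHOLDER_ENTRY_LEADER_ENTITY_ID"

Running this against the same logs-endpoint.events.process* indices helps show whether the compression was an isolated action or part of a longer sequence, such as credential searches followed by archiving.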
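If the false positive analysis above identifies trusted backup, configuration-management, or CI processes, the rule logic can be narrowed so those parents no longer alert. The sketch below illustrates the idea using hypothetical parent process names; in practice, the same outcome is usually achieved with rule exceptions in Kibana rather than by editing the query itself.

// Tuning sketch (hypothetical): same logic as the rule, with vetted parent processes excluded.
// The parent names below are placeholders; substitute the utilities trusted in your environment.
process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and
  process.entry_leader.entry_meta.type == "container" and
  process.name in ("zip", "tar", "gzip", "hdiutil", "7z") and
  process.command_line like~ (
    "*/root/.ssh/*", "*/home/*/.ssh/*", "*/root/.bash_history*", "*/etc/hosts*", "*/root/.aws/*", "*/home/*/.aws/*",
    "*/root/.docker/*", "*/home/*/.docker/*", "*/etc/group*", "*/etc/passwd*", "*/etc/shadow*", "*/etc/gshadow*"
  ) and
  not process.parent.name in ("example-backup-agent", "example-config-manager", "example-ci-runner")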
Related rules
- AWS Credentials Searched For Inside A Container
- Sensitive Keys Or Passwords Searched For Inside A Container
- Deprecated - Sensitive Files Compression Inside A Container
- Sensitive Files Compression
- Container Management Utility Run Inside A Container