Anomalous Process For a Linux Population

Searches for rare processes running on multiple Linux hosts in an entire fleet or network. This reduces the detection of false positives since automated maintenance processes usually only run occasionally on a single machine but are common to all or many hosts in a fleet.

Elastic rule (View on GitHub)

[metadata]
creation_date = "2020/03/25"
integration = ["auditd_manager", "endpoint"]
maturity = "production"
updated_date = "2024/06/18"

[rule]
anomaly_threshold = 50
author = ["Elastic"]
description = """
Searches for rare processes running on multiple Linux hosts in an entire fleet or network. This reduces the detection of
false positives since automated maintenance processes usually only run occasionally on a single machine but are common
to all or many hosts in a fleet.
"""
false_positives = [
    """
    A newly installed program or one that runs rarely as part of a monthly or quarterly workflow could trigger this
    alert.
    """,
]
from = "now-45m"
interval = "15m"
license = "Elastic License v2"
machine_learning_job_id = ["v3_linux_anomalous_process_all_hosts"]
name = "Anomalous Process For a Linux Population"
setup = """## Setup

This rule requires the installation of associated Machine Learning jobs, as well as data coming in from one of the following integrations:
- Elastic Defend
- Auditd Manager

### Anomaly Detection Setup

Once the rule is enabled, the associated Machine Learning job will start automatically. You can view the Machine Learning job linked under the "Definition" panel of the detection rule. If the job does not start due to an error, the issue must be resolved for the job to commence successfully. For more details on setting up anomaly detection jobs, refer to the [helper guide](https://www.elastic.co/guide/en/kibana/current/xpack-ml-anomalies.html).

### Elastic Defend Integration Setup
Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app.

#### Prerequisite Requirements:
- Fleet is required for Elastic Defend.
- To configure Fleet Server refer to the [documentation](https://www.elastic.co/guide/en/fleet/current/fleet-server.html).

#### The following steps should be executed in order to add the Elastic Defend integration to your system:
- Go to the Kibana home page and click "Add integrations".
- In the query bar, search for "Elastic Defend" and select the integration to see more details about it.
- Click "Add Elastic Defend".
- Configure the integration name and optionally add a description.
- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads".
- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. [Helper guide](https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html).
- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions"
- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead.
For more details on Elastic Agent configuration settings, refer to the [helper guide](https://www.elastic.co/guide/en/fleet/current/agent-policy.html).
- Click "Save and Continue".
- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts.
For more details on Elastic Defend refer to the [helper guide](https://www.elastic.co/guide/en/security/current/install-endpoint.html).

### Auditd Manager Integration Setup
The Auditd Manager Integration receives audit events from the Linux Audit Framework which is a part of the Linux kernel.
Auditd Manager provides a user-friendly interface and automation capabilities for configuring and monitoring system auditing through the auditd daemon. With `auditd_manager`, administrators can easily define audit rules, track system events, and generate comprehensive audit reports, improving overall security and compliance in the system.

#### The following steps should be executed in order to add the Elastic Agent System integration "auditd_manager" to your system:
- Go to the Kibana home page and click “Add integrations”.
- In the query bar, search for “Auditd Manager” and select the integration to see more details about it.
- Click “Add Auditd Manager”.
- Configure the integration name and optionally add a description.
- Review optional and advanced settings accordingly.
- Add the newly installed “auditd manager” to an existing or a new agent policy, and deploy the agent on a Linux system from which auditd log files are desirable.
- Click “Save and Continue”.
- For more details on the integration refer to the [helper guide](https://docs.elastic.co/integrations/auditd_manager).

#### Rule Specific Setup Note
Auditd Manager subscribes to the kernel and receives events as they occur without any additional configuration.
However, if more advanced configuration is required to detect specific behavior, audit rules can be added to the integration in either the "audit rules" configuration box or the "auditd rule files" box by specifying a file to read the audit rules from.
- For this detection rule no additional audit rules are required.
"""
note = """## Triage and analysis

### Investigating Anomalous Process For a Linux Population

Searching for abnormal Linux processes is a good methodology to find potentially malicious activity within a network. Understanding what is commonly run within an environment and developing baselines for legitimate activity can help uncover potential malware and suspicious behaviors.

This rule uses a machine learning job to detect a Linux process that is rare and unusual for all of the monitored Linux hosts in your fleet.

#### Possible investigation steps

- Investigate the process execution chain (parent process tree) for unknown processes. Examine their executable files for prevalence, and whether they are located in expected locations.
- Investigate other alerts associated with the user/host during the past 48 hours.
- Consider the user as identified by the `user.name` field. Is this program part of an expected workflow for the user who ran this program on this host?
- Validate the activity is not related to planned patches, updates, network administrator activity, or legitimate software installations.
- Validate if the activity has a consistent cadence (for example, if it runs monthly or quarterly), as it could be part of a monthly or quarterly business process.
- Examine the arguments and working directory of the process. These may provide indications as to the source of the program or the nature of the tasks it is performing.

### False Positive Analysis

- If this activity is related to new benign software installation activity, consider adding exceptions — preferably with a combination of user and command line conditions.
- Try to understand the context of the execution by thinking about the user, machine, or business purpose. A small number of endpoints, such as servers with unique software, might appear unusual but satisfy a specific business need.

### Response and Remediation

- Initiate the incident response process based on the outcome of the triage.
- Isolate the involved hosts to prevent further post-compromise behavior.
- If the triage identified malware, search the environment for additional compromised hosts.
  - Implement temporary network rules, procedures, and segmentation to contain the malware.
  - Stop suspicious processes.
  - Immediately block the identified indicators of compromise (IoCs).
  - Inspect the affected systems for additional malware backdoors like reverse shells, reverse proxies, or droppers that attackers could use to reinfect the system.
- Remove and block malicious artifacts identified during triage.
- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services.
- Run a full antimalware scan. This may reveal additional artifacts left in the system, persistence mechanisms, and malware components.
- Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
references = ["https://www.elastic.co/guide/en/security/current/prebuilt-ml-jobs.html"]
risk_score = 21
rule_id = "647fc812-7996-4795-8869-9c4ea595fe88"
severity = "low"
tags = [
    "Domain: Endpoint",
    "OS: Linux",
    "Use Case: Threat Detection",
    "Rule Type: ML",
    "Rule Type: Machine Learning",
    "Tactic: Persistence",
    "Resources: Investigation Guide",
]
type = "machine_learning"
[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1543"
name = "Create or Modify System Process"
reference = "https://attack.mitre.org/techniques/T1543/"
[[rule.threat.technique.subtechnique]]
id = "T1543.003"
name = "Windows Service"
reference = "https://attack.mitre.org/techniques/T1543/003/"



[rule.threat.tactic]
id = "TA0003"
name = "Persistence"
reference = "https://attack.mitre.org/tactics/TA0003/"
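
The job referenced by machine_learning_job_id must exist and be running before this rule can fire, and the anomaly_threshold of 50 means only anomalies scoring 50 or higher generate alerts. The following sketch checks the job's state through the Elasticsearch machine learning job statistics API; the cluster URL and API key are placeholders you would replace with your own.

import requests

ES_URL = "https://localhost:9200"                      # placeholder cluster endpoint
HEADERS = {"Authorization": "ApiKey <REDACTED>"}        # placeholder credentials
JOB_ID = "v3_linux_anomalous_process_all_hosts"         # from machine_learning_job_id above

# The ML job statistics API reports whether the job is opened, closed, or failed,
# and how many records it has processed so far.
resp = requests.get(f"{ES_URL}/_ml/anomaly_detectors/{JOB_ID}/_stats", headers=HEADERS)
resp.raise_for_status()
for job in resp.json().get("jobs", []):
    counts = job.get("data_counts", {})
    print(job["job_id"], job["state"], counts.get("processed_record_count"))

If the job is closed or failed, resolve the error and open it again (the rule's setup guide above covers this) before expecting alerts.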

Triage and analysis

Investigating Anomalous Process For a Linux Population

Searching for abnormal Linux processes is a good methodology to find potentially malicious activity within a network. Understanding what is commonly run within an environment and developing baselines for legitimate activity can help uncover potential malware and suspicious behaviors.

This rule uses a machine learning job to detect a Linux process that is rare and unusual for all of the monitored Linux hosts in your fleet.
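
To see which process, user, and host produced the anomaly behind an alert, the job's individual anomaly records can be retrieved from the ML results API and filtered at the rule's threshold of 50. This is a minimal sketch using the same placeholder Elasticsearch endpoint and API key as in the job-status example; which fields carry the process and host values on each record depends on the job's detector configuration.

import requests

ES_URL = "https://localhost:9200"                      # placeholder cluster endpoint
HEADERS = {"Authorization": "ApiKey <REDACTED>",
           "Content-Type": "application/json"}
JOB_ID = "v3_linux_anomalous_process_all_hosts"

# Only records scoring at or above the rule's anomaly_threshold (50) are requested.
body = {"record_score": 50.0, "sort": "record_score", "desc": True}
resp = requests.post(f"{ES_URL}/_ml/anomaly_detectors/{JOB_ID}/results/records",
                     headers=HEADERS, json=body)
resp.raise_for_status()
for rec in resp.json().get("records", []):
    # Which field holds the process name depends on the job's detector configuration.
    print(round(rec["record_score"], 1), rec.get("by_field_value"), rec.get("partition_field_value"))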

Possible investigation steps

  • Investigate the process execution chain (parent process tree) for unknown processes. Examine their executable files for prevalence, and whether they are located in expected locations. A query sketch for pulling the relevant process fields follows this list.
  • Investigate other alerts associated with the user/host during the past 48 hours.
  • Consider the user as identified by the user.name field. Is this program part of an expected workflow for the user who ran this program on this host?
  • Validate the activity is not related to planned patches, updates, network administrator activity, or legitimate software installations.
  • Validate if the activity has a consistent cadence (for example, if it runs monthly or quarterly), as it could be part of a monthly or quarterly business process.
  • Examine the arguments and working directory of the process. These may provide indications as to the source of the program or the nature of the tasks it is performing.
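
The query sketch referenced above retrieves recent executions of the flagged process on the flagged host so the parent process, arguments, and working directory can be reviewed. It assumes the events live under the logs-* index pattern and that Elastic Defend or Auditd Manager populate the standard ECS process fields; the host name, process name, endpoint URL, and API key are placeholders to be replaced with values from the alert and your environment.

import requests

ES_URL = "https://localhost:9200"                      # placeholder cluster endpoint
HEADERS = {"Authorization": "ApiKey <REDACTED>",
           "Content-Type": "application/json"}

query = {
    "size": 50,
    "_source": ["@timestamp", "host.name", "user.name", "process.name", "process.args",
                "process.working_directory", "process.parent.name",
                "process.parent.executable", "process.hash.sha256"],
    "query": {"bool": {"filter": [
        {"term": {"host.name": "linux-host-01"}},       # host taken from the alert (placeholder)
        {"term": {"process.name": "unknown_binary"}},   # process taken from the alert (placeholder)
        {"term": {"event.category": "process"}},
        {"range": {"@timestamp": {"gte": "now-48h"}}},  # matches the 48-hour look-back suggested above
    ]}},
    "sort": [{"@timestamp": "desc"}],
}
resp = requests.post(f"{ES_URL}/logs-*/_search", headers=HEADERS, json=query)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])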

False Positive Analysis

  • If this activity is related to new benign software installation activity, consider adding exceptions, preferably with a combination of user and command line conditions (see the exception sketch after this list).
  • Try to understand the context of the execution by thinking about the user, machine, or business purpose. A small number of endpoints, such as servers with unique software, might appear unusual but satisfy a specific business need.
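
One way to express the suggested user-plus-command-line exception is through the Kibana exception list items API. Treat this as a sketch only: the Kibana URL, API key, exception list ID, and field values are placeholders, and the endpoint and payload shape should be verified against the exceptions API documentation for your Kibana version.

import requests

KIBANA_URL = "https://localhost:5601"                   # placeholder Kibana endpoint
HEADERS = {"Authorization": "ApiKey <REDACTED>",
           "kbn-xsrf": "true",
           "Content-Type": "application/json"}

item = {
    "list_id": "anomalous-linux-process-exceptions",    # hypothetical list already attached to the rule
    "name": "Quarterly deployment job run by svc_deploy",
    "description": "Benign installer confirmed during triage",
    "type": "simple",
    "namespace_type": "single",
    "entries": [
        {"field": "user.name", "operator": "included", "type": "match",
         "value": "svc_deploy"},                         # placeholder account
        {"field": "process.command_line", "operator": "included", "type": "match",
         "value": "/opt/deploy/install.sh --quarterly"}, # placeholder command line
    ],
}
resp = requests.post(f"{KIBANA_URL}/api/exception_lists/items", headers=HEADERS, json=item)
resp.raise_for_status()
print(resp.json().get("id"))

Combining both conditions keeps the exception narrow: the same command line run by a different account, or the same account running a different command, still alerts.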

Response and Remediation

  • Initiate the incident response process based on the outcome of the triage.
  • Isolate the involved hosts to prevent further post-compromise behavior.
  • If the triage identified malware, search the environment for additional compromised hosts (see the sweep sketch after this list).
    • Implement temporary network rules, procedures, and segmentation to contain the malware.
    • Stop suspicious processes.
    • Immediately block the identified indicators of compromise (IoCs).
    • Inspect the affected systems for additional malware backdoors like reverse shells, reverse proxies, or droppers that attackers could use to reinfect the system.
  • Remove and block malicious artifacts identified during triage.
  • Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services.
  • Run a full antimalware scan. This may reveal additional artifacts left in the system, persistence mechanisms, and malware components.
  • Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector.
  • Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
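
For the step above about searching the environment for additional compromised hosts, a simple fleet-wide sweep can aggregate executions of the binary identified during triage by host, keyed on its SHA-256. The index pattern, time window, hash value, endpoint URL, and API key below are placeholders.

import requests

ES_URL = "https://localhost:9200"                      # placeholder cluster endpoint
HEADERS = {"Authorization": "ApiKey <REDACTED>",
           "Content-Type": "application/json"}

sweep = {
    "size": 0,
    "query": {"bool": {"filter": [
        {"term": {"event.category": "process"}},
        {"term": {"process.hash.sha256": "<sha256-from-triage>"}},  # placeholder IoC
        {"range": {"@timestamp": {"gte": "now-30d"}}},              # placeholder time window
    ]}},
    "aggs": {"hosts": {
        "terms": {"field": "host.name", "size": 500},
        "aggs": {"last_seen": {"max": {"field": "@timestamp"}}},
    }},
}
resp = requests.post(f"{ES_URL}/logs-*/_search", headers=HEADERS, json=sweep)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["hosts"]["buckets"]:
    print(bucket["key"], bucket["doc_count"], bucket["last_seen"].get("value_as_string"))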

References

  • https://www.elastic.co/guide/en/security/current/prebuilt-ml-jobs.html