Alerts From Multiple Integrations by User Name

This rule uses alert data to detect when multiple alerts from different integrations, spanning unique event categories and involving the same user.name, are triggered. Analysts can use this to prioritize triage and response, as these users are more likely to be compromised.

Elastic rule (View on GitHub)

[metadata]
creation_date = "2025/12/15"
maturity = "production"
updated_date = "2025/12/15"

[rule]
author = ["Elastic"]
description = """
This rule uses alert data to determine when multiple alerts from different integrations with unique event categories and
involving the same user.name are triggered. Analysts can use this to prioritize triage and response, as these users are
more likely to be compromised.
"""
from = "now-60m"
interval = "30m"
language = "esql"
license = "Elastic License v2"
name = "Alerts From Multiple Integrations by User Name"
risk_score = 73
rule_id = "1dd99dbf-b98d-4956-876b-f13bc0ce017f"
severity = "high"
tags = ["Use Case: Threat Detection", "Rule Type: Higher-Order Rule", "Resources: Investigation Guide"]
timestamp_override = "event.ingested"
type = "esql"

query = '''
from .alerts-security.*

// any alerts excluding low severity and the noisy ones
| where kibana.alert.rule.name is not null and user.name is not null and kibana.alert.risk_score > 21 and
        not kibana.alert.rule.type in ("threat_match", "machine_learning") and
        not user.id in ("S-1-5-18", "S-1-5-19", "S-1-5-20", "0")

// group alerts by user.name and extract values of interest for alert triage
| stats Esql.event_module_distinct_count = COUNT_DISTINCT(event.module),
        Esql.rule_name_distinct_count = COUNT_DISTINCT(kibana.alert.rule.name),
        Esql.event_category_distinct_count = COUNT_DISTINCT(event.category),
        Esql.rule_risk_score_distinct_count = COUNT_DISTINCT(kibana.alert.risk_score),
        Esql.event_module_values = VALUES(event.module),
        Esql.rule_name_values = VALUES(kibana.alert.rule.name),
        Esql.message_values = VALUES(message),
        Esql.event_category_values = VALUES(event.category),
        Esql.event_action_values = VALUES(event.action),
        Esql.source_ip_values = VALUES(source.ip),
        Esql.destination_ip_values = VALUES(destination.ip),
        Esql.host_id_values = VALUES(host.id),
        Esql.agent_id_values = VALUES(agent.id),
        Esql.rule_severity_values = VALUES(kibana.alert.risk_score) by user.name, user.id

// filter for alerts involving the same user.name reported by different integrations with unique categories and with different severity levels
| where Esql.event_module_distinct_count >= 2 and Esql.event_category_distinct_count >= 2 and (Esql.rule_risk_score_distinct_count >= 2 or Esql.rule_severity_values == 73 or Esql.rule_severity_values == 99)
| keep user.name, Esql.*
'''
note = """## Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Alerts From Multiple Integrations by User Name

The detection rule uses alert data to determine when multiple alerts from different integrations involving the same user.name are triggered.

### Possible investigation steps

- Review the alert details to identify the specific user involved and the different modules and rules that triggered the alert.
- Examine the timeline of the alerts to understand the sequence of events and determine if there is a pattern or progression in the tactics used.
- Correlate the alert data with other logs and telemetry from the host, such as process creation, network connections, and file modifications, to gather additional context.
- Investigate any known vulnerabilities or misconfigurations on the host that could have been exploited by the adversary.
- Check for any indicators of compromise (IOCs) associated with the alerts, such as suspicious IP addresses, domains, or file hashes, and search for these across the network.
- Assess the impact and scope of the potential compromise by determining if other hosts or systems have similar alerts or related activity.

### False positive analysis

- Alerts from routine administrative tasks may trigger multiple tactics. Review and exclude known benign activities such as scheduled software updates or system maintenance.
- Security tools running on the host might generate alerts across different tactics. Identify and exclude alerts from trusted security applications to reduce noise.
- Automated scripts or batch processes can mimic adversarial behavior. Analyze and whitelist these processes if they are verified as non-threatening.
- Frequent alerts from development or testing environments can be misleading. Consider excluding these environments from the rule or applying a different risk score.
- User behavior anomalies, such as accessing multiple systems or applications, might trigger alerts. Implement user behavior baselines to differentiate between normal and suspicious activities.

### Response and remediation

- Isolate the affected host from the network immediately to prevent further lateral movement by the adversary.
- Conduct a thorough forensic analysis of the host to identify the specific vulnerabilities exploited and gather evidence of the attack phases involved.
- Remove any identified malicious software or unauthorized access tools from the host, ensuring all persistence mechanisms are eradicated.
- Apply security patches and updates to the host to address any exploited vulnerabilities and prevent similar attacks.
- Restore the host from a known good backup if necessary, ensuring that the backup is free from compromise.
- Monitor the host and network for any signs of re-infection or further suspicious activity, using enhanced logging and alerting based on the identified attack patterns.
- Escalate the incident to the appropriate internal or external cybersecurity teams for further investigation and potential legal action if the attack is part of a larger campaign."""
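
The rule engine applies the from = "now-60m" lookback itself, so running the query above ad hoc would scan the full alert history. The following is a minimal preview sketch, assuming read access to the .alerts-security.* indices: it adds an explicit time filter and collapses the aggregation to a few summary columns. Note that the severity check is simplified to a maximum risk score of 73 or higher, whereas the shipped rule tests the distinct risk-score values.

// ad hoc preview of users that would trip the correlation (simplified severity check, explicit 60-minute window)
from .alerts-security.*
| where @timestamp > NOW() - 60 minutes
| where kibana.alert.rule.name is not null and user.name is not null and kibana.alert.risk_score > 21 and
        not kibana.alert.rule.type in ("threat_match", "machine_learning") and
        not user.id in ("S-1-5-18", "S-1-5-19", "S-1-5-20", "0")
| stats modules = COUNT_DISTINCT(event.module),
        categories = COUNT_DISTINCT(event.category),
        risk_scores = COUNT_DISTINCT(kibana.alert.risk_score),
        max_risk = MAX(kibana.alert.risk_score) by user.name, user.id
| where modules >= 2 and categories >= 2 and (risk_scores >= 2 or max_risk >= 73)
| sort max_risk desc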
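
The investigation guide suggests reviewing the timeline of alerts for each flagged user. A small pivot along these lines lists the individual alerts behind a correlation hit; the account name "jdoe" and the 24-hour window are placeholders to adjust per case.

// underlying alerts for one flagged user, oldest first ("jdoe" and the window are placeholders)
from .alerts-security.*
| where user.name == "jdoe" and @timestamp > NOW() - 24 hours
| keep @timestamp, kibana.alert.rule.name, event.module, event.category, event.action, host.id, source.ip, destination.ip, kibana.alert.risk_score
| sort @timestamp asc
| limit 250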
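
To gauge scope, the same data can be grouped by host to show where the flagged user's alerts are concentrated and whether the activity spans multiple machines; again, the user name is a placeholder.

// per-host breakdown of the flagged user's alerts (placeholder user name)
from .alerts-security.*
| where user.name == "jdoe" and @timestamp > NOW() - 24 hours
| stats alert_count = COUNT(*),
        distinct_rules = COUNT_DISTINCT(kibana.alert.rule.name),
        modules = VALUES(event.module) by host.id
| sort alert_count desc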
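
For recurring false positives, one option is to extend the rule's first where stage with local exclusions before the stats aggregation. The account and module names below are purely illustrative; substitute the service accounts or integrations known to be benign in your environment.

// illustrative exclusions to append to the first "where" stage (names are hypothetical)
| where not user.name in ("svc_backup", "svc_deploy") and
        not event.module in ("example_noisy_integration")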

