Multiple Machine Learning Alerts by Influencer Field

This rule uses alerts data to determine when multiple unique machine learning jobs involving the same influencer field are triggered. Analysts can use this to prioritize triage and response for machine learning alerts.

Elastic rule

[metadata]
creation_date = "2026/02/02"
maturity = "production"
updated_date = "2026/02/02"

[rule]
author = ["Elastic"]
description = """
This rule uses alerts data to determine when multiple unique machine learning jobs involving the same influencer field are triggered.
Analysts can use this to prioritize triage and response for machine learning alerts.
"""
from = "now-45m"
interval = "30m"
language = "esql"
license = "Elastic License v2"
name = "Multiple Machine Learning Alerts by Influencer Field"
references = ["https://www.elastic.co/guide/en/security/current/prebuilt-ml-jobs.html"]
risk_score = 73
rule_id = "da7f7a93-26e1-49ce-b336-963c6dc17c7b"
severity = "high"
tags = ["Use Case: Threat Detection", "Rule Type: Higher-Order Rule", "Resources: Investigation Guide", "Rule Type: Machine Learning"]
timestamp_override = "event.ingested"
type = "esql"

query = '''
from .alerts-security.*
| where kibana.alert.rule.type == "machine_learning"
| stats Esql.count_distinct_job_id = COUNT_DISTINCT(job_id),
        Esql.job_id_values = VALUES(job_id),
        Esql.rule_name_values = VALUES(kibana.alert.rule.name),
        Esql.influencer_field_values = VALUES(influencers.influencer_field_values),
        Esql.influencer_field_name = VALUES(influencers.influencer_field_name) by influencers.influencer_field_values, process.name, host.name
| where Esql.count_distinct_job_id >= 3 and not influencers.influencer_field_values in ("root", "SYSTEM")
| KEEP influencers.influencer_field_values, process.name, host.name, Esql.*
'''
note = """## Triage and analysis

### Multiple Machine Learning Alerts by Influencer Field

Attackers may trigger multiple alerts by performing suspicious actions under a compromised user entity. The detection rule identifies such patterns by correlating diverse machine learning alerts linked to the same entity, excluding known system accounts, thus prioritizing potential threats for analysts.

### Possible investigation steps

- Review the alert details to identify the specific influencer field involved to gather initial context.
- Examine the timeline and sequence of the triggered alerts to understand the pattern of activity associated with the influencer field, noting any unusual or unexpected actions.
- Cross-reference the user activity with known legitimate activities or scheduled tasks to rule out false positives, ensuring that the actions are not part of normal operations.
- Investigate the source and destination IP addresses associated with the alerts to identify any suspicious or unauthorized access points.
- Check for any recent changes in user permissions or group memberships that could indicate privilege escalation attempts.
- Look into any recent login attempts or authentication failures for the user account to detect potential brute force or credential stuffing attacks.
- Collaborate with the user or their manager to verify if the activities were authorized or if the account might be compromised.

### False positive analysis

- Alerts triggered by automated system processes or scripts that mimic user behavior can be false positives. To manage these, identify and exclude known benign scripts or processes from the rule.
- Frequent alerts from users in roles that inherently require access to multiple systems or sensitive data, such as IT administrators, may not indicate compromise. Implement role-based exceptions to reduce noise.
- Alerts generated by legitimate software updates or maintenance activities can be mistaken for suspicious behavior. Schedule these activities during known maintenance windows and exclude them from the rule during these times.
- Users involved in testing or development environments may trigger multiple alerts due to their work nature. Create exceptions for these environments to prevent unnecessary alerts.
- High-volume users, such as those in customer support or sales, may naturally generate more alerts. Monitor these users separately and adjust the rule to focus on unusual patterns rather than volume alone.

### Response and remediation

- Isolate the affected user account immediately to prevent further unauthorized access. Disable the account or change the password to stop any ongoing malicious activity.
- Conduct a thorough review of the affected user's recent activities and access logs to identify any unauthorized actions or data access. This will help in understanding the scope of the compromise.
- Remove any malicious software or unauthorized tools that may have been installed on the user's system. Use endpoint detection and response (EDR) tools to scan and clean the system.
- Restore any altered or deleted data from backups, ensuring that the restored data is free from any malicious modifications.
- Notify relevant stakeholders, including IT security teams and management, about the incident and the steps being taken to address it. This ensures that everyone is aware and can provide support if needed.
- Implement additional monitoring on the affected user account and related systems to detect any further suspicious activities. This includes setting up alerts for unusual login attempts or data access patterns.
- Review and update access controls and permissions for the affected user and similar accounts to prevent future incidents. Ensure that least privilege principles are applied."""
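As a rough illustration of what the ES|QL query computes, the Python sketch below groups machine learning alerts by influencer value, process, and host, counts distinct job IDs per group, and applies the same >= 3 threshold and root/SYSTEM exclusion. The field names (`influencer_field_value`, `process_name`, `host_name`, `job_id`) are simplified stand-ins for the alert fields referenced in the query, not real API identifiers.

```python
from collections import defaultdict

# Mirrors the query's exclusion list and distinct-job threshold.
EXCLUDED_INFLUENCERS = {"root", "SYSTEM"}
MIN_DISTINCT_JOBS = 3

def correlate(alerts):
    """Return groups of (influencer, process, host) where alerts from at
    least MIN_DISTINCT_JOBS distinct ML jobs fired, excluding known
    system accounts. Each group maps to the set of job IDs involved."""
    groups = defaultdict(set)
    for alert in alerts:
        influencer = alert["influencer_field_value"]
        if influencer in EXCLUDED_INFLUENCERS:
            continue
        key = (influencer, alert.get("process_name"), alert.get("host_name"))
        groups[key].add(alert["job_id"])
    return {key: jobs for key, jobs in groups.items()
            if len(jobs) >= MIN_DISTINCT_JOBS}
```

Note that the sketch filters excluded influencers before aggregating, while the query filters after its `stats`; because the exclusion applies to a grouping key, the two orderings produce the same result set.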

