Splunk External Alerts
Generates a detection alert for each Splunk alert written to the configured indices. Enabling this rule allows you to immediately begin investigating Splunk alerts in the app.
Elastic rule
[metadata]
creation_date = "2025/07/31"
integration = ["splunk"]
maturity = "production"
promotion = true
min_stack_version = "8.18.0"
min_stack_comments = "Introduced support for Splunk alert integration and promotion"
updated_date = "2025/08/04"

[rule]
author = ["Elastic"]
description = """
Generates a detection alert for each Splunk alert written to the configured indices. Enabling this rule allows you to
immediately begin investigating Splunk alerts in the app.
"""
from = "now-2m"
index = ["logs-splunk.alert-*"]
interval = "1m"
language = "kuery"
license = "Elastic License v2"
max_signals = 1000
name = "Splunk External Alerts"
note = """## Triage and analysis

### Investigating Splunk External Alerts

Splunk monitors and analyzes machine data and is often used in security environments to track and respond to potential threats. This rule promotes alerts generated in Splunk into Elastic detection alerts, flagging them for timely investigation and response.

### Possible investigation steps

- Examine the specific indices where the alert was written to identify any unusual or unauthorized activity.
- Cross-reference the alert with recent changes or activities in the Splunk environment to determine if the alert could be a result of legitimate administrative actions.
- Investigate the source and context of the alert to identify any patterns or anomalies that could indicate manipulation or false positives.
- Check for any related alerts or logs that might provide additional context or evidence of adversarial behavior.
- Consult the Splunk investigation guide and resources tagged in the alert for specific guidance on handling similar threats.

### False positive analysis

- Alerts triggered by routine Splunk maintenance activities can be false positives. To manage these, identify and document regular maintenance schedules and create exceptions for alerts generated during these times.
- Frequent alerts from specific indices that are known to contain non-threatening data can be excluded by adjusting the rule to ignore these indices, ensuring only relevant alerts are investigated.
- Alerts generated by automated scripts or tools that interact with Splunk for legitimate purposes can be false positives. Review and whitelist these scripts or tools to prevent unnecessary alerts.
- If certain user actions consistently trigger alerts but are verified as non-malicious, consider creating user-specific exceptions to reduce noise and focus on genuine threats.
- Regularly review and update the list of exceptions to ensure they remain relevant and do not inadvertently exclude new or evolving threats.

### Response and remediation

- Immediately isolate affected systems to prevent further manipulation of Splunk alerts and potential spread of malicious activity.
- Review and validate the integrity of the Splunk alert indices to ensure no unauthorized changes have been made.
- Restore any compromised Splunk alert configurations from a known good backup to ensure accurate monitoring and alerting.
- Conduct a thorough audit of user access and permissions within Splunk to identify and revoke any unauthorized access.
- Escalate the incident to the security operations center (SOC) for further analysis and to determine if additional systems or data have been affected.
- Implement enhanced monitoring on Splunk indices to detect any future unauthorized changes or suspicious activities.
- Document the incident details and response actions taken for future reference and to improve incident response procedures.
"""
references = ["https://docs.elastic.co/en/integrations/splunk"]
risk_score = 47
rule_id = "d3b6222f-537e-4b84-956a-3ebae2dcf811"
rule_name_override = "splunk.alert.source"
setup = """## Setup

### Splunk Alert Integration
This rule is designed to capture alert events generated by the Splunk integration and promote them as Elastic detection alerts.

To capture Splunk alerts, install and configure the Splunk integration to ingest alert events into the `logs-splunk.alert-*` index pattern.

If this rule is enabled alongside the External Alerts promotion rule (UUID: eb079c62-4481-4d6e-9643-3ca499df7aaa), you may receive duplicate alerts for the same Splunk events. Consider adding a rule exception for the External Alerts rule to exclude data_stream.dataset:splunk.alert and avoid receiving duplicate alerts.

### Additional notes

For information on troubleshooting the maximum alerts warning, refer to this [guide](https://www.elastic.co/guide/en/security/current/alerts-ui-monitor.html#troubleshoot-max-alerts).
71"""
72severity = "medium"
73tags = ["Data Source: Splunk", "Use Case: Threat Detection", "Resources: Investigation Guide", "Promotion: External Alerts"]
74timestamp_override = "event.ingested"
75type = "query"
76
77query = '''
78event.kind: alert and data_stream.dataset: splunk.alert
79'''
80
81
82[[rule.risk_score_mapping]]
83field = "event.risk_score"
84operator = "equals"
85value = ""
86
87[[rule.severity_mapping]]
88field = "event.severity"
89operator = "equals"
90severity = "low"
91value = "21"
92
93[[rule.severity_mapping]]
94field = "event.severity"
95operator = "equals"
96severity = "medium"
97value = "47"
98
99[[rule.severity_mapping]]
100field = "event.severity"
101operator = "equals"
102severity = "high"
103value = "73"
104
105[[rule.severity_mapping]]
106field = "event.severity"
107operator = "equals"
108severity = "critical"
109value = "99"
References
- https://docs.elastic.co/en/integrations/splunk
Related rules
- Google SecOps External Alerts
- Microsoft Sentinel External Alerts
- SentinelOne Alert External Alerts
- SentinelOne Threat External Alerts
- Unusual Web Config File Access