Splunk External Alerts
Generates a detection alert for each Splunk alert written to the configured indices. Enabling this rule allows you to immediately begin investigating Splunk alerts in the app.
Elastic rule (View on GitHub)
[metadata]
creation_date = "2025/07/31"
integration = ["splunk"]
maturity = "production"
promotion = true
updated_date = "2025/10/17"

[rule]
author = ["Elastic"]
description = """
Generates a detection alert for each Splunk alert written to the configured indices. Enabling this rule allows you to
immediately begin investigating Splunk alerts in the app.
"""
from = "now-2m"
index = ["logs-splunk.alert-*"]
interval = "1m"
language = "kuery"
license = "Elastic License v2"
max_signals = 1000
name = "Splunk External Alerts"
note = """## Triage and analysis

### Investigating Splunk External Alerts

Splunk monitors and analyzes data and is often used in security environments to track and respond to potential threats. This rule promotes each Splunk alert written to the configured indices into an Elastic detection alert, enabling timely investigation and response.

### Possible investigation steps

- Examine the specific indices where the alert was written to identify any unusual or unauthorized activity.
- Cross-reference the alert with recent changes or activities in the Splunk environment to determine if the alert could be a result of legitimate administrative actions.
- Investigate the source and context of the alert to identify any patterns or anomalies that could indicate manipulation or false positives.
- Check for any related alerts or logs that might provide additional context or evidence of adversarial behavior.
- Consult the Splunk investigation guide and resources tagged in the alert for specific guidance on handling similar threats.

### False positive analysis

- Alerts triggered by routine Splunk maintenance activities can be false positives. To manage these, identify and document regular maintenance schedules and create exceptions for alerts generated during these times.
- Frequent alerts from specific indices that are known to contain non-threatening data can be excluded by adjusting the rule to ignore these indices, ensuring only relevant alerts are investigated.
- Alerts generated by automated scripts or tools that interact with Splunk for legitimate purposes can be false positives. Review and whitelist these scripts or tools to prevent unnecessary alerts.
- If certain user actions consistently trigger alerts but are verified as non-malicious, consider creating user-specific exceptions to reduce noise and focus on genuine threats.
- Regularly review and update the list of exceptions to ensure they remain relevant and do not inadvertently exclude new or evolving threats.

### Response and remediation

- Immediately isolate affected systems to prevent further manipulation of Splunk alerts and potential spread of malicious activity.
- Review and validate the integrity of the Splunk alert indices to ensure no unauthorized changes have been made.
- Restore any compromised Splunk alert configurations from a known good backup to ensure accurate monitoring and alerting.
- Conduct a thorough audit of user access and permissions within Splunk to identify and revoke any unauthorized access.
- Escalate the incident to the security operations center (SOC) for further analysis and to determine if additional systems or data have been affected.
- Implement enhanced monitoring on Splunk indices to detect any future unauthorized changes or suspicious activities.
- Document the incident details and response actions taken for future reference and to improve incident response procedures.
"""
references = ["https://docs.elastic.co/en/integrations/splunk"]
risk_score = 47
rule_id = "d3b6222f-537e-4b84-956a-3ebae2dcf811"
rule_name_override = "splunk.alert.source"
setup = """## Setup

### Splunk Alert Integration
This rule is designed to capture alert events generated by the Splunk integration and promote them as Elastic detection alerts.

To capture Splunk alerts, install and configure the Splunk integration to ingest alert events into the `logs-splunk.alert-*` index pattern.

If this rule is enabled alongside the External Alerts promotion rule (UUID: eb079c62-4481-4d6e-9643-3ca499df7aaa), you may receive duplicate alerts for the same Splunk events. Consider adding a rule exception to the External Alerts rule that excludes data_stream.dataset:splunk.alert so the same events are not alerted on twice.

### Additional notes

For information on troubleshooting the maximum alerts warning, please refer to this [guide](https://www.elastic.co/guide/en/security/current/alerts-ui-monitor.html#troubleshoot-max-alerts).
69"""
70severity = "medium"
71tags = [
72 "Data Source: Splunk",
73 "Use Case: Threat Detection",
74 "Resources: Investigation Guide",
75 "Promotion: External Alerts",
76]
77timestamp_override = "event.ingested"
78type = "query"
79
80query = '''
81event.kind: alert and data_stream.dataset: splunk.alert
82'''
83
84
85[[rule.risk_score_mapping]]
86field = "event.risk_score"
87operator = "equals"
88value = ""
89
90[[rule.severity_mapping]]
91field = "event.severity"
92operator = "equals"
93severity = "low"
94value = "21"
95
96[[rule.severity_mapping]]
97field = "event.severity"
98operator = "equals"
99severity = "medium"
100value = "47"
101
102[[rule.severity_mapping]]
103field = "event.severity"
104operator = "equals"
105severity = "high"
106value = "73"
107
108[[rule.severity_mapping]]
109field = "event.severity"
110operator = "equals"
111severity = "critical"
112value = "99"
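Before enabling the rule, it can help to confirm that Splunk alert events are actually reaching the expected data stream. A minimal check, assuming the Splunk integration is writing to the default logs-splunk.alert-* index pattern, is to run the rule's own filter in Discover or Timeline:

event.kind: alert and data_stream.dataset: splunk.alert

If this search returns no documents, revisit the integration configuration before troubleshooting the rule itself.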
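As the setup guide notes, enabling this rule alongside the generic External Alerts promotion rule (UUID: eb079c62-4481-4d6e-9643-3ca499df7aaa) can produce two alerts for the same Splunk event. One way to avoid this, sketched here on the assumption that the External Alerts rule matches on event.kind: alert, is to add an exclusion to that rule's query:

event.kind: alert and not data_stream.dataset: splunk.alert

A rule exception on data_stream.dataset with the value splunk.alert achieves the same result without editing the query.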
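During triage it can also be useful to surface the most severe promoted alerts first. Because the severity mapping above translates event.severity values of 21, 47, 73, and 99 into low, medium, high, and critical, a sketch of a query that narrows the results to high and critical Splunk alerts might look like:

event.kind: alert and data_stream.dataset: splunk.alert and event.severity >= 73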
References
- https://docs.elastic.co/en/integrations/splunk
Related rules
- CrowdStrike External Alerts
- Elastic Security External Alerts
- Google SecOps External Alerts
- Microsoft Sentinel External Alerts
- SentinelOne Alert External Alerts