Web Server Suspicious User Agent Requests
This rule detects unusual spikes in web server requests with uncommon or suspicious user-agent strings. Such activity may indicate reconnaissance attempts by attackers trying to identify vulnerabilities in web applications or servers. These user-agents are often associated with automated tools used for scanning, vulnerability assessment, or brute-force attacks.
Elastic rule (View on GitHub)
[metadata]
creation_date = "2025/11/19"
integration = ["nginx", "apache", "apache_tomcat", "iis"]
maturity = "production"
updated_date = "2025/12/01"

[rule]
author = ["Elastic"]
description = """
This rule detects unusual spikes in web server requests with uncommon or suspicious user-agent strings. Such activity may
indicate reconnaissance attempts by attackers trying to identify vulnerabilities in web applications or servers. These
user-agents are often associated with automated tools used for scanning, vulnerability assessment, or brute-force attacks.
"""
from = "now-9m"
interval = "10m"
language = "esql"
license = "Elastic License v2"
name = "Web Server Suspicious User Agent Requests"
note = """## Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Web Server Suspicious User Agent Requests

This rule flags surges of web requests that advertise scanner or brute-force tool user agents, signaling active reconnaissance against your web servers and applications. A common pattern is dirsearch or gobuster sweeping for hidden paths, firing hundreds of rapid GETs across diverse URLs from one host and probing admin panels, backup folders, and robots.txt.

### Possible investigation steps

- Verify whether the activity aligns with approved scanners or uptime checks by cross-referencing inventories, allowlists, change windows, and egress ranges; otherwise enrich the originating IP with ASN, geolocation, and threat reputation to gauge risk.
- Sample representative requests to identify targeted paths and payloads (e.g., admin panels, .git/.env, backups, traversal, SQLi/XSS markers) and note any successful responses or downloads that indicate information exposure.
- Analyze request rate, methods, and status-code distribution to separate noisy recon from successful discovery or brute-force patterns, highlighting any POST/PUT with nontrivial bodies.
- Correlate the same client across hosts and security layers (application/auth logs, WAF/CDN, IDS) to determine whether it is scanning multiple services, triggering signatures, or attempting credential stuffing.
- Assess user-agent authenticity and evasiveness by comparing HTTP header order/values and TLS fingerprints (JA3/JA4) to expected clients, and verify true client identity via forwarded-for headers if behind a proxy or CDN.

### False positive analysis

- Legitimate, scheduled vulnerability assessments by internal teams (e.g., Nessus, Nikto, or OpenVAS) can generate large volumes of requests with those user-agent strings across many paths.
- Developer or QA testing using discovery/fuzzing or intercept-proxy tools (Dirsearch, Gobuster, Ffuf, Burp, or OWASP ZAP) may unintentionally target production hosts, producing a short-lived spike with diverse URLs.

### Response and remediation

- Immediately contain by blocking or rate-limiting the originating IPs at the WAF/CDN and edge firewall, and add temporary rules to drop or challenge requests that advertise tool user agents such as "nikto", "sqlmap", "dirsearch", "wpscan", "gobuster", or "burp".
- If traffic is proxied (CDN/reverse proxy), identify the true client via forwarded headers and extend blocks at both layers, enabling bot management or JS challenges on swept paths like /admin, /.git, /.env, /backup, and common discovery endpoints.
- Eradicate exposure by removing or restricting access to sensitive files and directories uncovered by the scans, rotating any credentials or API keys found, invalidating active sessions, and disabling public access to administrative panels until hardened.
- Recover by verifying no unauthorized changes or data exfiltration occurred, tuning per-IP and per-path rate limits to prevent path-sweeps while preserving legitimate traffic, and reintroducing normal rules only after fixes are deployed and stability is confirmed.
- Escalate to incident response if sensitive files are successfully downloaded (HTTP 200/206 on /.git, /.env, or backups), any login or account creation succeeds, multiple hosts or environments are targeted, or activity persists after blocking via UA spoofing or rapid IP rotation.
- Harden long term by enforcing WAF signatures for known scanner UAs and path patterns, denying directory listing and direct access to /.git, /.env, /backup and similar artifacts, requiring MFA/VPN for /admin and management APIs, and deploying auto-ban controls like fail2ban or mod_security.
"""
risk_score = 21
rule_id = "a1b7ffa4-bf80-4bf1-86ad-c3f4dc718b35"
severity = "low"
tags = [
    "Domain: Web",
    "Use Case: Threat Detection",
    "Tactic: Reconnaissance",
    "Tactic: Credential Access",
    "Data Source: Nginx",
    "Data Source: Apache",
    "Data Source: Apache Tomcat",
    "Data Source: IIS",
    "Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "esql"
query = '''
from logs-nginx.access-*, logs-apache.access-*, logs-apache_tomcat.access-*, logs-iis.access-*

| eval Esql.user_agent_original_to_lower = to_lower(user_agent.original), Esql.url_original_to_lower = to_lower(url.original)

| where
  Esql.user_agent_original_to_lower like "mozilla/5.0 (windows nt 10.0; win64; x64) applewebkit/537.36 (khtml, like gecko) chrome/74.0.3729.169 safari/537.36" or // Nikto
  Esql.user_agent_original_to_lower like "nikto*" or // Nikto
  Esql.user_agent_original_to_lower like "mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)" or // Nessus Vulnerability Scanner
  Esql.user_agent_original_to_lower like "*nessus*" or // Nessus Vulnerability Scanner
  Esql.user_agent_original_to_lower like "sqlmap/*" or // SQLMap
  Esql.user_agent_original_to_lower like "wpscan*" or // WPScan
  Esql.user_agent_original_to_lower like "feroxbuster/*" or // Feroxbuster
  Esql.user_agent_original_to_lower like "masscan*" or // Masscan & masscan-ng
  Esql.user_agent_original_to_lower like "fuzz*" or // Ffuf
  Esql.user_agent_original_to_lower like "mozilla/5.0 (windows nt 10.0; win64; x64) applewebkit/537.36 (khtml, like gecko) chrome/87.0.4280.88 safari/537.36" or // Dirsearch
  Esql.user_agent_original_to_lower like "mozilla/4.0 (compatible; msie 6.0; windows nt 5.1)" or // Dirb
  Esql.user_agent_original_to_lower like "dirbuster*" or // Dirbuster
  Esql.user_agent_original_to_lower like "gobuster/*" or // Gobuster
  Esql.user_agent_original_to_lower like "*dirsearch*" or // dirsearch
  Esql.user_agent_original_to_lower like "*nmap*" or // Nmap Scripting Engine
  Esql.user_agent_original_to_lower like "*hydra*" or // Hydra Brute Forcer
  Esql.user_agent_original_to_lower like "*w3af*" or // w3af Web Application Attack and Audit Framework
  Esql.user_agent_original_to_lower like "*arachni*" or // Arachni Web Application Security Scanner
  Esql.user_agent_original_to_lower like "*skipfish*" or // Skipfish Web Application Security Scanner
  Esql.user_agent_original_to_lower like "*openvas*" or // OpenVAS Vulnerability Scanner
  Esql.user_agent_original_to_lower like "*acunetix*" or // Acunetix Vulnerability Scanner
  Esql.user_agent_original_to_lower like "*zap*" or // OWASP ZAP
  Esql.user_agent_original_to_lower like "*burp*" // Burp Suite

| keep
  @timestamp,
  event.dataset,
  user_agent.original,
  source.ip,
  agent.id,
  host.name,
  Esql.url_original_to_lower,
  Esql.user_agent_original_to_lower
| stats
  Esql.event_count = count(),
  Esql.url_original_count_distinct = count_distinct(Esql.url_original_to_lower),
  Esql.host_name_values = values(host.name),
  Esql.agent_id_values = values(agent.id),
  Esql.url_original_values = values(Esql.url_original_to_lower),
  Esql.user_agent_original_values = values(Esql.user_agent_original_to_lower),
  Esql.event_dataset_values = values(event.dataset)
  by source.ip, agent.id
| where
  Esql.event_count > 50 and Esql.url_original_count_distinct > 10
'''

[[rule.threat]]
framework = "MITRE ATT&CK"

[[rule.threat.technique]]
id = "T1595"
name = "Active Scanning"
reference = "https://attack.mitre.org/techniques/T1595/"

[[rule.threat.technique.subtechnique]]
id = "T1595.001"
name = "Scanning IP Blocks"
reference = "https://attack.mitre.org/techniques/T1595/001/"

[[rule.threat.technique.subtechnique]]
id = "T1595.002"
name = "Vulnerability Scanning"
reference = "https://attack.mitre.org/techniques/T1595/002/"

[[rule.threat.technique.subtechnique]]
id = "T1595.003"
name = "Wordlist Scanning"
reference = "https://attack.mitre.org/techniques/T1595/003/"

[rule.threat.tactic]
id = "TA0043"
name = "Reconnaissance"
reference = "https://attack.mitre.org/tactics/TA0043/"

[[rule.threat]]
framework = "MITRE ATT&CK"

[[rule.threat.technique]]
id = "T1110"
name = "Brute Force"
reference = "https://attack.mitre.org/techniques/T1110/"

[rule.threat.tactic]
id = "TA0006"
name = "Credential Access"
reference = "https://attack.mitre.org/tactics/TA0006/"
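The rule's final where clause only alerts once a single source.ip and agent.id combination has produced more than 50 matching requests across more than 10 distinct URLs. Before tuning those thresholds, a minimal ES|QL sketch along the following lines (the 7-day window and limit are illustrative values, not part of the rule) can baseline how much volume and URL diversity individual clients normally generate against these indices:

from logs-nginx.access-*, logs-apache.access-*, logs-apache_tomcat.access-*, logs-iis.access-*
// illustrative baseline window; adjust to your retention and traffic patterns
| where @timestamp > now() - 7 days
| stats request_count = count(), distinct_urls = count_distinct(url.original) by source.ip
| sort request_count desc
| limit 25

Clients that routinely exceed the rule's thresholds, such as load balancers, health checks, or approved scanners, are candidates for tuning rather than repeated alerting.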
Triage and analysis
Disclaimer: This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
Investigating Web Server Suspicious User Agent Requests
This rule flags surges of web requests that advertise scanner or brute-force tool user agents, signaling active reconnaissance against your web servers and applications. A common pattern is dirsearch or gobuster sweeping for hidden paths, firing hundreds of rapid GETs across diverse URLs from one host and probing admin panels, backup folders, and robots.txt.
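To confirm that pattern, it helps to pull the raw requests for the client that triggered the alert. The following is a minimal sketch, assuming the ECS fields these integrations populate (http.request.method, http.response.status_code); the IP address is a placeholder for the alert's source.ip:

from logs-nginx.access-*, logs-apache.access-*, logs-apache_tomcat.access-*, logs-iis.access-*
// placeholder IP: substitute the source.ip reported by the alert
| where source.ip == to_ip("203.0.113.10")
| keep @timestamp, host.name, http.request.method, url.original, http.response.status_code, user_agent.original
| sort @timestamp asc
| limit 500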
Possible investigation steps
- Verify whether the activity aligns with approved scanners or uptime checks by cross-referencing inventories, allowlists, change windows, and egress ranges; otherwise enrich the originating IP with ASN, geolocation, and threat reputation to gauge risk.
- Sample representative requests to identify targeted paths and payloads (e.g., admin panels, .git/.env, backups, traversal, SQLi/XSS markers) and note any successful responses or downloads that indicate information exposure.
- Analyze request rate, methods, and status-code distribution to separate noisy recon from successful discovery or brute-force patterns, highlighting any POST/PUT with nontrivial bodies; a breakdown sketch follows this list.
- Correlate the same client across hosts and security layers (application/auth logs, WAF/CDN, IDS) to determine whether it is scanning multiple services, triggering signatures, or attempting credential stuffing.
- Assess user-agent authenticity and evasiveness by comparing HTTP header order/values and TLS fingerprints (JA3/JA4) to expected clients, and verify true client identity via forwarded-for headers if behind a proxy or CDN.
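For the request rate, method, and status-code step, a sketch like the following summarizes how the client's traffic breaks down (same placeholder IP as above; http.request.method and http.response.status_code are assumed ECS fields from these integrations):

from logs-nginx.access-*, logs-apache.access-*, logs-apache_tomcat.access-*, logs-iis.access-*
// placeholder IP: the client under investigation
| where source.ip == to_ip("203.0.113.10")
| stats request_count = count() by http.request.method, http.response.status_code
| sort request_count desc

A distribution dominated by 404s suggests noisy discovery, while clusters of 200s or 302s on sensitive paths, or many POSTs against login endpoints, point toward successful discovery or brute forcing.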
False positive analysis
- Legitimate, scheduled vulnerability assessments by internal teams (e.g., Nessus, Nikto, or OpenVAS) can generate large volumes of requests with those user-agent strings across many paths; a tuning sketch for excluding approved scanner ranges follows this list.
- Developer or QA testing using discovery/fuzzing or intercept-proxy tools (Dirsearch, Gobuster, Ffuf, Burp, or OWASP ZAP) may unintentionally target production hosts, producing a short-lived spike with diverse URLs.
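If authorized internal scanning is a recurring source of noise, one tuning option is to exclude the scanners' known egress ranges early in the rule query. This is a sketch only; the CIDR blocks are placeholders for your own allowlist:

from logs-nginx.access-*, logs-apache.access-*, logs-apache_tomcat.access-*, logs-iis.access-*
// placeholder CIDR ranges: approved internal scanner egress
| where not cidr_match(source.ip, "10.20.0.0/16", "192.0.2.0/24")
// ... remainder of the rule query unchanged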
Response and remediation
- Immediately contain by blocking or rate-limiting the originating IPs at the WAF/CDN and edge firewall, and add temporary rules to drop or challenge requests that advertise tool user agents such as "nikto", "sqlmap", "dirsearch", "wpscan", "gobuster", or "burp".
- If traffic is proxied (CDN/reverse proxy), identify the true client via forwarded headers and extend blocks at both layers, enabling bot management or JS challenges on swept paths like /admin, /.git, /.env, /backup, and common discovery endpoints.
- Eradicate exposure by removing or restricting access to sensitive files and directories uncovered by the scans, rotating any credentials or API keys found, invalidating active sessions, and disabling public access to administrative panels until hardened.
- Recover by verifying no unauthorized changes or data exfiltration occurred, tuning per-IP and per-path rate limits to prevent path-sweeps while preserving legitimate traffic, and reintroducing normal rules only after fixes are deployed and stability is confirmed.
- Escalate to incident response if sensitive files are successfully downloaded (HTTP 200/206 on /.git, /.env, or backups; a verification sketch follows this list), any login or account creation succeeds, multiple hosts or environments are targeted, or activity persists after blocking via UA spoofing or rapid IP rotation.
- Harden long term by enforcing WAF signatures for known scanner UAs and path patterns, denying directory listing and direct access to /.git, /.env, /backup and similar artifacts, requiring MFA/VPN for /admin and management APIs, and deploying auto-ban controls like fail2ban or mod_security.
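To check the escalation criterion for successful downloads of sensitive files, a sketch like the following looks for 200/206 responses on the paths called out above (field names assume the ECS mappings used by these integrations):

from logs-nginx.access-*, logs-apache.access-*, logs-apache_tomcat.access-*, logs-iis.access-*
| eval url_lower = to_lower(url.original)
| where (url_lower like "*/.git*" or url_lower like "*/.env*" or url_lower like "*backup*")
    and http.response.status_code in (200, 206)
| stats hits = count(), urls = values(url_lower) by source.ip, host.name
| sort hits desc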
Related rules
- Web Server Potential Command Injection Request
- Potential Spike in Web Server Error Logs
- Web Server Discovery or Fuzzing Activity
- Web Server Potential Spike in Error Response Codes
- Suspicious Kerberos Authentication Ticket Request