First Time Python Accessed Sensitive Credential Files

Detects the first time a Python process accesses sensitive credential files on a given host. This behavior may indicate post-exploitation credential theft via a malicious Python script, compromised dependency, or malicious model file deserialization. Legitimate Python processes do not typically access credential files such as SSH keys, AWS credentials, browser cookies, Kerberos tickets, or keychain databases, so a first occurrence is a strong indicator of compromise.

Elastic rule

[metadata]
creation_date = "2026/02/23"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/02/23"

[rule]
author = ["Elastic"]
description = """
Detects the first time a Python process accesses sensitive credential files on a given host. This behavior may indicate
post-exploitation credential theft via a malicious Python script, compromised dependency, or malicious model file
deserialization. Legitimate Python processes do not typically access credential files such as SSH keys, AWS credentials,
browser cookies, Kerberos tickets, or keychain databases, so a first occurrence is a strong indicator of compromise.
"""
from = "now-9m"
index = ["logs-endpoint.events.file-*"]
language = "kuery"
license = "Elastic License v2"
name = "First Time Python Accessed Sensitive Credential Files"
note = """## Triage and analysis

### Investigating First Time Python Accessed Sensitive Credential Files

Attackers who achieve Python code execution — whether through malicious scripts, compromised dependencies, or model file deserialization (e.g., pickle/PyTorch `__reduce__`) — often target sensitive credential files such as SSH keys, cloud provider credentials, browser session cookies, and macOS keychain data. Since legitimate Python processes do not typically access these files, a first occurrence from a Python process is highly suspicious.

This rule leverages the Elastic Defend sensitive file `open` event, which is only collected for known sensitive file paths, combined with the New Terms rule type to alert on the first time a specific credential file is accessed by Python on a given host within a 7-day window.

### Possible investigation steps

- Examine the Python process command line and arguments to identify the script or command that triggered the file access.
- Determine if the Python process was loading a model file (look for `torch.load`, `pickle.load`), running a standalone script, or executing via a compromised dependency.
- Review the specific credential file that was accessed and assess the potential impact (SSH keys enable lateral movement, AWS credentials enable cloud access, browser cookies enable session hijacking).
- Check for outbound network connections from the same process tree that may indicate credential exfiltration.
- Investigate the origin of any recently downloaded scripts, packages, or model files on the host.
- Look for file creation events in `/tmp/` or other staging directories that may contain copies of the stolen credentials.

### False positive analysis

- Python-based secret management tools (e.g., `aws-cli`, `gcloud`) legitimately access credential files. Consider excluding known trusted executables by process path.
- SSH automation scripts using `paramiko` or `fabric` may read SSH keys. Evaluate whether the access pattern matches known automation workflows.
- Security scanning tools running Python may enumerate credential files as part of their assessment.

### Response and remediation

- Immediately rotate any credentials that were potentially accessed (SSH keys, AWS access keys, cloud tokens).
- Quarantine the Python process and investigate the source script, package, or model file that triggered the access.
- If a malicious file is confirmed, identify all hosts where it may have been distributed.
- Review outbound network connections from the host around the time of the credential access to check for exfiltration.
- Consider implementing `weights_only=True` enforcement for PyTorch model loading across the environment.
"""
references = [
    "https://blog.trailofbits.com/2024/06/11/exploiting-ml-models-with-pickle-file-attacks-part-1/",
    "https://github.com/trailofbits/fickling",
]
risk_score = 47
rule_id = "03b150d9-9280-4eb8-9906-38cfb6184666"
severity = "medium"
tags = [
    "Domain: Endpoint",
    "OS: macOS",
    "Use Case: Threat Detection",
    "Tactic: Credential Access",
    "Data Source: Elastic Defend",
    "Resources: Investigation Guide",
    "Domain: LLM",
]
timestamp_override = "event.ingested"
type = "new_terms"
query = '''
event.category:file and host.os.type:macos and event.action:open and
process.name:python*
'''

[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1555"
name = "Credentials from Password Stores"
reference = "https://attack.mitre.org/techniques/T1555/"
[[rule.threat.technique.subtechnique]]
id = "T1555.001"
name = "Keychain"
reference = "https://attack.mitre.org/techniques/T1555/001/"

[rule.threat.tactic]
id = "TA0006"
name = "Credential Access"
reference = "https://attack.mitre.org/tactics/TA0006/"

[rule.new_terms]
field = "new_terms_fields"
value = ["host.id", "file.path"]
[[rule.new_terms.history_window_start]]
field = "history_window_start"
value = "now-7d"

Triage and analysis

Investigating First Time Python Accessed Sensitive Credential Files

Attackers who achieve Python code execution — whether through malicious scripts, compromised dependencies, or model file deserialization (e.g., pickle/PyTorch __reduce__) — often target sensitive credential files such as SSH keys, cloud provider credentials, browser session cookies, and macOS keychain data. Since legitimate Python processes do not typically access these files, a first occurrence from a Python process is highly suspicious.
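The `__reduce__` primitive works because pickle will call whatever callable a class's `__reduce__` returns, at deserialization time and before any object is handed back. A minimal, deliberately harmless sketch (using `print` where a real payload would use `os.system` or a credential-reading routine):

```python
import pickle

class Payload:
    """A class whose __reduce__ instructs the unpickler to invoke an
    arbitrary callable at load time -- the core of pickle-based model
    file attacks. print() keeps this demo harmless."""
    def __reduce__(self):
        return (print, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
# Merely loading the bytes executes print(...); no method call is needed.
pickle.loads(blob)
```

A model file on disk is just such a blob, which is why `torch.load` on an untrusted checkpoint is equivalent to running the attacker's code.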

This rule leverages the Elastic Defend sensitive file open event, which is only collected for known sensitive file paths, combined with the New Terms rule type to alert on the first time a specific credential file is accessed by Python on a given host within a 7-day window.
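Conceptually, the New Terms evaluation can be sketched as first-seen tracking keyed on the rule's `new_terms_fields` pair (`host.id`, `file.path`), with the 7-day `history_window_start` suppressing repeats. This is an illustrative model, not the engine's implementation:

```python
from datetime import datetime, timedelta

HISTORY_WINDOW = timedelta(days=7)  # mirrors history_window_start = "now-7d"

class NewTermsTracker:
    """Alert only when a (host.id, file.path) pair has not been seen
    within the trailing history window."""
    def __init__(self):
        self.last_seen = {}  # (host_id, file_path) -> last event time

    def is_new_term(self, host_id, file_path, ts):
        key = (host_id, file_path)
        prev = self.last_seen.get(key)
        self.last_seen[key] = ts
        return prev is None or ts - prev > HISTORY_WINDOW

tracker = NewTermsTracker()
first = tracker.is_new_term("host-1", "/Users/alice/.ssh/id_rsa", datetime(2026, 2, 23))
again = tracker.is_new_term("host-1", "/Users/alice/.ssh/id_rsa", datetime(2026, 2, 24))
# first -> alert; again -> suppressed (seen within the last 7 days)
```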

Possible investigation steps

  • Examine the Python process command line and arguments to identify the script or command that triggered the file access.
  • Determine if the Python process was loading a model file (look for torch.load, pickle.load), running a standalone script, or executing via a compromised dependency.
  • Review the specific credential file that was accessed and assess the potential impact (SSH keys enable lateral movement, AWS credentials enable cloud access, browser cookies enable session hijacking).
  • Check for outbound network connections from the same process tree that may indicate credential exfiltration.
  • Investigate the origin of any recently downloaded scripts, packages, or model files on the host.
  • Look for file creation events in /tmp/ or other staging directories that may contain copies of the stolen credentials.
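The last step above can be scripted during triage. A hypothetical helper (directory list and age threshold are example values, not part of the rule):

```python
import os
import time

def recent_files(dirs=("/tmp",), max_age_seconds=3600):
    """Return files in the given staging directories modified within
    the last max_age_seconds -- candidate copies of stolen credentials."""
    now = time.time()
    hits = []
    for d in dirs:
        try:
            for entry in os.scandir(d):
                if (entry.is_file(follow_symlinks=False)
                        and now - entry.stat().st_mtime < max_age_seconds):
                    hits.append(entry.path)
        except FileNotFoundError:
            pass  # staging dir may not exist on this host
    return hits
```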

False positive analysis

  • Python-based secret management tools (e.g., aws-cli, gcloud) legitimately access credential files. Consider excluding known trusted executables by process path.
  • SSH automation scripts using paramiko or fabric may read SSH keys. Evaluate whether the access pattern matches known automation workflows.
  • Security scanning tools running Python may enumerate credential files as part of their assessment.

Response and remediation

  • Immediately rotate any credentials that were potentially accessed (SSH keys, AWS access keys, cloud tokens).
  • Quarantine the Python process and investigate the source script, package, or model file that triggered the access.
  • If a malicious file is confirmed, identify all hosts where it may have been distributed.
  • Review outbound network connections from the host around the time of the credential access to check for exfiltration.
  • Consider implementing weights_only=True enforcement for PyTorch model loading across the environment.
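`weights_only=True` works by restricting `torch.load`'s unpickler to an allowlist of tensor-related types. The same principle can be sketched with the standard library's `pickle.Unpickler.find_class` hook (the allowlist here is an illustrative stand-in, not PyTorch's actual list):

```python
import io
import pickle

# Example allowlist: refuse to resolve any global not explicitly permitted,
# which blocks __reduce__-style payloads (os.system, eval, etc.).
ALLOWED = {("builtins", "list"), ("builtins", "dict")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(blob):
    """Deserialize untrusted bytes with global resolution restricted."""
    return RestrictedUnpickler(io.BytesIO(blob)).load()
```

Plain containers deserialize normally, while any payload that needs to resolve a callable outside the allowlist fails before it can execute.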

References

  • https://blog.trailofbits.com/2024/06/11/exploiting-ml-models-with-pickle-file-attacks-part-1/
  • https://github.com/trailofbits/fickling