First Time Python Created a LaunchAgent or LaunchDaemon
Detects the first time a Python process creates or modifies a LaunchAgent or LaunchDaemon plist file on a given host. Malicious Python scripts, compromised dependencies, or model file deserialization can establish persistence on macOS by writing plist files to LaunchAgent or LaunchDaemon directories. Legitimate Python processes do not typically create persistence mechanisms, so a first occurrence is a strong indicator of compromise.
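The deserialization vector mentioned above is worth a concrete illustration. The sketch below is a benign demonstration of the pickle `__reduce__` primitive: the `(callable, args)` tuple returned by `__reduce__` is invoked automatically during unpickling, which is how a malicious model or data file can execute arbitrary Python, including code that drops a plist. Here the payload is a harmless `print` call, and the class name is hypothetical.

```python
import pickle

# Benign demonstration of the pickle __reduce__ code-execution primitive.
# Whatever (callable, args) tuple __reduce__ returns is invoked during
# unpickling -- a malicious payload would substitute something like
# os.system with a command that writes a LaunchAgent plist.
class PayloadDemo:  # hypothetical class name, for illustration only
    def __reduce__(self):
        return (print, ("code ran during pickle.loads()",))

blob = pickle.dumps(PayloadDemo())
result = pickle.loads(blob)  # prints the message; loading == execution
```

Note that no method of `PayloadDemo` is ever called explicitly; deserializing the bytes is enough to run the payload, which is why untrusted model files are a viable initial-execution vector.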
Elastic rule (View on GitHub)
[metadata]
creation_date = "2026/02/23"
integration = ["endpoint"]
maturity = "production"
updated_date = "2026/02/23"

[rule]
author = ["Elastic"]
description = """
Detects the first time a Python process creates or modifies a LaunchAgent or LaunchDaemon plist file on a given host.
Malicious Python scripts, compromised dependencies, or model file deserialization can establish persistence on macOS by
writing plist files to LaunchAgent or LaunchDaemon directories. Legitimate Python processes do not typically create
persistence mechanisms, so a first occurrence is a strong indicator of compromise.
"""
from = "now-9m"
index = ["logs-endpoint.events.persistence-*"]
language = "kuery"
license = "Elastic License v2"
name = "First Time Python Created a LaunchAgent or LaunchDaemon"
note = """## Triage and analysis

### Investigating First Time Python Created a LaunchAgent or LaunchDaemon

macOS LaunchAgents and LaunchDaemons are plist files that configure programs to run automatically at login or boot. Attackers who achieve Python code execution — whether through malicious scripts, compromised dependencies, or model file deserialization (e.g., pickle/PyTorch `__reduce__`) — can drop plist files to establish persistence on the compromised host. This ensures their payload survives reboots and user logouts.

This rule uses the Elastic Defend persistence event type (`event.action:"launch_daemon"`), which captures plist metadata including the program arguments, run-at-load configuration, and keep-alive settings. The New Terms rule type alerts on the first time a Python process creates a LaunchAgent or LaunchDaemon on a given host within a 7-day window.

### Possible investigation steps

- Review the persistence event fields (`Persistence.runatload`, `Persistence.keepalive`, `Persistence.args`, `Persistence.path`) to understand the plist configuration.
- Examine the program path and arguments specified in the plist to determine if they reference a known legitimate application or a suspicious binary.
- Determine if the Python process was loading a model file (look for `torch.load`, `pickle.load`), running a standalone script, or executing via a compromised dependency.
- Verify if the target binary referenced in the plist exists on disk and whether it is signed or trusted.
- Investigate the origin of any recently downloaded scripts, packages, or model files on the host.
- Check for other persistence mechanisms that may have been established around the same time.

### False positive analysis

- Some Python-based system management tools (e.g., Ansible, SaltStack) may legitimately create LaunchAgent or LaunchDaemon plist files. Evaluate whether the activity matches a known automation workflow.
- Python-based application installers may create plist files during setup. Check if the activity correlates with a known software installation.

### Response and remediation

- Immediately unload the suspicious LaunchAgent or LaunchDaemon using `launchctl unload` with the plist path.
- Remove the suspicious plist file and any associated binary it references.
- Kill any processes launched by the plist file.
- Investigate and quarantine the Python script, package, or model file that created the persistence mechanism.
- Scan the host for additional indicators of compromise.
- If a malicious file is confirmed, identify all hosts where it may have been distributed.
"""
references = [
    "https://blog.trailofbits.com/2024/06/11/exploiting-ml-models-with-pickle-file-attacks-part-1/",
    "https://github.com/trailofbits/fickling",
]
risk_score = 47
rule_id = "25368123-b7b8-4344-9fd4-df28051b4c6e"
severity = "medium"
tags = [
    "Domain: Endpoint",
    "OS: macOS",
    "Use Case: Threat Detection",
    "Tactic: Persistence",
    "Data Source: Elastic Defend",
    "Resources: Investigation Guide",
    "Domain: LLM",
]
timestamp_override = "event.ingested"
type = "new_terms"
query = '''
host.os.type:macos and event.action:"launch_daemon" and
process.name:python*
'''

[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1543"
name = "Create or Modify System Process"
reference = "https://attack.mitre.org/techniques/T1543/"
[[rule.threat.technique.subtechnique]]
id = "T1543.001"
name = "Launch Agent"
reference = "https://attack.mitre.org/techniques/T1543/001/"

[rule.threat.tactic]
id = "TA0003"
name = "Persistence"
reference = "https://attack.mitre.org/tactics/TA0003/"

[rule.new_terms]
field = "new_terms_fields"
value = ["host.id", "file.path"]
[[rule.new_terms.history_window_start]]
field = "history_window_start"
value = "now-7d"
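For reference, the artifact this rule fires on can be reproduced harmlessly with the standard-library `plistlib`. The sketch below writes a minimal launchd plist to a temporary directory rather than to `~/Library/LaunchAgents/` or `/Library/LaunchDaemons/`, where a real persistence attempt would land; the label and program path are hypothetical, and the keys map onto the `Persistence.*` fields surfaced in the alert.

```python
import plistlib
import tempfile
from pathlib import Path

# Minimal launchd property list, written by Python -- the behavior this rule
# detects. Label and program path are hypothetical placeholders.
agent = {
    "Label": "com.example.demo",            # hypothetical label
    "ProgramArguments": ["/usr/bin/true"],  # maps to Persistence.args
    "RunAtLoad": True,                      # maps to Persistence.runatload
    "KeepAlive": False,                     # maps to Persistence.keepalive
}

# Write to a temp dir instead of a real LaunchAgents/LaunchDaemons directory.
plist_path = Path(tempfile.mkdtemp()) / "com.example.demo.plist"
with open(plist_path, "wb") as fh:
    plistlib.dump(agent, fh)
```

Reading a file back with `plistlib.load` is also a quick way to inspect a suspicious plist's program arguments and run-at-load settings during triage.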
References
- https://blog.trailofbits.com/2024/06/11/exploiting-ml-models-with-pickle-file-attacks-part-1/
- https://github.com/trailofbits/fickling
Related rules
- Unusual Process Modifying GenAI Configuration File
- First Time Python Accessed Sensitive Credential Files
- First Time Python Spawned a Shell on Host
- Execution via OpenClaw Agent
- GenAI Process Accessing Sensitive Files