Unusual Kubernetes Sensitive Workload Modification
Detects the creation or modification of sensitive workloads, such as DaemonSets, Deployments, or CronJobs, by an unusual combination of user agent, source IP, and username, which may indicate privilege escalation or unauthorized access within the cluster.
Elastic rule (View on GitHub)
[metadata]
creation_date = "2026/03/05"
integration = ["kubernetes"]
maturity = "production"
updated_date = "2026/03/05"

[rule]
author = ["Elastic"]
description = """
Detects the creation or modification of several sensitive workloads, such as DaemonSets, Deployments,
or CronJobs, by an unusual user agent, source IP and username, which may indicate privilege escalation
or unauthorized access within the cluster.
"""
index = ["logs-kubernetes.audit_logs-*"]
language = "kuery"
license = "Elastic License v2"
name = "Unusual Kubernetes Sensitive Workload Modification"
note = """## Triage and analysis

> **Disclaimer**:
> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.

### Investigating Unusual Kubernetes Sensitive Workload Modification

This rule detects allowed create or patch activity against sensitive Kubernetes workloads (DaemonSets, Deployments, CronJobs) coming from an unusual combination of client, network origin, and user identity, which can signal stolen credentials, privilege escalation, or unauthorized control of cluster execution. Attackers commonly patch an existing Deployment to inject a new container or init container that runs with elevated privileges and pulls a remote payload, then rely on the workload controller to redeploy it across the environment.

### Possible investigation steps

- Retrieve the full audit event for the change and compare it to the most recent prior modification of the same workload to identify what was altered (e.g., image, command/args, env/secret refs, volumes, serviceAccount, securityContext, hostPath/hostNetwork, privileged settings).
- Attribute the action to a real identity by tracing the Kubernetes user to its backing cloud/IAM identity or kubeconfig/cert and validate whether the access path (SSO, token, service account, CI/CD runner) and source network location are expected for that operator.
- Determine blast radius by listing other recent creates/patches by the same identity and from the same origin across namespaces, and check for follow-on actions such as creating RBAC bindings, secrets, or additional controllers.
- Inspect the affected workload's rollout status and pod specs to confirm whether new pods were created, then review container images, pull registries, and runtime behavior for indicators of compromise (unexpected network egress, crypto-mining, credential access, or exec activity).
- Validate the change against an approved deployment workflow by correlating with GitOps/CI commit history and change tickets, and if unapproved, contain by scaling down/rolling back the workload and revoking the credential or token used.

### False positive analysis

- A legitimate on-call engineer performs an emergency `kubectl` create/patch to a Deployment/CronJob/DaemonSet from a new workstation, VPN egress IP, or updated kubectl version, producing an unusual user_agent/source IP/username combination despite being authorized.
- A routine automation path changes (e.g., CI runner or service account rotated/migrated to a new node pool or network segment) and continues applying standard workload updates, causing the same create/patch activity to appear anomalous due to the new origin and client identity.

### Response and remediation

- Immediately pause impact by scaling the modified Deployment/CronJob to zero or deleting the new DaemonSet and stopping any active rollout while preserving the altered manifest for evidence.
- Roll back the workload to the last known-good version from GitOps/CI or prior ReplicaSet/Job template, then redeploy only after verifying container images, init containers, commands, serviceAccount, and privileged/host settings match the approved baseline.
- Revoke and rotate the credential used for the change (user token/cert or service account token), invalidate related kubeconfigs, and review/remove any newly created RBAC bindings, secrets, or service accounts tied to the same actor.
- Quarantine affected nodes and pods for analysis by cordoning/draining nodes that ran the new pods and collecting pod logs, container filesystem snapshots, and network egress details to identify payloads and persistence.
- Escalate to the incident response/on-call security team immediately if the change introduced privileged containers, hostPath mounts, hostNetwork, new external images/registries, or any unexpected DaemonSet creation across multiple nodes.
- Harden by enforcing admission controls to restrict privileged settings and sensitive namespaces, requiring changes via approved automation identities, and tightening RBAC so only designated deployment controllers can create/patch DaemonSets, Deployments, and CronJobs.
"""
references = [
    "https://heilancoos.github.io/research/2025/12/16/kubernetes.html#overly-permissive-role-based-access-control",
    "https://flare.io/learn/resources/blog/teampcp-cloud-native-ransomware",
]
risk_score = 21
rule_id = "78c6559d-47a7-4f30-91fe-7e2e983206c2"
severity = "low"
tags = [
    "Data Source: Kubernetes",
    "Domain: Kubernetes",
    "Use Case: Threat Detection",
    "Tactic: Privilege Escalation",
    "Tactic: Persistence",
    "Resources: Investigation Guide",
]
timestamp_override = "event.ingested"
type = "new_terms"
query = '''
event.dataset:"kubernetes.audit_logs" and user_agent.original:* and
kubernetes.audit.annotations.authorization_k8s_io/decision:"allow" and
kubernetes.audit.objectRef.resource:("daemonsets" or "deployments" or "cronjobs") and
kubernetes.audit.verb:("create" or "patch") and
not kubernetes.audit.user.groups:"system:masters"
'''

[[rule.threat]]
framework = "MITRE ATT&CK"

[[rule.threat.technique]]
id = "T1098"
name = "Account Manipulation"
reference = "https://attack.mitre.org/techniques/T1098/"

[[rule.threat.technique.subtechnique]]
id = "T1098.006"
name = "Additional Container Cluster Roles"
reference = "https://attack.mitre.org/techniques/T1098/006/"

[rule.threat.tactic]
id = "TA0004"
name = "Privilege Escalation"
reference = "https://attack.mitre.org/tactics/TA0004/"

[[rule.threat]]
framework = "MITRE ATT&CK"

[[rule.threat.technique]]
id = "T1098"
name = "Account Manipulation"
reference = "https://attack.mitre.org/techniques/T1098/"

[[rule.threat.technique.subtechnique]]
id = "T1098.006"
name = "Additional Container Cluster Roles"
reference = "https://attack.mitre.org/techniques/T1098/006/"

[rule.threat.tactic]
id = "TA0003"
name = "Persistence"
reference = "https://attack.mitre.org/tactics/TA0003/"

[rule.new_terms]
field = "new_terms_fields"
value = ["user_agent.original", "source.ip", "kubernetes.audit.user.username"]

[[rule.new_terms.history_window_start]]
field = "history_window_start"
value = "now-7d"
Triage and analysis
Disclaimer: This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
Investigating Unusual Kubernetes Sensitive Workload Modification
This rule detects allowed create or patch activity against sensitive Kubernetes workloads (DaemonSets, Deployments, CronJobs) coming from an unusual combination of client, network origin, and user identity, which can signal stolen credentials, privilege escalation, or unauthorized control of cluster execution. Attackers commonly patch an existing Deployment to inject a new container or init container that runs with elevated privileges and pulls a remote payload, then rely on the workload controller to redeploy it across the environment.
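The new-terms detection described above can be sketched in miniature: a (user agent, source IP, username) tuple alerts only if it has not been observed within the 7-day history window. Timestamps here are epoch seconds and events are assumed time-ordered; this is a conceptual sketch, not the Elastic implementation.

```python
# Sketch: "new terms" detection over the rule's identity tuple
# (user_agent.original, source.ip, kubernetes.audit.user.username).
# A combination fires only when unseen during the history window.

HISTORY_WINDOW = 7 * 24 * 3600  # matches history_window_start = "now-7d"

def find_new_terms(events):
    """Yield (timestamp, identity_tuple) for first-seen combinations.

    `events` is an iterable of (ts, user_agent, source_ip, username),
    sorted by ts.
    """
    last_seen = {}  # identity tuple -> most recent timestamp
    for ts, user_agent, source_ip, username in events:
        key = (user_agent, source_ip, username)
        prior = last_seen.get(key)
        if prior is None or ts - prior > HISTORY_WINDOW:
            yield ts, key  # unusual combination: candidate alert
        last_seen[key] = ts
```

A combination seen again within the window is suppressed; after more than seven quiet days the same tuple is treated as new again, mirroring the rolling history window.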
Possible investigation steps
- Retrieve the full audit event for the change and compare it to the most recent prior modification of the same workload to identify what was altered (e.g., image, command/args, env/secret refs, volumes, serviceAccount, securityContext, hostPath/hostNetwork, privileged settings).
- Attribute the action to a real identity by tracing the Kubernetes user to its backing cloud/IAM identity or kubeconfig/cert and validate whether the access path (SSO, token, service account, CI/CD runner) and source network location are expected for that operator.
- Determine blast radius by listing other recent creates/patches by the same identity and from the same origin across namespaces, and check for follow-on actions such as creating RBAC bindings, secrets, or additional controllers.
- Inspect the affected workload’s rollout status and pod specs to confirm whether new pods were created, then review container images, pull registries, and runtime behavior for indicators of compromise (unexpected network egress, crypto-mining, credential access, or exec activity).
- Validate the change against an approved deployment workflow by correlating with GitOps/CI commit history and change tickets, and if unapproved, contain by scaling down/rolling back the workload and revoking the credential or token used.
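The first step above, comparing the modified workload to its prior version, can be sketched as a field-level diff over the security-relevant parts of a pod spec. The dicts below are illustrative, shaped like a Deployment's `.spec.template.spec`; in practice you would pull both versions from audit events or `kubectl rollout history`.

```python
# Sketch: report which security-relevant pod-spec fields changed between
# two versions of a workload. Field list and sample specs are illustrative.

SENSITIVE_FIELDS = [
    "containers", "initContainers", "serviceAccountName",
    "hostNetwork", "volumes", "securityContext",
]

def diff_sensitive_fields(before: dict, after: dict) -> list:
    """Return the sensitive top-level fields whose values differ."""
    return [f for f in SENSITIVE_FIELDS if before.get(f) != after.get(f)]

before = {
    "containers": [{"name": "app", "image": "registry.local/app:1.2"}],
    "serviceAccountName": "app-sa",
}
after = {
    "containers": [
        {"name": "app", "image": "registry.local/app:1.2"},
        {"name": "helper", "image": "203.0.113.9/payload:latest",
         "securityContext": {"privileged": True}},
    ],
    "serviceAccountName": "app-sa",
    "hostNetwork": True,
}
print(diff_sensitive_fields(before, after))  # ['containers', 'hostNetwork']
```

A hit on `containers` or `hostNetwork`, as in this hypothetical example, is exactly the injected-sidecar pattern the rule description warns about.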
False positive analysis
- A legitimate on-call engineer performs an emergency `kubectl` create/patch to a Deployment/CronJob/DaemonSet from a new workstation, VPN egress IP, or updated kubectl version, producing an unusual user_agent/source IP/username combination despite being authorized.
- A routine automation path changes (e.g., CI runner or service account rotated/migrated to a new node pool or network segment) and continues applying standard workload updates, causing the same create/patch activity to appear anomalous due to the new origin and client identity.
Response and remediation
- Immediately pause impact by scaling the modified Deployment/CronJob to zero or deleting the new DaemonSet and stopping any active rollout while preserving the altered manifest for evidence.
- Roll back the workload to the last known-good version from GitOps/CI or prior ReplicaSet/Job template, then redeploy only after verifying container images, init containers, commands, serviceAccount, and privileged/host settings match the approved baseline.
- Revoke and rotate the credential used for the change (user token/cert or service account token), invalidate related kubeconfigs, and review/remove any newly created RBAC bindings, secrets, or service accounts tied to the same actor.
- Quarantine affected nodes and pods for analysis by cordoning/draining nodes that ran the new pods and collecting pod logs, container filesystem snapshots, and network egress details to identify payloads and persistence.
- Escalate to the incident response/on-call security team immediately if the change introduced privileged containers, hostPath mounts, hostNetwork, new external images/registries, or any unexpected DaemonSet creation across multiple nodes.
- Harden by enforcing admission controls to restrict privileged settings and sensitive namespaces, requiring changes via approved automation identities, and tightening RBAC so only designated deployment controllers can create/patch DaemonSets, Deployments, and CronJobs.
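The final hardening step can be illustrated with the kind of check a validating admission webhook (or a policy engine such as Kyverno or OPA Gatekeeper) applies before allowing a workload change: reject pod specs that introduce privileged containers, hostPath mounts, or hostNetwork. The function below is a simplified sketch over pod-spec-shaped dicts, not a production policy.

```python
# Sketch: admission-style policy check over a pod spec dict, flagging the
# privileged/host settings called out in the escalation criteria above.

def violations(pod_spec: dict) -> list:
    """Return a list of human-readable policy violations in a pod spec."""
    found = []
    if pod_spec.get("hostNetwork"):
        found.append("hostNetwork enabled")
    for vol in pod_spec.get("volumes", []):
        if "hostPath" in vol:
            found.append(f"hostPath volume: {vol.get('name', '?')}")
    # Init containers carry the same risk as regular containers.
    for ctr in pod_spec.get("containers", []) + pod_spec.get("initContainers", []):
        if ctr.get("securityContext", {}).get("privileged"):
            found.append(f"privileged container: {ctr.get('name', '?')}")
    return found
```

A non-empty result would translate to denying the admission request; in a real cluster, prefer Pod Security Admission or an established policy engine over a hand-rolled webhook.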
References
- https://heilancoos.github.io/research/2025/12/16/kubernetes.html#overly-permissive-role-based-access-control
- https://flare.io/learn/resources/blog/teampcp-cloud-native-ransomware
Related rules
- Kubernetes Creation or Modification of Sensitive Role
- Kubernetes Cluster-Admin Role Binding Created
- Kubernetes Creation of a RoleBinding Referencing a ServiceAccount
- Kubernetes Sensitive RBAC Change Followed by Workload Modification
- Kubernetes Service Account Modified RBAC Objects