AWS S3 Bucket Enumeration or Brute Force

Identifies a high number of failed S3 operations from a single source and account (or anonymous account) within a short timeframe. This activity can indicate an attempt to inflate an account's bill with excessive random operations, exhaust resources, or enumerate bucket names for discovery.
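The discovery half of this behavior is simple at the HTTP level: an anonymous request for a candidate bucket name typically returns 404 (NoSuchBucket) when the name is unused and 403 (AccessDenied) when the bucket exists but is private — and, where S3 data-event logging is enabled, each of those denied requests lands in CloudTrail as the `AccessDenied` events this rule counts. A minimal sketch of such a probe (not part of the rule; bucket names are hypothetical):

```python
import urllib.error
import urllib.request


def classify_bucket(status_code: int) -> str:
    """Map the HTTP status of an anonymous S3 request to a finding."""
    if status_code == 404:
        return "no-such-bucket"
    if status_code == 403:
        return "exists-access-denied"  # still logged/billed despite the denial
    if status_code == 200:
        return "exists-publicly-listable"
    return "inconclusive"  # e.g. redirects for buckets in another region


def probe_bucket(name: str) -> str:
    """Send one anonymous HEAD request for a candidate bucket name."""
    req = urllib.request.Request(
        f"https://{name}.s3.amazonaws.com", method="HEAD"
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return classify_bucket(resp.status)
    except urllib.error.HTTPError as err:
        return classify_bucket(err.code)
```

Running `probe_bucket("example-corp-backups")` in a loop over a wordlist is exactly the enumeration pattern the rule's failed-request threshold is designed to surface.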

Elastic rule (View on GitHub)

[metadata]
creation_date = "2024/05/01"
maturity = "production"
updated_date = "2024/11/07"
min_stack_comments = "ES|QL rule type is still in technical preview as of 8.13, however this rule was tested successfully"
min_stack_version = "8.13.0"

[rule]
author = ["Elastic"]
description = """
Identifies a high number of failed S3 operations from a single source and account (or anonymous account) within a short
timeframe. This activity can indicate an attempt to inflate an account's bill with excessive random operations, exhaust
resources, or enumerate bucket names for discovery.
"""
false_positives = ["Known or internal account IDs or automation"]
from = "now-6m"
language = "esql"
license = "Elastic License v2"
name = "AWS S3 Bucket Enumeration or Brute Force"
note = """## Triage and analysis

### Investigating AWS S3 Bucket Enumeration or Brute Force

AWS S3 buckets can be brute forced to cause financial impact against the resource owner. What makes this even riskier is that even private, locked-down buckets can still incur costs, even on an "Access Denied" response, while also being accessible from unauthenticated, anonymous accounts. This also appears to apply to several or all [operations](https://docs.aws.amazon.com/cli/latest/reference/s3api/) (GET, PUT, list-objects, etc.). Additionally, buckets are trivially discoverable by default as long as the bucket name is known, making them vulnerable to enumeration for discovery.

Attackers may attempt to enumerate names until a valid bucket is discovered and then pivot to cause financial impact, enumerate for more information, or brute force in other ways to attempt to exfiltrate data.

#### Possible investigation steps

- Examine the history of the operation requests from the same `source.address` and `cloud.account.id` to determine if there is other suspicious activity.
- Review similar requests and look at the `user.agent` info to ascertain the source of the requests (though do not rely on this too heavily, since it is controlled by the requester).
- Review other requests to the same `aws.s3.object.key`, as well as other `aws.s3.object.key` values accessed by the same `cloud.account.id` or `source.address`.
- Investigate other alerts associated with the user account during the past 48 hours.
- Validate that the activity is not related to planned patches, updates, or network administrator activity.
- Examine the request parameters. These may indicate the source of the program or the nature of the task being performed when the error occurred.
    - Check whether the error is related to unsuccessful attempts to enumerate or access objects, data, or secrets.
- Consider the source IP address and geolocation of the user who issued the command:
    - Do they look normal for the calling user?
    - If the source is an EC2 IP address, is it associated with an EC2 instance in one of your accounts, or is the source IP from an EC2 instance that's not under your control?
    - If it is an authorized EC2 instance, is the activity associated with normal behavior for the instance role or roles? Are there any other alerts or signs of suspicious activity involving this instance?
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Contact the account owner and confirm whether they are aware of this activity if it is suspicious.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours.

### False positive analysis

- Verify the `source.address` and `cloud.account.id` - there are some valid operations from within AWS directly that can cause failures and false positives. Additionally, failed automation can also cause false positives, but these should be identifiable by reviewing the `source.address` and `cloud.account.id`.

### Response and remediation

- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Assess the criticality of affected services and servers.
    - Work with your IT team to identify and minimize the impact on users.
    - Identify if the attacker is moving laterally and compromising other accounts, servers, or services.
    - Identify any regulatory or legal ramifications related to this activity.
- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions.
- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users.
- Consider enabling multi-factor authentication for users.
- Review the permissions assigned to the implicated user to ensure that the least privilege principle is being followed.
- Implement security best practices [outlined](https://aws.amazon.com/premiumsupport/knowledge-center/security-best-practices/) by AWS.
- Take the actions needed to return affected systems, data, or services to their normal operational levels.
- Identify the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
- Check for `PutBucketPolicy` event actions as well to see if they have been tampered with. While this rule monitors for denied requests, a single successful action that adds a backdoor to the bucket via a policy update (however the attacker obtained permissions) may be critical to identify during TDIR.
"""
references = [
    "https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1",
    "https://docs.aws.amazon.com/cli/latest/reference/s3api/"
]
risk_score = 21
rule_id = "5f0234fd-7f21-42af-8391-511d5fd11d5c"
severity = "low"
tags = [
    "Domain: Cloud",
    "Data Source: AWS",
    "Data Source: Amazon Web Services",
    "Data Source: AWS S3",
    "Resources: Investigation Guide",
    "Use Case: Log Auditing",
    "Tactic: Impact"
]
timestamp_override = "event.ingested"
type = "esql"

query = '''
from logs-aws.cloudtrail*
| where event.provider == "s3.amazonaws.com" and aws.cloudtrail.error_code == "AccessDenied"
// keep only relevant fields
| keep tls.client.server_name, source.address, cloud.account.id
| stats failed_requests = count(*) by tls.client.server_name, source.address, cloud.account.id
  // can modify the failed request count or tweak time window to fit environment
  // can add `not cloud.account.id in (KNOWN)` or specify in exceptions
| where failed_requests > 40
'''

[rule.investigation_fields]
field_names = [
    "source.address",
    "tls.client.server_name",
    "cloud.account.id",
    "failed_requests"
]

[[rule.threat]]
framework = "MITRE ATT&CK"

    [rule.threat.tactic]
    id = "TA0040"
    name = "Impact"
    reference = "https://attack.mitre.org/tactics/TA0040/"

        [[rule.threat.technique]]
        id = "T1657"
        name = "Financial Theft"
        reference = "https://attack.mitre.org/techniques/T1657/"


[[rule.threat]]
framework = "MITRE ATT&CK"

    [rule.threat.tactic]
    id = "TA0007"
    name = "Discovery"
    reference = "https://attack.mitre.org/tactics/TA0007/"

        [[rule.threat.technique]]
        id = "T1580"
        name = "Cloud Infrastructure Discovery"
        reference = "https://attack.mitre.org/techniques/T1580/"


[[rule.threat]]
framework = "MITRE ATT&CK"

    [rule.threat.tactic]
    id = "TA0009"
    name = "Collection"
    reference = "https://attack.mitre.org/tactics/TA0009/"

        [[rule.threat.technique]]
        id = "T1530"
        name = "Data from Cloud Storage"
        reference = "https://attack.mitre.org/techniques/T1530/"
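During triage, the same filter/group/threshold logic as the rule's ES|QL query can be applied to an exported slice of CloudTrail data. A minimal Python sketch over hypothetical flattened event dicts (field names mirror the ECS fields the rule keys on; this is an offline aid, not the detection itself):

```python
from collections import Counter


def failed_request_counts(events, threshold=40):
    """Offline mirror of the rule's ES|QL pipeline.

    `events` are hypothetical flattened CloudTrail documents
    (dicts keyed by dotted ECS field names).
    """
    counts = Counter()
    for event in events:
        # mirrors: where event.provider == "s3.amazonaws.com"
        if event.get("event.provider") != "s3.amazonaws.com":
            continue
        # mirrors: and aws.cloudtrail.error_code == "AccessDenied"
        if event.get("aws.cloudtrail.error_code") != "AccessDenied":
            continue
        # mirrors: stats failed_requests = count(*) by
        #          tls.client.server_name, source.address, cloud.account.id
        key = (
            event.get("tls.client.server_name"),
            event.get("source.address"),
            event.get("cloud.account.id"),
        )
        counts[key] += 1
    # mirrors: where failed_requests > 40
    return {key: n for key, n in counts.items() if n > threshold}
```

As with the rule itself, the threshold is the main tuning knob: lower it for quiet environments, raise it (or exclude known `cloud.account.id` values before counting) where internal automation generates routine denials.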
