AWS Bedrock Guardrails Detected Multiple Policy Violations Within a Single Blocked Request

Identifies multiple violations of AWS Bedrock guardrails within a single request, resulting in a block action, increasing the likelihood of malicious intent. Multiple violations imply that a user may be intentionally attempting to circumvent security controls, access sensitive information, or exploit a vulnerability in the system.

Elastic rule (View on GitHub)

[metadata]
creation_date = "2024/05/02"
integration = ["aws_bedrock"]
maturity = "production"
updated_date = "2025/09/25"

[rule]
author = ["Elastic"]
description = """
Identifies multiple violations of AWS Bedrock guardrails within a single request, resulting in a block action,
increasing the likelihood of malicious intent. Multiple violations imply that a user may be intentionally attempting
to circumvent security controls, access sensitive information, or exploit a vulnerability in the system.
"""
false_positives = ["Legitimate misunderstanding by users or overly strict policies"]
from = "now-60m"
interval = "10m"
language = "esql"
license = "Elastic License v2"
name = "AWS Bedrock Guardrails Detected Multiple Policy Violations Within a Single Blocked Request"
note = """## Triage and analysis

### Investigating AWS Bedrock Guardrails Detected Multiple Policy Violations Within a Single Blocked Request

Amazon Bedrock Guardrails is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications.

It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices.

Through Guardrails, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects,
and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language.

#### Possible investigation steps

- Identify the user account and the request that triggered multiple policy violations, and determine whether the account should be performing this kind of action.
- Investigate user activity for patterns, such as a burst of blocked requests, that might indicate a brute force attack.
- Investigate other alerts associated with the user account during the past 48 hours.
- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
- Examine the account's prompts and responses in the last 24 hours.
- If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.

### False positive analysis

- Verify that the user account that caused multiple policy violations is not testing new model deployments or updated compliance policies in Amazon Bedrock Guardrails.

### Response and remediation

- Initiate the incident response process based on the outcome of the triage.
- Disable or limit the account during the investigation and response.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Identify if the attacker is moving laterally and compromising other Amazon Bedrock services.
    - Identify any regulatory or legal ramifications related to this activity.
- Review the permissions assigned to the user group or role behind these requests to confirm that they are authorized and expected to access Amazon Bedrock, and that the least privilege principle is being followed.
- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
"""
references = [
    "https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-components.html",
    "https://atlas.mitre.org/techniques/AML.T0051",
    "https://atlas.mitre.org/techniques/AML.T0054",
    "https://www.elastic.co/security-labs/elastic-advances-llm-security",
]
risk_score = 21
rule_id = "f4c2515a-18bb-47ce-a768-1dc4e7b0fe6c"
setup = """## Setup

This rule requires that guardrails are configured in AWS Bedrock. For more information, see the AWS Bedrock documentation:

https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails-create.html
"""
severity = "low"
tags = [
    "Domain: LLM",
    "Data Source: AWS Bedrock",
    "Data Source: AWS S3",
    "Resources: Investigation Guide",
    "Use Case: Policy Violation",
    "Mitre Atlas: T0051",
    "Mitre Atlas: T0054",
]
timestamp_override = "event.ingested"
type = "esql"

query = '''
from logs-aws_bedrock.invocation-*

// Filter for policy-blocked requests
| where gen_ai.policy.action == "BLOCKED"

// Count the number of policy matches per request (gen_ai.policy.name is multi-valued)
| eval Esql.ml_policy_violations_mv_count = mv_count(gen_ai.policy.name)

// Filter for requests with more than one policy match
| where Esql.ml_policy_violations_mv_count > 1

// Keep relevant fields
| keep
  gen_ai.policy.action,
  Esql.ml_policy_violations_mv_count,
  user.id,
  gen_ai.request.model.id,
  cloud.account.id

// Aggregate requests with multiple violations
| stats
    Esql.ml_policy_violations_total_unique_requests_count = count(*)
  by
    Esql.ml_policy_violations_mv_count,
    user.id,
    gen_ai.request.model.id,
    cloud.account.id

// Sort by number of unique requests
| sort Esql.ml_policy_violations_total_unique_requests_count desc
'''
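
The rule reports only the count of violated policies per blocked request. During triage it is often useful to see exactly which policies fired; a minimal follow-up sketch, assuming the same index pattern and fields used by the rule query above:

from logs-aws_bedrock.invocation-*

// Limit to blocked requests with more than one policy violation
| where gen_ai.policy.action == "BLOCKED" and mv_count(gen_ai.policy.name) > 1

// Split the multi-valued policy field into one row per violated policy
| mv_expand gen_ai.policy.name

// Show which policies fired, and for whom
| keep @timestamp, user.id, cloud.account.id, gen_ai.request.model.id, gen_ai.policy.name
| sort @timestamp desc
| limit 100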

Triage and analysis

Investigating AWS Bedrock Guardrails Detected Multiple Policy Violations Within a Single Blocked Request

Amazon Bedrock Guardrails is a set of features within Amazon Bedrock designed to help businesses apply robust safety and privacy controls to their generative AI applications.

It enables users to set guidelines and filters that manage content quality, relevancy, and adherence to responsible AI practices.

Through Guardrails, organizations can define "denied topics" to prevent the model from generating content on specific, undesired subjects, and they can establish thresholds for harmful content categories, including hate speech, violence, or offensive language.
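
To see which of these denied topics and content-filter categories actually fire in your environment, blocked invocations can be aggregated per policy name. A sketch, assuming gen_ai.policy.name carries the matched policy names as in the rule query:

from logs-aws_bedrock.invocation-*
| where gen_ai.policy.action == "BLOCKED"

// One row per violated policy so each policy is counted individually
| mv_expand gen_ai.policy.name

// Rank policies by how often they block requests
| stats blocked_requests = count(*) by gen_ai.policy.name
| sort blocked_requests desc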

Possible investigation steps

  • Identify the user account and the request that triggered multiple policy violations, and determine whether the account should be performing this kind of action.
  • Investigate user activity for patterns, such as a burst of blocked requests, that might indicate a brute force attack.
  • Investigate other alerts associated with the user account during the past 48 hours.
  • Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day?
  • Examine the account's prompts and responses in the last 24 hours (a query sketch follows this list).
  • If you suspect the account has been compromised, scope potentially compromised assets by tracking Amazon Bedrock model access, prompts generated, and responses to the prompts by the account in the last 24 hours.
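
For the prompt and response review above, a starting point is to pull the account's recent invocations from the same index the rule reads. In this sketch, the user.id value is a placeholder, and gen_ai.prompt / gen_ai.completion are assumed field names; substitute whatever your aws_bedrock integration version actually maps:

from logs-aws_bedrock.invocation-*

// Placeholder account under investigation; 24-hour window from the triage guidance
| where user.id == "suspect-user-id" and @timestamp > now() - 24 hours

// gen_ai.prompt and gen_ai.completion are assumed field names; check your mapping
| keep @timestamp, gen_ai.request.model.id, gen_ai.policy.action, gen_ai.prompt, gen_ai.completion
| sort @timestamp desc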

False positive analysis

  • Verify that the user account that caused multiple policy violations is not testing new model deployments or updated compliance policies in Amazon Bedrock Guardrails; the sketch below can help surface recently adopted models.
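
One way to check the testing scenario is to list which models the account has invoked recently; a model first seen only days before the violations suggests a new deployment under test rather than abuse. A sketch with a placeholder user.id:

from logs-aws_bedrock.invocation-*
| where user.id == "suspect-user-id" and @timestamp > now() - 7 days

// Recently first-seen models hint at new deployments being tested
| stats request_count = count(*), first_seen = min(@timestamp) by gen_ai.request.model.id
| sort first_seen desc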

Response and remediation

  • Initiate the incident response process based on the outcome of the triage.
  • Disable or limit the account during the investigation and response.
  • Identify the possible impact of the incident and prioritize accordingly; the following actions, along with the scoping sketch after this list, can help you gain context:
    • Identify the account role in the cloud environment.
    • Identify if the attacker is moving laterally and compromising other Amazon Bedrock services.
    • Identify any regulatory or legal ramifications related to this activity.
  • Review the permissions assigned to the user group or role behind these requests to confirm that they are authorized and expected to access Amazon Bedrock, and that the least privilege principle is being followed.
  • Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector.
  • Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
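
For the impact-scoping step referenced above, comparing an account's blocked requests against its overall Bedrock activity per model gives a quick severity signal. A sketch with a placeholder user.id and a 48-hour window matching the alert-review guidance:

from logs-aws_bedrock.invocation-*
| where user.id == "suspect-user-id" and @timestamp > now() - 48 hours

// Flag blocked requests so they can be summed alongside the total
| eval is_blocked = case(gen_ai.policy.action == "BLOCKED", 1, 0)

// Compare blocked volume to total volume per model
| stats total_requests = count(*), blocked_requests = sum(is_blocked) by gen_ai.request.model.id
| sort blocked_requests desc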
