Detects when GenAI tools access sensitive files such as cloud credentials, SSH keys, browser password databases, or shell configurations. Attackers leverage GenAI agents to systematically locate and exfiltrate credentials, API keys, and tokens. Access to credential stores (.aws/credentials, .ssh/id_*) suggests harvesting, while writes to shell configs (.bashrc, .zshrc) indicate persistence attempts. Note: On Linux, only creation events are available; access events are not yet implemented.
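A minimal Python sketch of the matching this rule describes, assuming ECS-style events with process.name and file.path fields; the GenAI process list is an illustrative assumption, not the rule's actual list:

```python
import fnmatch

# Illustrative process and path lists; the shipped rule's values may differ.
GENAI_PROCESSES = {"claude", "ollama", "cursor"}
SENSITIVE_PATTERNS = [
    "*/.aws/credentials",     # cloud credentials
    "*/.ssh/id_*",            # SSH private keys
    "*/Login Data",           # Chromium browser password database
    "*/.bashrc", "*/.zshrc",  # shell configs (persistence target)
]

def is_suspicious(event: dict) -> bool:
    """Flag a GenAI process touching credential stores or shell configs."""
    if event.get("process.name") not in GENAI_PROCESSES:
        return False
    path = event.get("file.path", "")
    return any(fnmatch.fnmatch(path, pat) for pat in SENSITIVE_PATTERNS)

print(is_suspicious({"process.name": "claude",
                     "file.path": "/home/u/.aws/credentials"}))  # True
```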
Read More -
GenAI Process Compiling or Generating Executables
Dec 5, 2025 · Domain: Endpoint OS: Linux OS: macOS OS: Windows Use Case: Threat Detection Tactic: Execution Tactic: Defense Evasion Data Source: Elastic Defend Data Source: Sysmon Data Source: Auditd Manager Data Source: Microsoft Defender for Endpoint Data Source: SentinelOne Resources: Investigation Guide Domain: LLM Mitre Atlas: T0053 · Detects when GenAI tools spawn compilers or packaging tools to generate executables. Attackers leverage local LLMs to autonomously generate and compile malware, droppers, or implants. Python packaging tools (pyinstaller, nuitka, pyarmor) are particularly high-risk as they create standalone executables that can be deployed without dependencies. This rule focuses on compilation activity that produces output binaries, filtering out inspection-only operations.
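A hedged sketch of the parent/child check this implies, assuming ECS-style process events; the parent names, compiler list, and output-flag heuristic are illustrative assumptions:

```python
# Illustrative names; the rule's actual parent and compiler lists may differ.
GENAI_PARENTS = {"claude", "ollama", "aider"}
COMPILERS = {"gcc", "clang", "pyinstaller", "nuitka", "pyarmor"}
OUTPUT_FLAGS = {"-o", "--onefile", "--standalone"}  # hints a binary is produced

def compiles_output(event: dict) -> bool:
    """Flag compilers spawned by a GenAI tool that appear to emit a binary,
    skipping inspection-only runs such as `gcc --version`."""
    if event.get("process.parent.name") not in GENAI_PARENTS:
        return False
    if event.get("process.name") not in COMPILERS:
        return False
    return any(arg in OUTPUT_FLAGS for arg in event.get("process.args", []))
```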
Read More -
Detects when GenAI tools connect to domains using suspicious TLDs commonly abused for malware C2 infrastructure. TLDs like .top, .xyz, .ml, .cf, .onion are frequently used in phishing and malware campaigns. Legitimate GenAI services use well-established domains (.com, .ai, .io), so connections to suspicious TLDs may indicate compromised tools, malicious plugins, or AI-generated code connecting to attacker infrastructure.
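A minimal sketch of the TLD check, assuming events carrying process.name and destination.domain fields; the GenAI process list is an assumption, while the TLD set comes from the description:

```python
SUSPICIOUS_TLDS = {"top", "xyz", "ml", "cf", "onion"}  # from the description
GENAI_PROCESSES = {"claude", "ollama", "cursor"}       # illustrative

def suspicious_tld(event: dict) -> bool:
    """Flag a GenAI process contacting a domain on a commonly abused TLD."""
    if event.get("process.name") not in GENAI_PROCESSES:
        return False
    domain = event.get("destination.domain", "")
    return domain.rsplit(".", 1)[-1].lower() in SUSPICIOUS_TLDS

print(suspicious_tld({"process.name": "ollama",
                      "destination.domain": "update-check.top"}))  # True
```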
Read More -
Detects GenAI tools connecting to unusual domains on macOS. Adversaries may compromise GenAI tools through prompt injection, malicious MCP servers, or poisoned plugins to establish C2 channels or exfiltrate sensitive data to attacker-controlled infrastructure. AI agents with network access can be manipulated to beacon to external servers, download malicious payloads, or transmit harvested credentials and documents.
Read More -
GenAI Process Performing Encoding/Chunking Prior to Network Activity
Dec 5, 2025 · Domain: Endpoint OS: Linux OS: macOS OS: Windows Use Case: Threat Detection Tactic: Exfiltration Tactic: Defense Evasion Data Source: Elastic Defend Data Source: Sysmon Data Source: Microsoft Defender for Endpoint Data Source: SentinelOne Resources: Investigation Guide Domain: LLM Mitre Atlas: T0086 · Detects when GenAI processes perform encoding or chunking (base64, gzip, tar, zip) followed by outbound network activity. This sequence indicates data preparation for exfiltration. Attackers encode or compress sensitive data before transmission to obfuscate contents and evade detection. Legitimate GenAI workflows rarely encode data before network communications.
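A rough sketch of the sequence logic, assuming a timestamped event stream in which the GenAI tool spawns the encoder child and later makes the connection itself; all field names and the window length are assumptions:

```python
from collections import defaultdict

WINDOW_SECONDS = 300.0                       # illustrative sequence window
ENCODERS = {"base64", "gzip", "tar", "zip"}

last_encode = defaultdict(lambda: float("-inf"))  # GenAI pid -> last encode time

def on_event(event: dict) -> bool:
    """Return True when a GenAI process makes a network connection shortly
    after spawning an encode/compress child (joined on the GenAI pid)."""
    ts = event["@timestamp"]
    if event.get("event.category") == "process" and event.get("process.name") in ENCODERS:
        last_encode[event["process.parent.pid"]] = ts  # encoder's parent is the GenAI tool
        return False
    if event.get("event.category") == "network":
        return ts - last_encode[event["process.pid"]] <= WINDOW_SECONDS
    return False
```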
Read More -
Detects unusual modification of GenAI tool configuration files. Adversaries may inject malicious MCP server configurations to hijack AI agents for persistence, C2, or data exfiltration. Attack vectors include malware or scripts directly poisoning config files, supply chain attacks via compromised dependencies, and prompt injection attacks that abuse the GenAI tool itself to modify its own configuration. Unauthorized MCP servers added to these configs execute arbitrary commands when the AI tool is next invoked.
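A minimal sketch of the tamper check, assuming file-write events with file.path and process.name; the config suffixes and the writer allowlist are illustrative assumptions:

```python
# Illustrative MCP/agent config locations and writer allowlist.
MCP_CONFIG_SUFFIXES = (".claude.json", ".cursor/mcp.json", "mcp_config.json")
EXPECTED_WRITERS = {"claude", "cursor", "code"}

def config_tampering(event: dict) -> bool:
    """Flag writes to agent config files by unexpected processes."""
    path = event.get("file.path", "")
    if not path.endswith(MCP_CONFIG_SUFFIXES):
        return False
    return event.get("process.name") not in EXPECTED_WRITERS
```

Note that, per the description, prompt injection can drive the tool itself to rewrite its own configuration, so even writes from allowlisted processes may deserve review.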
Read More -
Identifies multiple violations of AWS Bedrock guardrails within a single request, resulting in a block action, increasing the likelihood of malicious intent. Multiple violations imply that a user may be intentionally attempting to circumvent security controls, access sensitive information, or exploit a vulnerability in the system.
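A minimal sketch of the single-request check, assuming a parsed guardrail log with action and violated_policies fields (these names are assumptions, not actual Bedrock log fields):

```python
def multi_violation_block(log: dict, threshold: int = 2) -> bool:
    """Flag a single blocked request that tripped several guardrail policies."""
    if log.get("action") != "BLOCKED":
        return False
    return len(log.get("violated_policies", [])) >= threshold

print(multi_violation_block(
    {"action": "BLOCKED",
     "violated_policies": ["topic_policy", "word_policy"]}))  # True
```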
Read More -
Detects repeated high-confidence 'BLOCKED' actions coupled with 'Content Filter' policy violations carrying codes such as 'MISCONDUCT', 'HATE', 'SEXUAL', 'INSULTS', 'PROMPT_ATTACK', or 'VIOLENCE', indicating persistent misuse or attempts to probe the model's ethical boundaries.
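A minimal counting sketch for this family of rules, assuming flattened log dicts; the field names and threshold are assumptions. The same pattern fits the three policy-name variants that follow, keyed on the policy name instead of the filter code:

```python
from collections import Counter

FILTER_CODES = {"MISCONDUCT", "HATE", "SEXUAL", "INSULTS",
                "PROMPT_ATTACK", "VIOLENCE"}

def persistent_offenders(logs: list, threshold: int = 5) -> set:
    """Return users with >= threshold high-confidence content-filter blocks."""
    hits = Counter(
        log["user"] for log in logs
        if log.get("action") == "BLOCKED"
        and log.get("confidence") == "HIGH"
        and log.get("filter_code") in FILTER_CODES
    )
    return {user for user, n in hits.items() if n >= threshold}
```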
Read More -
Detects repeated compliance violation 'BLOCKED' actions coupled with a specific policy name such as 'sensitive_information_policy', indicating persistent misuse or attempts to probe the model's sensitive-information filters.
Read More -
Detects repeated compliance violation 'BLOCKED' actions coupled with a specific policy name such as 'topic_policy', indicating persistent misuse or attempts to probe the model's denied topics.
Read More -
Detects repeated compliance violation 'BLOCKED' actions coupled with a specific policy name such as 'word_policy', indicating persistent misuse or attempts to probe the model's word filters.
Read More -
Identifies multiple successive failed attempts to use denied model resources within AWS Bedrock. This could indicate attempts to bypass the limitations of approved models, or to force an impact on the environment by incurring exorbitant costs.
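A minimal sketch of the denial counting, assuming flattened CloudTrail-like events; the field names and threshold are assumptions (InvokeModel and InvokeModelWithResponseStream are the real API names):

```python
from collections import Counter

BEDROCK_INVOKE_APIS = {"InvokeModel", "InvokeModelWithResponseStream"}

def repeated_denials(events: list, threshold: int = 3) -> set:
    """Return users accumulating several access-denied Bedrock invocations."""
    denials = Counter(
        e["user"] for e in events
        if e.get("api") in BEDROCK_INVOKE_APIS
        and e.get("error_code") == "AccessDeniedException"
    )
    return {u for u, n in denials.items() if n >= threshold}
```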
Read More -
Identifies multiple violations of AWS Bedrock guardrails by the same user in the same account over a session. Multiple violations imply that a user may be intentionally attempting to circumvent security controls, access sensitive information, or exploit a vulnerability in the system.
Read More -
Identifies multiple AWS Bedrock executions within a one-minute window, without guardrails, by the same user in the same account over a session. Multiple consecutive executions imply that a user may be intentionally bypassing security controls by not routing requests through the desired guardrail configuration, in order to access sensitive information or exploit a vulnerability in the system.
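A sliding-window sketch of the one-minute check, assuming events carry timestamp, account, user, and a guardrail_id that is empty when no guardrail was applied (all assumed names):

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60.0  # the rule's one-minute window
THRESHOLD = 5          # illustrative count

recent = defaultdict(deque)  # (account, user) -> timestamps of unguarded calls

def unguarded_burst(event: dict) -> bool:
    """Flag a user issuing many Bedrock invocations with no guardrail
    configured inside a one-minute window."""
    if event.get("guardrail_id"):  # a guardrail was applied; not of interest
        return False
    key = (event["account"], event["user"])
    q = recent[key]
    q.append(event["timestamp"])
    while q and event["timestamp"] - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) >= THRESHOLD
```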
Read More -
Detects when Azure OpenAI requests result in zero response length, potentially indicating issues in output handling that might lead to security exploits such as data leaks or code execution. This can occur in cases where the API fails to handle outputs correctly under certain input conditions.
Read More -
Detects potential resource exhaustion or data breach attempts by monitoring for users who consistently generate high input token counts, submit numerous requests, and receive large responses. This behavior could indicate an attempt to overload the system or extract an unusually large amount of data, possibly revealing sensitive information or causing service disruptions.
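A minimal sketch of the three-signal threshold, assuming per-user aggregates have already been computed; all field names and limits are illustrative:

```python
def resource_abuse(stats: dict,
                   max_requests: int = 100,
                   max_input_tokens: int = 1_000_000,
                   max_output_tokens: int = 500_000) -> bool:
    """Flag a user whose request count and token totals all exceed limits;
    requiring all three reduces noise from a single large but benign job."""
    return (stats["requests"] >= max_requests
            and stats["input_tokens"] >= max_input_tokens
            and stats["output_tokens"] >= max_output_tokens)
```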
Read More -
Monitors for suspicious activities that may indicate theft or unauthorized duplication of machine learning (ML) models, such as unauthorized API calls, atypical access patterns, or large data transfers that are unusual during model interactions.
Read More -
Detects patterns indicative of Denial-of-Service (DoS) attacks on machine learning (ML) models, focusing on unusually high volume and frequency of requests or patterns of requests that are known to cause performance degradation or service disruption, such as large input sizes or rapid API calls.
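A minimal sketch of two of the signals named above (request rate and oversized inputs), assuming a time-ordered list of events with timestamp and input_size fields, both assumed names:

```python
def dos_indicators(events: list, rate_per_min: float = 120,
                   large_input: int = 50_000) -> dict:
    """Summarize request rate and oversized-input counts; thresholds are
    illustrative and should be tuned to the environment's baseline."""
    if len(events) < 2:
        return {"rate_exceeded": False, "oversized_inputs": 0}
    span_min = max(1e-9, (events[-1]["timestamp"] - events[0]["timestamp"]) / 60)
    return {
        "rate_exceeded": len(events) / span_min > rate_per_min,
        "oversized_inputs": sum(e.get("input_size", 0) > large_input for e in events),
    }
```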
Read More -
Identifies multiple validation exception errors within AWS Bedrock. Validation errors occur when you run the InvokeModel or InvokeModelWithResponseStream APIs on a foundation model with an incorrect inference parameter or value, or when you use an inference parameter meant for one model with a model that doesn't support it. This could indicate attempts to bypass the limitations of approved models, or to force an impact on the environment by incurring exorbitant costs.
Read More