Possible Consent Grant Attack via Azure-Registered Application

Detects when a user grants permissions to an Azure-registered application or when an administrator grants tenant-wide permissions to an application. An adversary may create an Azure-registered application that requests access to data such as contact information, email, or documents.

Elastic rule (View on GitHub)

[metadata]
creation_date = "2020/09/01"
integration = ["azure", "o365"]
maturity = "production"
updated_date = "2024/12/05"

[rule]
author = ["Elastic"]
description = """
Detects when a user grants permissions to an Azure-registered application or when an administrator grants tenant-wide
permissions to an application. An adversary may create an Azure-registered application that requests access to data such
as contact information, email, or documents.
"""
from = "now-25m"
index = ["filebeat-*", "logs-azure*", "logs-o365*"]
language = "kuery"
license = "Elastic License v2"
name = "Possible Consent Grant Attack via Azure-Registered Application"
note = """## Triage and analysis

### Investigating Possible Consent Grant Attack via Azure-Registered Application

In an illicit consent grant attack, the attacker creates an Azure-registered application that requests access to data such as contact information, email, or documents. The attacker then tricks an end user into granting that application consent to access their data either through a phishing attack, or by injecting illicit code into a trusted website. After the illicit application has been granted consent, it has account-level access to data without the need for an organizational account. Normal remediation steps like resetting passwords for breached accounts or requiring multi-factor authentication (MFA) on accounts are not effective against this type of attack, since these are third-party applications and are external to the organization.

Official Microsoft guidance for detecting and remediating this attack can be found [here](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants).

#### Possible investigation steps

- From the Azure AD portal, Review the application that was granted permissions:
  - Click on the `Review permissions` button on the `Permissions` blade of the application.
  - An app should require only permissions related to the app's purpose. If that's not the case, the app might be risky.
  - Apps that require high privileges or admin consent are more likely to be risky.
- Investigate the app and the publisher. The following characteristics can indicate suspicious apps:
  - A low number of downloads.
  - Low rating or score or bad comments.
  - Apps with a suspicious publisher or website.
  - Apps whose last update is not recent. This might indicate an app that is no longer supported.
- Export and examine the [Oauth app auditing](https://docs.microsoft.com/en-us/defender-cloud-apps/manage-app-permissions#oauth-app-auditing) to identify users affected.

### False positive analysis

- This mechanism can be used legitimately. Malicious applications abuse the same workflow used by legitimate apps. Thus, analysts must review each app consent to ensure that only desired apps are granted access.

### Response and remediation

- Initiate the incident response process based on the outcome of the triage.
- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    - Identify the account role in the cloud environment.
    - Assess the criticality of affected services and servers.
    - Work with your IT team to identify and minimize the impact on users.
    - Identify if the attacker is moving laterally and compromising other accounts, servers, or services.
    - Identify any regulatory or legal ramifications related to this activity.
- Disable the malicious application to stop user access and the application access to your data.
- Revoke the application Oauth consent grant. The `Remove-AzureADOAuth2PermissionGrant` cmdlet can be used to complete this task.
- Remove the service principal application role assignment. The `Remove-AzureADServiceAppRoleAssignment` cmdlet can be used to complete this task.
- Revoke the refresh token for all users assigned to the application. Azure provides a [playbook](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Revoke-AADSignInSessions) for this task.
- [Report](https://docs.microsoft.com/en-us/defender-cloud-apps/manage-app-permissions#send-feedback) the application as malicious to Microsoft.
- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions.
- Investigate the potential for data compromise from the user's email and file sharing services. Activate your Data Loss incident response playbook.
- Disable the permission for a user to set consent permission on their behalf.
  - Enable the [Admin consent request](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-admin-consent-workflow) feature.
- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).

## Setup

The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule."""
references = [
    "https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants?view=o365-worldwide",
    "https://www.cloud-architekt.net/detection-and-mitigation-consent-grant-attacks-azuread/",
    "https://docs.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth#how-to-detect-risky-oauth-apps",
]
risk_score = 47
rule_id = "1c6a8c7a-5cb6-4a82-ba27-d5a5b8a40a38"
severity = "medium"
tags = [
    "Domain: Cloud",
    "Data Source: Azure",
    "Data Source: Microsoft 365",
    "Use Case: Identity and Access Audit",
    "Resources: Investigation Guide",
    "Tactic: Initial Access",
]
timestamp_override = "event.ingested"
type = "query"

query = '''
event.dataset:(azure.activitylogs or azure.auditlogs or o365.audit) and
  (
    azure.activitylogs.operation_name:"Consent to application" or
    azure.auditlogs.operation_name:"Consent to application" or
    event.action:"Consent to application."
  ) and
  event.outcome:(Success or success)
'''


[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1566"
name = "Phishing"
reference = "https://attack.mitre.org/techniques/T1566/"
[[rule.threat.technique.subtechnique]]
id = "T1566.002"
name = "Spearphishing Link"
reference = "https://attack.mitre.org/techniques/T1566/002/"


[rule.threat.tactic]
id = "TA0001"
name = "Initial Access"
reference = "https://attack.mitre.org/tactics/TA0001/"
[[rule.threat]]
framework = "MITRE ATT&CK"
[[rule.threat.technique]]
id = "T1528"
name = "Steal Application Access Token"
reference = "https://attack.mitre.org/techniques/T1528/"


[rule.threat.tactic]
id = "TA0006"
name = "Credential Access"
reference = "https://attack.mitre.org/tactics/TA0006/"

Triage and analysis

In an illicit consent grant attack, the attacker creates an Azure-registered application that requests access to data such as contact information, email, or documents. The attacker then tricks an end user into granting that application consent to access their data either through a phishing attack, or by injecting illicit code into a trusted website. After the illicit application has been granted consent, it has account-level access to data without the need for an organizational account. Normal remediation steps like resetting passwords for breached accounts or requiring multi-factor authentication (MFA) on accounts are not effective against this type of attack, since these are third-party applications and are external to the organization.

Official Microsoft guidance for detecting and remediating this attack is available at https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants.

Possible investigation steps

  • From the Azure AD portal, review the application that was granted permissions:
    • Click on the Review permissions button on the Permissions blade of the application.
    • An app should require only permissions related to the app's purpose. If that's not the case, the app might be risky.
    • Apps that require high privileges or admin consent are more likely to be risky.
  • Investigate the app and the publisher. The following characteristics can indicate suspicious apps:
    • A low number of downloads.
    • A low rating or score, or negative comments.
    • Apps with a suspicious publisher or website.
    • Apps whose last update is not recent. This might indicate an app that is no longer supported.
  • Export and examine the OAuth app auditing data to identify the affected users (a PowerShell sketch for enumerating them follows this list).
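
The affected users can also be enumerated outside the portal with the AzureAD PowerShell module that the remediation steps below rely on. The following is only a minimal sketch, assuming the module is installed, the analyst has sufficient directory read permissions, and "Suspicious App" is a placeholder display name for the application under review:

  # "Suspicious App" is a placeholder; replace it with the application named in the alert.
  Connect-AzureAD
  $sp = Get-AzureADServicePrincipal -Filter "displayName eq 'Suspicious App'"

  # Delegated (OAuth2) permission grants issued to this application.
  $grants = Get-AzureADOAuth2PermissionGrant -All $true |
      Where-Object { $_.ClientId -eq $sp.ObjectId }

  # ConsentType "AllPrincipals" means tenant-wide admin consent; "Principal" means an
  # individual user consented, and PrincipalId identifies that user.
  $grants | ForEach-Object {
      [PSCustomObject]@{
          ConsentType = $_.ConsentType
          Scope       = $_.Scope
          User        = if ($_.PrincipalId) { (Get-AzureADUser -ObjectId $_.PrincipalId).UserPrincipalName } else { "(all users)" }
      }
  }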

False positive analysis

  • This mechanism can be used legitimately. Malicious applications abuse the same workflow used by legitimate apps. Thus, analysts must review each app consent to ensure that only desired apps are granted access; one way to perform that review at scale is sketched below.
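
One way to support that review is to periodically list every delegated grant in the tenant and compare it against the set of sanctioned applications. A sketch using the same (assumed) AzureAD PowerShell module; note that resolving each client service principal can be slow in large tenants:

  Connect-AzureAD

  # Summarize every delegated (OAuth2) permission grant by application so that
  # unexpected apps or overly broad scopes stand out.
  Get-AzureADOAuth2PermissionGrant -All $true | ForEach-Object {
      [PSCustomObject]@{
          Application = (Get-AzureADServicePrincipal -ObjectId $_.ClientId).DisplayName
          ConsentType = $_.ConsentType   # "AllPrincipals" = tenant-wide admin consent
          Scope       = $_.Scope
      }
  } | Sort-Object Application | Format-Table -AutoSize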

Response and remediation

  • Initiate the incident response process based on the outcome of the triage.
  • Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context:
    • Identify the account role in the cloud environment.
    • Assess the criticality of affected services and servers.
    • Work with your IT team to identify and minimize the impact on users.
    • Identify if the attacker is moving laterally and compromising other accounts, servers, or services.
    • Identify any regulatory or legal ramifications related to this activity.
  • Disable the malicious application to stop user access and the application access to your data.
  • Revoke the application's OAuth consent grant. The Remove-AzureADOAuth2PermissionGrant cmdlet can be used to complete this task (this and the following application-focused steps are combined in the sketch after this list).
  • Remove the service principal application role assignment. The Remove-AzureADServiceAppRoleAssignment cmdlet can be used to complete this task.
  • Revoke the refresh token for all users assigned to the application. Azure provides a playbook for this task (https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Revoke-AADSignInSessions).
  • Report the application as malicious to Microsoft.
  • Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions.
  • Investigate the potential for data compromise from the user's email and file sharing services. Activate your Data Loss incident response playbook.
  • Disable users' ability to grant consent to applications on their own behalf.
    • Enable the Admin consent request feature (https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-admin-consent-workflow).
  • Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
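
The application-focused steps above (disable the app, revoke its OAuth consent grants, remove app role assignments, and revoke refresh tokens) can be chained into a single pass. The sketch below assumes the legacy AzureAD PowerShell module that the cmdlets named in this list belong to, uses "Suspicious App" as a placeholder display name, and should be validated in a test tenant before being run against production:

  Connect-AzureAD
  $sp = Get-AzureADServicePrincipal -Filter "displayName eq 'Suspicious App'"

  # Capture the delegated (OAuth2) grants before deleting them so the affected users
  # can still be identified for the refresh-token revocation step.
  $grants = Get-AzureADOAuth2PermissionGrant -All $true |
      Where-Object { $_.ClientId -eq $sp.ObjectId }

  # 1. Disable the service principal to block further sign-ins to the application.
  Set-AzureADServicePrincipal -ObjectId $sp.ObjectId -AccountEnabled $false

  # 2. Revoke the application's OAuth consent grants.
  $grants | ForEach-Object { Remove-AzureADOAuth2PermissionGrant -ObjectId $_.ObjectId }

  # 3. Remove app role (application permission) assignments held by the application, if any.
  Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true |
      Where-Object { $_.PrincipalType -eq "ServicePrincipal" } |
      ForEach-Object { Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId }

  # 4. Revoke refresh tokens for every user who had consented to the application.
  $grants | Where-Object { $_.PrincipalId } |
      ForEach-Object { Revoke-AzureADUserAllRefreshToken -ObjectId $_.PrincipalId }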

Setup

This rule requires data from the Azure Fleet integration, the Azure Filebeat module, or similarly structured data.
