Linux Format

Build a dynamic security pipeline

Dynamic Analysis Security Testing takes centre stage in the third and final instalment of our Web Application Security series with Tim Armstrong.

- Tim Armstrong is a former Lead Engineer turned Developer Advocate specialising in networking, software development, and security. You can find him on Twitter as @omatachyru or via his website at www.plaintextnerds.com.

The battle between developers and malicious hackers is one that developers have been losing. A lot of the time, it comes down to mentality and company priorities. Hackers, like burglars, only need to find a single open window or unlocked door to get in. You wouldn’t check that you’ve locked your door only once every few months, yet this is the exact approach many companies take to security.

Dynamic Analysis Security Testing (DAST) is perhaps the most overlooked stage of any security pipeline, frequently relegated to a check-up every six months by an outside consultancy that does an automated scan with Burp Suite or Zed Attack Proxy (ZAP) and provides you with a (hopefully short) report and an invoice in the range of £3,000-30,000, mostly depending on the scope. In most cases, the consultants don’t go further than the automated scan because at that point they already have enough to write a multi-page report.

But here’s the thing: when malicious actors (aka hackers) attack your web app, site or API, they aren’t checking whether your code is neatly formatted; they’re essentially doing dynamic analysis. They’re looking for a place where you’ve not validated the input, an endpoint that you’ve forgotten to protect, slack cookie settings, a vulnerable login system, leaked credentials and hundreds of other things that are very difficult to detect statically. If you’re relying on a spot test every six months then the odds are you’ve got security holes that you’re not aware of.

Building DAST into your CI/CD only takes a few minutes and gives you effectively the same information that you’d get from a pen-test where all they did was run an automated scanner. The main difference is that instead of it only happening every six months, the scan runs every time someone merges a PR to the main branch – meaning you find out about a vulnerability as soon as it gets merged. Ultimately this means that when you do bring in the external consultants for the six-month check-up, you actually get your money’s worth!

In this tutorial, you’ll be adding DAST to the GitLab CI/CD pipeline that you’ve built over the course of this series. If you haven’t read the earlier instalments yet it’s a good idea to check those out first, but if you just want to dive in at this point, then you can pick up a copy of the progress so far at https://gitlab.com/plaintextnerds/web-app-security-tutorial2-lxf280.

Too many acronyms

When it comes to DAST there are a growing number of solutions on the market. Most of them use hosted scanners that run on a periodic schedule, so either you need to expose a test environment to the internet and have the service scan that, or you need to set it up to scan production – which is a bit late (and could lead to instability of the production environment).

The goal here, as it was when you added static analysis and software composition analysis to the pipeline, is to know about any vulnerabilities before the code goes live, so that you can fix them before they’re deployed.

What you need, then, is something that can run inside the pipeline to test the service on commit. So what are your options? Well, you could build your own solution around an open-source tool like ZAP, or you could pick up an off-the-shelf solution. Unfortunately, there aren’t many DAST solutions that you can build directly into your CI pipeline, with the leaders in this space being StackHawk and GitLab. Both base their scanners on ZAP, meaning that they can run in a Docker container in your pipeline (or even locally).

StackHawk’s scanner, HawkScan, is a little more advanced than GitLab’s version: it supports multiple authentication methods and makes it easy to customise the scanner. While StackHawk holds the lead in scanning capabilities, GitLab is ahead on pricing (for proprietary/closed-source software) as it’s included in GitLab Ultimate, and obviously it’s also ahead in its integration with the rest of the GitLab platform.

What really separates these two solutions from others in this space, however, is their dedication to open source, as both companies have decided to make their solutions free to open-source projects. Considering that they are both built on ZAP (which is itself an open-source solution), this should perhaps not be a surprise. Unfortunately, it is not that common: even though a number of their competitors are also likely based on ZAP, the vast majority of them do not extend the same offer.

Street – er, StackHawk

Kicking things off with StackHawk, the first thing you’ll want to do is register an account at http://stackhawk.com so that you can set up your app and get an API key-pair for your HawkScan instance with which to push the scan results.

When you open a Developer account you get to use it for free for one app, so you can follow along even if you’re not working on an open-source project. If you are working on an open-source project, make sure you contact the StackHawk team to unlock that free upgrade for your own team (they also help out start-ups with special deals).

With an account set up, make sure you’re on the Applications dashboard and hit the ‘Add an App’ button. This will open up a modal dialogue box where you can give it a name, configure the environment type, set the hostname, and generate the Application ID and stackhawk.yml config file.

Next, you need to generate your API key. Do this by clicking your profile picture at the bottom left and then going to Settings. From here go to API Keys, create a new key and copy it to your clipboard. Then, head over to the CI/CD Settings page in your GitLab project and add it as a new variable with the key HAWK_API_SECRET, with both the ‘Protect variable’ and ‘Mask variable’ boxes ticked. While you’re on this page you’ll also want to add the app_id as HAWK_APP_ID; however, you don’t need to tick the boxes for this one.
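
Clicking through the UI works fine, but if you’d rather script this step, the same variables can be created through GitLab’s project-level CI/CD variables API. This is only a sketch – the access token, project ID and values below are placeholders for your own details:

# Create the masked, protected StackHawk API secret
curl --request POST \
  --header "PRIVATE-TOKEN: <your-gitlab-access-token>" \
  --form "key=HAWK_API_SECRET" \
  --form "value=<your-stackhawk-api-key>" \
  --form "protected=true" \
  --form "masked=true" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables"

# And the App ID, which doesn't need protecting or masking
curl --request POST \
  --header "PRIVATE-TOKEN: <your-gitlab-access-token>" \
  --form "key=HAWK_APP_ID" \
  --form "value=<your-stackhawk-app-id>" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables"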

Next, you want to edit the .gitlab-ci.yml to add a new stage and, of course, the new job. To do this, add the line - dynamic-analysis to the stages section, then add the job as follows:

hawkscan:
  stage: dynamic-analysis
  image: docker:20
  services:
    - docker:20-dind
  before_script:
    - docker pull stackhawk/hawkscan
  script:
    - |
      docker run -v $(pwd):/hawk:rw -t \
        -e API_KEY="hawk.${HAWK_APP_ID}.${HAWK_API_SECRET}" \
        -e NO_COLOR=true \
        stackhawk/hawkscan

Something you might notice is that this job is quite different from those defined in previous instalments of this series; this is because HawkScan needs to run in a DinD (Docker in Docker) environment. But how does it know how to run your app so that it can test it? The answer, of course, is that it doesn’t, so that’s what you’ll need to define next.

To run the app in the CI/CD pipeline, you’ll need it running in a Docker container. This means you need to define a build stage that makes a Docker image from the source code of the merge request. To do this you’ll need a Dockerfile to build and a stage in the pipeline that will build it.

When it comes to building a Dockerfile for a Django project you only really need it to be a handful of lines long, like this:

FROM python:3.9
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./i_am_vulnerable/. .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Place this Dockerfile in the project’s src directory next to the requirements.txt.
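
Before wiring this into the pipeline, it’s worth a quick local sanity check that the image builds and the app answers. Something like the following should do it – the image name here is just an example:

# Build the image using src as the build context
docker build -t i-am-vulnerable ./src

# Run it in the background and check the app responds on port 8000
docker run --rm -d -p 8000:8000 i-am-vulnerable
curl http://localhost:8000/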

To build this Docker image in the CI/CD pipeline you’ll need to add a new stage called build to the list of stages, placing it directly before the dynamic-analysis stage. Then you’ll need to add a build job that will look something like this:

build-docker:
  stage: build
  image: docker:20
  services:
    - docker:20-dind
  script:
    - cd src
    - docker login --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - >
      docker build
      --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
      .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

So what does this do? Looking through the script section of this job, you can see that it logs in to the GitLab Docker Registry for the project, builds the Dockerfile found in the src directory, tagging the image with the short SHA of the current commit, and then pushes that image up to the registry.

That image is now good to go, so next you’ll need to make it available to HawkScan. To do this, replace the script section of the HawkScan job with the following code:

- docker login --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker run --name djangoapp -d $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
- |
  docker run --link djangoapp -v $(pwd):/hawk:rw -t \
    -e API_KEY="hawk.${HAWK_APP_ID}.${HAWK_API_SECRET}" \
    -e NO_COLOR=true \
    stackhawk/hawkscan
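
The --link flag used here is the simplest way to let HawkScan resolve the djangoapp container by name, but Docker treats container links as a legacy feature. If you’d rather avoid it, an equivalent approach – sketched below, and not what this tutorial’s pipeline uses – is to put both containers on a user-defined bridge network, which gives you the same name resolution:

- docker network create scan-net
- docker run --name djangoapp --network scan-net -d $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
- |
  docker run --network scan-net -v $(pwd):/hawk:rw -t \
    -e API_KEY="hawk.${HAWK_APP_ID}.${HAWK_API_SECRET}" \
    -e NO_COLOR=true \
    stackhawk/hawkscan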

When you’ve finished, the first part of your .gitlab-ci.yml should look something like this:

stages:
  - static-analysis
  - composition-analysis
  - build
  - dynamic-analysis

hawkscan:
  stage: dynamic-analysis
  image: docker:20
  services:
    - docker:20-dind
  before_script:
    - docker pull stackhawk/hawkscan
  variables:
    APP_HOSTNAME: $CI_REGISTRY_IMAGE
  script:
    - docker login --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker run --name djangoapp -d $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - |
      docker run --link djangoapp -v $(pwd):/hawk:rw -t \
        -e API_KEY="hawk.${HAWK_APP_ID}.${HAWK_API_SECRET}" \
        -e NO_COLOR=true \
        stackhawk/hawkscan

build-docker:
  stage: build
  image: docker:20
  services:
    - docker:20-dind
  script:
    - cd src
    - docker login --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - >
      docker build
      --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
      .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

Then you’ll need to add the stackhawk.yml file that was generated at the start to the root of the project. Finally, you’ll need to edit a couple of lines in that file: change the host field from http://djangoapp:8000 to http://djangoapp:8000/bad_sql, then uncomment the antiCsrfParam field and set it to csrfmiddlewaretoken. Then git add, git commit and git push those changes up to the GitLab project.
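
The exact layout comes from the file StackHawk generated when you added the app, but after those edits the relevant section should end up looking roughly like this – the applicationId below is a placeholder, and field names may differ slightly depending on when the file was generated:

app:
  applicationId: <your-app-id>
  env: Development
  host: http://djangoapp:8000/bad_sql
  antiCsrfParam: csrfmiddlewaretoken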

That push will trigger the pipeline which, in addition to running the SAST and SCA stages defined in previous instalments, will now build a Docker image, push it to the GitLab project’s Docker Registry and then run HawkScan against the image, posting the results up to your StackHawk account when it’s complete.

Scan results

Results are broken down into three main categories. High-severity findings should be fixed immediately, as they pose an immediate danger to business continuity; Medium-severity findings are generally issues with known exploit paths, but might not be a direct risk to the business; and Low-severity findings tend to be informational leaks (such as server versions) that could make an attacker’s job easier.

Clicking one of the findings brings up a summary of what the specific vulnerability is, including some notes on how a malicious hacker might abuse it.

Next to the findings is a complete list of the paths scanned, so you know if HawkScan was able to find and scan a particular path in your application. This can be really helpful for ensuring that you have full coverage of the application. Web crawlers are rarely perfect, and the one utilised by HawkScan is no different; however, if the application you’re scanning has a GraphQL or OpenAPI/Swagger schema (or even a SOAP descriptor) then HawkScan won’t need to use a crawler and should hit 100 per cent of the paths every time.

It should be noted that just because a vulnerability isn’t detected doesn’t mean that it isn’t there. At the time of writing, the SQL Injection vulnerability in the bad_sql_practices Django app that is used as the base example for this series was not detected by HawkScan (or any of its competitors).

Triage and false positives

Once you know of a vulnerability, you need to triage it. The first step to a good triage process is to discard known false positives. When testing developer environments, like the one configured in this tutorial, it’s common to exclude things like TLS/SSL certificates. It’s no surprise, then, that HawkScan flagged the app as an ‘HTTP Only site’.

Because of this, the StackHawk UI wishes to inform you of the dangers of HTTP Only websites. However, as this is a development-grade deployment, it isn’t actually a concern. To mark this finding as a false positive, open up the finding, and at the top right of the page you’ll see a button labelled Validate and a drop-down called Actions. From the drop-down you can select ‘False Positive’ and provide a description of why it should be ignored.

Once you’ve filtered out the known false positives, the next step of triage is to ensure that you have tickets in your project management system for all of the remaining risks. Any high-risk vulnerabilities should be expected to break the sprint and receive immediate attention, as failure to do so would mean knowingly leaving the door wide open to attackers.

Medium-risk vulnerabilities are commonly scheduled into the next sprint. This isn’t ideal because, while they aren’t normally as big a problem as the high-risk ones, when coupled with other vulnerabilities they can be just as bad. However, if you attempt to treat everything as a sprint-breaking priority then management might start to ignore you as if you were crying wolf – ultimately leading to the opposite of the desired outcome.

Making a plan

If you’ve followed this series thus far, you now have a pipeline containing SAST, SCA and DAST. Once all three stages come back clean, you should be in good shape moving forward.

However, it’s important to remember that this just means no vulnerabilities were found – not that they aren’t there.

This is why it’s important to take the time to make a security breach response plan and to build your defence in depth. When developing a complex stage of a CI/CD pipeline, it can be a good idea to temporarily comment out the existing stages – otherwise you can find yourself spending a lot of time waiting for them to run.
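
If you’d rather not comment code out, an alternative is to mark the earlier jobs as manual in GitLab CI while you iterate, so they only run when you trigger them by hand (manual jobs don’t block later stages by default). The job name below is hypothetical – substitute whatever your SAST and SCA jobs are actually called:

sast-job:              # hypothetical job name - use your real SAST/SCA job names
  when: manual         # only runs when triggered by hand while you iterate on DAST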

Adding an app in StackHawk is pretty straightforward, with a clean workflow and UI.

Make sure you download the stackhawk.yml file and make a note of the App ID as you’ll need it later.

Once you’ve generated your StackHawk API secret and App ID you need to put them where HawkScan can find them.

From the scan results view you can see how many of the issues are new, and how many still remain since you last triaged the results.

A completed pipeline can take up to 15 minutes to run (or longer in more advanced projects), so combining stages can sometimes be necessary.

The finding details view provides a very useful overview of the issue, the evidence, and the paths where it was detected so that you can tackle it efficiently.
