# HawkScan and Azure Pipelines
StackHawk makes it easy to add security scanning to Azure Pipelines. The basic steps are:

- Configure your Pipeline by adding or editing the `azure-pipelines.yml` file in your project repository
- Configure HawkScan with a `stackhawk.yml` file
- Secure your API key as a secret Variable in your Pipeline
In this guide, we walk through scanning a publicly accessible example app, example.com. Then under Local Scanning, we launch and scan apps within the self-contained Azure Pipelines build environment.
## Sign Up or Sign In
If you already have a StackHawk account, sign in. If not, sign up! You will need a StackHawk account to continue. You will also need your StackHawk API key, so be sure to record it.
## Create a Git Repo
If you don’t already have a Git repo, go ahead and create one for this tutorial. We recommend Azure Repos, Bitbucket, or GitHub for ease of integration.
## Configure Your Azure Pipeline
At the base directory of your code repository, add an `azure-pipelines.yml` file to configure Azure Pipelines to run HawkScan.
azure-pipelines.yml

```yaml
pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: Remote_Scan
    steps:
      - script: >
          docker run -v $(pwd):/hawk:rw -t
          -e API_KEY="${HAWK_API_KEY}"
          stackhawk/hawkscan
        displayName: Run HawkScan
        env:
          HAWK_API_KEY: $(hawk_api_key)
```
This configuration tells Pipelines to run a single job that runs HawkScan as a Docker container. The step maps the secret Pipeline Variable `hawk_api_key` (which we will set up momentarily) to the environment variable `HAWK_API_KEY`, and the `docker run` command passes it into the container as `API_KEY` for HawkScan to use.
## Configure HawkScan
At the base directory of your code repository, create a `stackhawk.yml` configuration file. For our example, we will create a bare-minimum configuration pointing to our test endpoint in the Development environment.
stackhawk.yml

```yaml
app:
  applicationId: xxXXXXXX-xXXX-xxXX-XXxX-xXXxxXXXXxXX
  host: http://example.com
  env: Development
```
Replace `app.applicationId` with your App ID.
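If you'd like to sanity-check this configuration before wiring up the pipeline, you can run the same HawkScan container locally. This is an optional sketch that assumes Docker is installed on your workstation and that you run it from the directory containing `stackhawk.yml`:

```shell
# Export your StackHawk API key (replace the placeholder with your real key)
export HAWK_API_KEY=<your StackHawk API key>

# Run HawkScan locally, mounting the current directory so it finds stackhawk.yml
docker run -v $(pwd):/hawk:rw -t -e API_KEY="${HAWK_API_KEY}" stackhawk/hawkscan
```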
Add, commit, and push these two files to your Git repository.
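For example, assuming your remote is already configured:

```shell
git add azure-pipelines.yml stackhawk.yml
git commit -m "Add HawkScan security scanning"
git push
```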
## Create an Azure Pipeline
Make sure the files above have been pushed to your central Git repo so that Azure Pipelines can find them.
From your Azure DevOps Console, select (or create) the Project you wish to add a Pipeline to. From your Project, select Pipelines from the left pane. Then click the blue New Pipeline button to create a new Pipeline.
From here, Azure will step you through the process of adding your repository, as follows:
- Where is your code? Select your provider: Azure Repos, Bitbucket, or GitHub
- Select a repository: Select the repo you just pushed your new configurations to
- Configure your pipeline: Select “Existing Azure Pipelines YAML File”
- Select an existing YAML file: Enter `azure-pipelines.yml` in the Path field
- Review your pipeline YAML: Click the grey Variables button
- Variables: Click the blue New variable button
In the New variable dialogue, name your variable `hawk_api_key` and add your StackHawk API key as the Value. Check the box to Keep this value secret.
Save that variable.
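If you prefer to script this step, the Azure DevOps CLI extension can create the same secret variable from the command line. A sketch, assuming the `azure-devops` extension is installed and you are signed in; the organization, project, and pipeline names are placeholders:

```shell
az pipelines variable create \
  --organization https://dev.azure.com/<your-org> \
  --project <your-project> \
  --pipeline-name <your-pipeline> \
  --name hawk_api_key \
  --value "<your StackHawk API key>" \
  --secret true
```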
## Run It
Now that you have identified your Pipeline configuration file and saved your API key as a Variable, Pipelines will allow you to Review your pipeline YAML. It should contain exactly the Pipeline code you entered above. Hit the blue Run button and watch your pipeline run.
You should see the HawkScan container run and print some summary information to the screen when the scan is complete.
Check your StackHawk account for your scan results.
## Local Scanning
The previous example works great if you have an integration environment that is publicly accessible. But if you don’t have that, you can run your app and scan it directly on the ephemeral Azure Pipelines build host. Here are two common ways of doing that.
### Localhost
One way to test an app after building it is to launch it directly on the Pipeline build VM and scan it at the localhost address. Reaching a localhost address on the host from inside a Docker container is normally tricky, but HawkScan inserts a proxy that translates localhost to the correct host address for the platform. Because that proxy runs as an unprivileged user, your localhost application must listen on a port number higher than 1023.
For our example app, we will launch an Nginx container listening on port 8080. Then we will scan it at the localhost address. This method works with any app you wish to scan, containerized or not, as long as it is listening on the localhost address, and on a port above 1023.
Here’s our modified Azure Pipelines configuration.
azure-pipelines.yml

```yaml
pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: Local_Scan
    steps:
      - script: docker run --detach --publish 8080:80 nginx
        displayName: Start Nginx
      - script: >
          docker run -v $(pwd):/hawk:rw -t
          -e API_KEY="${HAWK_API_KEY}"
          stackhawk/hawkscan
        displayName: Run HawkScan
        env:
          HAWK_API_KEY: $(hawk_api_key)
```
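Nginx starts almost instantly, but if your app needs time to boot, you could add a wait step between starting the app and running the scan. A minimal sketch using curl's retry flags; adjust the URL and timing for your app:

```yaml
      # Optional: retry until the app answers before scanning
      - script: curl --silent --retry 10 --retry-connrefused --retry-delay 3 http://localhost:8080
        displayName: Wait for app to start
```

This slots into the `steps` list above, between the Start Nginx and Run HawkScan steps.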
Finally, here’s our modified HawkScan configuration pointing to the localhost address and port, `http://localhost:8080`.
stackhawk.yml

```yaml
app:
  applicationId: xxXXXXXX-xXXX-xxXX-XXxX-xXXxxXXXXxXX
  host: http://localhost:8080
  env: Development
```
Pro Tip: when scanning the localhost address, you must scan ports above 1023. Internally, HawkScan creates an unprivileged proxy which cannot listen on lower ports.
When you commit and push this code to your repository, Azure Pipelines will run this job. The Pipeline will launch Nginx, scan it with HawkScan, and post the results to your StackHawk scan list.
### Docker Compose
Another way to test your app is to run it in a container and scan it on a Docker bridge network. You can do this with `docker run` commands, but we will use Docker Compose for simplicity and flexibility. Docker Compose allows you to define a set of containers that can address one another by name using a declarative YAML configuration.
Add a Docker Compose configuration file, `docker-compose.yml`, to the root of your repository.
docker-compose.yml

```yaml
version: "3.7"
services:
  # Fire up the app to test, nginx_test
  nginx_test:
    image: nginx
  # Fire up hawkscan to scan the test app (nginx_test)
  hawkscan:
    image: stackhawk/hawkscan
    environment:
      API_KEY: "${HAWK_API_KEY}"
    volumes:
      - type: bind
        source: .
        target: /hawk
    tty: true
    depends_on:
      - nginx_test
```
This configuration creates two containers (services), `nginx_test` and `hawkscan`, on a shared bridge network where they can reach one another by service name.
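You can try this stack on your workstation before pushing it, assuming Docker Compose is installed locally:

```shell
# Run the scan stack locally; everything tears down when HawkScan exits
export HAWK_API_KEY=<your StackHawk API key>
docker-compose up --abort-on-container-exit
```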
In the Pipelines configuration, we replace the `docker run` script with `docker-compose`, which reads `docker-compose.yml` by default for its configuration. And since Nginx is now defined in the Docker Compose configuration, we can drop the separate step that started it.
azure-pipelines.yml

```yaml
pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: Docker_Compose_Scan
    steps:
      - script: docker-compose up --abort-on-container-exit
        displayName: Docker Compose Scan
        env:
          HAWK_API_KEY: $(hawk_api_key)
```
The `--abort-on-container-exit` flag tells Docker Compose to tear down all of the containers as soon as any one of them exits. This causes Docker Compose to stop everything once the HawkScan container finishes. Without this flag, the `nginx_test` container would continue running, and the job would hang until it times out.
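Relatedly, if you want the Pipeline step to fail when the scan fails, Docker Compose's `--exit-code-from` flag (which implies `--abort-on-container-exit`) makes the command return the exit code of the named service. A sketch, assuming the HawkScan container exits non-zero when a scan errors out or exceeds a configured failure threshold:

```yaml
      # Fail the step if the hawkscan service exits non-zero
      - script: docker-compose up --exit-code-from hawkscan
        displayName: Docker Compose Scan
        env:
          HAWK_API_KEY: $(hawk_api_key)
```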
Finally, update the HawkScan configuration to point to your test app, `http://nginx_test`.
stackhawk.yml

```yaml
app:
  applicationId: xxXXXXXX-xXXX-xxXX-XXxX-xXXxxXXXXxXX
  host: http://nginx_test
  env: Development
```
Check your code in and watch your scan run in Azure Pipelines. Then head over to StackHawk to view your scan results.
With this approach you can build sophisticated arrangements of Docker containers, adding databases and other services as needed to create realistic test environments.
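As an illustration, here is a sketch of a stack that adds a PostgreSQL database; the app image, service names, and environment variable are hypothetical stand-ins for your own:

```yaml
version: "3.7"
services:
  # Hypothetical application under test; substitute your own image
  app_test:
    image: my-app:latest
    environment:
      # Hypothetical connection string your app would read at startup
      DATABASE_URL: "postgres://postgres:password@db:5432/app"
    depends_on:
      - db
  # Backing database for the app
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: password
  # HawkScan reaches the app by service name (host: http://app_test in stackhawk.yml)
  hawkscan:
    image: stackhawk/hawkscan
    environment:
      API_KEY: "${HAWK_API_KEY}"
    volumes:
      - type: bind
        source: .
        target: /hawk
    tty: true
    depends_on:
      - app_test
```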