Application Security Testing - Part 1
A lot of people think of automation when the topic of DevOps or DevSecOps comes up. In this first part of the Application Security Testing series we will look at how an organisation should adopt a Static Application Security Testing (SAST) tool.
SAST is a testing approach that has evolved since the late 1970s, with Lint as the first generation. Fast forward to 2019 and we can easily find numerous open source and commercial tools, e.g. https://www.owasp.org/index.php/Source_Code_Analysis_Tools.
For the uninitiated, here are a few design considerations when selecting a SAST tool:
What development languages and frameworks are being used?
Some SAST tools only cover a specific language or framework. For example, Bandit targets Python only, while Brakeman focuses on Ruby on Rails applications.
Commercial tools typically cover a wider range. One could question or challenge the vendor on how thorough the scan rules are (scan quality). As for the scan rules: are they grouped per language, or is a default set applied everywhere (spray and pray)? How often are they updated? Can you customise rules, and will those customisations carry over as you update/upgrade the tool?
TCO: do you have operational resources to manage the SAST tool?
If you are a small company you will typically not have the resources to look after (yet another) solution, so a SaaS approach could be acceptable (please consider your risk appetite, data classification, etc). However, if you decide to adopt an on-premise approach, then sizing of the initial solution as well as reacting to growth in usage are important considerations.
TCO: what is the software licence model?
Typically, the licensing models adopted by commercial products are based around users, number of "projects", "applications" and size of code submissions.
I strongly believe that a licence unit MUST be based around the number of applications i.e. business application. The vendor's tool SHOULD NOT dictate or influence the software architecture.
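To make the licence-unit argument concrete, here is a minimal sketch of how the two models diverge once an application is split into many repositories. All prices and figures below are invented purely for illustration; they do not reflect any vendor's actual pricing.

```python
# Hypothetical annual prices - invented for this example only.
PRICE_PER_PROJECT = 2_000       # cost per scan "project" (often one per repo)
PRICE_PER_APPLICATION = 15_000  # cost per business application

def annual_cost(apps):
    """apps maps a business application name to its number of code repositories."""
    per_project = sum(apps.values()) * PRICE_PER_PROJECT
    per_application = len(apps) * PRICE_PER_APPLICATION
    return per_project, per_application

# One business application refactored into 12 microservice repositories:
per_project, per_app = annual_cost({"payments": 12})
# Under per-project licensing the refactor multiplies the licence cost;
# under per-application licensing the cost is unchanged - so the licence
# unit does not push teams back towards a monolith.
```

The point of the sketch: if the licence unit is anything finer-grained than the business application, the tool's pricing starts to influence architectural decisions, which is exactly what the vendor's tool should not do.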
What integration capabilities does the tool have? e.g. Source Code Management, ticketing systems, IDEs, etc.
What integrations (plugins) are available and supported? How comprehensive are the APIs for custom integrations? For the APIs, don't forget to also consider whether they allow for operational management tasks, e.g. registering additional scan engines, changing the tool's configuration, user management, etc.
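As a sketch of what "comprehensive APIs" should mean in practice, here is a minimal client covering both a developer-facing task (triggering a scan from CI) and an operational task (registering a scan engine). The endpoint paths, payload fields and base URL are all hypothetical; substitute whatever your vendor's API actually exposes.

```python
import json
import urllib.request

class SastClient:
    """Minimal sketch of a SAST REST API client.
    Endpoints and payloads are hypothetical, not a real vendor API."""

    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _request(self, method, path, payload=None):
        data = json.dumps(payload).encode() if payload is not None else None
        # Build the request; a real client would pass it to urlopen().
        return urllib.request.Request(
            f"{self.base_url}{path}",
            data=data,
            method=method,
            headers={
                "Authorization": f"Bearer {self.token}",
                "Content-Type": "application/json",
            },
        )

    def trigger_scan(self, project_id, commit):
        # Developer-facing: kick off a scan from the CI pipeline.
        return self._request("POST", f"/api/projects/{project_id}/scans",
                             {"commit": commit})

    def register_engine(self, host):
        # Operational: add another scan engine as load grows.
        return self._request("POST", "/api/engines", {"host": host})

client = SastClient("https://sast.example.com", token="example-token")
req = client.trigger_scan("team-x-app", commit="abc123")
```

If a vendor's API only covers the developer-facing half, the operations team ends up doing engine registration and configuration changes by hand through the UI, which does not scale.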
How does the SAST tool scale? e.g. single large instance, multiple instances, SaaS, etc.
Sizing is always going to be tricky; in large organisations, for example, a lot of the points above may not be clear at the start. In addition, what is the expected adoption rate? This leads to considering the deployment model: multiple instances, a cloud-based approach (where the tooling team "controls" the full stack), or a SaaS model.
In true DevOps style, I recommend starting small, gathering data and then growing incrementally.
What metrics to measure?
"Output" from the tool typically fall into two camps: management and operations.
Management data points:
How many findings/vulnerabilities do I have?
How many of these are being remediated?
How often are my teams (or team X) scanning?
What are the common vulnerabilities?
Which teams are improving, and which are on a declining trend?
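The management questions above can usually be answered from a plain export of findings. A minimal sketch, assuming each finding carries a team, a CWE identifier and a status (your tool's actual field names will differ):

```python
from collections import Counter

# Sample findings export - field names and values are assumptions,
# map them to whatever your SAST tool actually emits.
findings = [
    {"team": "alpha", "cwe": "CWE-89",  "status": "open"},
    {"team": "alpha", "cwe": "CWE-79",  "status": "remediated"},
    {"team": "beta",  "cwe": "CWE-79",  "status": "open"},
    {"team": "beta",  "cwe": "CWE-79",  "status": "remediated"},
    {"team": "beta",  "cwe": "CWE-798", "status": "remediated"},
]

# How many findings do we have, and how many are being remediated?
total = len(findings)
remediated = sum(1 for f in findings if f["status"] == "remediated")
remediation_rate = remediated / total

# What are the common vulnerabilities?
most_common = Counter(f["cwe"] for f in findings).most_common(1)

# Which teams still carry open findings?
open_by_team = Counter(f["team"] for f in findings if f["status"] == "open")
```

Run the same aggregation over weekly exports and the per-team trend (improving vs declining) falls out of comparing the snapshots.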
Operations data points:
How many scans do we process per day, week, month?
How many Lines of Code (LOC) can we process in one hour?
Can we determine whether we are near capacity, i.e. could we handle an additional 20% more scans?
Which teams are submitting "naughty scans" frequently e.g. 10 million LOC every hour or day?
Are different code bases being used against the same "scan project" and polluting the trend analysis?
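The operations questions are similarly answerable from the tool's scan log. A sketch, assuming one record per completed scan with a timestamp, team and LOC count (again, the field names, the log format and the "naughty scan" threshold are assumptions for illustration):

```python
from collections import defaultdict
from datetime import datetime

# Sample scan log - structure and values invented for the example.
scan_log = [
    {"team": "alpha", "finished": "2019-06-03T09:00", "loc": 200_000},
    {"team": "alpha", "finished": "2019-06-03T10:00", "loc": 250_000},
    {"team": "beta",  "finished": "2019-06-03T11:00", "loc": 10_000_000},
    {"team": "beta",  "finished": "2019-06-04T11:00", "loc": 10_000_000},
]

# How many scans do we process per day, and how much LOC per team?
scans_per_day = defaultdict(int)
loc_per_team = defaultdict(int)
for scan in scan_log:
    day = datetime.fromisoformat(scan["finished"]).date()
    scans_per_day[day] += 1
    loc_per_team[scan["team"]] += scan["loc"]

# Flag teams submitting "naughty scans" - the threshold is an assumption;
# tune it to what your scan engines can actually absorb.
NAUGHTY_LOC_PER_SCAN = 5_000_000
naughty_teams = {s["team"] for s in scan_log
                 if s["loc"] > NAUGHTY_LOC_PER_SCAN}
```

Tracking the daily scan count against the engines' measured LOC-per-hour throughput is also what lets you answer the capacity question: if the busiest day is already close to what the engines can process, an extra 20% will queue.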
Obviously, the above only skims the surface of working out which SAST tool to use, and whether SAST should be one of the testing approaches you adopt at all.