In Part 1 of this blog post, we looked at the concept of firing bullets, then cannonballs, which comes from the book Great by Choice by Jim Collins and Morten T. Hansen. The idea works like this: first fire your “bullets” – low-cost, low-risk, low-distraction experiments that help you figure out what will work. Taking small shots lets you calibrate what you ultimately want to do. Once you have a solid, evidence-backed idea of what will work, you fire your “cannonball”: a big bet that concentrates your resources.
In the previous blog post we looked at the example of determining the effectiveness of developer security training in your environment. This time around, we’ll look at another common aspect of building an application security program – rolling out scanning tools.
Another Example: Let’s Roll Out an xAST Tool!
Figure 1 – Please note: this is not actual benchmarking data for these scanners. You will need to generate your own data in your environment with your applications.
So you want to roll out an application scanning tool? Great! Automation is required if you want to get anywhere near an acceptable level of coverage in your application security program. But you have a lot of options when selecting testing tools and they are not all going to be equally effective in your environment. So – select the set of tools you want to test as well as what you consider to be a representative sample of the applications in your environment. Scan each application with all of the tools and load the data into ThreadFix. Then clear out the false positives and keep track of the time required to do this.
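One lightweight way to keep track of this pilot data is a simple record per tool/application scan, capturing the findings count, the false positives you cleared, and the analyst time it took. The sketch below is illustrative only – the tool names, numbers, and field names are hypothetical, not output from ThreadFix or any real scanner.

```python
from dataclasses import dataclass

# Hypothetical record of one tool-on-application benchmark run.
@dataclass
class ScanBenchmark:
    tool: str
    application: str
    findings: int            # total findings reported by the tool
    false_positives: int     # findings marked false positive during triage
    triage_hours: float      # analyst time spent clearing false positives

# Illustrative numbers only -- you would fill these in from your own pilot.
runs = [
    ScanBenchmark("Tool A", "storefront", findings=120, false_positives=45, triage_hours=6.0),
    ScanBenchmark("Tool B", "storefront", findings=80, false_positives=10, triage_hours=2.5),
]

for run in runs:
    fp_rate = run.false_positives / run.findings
    print(f"{run.tool} on {run.application}: "
          f"{fp_rate:.0%} false positive rate, {run.triage_hours}h triage")
```

Even a spreadsheet works for this; the point is that every scan in the pilot produces the same few numbers so the tools can be compared on equal footing later.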
With that done, run the Scan Comparison Summary report in ThreadFix for each of the applications. This lets you see how each tool performed on your applications. By default the report gives you total counts of vulnerabilities found, but you can filter it to focus on the vulnerabilities you truly care about. Are you really concerned with all of the Info and Low findings a tool will spit out? Will you even have time to look at them? Or do you really only care about Criticals and Highs? Either way, this report shows you which tool found the most vulnerabilities you care about, along with the false positive rate for each tool.

False positive rates matter because false positives can be killers. If you don’t remove them, but instead feed them to developers to get fixed, you are outsourcing false positive identification to your development teams – and there may be no more effective way of engendering hostility from development teams than dumping unfiltered security testing results on them for triage. So you really do need to clear out false positives before communicating with developers. But that takes time from the limited hours your security analysts can devote to the task.

Since you tracked how long it took to clear out false positives, you can estimate that cost for each of the tools. Add it alongside raw licensing costs and you get a much better idea of the true cost of operationalizing each of these technologies.
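That “true cost” estimate is just arithmetic: license cost plus triage hours scaled up from the pilot to your full portfolio. A minimal sketch follows; every number in it (analyst rate, portfolio size, scan cadence, license prices, triage hours) is an assumed placeholder, not real pricing.

```python
# Hypothetical cost model: annual license cost plus the analyst time needed to
# clear false positives, scaled from pilot data to the full portfolio.
# All figures are illustrative assumptions, not real pricing or rates.

ANALYST_HOURLY_RATE = 75.0   # assumed fully loaded analyst cost per hour
APPS_IN_PORTFOLIO = 200      # assumed number of applications to be scanned
SCANS_PER_APP_PER_YEAR = 4   # assumed quarterly scanning cadence

tools = {
    # tool: (annual license cost, average triage hours per scan from the pilot)
    "Tool A": (60_000, 6.0),
    "Tool B": (90_000, 2.5),
}

for name, (license_cost, triage_hours_per_scan) in tools.items():
    triage_cost = (triage_hours_per_scan * APPS_IN_PORTFOLIO
                   * SCANS_PER_APP_PER_YEAR * ANALYST_HOURLY_RATE)
    total = license_cost + triage_cost
    print(f"{name}: license ${license_cost:,.0f} + triage ${triage_cost:,.0f} "
          f"= ${total:,.0f}/year")
```

Note how the comparison can flip once triage time is included: in these made-up numbers, the tool with the cheaper license ends up costing more per year to operationalize.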
Rolling out automation is a critical decision, and you need to feel comfortable that the automation will be both effective in your environment – is it finding what you need it to find? – and cost-effective in your environment – will the process of culling false positives be too expensive because it overwhelms your analysts?
Before you make large-scale commitments in your application security program, fire off a couple of bullets so you know where to send your cannonballs. Your aim will be a lot better and you’ll have a far clearer idea of what results to expect. The ThreadFix platform can drive this process: rather than guessing, you have access to all the data flowing through it, along with its metrics and reports, so you can make decisions based on quantitative evidence.