
Applied ThreadFix: Fire Bullets, Then Cannonballs – AppSec Edition

The concept of “firing bullets and then cannonballs” comes from the book Great By Choice by Jim Collins and Morten T. Hansen. The idea works like this: first fire your “bullets” – low-cost, low-risk, low-distraction experiments that help you figure out what will work. Taking small shots lets you calibrate what you ultimately want to do. Once you have dialed in a solid idea of what will work – with supporting evidence – you fire your “cannonball,” concentrating your resources in a big bet.

The truth is – what works in application security for one organization may not work for another. Every organization has a different culture and different value drivers, and these factors determine what will be effective in that environment. Some organizations place a high value on developing their internal teams – perhaps training will work best for them. Others outsource much of their development – perhaps focusing on gates and process controls would be a better approach. Some organizations have a very penetration-testing-centric view of security; we see this a lot when security leadership comes from a network-centric view of the world. In environments like these, DAST scanning programs and application assessments may be the best way to roll out a testing program. Other organizations have security leadership with a stronger development background. In these environments, SAST scanning, IAST instrumentation, and threat modeling may take hold more easily.

The point is – before you roll out an initiative program-wide, it is best to have both some evidence of why you think it will be successful and some “lessons learned” to guide the way. ThreadFix can be tremendously helpful here, letting you fire a couple of bullets before you unleash your cannonballs. The ThreadFix platform collects data over time and across your application security program, and it lets you view that data from a very high level as well as drill down into very specific slices. This allows you to run experiments on smaller groups – your bullets – then analyze the data and recalibrate your plan for a broader rollout – your cannonballs.
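
As a concrete illustration, here is a minimal Python sketch of baselining teams via the ThreadFix REST API before an experiment. The /rest/teams path, the apiKey parameter, the “object” response wrapper, and the totalVulnCount field reflect older ThreadFix API documentation; treat them, along with the host and key values, as assumptions to verify against your version’s API docs.

```python
# A minimal sketch of baselining teams via the ThreadFix REST API.
# The /rest/teams path, apiKey parameter, response wrapper ("object"),
# and totalVulnCount field are assumptions based on older ThreadFix API
# docs; confirm them against your version's documentation.
import requests

THREADFIX_URL = "https://threadfix.example.com/threadfix"  # hypothetical host
API_KEY = "your-api-key"                                   # hypothetical key

def get_teams():
    """Fetch all teams (with nested applications) from ThreadFix."""
    resp = requests.get(f"{THREADFIX_URL}/rest/teams", params={"apiKey": API_KEY})
    resp.raise_for_status()
    return resp.json()["object"]

# Snapshot open-vulnerability totals per team so the pilot ("bullet") teams
# can be compared against everyone else before and after the experiment.
for team in get_teams():
    total = sum(app.get("totalVulnCount", 0) for app in team.get("applications", []))
    print(f"{team['name']}: {total} open vulnerabilities")
```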

Let’s Look at an Example: How Valuable Is Secure Coding Training for Us?

Secure coding training for developers sounds like a great idea – how can we expect to hold our developers accountable for architecting, designing, and coding secure applications if they haven’t been trained how to do it? A couple of years ago I gave a TEDx talk looking into this exact issue. But good training can be expensive. Knowledgeable individuals who are adept at training are relatively rare, and quality training materials are expensive to develop and maintain. For these reasons, solid commercial instructor-led training offerings tend to carry a non-trivial price tag. And that isn’t even the full cost of training, because you also have to account for the opportunity cost for the developers and other attendees: what could they have done with that time if they weren’t in the training class? In addition, people you train may leave your organization. Arguably this is better than having untrained people stick around to write more vulnerable code, but that is a topic for a different blog post… Finally, it isn’t entirely clear how effective training actually is. John Dickson did some research in this area a number of years ago, and slides and a video of that presentation are available.
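
To make that “full cost” point concrete, here is a quick back-of-the-envelope sketch in Python. Every figure is a made-up placeholder; substitute your own class pricing and loaded labor rates.

```python
# All numbers here are made-up placeholders, not real pricing.
class_price = 15_000        # hypothetical vendor invoice for an ILT class
attendees = 10
class_hours = 16            # two full days of training
loaded_hourly_rate = 100    # hypothetical fully loaded cost per developer-hour

# The opportunity cost: what the attendees' time would have been worth
# had they not been sitting in the class.
opportunity_cost = attendees * class_hours * loaded_hourly_rate
real_cost = class_price + opportunity_cost

print(f"Invoice price:      ${class_price:,}")
print(f"Opportunity cost:   ${opportunity_cost:,}")
print(f"Real cost of class: ${real_cost:,}")
```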

So what sort of training do you want to roll out in your environment? To answer that question, you can use ThreadFix to run an experiment – provide instructor-led training (ILT) to one team, eLearning to another, and leave a third team alone as a control. Before you get started, run the Progress By Vulnerability report in ThreadFix for each team to baseline where they stand before the training. Run your training, and keep track of your costs as well as feedback from the students. Wait a couple of months and then re-run the Progress By Vulnerability report for each team to see how their performance has changed. You can also run the Trending report and the Remediation report for the intervening date ranges to get additional insight into how the teams have progressed – or not progressed – after the training.
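
If you export those before-and-after reports, the comparison itself is straightforward. Here is a sketch assuming you have exported the Progress By Vulnerability report for each team to CSV; the column names (team, vuln_type, avg_days_to_close) are illustrative, not ThreadFix’s actual export schema.

```python
# Sketch of the before/after comparison from two hypothetical CSV exports
# of the Progress By Vulnerability report. Column names are illustrative.
import pandas as pd

before = pd.read_csv("progress_before_training.csv")
after = pd.read_csv("progress_after_training.csv")

# Compare mean days-to-close per vulnerability type, per team.
key = ["team", "vuln_type"]
merged = before.merge(after, on=key, suffixes=("_before", "_after"))
merged["delta_days_to_close"] = (
    merged["avg_days_to_close_after"] - merged["avg_days_to_close_before"]
)

# Negative deltas mean a team is closing that class of finding faster
# after training; group by team to see the overall shift.
print(merged.groupby("team")["delta_days_to_close"].mean().sort_values())
```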

Now you have numbers showing you “When we spent $X providing ILT to this team, three months later here’s how their security outcomes changed” and “When we spent $Y providing eLearning to this other team, three months later here’s how their security outcomes changed.” And you also have the baseline numbers for the teams that didn’t receive training.

Did you see an improvement? (Hopefully you saw some…) Was the improvement worth $X or $Y? What would it cost to extrapolate that training program across the rest of your development teams? Now when you go to ask for budget, you have numbers to support your thinking – and you have feedback from the initial participants that can be incorporated to get even better results. Or you can decide to do nothing, and if you get questions about why you’re not providing secure coding training, you have quantitative data to explain your reasoning.
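
That budget conversation boils down to simple arithmetic once you have the measurements. Here is a hedged sketch of the comparison; the costs, improvement percentages, team sizes, and developer counts below are all hypothetical placeholders for your own pilot data.

```python
# Back-of-the-envelope cost-per-improvement comparison. All figures are
# hypothetical placeholders; plug in your own pilot measurements.
control_drift_pct = 2  # improvement the untrained control team showed anyway

options = {
    # cost of the pilot, measured improvement (e.g., % drop in open criticals)
    "instructor_led": {"cost": 25_000, "improvement_pct": 30},
    "elearning":      {"cost": 5_000,  "improvement_pct": 12},
}

pilot_team_size = 10
remaining_developers = 200  # developers not yet trained (hypothetical)

for name, o in options.items():
    # Credit only the improvement beyond what the control team showed.
    net = o["improvement_pct"] - control_drift_pct
    per_point = o["cost"] / net
    rollout_cost = (o["cost"] / pilot_team_size) * remaining_developers
    print(f"{name}: ${per_point:,.0f} per net point of improvement; "
          f"~${rollout_cost:,.0f} to extend org-wide")
```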

Conclusions

Before you make large-scale commitments in your application security program, fire off a couple of bullets so you know where to send your cannonballs. You’ll find that your aim is a lot better, and you’ll have a far clearer idea of what results to expect. The ThreadFix platform can drive this process: rather than guessing, you have access to all the data flowing through it, along with its metrics and reports, so you can make decisions grounded in quantitative evidence.

Contact us for help running a world-class application security program with ThreadFix.
