I had the opportunity to lead a Peer2Peer session at RSA 2016 that asked attendees to talk about how they do vulnerability management for different types of vulnerabilities. In particular, what I wanted to discuss were the similarities and differences in how organizations deal with network and infrastructure vulnerabilities versus application-level vulnerabilities.
Who Attended?
We had a capacity crowd at the session, and a couple of folks actually couldn’t make it into the room because we ran out of seats. So popular! The group was pretty diverse, with industries including high-tech, healthcare, government, and retail, and company sizes ranging from well-funded startups to some of the world’s largest companies. This was great because we got to hear from folks with a wide-ranging set of perspectives.
Major Themes and Takeaways
Relative Maturity – Infrastructure/Network versus Application Vulnerability Management
As expected, most organizations had their network and infrastructure testing pretty well squared away, and many said that their scanning program basically ran continuously – finishing coverage of their IP range and then starting over again immediately. Identified vulnerabilities get communicated to some sort of operations team to be addressed and, in general, most issues can be addressed. Application testing programs tended to be a lot less mature: testing was not done as frequently, and the resolution pathway for identified vulnerabilities wasn’t as well-defined. In a handful of organizations, application vulnerabilities are being pushed to developer defect trackers. I view this as a “table stakes” requirement for making headway in an application vulnerability management program. Overall, though, testing programs and vulnerability management protocols weren’t as mature for applications as for infrastructure and networks. It was also more common to see centralized server operations teams, whereas application teams were often segmented off in different business units, making communication with those teams more challenging.
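The “push findings into developer defect trackers” step can be sketched as a small convert-and-dedupe routine that turns scanner findings into tracker tickets, suppressing anything that already has an open ticket. This is a hypothetical illustration only – the Finding fields and the ticket dictionary shape are invented for the sketch and don’t correspond to any particular scanner’s or tracker’s API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    # Hypothetical scanner output: which app, which rule fired, how bad, where.
    app: str
    vuln_id: str   # e.g. a CWE or scanner rule identifier
    severity: str
    detail: str

def findings_to_tickets(findings, open_tickets):
    """Convert new scanner findings into defect-tracker ticket payloads,
    skipping any (app, vuln_id) pair that already has an open ticket."""
    tickets = []
    seen = set(open_tickets)  # (app, vuln_id) pairs already filed
    for f in findings:
        key = (f.app, f.vuln_id)
        if key in seen:
            continue  # avoid filing duplicate tickets for the same issue
        seen.add(key)
        tickets.append({
            "project": f.app,
            "title": f"[{f.severity}] {f.vuln_id}",
            "body": f.detail,
        })
    return tickets
```

In practice the returned payloads would be posted to whatever tracker the development team actually lives in (Jira, GitHub Issues, etc.); the dedupe step matters because continuous scanning will re-report the same finding on every pass.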
Product Vulnerabilities Versus Service Vulnerabilities
One point that came up early in the discussion was that organizations hosting and managing their own applications had materially different vulnerability management workflows than organizations that shipped products for end-users to install. This is because in-house application vulnerabilities are “fixed” once you’ve deployed the updated code and configuration in your own environment, whereas vulnerabilities in products provided to end-users aren’t really “fixed” until your end-users have deployed the updates and fixes you’ve provided. So vulnerability management for product vendors also includes additional steps: notifying end-users of vulnerabilities and available updates, and providing support to get those updates deployed in the end-user environments.
Bug Bounties Have Their Place
One participant related a number of interesting points about how their organization had made effective use of a public bug bounty program. They conceded that they received a lot of junk submissions but, overall, found the quality of submissions was better than what they saw from automated scanning tools (“At least with a bug bounty submission, a human sent it in hoping that they’d get paid for it.”). Another piece of advice on using bug bounties was to triage the results and pay bounties quickly. This helps maintain a good relationship with submitters and makes your program more attractive when compared to other available bug bounty programs. The participant did indicate that their bug bounty program was really only possible because they had a “giant website,” and didn’t think they would have experienced the same success if they had a significant need to test internal-facing, partner-facing, or non-web applications. (I might suggest that public-facing mobile applications could benefit from bug bounty programs as well.)
How to Handle “Unpatchable” Vulnerabilities
A common lamentation from the group was that there were situations where:
- Certain supporting packages – like Java – had vulnerabilities identified too frequently for teams to realistically keep the server-based systems that depend on them patched, and
- Operating systems had reached end-of-life, but the systems requiring them unfortunately couldn’t be upgraded.