Organizations that build vulnerability management strategies primarily on published CVSS scores are very likely wasting their limited resources on vulnerabilities that don’t matter. When there are hundreds of thousands of vulnerabilities and only enough resources to remediate a small fraction of them, fixing the right vulnerabilities matters more than fixing the most vulnerabilities. Quality over quantity should be the mindset.
Too Many Vulnerabilities
For environments of any meaningful size, the volume of vulnerabilities exceeds the capacity to address them cost-effectively, and remediation efforts are often disruptive to business operations.
According to research published by FIRST.org, most organizations only manage to fix between 5% and 20% of known vulnerabilities per month. While that sounds like a recipe for never-ending toil, it’s important to understand that only a small subset, about 2-7% of all published vulnerabilities, are ever exploited in real-world scenarios.
Image sourced from https://www.first.org/epss/model
Broadly speaking, efficiency in vulnerability management is about maximizing the percentage of remediated vulnerabilities that are actually being exploited and minimizing the number of vulnerabilities fixed that never get exploited. With that in mind, the real problem is not that there are too many vulnerabilities; it’s knowing which ones are meaningful to your specific environment and optimizing remediation efforts to that end.
So, how do organizations typically approach remediation, and what can be done to improve efficiency?
Many organizations lean heavily on the Common Vulnerability Scoring System (CVSS). It's a standardized framework to rate the severity of security vulnerabilities in software, and nearly every vulnerability management system factors in the CVSS score. It's a reasonable and common starting point, but taking published CVSS scores at face value and prioritizing on the score alone is not actually sufficient to perform a proper risk assessment. And that risk assessment is what actually drives remediation efficiency.
Despite its ubiquity, CVSS isn't tailored to make a comprehensive risk judgment for your organization. Its scores are derived from generic criteria, which may not reflect the unique context of your business environment. Even if an organization supplements the base CVSS score calculation with temporal and environmental metrics, it still doesn’t take into account the types of data the system processes, how the software is used, or the consequences of a successful attack.
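To make that concrete, consider what a CVSS v3.1 vector string actually encodes. A minimal sketch of splitting one into its metrics (the vector shown is the standard network-exploitable, high-impact case, which scores 9.8) makes the point: every metric describes a generic attack characteristic, and none describes your environment, data, or business impact.

```python
# Sketch: splitting a CVSS v3.1 vector string into its metrics.
# Note that every metric (attack vector, complexity, privileges, etc.)
# is a generic attack characteristic -- nothing here captures your
# environment, data sensitivity, or business impact.

vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"  # scores 9.8
version, *parts = vector.split("/")
metrics = dict(p.split(":") for p in parts)

print(version)        # CVSS:3.1
print(metrics["AV"])  # "N" -> exploitable over the network
print(metrics["C"])   # "H" -> high confidentiality impact
```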
As stated above, focusing on vulnerabilities that are actively being exploited is a key element of achieving efficiency, but most organizations don’t have the capability to determine this by themselves. Consider incorporating the Exploit Prediction Scoring System (EPSS), a community-driven effort that combines descriptive information about CVEs with evidence of actual exploitation in the wild to predict the likelihood of a vulnerability being exploited by malicious actors. The EPSS model uses a number of additional contextual factors to produce a probability score between 0 and 1 (0% and 100%). The higher the score, the greater the probability that the vulnerability will be exploited in the next 30 days. While not explicitly tailored to every environment, this additional level of insight and intelligence can be invaluable in understanding the potential risk of a given vulnerability relative to others and improving prioritization.
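The difference this makes is easy to sketch. In the toy example below (CVE IDs and scores are hypothetical placeholders, not real data), ranking by CVSS alone puts a rarely exploited critical first, while ranking by EPSS surfaces the finding most likely to be exploited:

```python
# Sketch: ranking findings by EPSS probability (likelihood of
# exploitation in the next 30 days) vs. CVSS severity alone.
# All CVE IDs and scores below are hypothetical placeholders.

findings = [
    {"cve": "CVE-XXXX-0001", "cvss": 9.8, "epss": 0.02},
    {"cve": "CVE-XXXX-0002", "cvss": 7.5, "epss": 0.91},
    {"cve": "CVE-XXXX-0003", "cvss": 8.1, "epss": 0.40},
]

# Sorting by CVSS alone puts the 9.8 first, even though its
# predicted exploitation probability is only 2%.
by_cvss = sorted(findings, key=lambda f: f["cvss"], reverse=True)

# Sorting by EPSS surfaces the vulnerability most likely to be
# exploited in the wild.
by_epss = sorted(findings, key=lambda f: f["epss"], reverse=True)

print([f["cve"] for f in by_cvss])  # 0001, 0003, 0002
print([f["cve"] for f in by_epss])  # 0002, 0003, 0001
```

In practice, published EPSS scores for real CVEs can be retrieved from FIRST.org rather than hard-coded as they are here.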
Supplementing CVSS scores with EPSS scores is a big improvement, but there are a few more things organizations can implement to arrive at a more accurate set of vulnerabilities to remediate and thereby improve their efficiency:
- Usage Matters: Many vulnerabilities are surfaced where artifacts such as software packages, virtual machine images, container images, and code repositories are stored. Typically, only a small fraction of those artifacts are actually in a live environment, so be sure to focus on vulnerabilities in actual use.
- Declutter: Regularly remove old or unused base virtual machine images and container images from artifact storage. In addition to reducing costs for storing old artifacts, it prevents new deployments from accidentally using outdated versions, and it means less work to filter out what’s in use.
- Gauge Exposure: Give priority to vulnerabilities in workloads with broad network exposure, but don’t fully deprioritize issues that are only exposed internally.
- Understand Exploitability: Factor in the likelihood of exploitation with supplemental scoring systems such as EPSS. If exploitation is happening in the wild, ensure those vulnerabilities are at the top of the list to remediate.
- Add Context: Consider the purpose and behavior of the environment in which the potentially vulnerable software is deployed. Is the software actually used, or is it simply present on the system? Is it a production system handling sensitive data, or is it a development system with almost no data?
- Adopt an Efficiency Mindset: Embrace tools, methodologies, and processes that improve the speed and accuracy of prioritization efforts. Efficient use of the time and resources spent performing remediation efforts reduces more risk, saves more money, and lowers human toil.
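The practices above can be combined into a simple triage filter. The sketch below is purely illustrative, not a standard formula: the field names, thresholds, and priority buckets are assumptions made for the example.

```python
# Illustrative triage sketch combining the practices above: usage,
# exploitability (EPSS), exposure, and data-sensitivity context.
# Field names, thresholds, and buckets are assumptions, not a standard.

def triage(finding, epss_threshold=0.1):
    """Return a coarse priority bucket for a vulnerability finding."""
    # Usage matters / declutter: artifacts not deployed anywhere
    # can be deferred (or removed from storage entirely).
    if not finding["in_use"]:
        return "defer"
    # Understand exploitability: active or likely exploitation goes first.
    if finding["known_exploited"] or finding["epss"] >= epss_threshold:
        # Gauge exposure: internet-facing exploitable issues are urgent,
        # but internally exposed ones are not fully deprioritized.
        return "urgent" if finding["internet_exposed"] else "high"
    # Add context: sensitive production systems outrank dev systems.
    return "medium" if finding["handles_sensitive_data"] else "low"

example = {
    "in_use": True,
    "known_exploited": False,
    "epss": 0.42,
    "internet_exposed": True,
    "handles_sensitive_data": True,
}
print(triage(example))  # urgent
```

A real implementation would pull these fields from asset inventory, deployment, and threat-intelligence data rather than hand-written dictionaries.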
While CVSS is widely adopted as the de facto starting point, it lacks the context needed to make useful risk calculations, and that leads to wasted resources remediating the wrong issues. Adopting an approach and mindset focused on efficiency in vulnerability management means organizations can direct their efforts toward the vulnerabilities that actually matter in their environment and make the greatest positive impact on their risk posture.