Estimating CVSS v3 Scores for 100,000 Older Vulnerabilities
By Ben Edwards
The first EPSS model scored only recent vulnerabilities – those with CVSS v3.1 metrics – so one of the goals of the second model was to score all 170,000+ CVEs. But since the older vulnerabilities were scored only with CVSS v2.0, this presented a problem: we needed a way to score (or at least estimate the metric values for) roughly 100,000 older vulnerabilities. This article explains how we did just that.
Successfully scoring older vulnerabilities required complete data on all of them, and a key piece of that data is the CVSS v3 metrics. However, CVSS v3 assessments are mostly available only for vulnerabilities published after 2015.

Because of the large increase in the number of vulnerabilities over the last five years, however, there is ample data from which to infer the CVSS v3 metrics for older vulnerabilities. To accomplish this, much of the data used as input to EPSS was also used to train an Artificial Neural Network (ANN) to predict the CVSS v3 base vector.
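To make this concrete, here is a minimal sketch (not the EPSS team's actual code) of such a network: a shared feature vector feeds one softmax classification head per CVSS v3 base metric, so a single model predicts the full base vector jointly. The feature count and layer sizes are illustrative assumptions.

```python
# Illustrative sketch: an ANN with one classification head per CVSS v3
# base metric. Layer sizes and input features are assumptions, not the
# actual EPSS architecture.
from tensorflow.keras import layers, Model

# The 8 CVSS v3 base metrics and their possible values.
BASE_METRICS = {
    "AV": ["N", "A", "L", "P"],  # Attack Vector
    "AC": ["L", "H"],            # Attack Complexity
    "PR": ["N", "L", "H"],       # Privileges Required
    "UI": ["N", "R"],            # User Interaction
    "S":  ["U", "C"],            # Scope
    "C":  ["N", "L", "H"],       # Confidentiality impact
    "I":  ["N", "L", "H"],       # Integrity impact
    "A":  ["N", "L", "H"],       # Availability impact
}

def build_model(n_features: int) -> Model:
    inputs = layers.Input(shape=(n_features,))
    x = layers.Dense(256, activation="relu")(inputs)
    x = layers.Dense(128, activation="relu")(x)
    # One softmax head per metric: the network predicts all 8 jointly.
    outputs = {
        name: layers.Dense(len(values), activation="softmax", name=name)(x)
        for name, values in BASE_METRICS.items()
    }
    model = Model(inputs, outputs)
    model.compile(
        optimizer="adam",
        loss={name: "sparse_categorical_crossentropy" for name in BASE_METRICS},
    )
    return model
```

Each head is trained against integer-encoded metric values (e.g., AV:N → 0); under this setup a predicted vector is "exactly correct" only when all eight heads agree with the published vector.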
The model was developed using time-stratified 8-fold cross validation on the set of vulnerabilities that had both CVSS v2 and CVSS v3 metrics. We achieved 75% accuracy in predicting the exact CVSS v3 vector, and each of the eight individual metrics was predicted with greater than 93% accuracy.
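One reading of "time-stratified" here (an assumption about the exact procedure, which the post does not spell out) is that folds are balanced across publication years, so every fold contains vulnerabilities from every era rather than one fold holding all the oldest CVEs. A round-robin sketch:

```python
# Sketch of time-stratified k-fold assignment: within each publication
# year, CVEs are dealt round-robin into folds, so every fold spans all
# time periods. The EPSS team's exact stratification may differ.
from collections import defaultdict

def time_stratified_folds(cve_years, n_folds=8):
    """cve_years: list of (cve_id, year) pairs. Returns a fold index per CVE."""
    by_year = defaultdict(list)
    for i, (_, year) in enumerate(cve_years):
        by_year[year].append(i)
    fold_of = [0] * len(cve_years)
    for year in sorted(by_year):
        for j, i in enumerate(by_year[year]):
            fold_of[i] = j % n_folds  # round-robin within each year
    return fold_of
```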

This equates to 88% accuracy when considering only the vulnerability “severity” (None/Low/Medium/High/Critical). Moreover, for 99.9% of the predictions the ANN predicted at least 4 of the 8 metric values correctly. The model also performed well across all time periods.
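The severity bands referenced above are the qualitative ratings defined in the CVSS v3.1 specification, which map the 0.0–10.0 base score onto five labels:

```python
# Qualitative severity ratings from the CVSS v3.1 specification.
def severity(base_score: float) -> str:
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"  # 9.0 - 10.0
```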

Future work may focus on improving these predictions. Predicted CVSS v3 values are included as EPSS inputs only when published CVSS v3 scores are unavailable, and an additional variable indicating whether each CVSS v3 score was predicted or taken from the original vulnerability record is included as well.
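A sketch of that fallback-plus-flag logic (the column names here are assumptions for illustration, not the actual EPSS feature names):

```python
# Use the published CVSS v3 vector when one exists, otherwise the ANN's
# prediction, and record which source was used as an extra model input.
import pandas as pd

def merge_cvss_features(df: pd.DataFrame) -> pd.DataFrame:
    has_published = df["cvss_v3_vector"].notna()
    df["cvss_v3_merged"] = df["cvss_v3_vector"].where(
        has_published, df["cvss_v3_predicted"]
    )
    df["cvss_v3_is_predicted"] = (~has_published).astype(int)  # indicator
    return df
```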
This ability to produce CVSS v3.1 scores for 100,000 CVEs (i.e., those published prior to 2016) made a significant contribution to our ability to produce EPSS scores.