

hackuity.io

The NVD Just Changed the Rules. Here's What That Actually Means for Your Vulnerability Program

Author
Wilfrid BLANC

On April 15, 2026, NIST announced they can no longer keep up with the flood of CVE submissions hitting the National Vulnerability Database. The numbers tell the story: CVE submissions jumped 263% between 2020 and 2025, and even after enriching a record 42,000 CVEs last year, the backlog keeps growing.

So they're changing how NVD works. And if your vulnerability management workflow depends on NVD enrichment, you need to understand what's shifting.

What Actually Changed

Starting now, NVD will only prioritize enrichment for three categories of CVEs:
- Vulnerabilities in CISA's Known Exploited Vulnerabilities (KEV) catalog
- CVEs affecting U.S. federal government software
- CVEs hitting "critical software" as defined by Executive Order 14028 (operating systems, browsers, hypervisors, endpoint security, network tools, remote access software, and similar infrastructure)

Everything else? It might get enriched eventually. Or it might sit in "Not Scheduled" status indefinitely.

That means for a large chunk of new CVEs, you won't get:
- A validated CVSS score from NIST
- CPE matching data to map the vulnerability to your assets
- CWE categorization
- The independent analysis NVD used to provide

Instead, you'll get whatever the CNA (CVE Numbering Authority, usually the vendor) submitted. And the quality of that data varies wildly.

The Glasswing Effect: Why This Was Inevitable

If you've been following industry developments, this announcement shouldn't come as a shock. It's a direct consequence of what's happening with AI-driven vulnerability discovery.

Three weeks before NIST's announcement, Anthropic launched Project Glasswing. Their Claude Mythos AI model autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser. Some had existed for 27 years. Others had survived 5 million automated scans without detection.

On Firefox alone, Mythos found 90 times more exploits than the previous best model.

Here's the connection: AI doesn't just find more vulnerabilities. It fundamentally changes the volume of what flows into the CVE ecosystem. When a single AI model can discover vulnerabilities at machine speed, the number of CVEs requiring enrichment explodes beyond what any centralized team can handle.

According to Alex Stamos, former CISO of Facebook and Yahoo, we have roughly six months before open-weight models with similar capabilities proliferate to other researchers and threat actors. When that happens, the CVE submission rate won't just climb. It'll accelerate exponentially.

NIST's decision to prioritize enrichment isn't a policy preference. It's a triage response to an infrastructure problem that AI-driven discovery just made permanent. The vulnerability pipeline just got a firehose attached to it, and NVD can't drink fast enough.

Why This Matters for Your Daily Work

If you've been running vulnerability scans and relying on NVD data to prioritize what to patch, this creates some immediate friction.

The CVSS scores you see will be less reliable.

When a vendor scores their own vulnerability, they don't always call it like a neutral third party would. Some CNAs are rigorous. Others routinely downplay severity. Until now, NVD provided a second opinion. That safety net just disappeared for most CVEs.

Your scanner might not recognize affected products.

Without CPE data, your vulnerability scanner can't automatically match a CVE to the software in your environment. The CVE exists. It's published. But your tools can't tell you whether it applies to you. That's a detection gap, and it creates manual work.
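To make that gap concrete, here's a minimal sketch of scanner-style CPE matching. The data model is deliberately simplified and the field names are hypothetical, but the failure mode is the real one: a CVE published with no CPE entries silently matches nothing, even when you run the affected software.

```python
def affected_assets(cve_cpes, inventory):
    """Return inventory entries whose (vendor, product) appear in the CVE's CPE list."""
    targets = {(c["vendor"], c["product"]) for c in cve_cpes}
    return [a for a in inventory if (a["vendor"], a["product"]) in targets]

inventory = [
    {"host": "web-01", "vendor": "apache", "product": "http_server"},
    {"host": "db-01", "vendor": "postgresql", "product": "postgresql"},
]

enriched_cve = [{"vendor": "apache", "product": "http_server"}]  # NVD added CPEs
unenriched_cve = []  # CNA-only record, no CPE data yet

print(affected_assets(enriched_cve, inventory))    # matches web-01
print(affected_assets(unenriched_cve, inventory))  # matches nothing: detection gap
```

The second call is the trap: an empty result looks identical to "we're not affected," which is why unenriched CVEs need a separate review path rather than silent dismissal.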

Prioritization logic breaks.

Most security teams use CVSS scores as an input, even if they layer on context like exploitability and asset criticality. If a CVE shows up with no score, how do your workflows handle it? Does it get ignored? Dropped into a manual review queue? Treated as low severity by default?

None of those options are great when AI is discovering thousands of new vulnerabilities every week.
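One defensible answer, sketched below in Python with hypothetical field names: treat KEV membership as an override, and route unscored CVEs to manual review instead of letting them default to low severity.

```python
def triage(cve):
    """Route a CVE when an NVD-validated CVSS score may be absent.

    The dict keys ("kev", "cvss") are illustrative, not a real schema.
    """
    if cve.get("kev"):
        return "patch_now"          # active exploitation beats any score debate
    score = cve.get("cvss")         # may be None for unenriched CVEs
    if score is None:
        return "manual_review"      # unscored is not the same as harmless
    return "patch_now" if score >= 7.0 else "backlog"

print(triage({"kev": True}))        # patch_now
print(triage({"cvss": None}))       # manual_review
print(triage({"cvss": 8.1}))        # patch_now
```

The key design choice is the explicit `manual_review` branch: it makes the "no score" case visible in your metrics instead of burying it.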

The Real Problem: Discovery Isn't Your Bottleneck Anymore

The Glasswing findings revealed something uncomfortable about the current state of vulnerability management: AI can discover thousands of vulnerabilities in hours, but organizations still remediate at human speed.

Claude Mythos proved we're exceptionally good at finding flaws. The open question is whether organizations can fix them before attackers exploit them.

The NVD announcement is a symptom of this deeper shift. When AI can surface thousands of vulnerabilities overnight, the bottleneck isn't "what are our vulnerabilities?" It's "which ones actually threaten us, what do we fix first, and how do we prove it's fixed?"

Discovery is solved. The unsolved problem is operationalizing remediation at scale.

What You Can Do About It

The good news: you don't have to rely on NVD as your single source of vulnerability intelligence. There are practical steps you can take now to reduce your dependence on NVD enrichment quality.

Use Multiple Intelligence Sources

CISA KEV should already be a top priority in your workflow. If a vulnerability is being actively exploited in the wild, that matters more than any CVSS score. Make sure KEV listings feed directly into your prioritization model.

Beyond KEV, pull from multiple vulnerability and exploit intelligence feeds. CNA advisories directly from vendors provide context that NVD won't. Threat intelligence feeds tell you when exploits are circulating, even before they hit KEV. Commercial intelligence providers aggregate and validate data from dozens of sources simultaneously.

The era of "NVD is the truth" is over. Smart vulnerability programs correlate multiple feeds and triangulate severity signals rather than depending on a single database.
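Here's one way that triangulation can look in practice. This is a minimal sketch, assuming hypothetical feed fields (`cna_cvss`, `third_party_cvss`, `epss`, `kev`); the thresholds are illustrative, not a recommendation.

```python
def triangulate_severity(signals):
    """Combine severity signals from several feeds into one label.

    Strategy: trust the worst credible score, and let exploitation
    evidence (KEV, high EPSS) override numeric scores.
    """
    if signals.get("kev"):
        return "critical"           # in-the-wild exploitation trumps CVSS
    scores = [s for s in (signals.get("cna_cvss"),
                          signals.get("third_party_cvss")) if s is not None]
    worst = max(scores, default=None)
    if worst is None:
        return "unscored"           # no feed scored it: route to manual review
    if signals.get("epss", 0.0) >= 0.5 and worst >= 4.0:
        return "high"               # likely to be exploited soon
    return "high" if worst >= 7.0 else "medium" if worst >= 4.0 else "low"

print(triangulate_severity({"cna_cvss": 5.0, "third_party_cvss": 8.2}))  # high
print(triangulate_severity({"cna_cvss": 5.0, "epss": 0.7}))              # high
```

Note how a self-serving vendor score of 5.0 gets corrected upward by either a third-party score or exploitation likelihood, which is exactly the second opinion NVD used to provide.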

Build Context From Your Own Environment

Here's a shift in thinking that helps: stop asking "how severe is this CVE according to NVD?" and start asking "what's the actual risk of this vulnerability in my environment?"

That requires knowing:
- What assets you have and where they are
- Whether those assets are exposed to the internet or isolated
- How critical those systems are to your business
- Whether an exploit exists and whether it's being used

When you combine asset intelligence with threat intelligence and enrichment data from multiple sources, you build a more accurate risk picture than CVSS alone ever provided. This is the shift from vulnerability management to exposure management.

Automate Contextual Risk Assessment

Manual triage doesn't scale when vulnerability discovery accelerates. You need systems that automatically evaluate risk by combining:

- Multi-source severity signals (CVSS from multiple CNAs, EPSS exploitability scores, KEV status)
- Asset context (exposure level, business criticality, compensating controls)
- Threat intelligence (active exploitation, available exploits, attacker interest)

The goal is to automate the question "does this matter to us?" so your team focuses on decisions, not data gathering.
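As a sketch of what that automation might look like, the function below blends a base severity with asset context. All field names and weights are illustrative assumptions, not Hackuity's actual scoring model.

```python
def contextual_risk(cve, asset):
    """Blend severity with asset context into a single 0-10 risk figure.

    Weights and fields are illustrative; a real model would also factor
    in compensating controls and attacker interest.
    """
    base = cve.get("cvss") or 5.0        # unscored CVE: assume mid until reviewed
    if cve.get("kev"):
        base = 10.0                      # actively exploited: maximum base
    exposure = 1.0 if asset["internet_facing"] else 0.5
    criticality = {"low": 0.5, "medium": 0.8, "high": 1.0}[asset["criticality"]]
    return round(base * exposure * criticality, 1)

# Same CVE, very different answers depending on where it lives:
print(contextual_risk({"cvss": 9.8}, {"internet_facing": True, "criticality": "high"}))
print(contextual_risk({"cvss": 9.8}, {"internet_facing": False, "criticality": "medium"}))
```

The two calls show the point of the whole section: "this CVE is critical" and "this CVE is critical to you" are different questions with different answers.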

Automate Remediation Workflows

When AI discovers vulnerabilities at machine speed but your team remediates at human speed, manual processes collapse. Email chains don't scale. Spreadsheets become obsolete the moment you export them.

You need systems that automatically route prioritized issues to the right teams with the right context. That means knowing which team owns which assets, what remediation guidance applies to each vulnerability, and how to track progress without generating alert fatigue.

Automation doesn't replace human judgment. It removes the friction between "we know what's broken" and "the right people are fixing it."
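A minimal sketch of that routing step, assuming a hypothetical asset-to-team ownership map. Findings with no known owner land in a visible triage queue rather than disappearing into an inbox.

```python
def route(findings, ownership):
    """Group prioritized findings by owning team.

    `ownership` maps asset name -> team; both structures are illustrative.
    """
    queues = {}
    for f in findings:
        team = ownership.get(f["asset"], "triage")   # unknown owner: triage queue
        queues.setdefault(team, []).append(f)
    return queues

ownership = {"web-01": "platform", "db-01": "data-eng"}
findings = [
    {"asset": "web-01", "cve": "CVE-2026-0001"},
    {"asset": "db-99", "cve": "CVE-2026-0002"},   # asset nobody has claimed
]
print(route(findings, ownership))
```

The fallback queue matters as much as the routing: unowned assets are where email-and-spreadsheet workflows silently drop the ball.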

Where Hackuity Stands

You might be wondering what this means for Hackuity's vulnerability intelligence.

Short answer: our data quality isn't impacted.

We built Hackuity around the assumption that centralized enrichment would eventually become incomplete. Our vulnerability and exploit intelligence comes from over 100 independent sources, including CISA KEV, CNA advisories, commercial threat feeds, and security research databases. When NVD data is incomplete, the platform automatically correlates signals from alternative sources.

Beyond multi-source intelligence, Hackuity automates the heavy lifting that NVD's changes now make manual for most teams:

- Severity qualification happens automatically by cross-referencing multiple CVSS sources, exploit availability, and active exploitation indicators. This creates a more reliable severity assessment than any single database provides.

- Contextual risk assessment evaluates every vulnerability against your specific environment. Hackuity's True Risk Score (TRS) combines vulnerability severity with vulnerability intelligence, asset exposure, business criticality, network segmentation, and compensating controls to determine whether a CVE genuinely threatens you. This is the difference between "this CVE is critical" and "this CVE is critical to you."

- Remediation orchestration routes prioritized vulnerabilities to the right teams with environment-specific guidance, then tracks progress through closure. No spreadsheets, no email chains, no manual follow-up.

When Project Glasswing demonstrated that AI could discover vulnerabilities faster than organizations could patch them, it validated what we'd been building toward: the future of vulnerability management is exposure management. Not finding more vulnerabilities. Operationalizing remediation at the speed AI is discovering them.

What This Means Going Forward

Here's what matters: NVD's announcement isn't an isolated event. It's a preview of what happens when AI-driven discovery collides with human-speed processes.

The teams that adapt won't be the ones waiting for perfect data. They'll be the ones who can make good decisions with imperfect information and execute those decisions before attackers exploit the gaps.

If your vulnerability program still depends on a single source of truth, now's the time to build redundancy. If you're still manually triaging thousands of findings, now's the time to automate context. And if remediation is still happening through email and spreadsheets, now's the time to fix that workflow.

The gap between finding vulnerabilities and fixing them just became the defining challenge of modern cybersecurity. The organizations that close that gap won't be the ones with perfect visibility. They'll be the ones with the operational infrastructure to turn overwhelming data into verified risk reduction.

Security isn't about having perfect information. It's about making the right decisions fast enough to matter.
