Web application vulnerabilities ranked by real-world exploitation frequency — not theoretical risk. Drawn from breach reports, CVE data, and incident response findings.
OWASP does important work. But its list is built on surveys and expert opinion. Ours is built on evidence.
Ranked by confirmed exploitation frequency across breach reports, IR engagements, and public CVE data.
| # | Vulnerability |
|---|---|
These OWASP categories don't appear in the AWASP Top Ten because the evidence doesn't support their inclusion as distinct, exploitable findings.
Transparent about where the data comes from, how we used it, and where it runs out.
The OWASP Top 10 is built from testing data — what security scanners and pentesters find when they examine applications. That’s useful, but it tells you what’s discoverable, not what’s actually being exploited. We wanted to know what attackers are actually doing, so we went looking for exploitation data instead.
We filtered out anything that isn't a web application vulnerability. CISA's Known Exploited Vulnerabilities (KEV) catalog is full of use-after-free bugs, buffer overflows, and kernel privilege escalations. Those are real and serious, but they aren't web app security.
We kept the focus on vulnerabilities in things you access over HTTPS: login pages, APIs, web management interfaces, and web application frameworks.
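A minimal sketch of that first-pass filter, assuming the CISA KEV JSON layout (a `vulnerabilities` array whose entries carry `vulnerabilityName` and `shortDescription` fields) and a keyword heuristic of our own invention; real triage still needs manual review:

```python
# Keyword heuristics — our assumption, not CISA's taxonomy.
# Memory-safety and kernel terms suggest a non-web-app bug;
# web terms suggest something reachable over HTTPS.
NON_WEB = ("use after free", "buffer overflow", "privilege escalation", "kernel")
WEB = ("sql injection", "cross-site", "authentication bypass", "deserialization",
       "path traversal", "ssrf", "management interface", "login", "api")

def is_web_app_vuln(entry):
    """Crude first-pass triage of one KEV entry."""
    text = (entry.get("vulnerabilityName", "") + " " +
            entry.get("shortDescription", "")).lower()
    if any(term in text for term in NON_WEB):
        return False
    return any(term in text for term in WEB)

def filter_kev(kev_catalog):
    """Keep only entries that look like web application vulnerabilities."""
    return [e for e in kev_catalog["vulnerabilities"] if is_web_app_vuln(e)]

# Tiny fabricated sample, for illustration only:
sample = {"vulnerabilities": [
    {"cveID": "CVE-0000-0001",
     "vulnerabilityName": "Example SQL Injection",
     "shortDescription": "SQL injection in the product's login page."},
    {"cveID": "CVE-0000-0002",
     "vulnerabilityName": "Example Kernel Bug",
     "shortDescription": "Use after free in a kernel driver."},
]}
kept = filter_kev(sample)  # keeps only the SQL injection entry
```

The point of the sketch is the shape of the pipeline, not the word lists: every automated pass like this over-  and under-matches, which is why the kept set gets reviewed by hand.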
OWASP’s data contributors are security testing vendors (Veracode, Contrast Security, HackerOne). No breach investigation firms, not Mandiant, not CrowdStrike, not Palo Alto Networks Unit 42, contribute data to the OWASP Top 10.
OWASP uses incidence rate (percentage of apps with the vulnerability) rather than exploitation frequency. Two of their ten categories are chosen by community survey to compensate for things testing tools can’t find. Their own documentation says results are “largely limited to what the industry can test for in an automated fashion.”
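The two metrics can rank the same vulnerabilities in opposite orders. A toy illustration with entirely made-up numbers (not OWASP's data and not ours): a class that testers find everywhere can still be exploited rarely, and vice versa.

```python
# All figures below are fabricated, for illustration only.
apps_tested = 1000
findings = {                       # apps in which testers found each class
    "Security Misconfiguration": 450,
    "SQL Injection": 60,
}
exploits = {                       # confirmed real-world exploitation events
    "Security Misconfiguration": 12,
    "SQL Injection": 300,
}

def incidence_rate(vuln):
    """OWASP-style metric: share of tested apps containing the flaw."""
    return findings[vuln] / apps_tested

# Ranked by incidence, misconfiguration wins; ranked by exploitation, SQLi does.
top_by_incidence = max(findings, key=incidence_rate)
top_by_exploitation = max(exploits, key=exploits.get)
```

Same underlying vulnerabilities, two different number-one spots, depending on which question you ask the data.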
None of that makes OWASP wrong or useless. It just means it answers a different question from ours.