The Model Context Protocol Registry passed 10,000 servers earlier this month. Of those, 2,703 link to a public GitHub repository — the rest are remote endpoints (HTTPS-served MCP). I sampled 300 of the GitHub-backed servers at random, scored 202 successfully (the other 98 link to private or missing repos), and looked for patterns.
The pattern is striking: 83% of the servers I scored carry at least one disqualifier flag. Not a soft warning — a hard signal that the project lacks something a trustworthy MCP server should have. Average legitimacy across the sample is 3.05 out of 10.
TL;DR
Mean composite: 5.35/10 (vs. 6.17 for ClawHub). Median: 5.38. Range: 2.93–7.64. Fifteen percent of servers landed in the "New" tier — too few signals to evaluate confidently. Zero servers declared a security posture in their MCP server metadata. Zero known CVEs. Zero CISA KEV hits.
Tier Distribution
Three Verified servers out of 202. The MCP Registry is a long tail — one verified head, a dense middle of established-but-unproven projects, and a thick layer of new entries with too few signals to judge. The shape is what you'd expect from a fast-growing protocol with a low publishing barrier.
The Disqualifier Iceberg
Only five servers tripped to the Blocked tier — but tier doesn't tell the full story. 168 of 202 servers (83%) carry at least one disqualifier flag. Disqualifiers don't always hard-gate to blocked; they apply soft caps, score penalties, and tier downgrades depending on severity.
The two big drivers — single-author (58%) and no license (21%) — are both fixable. Both reflect publishing speed exceeding governance maturity. The MCP server template ships, the developer iterates fast, the LICENSE file gets skipped, the second contributor never lands. Multiply by 10,000 registry entries and you have a structural pattern.
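The severity-dependent handling described above — hard gates, soft caps, penalties — reduces to a small scoring pass. The sketch below is an illustrative reconstruction, not the mcpskills.io engine: the flag names mirror the ones reported here, but every penalty and cap value is invented.

```typescript
// Illustrative disqualifier pass. Hard flags gate to Blocked outright;
// soft flags subtract a penalty and/or cap the composite score.
// All rule values here are invented for illustration.
type Flag = "SAFETY_BLOCK" | "SINGLE_AUTHOR_LOW_ADOPTION" | "NO_LICENSE";

interface FlagRule {
  hardGate?: boolean; // force the Blocked tier
  penalty?: number;   // subtracted from the composite score
  scoreCap?: number;  // composite score cannot exceed this
}

const RULES: Record<Flag, FlagRule> = {
  SAFETY_BLOCK: { hardGate: true },
  SINGLE_AUTHOR_LOW_ADOPTION: { penalty: 0.5, scoreCap: 6.5 },
  NO_LICENSE: { penalty: 1.0 },
};

function applyDisqualifiers(composite: number, flags: Flag[]) {
  let score = composite;
  let blocked = false;
  for (const flag of flags) {
    const rule = RULES[flag];
    if (rule.hardGate) blocked = true;
    if (rule.penalty !== undefined) score -= rule.penalty;
    if (rule.scoreCap !== undefined) score = Math.min(score, rule.scoreCap);
  }
  return { score: Math.max(0, score), blocked };
}
```

This shape explains the iceberg: a server can carry a flag, eat a half-point penalty, and still land in Established — which is why 83% flagged coexists with only five Blocked.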
Where the Weakness Lives
Average dimension scores across the full sample (n=202):
Legitimacy averages 3.05/10 — not weak, collapsed. This dimension covers author credibility, community adoption, contributor diversity, and download traction. Most MCP Registry entries have one author, dozens of stars or fewer, and no second contributor. The protocol is young; most servers are someone's weekend project; the registry doesn't filter for traction.
The contrast with ClawHub is sharp. ClawHub averaged Legit 7.36 because its top-ranked entries are mature multi-feature applications (cherry-studio, siyuan, casdoor) that ship with skill manifests. MCP Registry is dominated by purpose-built single-server projects, which don't have the community signals built up yet.
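To see why single-author, low-star projects bottom out on this dimension, here is a toy legitimacy score built from the adoption signals named above (contributors, stars, downloads). The log scaling and saturation points are invented for illustration and are not the production formula.

```typescript
// Toy legitimacy score: log-scale each adoption signal into 0..1,
// then average and stretch to 0..10. Saturation points are invented.
function legitimacy(contributors: number, stars: number, weeklyDownloads: number): number {
  const scale = (value: number, saturation: number) =>
    Math.min(1, Math.log10(1 + value) / Math.log10(1 + saturation));
  const parts = [
    scale(contributors, 20),        // saturates around 20 contributors
    scale(stars, 5000),             // saturates around 5k stars
    scale(weeklyDownloads, 100000), // saturates around 100k downloads/week
  ];
  return (10 * parts.reduce((a, b) => a + b, 0)) / parts.length;
}
```

Plug in the modal registry entry — one author, a dozen stars, no npm downloads — and the result sits in the low single digits, which is roughly where the sample's 3.05 average lives.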
Top 10 Most Trusted
| Server | Score | Tier | Stars |
|---|---|---|---|
| Azure/containerization-assist | 7.64 | Verified | 40 |
| SonarSource/sonarqube-mcp-server | 7.39 | Verified | 542 |
| ArcadeData/arcadedb | 7.29 | Verified | 830 |
| Dave-London/Pare | 7.02 | Established | 121 |
| JustinBeckwith/linkinator-mcp | 6.95 | Established | 3 |
| JanDeDobbeleer/oh-my-posh | 6.93 | Established | 22,348 |
| ondata/ckan-mcp-server | 6.90 | Established | 42 |
| CodeAlive-AI/codealive-mcp | 6.88 | Established | 78 |
| Maxim-Mazurok/teams-api | 6.84 | Established | 2 |
| Elnora-AI/elnora-mcp-server | 6.81 | Established | 3 |
The top server has 40 stars. The contrast with ClawHub's top (38,691 stars) is telling. MCP Registry trust isn't dominated by popularity — it's dominated by maintainer discipline. Azure publishes containerization-assist with full LICENSE + governance + multiple contributors + clean CI; popularity hasn't caught up yet, but the trust signal is already there.
The Five Blocked Servers
| Server | Score | Disqualifiers |
|---|---|---|
| XXO47OXX/spa-reader-mcp | 4.31 | SAFETY_BLOCK + SINGLE_AUTHOR_LOW_ADOPTION |
| ariffazil/arifOS | 4.55 | SAFETY_BLOCK |
| JuanCF/scrcpy-mcp | 4.82 | SAFETY_BLOCK |
| abbacusgroup/Cortex | 4.84 | SAFETY_BLOCK |
| forgesworn/402-mcp | 5.82 | SAFETY_BLOCK |
All five hit the static-analysis safety scanner. The most common pattern: shell execution on user input from MCP tool arguments, or credential strings appearing in source code paths the scanner can read. None are confirmed malicious — the safety scanner flags patterns, not intent. But the patterns are worth investigating before installing.
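A minimal version of the two patterns — shell execution fed by interpolated input, and credential-looking literals — can be approximated with naive static checks like the ones below. The real scanner is more sophisticated (AST and taint analysis rather than regexes); these expressions are illustrative assumptions, not its actual rules.

```typescript
// Naive static checks for the flagged patterns. Regexes are illustrative:
// real scanners track taint from MCP tool arguments through the AST.
const SHELL_EXEC_ON_INPUT = /\b(?:exec|execSync|spawn)\s*\(\s*[`"'][^)]*\$\{/;
const CREDENTIAL_LITERAL =
  /\b(?:api[_-]?key|token|password)\b\s*[:=]\s*["'][A-Za-z0-9_\-]{16,}["']/i;

function scanSource(source: string): string[] {
  const findings: string[] = [];
  if (SHELL_EXEC_ON_INPUT.test(source)) findings.push("shell-exec-on-input");
  if (CREDENTIAL_LITERAL.test(source)) findings.push("hardcoded-credential");
  return findings;
}
```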
The Good News
Zero known CVEs. Zero CISA KEV hits.
The known_vulnerabilities signal cross-referenced 81 servers (those with package.json files in the registry) against OSV.dev, CISA KEV, and FIRST.org EPSS. Result: no unpatched criticals, no actively-exploited vulnerabilities. The MCP ecosystem is young enough that the published-CVE problem hasn't accumulated yet — a gap worth closing before it does.
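The cross-reference logic reduces to one decision plus one weighting: any CVE on the KEV list hard-gates, and everything else is weighted by its EPSS exploit probability rather than counted raw. A sketch, with the return shape and weighting my own assumptions:

```typescript
// Sketch of the vulnerability cross-reference: OSV findings are checked
// against the CISA KEV set (hard gate) and otherwise weighted by EPSS
// 30-day exploit probability. Field names are assumptions.
interface Vuln {
  cve: string;
  epss: number; // EPSS probability, 0..1
}

function assessVulns(vulns: Vuln[], kev: Set<string>): { blocked: boolean; riskWeight: number } {
  // Any CVE with confirmed in-the-wild exploitation hard-gates the tier.
  if (vulns.some((v) => kev.has(v.cve))) return { blocked: true, riskWeight: 1 };
  // Otherwise weight by summed exploit probability, so ten
  // unlikely-to-be-exploited CVEs don't automatically outrank one likely one.
  const riskWeight = vulns.reduce((sum, v) => sum + v.epss, 0);
  return { blocked: false, riskWeight };
}
```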
What This Means for You
A contributors.length >= 2 OR stars >= 50 rule would prune the long tail. Neither check is a heavy lift; both raise the floor without blocking real publishers.
What This Data Doesn't Tell You
Sample size: 202 of 2,703 GitHub-backed servers (~7.5% of the GitHub-linked registry). Selection: random shuffle of the registry, no stratification by stars or recency. The 98 unfetched repos are a separate signal — they're servers the registry trusts enough to publish but whose source code isn't actually accessible. That's its own quality issue, just outside the scope of this analysis.
Methodology
Discovery: Hit https://registry.modelcontextprotocol.io/v0/servers with cursor-based pagination across 101 pages. 10,100 total servers, 2,703 unique GitHub-backed (the rest are remote-only HTTPS endpoints).
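The discovery step is ordinary cursor pagination. The sketch below separates the loop from the HTTP call so the page source is swappable; the servers / metadata.next_cursor response shape is my assumption about the registry API, not a documented contract.

```typescript
// Cursor-pagination loop, generic over the page fetcher so the HTTP layer
// (fetch against registry.modelcontextprotocol.io/v0/servers) stays
// swappable and testable. Response field names are assumptions.
interface Page<T> {
  servers: T[];
  metadata?: { next_cursor?: string };
}

async function discoverAll<T>(fetchPage: (cursor?: string) => Promise<Page<T>>): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.servers);
    cursor = page.metadata?.next_cursor; // undefined on the last page
  } while (cursor);
  return all;
}
```

Against the live registry, fetchPage would wrap a fetch of the /v0/servers endpoint with the cursor passed as a query parameter and parse the JSON body.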
Sampling: Random shuffle of the 2,703, take the first 300 not already in our score cache.
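The sampling step above — shuffle, then take the first N repos not already cached — is a few lines. A Fisher–Yates shuffle is shown; treating the score cache as a set of repo identifiers is an implementation assumption.

```typescript
// Random sample: Fisher–Yates shuffle, then take the first `n` entries
// whose repo identifier is not already in the score cache. The `rand`
// parameter is injectable so the shuffle can be made deterministic.
function sampleUncached(
  repos: string[],
  cache: Set<string>,
  n: number,
  rand: () => number = Math.random
): string[] {
  const shuffled = [...repos];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled.filter((r) => !cache.has(r)).slice(0, n);
}
```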
Scoring: every server ran through the production mcpskills.io engine — the same 15-signal algorithm that powers the public scanner. Skills Mode auto-detected via MCP keywords + server.json presence. The known_vulnerabilities signal queries OSV.dev (unified GHSA + npm + PyPA + Go + RustSec), CISA KEV (actively-exploited vulnerabilities), and FIRST.org EPSS (30-day exploit probability) for any package.json discovered in the repo.
Full algorithm: /methodology.
Companion report: State of ClawHub Trust — April 2026.
Data sources
Every score in this report is reproducible from public data. The trust algorithm itself is an opinionated combination, but the inputs are not.
- MCP Registry API — official catalog of Model Context Protocol servers; cursor-paginated server list with repository URLs and metadata. registry.modelcontextprotocol.io
- GitHub REST API — repository metadata, contributor graph, commit cadence, release history, issue responsiveness, license detection, file tree (for SKILL.md / server.json detection and source scanning). docs.github.com/en/rest
- OpenSSF Scorecard — security posture signals (branch protection, signed releases, dependency-update tooling, dangerous workflow patterns). scorecard.dev
- OSV.dev — unified vulnerability database (GHSA + npm + PyPA + Go + RustSec) queried at the currently-installable version. osv.dev
- CISA Known Exploited Vulnerabilities (KEV) — federal authoritative catalog of vulnerabilities with confirmed in-the-wild exploitation. Any CVE on KEV hard-gates the tier to blocked. cisa.gov/known-exploited-vulnerabilities-catalog
- FIRST.org EPSS — Exploit Prediction Scoring System (30-day exploit probability for any CVE). Used to weight non-KEV vulnerabilities by exploit likelihood. first.org/epss
- npm Registry — package metadata, weekly download counts, maintainer graph; queried during partial scoring for npm-published servers without GitHub source. docs.npmjs.com
Prior research that motivated this work
- Invariant Labs — mcp-scan: open-source vulnerability scanner for MCP servers focused on prompt-injection patterns. github.com/invariantlabs-ai/mcp-scan
- Trail of Bits — "ClawHavoc" (Jan 2026): 1,184 malicious AI skills discovered on a major skill marketplace; demonstrated that registry presence is not a trust signal. blog.trailofbits.com
- Snyk — "ToxicSkills" study (Apr 2025): 36.82% of sampled skills had at least one security flaw — a sister-ecosystem result that frames the MCP findings here. arxiv.org/abs/2504.03767
Score your own MCP server
Free trust report — paste any GitHub repo, npm package name, or registry URL.
Open Scanner