OpenClaw skills get system-level access — filesystem, shell, network, browser, credentials. Publishing to ClawHub requires a GitHub account that's at least seven days old. There's no code signing, no human review, no audit trail. VirusTotal scanning was added in February 2026 — but the OpenClaw team explicitly said it "cannot assess trust, detect prompt injection, or evaluate code quality."

So I scored 200 of them.

TL;DR

200 skills scored · 10.5% Verified · 81% Established · 7.5% Blocked

Mean composite score: 6.17/10. Median: 6.20. The sample represents 1.36 million GitHub stars. Zero skills declared their security posture in the SKILL.md frontmatter — not 1.5%, not "rare" — literally none.

Tier Distribution

Verified (7.0+, strong dimensions): 10.5%
Established (5.0+, sufficient signals): 81%
Blocked (disqualifier present): 7.5%
New (insufficient signal coverage): 1%

The ClawHub ecosystem clusters in the middle band. Twenty-one skills earned the Verified tier; fifteen tripped a hard disqualifier. The 162 in the middle are "good enough" — established projects with sufficient signal coverage but missing one or more pieces that would push them up.
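
For readers who want the mechanics, here is a minimal sketch of the tier logic as summarized above. The composite cutoffs (7.0 and 5.0) come from this report; the definitions of "strong dimensions" and "sufficient signal coverage" are internal to the engine, so the values below are assumptions, and the production engine can tier borderline skills differently.

```python
def assign_tier(composite: float, dimensions: dict[str, float],
                disqualifiers: list[str], signals_covered: int) -> str:
    """Tier assignment as summarized in this report. The 5.0 dimension
    floor and the 8-signal coverage floor are illustrative assumptions,
    not the production engine's actual values."""
    if disqualifiers:                # any hard disqualifier blocks outright
        return "Blocked"
    if signals_covered < 8:          # assumed floor for "sufficient signals"
        return "New"
    if composite >= 7.0 and min(dimensions.values()) >= 5.0:  # assumed "strong"
        return "Verified"
    if composite >= 5.0:
        return "Established"
    return "New"                     # low composite without a disqualifier
```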

Where the Weakness Lives

Average dimension scores across the rich-data subset (n=31):

| Dimension | Mean score (/10) |
|---|---|
| Alive | 8.69 |
| Legit | 7.36 |
| Solid | 5.48 |
| Usable | 6.05 |

The pattern is consistent and counterintuitive: ClawHub skills are alive (8.69) and legit (7.36) — they have active commits, real maintainers, broad community adoption. Where they fall down is Solid (5.48). That's the dimension covering security posture, dependency health, tool safety, supply-chain safety, and known vulnerabilities.

The polished outer shell (popular repo, real org, recent commits) often hides a thinner security layer underneath. That's the gap a trust score is built to find.

Top 20 Most Trusted

| Skill | Score | Tier | Stars |
|---|---|---|---|
| moeru-ai/airi | 7.74 | Verified | 38,691 |
| activeloopai/deeplake | 7.71 | Verified | 9,103 |
| siyuan-note/siyuan | 7.53 | Verified | 43,024 |
| iOfficeAI/OfficeCLI | 7.52 | Verified | 2,157 |
| cablate/mcp-google-map | 7.50 | Verified | 277 |
| CherryHQ/cherry-studio | 7.45 | Verified | 44,582 |
| iOfficeAI/AionUi | 7.36 | Established | 22,688 |
| BlockRunAI/ClawRouter | 7.36 | Verified | 6,370 |
| HKUDS/nanobot | 7.33 | Verified | 41,075 |
| casdoor/casdoor | 7.31 | Verified | 13,495 |
| mindsdb/anton | 7.31 | Verified | 637 |
| afshinm/zerobox | 7.25 | Verified | 544 |
| vava-nessa/free-coding-models | 7.23 | Verified | 1,538 |
| 23blocks-OS/ai-maestro | 7.19 | Verified | 647 |
| yunionio/cloudpods | 7.18 | Verified | 2,886 |
| iamlukethedev/Claw3D | 7.14 | Verified | 1,399 |
| chrysb/alphaclaw | 7.10 | Verified | 1,234 |
| zhayujie/CowAgent | 7.08 | Verified | 43,775 |
| aiming-lab/AutoResearchClaw | 7.05 | Verified | 11,732 |
| openclaw/openclaw | 7.04 | Verified | 365,344 |

Notice the pattern: most of the top 20 are not skill libraries in the strict sense. They're full applications (cherry-studio, siyuan, casdoor, cloudpods) that ship with skill manifests as one feature among many. Their trust scores reflect the health of the broader project, not the focused craftsmanship of a single skill. Pure-play skill repos (cablate/mcp-google-map at 277 stars, mindsdb/anton at 637 stars) prove that small skills can earn Verified — but the head of the distribution is dominated by mature multi-feature codebases.

What Got Blocked

Fifteen skills hit a hard disqualifier. From the rich-data subset (n=31), the disqualifier breakdown looks like:

| Disqualifier | Count (n=31) |
|---|---|
| SUPPLY_CHAIN_RISK | 3 |
| SAFETY_BLOCK | 2 |

Across the rich-data subset, supply-chain risk (token exfiltration patterns or PR-target checkout in CI workflows) was the most common gating disqualifier — narrowly edging out direct safety-pattern detection. The full sample of 200 has 15 blocked, so the absolute counts above understate the ecosystem-wide rate; the distribution should be similar.
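
For concreteness, the "PR-target checkout" pattern looks like this in a GitHub Actions workflow: the pull_request_target trigger runs with the base repository's secrets in scope, while the job checks out and executes code from the untrusted PR head. The workflow below is an illustrative composite, not taken from any scored repo.

```yaml
# Illustrative only: the CI shape the SUPPLY_CHAIN_RISK disqualifier flags.
name: ci
on: pull_request_target            # elevated context: repo secrets available
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # untrusted PR code
      - run: npm install && npm test     # install scripts execute that code
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}              # exfiltration target
```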

Lowest 10 Composite Scores

| Skill | Score | Tier |
|---|---|---|
| miaoxworld/OpenClawInstaller | 4.45 | New |
| photon-hq/qclaw-wechat-client | 4.53 | New |
| SafeAI-Lab-X/ClawKeeper | 4.56 | Established |
| LAVARONG/wechat-automation-api | 4.66 | Established |
| xianyu110/clawbot | 4.69 | Established |
| yeuxuan/openclaw-docs | 4.77 | Established |
| golutra/golutra | 4.79 | Established |
| wwbin2017/bailing | 4.79 | Established |
| wexare-ai/openbrowserclaw | 4.83 | Established |
| slhleosun/EvoClaw | 4.84 | Established |

The Transparency Gap

The SKILL.md format supports a security frontmatter section where authors declare what credentials they touch and what permissions they need. They can also declare allowed-tools to constrain what shell or HTTP calls a skill is allowed to make. Both are optional. Both reward authors with bonus points on the tool_safety signal.
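
Since nothing in the sample populates these fields, the example below is hypothetical. The security and allowed-tools keys are the ones described above; the subfield layout and tool names are illustrative, not a quoted schema.

```yaml
---
name: weather-lookup              # hypothetical skill
description: Fetch a forecast for the agent's current task.
security:                         # declared by the author, not inferred
  credentials: []                 # touches no stored credentials
  permissions:
    - network: api.weather.example.com   # illustrative scoping syntax
allowed-tools:                    # constrains what the skill may invoke
  - WebFetch
  - Read
---
```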

Out of 200 skills surveyed: zero use the security field. Zero use allowed-tools.

That's the actual data: not "rare," literally none in the sample. The transparency adoption rate is exactly 0%. Authors who constrain their own skills get no market recognition, because the signal hasn't been priced in yet; authors who don't bother face no penalty either. The result is a flat zero.

This is a coordination problem masquerading as a security problem. The first agent platform that requires a populated security block as a publishing prerequisite would create a transparency floor overnight.

What This Means for You

If you publish OpenClaw skills:
Add a security block to your SKILL.md. Declare credentials accessed, permissions needed, and constrain allowed-tools. You'll be in a literal 0% category right now — first one in gets the entire transparency narrative for free, plus a bump on tool_safety that often nudges borderline composites into the Verified tier.
If you install OpenClaw skills:
10.5% of ClawHub is Verified. 7.5% is actively Blocked. The 81% in the middle is "probably fine" — but skills get system-level access, and "probably fine" is a low bar for that kind of trust. Score before you install.
If you build agent platforms:
A 0% transparency adoption rate is a coordination failure. Make the security block required, not optional, in your skill template; a minimal publish-gate sketch follows this list. The rate jumps to 100% the day you ship the requirement.
If you write security policy:
The Solid dimension is the universal weak spot — averaging 5.48 even in the rich-data subset. That's not malware; it's the slow accumulation of unpinned dependencies, missing OpenSSF Scorecard adoption, and unreviewed CI workflows. A published "OpenClaw skill security baseline" would do more for ecosystem trust than another safety scanner.
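
The publish-gate sketch promised above: parse the SKILL.md frontmatter and reject the publish when the security block or allowed-tools list is missing or empty. The security and allowed-tools keys match the ones discussed in this report; the function names and the notion of "populated" are illustrative.

```python
import sys
import yaml  # pip install pyyaml

def frontmatter(text: str) -> dict:
    """Extract the YAML frontmatter between the leading '---' fences."""
    parts = text.split("---", 2)      # before / frontmatter / body
    if not text.startswith("---") or len(parts) < 3:
        return {}
    return yaml.safe_load(parts[1]) or {}

def publish_problems(skill_md: str) -> list[str]:
    """Return the transparency requirements a SKILL.md fails to meet."""
    meta = frontmatter(skill_md)
    problems = []
    if not meta.get("security"):
        problems.append("missing populated 'security' block")
    if not meta.get("allowed-tools"):
        problems.append("missing 'allowed-tools' constraint")
    return problems

if __name__ == "__main__":
    issues = publish_problems(open(sys.argv[1], encoding="utf-8").read())
    if issues:
        sys.exit("publish rejected: " + "; ".join(issues))
    print("transparency floor met")
```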

What This Data Doesn't Tell You

Honest limitations: The mcpskills.io engine scores the project around a skill — repo health, author signals, security posture, dependency hygiene — not the runtime behavior of a skill at install or invocation. A skill could include prompt injection in its instruction text and still earn Verified if its repo is otherwise healthy. Static analysis catches obvious dangerous patterns in source files (eval, exec, credential exfiltration) but not novel runtime behavior or downstream agent context leaks. Runtime monitoring is on the roadmap; until it ships, treat the trust score as a strong prior, not a verdict.
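
To make "obvious dangerous patterns" concrete: static checks of this kind are essentially lexical matching. The rules below are illustrative of the category, not the engine's actual rule set, and they show exactly why novel runtime behavior slips through; a pattern that isn't in the table isn't found.

```python
import re

# Illustrative lexical rules; the production engine's rules differ.
DANGEROUS = {
    "dynamic-eval": re.compile(r"\b(eval|exec)\s*\("),
    "shell-spawn":  re.compile(r"child_process|subprocess\.(Popen|run|call)"),
    "env-harvest":  re.compile(r"os\.environ|process\.env"),
    "token-in-url": re.compile(r"https?://\S*[?&](token|key|secret)="),
}

def scan(source: str) -> list[str]:
    """Return the names of the dangerous patterns present in a source file."""
    return [name for name, pattern in DANGEROUS.items() if pattern.search(source)]

# scan("eval(input())") -> ["dynamic-eval"]
```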

Sample size: 200 of an estimated 13,700+ ClawHub skills. Selection: top 200 by GitHub star count from a 1,098-repo discovery pool. Stars correlate with project maturity but not with skill safety; large applications with skill manifests get oversampled at the head. The Top 20 reflects this. A weighted-by-skill-only analysis would surface a different leaderboard.

Methodology

Discovery: GitHub topic search (topic:openclaw, topic:openclaw-skill, topic:claw-skill, topic:clawhub-skill, topic:agent-skill) plus code search (filename:SKILL.md). 1,098 unique repos discovered; sampled the top 200 by GitHub stars.
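
The topic-search half of discovery is reproducible against the public GitHub search API. A minimal sketch follows; unauthenticated requests are heavily rate-limited and search results cap at 1,000 per query, so a real run needs an auth token plus the filename:SKILL.md code-search step as well.

```python
import requests

TOPICS = ["openclaw", "openclaw-skill", "claw-skill", "clawhub-skill", "agent-skill"]

def repos_for_topic(topic: str, pages: int = 3) -> set[str]:
    """Collect repo full names for one GitHub topic via the search API."""
    found: set[str] = set()
    for page in range(1, pages + 1):
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": f"topic:{topic}", "per_page": 100, "page": page},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30,
        )
        resp.raise_for_status()
        found |= {item["full_name"] for item in resp.json()["items"]}
    return found

pool: set[str] = set()
for topic in TOPICS:
    pool |= repos_for_topic(topic)
print(len(pool), "unique repos discovered")
```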

Scoring: every skill ran through the production mcpskills.io engine, the same 15-signal algorithm that powers the public scanner. Skills Mode was auto-detected via SKILL.md presence (confidence 3, the highest). YAML frontmatter was parsed for security-transparency scoring. Rich data (dimensions, disqualifiers, safety findings) was captured for a 31-skill subset; all 200 skills were scored for tier and composite.

Full algorithm: /methodology.

Companion report: State of MCP Security — April 2026.

Data sources

Every score in this report is reproducible from public data. The trust algorithm itself is an opinionated combination, but the inputs are not.

Score your own skill

Free trust report — paste any GitHub repo, npm package, or ClawHub URL.
