SkillRank


Editorial policy

SkillRank publishes rankings and guidance that builders rely on to make decisions. That obligation requires strict, predictable separation between editorial judgment, monetization, and sponsorship experiments.

How tools are selected

Editors prioritize repositories and vendors that builders repeatedly cite in production-adjacent workflows: frontier chat models, IDE agents, multimodal stacks, embeddings/RAG utilities, and curated skill catalogs tied to Claude Code or OpenClaw.

Spam forks, README-only placeholder repositories, and deceptively named lookalikes are rejected or merged into canonical entries.

How rankings are reviewed

Rankings combine nightly automated signal updates with quarterly editorial review passes, or sooner when safety-critical inaccuracies surface.

Maintainers may flag stale metadata; we reconcile entries after verifying docs or releases.

How corrections can be submitted

Email airankskill@gmail.com with (1) the entry slug or GitHub URL, (2) the proposed correction, and (3) an authoritative citation such as vendor docs or a tagged release.

Verified corrections ship at no cost; SkillRank never charges for accuracy fixes.

Organic rankings vs affiliate links vs sponsored placements

Organic rankings derive from editorial labels merged with public telemetry—never from whether SkillRank monetizes an outbound link.

Affiliate links may append tracking parameters to vendor CTAs; such links carry disclosed rel values, in line with FTC-informed norms, and never influence leaderboard ordering.
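As a minimal sketch of the disclosure pattern described above: the `ref` parameter name, the `rel="sponsored nofollow"` value, and the `affiliate_link` helper are illustrative assumptions, not SkillRank's actual implementation.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def affiliate_link(vendor_url: str, partner_id: str) -> str:
    """Append a hypothetical tracking parameter to a vendor CTA and
    emit an anchor tag with a disclosed rel value.

    The parameter name "ref" and the rel value are illustrative;
    the point is that tracking is appended to the URL while the
    link itself is explicitly marked as monetized.
    """
    parts = urlparse(vendor_url)
    query = dict(parse_qsl(parts.query))
    query["ref"] = partner_id  # hypothetical tracking parameter
    tagged = urlunparse(parts._replace(query=urlencode(query)))
    # rel="sponsored nofollow" discloses the monetized relationship
    return f'<a href="{tagged}" rel="sponsored nofollow">{vendor_url}</a>'
```

Note that the helper only rewrites the outbound URL and markup; nothing here touches ranking data, which is the separation the policy describes.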

Sponsored placements, if activated, receive explicit visual labeling and are displayed outside the organic ranking blocks.