Agent Friendly Skill — score the repo, pick the model

A portable agent skill that scores your current repo's agent-friendliness locally and recommends which model to use for it. Vendored scorer, no service dependency — works offline, keeps working if Agent Friendly Code goes down.

Install

One command, any supported agent — the vercel-labs/skills CLI autodetects which agents you have configured locally and writes SKILL.md plus the bundled scorer to each one's skill directory.

npx skills add hsnice16/agent-friendly-skill#v0
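
To pin a release instead of floating on the latest 0.x tag (ref syntax covered in the FAQ below):

npx skills add hsnice16/agent-friendly-skill#v0.1.0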

After install, run /agent-friendly (or however your agent invokes skills) inside any local repo. The skill resolves the repo root, runs the bundled scorer, and prints the per-agent scores plus a model recommendation. The scorer profiles 8 agents (Claude Code, Cursor, Devin, GPT-5 Codex, Gemini CLI, Aider, OpenHands, Pi) and always returns scores for all 8. The best-fit pick is score-driven, not tied to which agent invoked the skill, so the output is identical whether the caller is Claude Code, Cline, Copilot, Continue, or any other agent the vercel-labs/skills CLI installs into.
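
You can also run the vendored scorer by hand. With Claude Code's default skill path (the same one the hook example below uses), that is:

node .claude/skills/agent-friendly/dist/index.js .

Append --summary to get the one-line form used in the session-start hooks further down.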

How it works

  1. The agent tells you upfront that it scores the current working directory — so make sure you're at your project root before invoking. The CLI also emits a warning when the path has no project markers (package.json / README.md / AGENTS.md / .git), so a wrong path can't silently produce a low score (see the marker-check sketch after this list).
  2. The agent runs node <skill-dir>/dist/index.js . — a single ncc-bundled file with no runtime deps and no network.
  3. The scorer evaluates the same sixteen signals this dashboard uses (AGENTS.md, CI, tests, README, linter, dev env, license, contributing, pre-commit, deps manifest, type config, codebase size, plus four agent-specific instruction files) and returns per-agent scores.
  4. The agent picks the highest-scoring entry as the best fit (score-driven, regardless of which agent invokes the skill) and recommends a model class using the table below, leaving the actual model switch up to the user.
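
A minimal sketch of the step-1 marker guard (illustrative only, not the vendored implementation; the marker list is the one named in step 1):

// Illustrative sketch, not the skill's actual code: warn when the
// scored directory carries none of the documented project markers.
import { existsSync } from "node:fs";
import { join } from "node:path";

const MARKERS = ["package.json", "README.md", "AGENTS.md", ".git"];

function warnIfUnlikelyRoot(dir: string): void {
  if (!MARKERS.some((marker) => existsSync(join(dir, marker)))) {
    console.warn(`warning: no project markers found in ${dir}; is this the repo root?`);
  }
}

warnIfUnlikelyRoot(process.argv[2] ?? ".");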

Score → model mapping

Provider-neutral. The skill recommends a model class — the user picks the actual ID for their agent and runs /model (or equivalent) themselves.

Band | Score   | Recommendation
High | ≥ 80    | Frontier — Opus / GPT-5 / Gemini 2.5 Pro. The repo is well-prepped; the model can leverage it.
Mid  | 60 – 79 | Standard — Sonnet / GPT-5 Codex / Gemini 2.5 Flash. Solid baseline; frontier is optional.
Low  | < 60    | Small / fast — Haiku / GPT-4o-mini / Gemini 2.5 Flash-Lite. Repo lacks the scaffolding to justify a frontier run.
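
As a sketch, the banding is just two thresholds (the names here are illustrative; the real mapping lives in SKILL.md):

// Illustrative sketch of the score → model-class mapping above.
type Band = "high" | "mid" | "low";

function bandFor(score: number): Band {
  if (score >= 80) return "high";
  if (score >= 60) return "mid";
  return "low";
}

const MODEL_CLASS: Record<Band, string> = {
  high: "Frontier (Opus / GPT-5 / Gemini 2.5 Pro)",
  mid: "Standard (Sonnet / GPT-5 Codex / Gemini 2.5 Flash)",
  low: "Small / fast (Haiku / GPT-4o-mini / Gemini 2.5 Flash-Lite)",
};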

Optional — score every session via SessionStart hook

For agents that support session-start hooks, you can have the skill print a one-line summary at the top of every session. Drop one of these into the matching settings file:

Claude Code · .claude/settings.json

{
  "hooks": {
    "SessionStart": [
      {
        "matcher": "startup",
        "hooks": [
          {
            "type": "command",
            "command": "node .claude/skills/agent-friendly/dist/index.js . --summary"
          }
        ]
      }
    ]
  }
}

Codex CLI · .codex/hooks.json

{
  "hooks": {
    "SessionStart": [
      {
        "command": "node .agents/skills/agent-friendly/dist/index.js . --summary"
      }
    ]
  }
}

Cursor, Cline, and Copilot don't expose a session-start hook today — paste the same node ... --summary command into .cursorrules / .clinerules as a static instruction, or invoke /agent-friendly manually when you want a fresh score.
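
For example, a .cursorrules entry might read like this (the skill path below is an assumption; point it at whatever directory the skills CLI wrote to for your agent):

At the start of each session, run this command and report its one-line summary
(adjust the path to your install):
node .agents/skills/agent-friendly/dist/index.js . --summary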

Self-contained by design

The scorer and weights are bundled into dist/index.js via @vercel/ncc and committed to the skill repo. Every run is a local file-system pass — no network, no dashboard call, no token. If Agent Friendly Code disappears, the skill keeps scoring. The vendored scoring code is mirrored from Agent Friendly Code's lib/scoring/ (this dashboard's source) and stays in sync via the mirror discipline documented in AGENTS.md.
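
The bundle step itself is a single ncc invocation, something like the following (the src/index.js entry path is an assumption about the skill repo's layout):

npx @vercel/ncc build src/index.js -o dist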

FAQ

  • Does the skill talk to the dashboard?

    No. The scorer is vendored and bundled into the skill's dist/ via @vercel/ncc. After install, every score runs locally on your repo — no HTTP request. If this dashboard goes offline tomorrow, the skill keeps working unchanged.

  • Which agents does it score against?

    Eight: Claude Code, Cursor, Devin, GPT-5 Codex, Gemini CLI, Aider, OpenHands, Pi — the same set this dashboard profiles. Scoring is the same regardless of which agent invokes the skill; you always get all 8 per-agent scores. The skill installs into any vercel-labs/skills-compatible agent (Cline, Copilot, Continue, Roo Code, Windsurf, Amp, etc.) via the same one-line install.

  • How does the model recommendation work?

    After scoring, the skill maps the overall score to a band (high / mid / low) and suggests a model class. High-scoring repos are agent-prepped enough to leverage a frontier model (Opus / GPT-5 / Gemini 2.5 Pro). Low-scoring repos can't take advantage of the extra reasoning, so a smaller / faster model is the better trade-off. The mapping is provider-neutral and lives in SKILL.md.

  • Where is the source?

    github.com/hsnice16/agent-friendly-skill. MIT-licensed, semver-tagged. Pin a ref using the vercel-labs/skills CLI's `#<ref>` fragment syntax — `npx skills add hsnice16/agent-friendly-skill#v0` floats on the latest 0.x.y; `#v0.1.0` pins precisely. (The CLI uses `#` for refs and reserves `@` for skill-name filters.) The scoring code is vendored from the agent-friendly-code dashboard repo's lib/scoring/ and stays in sync per AGENTS.md's mirror discipline.

Source

github.com/hsnice16/agent-friendly-skill — MIT-licensed, semver-tagged. Sibling repo to agent-friendly-action — both vendor the same scorer.