AI Tools

React Doctor Scores Your Codebase Out of 100 and Lets AI Agents Fix It

A new open-source CLI scans React projects for anti-patterns, spits out a score, and plugs into AI coding agents.

Oliver Senti
Senior AI Editor
February 18, 2026 · 4 min read
Terminal window showing a React codebase scan with a numeric health score and diagnostic output

Aiden Bai, the developer behind Million.js and React Scan, shipped another tool on Monday. React Doctor is a CLI that scans your React codebase for anti-patterns across security, performance, correctness, and architecture, then hands you a score between 0 and 100. The GitHub repo went up alongside a tweet that pulled over 300,000 views in its first day.

The pitch is simple: npx -y react-doctor@latest . at your project root. You get a number. If the number is low, you feel bad. Then (and here's where it gets interesting) you can hand that diagnostic output directly to an AI coding agent and tell it to fix things.

What it actually checks

React Doctor bundles 47-plus rules covering the usual suspects: unnecessary useEffect calls, accessibility gaps, prop drilling where context or composition would work better, dead code, and assorted lint violations. Run it with --verbose and you get file paths and line numbers. Run it with --score and you get just the number, a mode clearly designed for CI pipelines and dashboards.
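A CI quality gate built on the score-only mode might look like the following sketch. Everything beyond the two documented flags is this sketch's own assumption: that --score prints only the integer to stdout, and that 70 is a sensible threshold.

```shell
#!/usr/bin/env sh
# Hypothetical CI gate -- assumes `--score` prints just the integer.
# The threshold of 70 is arbitrary, not something the README prescribes.
set -e

score=$(npx -y react-doctor@latest . --score)
echo "React Doctor score: $score"

if [ "$score" -lt 70 ]; then
  echo "Score below threshold; failing the build." >&2
  exit 1
fi
```

Because the script exits nonzero on a low score, dropping it into any CI system that treats a failed step as a failed build (GitHub Actions, GitLab CI, and so on) gives you a hard floor on the metric, for whatever the opaque metric turns out to be worth.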

None of these checks are individually new. ESLint with eslint-plugin-react-hooks and eslint-plugin-jsx-a11y has covered much of this ground for years. The value proposition is packaging: one command, one score, one report. Whether 47 rules is enough to produce a meaningful composite score is a fair question. The weighting isn't documented in the README, and I couldn't find it in the repo either.

The scoreboard

Bai ran React Doctor against a dozen popular open-source projects and published the results. tldraw and excalidraw tied at 84. PostHog got a 72. Supabase, 69. Sentry landed at 64, and cal.com came in at 63. Dub brought up the rear at 62.

These numbers are fun to look at but hard to interpret without knowing what's being weighted. Is a codebase with 1,000 warnings but zero errors "better" than one with 50 warnings and 10 errors? The scoring methodology matters, and it is not explained. tldraw and excalidraw are both drawing tools with relatively focused component trees, which probably helps their numbers. Comparing them to PostHog's sprawling analytics frontend feels like comparing a studio apartment to a warehouse.

The agent skill angle

Here's the part that separates React Doctor from yet-another-linter. It ships as an agent "skill" compatible with the skills.sh ecosystem that Vercel launched earlier this year. Run npx skills add millionco/react-doctor and you inject all 47 rules into whatever coding agent you're using: Cursor, Claude Code, GitHub Copilot, or others. The CLI itself prompts you to install the skill on first run.

The idea is that your AI assistant doesn't just generate code; it generates code that would pass React Doctor's checks. It's a feedback loop: scan, diagnose, feed the diagnostics to an agent, have the agent fix things, scan again. "Repeat until passing," as Bai's announcement put it.
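That loop can be sketched in shell-flavored pseudocode. Note the heavy caveats: fix_with_agent is a hypothetical placeholder (each agent has its own invocation), the passing bar of 90 is invented for illustration, and the sketch assumes --score emits only the integer and --verbose writes diagnostics to stdout.

```shell
#!/usr/bin/env sh
# Pseudocode sketch of the scan-fix-rescan loop. `fix_with_agent` is NOT a
# real command; it stands in for Cursor, Claude Code, or whichever agent
# you drive. The bar of 90 and the flag behaviors are assumptions.
while true; do
  score=$(npx -y react-doctor@latest . --score)
  echo "Current score: $score"
  [ "$score" -ge 90 ] && break
  # Capture the verbose diagnostics and hand them to the agent to patch.
  npx -y react-doctor@latest . --verbose > diagnostics.txt
  fix_with_agent diagnostics.txt
done
```

The design question the sketch surfaces is termination: nothing guarantees the agent's fixes monotonically raise the score, so a real pipeline would want an iteration cap.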

In practice, this assumes the 47 rules are correct enough to trust as a target. Blindly optimizing for a lint score has a long and inglorious history in software engineering. But the integration with the agent skills standard is genuinely clever, and it's timed well. The skills.sh ecosystem is still young, with companies like HashiCorp and Vercel itself publishing skill packs. React Doctor rides that wave.

And then there's the --fix flag

The --fix option doesn't auto-fix your code the way ESLint's --fix does. It opens Ami, which is Bai's company's commercial product. Million (YC W24) is building Ami as what they call a "post-IDE," a tool where you comment on a rendered page and the underlying code changes. React Doctor's fix path funnels directly into it.

So the open-source tool is also a lead-generation mechanism for a paid product. That's not unusual in developer tooling, and Bai has been transparent about it (the flag is right there in the help output). But if you're evaluating React Doctor as a standalone linter, know that the auto-fix story requires buying into a separate product. The scan-and-diagnose part works independently.

Who is this for?

Teams running React at scale who want a quick health check before a sprint. Developers onboarding to an unfamiliar codebase who want a map of where the problems cluster. And, increasingly, AI agents that need structured feedback about code quality to iterate on their own output.

The repo is MIT-licensed and accepting contributions. It had about 150 stars within a day of launch, which is modest by Bai's standards (React Scan has over 20,000). Whether React Doctor grows into something teams rely on probably depends on how transparent and configurable the scoring becomes. Right now, the number is opaque. And an opaque number is just a vanity metric with better packaging.

Tags: react, developer-tools, open-source, linting, ai-coding-agents, million-js, code-quality
Oliver Senti

Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.


