I gave an AI code reviewer a PaymentProcessor class: seven services injected, payment + fraud + inventory + notifications all in one 60-line method.
Here's what it flagged:
🔴 Change Propagation: Seven-service constructor signals a God Class in formation
Source: Fowler, Refactoring (Divergent Change); Martin, Clean Architecture (SRP)
Consequence: This class will change for at least four independent reasons. Each change is a merge conflict waiting to happen.
Remedy: Introduce `FraudCheckService`, `InventoryDeductionService`, and `PaymentNotifier`. `PaymentProcessor` then injects 3 services, not 7.
Not "this is bad." Here's which book explains why, what breaks if you ignore it, and how to fix it.
That's brooks-lint, an open-source plugin for Claude Code, Codex CLI, and Gemini CLI. github.com/hyhmrright/brooks-lint · MIT · v1.0.0 shipped this week.
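To make that remedy concrete, here's a minimal sketch of the refactored shape. The three service names come from the finding above; everything inside them is my own placeholder logic, not brooks-lint output or the actual reviewed code:

```python
# Hypothetical sketch: the seven injected services collapse into three
# cohesive collaborators, and PaymentProcessor only orchestrates.
# All method bodies are illustrative placeholders.

class FraudCheckService:
    """Owns fraud approval (previously spread across several services)."""
    def approve(self, order: dict) -> bool:
        return order.get("amount", 0) < 10_000  # placeholder rule

class InventoryDeductionService:
    """Owns stock reservation and deduction."""
    def deduct(self, order: dict) -> None:
        order["reserved"] = True

class PaymentNotifier:
    """Owns all post-payment notifications (email, SMS, webhooks)."""
    def notify(self, order: dict) -> None:
        order["notified"] = True

class PaymentProcessor:
    """Now injects 3 collaborators instead of 7."""
    def __init__(self, fraud: FraudCheckService,
                 inventory: InventoryDeductionService,
                 notifier: PaymentNotifier):
        self.fraud = fraud
        self.inventory = inventory
        self.notifier = notifier

    def process(self, order: dict) -> bool:
        if not self.fraud.approve(order):
            return False
        self.inventory.deduct(order)
        self.notifier.notify(order)
        return True
```

The point isn't fewer lines; it's that fraud rules, inventory logic, and notification channels can now each change without touching `PaymentProcessor`.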
Why I built it
ESLint flags unused variables. Complexity checkers count branches. SonarQube tracks duplication percentages. All useful. None of them answer the question a senior engineer actually asks:
"What architectural principle is this violating, and what happens if we ignore it?"
I've spent years reading the classics: Fowler, Martin, Evans, Brooks, Ousterhout, McConnell. Each book has a chapter that makes you think "I've seen this exact failure before." But the insight stays locked in the book.
brooks-lint is an attempt to make that insight executable. Every finding follows the same shape:
Symptom → Source → Consequence → Remedy
Symptom is what the linter sees. Source is the book + chapter it comes from. Consequence is what breaks in six months if you ignore it. Remedy is a concrete refactor.
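That four-part shape is simple enough to sketch as a data structure. Field names here are my own; brooks-lint's internal representation may differ:

```python
# Hypothetical sketch of the four-part finding shape described above.
from dataclasses import dataclass

@dataclass
class Finding:
    symptom: str      # what the linter sees
    source: str       # book + chapter the rule comes from
    consequence: str  # what breaks in six months if ignored
    remedy: str       # concrete refactor

    def render(self) -> str:
        return (f"Symptom: {self.symptom}\n"
                f"Source: {self.source}\n"
                f"Consequence: {self.consequence}\n"
                f"Remedy: {self.remedy}")

finding = Finding(
    symptom="Seven-service constructor",
    source="Fowler, Refactoring: Divergent Change",
    consequence="Class changes for four independent reasons",
    remedy="Split into three cohesive services",
)
```

Forcing every rule through this shape is what keeps findings from degrading into bare "this is bad" flags.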
The six decay risks
After synthesizing twelve books, six patterns kept appearing as root causes of software decay:
| Code | Risk | Core Insight |
|---|---|---|
| R1 | Cognitive Overload | Mental load exceeds working memory → mistakes and avoidance |
| R2 | Change Propagation | One change forces unrelated changes elsewhere |
| R3 | Knowledge Duplication | Same fact in two places → they diverge |
| R4 | Responsibility Rot | One module does too many things |
| R5 | Dependency Disorder | Modules depend on modules that depend on modules... |
| R6 | Domain Model Distortion | Code doesn't speak the business's language |
There are also six test-space variants (T1–T6) covering test brittleness, mock abuse, coverage theater, and more.
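To ground one of these, here's a hypothetical R3 (Knowledge Duplication) example of my own, the kind of divergence that risk targets:

```python
# Hypothetical R3 example: the same business fact (a free-shipping
# threshold) encoded in two places. When one copy changes, the other
# silently keeps the old rule.

def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total >= 50 else 5.99  # copy 1 of the threshold

def checkout_banner(order_total: float) -> str:
    if order_total >= 50:                      # copy 2: will diverge
        return "You qualify for free shipping!"
    return f"Spend ${50 - order_total:.2f} more for free shipping"

# Remedy: one source of truth that both functions read.
FREE_SHIPPING_THRESHOLD = 50.0
```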
The twelve books
| Book | Author | Risks |
|---|---|---|
| The Mythical Man-Month | Frederick Brooks | R2, R4, R5 |
| Code Complete | Steve McConnell | R1, R4 |
| Refactoring | Martin Fowler | R1, R2, R3, R4, R6 |
| Clean Architecture | Robert C. Martin | R2, R5 |
| The Pragmatic Programmer | Hunt & Thomas | R2, R3, R4, R5, T2, T3 |
| Domain-Driven Design | Eric Evans | R1, R3, R6 |
| A Philosophy of Software Design | Ousterhout | R1, R4 |
| Software Engineering at Google | Winters et al. | R2, R5 |
| xUnit Test Patterns | Meszaros | T1, T2, T4 |
| The Art of Unit Testing | Osherove | T1, T2, T3 |
| Working Effectively with Legacy Code | Feathers | T3, T4, T5 |
| Unit Testing: Principles, Practices, Patterns | Khorikov | T1, T2, T6 |
The books agree more than they disagree. Fowler's Divergent Change smell, Martin's Single Responsibility Principle, and Evans's Bounded Context are all describing the same underlying failure from different angles.
Five review modes
- `/brooks-review`: PR code review (diff-focused)
- `/brooks-audit`: Architecture audit
- `/brooks-debt`: Tech debt assessment
- `/brooks-test`: Test quality review
- `/brooks-health`: Codebase health dashboard with score
Works on Claude Code, Codex CLI, and Gemini CLI.
Install
```shell
# Claude Code (one command)
/plugin install brooks-lint@brooks-lint-marketplace

# Codex CLI
codex plugin install hyhmrright/brooks-lint

# Gemini CLI
gemini extension install hyhmrright/brooks-lint
```
Source: github.com/hyhmrright/brooks-lint
The hard part wasn't the principles; it was the false positives
A 60-line function in a data-migration script isn't the same as a 60-line function in a payment handler. The tool needed to know when not to flag something, which meant reading the exception clauses in each book more carefully than I'd expected.
The benchmark suite now has 49 scenarios, including explicit false-positive cases that must not be flagged. That's probably the most useful artifact I built β it forces the skill to be calibrated, not just pattern-matching.
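A calibration scenario of that kind might look roughly like this. The format below is my own assumption for illustration; the real benchmark format in brooks-lint may differ:

```python
# Hypothetical sketch of a must-NOT-flag benchmark scenario: a long
# function that is acceptable in its context (a one-shot migration).
scenario = {
    "id": "fp-long-function-migration",
    "context": "one-shot data-migration script",
    "expect_findings": [],  # flagging anything here is a false positive
}

def evaluate(findings: list, scenario: dict) -> bool:
    """A scenario passes when the linter's findings match expectations."""
    return findings == scenario["expect_findings"]
```

Encoding the exceptions as executable expectations is what keeps the rules from rewarding raw pattern-matching.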
I'd love your help
v1.0 is out, but a linter grounded in books is only as good as the communities that stress-test it. Three specific ways you can help:
- Try it on a real repo and open an issue if a finding feels wrong; false positives are the highest-priority bug class.
- Propose a 13th book. If there's a classic that covers a failure mode R1–R6 misses, tell me which chapter and I'll prototype the rule.
- Share a code smell you see constantly in the comments. I'll run brooks-lint on a representative example and post the raw output as a reply.
Star if it resonates: ⭐ github.com/hyhmrright/brooks-lint
Which of the six risks (R1–R6) do you hit most often? Drop a line below; happy to dig into specific examples.