You found the bug. heavyComputation at /app/server.js:42 was consuming 62% of CPU. You refactored it to use a worker thread. You deployed. The team celebrates.
Two hours later, latency is back. Not from heavyComputation; that's fixed. But the refactoring introduced a new bottleneck in the message serialization between the main thread and the worker. You didn't notice because you were looking at the old function, not the new one.
This is the verification gap. You profile, you fix, you deploy, but you never systematically compare "before" and "after" to confirm the fix worked and nothing else regressed.
With node-loop-detective v2.2.0, you can now diff two profiling reports:
# Before the fix
loop-detective 12345 --json > before.json
# After the fix
loop-detective 12345 --compare before.json
What the Comparison Shows
The diff report has five sections:
Pattern Changes
The most important section. Did the blocking pattern go away?
✓ Resolved issues:
- cpu-hog (high)
✗ New issues:
+ json-heavy: JSON operations took 1200ms (15% of profile)
Three categories:
- Resolved: was in the baseline, not in the current report. Your fix worked.
- New: wasn't in the baseline, appeared in the current report. Your fix introduced a new problem.
- Persistent: present in both. The issue wasn't addressed (or the fix didn't help).
Function Changes
Which functions got faster? Which got slower? Which are new?
Function Changes
────────────────────────────────────────────────────────────
▲ serializeMessage      0ms → 450ms (+450ms)  NEW
  /app/worker-bridge.js:23
▼ heavyComputation   6245ms → 120ms (-6125ms)
  /app/server.js:42
Functions are classified as:
- Regressed (▲ red): self time increased by >1ms
- New (+ red): appeared in the current report but not in the baseline
- Improved (▼ green): self time decreased by >1ms
- Removed: was in the baseline, gone from the current report
- Unchanged: delta within ±1ms
Sorted with regressions first, since those are the things you need to look at.
Lag Comparison
Event Loop Lag
────────────────────────────────────────────────────────────
Events:  12 → 1  (-11)
Max:     312ms → 45ms
Avg:     156ms → 45ms
Count, max, and average compared with deltas. Green if improved, red if regressed.
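The lag diff is just a delta over three aggregates. A minimal sketch of how such a comparison could be computed (this is an illustrative re-implementation, not the library's actual source; the `duration` field name is an assumption):

```javascript
// Illustrative: reduce an array of lag events ({ duration } in ms)
// to count/max/avg, then diff two such summaries.
function lagStats(events) {
  const durations = events.map((e) => e.duration);
  const count = durations.length;
  return {
    count,
    max: count ? Math.max(...durations) : 0,
    avg: count ? durations.reduce((a, b) => a + b, 0) / count : 0,
  };
}

function compareLag(baselineEvents, currentEvents) {
  const before = lagStats(baselineEvents);
  const after = lagStats(currentEvents);
  return {
    before,
    after,
    delta: {
      count: after.count - before.count,
      max: after.max - before.max,
      avg: after.avg - before.avg,
    },
  };
}
```

A negative `delta` means improvement: fewer events, lower max, lower average.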
I/O Comparison
Slow I/O
────────────────────────────────────────────────────────────
Slow ops:  5 → 2  (-3)
Max dur:   2340ms → 800ms
Verdict
A one-line summary:
✓ Overall: IMPROVED (3 improvements)
Four possible verdicts:
- IMPROVED: regressions = 0, improvements > 0
- REGRESSED: regressions > 0, improvements = 0
- MIXED: both regressions and improvements
- NO SIGNIFICANT CHANGE: nothing moved
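The four verdicts fall out of two counters. A hypothetical sketch of that decision (the function name and how the counters are gathered are assumptions, not the library's source):

```javascript
// Illustrative: map (regressions, improvements) counts to the four
// verdict strings listed above.
function verdict(regressions, improvements) {
  if (regressions > 0 && improvements > 0) return 'MIXED';
  if (regressions > 0) return 'REGRESSED';
  if (improvements > 0) return 'IMPROVED';
  return 'NO SIGNIFICANT CHANGE';
}
```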
The Workflow
Basic: Before and After
# 1. Capture baseline
loop-detective 12345 --json > baseline.json
# 2. Deploy your fix
# 3. Compare
loop-detective 12345 --compare baseline.json
With Full Output
The comparison works alongside all other flags:
# Compare + HTML report + CPU profile
loop-detective 12345 --compare baseline.json --html after-fix.html --save-profile after.cpuprofile
You get the normal terminal report, the comparison report, the HTML report, and the CPU profile, all from one command.
In CI
# In your test pipeline
loop-detective --port 9229 --json > current.json
loop-detective --port 9229 --compare baseline.json --json > diff.json
# Check the verdict
node -e "const d=require('./diff.json'); process.exit(d.comparison.functions.filter(f=>f.status==='regressed').length > 0 ? 1 : 0)"
if [ $? -ne 0 ]; then
echo "Performance regression detected!"
exit 1
fi
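If the shell one-liner gets unwieldy, the same gate can live in a small Node helper. This sketch assumes the diff shape used above (`comparison.functions[].status`); adjust the paths to match your actual JSON output:

```javascript
// check-regression.js (hypothetical helper, not part of the package):
// pull the regressed functions out of a saved diff report.
function findRegressions(diff) {
  return diff.comparison.functions.filter((f) => f.status === 'regressed');
}

// Usage in CI:
//   const diff = JSON.parse(require('fs').readFileSync('diff.json', 'utf8'));
//   const regressed = findRegressions(diff);
//   if (regressed.length > 0) {
//     console.error('Regressed:', regressed.map((f) => f.functionName).join(', '));
//     process.exit(1);
//   }
```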
How the Comparison Works
The comparator (src/comparator.js) is a pure function that takes two report objects and produces a structured diff:
function compareReports(baseline, current) {
return {
summary: compareSummaries(baseline.summary, current.summary),
functions: compareFunctions(baseline.heavyFunctions, current.heavyFunctions),
patterns: comparePatterns(baseline.blockingPatterns, current.blockingPatterns),
lagEvents: compareLag(baseline.lagEvents, current.lagEvents),
slowIO: compareIO(baseline.slowIOEvents, current.slowIOEvents),
};
}
Function Matching
Functions are matched by a composite key: functionName + url + lineNumber. This handles the common case where the same function exists in both reports at the same location.
const key = (f) => f.functionName + '|' + f.url + ':' + f.lineNumber;
If a function moved to a different line (e.g., you added code above it), it shows up as "removed" at the old line and "new" at the new line. This is a known limitation: line-level matching is imperfect when code changes significantly. But for the typical "fix one function, verify it improved" workflow, it works well.
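The matching step amounts to indexing both reports by that composite key and walking the union. A sketch under those assumptions (illustrative, not the library's source):

```javascript
// Illustrative: match functions across two reports by name|url:line,
// splitting them into matched pairs, removed, and added.
const key = (f) => f.functionName + '|' + f.url + ':' + f.lineNumber;

function matchFunctions(baseline, current) {
  const before = new Map(baseline.map((f) => [key(f), f]));
  const after = new Map(current.map((f) => [key(f), f]));
  const matched = [];
  const removed = [];
  const added = [];
  for (const [k, f] of before) {
    if (after.has(k)) matched.push({ baseline: f, current: after.get(k) });
    else removed.push(f); // in baseline only
  }
  for (const [k, f] of after) {
    if (!before.has(k)) added.push(f); // in current only
  }
  return { matched, removed, added };
}
```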
Threshold for Change
A function is "improved" or "regressed" only if the self time changed by more than 1ms. This avoids noise from statistical sampling variation. A function that went from 5.2ms to 4.8ms is "unchanged"; the 0.4ms difference is within normal sampling variance.
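That rule fits in a few lines. A sketch of the classification (illustrative; the constant and function names are assumptions):

```javascript
// Illustrative: classify a matched function by its self-time delta,
// treating anything within ±1ms as sampling noise.
const THRESHOLD_MS = 1;

function classify(baselineSelfTime, currentSelfTime) {
  const delta = currentSelfTime - baselineSelfTime;
  if (Math.abs(delta) <= THRESHOLD_MS) return 'unchanged';
  return delta > 0 ? 'regressed' : 'improved';
}
```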
Pattern Comparison
Pattern comparison is set-based. If cpu-hog was in the baseline but not in the current report, it's "resolved." If json-heavy is in the current report but not the baseline, it's "new." The comparison doesn't try to compare severity levels or thresholds within the same pattern type; it's a binary present/absent check.
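A present/absent check over pattern types maps directly onto two Sets. A sketch of that diff (illustrative, not the library's source; the `type` field name follows the pattern names shown in the report):

```javascript
// Illustrative: set-based diff of blocking patterns by type only,
// ignoring severity, yielding resolved / new / persistent.
function comparePatterns(baseline, current) {
  const before = new Set(baseline.map((p) => p.type));
  const after = new Set(current.map((p) => p.type));
  return {
    resolved: baseline.filter((p) => !after.has(p.type)),   // gone now
    new: current.filter((p) => !before.has(p.type)),        // appeared
    persistent: current.filter((p) => before.has(p.type)),  // in both
  };
}
```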
Programmatic API
The comparator is exported for use in custom tooling:
const { compareReports, formatComparison } = require('node-loop-detective');
// Load two reports
const baseline = JSON.parse(fs.readFileSync('before.json'));
const current = JSON.parse(fs.readFileSync('after.json'));
// Compare
const diff = compareReports(baseline, current);
// Structured access
console.log('Resolved:', diff.patterns.resolved.map(p => p.type));
console.log('Regressed functions:', diff.functions.filter(f => f.status === 'regressed'));
console.log('Lag delta:', diff.lagEvents.delta);
// Formatted terminal output
console.log(formatComparison(diff));
// Or use in an API
app.get('/perf/compare', (req, res) => {
res.json(diff);
});
Why This Matters
Performance work without measurement is guesswork. You think the fix helped, but you don't know by how much. You think nothing else regressed, but you didn't check every function.
The comparison report makes verification systematic:
- Did the target function improve? Check the function changes.
- Did the blocking pattern resolve? Check the pattern changes.
- Did anything else regress? Check for new issues and regressed functions.
- Did lag improve? Check the lag delta.
- Did I/O improve? Check the I/O delta.
Five questions, one command, one report.
Try It
npm install -g node-loop-detective@2.2.0
# Save baseline
loop-detective <pid> --json > before.json
# After changes
loop-detective <pid> --compare before.json
Source: github.com/iwtxokhtd83/node-loop-detective
The hardest part of performance work isn't finding the problem. It's proving the fix worked.