After every agent turn
Your CLAUDE.md compiled into grep and AST checks. Sub-second, zero cost, zero false positives. Exit code 2 tells the agent to fix it before moving on.
Continuous guardrails that steer your agents back on track before drift becomes debt.
Caliper is in private beta. Join the waitlist to get notified when it's ready.
You need a reviewer that checks code the moment your agent finishes writing it — not after you've already opened a PR. A review that runs locally, finishes fast, and knows your project's conventions. One that catches drift before it compounds, not after.
Most AI code review tools can't do this. They're slightly better linters that show up at PR time, scan for generic issues, and leave noisy comments developers learn to ignore. They don't know your architecture. They don't adapt to your rules. And by the time they run, the damage is already done.
Caliper works differently. It compiles your conventions into deterministic checks that run after every agent turn, reviews your changes locally before you commit, and runs a full multi-phase review before you merge.
Your conventions already exist as prose in CLAUDE.md. Caliper reads them and compiles every mechanically-checkable rule into a deterministic check — grep patterns, AST analysis, file scans. You don't learn a rule format. You don't configure a dashboard. You write English, Caliper enforces it.
60–70% of convention violations are mechanically checkable. Caliper handles those with zero-cost pattern matching — no model calls, no false positives, no latency. AI budget is spent on what actually needs intelligence: logic bugs, security issues, design problems.
A Claude Code stop hook runs compiled checks after every agent turn. Violations are caught in under a second, and the agent fixes them before continuing. Bad patterns don't compound. This is the layer no other tool operates at.
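A minimal sketch of what one compiled check behind such a hook could look like — illustrative only, not Caliper's actual implementation; the rule, function name, and file path are assumptions:

```shell
# Illustrative stop-hook check (not Caliper's generated code): flag
# execSync calls built from template strings, and signal the violation
# with exit code 2 so the agent fixes it before moving on.
check_no_template_execsync() {
  # grep for the opening of a template-string argument: execSync(`
  if grep -n 'execSync(`' "$1" >/dev/null 2>&1; then
    echo "convention violation: execSync with template string in $1" >&2
    return 2  # exit code 2 = block and report back to the agent
  fi
  return 0
}
```

A hook wrapper would run checks like this over the files the agent just touched and propagate the worst return code as the hook's exit status.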
Convention enforcement is free. AI review costs $0.05–$2.00 per review, printed transparently after each run. No seat licenses. No opaque token consumption. No surprise bills when agents generate 50x more code than humans.
| Layer | When | How | Cost |
|---|---|---|---|
| Convention checks | After every agent turn | Deterministic grep + AST checks via Claude Code stop hook | $0 |
| Local AI review | Before every commit | AI review of staged changes with caliper check | ~$0.05–$0.30 |
| PR review | Before every merge | Multi-phase pipeline posted as inline GitHub comments | ~$0.20–$2.00 |
The tightest loop catches the most issues at the lowest cost. Each layer narrows what the next layer needs to evaluate.
caliper refresh reads your CLAUDE.md and extracts every enforceable rule. This is a one-time step — the compiled checks run free from that point on.
| Your CLAUDE.md says | Caliper compiles to |
|---|---|
| "No classes — use functions and plain objects" | grep check — flags class declarations |
| "Keep functions under ~30 lines" | AST check — measures each function's length |
| "Never use execSync with template strings" | grep check — flags execSync calls with template-string arguments |
| "Every migration needs a test file" | file-exists check — ensures a matching .test.ts exists |
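As a sketch of the last row, a file-exists check can be a few lines of shell — illustrative only, with an assumed migrations/ layout; not Caliper's actual output:

```shell
# Illustrative file-exists check (not Caliper's generated code): every
# migrations/*.ts file must have a sibling .test.ts next to it.
check_migration_tests() {
  dir="$1"; rc=0
  for f in "$dir"/*.ts; do
    [ -e "$f" ] || continue             # empty directory: glob didn't match
    case "$f" in *.test.ts) continue ;; esac
    if [ ! -f "${f%.ts}.test.ts" ]; then
      echo "missing test file: ${f%.ts}.test.ts" >&2
      rc=1
    fi
  done
  return $rc
}
```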
Rules that require judgment — "use meaningful error messages," "prefer composition over inheritance" — become conventions for the AI review layers. Nothing is dropped.
npm install --save-dev @caliperai/caliper
npx caliper init --yes # scaffold config (auto-detects framework)
npx caliper refresh # compile CLAUDE.md → deterministic checks
npx caliper init --agent # install the stop hook

Three commands. Convention enforcement is live. Full setup guide →