Compair CLI is a multirepo context manager designed to help developers track changes across related repositories and catch cross-repo drift before it leads to downstream issues. It enables teams to review changes in the context of their entire product surface, including backend, frontend, SDKs, CLI tools, desktop apps, and documentation.
**Key Features**

- Cross-repo comparison: identifies conflicts, hidden overlaps, and missing updates across related repositories.
- Shared context management: maintains a persistent, team-wide review context to ensure consistency and reduce misalignment.
- Issue surfacing: flags high-confidence findings such as API drift or outdated references before they become user-facing problems.
- Integration with workflows: supports both local evaluations and cloud-based setups, offering flexibility for different team needs.
- Easy installation: available via winget for Windows users.

**Audience & Benefit**

Ideal for developers and teams managing multiple repositories who need to keep their product components aligned. Compair CLI helps prevent broken workflows, reduces technical debt, and improves collaboration by providing a shared understanding of cross-repo changes.
# Compair CLI
Compair CLI helps developers catch cross-repo drift from the terminal.
Track your backend, frontend, SDK, CLI, desktop app, and docs in one shared review context. Compair compares changes across related repos and surfaces conflicts, hidden overlap, and missing updates before they turn into broken workflows or user-facing issues.
Compair is a context manager for teams.
Instead of asking one model call to hold your whole product in working memory, Compair keeps a shared, persistent cross-repo context for the team, narrows attention to the changed surface, and brings in the few related snippets that actually matter.
Why it's different: most AI review tools look at one pull request in one repo. Compair reviews a repo in the context of the other repos it depends on.
- Catch backend/frontend/SDK/docs drift earlier
- Review changes in the context of the rest of your product
- Turn high-confidence findings into CI checks when you're ready
## Install

Choose the path that fits your platform, then run `compair demo --offline` for the fastest first look.
| Platform | Recommended install | Notes |
| --- | --- | --- |
| macOS | `brew tap RocketResearch-Inc/tap`, then `brew install --cask compair` | |
Release archives are published for macOS, Linux, and Windows on the GitHub Releases page. If you want deeper install details or command reference material, see docs/user_guide.md.
Positioning note: Compair Cloud is the strongest out-of-the-box experience today. It gives you the best review quality without bringing your own model key, plus hosted auth, shared accounts, email delivery, and the most polished team workflow. Local Core remains the right fit for self-hosting, evaluation, and offline/local setups, with two meaningful bring-your-own-key paths: keep embeddings local and use OpenAI for generation as the lower-outsourced-cost default, or use OpenAI for both generation and embeddings when you want the strongest current self-hosted quality.
## Care to Compair? Try It In 5 Minutes
The fastest way to see what Compair does:
```shell
# 1) Install Compair CLI
# 2) Run the offline sample
compair demo --offline
```
**What the offline demo does:**

- creates a disposable workspace
- seeds two small related repos with an intentional API/client mismatch
- renders a prebaked Compair report
- requires no Docker, OpenAI API key, or Cloud account
Start here if: you want the fastest possible first pass before trying Compair on your own repos.
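For intuition, the seeded workspace resembles the toy sketch below. The repo names and the mismatched field here are illustrative assumptions, not the demo's actual sample content:

```shell
# Toy reconstruction of what the offline demo seeds: a disposable workspace
# holding two small related repos with an intentional API/client mismatch.
workspace=$(mktemp -d)
mkdir -p "$workspace/backend-api" "$workspace/web-app"
# The backend response now keys review data under "items"
printf '{"items": [{"id": 1}]}\n' > "$workspace/backend-api/response.json"
# The client still references the old "reviews" field
printf 'data["reviews"]\n' > "$workspace/web-app/client.py"
ls "$workspace"
```

A cross-repo pass over this pair would flag that `reviews` no longer exists upstream; the real demo renders that finding as a prebaked report.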
When you want a real review, run:

```shell
compair demo --mode local
# or
compair demo --mode cloud
```
## Choose Your Start
### Demo

Use this if you want to see Compair end-to-end in a disposable workspace.

```shell
compair demo --offline
```

Use `compair demo --mode local` or `compair demo --mode cloud` when you want fresh generated feedback instead of the prebaked sample.
### Local / self-hosted

Use this if you want to evaluate Compair locally with managed Core.

```shell
compair profile use local
compair core up
compair login
```
If you stay fully local with the bundled no-key providers, expect functional but simpler summaries than Cloud. For the best lower-outsourced-cost self-hosted start, keep embeddings local and use your own OpenAI key for generation:

```shell
export OPENAI_API_KEY="sk-..."
compair core config set --generation-provider openai --embedding-provider local --openai-model gpt-5.4-mini --openai-api-key "$OPENAI_API_KEY"
compair core restart
```

If you do not want the key saved in `~/.compair/core_runtime.yaml`, set `COMPAIR_OPENAI_API_KEY` or `OPENAI_API_KEY` in your shell and omit `--openai-api-key`.
### Cloud

Cloud is the best default when you want the strongest first impression, the least setup friction, and the best shared team workflow. Skip `compair signup` if you already have an account.
- New here? Start with `compair demo --offline`.
- Evaluating offline/local? Start with Local.
- Working with teammates right away? Start with Cloud.
## Help Test Compair
Compair CLI is ready for early developer testing.
The fastest path:

```shell
compair demo --offline
```

Then, if you want a real review:

```shell
compair demo --mode local
# or
compair demo --mode cloud
```
Feedback is especially useful from developers maintaining backend + frontend repos, API + SDK repos, CLI + cloud service repos, docs + implementation repos, or multi-repo internal tools.
Please open an issue with what worked, what broke, and where the output was confusing. Include your OS, install path, and whether you tested offline, local Core, or Cloud when you can.
## Example
You change an API field name in a backend repo.
The web app and CLI still reference the old name.
Compair reviews the repos together and flags the mismatch before the change reaches users or turns into a broken workflow.
**Potential Conflict**

- backend-api: review response now uses `items`
- web-app / developer-cli: still read `reviews`
- Likely impact: clients show fallback values or missing review data
Compair surfaced a high-confidence drift issue across related repos that would not appear in a single-repo review.
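To make the failure mode concrete, here is a minimal sketch using the field names from the example above (the exact JSON shape is an assumption for illustration):

```shell
# Backend response after the rename: review data now lives under "items"
printf '{"items": [{"id": 1, "rating": 5}]}\n' > response.json
# Old clients still look up "reviews"; the lookup quietly finds nothing,
# which is why users see fallback values instead of a hard error
if ! grep -q '"reviews"' response.json; then
  echo "field 'reviews' missing upstream: clients fall back to defaults"
fi
```

Nothing crashes, so single-repo review and tests in each client repo stay green; only a cross-repo comparison sees the contradiction.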
## Try It On Your Own Repo Suite
Use this after you've run the demo and want to test Compair on the repos that make up your actual product surface.
Before you start:

- Put all related repos in one group
- Upload baselines first
- Then run one warm review across the group
```shell
# 1. Choose a profile and create a shared review group
compair profile use local
# or: compair profile use cloud
compair login
compair group create "Product Suite"
compair group use "Product Suite"
compair self-feedback on
compair feedback-length brief

# 2. First-run bootstrap only:
# index each related repo before asking for cross-repo feedback
compair track ~/code/backend-api --initial-sync --no-feedback
compair track ~/code/web-app --initial-sync --no-feedback
compair track ~/code/developer-cli --initial-sync --no-feedback
compair track ~/code/desktop-client --initial-sync --no-feedback
# repeat for any other repos in the shared product surface

# Optional but recommended for larger suites:
# keep generated artifacts and low-signal files out of the review surface
# with repo-local .compairignore files before the first warm pass
compair ignore suggest ~/code/backend-api
# add --write to append high-confidence suggestions after review

# 3. Run the warm review pass across the whole group
compair review --all --snapshot-mode snapshot --reanalyze-existing --detach
compair wait --all

# Optional: if you want a slower, broader repo-pair sweep instead of the
# standard shared-peer review pool, run the attached pairwise mode
compair review --all --pairwise --cross-repo-only

# Optional: if you want a one-shot whole-bundle read instead of the normal
# per-chunk retrieval/index path, run review --now
compair review --all --snapshot-mode snapshot --reanalyze-existing --now --yes
compair review --all --snapshot-mode snapshot --reanalyze-existing --now --skip-index --yes

# 4. Inspect the results
compair reports
compair notifications
compair notifications prefs
```
After the first run:

- Start with `brief`
- Expect the first baseline to take longest
- After the warm pass, use normal `review` / `wait` cycles day to day
- Use `review --detach` when you want the same workflow without blocking your terminal
- Use `wait --timeout 20m` when a large baseline needs more time without resubmitting
- Use `review --pairwise` when you want a slower, higher-coverage repo-pair pass; `--cross-repo-only` skips same-repo pairs
- Use `review --now` when you want one whole-bundle LLM pass over the current tracked repo set instead of the normal per-chunk retrieval path; the CLI prints a token/cost quote before the model call, and Cloud runs require prepaid credits once that feature is enabled
- Use `review --now --skip-index` when you want that bundle review faster and can tolerate the indexed retrieval state staying stale until a later full sync/review
- Use `ignore suggest` to find repo-local `.compairignore` candidates before a full-suite baseline
- Treat `sync` as the advanced/CI control surface rather than the default daily command
- Treat `--initial-sync --no-feedback` as a one-time bootstrap step, not the normal daily workflow
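As a starting point, a repo-local ignore file might look like the sketch below. The gitignore-style pattern syntax is an assumption here; confirm the real format against what `compair ignore suggest` emits for your repo.

```
# .compairignore (hypothetical example; verify patterns with `compair ignore suggest`)
dist/
build/
node_modules/
coverage/
*.min.js
*.lock
```

Keeping generated artifacts out of the baseline shrinks the review surface and makes the first warm pass faster.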
Pick a feedback length for the group:

- `brief` — You want a fast, readable signal. Recommended for first full-suite reviews and most daily use.
- `detailed` — You want more context and rationale for a smaller number of findings.
- `verbose` — You are actively debugging a specific result and want the most supporting detail.
## Add Compair To CI When You're Ready
For interactive use, prefer `compair review`, `compair review --detach`, and `compair wait`. Use `compair sync` when you specifically want CI, machine-readable output, gating, or lower-level control.
Start in advisory mode:

```shell
compair sync --json
```

Move to a conservative failing check:

```shell
compair sync --json --gate api-contract
```

Tighten rules later as you build trust in the signal. If the term "gate" is unfamiliar, treat it as the rule that decides whether CI should fail.
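If you want to script against the advisory output, something like the sketch below works once you know the report shape. The `findings`/`severity` field names are assumptions for illustration, not Compair's documented JSON schema, and the sample report is faked locally so the snippet is self-contained:

```shell
# Stand-in for a report captured earlier with: compair sync --json > sync-report.json
printf '{"findings": [{"severity": "high", "type": "potential_conflict"}]}\n' > sync-report.json
# Count high-severity findings without jq, using grep as a portable fallback
high=$(grep -o '"severity": "high"' sync-report.json | wc -l | tr -d ' ')
echo "high-severity findings: $high"
```

In advisory mode a wrapper like this can post the count to a PR comment instead of failing the build.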
| Command | What it does | Use it when... |
| --- | --- | --- |
| `compair sync --json` | Advisory only. Produces machine-readable output and a Markdown report, but does not fail CI on its own. | You are introducing Compair and want visibility without disruption. |
| `compair sync --json --gate api-contract` | Fails CI on high-severity `potential_conflict` notifications. | Best first production preset. |
| `compair sync --json --gate cross-product` | Fails CI on broader high-severity cross-product issues. | You want more than API contract checks, but still want a conservative threshold. |
| `compair sync --json --gate review` | Fails CI on high-severity conflicts and review-oriented updates. | You want stronger code-review style enforcement. |
| `compair sync --json --gate strict` | Fails CI on high- and medium-severity issues across a broader set of notification types. | Use on integration or release branches after you trust the signal. |
Recommended rollout: start with visibility, then fail only on the highest-confidence issues, then tighten thresholds later.
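As one way to wire that rollout into CI, here is a hedged GitHub Actions sketch. The workflow scaffolding and the `./scripts/install-compair.sh` install step are placeholders, not from this README; only the `compair sync` invocations mirror the commands above.

```yaml
name: compair-check
on: [pull_request]
jobs:
  cross-repo-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder: install Compair CLI however fits your platform
      - run: ./scripts/install-compair.sh
      # Phase 1: advisory only; never fails the build
      - run: compair sync --json || true
      # Phase 2 (enable once you trust the signal): conservative failing gate
      # - run: compair sync --json --gate api-contract
```

Flipping from phase 1 to phase 2 is a one-line change, which keeps the rollout reversible.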
Traditional RAG is good at answering questions from retrieved snippets. Repo-scoped AI review is good at helping inside one repo or one pull request.
Compair is built for a different problem:
start from what changed, not from a free-form query
search across the other repos that make up the product surface
look for contradictions, drift, hidden overlap, and missing downstream updates
turn high-confidence findings into notifications and CI gates
This matters because larger context windows do not mean every token is equally weighted, inspected, or analyzed. Important evidence still gets lost when it is buried inside a huge prompt, especially once instructions, history, and output budget are sharing that same window. Compair improves signal by focusing attention on the changed chunk and the most relevant cross-repo evidence instead of asking the model to reason over the whole product at once.
The practical takeaway is simple: Compair wins less by stuffing everything into one prompt and more by repeatedly compressing a large shared code and document surface into a small grounded evidence pack around each change.
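As a rough analogy only (this is not Compair's actual pipeline), the "start from what changed, then search the sibling repos" step can be sketched with plain shell tools:

```shell
# Toy version of "narrow attention to the changed surface":
# seed two sibling repos, then search the second for a symbol the first changed
suite=$(mktemp -d)
mkdir -p "$suite/backend-api" "$suite/web-app"
echo "def list_items(): ..." > "$suite/backend-api/api.py"
echo "client.list_items()"   > "$suite/web-app/app.py"
# Pretend api.py just changed; the touched symbol seeds the search
symbol="list_items"
# The "evidence pack" is just the sibling-repo lines that mention that symbol
grep -rn "$symbol" "$suite/web-app"
```

A real evidence pack would rank and trim these hits, but the key point survives: the search starts from the changed symbol, not from a free-form query or a whole-product prompt.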
## Docs

New users should start with the demo, user guide, or cross-repo workflow. Maintainers and operators can use the advanced docs below.