agent-lsp is a stateful MCP server designed to bridge AI capabilities with code intelligence. It operates as a runtime that maintains warm semantic indexes across multiple language servers, enabling accurate and efficient multi-step code operations. The platform provides 65 tools, 30 CI-verified languages, and persistent sessions, ensuring seamless integration of AI-driven workflows.
Key Features:
Persistent Indexing: Maintains a warm semantic index to reduce cold-start overhead and ensure fast access to code intelligence.
Skill Layer: Encodes correct multi-step operations, transforming complex tasks into single, reliable workflows.
Token Efficiency: Uses structured LSP responses to significantly reduce token usage compared to traditional text-based methods.
Concurrency Analysis: Provides cross-language audits for shared state and concurrent execution across 25 languages.
Phase Enforcement: Ensures correct workflow ordering by blocking unauthorized tool calls during specific phases.
Speculative Execution: Simulates edits in memory, allowing agents to preview changes and their impact without altering code on disk.
Audience & Benefit:
Ideal for developers, AI agents, and organizations seeking to enhance code accuracy and efficiency. By leveraging agent-lsp, users can reduce errors, accelerate workflows, and minimize the risk of introducing regressions during refactoring or code modification. The platform is particularly valuable for teams working with large-scale projects requiring precise semantic awareness.
agent-lsp can be installed via winget, making it easy to integrate into existing development environments without additional setup complexity.
README
The most complete MCP server for language intelligence. 65 tools, 30 CI-verified languages, 24 agent workflows. Single Go binary.
AI agents make incorrect code changes because they can't see the full picture: who calls this function, what breaks if I rename it, does the build still pass. Language servers have the answers, but existing MCP bridges either cold-start on every request or expose raw tools that agents use incorrectly.
agent-lsp is a stateful runtime over real language servers. It indexes your workspace once, keeps the index warm, and adds a skill layer that encodes correct multi-step operations so they actually complete.
What agents say
We asked AI agents to evaluate agent-lsp across 10 coding tasks (find callers, rename safely, preview edits, detect dead code) and write an honest assessment. Four different models, four independent evaluations, same conclusion:
> Claude (Opus 4.6): "I would recommend agent-lsp for any workflow involving refactoring, impact analysis, or safe editing. The standout tools are blast_radius (blast radius in one call, with test/non-test partitioning that would take 5-10 grep commands to replicate), go_to_implementation (type-checked interface satisfaction that grep simply cannot do), and the simulation session workflow (speculative type-checking without touching disk, which has no grep/read equivalent at all)."
> Cursor (auto): "I would recommend agent-lsp for heavy refactors and code navigation because the rename, references, implementations, call hierarchy, and simulation tools remove a lot of brittle grep/manual-edit work and make changes safer."
> GPT-5.5 (via Codex): "I would recommend agent-lsp for symbol-aware work: references, implementations, rename previews, diagnostics, and large-file structure are materially faster and less error-prone than grep/read loops."
> Gemini 2.5 Pro (via Gemini CLI): "I would highly recommend agent-lsp because it provides a level of semantic awareness that standard text-searching tools simply cannot match. The ability to perform high-confidence renames, find interface implementations, and preview the diagnostic impact of edits without writing to disk significantly reduces the risk of introducing regressions."
How the pieces fit together: LSP (Language Server Protocol) is how editors get code intelligence: completions, diagnostics, go-to-definition. MCP (Model Context Protocol) is the standard way AI tools like Claude Code discover and call external tools. agent-lsp bridges the two: language server intelligence, accessible to AI agents.
How it works
One agent-lsp process manages your language servers. Point your AI at ~/code/. It routes .go to gopls, .ts to typescript-language-server, .py to pyright. No reconfiguration when you switch projects. The session stays warm across files, packages, and repositories.
Tested, not assumed
Every other MCP-LSP implementation lists supported languages in a config file. None of them run the actual language server in CI to verify it works.
agent-lsp CI runs 30 real language servers against real fixture codebases on every push: Go, Python, TypeScript, Rust, Java, C, C++, C#, Ruby, PHP, Kotlin, Swift, Scala, Zig, Lua, Elixir, Gleam, Clojure, Dart, Terraform, Nix, Prisma, SQL, MongoDB, and more. When we say "works with gopls," that's a verified, automated claim, not a hope.
Speculative execution
Simulate changes in memory before writing to disk. No other MCP-LSP implementation has this.
preview_edit previews the diagnostic impact of any edit. You see exactly what breaks before the file is touched. simulate_chain evaluates a sequence of dependent edits (rename a function, update all callers, change the return type) and reports which step first introduces an error.
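As a sketch, a preview_edit call over MCP might carry a payload like the one below. The argument names (file, start_line, end_line, new_text) and the file path are illustrative assumptions, not the tool's actual schema:

```json
{
  "tool": "preview_edit",
  "arguments": {
    "file": "internal/auth/token.go",
    "start_line": 42,
    "end_line": 42,
    "new_text": "func ValidateToken(raw string) (Claims, error) {"
  }
}
```

The response would then report the diagnostics the edit introduces, before anything is written to disk.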
Structured LSP responses use 5-34x fewer tokens than grep/read on the same tasks. On HashiCorp Consul (319K lines), a blast-radius analysis uses 17.7MB via grep vs 841KB via LSP, reducing 5,534 tool calls to 119. Savings scale with codebase size. See docs/token-savings.md for the full experiment across five codebases.
Persistent daemon mode
Python and TypeScript projects need minutes of background indexing before find_references works. agent-lsp automatically spawns a persistent daemon broker that survives between sessions, so the workspace stays indexed. First session: daemon starts and indexes (~10s for FastAPI). Subsequent sessions: instant connection to the warm daemon. Auto-exits after 30 minutes of inactivity. Go, Rust, and other fast-indexing languages bypass this entirely (zero overhead).
Phase enforcement
Skills tell agents the correct order of operations. Phase enforcement makes the runtime block violations instead of trusting the agent to follow instructions.
When an agent activates a skill, every tool call is checked against the current phase's permissions. Calling apply_edit during blast-radius analysis doesn't silently proceed; it returns an error with specific recovery guidance ("complete the blast_radius phase first, allowed tools: [blast_radius, find_references]"). Phases advance automatically as the agent calls tools from later phases.
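For illustration, a blocked call might return an error shaped roughly like this; the field names are assumptions, and only the recovery message and allowed-tools list are taken from the description above:

```json
{
  "error": "phase_violation",
  "message": "apply_edit is not allowed during blast-radius analysis",
  "recovery": "complete the blast_radius phase first",
  "allowed_tools": ["blast_radius", "find_references"]
}
```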
Concurrency analysis
The inspector includes 4 concurrency checks that work across 25 languages in 4 concurrency families (goroutine, thread, async, actor):
Unrecovered concurrent entry: goroutines/threads/tasks without recovery
Unchecked shared state: bare type assertions on sync.Map, ConcurrentHashMap
Channel never closed: channels/queues created but never closed (goroutine leaks)
Shared field without sync: fields accessed from concurrent contexts without synchronization
blast_radius annotates symbols with sync_guarded: true when the parent type has a mutex. find_callers with cross_concurrent: true traces call chains through goroutine/thread boundaries. The /lsp-concurrency-audit skill produces a field-level safety report for any type.
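A sketch of what a sync_guarded annotation in blast_radius output might look like; only the sync_guarded field is named above, and every other field here (symbol, references, guard) is a hypothetical illustration:

```json
{
  "symbol": "Cache.entries",
  "references": 14,
  "sync_guarded": true,
  "guard": "Cache.mu (sync.Mutex)"
}
```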
Auto-diagnostics
Symbol edit tools (replace_symbol_body, insert_after_symbol, insert_before_symbol, safe_delete_symbol) automatically return errors_after and warnings_after counts. Agents know immediately whether an edit broke something without a separate get_diagnostics call.
safe_apply_edit combines preview + apply in one call: previews speculatively, applies to disk only if net_delta == 0 (no new errors). One tool call instead of three.
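A sketch of a safe_apply_edit result, assuming a JSON response: net_delta, errors_after, and warnings_after come from the descriptions above, while the applied field is an illustrative assumption:

```json
{
  "applied": true,
  "net_delta": 0,
  "errors_after": 0,
  "warnings_after": 2
}
```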
Skills
Raw tools get ignored. Skills get used. Each skill encodes the correct tool sequence so workflows actually happen without per-prompt orchestration instructions. Skills are available as AgentSkills slash commands and as MCP prompts via prompts/list / prompts/get for any MCP client.
See docs/skills.md for full descriptions and usage guidance.
Before you change anything
| Skill | Purpose |
|---|---|
| /lsp-impact | Blast-radius analysis before touching a symbol or file |
| /lsp-implement | Find all concrete implementations of an interface |
| /lsp-dead-code | Detect zero-reference exports before cleanup |
Editing safely
| Skill | Purpose |
|---|---|
| /lsp-safe-edit | Speculative preview before disk write; before/after diagnostic diff; surfaces code actions on errors |
| /lsp-simulate | Test changes in-memory without touching the file |
| /lsp-edit-symbol | Edit a named symbol without knowing its file or position |
| /lsp-edit-export | Safe editing of exported symbols; finds all callers first |
| /lsp-rename | prepare_rename safety gate, preview all sites, confirm, apply atomically |
Docker
Images run as a non-root user (uid 65532) by default. Set AGENT_LSP_TOKEN via environment variable, never --token on the command line. Images are also mirrored to Docker Hub (blackwellsystems/agent-lsp). See DOCKER.md for the full tag list, HTTP mode setup, and security hardening options.
Setup
Step 1: Install agent-lsp
curl -fsSL https://raw.githubusercontent.com/blackwell-systems/agent-lsp/main/install.sh | sh
Probes each configured language server and reports capabilities. Fix any failures before proceeding. See language support for install commands and server-specific notes.
Step 4: Configure your AI tool
agent-lsp init
Detects language servers on your PATH, asks which AI tool you use, writes the correct MCP config, and installs skill awareness rules for your AI provider (CLAUDE.md for Claude Code, .cursor/rules/ for Cursor, .clinerules for Cline, .windsurfrules for Windsurf, GEMINI.md for Gemini CLI). For CI or scripted use: agent-lsp init --non-interactive.
Each argument takes the form language:server-binary, with any server arguments comma-separated.
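For example, the MCP config that init writes for Claude Code might resemble the snippet below. This is a sketch: the server name "lsp" is inferred from the mcp__lsp__* tool prefix used elsewhere in this README, and the sample arg just illustrates the language:server-binary format; the file init actually generates may differ:

```json
{
  "mcpServers": {
    "lsp": {
      "command": "agent-lsp",
      "args": ["python:pyright-langserver,--stdio"]
    }
  }
}
```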
Step 5: Install skills
git clone https://github.com/blackwell-systems/agent-lsp.git /tmp/agent-lsp-skills
cd /tmp/agent-lsp-skills/skills && ./install.sh --copy
Skills are prompt files copied into your AI tool's configuration. --copy means the clone can be safely deleted afterward.
Skills are also available as MCP prompts: any MCP client can discover them via prompts/list and retrieve full workflow instructions via prompts/get, with no manual installation required. The install.sh path is for AgentSkills-compatible clients (Claude Code slash commands).
Step 6: Allow tool permissions (Claude Code)
For Claude Code, add mcp__lsp__* to your permissions allow list so all 65 tools are available without per-tool approval prompts:
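In Claude Code, this goes in your settings file (for example, the project's .claude/settings.json). A minimal sketch, assuming the server is registered under the name lsp:

```json
{
  "permissions": {
    "allow": ["mcp__lsp__*"]
  }
}
```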
Without this, Claude Code will prompt for permission on each tool call. Other MCP clients handle permissions differently; check your client's documentation.
Skills are multi-tool workflows that encode reliable procedures: blast-radius check before edit, speculative preview before write, test run after change. See docs/skills.md for the full list.
Step 7: Start working
Your AI agent calls tools automatically. The first call initializes the workspace:
start_lsp(root_dir="/your/project")
This is what the agent does, not something you type. Then use any of the 65 tools. The session stays warm; no restart needed when switching files.
What's unique about agent-lsp
| Capability | Details |
|---|---|
| Tools | 65 |
| Languages (CI-verified) | 30, end-to-end integration tests on every push |
| Agent workflows (skills) | 24, named multi-step procedures, discoverable via MCP prompts/list |
| Speculative execution | 8 tools, simulate changes before writing to disk |
| Phase enforcement | 4 skills, runtime blocks out-of-order tool calls with recovery guidance |
| Connection model | Persistent, warm index across files and projects |
| Call hierarchy | ✓ single tool, direction param |
| Type hierarchy | ✓ CI-verified |
| Cross-repo references | ✓ multi-root workspace |
| Auto-watch | ✓ always-on, debounced file watching |
| HTTP+SSE transport | ✓ bearer token auth, non-root Docker |
| Distribution | Single Go binary, 10 install channels |
Use Cases
Multi-project sessions: point your AI at ~/code/, work across any project without reconfiguring
Polyglot development: Go backend + TypeScript frontend + Python scripts in one session
Large monorepos: one server handles all languages, routes by file extension
Code migration: refactor across repos with full cross-repo reference tracking
CI pipelines: validate against real language server behavior
Niche language stacks: Gleam, Elixir, Prisma, Zig, Clojure, Nix, Dart, Scala, MongoDB, all CI-verified
Multi-Language Support
30 languages, CI-verified end-to-end against real language servers on every CI run. No other MCP-LSP implementation tests even a single language in CI.
Distribution: install channels and release pipeline
Development
git clone https://github.com/blackwell-systems/agent-lsp.git
cd agent-lsp && go build ./...
go test ./... # unit tests
go test ./... -tags integration # integration tests (requires language servers)
Library Usage
The pkg/lsp, pkg/session, and pkg/types packages expose a stable Go API for using agent-lsp's LSP client directly without running the MCP server.