If you use VSCode or any other editor that is not a full-blown IDE (even vim or emacs), chances are you are already using a language server. These servers power features specific to the language or framework you are working with: documentation, autocomplete, code navigation, warnings, and more.
When you click on a function and jump to its definition, a language server is likely behind the scenes making that possible.
What Is a Language Server?
Language servers understand the symbols of a programming language. They know what a variable, a function, a class, or any other language-specific construct is. Before 2015, this kind of tooling was usually custom-built for one language and tied to a specific editor.
That changed when Microsoft started working on VSCode. They wanted an editor that could support any language. So they formalized the Language Server Protocol (LSP) in a public spec: https://microsoft.github.io/language-server-protocol/
This standard allowed developers to build language servers in many languages and integrate them with a wide variety of editors.
Language servers add symbolic understanding to code, which means they go beyond treating source files as plain text. This symbolic layer makes tasks like refactoring or finding definitions more accurate and safer.
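To make that concrete, here is roughly what a symbolic query looks like on the wire. LSP messages are JSON-RPC payloads framed with a Content-Length header; the file path and position below are made up for illustration:

```python
import json

# A textDocument/definition request: "what is defined at this position
# in main.py?" LSP positions are zero-based, so line 11 is the 12th line.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///project/main.py"},
        "position": {"line": 11, "character": 7},
    },
}

body = json.dumps(request).encode("utf-8")
# Each message is framed with a Content-Length header (counted in bytes)
# and written to the language server's stdin.
message = b"Content-Length: %d\r\n\r\n%b" % (len(body), body)
```

The server answers with the file URI and exact range where the symbol is defined, which is precisely the kind of answer a plain text search cannot give reliably.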
Why This Matters for AI Coding Tools
So why am I telling you all this? Because this ties directly into how modern AI coding tools work, and where they might be heading.
Right now, to the best of my knowledge, there are two main approaches in AI coding tools.
The Brute Force Approach: Claude Code
Claude Code takes a tool-based approach. It runs commands like find, grep, and rg to explore the codebase. It lists directories, parses files, and looks for matches using basic command line tools.
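As a rough illustration, the searches it runs boil down to something like this (the symbol name and path are invented, and ripgrep must be installed). Note that this matches text, not symbols, so comments, strings, and same-named identifiers in other scopes all come back as hits:

```python
import subprocess

# Grep-style exploration: ask ripgrep where "parse_config" appears.
# This is a plain text match. It cannot tell a definition from a
# mention in a comment, or from an unrelated symbol with the same name.
result = subprocess.run(
    ["rg", "--line-number", "parse_config", "src/"],
    capture_output=True,
    text=True,
)
for hit in result.stdout.splitlines():
    print(hit)  # e.g. src/loader.py:42:def parse_config(path):
```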
Sometimes it misses. But often, it keeps trying until the task is done and all tests pass. It is effective at diving into a codebase and getting things done using only the tools at its disposal.
The downside is that it starts from scratch each time. Even though it may carry over some context, it mostly relies on documentation and in-repo instructions, and it burns more tokens and LLM calls reasoning through each task.
The Indexed Approach: Cursor and Windsurf
On the other side are tools like Cursor and Windsurf. These are full AI IDEs that take a broader approach. They index your codebase, understand file relationships, and pull in structured documentation and rules.
These indexes make their retrieval smarter. They can quickly bring in relevant code snippets to help the AI complete a task with more global context.
But this approach has tradeoffs. Indexes must be kept in sync with the current code. If they get out of sync, results may be wrong or misleading. Indexing large repos can take time. When working in teams, each person’s local environment may end up duplicating indexing efforts.
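To see why staleness matters, consider a deliberately crude index (real IDE indexes are far more sophisticated; everything here is invented for illustration):

```python
# A toy symbol index: map each top-level "def name" to (file, line).
index = {}

def build_index(paths):
    for path in paths:
        with open(path) as f:
            for lineno, line in enumerate(f, start=1):
                if line.startswith("def "):
                    name = line[4:].split("(")[0]
                    index[name] = (path, lineno)

# Lookups are instant. But if a file is edited after build_index()
# runs, the stored line numbers silently drift: the index stays fast
# while quietly becoming wrong, until someone pays to re-index.
```

A language server sidesteps this class of problem by answering queries against the live state of the files it has open.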
To my knowledge, neither Cursor nor Windsurf shares indexes across a team, even when multiple people work on the same codebase with the same tool. That means redundant work and potential inconsistencies.
A Middle Ground: Using Language Servers in AI Agents
I believe there is another way.
We can use language servers to give AI agents symbolic access to code. This avoids brute force and removes the need for heavy indexing.
To explore this, I built a generic Language Server Protocol (LSP) MCP server. It runs locally over stdio and bridges any language server to clients that speak MCP. So far, I have tested it with Claude Code.
The server supports all of the latest LSP 3.17 features (although not all language servers support every feature). This gives Claude Code the ability to perform symbolic queries, like finding definitions or references, without having to manually parse the codebase using grep.
It works across different languages without needing special instructions for each one. It also does not require indexing the entire codebase unless the language server does that internally.
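The core idea is easiest to see in code. Below is a stripped-down sketch of the bridging pattern, not the code from the repo: the class and function names are mine, and a real client also needs the initialize handshake plus handling for server notifications.

```python
import json
import subprocess

class LspClient:
    """Minimal LSP-over-stdio client. Illustrative only: no error
    handling, and it assumes the next framed message is the response
    (real servers interleave notifications that must be skipped)."""

    def __init__(self, cmd):
        self.proc = subprocess.Popen(
            cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE
        )
        self.next_id = 0

    def request(self, method, params):
        self.next_id += 1
        body = json.dumps({
            "jsonrpc": "2.0",
            "id": self.next_id,
            "method": method,
            "params": params,
        }).encode("utf-8")
        # Frame the message with a Content-Length header, in bytes.
        self.proc.stdin.write(
            b"Content-Length: %d\r\n\r\n%b" % (len(body), body)
        )
        self.proc.stdin.flush()
        return self._read_message()

    def _read_message(self):
        # Read the framing headers up to the blank line, then the body.
        length = 0
        while (line := self.proc.stdout.readline().strip()):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":")[1])
        return json.loads(self.proc.stdout.read(length))

# An MCP tool handler is then a thin translation layer:
def find_definition(client, uri, line, character):
    """What the agent sees as a find-definition tool."""
    return client.request("textDocument/definition", {
        "textDocument": {"uri": uri},
        "position": {"line": line, "character": character},
    })
```

In practice a working bridge also has to send the initialize request and open documents before queries return anything useful; that plumbing is where most of the real code lives.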
Forcing Claude Code to Use the Right Tools
Getting Claude Code to actually use these LSP tools wasn’t automatic. Out of the box, it strongly prefers basic command line tools like grep, find, and rg. Even when more capable tools are available, it often falls back on what it already knows.
To steer it in the right direction, I had to write carefully worded tool descriptions for each exposed LSP function. These had to be just detailed enough to make Claude consider using them and to show they were smarter than basic grep-style searching.
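For a sense of the tone that worked, a description along these lines (paraphrased, not the exact wording from the repo) nudges the model toward the symbolic tool:

```python
# Hypothetical tool description; the wording in lsp-mcp differs.
FIND_REFERENCES_DESCRIPTION = (
    "Find every reference to the symbol at a given file position, "
    "resolved by the language server. Unlike grep or rg, this matches "
    "the actual symbol, not its text: comments, strings, and "
    "identically named variables in other scopes are excluded. Prefer "
    "this over text search when looking for usages of a function, "
    "class, or variable."
)
```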
I also had to include explicit instructions in the CLAUDE.md file within the repo, telling it to use the language server tools where appropriate. This helped, but not perfectly.
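The instructions were along these lines (paraphrased; the actual wording in my CLAUDE.md differs):

```markdown
## Code navigation
- Prefer the LSP MCP tools (find definition, find references, hover)
  over grep/find/rg when looking up symbols.
- Fall back to text search only when a symbolic query fails or the
  target is not a code symbol (comments, config values, log strings).
```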
Even then, if a tool fails or its output seems too abstract, Claude will often revert to its default behavior. It’s a bit like training a junior engineer who’s used to doing everything the hard way — you need to keep reminding it there’s a better method.
Early Results
This is all still experimental, but it points to a promising middle ground. We can give AI coding tools better insight into code without requiring brute-force commands or complex indexing layers.
If you want to try it out, the code is here: https://github.com/erans/lsp-mcp
Use at your own risk.