Claude Code Gets Native LSP Support
Key topics
Excitement is building around Claude Code's latest update, which brings native Language Server Protocol (LSP) support, and users are diving in to explore its capabilities. Some commenters, like vorticalbox, draw comparisons to other tools such as Crush, which has had LSP support for a while; others, such as tonyhart7, speculate that native tool-call support could improve read-token efficiency. As users experiment with the new feature, they're sharing tips and troubleshooting advice: monkpit's discovery that updating the marketplace plugin resolved issues for some users sparked a chorus of gratitude. Meanwhile, JamesSwift is working through kinks with permissions prompts, highlighting the ongoing effort to refine the new functionality.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 87 comments (0-6h)
- Avg / period: 17.8 comments

Based on 160 loaded comments
Key moments
- Story posted: Dec 22, 2025 at 10:59 AM EST (19 days ago)
- First comment: Dec 22, 2025 at 1:05 PM EST (2h after posting)
- Peak activity: 87 comments in the 0-6h window (the hottest period of the conversation)
- Latest activity: Dec 25, 2025 at 4:33 AM EST (16 days ago)
I’ve not noticed the agent deciding to use it all that much.
[0] https://github.com/charmbracelet/crush
• Use `/plugin` to open Claude Code's plug-in manager
• In the Discover tab, enter `lsp` in the search box
• Use `spacebar` to enable the ones you want, then `i` to install
Hope that helps!
https://github.com/anthropics/claude-code/issues/14803#issue...
https://github.com/anthropics/claude-code/issues/13952#issue...
https://github.com/anthropics/claude-code/issues/13952#issue...
I am disabling it for now since my flow is fine at the moment, I'll let others validate the usefulness first.
LSP Plugin Recommendation
LSP provides code intelligence like go-to-definition and error checking
Plugin: swift-lsp
Swift language server (SourceKit-LSP) for code intelligence
Triggered by: .swift files

Would you like to install this LSP plugin?
› 1. Yes, install swift-lsp
  2. No, not now
  3. Never for swift-lsp
  4. Disable all LSP recommendations
I'd be disappointed if this were a feature only for the vscode version.
If you're vibe coding without an editor, would this have any benefits to code quality over a test suite and the standard linter for a language?
That would be the idea.
The LLM wants to see the definition of a function. More reliable than grepping.
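For concreteness, this is roughly what a go-to-definition request looks like on the wire: the LSP base protocol is JSON-RPC 2.0 framed with a Content-Length header. The file URI and position below are made-up values, purely for illustration.

```python
import json

# Illustrative LSP go-to-definition request; the URI and position are made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///project/src/app.py"},
        "position": {"line": 41, "character": 12},  # zero-based line and column
    },
}

body = json.dumps(request)
# LSP messages are framed with a Content-Length header (e.g. over stdio).
print(f"Content-Length: {len(body)}\r\n\r\n{body}")
```

The server replies with the exact location(s) of the definition, instead of the pile of textual matches grep would return.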
They are definitely coding in an LLM maximalist way, in a good way.
I was playing around with codex this weekend and honestly having a great time (my opinion of it has 180'd since gpt-5.2(-codex) came out) but I was getting annoyed at it because it kept missing references when I asked it to rename or move symbols. So I built a skill that teaches it to use rope for mechanical python codebase refactors: https://github.com/brian-yu/python-rope-refactor
Been pretty happy with it so far!
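For anyone curious what a mechanical rope rename looks like, here is a minimal sketch; the project path, file name, and symbol names are placeholders, and the linked skill may drive rope differently.

```python
from rope.base.project import Project
from rope.refactor.rename import Rename

# Minimal rope rename sketch; paths and names are placeholders.
project = Project(".")
resource = project.get_resource("mypkg/util.py")
offset = resource.read().index("old_name")  # character offset of the symbol

changes = Rename(project, resource, offset).get_changes("new_name")
print(changes.get_description())  # preview the project-wide change
project.do(changes)               # apply it across the project
project.close()
```

Because rope resolves the symbol rather than matching text, call sites in other modules get updated while unrelated occurrences of the same string are left alone.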
It seems to be very efficient context-wise, but at the same time it makes precise context management much harder.
And yes +1 for opus. Anthropic delivered a winner after fucking up the previous opus 4.1 release.
They feel like different coworker archetypes. Codex often does better end-to-end (plan + code in one pass). Claude Code can be less consistent on the planning step, but once you give it a solid plan it’s stellar at implementation.
I probably do better with Codex mostly due to familiarity; I’ve learned how it “thinks” and how to prompt it effectively. Opus 4.5 felt awkward for me for the same reason: I’m used to the GPT-5.x / Codex interaction style. Co-workers are the inverse, they adore Opus 4.5 and feel Codex is weird.
Surprised that you don't have internal tools or skills that could do this already!
Shows how much more work there is still to be done in this space.
It's hard to quantify what sort of value those examples generate (YouTube and Amazon were already massively popular). Personally I find it very useful, but it's still hard to quantify.
This is why I roll my eyes every time I read doomer content that mentions an AI bubble followed by an AI winter. Even if (and objectively there's 0 chance of this happening anytime soon) everyone stops developing models tomorrow, we'll still have 5+ years of finding out how to extract every bit of value from the current models.
Of course there is a bubble. We can see it whenever these companies tell us this tech is going to cure diseases and solve world hunger; whenever they tell us it's "thinking", can "learn skills", or is "intelligent", for that matter. Companies will absolutely devalue and the market will crash when the public stops buying the snake oil they're being sold.
But at the same time, a probabilistic pattern recognition and generation model can indeed be very useful in many industries. Many of our problems can be approached by framing them in terms of statistics, and throwing data and compute at them.
So now that we've established that, and we're reaching diminishing returns of scaling up, the only logical path forward is to do some classical engineering work, which has been neglected for the past ~5 years. This is why we're seeing the bulk of gains from things like MCP and, now, "agents".
This is objectively not true. The models have improved a ton (with data from "tools" and "agentic loops", but it's still the models that become more capable).
Check out [1], a 100 LoC "LLM in a loop with just terminal access"; it is now above last year's heavily harnessed SotA.
> Gemini 3 Pro reaches 74% on SWE-bench verified with mini-swe-agent!
[1] - https://github.com/SWE-agent/mini-swe-agent
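The linked repo is the real thing; as a rough illustration of the pattern it implements (a model in a loop with terminal access), something like the sketch below, with `ask_model` standing in for whichever model API you use:

```python
import subprocess

def ask_model(transcript: str) -> str:
    """Stand-in for a real model call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

# Minimal "LLM in a loop with terminal access" sketch; not mini-swe-agent's code.
transcript = "Task: make the failing test in tests/test_io.py pass.\n"
for _ in range(20):  # hard cap on iterations
    reply = ask_model(transcript)
    if reply.strip().startswith("DONE"):
        break
    # Treat the model's reply as a shell command and feed the output back in.
    result = subprocess.run(
        reply, shell=True, capture_output=True, text=True, timeout=120
    )
    transcript += f"\n$ {reply}\n{result.stdout}{result.stderr}"
```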
Sure, the models themselves have improved, but not by the same margins from a couple of years ago. E.g. the jump from GPT-3 to GPT-4 was far greater than the jump from GPT-4 to GPT-5. Currently we're seeing moderate improvements between each release, with "agents" taking up center stage. Only corporations like Google are still able to squeeze value out of hyperscale, while everyone else is more focused on engineering.
This doesn't refute the fact that this simple idea can be very useful. Especially since the utility doesn't come from invoking the model in a loop, but from integrating it with external tools and APIs, all of which requires much more code.
We've known for a long time that feeding the model with high quality contextual data can improve its performance. This is essentially what "reasoning" is.
In order to back up GP's claim, they should compare models from a few years ago with modern non-reasoning models in a non-agentic workflow. Which, again, I'm not saying they haven't improved, but that the improvements have been much more marginal than before. It's surprising how many discussions derail because the person chose to argue against a point that wasn't being made.
Those are improvements to the model, albeit in service of agentic workflows. I consider that distinct from improvements to agents themselves which are things like MCP, context management, etc.
No LSP support is wild.
We have 50 years' worth of progress on top of grep, and grep is one of the worst ways to refactor a system.
Nice to see LLM companies are ignoring these teachings and speed running into disaster.
I'll have to check again because 6 months ago this stuff was pure trash and more frustrating than useful (beyond a boilerplate generator that also boils the ocean).
Opus 4.5 in Claude Code is a massive jump over 4.0 which is a massive jump over 3.7.
Each generation is being fine-tuned on a huge corpus of freshly-generated trajectories from the previous generation so things like tool use improve really quickly.
The answer is to use tools that have semantic info to rename things.
https://github.com/anthropics/claude-code/issues/1259#issuec...
2. https://github.com/microsoft/pyright
3. https://github.com/python-lsp/python-lsp-server
4. https://github.com/palantir/python-language-server
OpenCode has been truly innovating in this space and is actually open source, and it would fit naturally into custom corporate LLM proxies. Yet we've now built so many unruly wrappers and tools around claude-code's proprietary binary, just to sandbox it and use it with our proxy, that I fear it's too late to walk back.
Not sure how OpenCode can break through this barrier, but I'm an internal advocate for it. For hobby projects, it's definitely my go-to tool.
One of my favorite features is that you can run it as a server, and then it has an API and SDKs to manage sessions etc.
Great to build a centrally managed agent for your team.
Source? I don't think this is true, you might be confusing this with the recent Agentic AI Foundation & MCP news?
What's incredible is just how badly it works. I nearly always work with projects that mount multiple folders, and the IDE's MCP doesn't support that, so it doesn't understand what folders are open and can't interact with them. Junie has the same issue, and the AI Assistant appears to have inherited it. The issue has been open for ages and ignored by JetBrains.
I also tried out their full line completion, and it's incomprehensibly bad, at least for Go, even with "cloud" completion enabled. I'm back to using Augment, which is Claude-based autocompletion.
But Augment is not the most stable; I've had lots of serious problems with it. The newest problem, the one that's pushing me over the edge, is that it has recently been causing the IDE to shoot up and use all cores (it's rare to see an app at 1,000% CPU in the macOS Activity Monitor, but it did it!) whenever it needs to recompute indexes, which is the only thing that has ever made my M2 Mac run its fan. It's not very reliable generally (e.g. autocompletions don't always appear), so I'd be interested in trying alternatives.
VSCode? Select AI view via shortcut or CMD + P and you’re done. That’s how you do it.
After years of JetBrains PyCharm Pro, I'm seriously considering switching to Cursor. Before Supermaven was acquired, PyCharm + Supermaven felt like having superpowers ... I really wish they would manage to somehow catch up; otherwise the path is written: crisis, acquisition by some big corp, enshittification.
One thing that I'm really missing is the automatic cursor move.
They have an MCP server, but it doesn't provide easy access to their code metadata model. Things like "jump to definition" are not yet available.
This is really annoying, they just need to add a bit more polish and features, and they'll have a perfect counter to Cursor.
I much prefer their IDEs to, say, VS Code, but their development has been a mess for a while, with half-assed implementations and long-standing bugs.
This is 5% of what refactoring is; the rest is large-scale re-architecting of code, where these tools are useless.
The agents can do this large-scale architecting if you describe exactly what you want.
IntelliJ has no moat here, because it only does well that 5% of what refactoring is.
https://gitlab.com/rhobimd-oss/shebe/-/blob/main/docs/guides...
https://gitlab.com/rhobimd-oss/shebe/-/tree/main?ref_type=he...
Then in skills or CLAUDE.md I instruct claude to use this mcp tool to enumerate all files that need changing/updating.
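The exact wording in that CLAUDE.md isn't shown in the thread; a hypothetical instruction along these lines would do it (the tool name below is a placeholder, not the actual tool name from the linked project):

```markdown
<!-- Hypothetical CLAUDE.md excerpt; tool name and wording are placeholders -->
## Refactoring

Before any rename or signature change, call the `list_affected_files` MCP tool
to enumerate every file that needs updating, then edit only from that list.
Do not rely on grep alone to find call sites.
```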
You can also access the full intellij API via groovy scripts in the filters or when computing replacement variables, if you really want.
Though most of the time built in refactors like 'extract to _' or 'move to' or 'inline' or 'change type signature' or 'find duplicates' are enough.
An explainer for others:
Not only can analyzers act as basic linters, but transformations are built right into them. Every time Claude does search-and-replace to add a parameter I want to cry a little; this has been a solved science.
Agents + Roslyn would be productive like little else. Imagine an agent as an orchestrator, with manipulation done through commands to an API that maintains guardrails and compilability.
Claude is already capable of writing Roslyn analyzers, and Roslyn has an API for implementing code transformations (so-called "quick fixes"), so they are already out there in library form.
It's hard to describe them to anyone who hasn't used a similarly powerful system, but essentially they enable transforms that go way beyond simple find/replace. You get accurate transformations that can be quite complex, and deep reworks of the code itself.
A simple example would be transforming a foreach loop into a for loop, or transforming and optimizing LINQ statements.
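Roslyn itself is C#/.NET, but the same AST-level idea can be illustrated with Python's standard ast module; the sketch below (not Roslyn, and not the commenter's setup) rewrites `x == None` comparisons into `x is None`:

```python
import ast

class NoneComparisonRewriter(ast.NodeTransformer):
    """Rewrite `x == None` comparisons into `x is None` at the AST level."""

    def visit_Compare(self, node: ast.Compare) -> ast.Compare:
        self.generic_visit(node)
        node.ops = [
            ast.Is() if isinstance(op, ast.Eq) and self._is_none(right) else op
            for op, right in zip(node.ops, node.comparators)
        ]
        return node

    @staticmethod
    def _is_none(node: ast.expr) -> bool:
        return isinstance(node, ast.Constant) and node.value is None

source = "if result == None:\n    handle_missing()\n"
tree = NoneComparisonRewriter().visit(ast.parse(source))
print(ast.unparse(tree))  # prints the rewritten source: "if result is None: ..."
```

Unlike textual find/replace, the rewrite only ever touches real comparison nodes, never strings or unrelated identifiers.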
And yet we find these tools unused with agentic find/replace doing the heavy lifting instead.
Whichever AI company solves LSP and compiler based deep refactoring will see their utility shoot through the roof for working with large codebases.
It was code-named to disambiguate it from the old compiler. Roslyn is almost 15 years old now, so I can't call it new, but it's newer than the really legacy stuff.
It essentially lets you operate on the abstract syntax tree itself, so there is background compilation that powers inspection and transformation.
Instant renaming is an obvious benefit, but you can do more powerful transformations, such as removing redundant code or transforming one syntax style into another, e.g. transforming from a Fluent API into a procedural one or vice-versa.
It was one of the things that brought me to DataGrid in the first place
Like, the AI can't jump to definition! What are we fucking doing!?
This is why LSP support should be huge, and I'm surprised it's just a line-item in a changelog.
Days fucking around with clangd for jump to definition to sometimes work. Sigh
https://blog.jetbrains.com/fleet/2025/12/the-future-of-fleet...
Fleet is a completely different codebase.
So they’re correct: there are only two families of IDEs.
Fleet was very stable to use; it just never successfully turned into a product, which they also address in their link, explaining why that happened.
Uncharitable but yeah, reality isn't always charitable.
With a fair disclaimer that it is very easy to vibe-code a skill oneself, with both pros (you can create one just for you!) and cons (the ones you find online vary widely in quality; quite a few have hard-coded versions or practices).
172 more comments available on Hacker News