Cursor 1.7
Posted 3 months ago · Active 3 months ago
cursor.com · Tech · Story · High profile
Controversial · Mixed
Debate: 80/100
Key topics: AI Coding Assistants, Cursor, VSCode, Claude Code
The release of Cursor 1.7 sparks discussion about the tool's value proposition and its competition with other AI coding assistants like Claude Code and VSCode Copilot.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 21m after posting
Peak period: 145 comments in 0-12h
Avg / period: 40
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Oct 1, 2025 at 9:51 AM EDT (3 months ago)
2. First comment: Oct 1, 2025 at 10:12 AM EDT (21m after posting)
3. Peak activity: 145 comments in 0-12h (the hottest window of the conversation)
4. Latest activity: Oct 6, 2025 at 4:49 PM EDT (3 months ago)
ID: 45437735 · Type: story · Last synced: 11/20/2025, 5:30:06 PM
Again, I haven't used Cursor in a while, I'm mostly posting this hoping for Cunningham's Law to take effect :)
idk, seems worth it to me. If you're shelling out for one of the $200 plans maybe it's not as worth it, but it just seems like the best all-in-one AI product out there.
Claude Code is more reliable and generally better at using MCP for tool calls, like pulling docs from Context7. So if I had only one prompt and it HAD to make something work, Claude Code would be my bet.
Personally I like jumping between models and IDEs, if only to mix it up. And you get a reminder of different ways of doing stuff.
I wouldn't even bother with it, but the MCP coding tool I built uses Claude Desktop and is Windows-only, and my laptop is macOS. So I'm using Cursor, and it is WAY WORSE than my simplest MCP server (which literally just runs dotnet commands, filesystem commands, and GitHub commands).
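(For illustration only: a minimal sketch of the kind of MCP server described above, exposing a couple of dotnet and filesystem commands as tools. It assumes the official MCP Python SDK rather than the commenter's actual dotnet/Windows setup; every name here is hypothetical.)

```python
# Hypothetical sketch of a simple MCP server along the lines described above.
# Assumes the official MCP Python SDK ("mcp" package); served over stdio so a
# desktop client such as Claude Desktop can attach to it.
import subprocess
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("simple-dev-tools")


@mcp.tool()
def run_dotnet(args: str, cwd: str = ".") -> str:
    """Run a dotnet CLI command (e.g. 'build' or 'test') and return its output."""
    result = subprocess.run(
        ["dotnet", *args.split()], cwd=cwd, capture_output=True, text=True
    )
    return result.stdout + result.stderr


@mcp.tool()
def read_file(path: str) -> str:
    """Return the contents of a file in the workspace."""
    return Path(path).read_text()


@mcp.tool()
def list_dir(path: str = ".") -> list[str]:
    """List the entries in a directory."""
    return [p.name for p in Path(path).iterdir()]


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```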
I think having something as general as Cursor causes the editor to try too many things that are outside what you actually want.
I fought for 2 hours and 45 minutes while Sonnet 4 (which is what my MCP uses) kept inventing worse ways to implement OpenAI Responses using the OpenAI-dotnet library. Even switching to GPT-5 didn't help. Adding the documentation didn't help. I went to Claude in my browser, pasted the documentation and the class I wanted extended to use Responses, and it finished in 5 minutes.
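(For reference, a rough sketch of the kind of Responses API call being targeted here, shown with the openai Python SDK rather than the OpenAI-dotnet library the commenter was using; the model name and input are placeholders.)

```python
# Rough sketch of a Responses API call, using the openai Python SDK instead of
# the OpenAI-dotnet library mentioned above. Model name and input are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",  # placeholder model name
    input="Explain what the Responses API adds over chat completions.",
)

print(response.output_text)  # convenience accessor for the generated text
```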
The Cursor "special sauce" seems to be a hindrance nowadays. But beggars can't be choosers, as they say.
I am seeing a lot of folks talking about maintaining a good "agent loop" for doing larger tasks. Kilo Code seems to have figured it out completely for me. Using the Orchestrator mode, I'm able to accomplish really big and complex tasks without having to design an agent loop or hand-craft context. It switches between modes and gets the tasks done. My AGENTS.md file is really minimal, something like "write tests for changes and make small commits".
Instead, I'll ask Cursor to refactor code that I know is inefficient. Abstract repetitive code into functions or includes. Recommend (but not make) changes to larger code blocks or modules to make them better. Occasionally, I'll have it author new functionality.
What I find is that Cursor's autocomplete pairs really well with the agent's context. So even if I only ask it for suggestions and tell it not to make the change, when I start implementing those changes myself (either some or all), the shared context kicks in and autocomplete starts providing suggestions in the direction of the recommendation.
However, at any time I can change course and Cursor picks up very quickly on my new direction and the autocomplete shifts with me.
It's so powerful when I'm leading it to where I know I want to go, while having enormous amounts of training data at the ready to guide me toward best practices or common patterns.
I don't run any .md files though. I wonder what I'm missing out on.
Gone are the days of exhausting yourself by typing full requests like "refactor this function to use async/await." Now, simply type "refac—" and let our AI predict that you want an AI to refactor your code.
It's AI all the way down, baby.
The builders are quietly learning the tools, adopting new practices and building stuff. Everyone else is busy criticizing the tech for its shortcomings and imperfections.
It's not a criticism of AI, broadly, it's commentary on a feature designed to make engineers (and increasingly non-engineers) even lazier about one of the main points of leverage in making AI useful.
Because that's where the text the devs type still matters most.
Do I care significantly about this feature's existence, and find it an affront to humanity? No.
But, people who find themselves using auto-complete to make even their prompts for them will absolutely be disintermediated, so I think it wise to ensure people understand that by making funny jokes about it.
Anyone seriously using these tools knows that context engineering and detailed specific prompting is the way to be effective with agent coding.
Just take it to the extreme and you'll see: what if you autocomplete from a single word? A single character?
The system you're using is increasingly generating some random output instead of what you were either a) trying to do, or b) told to do.
It's funny because it's like, “How can we make vibe coding even worse?”
“…I know, let's just generate random code from random prompts.”
There have been multiple recent posts about how to direct agents using a combination of planning step, context summary/packing, etc to craft detailed prompts that agents can effectively action on large code bases.
…or yeah, just hit tab and go make a coffee. Yolo.
This could have been a killer feature about using a research step to enhance a user prompt and turn it into a super prompt; but it isn't.
I thought the "you're not a real programmer if you don't use AI" gatekeeping would take a little longer than this, but here we are. All from the most minor of jokes.
This brings up an interesting point that's often missed, IMO. LLMs are one of the few things that work on many layers, such that once you have a layer that works, you can always add another abstraction layer on top. So yes, you could very well have a prompt that "builds prompts" that "builds prompts" that ... So something like "do x with best practices in mind" can turn into something pretty complex and "correct" down the line of a few prompt loops.
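(A minimal sketch of that idea, a prompt that builds prompts, assuming the openai Python SDK; the model name, the prompt wording, and the two-pass loop are all illustrative.)

```python
# Minimal sketch of "a prompt that builds prompts": each pass asks the model to
# expand a terse instruction into a more detailed one before it is finally run.
# Assumes the openai Python SDK; model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def expand(prompt: str) -> str:
    """Ask the model to rewrite a terse instruction as a detailed, explicit prompt."""
    result = client.responses.create(
        model="gpt-5",  # placeholder model name
        input=(
            "Rewrite this instruction as a detailed, unambiguous prompt, "
            "spelling out best practices and edge cases:\n\n" + prompt
        ),
    )
    return result.output_text


terse = "do x with best practices in mind"
detailed = terse
for _ in range(2):  # two expansion layers, echoing the comment's example
    detailed = expand(detailed)

# Finally execute the expanded prompt.
final = client.responses.create(model="gpt-5", input=detailed)
print(final.output_text)
```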
Caught Claude 4.5 via Cursor yesterday trying to set a password to “password” on an outward facing EC2 service.
Cursor agents open terminals just fine in VSCode, and that's a major part of how Cursor works.
I personally coded in the VSCode text editor prior to Cursor (I left Vim a while ago) and prefer to stay in the context of a desktop text editor. I find it's easier to see what's changing in real time, with a file list, file tabs, top-level and inline undo buttons, etc.
I've even cut tabbing to a separate terminal by about 50%: I learned to use VSCode terminals to run tests and git commands, which works well once you learn the shortcuts and integrate it with some VSCode test-runner extensions. Plus, Cursor added LLM autocomplete to terminal commands, which is great; I don't need a separate CLI tool or Bash/zsh script in the terminal to inject the commands whose arguments I've forgotten.
Cursor's tab autocomplete isn't, and it's the product's greatest strength.
We also have a CLI, if you prefer coding in the terminal. We've seen this be useful for folks using JetBrains or other IDEs: https://cursor.com/cli
See https://www.jetbrains.com/help/ai-assistant/use-custom-model...
I suppose this is by design so you don't know how much you have left and will need to buy more credits.
We added usage visibility in the IDE with v1.4: https://cursor.com/changelog/1-4#usage-and-pricing-visibilit.... By default, it only shows when you are close to your limits. You can toggle it to always display in your settings, if you prefer.
I always preferred the deep IDE integration that Cursor offers. I do use AI extensively for coding, but as a tool in the toolbox, it's not always the best in every context, and I see myself often switching between vibe coding and regular coding, with various levels of hand-holding. And I do also like having access to other AI providers, I have used various Claude models quite a lot, but they are not the be-all-end-all. I often got better results with o3 and now GPT-5 Thinking, even if they are slower, it's good to be able to switch and test.
I always felt that the UX of tools like Claude Code encourages you to blindly do everything through AI; it's not as seamless to dig in and take more control when it makes sense to do so. That being said, they are very similar now, they all constantly copy each other. I suppose for many it's just inertia as well, simply about which one they tried first and what they are subscribed to; to an extent that is the case for me too.
I am not talking about "deep IDE integration" in a wishy-washy sense: what I care about as a professional engineer is that such an integration allows me to seamlessly intervene and control the AI when necessary, while still benefiting from its advantages when it does work well on its own.
Blindly trusting the AI while it does things in the background has rarely worked well for me, so a UX optimized for that is less useful to me than one designed to put the AI right where I can interleave it with normal coding seamlessly and avoid context switching.
This suddenly reminded me that I have a Cursor subscription so I'm going to drop it.
But of course if someone says that Cursor's flow suddenly 2x'd in speed or quality, I would switch to it. I do like having the agent tool be model hotpluggable so we're not stuck on someone's model because their agent is better, but in the end CC is good at both things and codex is similar enough that I'm fine with it. But I have little loyalty here.
But I can see how it might make sense for you. It does depend a lot on how mainstream what you are working on is, I have definitely seen it be more than capable enough to leave it do its thing for webdev with standard stacks or conventional backend coding. I tend to switch a lot between that and a bit more exotic stuff, so I need to be able to fluidly navigate the spectrum between fully manual coding and pure vibe coding.
I think Claude's latest VSCode plugin is really great, and it does make me question why Cursor decided to fork instead of make a plugin. I'd rather have it be a plugin so I don't have to wipe out my entire Python extension stack.
It's still plenty useful of course, but it absolutely needs constant babysitting for now, which is fine. I like AI coding tools that acknowledge those limits and help you work around them, rather than pretending it's magic and hiding its workings from you as an autonomous background process. Maybe soon such a need for control will become obsolete; awesome, I will be the first one on board.
PS:
Chess AI is definitely superhuman now, but Stockfish is a small NN surrounded by tons of carefully human-engineered heuristics and rules. Training an LLM (or any end-to-end self-supervised model) to be superhuman at chess is still surprisingly hard. I did some serious R&D on it a while back. Maybe we’ve gotten there in the last few years, not sure, but it’s very new and still not that much better than the best players.
Most real-world stock trading is still carefully supervised and managed by human traders. Even for automated high-frequency trading, what really works is having an army of mathematicians devising lots of trading scripts; trading with proper deep learning or reinforcement learning is still surprisingly niche and unsuccessful.
Also, combat aviation is far from automated: sure, they can bomb, but not dogfight, and most drones are remote-controlled dumb puppets.
I do agree with your point generally, but any good engineer needs to understand the details of where we are at so we can make real progress.
At least personally, the reason why I prefer CLI tools like Claude and Codex is precisely that they feel like yet another tool in my toolbox, more so than with AI integrated in the editor. As a matter of fact I dislike almost all AI integrations and Claude Code was when AI really "clicked" for me. I'd rather start a session on a fresh branch, work on something else while I wait for the task to be done, and then look at the diff with git difftool or IDE-integrated equivalent. I'd argue you have just as much control with this workflow!
A final note on the models: I'm a fan of Claude models, but I have to begrudgingly admit that gpt-5-codex high is very good. I wouldn't have subscribed just for the gpt-5 family, but Codex is worth it.
The official tools won't necessarily give you the best performance, but they're a safer bet for now. This is merely anecdotal as I haven't bothered to check rigorously, but I and others online have found that GPT-5-Codex is worse in Cursor than in the official CLI/extension/web UI.
Do people think there are better autocomplete options available now? Is it a case of just using a particular model for autocomplete in whatever IDE you want to use?
Even though it is also part of Cursor, you could subscribe to the $10/month Pro plan and use it in JetBrains IDEs like Rider.
https://supermaven.com/pricing
Disclaimer: I get enterprise level subscriptions to these services via my employer. I personally don't pay for them and never consider their cost, if that matters.
Overall I do like VSCode better, but Cursor's blazing-fast and intelligent autocomplete is awesome, so I'll probably stick with Cursor.
Btw, I find the review / agent code stuff pretty bad on both. No idea how people get them working well.
1) The most useful thing about Cursor was always state management of agent edits: being able to roll back to previous states after some edits with the click of a button, or reapply changes, preview edits, etc. But weirdly, it seems like they never recognized this differentiator, and indeed it remains a bit buggy, and some crucial things (like mass reapply after a rollback) never got implemented.
2) Adding autocomplete to the prompt box gives me suspicion they somehow still do not understand best practices in using AI to write code. It is more crucial than ever to be clear in your mind what you want to do in a codebase, so that you can recognize when AI is deviating from that path. Giving the LLM more and earlier opportunities to create deviation is a terrible idea.
3) Claude Code was fine in CLI and has a nearly-identical extension pane now too. For the same price, I seem to get just as much usage, in addition to a Claude subscription.
I think Cursor will lose because models were never their advantage and they do not seem to really be thought leaders on LLM-driven software development.
1. Checkpoints/rollbacks are still a focus for us, although they're less used by those working with git. Could you share the bug you saw?
2. Autocomplete for prompts was something we were skeptical of as well, but found it really useful internally to save time completing filenames of open code files, or tabbing to automatically include a recently opened file into the context. Goal here is to save you keystrokes. It doesn't use an LLM to generate the autocomplete.
3. A lot of folks don't want to juggle three AI subscriptions for coding and have found the Cursor sub where they can use GPT, Claude, Gemini, Grok models to be a nice balance. YMMV of course!
Back to 1): The type of bug I see most often is where conversation history seems incomplete, and I have trouble rolling back to or even finding a previous point that I am certain existed.
Git shares some features, but I think Git was not made for the type of conversational rapid prototyping LLMs enable. I don't want to be making commits on every edit in some kind of parallel git state. Cursor's rollback and branching conversations make it easy to back up if a given chat goes down the wrong path. Reapply is tedious since it has to be done one edit at a time; it would be nice if you could roll forward.
I haven't put much thought into what else would be useful, but in general the most value I get from Cursor is simplifying the complex state of branching conversations.
1. The apply button does not appear. This used to be mostly a problem with Gemini 2.5 Pro and GPT-5 but now sometimes happens with all models. Very annoying because I have to apply manually
2. Cursor doesn't recognize which file to apply changes to and just uses the currently open file. Also very annoying, and it's impossible to change which file I want the changes applied to after they've already been applied to the wrong one.
Are you perhaps on Windows+MinGW? That's the only weird thing in my setup (and it has caused problems in the past for me).
Oh ok, thanks for clarifying. That indeed seems like it would be helpful.
It should still be possible to turn off, though (like any autocomplete, right?).
I find the amount of credits included in the pro subscription per month totally insufficient. Maybe it lasts 1-2 weeks.
Today I got a message telling me I exhausted my subscription when the web dashboard was showing 450/500. Is there a team level constraint on top of individual ones?
I personally wouldn’t want to use cursor without Max.
It requires constant attention and vigilance, but that's better for everyone than having some kind of "moat" that lets them start coasting or worse— lets them start diverting focus to features that are relevant for their enterprise sales team but not for developers using the software.
Companies really should have to stay competitive on features and developer happiness. A moat by definition is anti-competitive.
I’ve actually gone back to neovim, copying in snippets from ChatGPT. I don’t think I’ve given up anything in speed.
Isn't it super-annoying and frustrating to have half-useful text constantly thrown at you? As someone once said, 'Cursor tab is doing too much'.
There is probably a configurable way to request suggestions instead of having them automatically pop up - I should configure that.
https://forum.cursor.com/t/cursor-tab-is-doing-too-much/1881...
Specifically, alt+tab to enable/disable Cursor tab, and change the hotkey for accepting suggestions to something besides tab (whoa, we can use [TAB] for coding again?).
One of us is wrong here. Last I checked, the extension pane was a command line, that doesn't use macOS keybindings, reimplements common controls, uses monospaced text for prose, etc.
I don't particularly mind the last two, but the fact that 'cmd A' on my Mac highlights all the text in the Claude Code user interface, rather than the text in the text box, is annoying.
Note that CC introduced this yesterday, it’s very fast and good.
Agreed 100%
Any time there's LLM auto complete on the prompt (chatgpt has done this too!) I find it horribly distracting and it often makes me completely lose track of what I had in mind, especially on more difficult tasks.
Now with gpt-5-codex and the Codex VS Code extension, I'm getting through up to 20k-line changes in a day, again with lots of parallel jobs; but Codex allows for less rework.
The job of the "engineer" has changed a lot. At 5k lines I was not reviewing every detail, but it was possible to skim over what had changed. At 20k it's more about looking at logs, performance, and architecture, and observing features; less code is reviewed.
Maybe soon just looking at outcomes. Things are moving quickly.
If I were building a new project from scratch, I'd probably use a CLI tool to manage a longer TODO more easily. But working on existing legacy code, I find an IDE integration more flexible.
"Commands now execute in a secure, sandboxed environment. If you’re on allowlist mode, non-allowlisted commands will automatically run in a sandbox with read/write access to your workspace and no internet access."
The agentic side is nothing special and it's expensive for what you get. Even if you're the exact target audience - don't want CLI, want multiple frontier models to choose from for a fixed monthly price - Augment is both more competent and ends up cheaper.
Then for everyone else who is fine with a single model, Claude Code and now Codex are obviously better choices. Or, for those who want cheaper and faster through open-weights models, there's Opencode and Kilo.
The mystery is that the other VC backed ones seemingly don't care or just don't put enough resources into cracking the autocomplete code, as many are still staying with Cursor purely for that - or were until CC became mainstream. Windsurf was making strides but now that's dead.
What? Cursor bought Supermaven last November and I have been using their much superior (compared to GH Copilot) completion since maybe early last year so it does not add up.
source?
Legacy tech, but a great idea before models got good enough to use via CLI.
What this means is that dozens of procedures and activities are happening at any one time in orbit, and a Flight Director on the ground and an Astronaut in space need to be at least cognizant of them (at least enough to prevent disasters and complete the tasks). This is the greatest challenge in on-orbit work.
I.e., to widen this metaphor: collecting and gathering complex operational data on differing parts of a system in a USEFUL way is the greatest challenge of complex work, and software engineering is about controlling complexity above all.
Now, at NASA we often wrote up procedures and activities with the "astronauts are smart, they can grok it" mindset; but during debriefs the common refrain from those at the top of the pyramid was "I don't have the mental capacity to handle and monitor dozens of systems at the same time." Humans are very bad at getting into flow when monitoring things... maybe if some kind of flow state were achievable, like a conductor over an orchestra orchestrating agents, but I don't see that happening with multiple parts of the codebase getting altered at the same time by a dozen agents.
Cursor and agentic tools bring this complexity (and try to tame it through a chat window or text response) to our daily work on our desktops; now we might have dozens of AI agents working on aspects of our codebase! Yes, it's incredible progress, but with this amazing technical ability comes great responsibility for the human overseer... this is the 'astronaut' in my earlier metaphor: an overburdened software engineer.
Worryingly, culture-wise, management teams now expect software devs to deliver much faster. This is dangerous, since we can use these tools but are forced to leave more to autopilot in the hope of catching bugs in test, etc. I see the trend is to push human oversight away into blind agents, but I think this is the wrong model for now; how can I trust an agent without understanding all that it did?
To summarize, I like both Cursor and Claude Code, but I think we need better paradigms in terms of UX so that we can better handle conflicts, stupid models, and reversions, with better windows into what changed code-wise. I also get the trend of creating trashable instances in containers and killing them on failure, but we still need to understand how a code change impacts other parts of the codebase.
Anyway, somebody on the Cursor team will not even read this post; they will just summarize the whole HN thread with AI and implement some software tickets to add another checkbox to the chat window in response. This is not the engineering we need in response to this new paradigm of working; we need some deep 'human' design thinking here.
VS Code accepted the challenge and upped its game.
Claude Code changed the game.
Cursor's own heavy value decrease (always part of the strategy but poorly communicated and managed) hit Cursor users hard when the cheap premium tokens honeymoon ended in recent months.
Existing users are disappointed, potential new users no longer see it as the clear class leader, because it isn't.
3 more comments available on Hacker News