Cursor Acquires Graphite
Key topics
The developer community is abuzz about Cursor's acquisition of Graphite, with some users expressing skepticism about the fate of the Graphite workflow. While the Graphite team assures users that they're "doubling down on building the best workflow" with increased resources, others point to Cursor's history of acquiring and then sunsetting products, like Supermaven, as a worrying precedent. The debate is fueled by contrasting views, with some users urging others to "relax" while others dryly remark on the Bayesian likelihood of Graphite's demise. As one commenter notes, the Graphite team's size and revenue might make this acquisition a different story, but the jury's still out.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1m after posting
- Peak period: 105 comments in 0-6h
- Avg / period: 26.7
- Based on 160 loaded comments
Key moments
- Story posted: Dec 19, 2025 at 11:07 AM EST (15 days ago)
- First comment: Dec 19, 2025 at 11:08 AM EST, 1m after posting
- Peak activity: 105 comments in 0-6h, the hottest window of the conversation
- Latest activity: Dec 21, 2025 at 2:23 PM EST (13 days ago)
We know this isn't what all of you want to hear, and we've spent the last year really evaluating this deeply. At the same time, we're glad you're part of our journey to the future of agentic AI and we think you'll find it's the best alignment and fit for you, too, long-term.
everyone is staying on to keep making the graphite product great. we're all excited to have these resources behind us!
It's happened so many times that it's just part of how we do business, unfortunately.
If Cursor wants to re-allocate resources, merge Graphite into the editor, or stagnate development and use it as a marketing/lead-gen channel, it will, for the business.
Anything said at time of acquisition isn’t trustworthy. Not because people are lying at the time (I don’t think you are!) but because these deals give up leverage and control explicitly. If they only wanted tighter integration, they could fund that via equity investment or staffing engineers (+/- paying Graphite to do the same.) Companies acquire for a reason and it isn’t to let the team + product stay independent
That is, a much smaller version of the Windsurf deal. Anyway, Cursor people seemed nice, but Supermaven was never built to last.
sweet summer child.
> "Will the plugin remain up? Yes!"
> https://supermaven.com/blog/sunsetting-supermaven
I usually prefer Gemini, but sometimes other tools catch bugs Gemini doesn't.
As someone who has never heard of Graphite, can anyone share their experience comparing it to any of the tools above?
My other question is whether stacked PRs are the endpoint of presenting changes or a waypoint to a bigger vision? I can't get past the idea that presenting changes as diffs in filesystem order is suboptimal, rather than as stories of what changed and why. Almost like literate programming.
1: https://graphiteapp.org/
Turns out that the name's been re-used by some sort of slop code review system. Smells like a feature rather than a product, so I guess they were lucky to be acquired while the market's still frothy.
This is something GitHub should be investing time in, it’s so frustrating.
The problem however lies in who or what does this rebasing in a multi-tenant environment. You sort of need a system that can do it automatically, or one that gives you control over the process. For example, jj can often get tripped up with branch rules in git since you might accidentally move a bookmark that isn't yours to move, so to speak.
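To make the restack concrete, here is roughly the manual git version of what Graphite's CLI or jj automate, for a two-branch stack whose bottom branch was just rebased. The branch names are hypothetical.

```sh
# part-2 was branched off part-1, and part-1 was just rebased onto the
# new main, so part-2 now sits on orphaned commits. Replay part-2's
# commits onto the new part-1; part-1@{1} is the reflog entry for the
# old tip of part-1 (i.e. the old base).
git rebase --onto part-1 part-1@{1} part-2

# Repeat for each branch further up the stack, then update the remote:
git push --force-with-lease origin part-2
```

The multi-tenant question above is exactly this loop: some system has to run it for every open stack whenever a base moves, with permission to force-push branches it doesn't own.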
Obviously the working tree should be a commit like any other! It just makes sense!
Given the VP of GitHub recently posted a screenshot of their new stacked-diff concept on X, I'd be amazed if the Graphite folks (whose product is adding this function) didn't get wind of it and look for a quick sell.
So, we'll see what it ends up like, but they have apparently already executed.
Is it market share? Because I don't know who has a bigger user base than Cursor.
A VSCode fork with AI, like 10 other competitors doing the same, including Microsoft with Copilot; then MCPs, VS Code's limitations, and the incumbent IDEs catching up. What do these AI VSCode forks have going for them? Why would I use one?
More specific models with faster tools is the better shovel. We are not there yet.
Graphite is a really complicated suite of software with many moving pieces and a couple more levels of abstraction than your typical B2B SaaS.
It would be incredibly challenging for any group of people to build a peer-level Graphite replacement any faster than it took Graphite to build Graphite, no matter what AI assistance you have.
If you've used Graphite as a customer for any reasonable period of time or as part of a bigger enterprise/org and still think our integration is easy... I think that's more a testament to the work we've done to hide how hard it is :)
Most of the "hard" problems we're solving (which I'm referencing in my original comment) are not visually present in the CLI or web application. It's actually subtle failure-states or unavailability that you would only see if I'm doing my job poorly.
I'm not talking about just our CLI tool or stacking, to clarify. I'm talking about our whole suite, especially the review page and merge queue.
What kind of enterprise SaaS features do you wish you had in Graphite? (We have multiple orgs with 100s-1,000s of engineers using us today!)
What I do not understand is this: if one high-level staff engineer with spare capacity can produce an 80% replacement, why not assign the required staff to complete the next 10% and bring it to production readiness? The final 10% is unnecessary features and excess outside the requirements.
Also, graphite isn't just "screenshots"; it's a pretty complicated product.
I hate the unrealistic AI claims about 100X output as much as anyone, but to be fair Cursor hasn't been pushing these claims. It's mostly me-too players and LinkedIn superstars pushing the crazy claims because they know triggering people is an easy ticket to more engagement.
The claims I've seen out of the Cursor team have been more subtle and backed by actual research, like their analysis of PR count and acceptance rate: https://cursor.com/blog/productivity
So I don't think Cursor would have ever claimed they could duplicate a SaaS company like Graphite with their tools. I can think of a few other companies who would make that claim while their CEO was on their latest podcast tour, though.
Then Cursor takes on GitHub for the control of the repo.
https://www.merriam-webster.com/dictionary/graphite
If that's not the concern, then what's the big deal?
The idea is to hook into Bitbucket PR webhooks so that whenever a PR is raised on any repo, Jenkins spins up an isolated job that acts as an automated code reviewer. That job would pull the base branch and the feature branch, compute the diff, and use that as input for an AI-based review step. The prompt would ask the reviewer to behave like a senior engineer or architect, follow common industry review standards, and return structured feedback - explicitly separating must-have issues from nice-to-have improvements.
The output would be generated as markdown and posted back to the PR, either as a comment or some attached artifact, so it’s visible alongside human review. The intent isn’t to replace human reviewers, but to catch obvious issues early and reduce review load.
What I’m unsure about is whether diff-only context is actually sufficient for meaningful reviews, or if this becomes misleading without deeper repo and architectural awareness. I’m also concerned about failure modes - for example, noisy or overconfident comments, review fatigue, or teams starting to trust automated feedback more than they should.
If you’ve tried something like this with Bitbucket/Jenkins, or think this is fundamentally a bad idea, I’d really like to hear why. I’m especially interested in practical lessons.
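For a rough shape of that pipeline, here is a minimal sketch of the Jenkins job body, assuming Bitbucket Cloud's v2 comments endpoint and some generic LLM CLI available on the agent; the `llm` invocation, environment variables, and prompt are placeholders, not a specific product's API:

```sh
#!/usr/bin/env bash
# Webhook-triggered review step: diff the PR against its base branch,
# ask a model for structured feedback, post the markdown back as a comment.
set -euo pipefail

BASE_BRANCH="${1:?base branch}"   # e.g. main
PR_ID="${2:?pull request id}"

git fetch origin "$BASE_BRANCH"
DIFF="$(git diff "origin/${BASE_BRANCH}...HEAD")"

# Placeholder model call: any LLM CLI that takes a system prompt and
# reads the diff on stdin would slot in here.
REVIEW="$(printf '%s\n' "$DIFF" | llm --system \
  'Review this diff as a senior engineer. List must-have issues first,
   then nice-to-have improvements. Respond in markdown.')"

# Bitbucket Cloud v2 API: attach the review as a PR comment.
jq -n --arg body "$REVIEW" '{content: {raw: $body}}' |
  curl -sS -u "${BB_USER}:${BB_APP_PASSWORD}" \
       -H 'Content-Type: application/json' \
       -d @- \
       "https://api.bitbucket.org/2.0/repositories/${BB_WORKSPACE}/${BB_REPO}/pullrequests/${PR_ID}/comments"
```

Everything interesting happens in the `REVIEW` line: a diff-only prompt is the ceiling on quality, which is the concern the replies below dig into.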
Then it can run `git diff` to get the diff, like you mentioned, but also query surrounding context, build stuff, run random stuff like `bazel query` to identify dependency chains, etc.
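In concrete terms, that extra context-gathering might be a handful of commands run alongside the diff; the file path and Bazel target here are invented for illustration:

```sh
git diff origin/main...HEAD                     # the change itself
git log --oneline -5 -- src/auth/session.py     # recent history of a touched file
bazel query 'rdeps(//..., //src/auth:session)'  # hypothetical target: who depends on the changed code
```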
They've put a ton of work into tuning it and it shows, the signal-to-noise ratio is excellent. I can't think of a single time it's left a comment on a PR that wasn't a legitimate issue.
The results of a diff-only review won't be very good. The good AI reviewers have ways to index your codebase and use tool searches to add more relevant context to the review prompt. Like some of them have definitely flagged legit bugs in review that were not apparent from the diff alone. And that makes a lot of sense because the best human reviewers tend to have a lot of knowledge about the codebase, like "you should use X helper function in Y file that already solves this".
You might want to look at existing products in this space (Cursor's Bugbot, Graphite's Reviewer FKA Diamond, Greptile, Coderabbit etc.). If you sign up for graphite and link a test github repo, you can see what the flow feels like for yourself.
There are many 1000s of engineers who already have an AI reviewer in their workflow. It comments as a bot in the same way dependabot would. I can't share practical lessons, but I can share that I find it to be practically pretty useful in my day-to-day experience.
As someone who is a huge IDE fan, I vastly prefer the experience from Codex CLI compared to having that built into my IDE, which I customize for my general purposes. The fact it's a fork of VSCode (or whatever) will make me never use it. I wonder if they bet wrong.
But that's just usability and preference. When the SOTA model makers give out tokens for substantially less than public API cost, how in the world is Cursor going to stay competitive? The moat just isn't there (in fact I would argue it's non-existent).
Now, would I prefer to use vs code with an extension instead? Yes, in the perfect world. But Cursor makes a better, more cohesive overall product through their vertical integration, and I just did the jump (it's easy to migrate) and can't go back.
I’ve tried picking up VSCode several times over the last 6-7 years but it never sticks for me, probably just preference for the tools I’m already used to.
Xcode’s AI integration has not gone well so far. I like being able to choose the best tool for that, rather than a lowest-common-denominator IDE+LLM combination.
I use VS Code, open a terminal with VS Code, run `claude` and keep the git diff UI open on the left sidebar, terminal at the bottom.
For backend/application code, I find it's instead about focusing on the planning experience, managing multiple agents, and reviewing generated artifacts+PRs. File browsers, source viewers, REPLs, etc. don't matter here, or at best I'll look at them occasionally while the agents do their thing.
I also like how Cursor is model-agnostic. I prefer Codex for first drafts (it's more precise and produces less code), Claude when less precision or planning is required, and other, faster models when possible.
Also, one of Cursor's best features is rollback. I know people have some funky ways to do it in CC with git worktrees etc., but it's built into Cursor.
I was pretty worried about Cursor's business until they launched their Composer 1 model, which is fine-tuned to work amazingly well in their IDE. It's significantly faster than using any other model, and it's clearly fine-tuned for the type of work people use Cursor for. They are also clearly charging a premium for it and making a healthy margin on it, but for how fast + good it's totally worth it.
Composer 1 + now eventually creating an AI native version of GitHub with Graphite, that's a serious business, with a much clearer picture to me how Cursor gets to serious profitability vs the AI labs.
Bake that into the workflow some other way.
What are we talking about? Autocomplete or GPT/Claude contender or...? What makes it so great?
Sir, Opus is the fast one of the bunch. Try GPT 5.2 high.
Which is what I was mentioning elsewhere. They build huge models with infinite money and distill them for certain tasks. Cursor doesn't have the funding, nor would it be wise, to try to replicate that.
I'm very pro-IDE. I've built up an entire collection of VSCode extensions and workflows for programming, building, and debugging embedded systems within VSCode. But I still prefer CLI-based AI (comparing the terminal agent to the IDE version).
> Composer 1
My bet is their model doesn't realistically compare to any of the frontier models. And even if it did, it would become outdated very quickly.
It seems somewhat clear (at least to me) that economies of scale heavily favor AI model development. Spend billions making massive models that are unusable due to cost and speed, and distill their knowledge + fine-tune them for stuff like tools. Generalists are better than specialists. You make one big model and produce 5 models that are SOTA in 5 different domains. Cursor can't do that realistically.
I've been using composer-1 in Cursor for a few weeks and also switching back and forth between it, Gemini Flash 3, Claude Opus 4.5, Claude Sonnet 4.5 and GPT 5.2.
And you're right, it's not comparable. It's about the same quality of code output as the aforementioned models but about 4x as fast, which enables a qualitatively different workflow for me: instead of spending a bunch of time waiting on the model, the model is waiting on me to catch up with its outputs. After using composer-1, it feels painful to switch back to other models.
I work in a large(ish) enterprise codebase. I spend a lot of time asking it questions about the codebase and then making small incremental changes. So it works very well for my particular workflow.
Other people use CLI and remote agents and that sort of thing and that's not really my workflow so other models might work better for other people.
The Copilot version of this is just fucking terrible at suggesting anything remotely useful about our codebase.
I've had reasonable success just sticking single giant functions into context and asking Sonnet 4.5 targeted questions (is anything in this function modifying X, does this function appear to be doing Y) as a shortcut for reading through the whole thing or scattershot text search.
When I try to give it a whole file I actually hit single-query token limits.
But that's very "opt-in" on my part, and different from how I understand Cursor to work.
And when I open it in the parent directory of a bunch of repos in our codebase, it can very quickly trace data flow through a bunch of different services and tell me all the files the data goes through.
Its context window is "only" 200k tokens. When it gets near 200k, it compresses the conversation and starts a new one... which mostly works, but sometimes it has a bit of amnesia if you have a really long-running conversation on something.
How does that work? Multiple agents grepping simultaneously?
LLMs are inherently single-threaded in how they ingest and produce info. So, as far as I can gather from the description, either it spawns sub-agents, or it has a tool dedicated for the job.
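A minimal sketch of the fan-out interpretation, with invented paths and pattern: background one search per repo, then merge the results before they go back into the model's context.

```sh
PATTERN='user_id'
for repo in services/*/; do
  # Each repo is searched in its own background job; output lines are
  # prefixed with the repo path so the merged results stay attributable.
  (cd "$repo" && git grep -n "$PATTERN" | sed "s|^|${repo}|") &
done
wait   # all searches finish before anything is handed back to the model
```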
I have absolutely no horse in this race, but I went from being a 100% Cursor user at the beginning of the year to one that basically uses agents for 90% of my work, and VS Code for the rest of it. The value proposition that Cursor gave me was not able to compete with what the basic Max subscription from Anthropic gave me, and VS Code is still a superior experience to Claude in the IDE space.
I think, though, that Cursor has all the potential to beat Microsoft at the IDE game if they focus on it. But I would say it's by no means a given that this is the default outcome.
Right now VSCode can do things that Cursor cannot, mostly because of the marketplace. If Cursor invests money into the actual IDE part of the product, I can see them eclipsing Microsoft at the game. They definitely have the momentum. But at least some of the folks I follow on Twitter who were die-hard Cursor users have moved back to VSCode for a variety of reasons over the last few months, so I'm not sure.
Microsoft itself though is currently kinda mismanaging the entire product range between GitHub, VS Code and copilot, so I would not be surprised if Cursor manages to capitalize on this.
I don't even like using the CLI, in fact I hate it, but I don't have to use it - Claude does it for me. I use it for everything: my Obsidian vault, working on Home Assistant, editing GSheets, and so much more.
This is a pretty dumb statistic in a vacuum. It was clearly 100% a few years ago before CLI-based development was even possible. The trend is very significant.
Imaginary situation: People are using claude instead of cursor, and you can run claude in a terminal, so this is going back to the days of not using an IDE for the people that do it.
Straw-man shakedown: Terminal-based development like vim and emacs is old and shit, and we moved away from that for a reason, and so (although totally unrelated) this means 'using claude' means going back to using a terminal for everything, which is similarly old and shit.
...but, obviously wrong.
- There's a claude desktop app that isn't done via the terminal.
- Agents use the terminal/powershell to do lots of things, even in cursor because that's the only way to automate some things, eg. running tests.
- Terminal environments like vim and emacs are IDEs. :face-palm:
- It literally makes no difference what interface you copy and paste your text prompt into and then walk off to get a coffee in agent mode.
Anyone who's seriously arguing that IDE integrated LLM chat windows somehow beat command line LLM chat windows is either a) religiously opposed to the terminal window, or b) hasn't actually tried using the tools.
...because, you'll find it makes no difference at all.
Why is Cursor getting involved with Graphite? Because the one place where it makes a difference is reviewing code, where CLI-based tools are just generally inferior to integrated code review tools.
You know what that is?
An acknowledgement that Cursor, in terms of code generation, has nothing that qualifies as the 'special sauce' to use it over any other tool.
So they're investing in another company that actually has a good, meaningful product.
I am betting it won't.
By the way, there are OS APIs; I have yet to write a CLI-driven agent as part of iPaaS deployments, which are basically SaaS IDEs.
That being said, surely the point here is about "agent driven development" vs "ai autocomplete". As they say, whether you type your command into a web window or a terminal window presumably doesn't change the flow that much.
Cursor has been both nice and awful. When it works, it has been good. However for a long time it would freeze on re-focus and recently an update broke my profile entirely on one machine so it wouldn't even launch anymore.
Kilocode with options of free models has been very nice so far.
Composer is extremely dumb compared to sonnet, let alone opus. I see no reason to use it. Yes, it's cheaper, but your time is not free.