Zedless: Zed Fork Focused on Privacy and Being Local-First
Original: Zedless: Zed fork focused on privacy and being local-first
Key topics
Regulars are buzzing about Zedless, a fork of the Zed editor that's laser-focused on privacy and local-first functionality. As commenters riff on Zed's origins and features, it becomes clear that the original Zed was designed with speed in mind, with some users praising its performance as on par with VS Code. However, not everyone is on the same page, with some disputing the notion that Zed is a direct competitor to Cursor, given its distinct history and GitHub Copilot's prior existence. Amidst the lively discussion, a tongue-in-cheek "Zed's dead" remark sparks a humorous exchange, adding to the thread's entertainment value.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 20m after posting
- Peak period: 140 comments in 0-12h
- Average per period: 26.7 comments
Based on 160 loaded comments
Key moments
- Story posted: Aug 20, 2025 at 2:47 PM EDT (5 months ago)
- First comment: Aug 20, 2025 at 3:06 PM EDT (20m after posting)
- Peak activity: 140 comments in the 0-12h window, the hottest stretch of the conversation
- Latest activity: Aug 25, 2025 at 4:54 PM EDT (4 months ago)
Want the full context? Read the primary article or dive into the live Hacker News thread:
https://mastodon.online/@nikitonsky/112146684329230663
It also didn't start out as a competitor to either.
https://zed.dev/
More like a spiritual successor to Atom, at least according to the people who started it, who came from that project.
Where I think it gets really interesting is that they are adding features to compete with Slack. Imagine a tight integration between Slack huddles and VS Code's collaborative editing. Since it's from scratch, it's much nicer than both. I'm really excited about it.
> Hey, you look to be doing business with someone who publicly advocates for harming others. Could you explain why and to what extent they are involved?
"doing business with someone whose views I dislike" is slightly downplaying the specific view here.
I don't think any of the evidence shown there demonstrates "advocacy for harming others". The narrative on the surely-unbiased-and-objective "genocide.vc" site used as a source there simply isn't supported by the Twitter screencaps it offers.
This also isn't at all politely asking "Could you explain why and to what extent they are involved?" It is explicitly stating that the evidenced level of involvement (i.e.: being a business partner of a company funding the project) is already (in the OP's opinion) beyond the pale. Furthermore, a rhetorical question is used to imply that this somehow deprives the Code of Conduct of meaning. Which is absurd, because the project Code of Conduct doesn't even apply to Sequoia Capital, never mind to Shaun Maguire.
Zed’s leadership does have to answer for why they invited people like that to become a part of Zed’s team.
Making a racist claim in a tweet is not advocacy for harming others.
* That this man actually advocates for harming others, versus advocating for things that the github contributor considers tantamount to harming others
* That his personal opinions constitute a reason to not do business with a company he is involved with
* That Zed is morally at fault if they do not agree that this man's personal opinions constitute a reason to not do business with said company
I find this kind of guilt by association to be detestable. If Zed wishes to do business with someone whom I personally would not do business with for moral reasons, that does not confer some kind of moral stain on them. Forgiveness is a virtue, not a vice. Not only that, but this github contributor is going for the nuclear option by invoking a public shaming ritual upon Zed. It's extremely toxic behavior, in my opinion.
In a perfect world, children don't get killed, but with that many levels of indirection, I don't think there is anything in this world that is not linked to some kind of genocide or other terrible things.
> Mr. Maguire’s post was immediately condemned across social media as Islamophobic. More than 1,000 technologists signed an open letter calling for him to be disciplined. Investors, founders and technologists have sent messages to the firm’s partners about Mr. Maguire’s behavior. His critics have continued pressuring Sequoia to deal with what they see as hate speech and other invective, while his supporters have said Mr. Maguire has the right to free speech.
https://archive.is/6VoyD#selection-725.0-729.327
Shaun Maguire is a partner, not just a simple hire, and Sequoia Capital had a chance to distance themselves from him and his views, but opted not to.
This is very different from your average developer using GitHub, most of them have no choice in the matter and were using GitHub long before Microsoft’s involvement in the Gaza Genocide became apparent. Zed’s team should have been fully aware of what kind of people they are partnering with. Like I said, it should have been very easy for them not to do so.
EDIT: Here is a summary of the “disagreeable views” in question: https://genocide.vc/meet-shaun-maguire/
At the end there is a simple request for Sequoia Capital, which Sequoia opted against:
> We call on Sequoia to condemn Shaun’s rhetoric and to immediately terminate his employment.
Emphasizing the nature of Mr. Maguire's opinion is not really doing anything to change the argument. Emphasizing what other people think about that opinion, even less so.
> Zed’s team should have been fully aware of what kind of people they are partnering with.
In my moral calculus, accepting money from someone who did something wrong, when that money was honestly obtained and has nothing to do with the act, does not make you culpable for anything. And as GP suggests, Microsoft's money appears to have a stronger tie to violence than Maguire's.
As an aside, despite the popularity of the trolley problem, people don't have a rational moral calculus. And moral behavior does not follow a sequential order from best to worst. Whatever your moral calculus may be, it has no effect on whether the Zed team's actions were a moral blunder... they were.
Furthermore, if accepting funding in this manner is considered a violation of their CoC, then surely the use of GitHub is even more of a violation. Why wasn't that brought up earlier instead of not at all?
And finally, ycombinator itself has members of its board who have publicly supported Israel. Why are you still using this site?
Turns out when you try to tar by association, everybody is guilty.
I am sure plenty of people here know these things, this is Y Combinator after all, but to me, the general idea in life is that getting money is hard, and stories that make it look easy are scams or extreme outliers.
Do you have an example of that? I can't find any contributors that are upset about this aspect of the funding
But I can re-paste the link here: https://github.com/zed-industries/zed/discussions/36604
But a fork focused on privacy and local-first operation only needs the absence of those to justify itself. It will have to cut some features that Zed is really proud of, so it's hard to even say this is a rugpull.
What, they're proud of the telemetry?
The fork claims to make everything opt-in and to not default to any specific vendor, and only to remove things that cannot be self-hosted. What proprietary features have to be cut that Zed people are really proud of?
https://github.com/zedless-editor/zedless?tab=readme-ov-file...
As far as I know, the Zed people have open sourced their collab server components (as AGPLv3), at least well enough to self-host. For example, https://github.com/zed-industries/zed/blob/main/docs/src/dev... -- AFAIK it's just https://github.com/livekit/livekit
The AI stuff will happily talk to self-hosted models, or OpenAI API lookalikes.
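To make "OpenAI API lookalike" concrete: llama.cpp's llama-server and vLLM both expose an OpenAI-compatible /v1/chat/completions endpoint, so any client that speaks that wire format can be pointed at localhost just by overriding the base URL. A minimal sketch, with the port and model name as placeholders rather than anything Zed-specific:

```python
# Talk to a self-hosted, OpenAI-compatible server, e.g. started with
#   llama-server -m model.gguf --port 8080    (llama.cpp)
#   vllm serve <model>                        (vLLM, default port 8000)
# BASE_URL and "local-model" are placeholders for illustration.
import requests

BASE_URL = "http://localhost:8080/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "local-model",  # llama.cpp ignores this; vLLM expects the served model name
        "messages": [
            {"role": "user", "content": "Summarize what a CLA is in one sentence."}
        ],
        "temperature": 0.2,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

An editor only needs to let you swap that base URL (and skip the vendor sign-in) to work against such a server, which is essentially what the fork is aiming for with self-hostable AI features.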
The FSF requires assignment so they can re-license the code to whatever new license THEY deem best.
Not the contributors.
A CLA should always be a warning.
tl;dr: If someone violates the GPL, the FSF can't sue them on your behalf unless they are a copyright holder.
(personally I don't release anything under virus licenses like the GPL but I don't think there's a nefarious purpose behind their CLA)
This seems to be factually untrue; you can assign specific rights under copyright (such as your right to sue and receive compensation for violations by third parties) without assigning the underlying copyright. Transfer of the power to relicense is not necessary for transfer of the power to sue.
(I see that I have received two downvotes for this in mere minutes, but no replies. I genuinely don't understand the basis for objecting to what I have to say here, and could not possibly understand it without a counterargument. What I'm saying seems straightforward and obvious to me; I wouldn't say it otherwise.)
With the exception of GPL derivatives, most popular licenses such as MIT already include provisions allowing you to relicense or create derivative works as desired. So even if you follow the supposed norm that without an explicit license agreement all open source contributions should be understood to be licensed by contributors under the same terms as the license of the project, this would still allow the project owners to “rug pull” (create a fork under another license) using those contributions.
But given that Zed appears to make their source available under the Apache 2.0 license, the GPL exception wouldn’t apply.
From my understanding, Zed is GPL-3.0-or-later. Most projects that involve a CLA and have rugpull potential are licensed as some GPL or AGPLv3, as those are the licenses that protect everyone's rights the strongest, and thanks to the CLA trap, the definition of "everyone" can be limited to just the company who created the project.
https://github.com/zed-industries/zed/blob/main/crates/zed/C...
I think the caveat to the claim that CLAs are only useful for rug pulls is still important, but this is a case where it is indeed a relevant thing to consider.
I don't like the term "rug-pull". It's misleading.
If you have an open source version of Zed today, you can keep it forever, even if future versions switch to closed source or some source-available only model.
You should show gratitude, not hostility.
https://github.com/zedless-editor/zed/graphs/contributors
It's fair because those people contributed to the codebase you're seeing. Someone can't fork a repo, make a couple commits, and then have GitHub show them as the sole contributor.
https://github.com/zedless-editor/zed/pulls?q=is%3Apr+is%3Ac...
> Since someone mentioned forking, I suppose I’ll use this opportunity to advertise my fork of Zed: https://github.com/zedless-editor/zed
> I’m gradually removing all the features I deem undesirable: telemetry, auto-updates, proprietary cloud-only AI integrations, reliance on node.js, auto-downloading of language servers, upsells, the sign-in button, etc. I’m also aiming to make some of the cloud-only features self-hostable where it makes sense, e.g. running Zeta edit predictions off of your own llama.cpp or vLLM instance. It’s currently good enough to be my main editor, though I tend to be a bit behind on updates since there is a lot of code churn and my way of modifying the codebase isn’t exactly ideal for avoiding merge conflicts. To that end I’m experimenting with using tree-sitter to automatically apply AST-level edits, which might end up becoming a tool that can build customizable “unshittified” versions of Zed.
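Not the author's actual tooling, but to illustrate the kind of tree-sitter-driven, AST-level edit being described: parse a Rust file and splice out expression statements whose call target starts with a given prefix. This sketch assumes the py-tree-sitter (0.22+ API) and tree-sitter-rust packages; strip_calls and the telemetry:: prefix are made-up names for illustration:

```python
# Sketch: drop statements like `telemetry::foo(...);` from Rust source by byte range.
# Assumes: pip install tree-sitter tree-sitter-rust   (py-tree-sitter 0.22+ API)
import tree_sitter_rust
from tree_sitter import Language, Parser

RUST = Language(tree_sitter_rust.language())
parser = Parser(RUST)

def strip_calls(source: bytes, prefix: bytes) -> bytes:
    tree = parser.parse(source)
    spans = []  # (start_byte, end_byte) of statements to delete

    def visit(node):
        if node.type == "expression_statement" and node.named_children:
            call = node.named_children[0]
            if call.type == "call_expression" and \
                    source[call.start_byte:call.end_byte].startswith(prefix):
                spans.append((node.start_byte, node.end_byte))
                return  # no need to descend into a statement we're removing
        for child in node.children:
            visit(child)

    visit(tree.root_node)
    # Splice spans out back to front so earlier byte offsets stay valid.
    for start, end in reversed(spans):
        source = source[:start] + source[end:]
    return source

print(strip_calls(b'fn main() { telemetry::report("start"); println!("hi"); }',
                  b"telemetry::").decode())
```

Whether an approach like this survives upstream code churn better than plain git merges is exactly the open question the author mentions.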
When did people start hating node and what do they have against it?
I assume that's where a lot of the hate comes from. Note that's not my opinion, just wondering if that might be why.
The fact that the tiny packages are so popular despite their triviality is, to me, solid evidence that simply documenting the warts does not in fact make everything fine.
And I say this as someone who is generally pro having more small-but-not-tiny packages (say, on the order of a few hundred to a few thousand lines) in the Python ecosystem.
Node and these NPM packages represent a large increase in attack surface for a relatively small benefit (namely, prettier is included in Zed so that Zed's settings.json is easier to read and edit) which makes me wonder whether Zed's devs care about security at all.
You're kidding, right?
WinterTC has only recently been chartered in order to make strides towards specifying a unified standard library for the JS ecosystem.
For node.js in general? The language isn't even considered good in the browser, for which it was invented. It is absolutely insane to then try to turn it into a standalone programming language. There are so many better options available, use one of them! Reusing a crappy tool just because it's what you know is a mark of very poor craftsmanship.
The fact of the matter is, I am not even using AI features much in my editor anymore. I've tried Copilot and friends over and over and it's just not _there_. It needs to be in a different location in the software development pipeline (Probably code reviews and RAG'ing up for documentation).
- I can kick out some money for a settings sync service.
- I can kick out some money to essentially "subscribe" for maintenance.
I don't personally think that an editor is going to return the kinds of ROI VCs look for. So.... yeah. I might be back to Emacs in a year with IntelliJ for powerful IDE needs....
It can also encourage laziness: If the AI reviewer didn't spot anything, it's easier to justify skimming the commit. Everyone says they won't do it, but it happens.
For anything AI related, having manual human review as the final step is key.
LLMs are fundamentally text generators, not verifiers.
They might spot some typos and stylistic discrepancies based on their corpus, but they do not reason. It’s just not what the basic building blocks of the architecture do.
In my experience you need to do a lot of coaxing and setting up guardrails to keep them even roughly on track. (And maybe the LLM companies will build this into the products they sell, but it’s demonstrably not there today)
In reality they work quite well for text and numeric (via tools) analysis, too. I've found them to be powerful tools for "linting" a codebase against adequately documented standards and architectural guidance, especially when given the use of type checkers, static analysis tools, etc.
Code quality improvements are the reason to do it, so *yes*. Of course, anyone using AI for analysis is probably leveraging AI for the "fix" part too (or at least I am).
Link to the ticket. Hopefully your team cares enough to write good tickets.
So if the problem is defined well in the ticket, does the code change actually address it?
For example for a bug fix. It can check the tests and see if the PR is testing the conditions that caused the bug. It can check the code changed to see if it fits the requirements.
I think the goal with AI for creative stuff should be to make things more efficient, not necessarily to replace. Whoever does the code review can get up to speed fast. I’ve been on teams where people would review a section of the code they aren’t too familiar with.
In this case if it saves them 30 minutes then great!
I don't mind the AI stuff. It's been nice when I used it, but I have a different workflow for those things right now. But all the stuff besides AI? It's freaking great.
I wouldn't sing their praises for being FOSS. All contributions are signed away under their CLA, which will allow them to pull the plug when their VCs come knocking and the FOSS angle is no longer convenient.
But like I said, it has been decades since I've seen any of their paperwork, and memory is fallible.
Please note that even GNU themselves require you to do this, see e.g. GNU Emacs which requires copyright assignment to the FSF when you submit patches. So there are legitimate reasons to do this other than being able to close the source later.
Some GNU projects require this; it’s up to the individual maintainers of each specific GNU project whether to require this or not. Many don’t.
So yes, I trust a non-profit, and a collective with nearly 50 years of history supporting copyleft, implicitly more than I will ever trust a company or project offering software while requiring that THEY be assigned the copyright rather than a license. Even your statement holds a difference; they require assignment to the FSF, not the project or its maintainers.
That’s just listening to history, not really a gotcha to me.
The way it otherwise works without a CLA is that you own the code you contributed to your repo, and I own the code I contributed to your repo, and since your code is open-source licensed to me, that gives me the ability to modify it and send you my changes, and since my code is open-source licensed to you, that gives you the ability to incorporate it into your repo. The list of copyright owners of an open source repo without a CLA is the list of committers. You couldn't relicense that because it includes my code and I didn't give you permission to. But a CLA makes my contribution your code, not my code.
[^0]: In this case, not literally. You instead grant them a proprietary free license, satisfying the 'because I didn't give you permission' part more directly.
But in my day to day I'm just writing pure Go, highly concurrent and performance-sensitive distributed systems, and AI is just so wrong on everything that actually matters that I have stopped using it.
But now that I’ve been using it for a while, I find it’s absolutely terrible with anything that deals with concurrency. It’s so bad that I’ve stopped using it for any code generation and am going to completely disable autocomplete.
I'm blown away.
I'm a very senior engineer. I have extremely high standards. I know a lot of technologies top to bottom. And I have immediately found it insanely helpful.
There are a few hugely valuable use-cases for me. The first is writing tests. Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that cover the behavior and all the verbose and annoying edge cases, and even at finding bugs in your implementation. It's goddamn near magic. That's not to say they're perfect; sometimes they do get confused and assume your implementation is correct when the test doesn't pass. Sometimes they do misunderstand. But the overall improvement for me has been enormous. They also generally write good tests. Refactoring never breaks the tests they've written unless an actually-visible behavior change has happened.
Second is trying to figure out the answer to really thorny problems. I'm extremely good at doing this, but agentic AI has made me faster. It can prototype approaches that I want to try faster than I can and we can see if the approach works extremely quickly. I might not use the code it wrote, but the ability to rapidly give four or five alternatives a go versus the one or two I would personally have time for is massively helpful. I've even had them find approaches I never would have considered that ended up being my clear favorite. They're not always better than me at choosing which one to go with (I often ask for their summarized recommendations), but the sheer speed in which they get them done is a godsend.
Finding the source of tricky bugs is one more case that they excel in. I can do this work too, but again, they're faster. They'll write multiple tests with debugging output that leads to the answer in barely more time than it takes to just run the tests. A bug that might take me an hour to track down can take them five minutes. Even for a really hard one, I can set them on the task while I go make coffee or take the dog for a walk. They'll figure it out while I'm gone.
Lastly, when I have some spare time, I love asking them what areas of a code base could use some love and what are the biggest reward-to-effort ratio wins. They are great at finding those places and helping me constantly make things just a little bit better, one place at a time.
Overall, it's like having an extremely eager and prolific junior assistant with an encyclopedic brain. You have to give them guidance, you have to take some of their work with a grain of salt, but used correctly they're insanely productive. And as a bonus, unlike a real human, you don't ever have to feel guilty about throwing away their work if it doesn't make the grade.
That's a red flag for me. Having a lot of tests usually means that your domain is fully known, so you can now specify it fully with tests. But in a lot of settings, the domain is a bunch of business rules that product decides on the fly. So you need to be pragmatic and only write tests against valuable workflows. Or you'll find yourself changing a line and having 100+ tests break.
This also is ignoring that ideally business logic is implemented as a combination of smaller, stabler components that can be independently unit tested.
Having a lot of tests is great until you need to refactor them. I would rather have a few e2e tests for smoke testing and valuable workflows, integration tests for business rules, and unit tests when it actually matters. As long as I can change implementation details without touching the tests that much.
Code is a liability. Unless it's code you don't have to deal with (assembly and compiler output), reducing the amount of code is a good strategy.
Not having this is very indicative of a spaghetti soup architecture. Hard pass.
Yes, you're right, AI cannot be a senior engineer with you. It can take a lot of the grunt work away though, which is still part of the job for many devs at all skill levels. Or it's useful for technologies you're not as well versed in. Or simply an inertia breaker if you're not feeling very motivated for getting to work.
Find what it's good for in your workflows and try it for that.
I've tried throwing LLMs at every part of the work I do and it's been entirely useless at everything beyond explaining new libraries or being a search engine. Any time it tries to write any code at all it's been entirely useless.
But then I see so many praising all it can do and how much work they get done with their agents and I'm just left confused.
AI tooling, in my experience:
- React/similar webdev where I "need" 1000 lines of boilerplate to do what jquery did in half a line 10 years ago: Perfect
- AbstractEnterpriseJavaFactorySingletonFactoryClassBuilder: Very helpful
- Powershell monstrosities where I "need" 1000 lines of Verb-Nouning to do what bash does in three lines: If you feed it a template that makes it stop hallucinating nonexisting Verb-Nouners, perfect
- Abstract algorithmic problems in any language: Eh, okay
- All the `foo,err=…;if err…` boilerplate in Golang: Decent
- Actually writing well-optimized business logic in any of those contexts: Forget about it
Since I spend 95% of my time writing tight business logic, it's mostly useless.
- generate new modules/classes in your projects
- integrate module A into module B or entire codebase A into codebase B?
- get someone's GitHub project up and running on your machine, or do you manually fiddle with cmakes and npms?
- convert an idea or plan.md or a paper into working code?
- Fix flakes, fix test<->code discrepancies or increase coverage etc
If you do all this manually, why?
AI doesn't really help me code vs me doing it myself.
AI is better doing other things...
I agree. For me the other things are non-business logic, build details, duplicate/bootstrap code that isn't exciting.
Usually it's not all that much effort to glance over some other project's documentation to figure out how to integrate it, and as to creating working code from an idea or plan... isn't that a big part of what "programming" is all about? I'm confused by the idea that suddenly we need machines to do that for us: at a practical level, that is literally what we do. And at a conceptual level, the process of trying to reify an idea into an actual working program is usually very valuable for iterating on one's plans, and identifying problems with one's mental model of whatever you're trying to write a program about (c.f. Naur's notions about theory building).
As to why one should do this manually (as opposed to letting the magic surprise box take a stab at it for you), a few answers come to mind:
1. I'm professionally and personally accountable for the code I write and what it does, and so I want to make sure I actually understand what it's doing. I would hate to have to tell a colleague or customer "no, I don't know why it did $HORRIBLE_THING, and it's because I didn't actually write the program that I gave you, the AI did!"
2. At a practical level, #1 means that I need to be able to be confident that I know what's going on in my code and that I can fix it when it breaks. Fiddling with cmakes and npms is part of how I become confident that I understand what I'm building well enough to deal with the inevitable problems that will occur down the road.
3. Along similar lines, I need to be able to say that what I'm producing isn't violating somebody's IP, and to know where everything came from.
4. I'd rather spend my time making things work right the first time, than endlessly mess around trying to find the right incantation to explain to the magic box what I want it to do in sufficient detail. That seems like more work than just writing it myself.
Now, I will certainly agree that there is a role for LLMs in coding: fancier auto-complete and refactoring tools are great, and I have also found Zed's inline LLM assistant mode helpful for very limited things (basically as a souped-up find and replace feature, though I should note that I've also seen it introduce spectacular and complicated-to-fix errors). But those are all about making me more efficient at interacting with code I've already written, not doing the main body of the work for me.
So that's my $0.02!
https://en.m.wikipedia.org/wiki/Zenless_Zone_Zero
If there’s a group of people painfully aware of telemetry and AI being pushed everywhere, it’s devs…