Key Takeaways
"""Unlike some other projects, Ghostty does not use the issue tracker for discussion or feature requests. Instead, we use GitHub discussions for that. Once a discussion reaches a point where a well-understood, actionable item is identified, it is moved to the issue tracker. This pattern makes it easier for maintainers or contributors to find issues to work on since every issue is ready to be worked on.
This approach is based on years of experience maintaining open source projects and observing that 80-90% of what users think are bugs are either misunderstandings, environmental problems, or configuration errors by the users themselves.[...]"""
If you spend more time closing issues than creating them manually from discussions, the math adds up.
As a maintainer, if you want to be able to tell real issues from non-issue discussions, you still have to read them (triage). That's what takes the time.
I don't see how transforming a discussion into an issue is less effort than the other way around. Both are a click.
GitHub's issues and discussions seem like the same feature to me (almost identical UI with different naming).
The only potential benefit I can see is that discussions have a top-level upvote count.
IMO almost all issues are real, including "non-issue" (I think you mean non-bug) "discussions." For example, it is meaningful when discussions point to missing documentation, and products like "a terminal" are only complete when their features are authored and also fully documented or discoverable (so intuitive as to not require documentation).
99% of the audience of GitHub projects are other developers, not non-programmer end users. It is almost always wrong to think of issues as not real. Every open source maintainer who gets hung up on wanting a category of issues narrower than the ones needed to make their product succeed winds up delegating their product development to a team of professionals and loses control (for an example that I know well: ComfyUI).
The math is even better if you just ignore all issues and close them after two weeks for being stale!
Wish this was /s but it isn't.
but has not graduated to issue worthy status
For me, only Rust compilation necessitates more RAM. But I assume devs just do RAM-heavy dev work on a server over SSH.
And if you are lucky, the content will still be there the next time.
I'm honestly amazed OP is managing 30 GB regularly. I'd wager it's a tall tale. It's sort of perfect troll bait on a forum, because you end up with people sounding nuts defending web browser RAM usage against the common position that browsers are RAM hogs.
I do this mostly for blog posts etc I might not get around to reading for weeks or months from now, and don't want them to disappear in the meantime.
Everything else is either a pinned tab (<5) or a bookmark (themselves shared when necessary on e.g. a Slack canvas so the whole team has easy access, not just me).
I often see colleagues with many browser windows of many tabs each open struggling to find what they need, and ponder their methods.
Anyway, just strikes me as odd that the browsers have the functionality right there, it's just not used to its full potential.
Then there's all the basic stuff — email and calendar are tabs in my browser, not standalone applications. Ditto the ticket I'm working on.
I think the real issue is that browsers need some lightweight "sleep" mechanism that sits somewhere between a live tab and just keeping the source in cache.
It’s kind of humorous that everyone interpreted the comment as complaining about Chrome. For all I know, it’s justified in using that much memory, or it’s the crappy websites I’m required to use for work with absurdly large heaps.
I really just meant that at least for work I need more than 8GB of RAM.
In the SWE world, dev servers are a luxury that you don't get in most companies, and most people use their laptops as workstations. Depending on your workflow, you might well have a bunch of VMs/containers running.
Even outside of SWE world, people have plenty of use for more than 8GiB of RAM. Large Photoshop documents with loads of layers, a DAW with a bazillion plugins and samples, anything involving 4k video are all workloads that would struggle running on such a small RAM allowance.
Of course, being developer laptops, they all come with 16 gigs of RAM. In contrast, the remote VMs where we do all of the actual work are limited to 4GiB unless we get manager and IT approval for more.
A real shame, as running local Docker/Podman for Postgres was fine when you could just run the commands.
Large corp gotta large corp?
My guess is that providing the ability to pull containers means you can run code that they haven't explicitly given permission for, and the laptop scanning tools can't hijack them?
It doesn’t work when you’re developing on a large database, since it won’t fit. Database (and data warehouse) development has been held back from modern practices just for this reason.
Our company just went with the "server in the basement" approach, with every employee having a user account (no VM or Docker separation, just normal file permissions). Sure, it sounds like the 80s, but it works really well. Remote access via WireGuard, uptime similar to or better than the cloud; sharing the same beefy CPUs works well and gives good utilization. I only wish we had more GPUs.
In enterprise, we get shared servers with constant connection issues, performance problems, and full disks.
Alternatively we can use Windows VMs in Azure, with network attached storage where "git log" can take a full minute. And that's apparently the strategic solution.
Not to mention that in Azure 8 CPUs gets you four physical cores of a previous gen server CPU. To anyone working with 4 CPUs or 2 physical cores: good luck.
This assumption is wrong. I compile stuff directly on my laptop, and so do a lot of other people.
Also, even if nobody ran compilers locally, there is still stuff like rustc, clangd, etc. which take lots of RAM.
It's a life of luxury, I tell you.
Sure, it is bloated, but it is the stack we have for local development.
Why do you assume that? It's nice to do things locally sometimes, maybe even while having a browser open. It doesn't take much to go over 8 GB.
I want to clarify though that there isn't a known widespread "memory leak issue." You didn't say "widespread", but just in case that is taken by anyone else. :) To clarify, there are a few challenges here:
1. The report at hand seems to affect a very limited number of users (given the lack of reports and information about them). There are lots of X meme posts about Ghostty in the macOS "Force Close" window using a massive amount of RAM but that isn't directly useful because that window also reports all the RAM _child processes_ are using (e.g. if you run a command in your shell that consumes 100 GB of RAM, macOS reports it as Ghostty using 100 GB of RAM). And the window by itself also doesn't tell us what you were doing in Ghostty. It farms good engagement, though.
2. We've run Ghostty on Linux under Valgrind in a variety of configurations (the full GUI), we run all of Ghostty's unit tests under Valgrind in CI for every commit, and we've run Ghostty on macOS with the Xcode Instruments leak checker in a variety of configurations. All of these runs come back fully clean; we haven't yet been able to find any leaks. So the "easy" tools can't find it (a minimal sketch of this kind of run is included after this list).
3. Following points 1 and 2, no maintainer familiar with the codebase has ever seen leaky behavior. Some of us run a build of Ghostty, working full time in a terminal, for weeks, and memory is stable.
4. Our Discord has ~30K users, and within it, we only have one active user who periodically gets a large memory issue. They haven't been able to narrow this down to any specific reproduction and they aren't familiar enough with the codebase to debug it themselves, unfortunately. They're trying!
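(A minimal sketch of the kind of leak-check run mentioned in point 2, assuming a built binary and Valgrind on PATH; the wrapper script, binary path, and flag selection are illustrative, not Ghostty's actual CI configuration.)

```python
# Hedged sketch: wrap a binary in Valgrind so a CI job fails when leaks are found.
# The binary path and arguments are placeholders, not Ghostty's real setup.
import subprocess
import sys

def run_leak_check(binary, args):
    """Run `binary` under Valgrind and return its exit code (1 if leaks are reported)."""
    cmd = [
        "valgrind",
        "--leak-check=full",                          # report each leaked allocation
        "--errors-for-leak-kinds=definite,indirect",  # treat real leaks as errors
        "--error-exitcode=1",                         # surface leaks via the exit code
        binary,
        *args,
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(run_leak_check("./zig-out/bin/ghostty", sys.argv[1:]))
```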
To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users. That's why the discussion is open and we're soliciting input. I even spent about an hour today on the latest feedback (posted earlier today) trying to use that information to narrow it down. No dice, yet.
If anyone has more info, we'd love to find this. :)
> To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users
In this case it seems you believe a bug exists, but it isn't sufficiently well-understood and actionable to graduate to the bug tracker.
But the threshold of well-understood and actionable is fuzzy and subjective. Most bugs, in my experience, start with some amount of investigative work, and are actionable in the sense that some concrete steps would further the investigation, but full understanding is not achieved until very late in the game, around the time I am prototyping a fix.
Similarly the line between bug and feature request is often unclear. If the product breaks in specific configuration X, is it a bug, or a request to add support for configuration X?
I find it easier to have a single place for issue discussion at all stages of understanding or actionability, so that we don't have to worry about distinctions like this that feel a bit arbitrary.
Both are valid, and it makes sense to be clear about what the team's view is.
I think the confusion of bug tracking with work tracking comes out of the bad old days where we didn't write tests and we shipped large globs of changes all at once. In that world, people spent months putting bugs in, so it makes sense they'd need a database to track them all after the release. Bugs were the majority of the work.
But I think a team with good practices that ships early and often can spend a lot more time on adding value. In which case, jamming everything into a jumped-up bug tracker is the wrong approach.
For bug reports, always using issues for everything also requires you to decide how long an issue should stay open before it is closed out when it can't be reproduced (if you're trying to keep a clean issue list). That can lead to discussion fragmentation: new reports keep coming in that need to be filed, but not just anyone can manage issue states, so a new issue gets created.
From a practical standpoint, they have 40 pages of open discussion in the project and 6 pages of open issues, so I get where they're coming from. The GH issue tracker is less than stellar.
Memory usage is not really difficult to debug usually, tbh.
I reported the issue in discussions some time ago, but had no reaction/response.
I was able to reproduce the leak consistently. Finally I took all the reports I had made, the Ghostty sources, and Claude Code, and tried to fix it.
For the first couple of weeks there were no leaks at all; now it has started again, but at only about a tenth of what it was before.
https://github.com/ghostty-org/ghostty/discussions/9786 There are some logs and a Claude Code review md file that might be useful.
Hope it will help someone investigate further.
Your second link looks like an X user trying to start a flamewar.
The current "issues" system works fine for most small-medium projects and even many large projects. Any project who looks for a more "serious" solution would have its own Jira/bug tracker system, and you can find plenty of them.
Discussing things could definitely also happen in the issue tracker, and some <Actionable> tag could be used to mark issues that are ready to be worked on. But I suspect that Discussions are better suited for, well, discussions, while the facilities of the issue tracker can then be used by maintainers / contributors.
I find this separation pretty smart.
How is this not trivially solved via a "ready-to-be-worked-on" tag?
Is it really that hard to open a discussion?
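To be fair, a tag-based workflow is at least easy to query mechanically. A rough sketch using GitHub's REST API, where the "ready-to-be-worked-on" label is just the hypothetical name from the question above, not a label Ghostty actually uses:

```python
# Hedged sketch: list open issues carrying a given label via GitHub's REST API.
# Repo and label name are illustrative assumptions.
import requests

def labeled_issues(owner, repo, label, token=None):
    """Yield (number, title) for open issues that carry `label`."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/issues",
        headers=headers,
        params={"labels": label, "state": "open", "per_page": 100},
    )
    resp.raise_for_status()
    for issue in resp.json():
        if "pull_request" not in issue:  # the issues endpoint also returns PRs
            yield issue["number"], issue["title"]

for number, title in labeled_issues("ghostty-org", "ghostty", "ready-to-be-worked-on"):
    print(f"#{number}: {title}")
```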
Very often in those infamous bugs that have been open for years, with hundreds of "me too" comments, there are gems with workarounds or reproductions, unfortunately hidden somewhere under 4 iterations of "click to load 8 more comments", making them difficult to find. This generates even more "anyone know how to solve this?" spam, further adding to the difficulty of finding the good post.
Technically, messages are messages. This approach is no more than grouping messages into different forums. It could also all be under discussions, with a sub-forum for issues, one for features, one for other topics, etc., and then there would need to be a permission system for each sub-forum.
So all this does is create two spheres of access, one for users and one for developers. And that's the point.
In the end it's really a matter of taste and preference.
Compared to that, this system has been a huge success. It has its own problems, but it's directionally better.
(also, what is "huge success" in methods of organizing issues?)
bookmark: (and if your browser supports shortcuts, it can be as easy to open as remembering to type a single char)
https://github.com/ghostty-org/ghostty/issues?q=is%3Aissue%2...
You're technically correct, but practically it doesn't work.
And I am confidently saying "doesn't work" here because I tried this process for more than 5 years on projects with a similar issue and contributor scale. There's a handful of people in this thread throwing around words like "just" or "trivially", or implying how obvious a simple solution looks, without perhaps accepting that I've been triaging and working on GH issues in large open projects full-time non-stop for the last 15 years. I've tried it, I promise!
Here is an abridged set of reasons, just because it quickly turns into a very big thing:
1. The barrier to mislabel is too low. There is no confirmation to remove labels. There is no email notification on label change. We've had "accepted" issues accidentally lose their accepted label and enter the quagmire of thousands of unconfirmed issues. It's lost. In this new approach, every issue is critical and you can't do this. You can accidentally _close_ it, but that sends an email notification. This sounds dumb, but it happens, usually due to keyboard shortcuts.
2. The psychological impact of the "open issue count" has real consequences despite being meaningless on its own. People will see a project with 1K+ issues and think "oh this is a buggy hell hole" when 950 of those issues are untriaged, unaccepted, 3rd party issues, etc.
My practical experience with #2 was Terraform ~5 years ago (when I last worked on it, can't speak to the current state). We had something like 1,800 open issues and someone on Twitter decided to farm engagement and dunk on it, using that as an example of how broken it was. It forced me to call for a feature freeze and a full all-hands triage. We ultimately discovered there were ~5 crashing bugs, ~50 or so core bugs, ~100 bugs in providers we control, and the rest were 3rd party provider bugs (which we accepted in our issue tracker at the time) or unaccepted/undesigned features or unconfirmed bugs (no reproduction).
With the new approach, these are far enough away that it gets rid of this issue completely.
3. The back-and-forth process of confirming a bug or designing and accepting a feature produces a lot of noise that is difficult to hide within an issue. You can definitely update the original post but then there might be 100 comments below that you have to individually hide or write tooling to hide, because ongoing status update discussions may still be valuable.
This is also particularly relevant in today's era of AI where well written GH issues and curated comments produce excellent context for an agent to plan and execute. But, if you don't like AI you can ignore that and the point is still valid... for people!
By separating out the design + accept into two separate posts, it _forces_ you to rewrite the core post and shifts the discussion from design to progress. I've found it much cleaner and I'm very happy about this.
4. Flat threads don't work well for issue discussion. You even see this in traditional OSS that uses mailing lists (see LKML): they form a tree of responses! Issues are flat. It's annoying. Discussions are threaded! And that is very helpful for chasing down separate chains of thought, or reproductions, or possibly unrelated issues or topics.
Once an issue is accepted, the flat threads work _fine_. I'd still prefer a tree, but it's a much smaller issue. :)
-----------
Okay, I'm going to stop there. I hope you can empathize a bit that there are some practical issues, and that this is something I've both thought about critically and tried for over a decade.
This is completely a failure of GitHub's product suite and as I noted in another comment I'm not _happy_ I have to do this. I don't think discussions are _good_. They're just the _least bad_ right now, unfortunately.
I definitely think splitting discussion and issues is a good idea for that reason alone.
Fully agree with this; as a beginner in the space I get nervous when I see a project with a thousand issues open since 2018.
An additional benefit of that is that a user whose discussion leads to a real issue being created will feel like they're genuinely being listened to. That creates a good customer experience, which is good for your brand's reputation.
Whereas if it goes via a Discussion first, the back and forth happens elsewhere.
Arguably a separate issue could still do this, but it being a discussion sets the expectation better.
> Arguably a separate issue could still do this, but it being a discussion sets the expectation better.
People do that all the time in bug trackers.
Somehow the distinction of just adding a tag / using filters doesn't communicate the cultural/process distinction in the same way.
1. Ask a high-quality LLM in research mode to gather empirical statistics on how different GitHub projects are set up.
2. Put human eyes on the data you find, look for patterns, see what is interesting. (I recommend reading on approaches that promote transparency about the order in which you collect data, form hypotheses, etc.)
3. Put on your anthropologist hat and do open-ended interviews with project maintainers.
And so on.
If it's someone else's project, they have full authority to decide what is and isn't an issue. With large enough projects, you're going to have enough bad actors, people who don't read error messages, and just downright crazy people. Throw in people using AI for dubious purposes like CVE inflation, and it's even worse.
It's simply a great idea. The mindset should be 'understand what's happening', not 'this is the software's fault'.
The discussion area also serves as a convenient explanation/exploration of the surrounding issues that is easy to find. It reduces the maintainer's workload and should be the default.
For instance, to give a specific example: Andy Maleh maintains a project called glimmer. I thought glimmer was a fine idea ("one GUI to rule them all"), but interacting with Andy is like interacting with some strange AI, really. He makes assumptions about people who use the bug tracker ("you are not a good open source person!"), for instance, and then goes on to speculate about them being "bad actors" as a consequence of his own "analysis", which is typically flawed and incomplete. That totally surprised me.

Then I had a look on Reddit, and he amassed a grand total of about 42 karma after 3 years. Now, Reddit has tons of issues, crazy moderators and so forth, but when you reach only 42 karma after 3 years, with about 500 comments and 300 new threads created (all basically him trying to advertise glimmer, so self-promotion), then something is strange. Before I recently deleted my Reddit account, after 2 years I had 65,000 karma. Again, the whole karma system is totally pointless on Reddit, but if you only amass 42 karma, then something may be wrong. I then realised that he managed to antagonize more people with his attitude and opinions; you can see this in his "blog" posts like this:
https://businessdiscriminationreport.blogspot.com/
This is just one example of many more. In the rails/ruby ecosystem you have some strange people; DHH too with his "Europe will perish" articles, guess he is now a US TechBro (see https://world.hey.com/dhh/europe-is-weak-and-delusional-but-... and other low quality articles).
The thing here, as a TL;DR, is that people have different opinions, and summarizing this as "rude, entitled, and aggressive" really doesn't do it justice. Andy classified me as entitled; I don't think I am. I just have a completely different opinion from his (and often others') assessment of the situation, in particular when it is written text. A lot of intent is simply lost in text; that is why people sometimes write e.g. "/s" to denote sarcasm. I always found this very strange and I don't think I have ever used /s as an annotation in any of my texts. I do use smileys sometimes, though.
> Yeah but a good issue tracker should be able to help you filter that stuff out.
Agreed. This highlights GitHub's issue management system being inadequate.
(Note: I'm the creator/lead of Ghostty)
Unfortunately there is no such magic bullet for trawling through bug reports from users, but pushing more work out to the reporter can be reasonably effective at avoiding that kind of time wasting. Require that the reporters communicate responsively, that they test things promptly, that they provide reproducers and exact recipes for reproduction. Ask that they run git bisect / creduce / debug options / etc. Proactively close out bugs or mark them appropriately if reporters don't do the work.
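To make the "run git bisect" ask concrete, here is a rough sketch of the kind of reproducer script a reporter could hand to `git bisect run`. The build command and failure check are hypothetical placeholders; only the exit-code convention (0 = good, 1-127 = bad, 125 = skip) is what bisect actually relies on:

```python
# Hedged sketch of a reproducer for `git bisect run`. Build step and failure
# check are placeholders; only the exit codes matter to bisect.
import subprocess
import sys

def build():
    # Hypothetical build command; substitute the project's real build system.
    return subprocess.run(["make", "-j"], capture_output=True).returncode == 0

def reproduces_bug():
    # Hypothetical check: run the freshly built binary and look for the failure.
    result = subprocess.run(["./app", "--self-test"], capture_output=True, text=True)
    return result.returncode != 0 or "ERROR" in result.stdout

if __name__ == "__main__":
    if not build():
        sys.exit(125)  # this commit doesn't build: tell bisect to skip it
    sys.exit(1 if reproduces_bug() else 0)  # 1 = bug present (bad), 0 = good

# Usage: git bisect start <bad> <good>; git bisect run python3 repro.py
```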
Downside is that "Facebookization" created a trend where people expect everything to be obvious and achievable in a minimal number of clicks, without configuring anything.
Now "LLMization" will push the trend forward. If I can make a video with Sora by typing what I want in the box, why would I need to click around or type some arcane configuration for a tool?
I don't think it is bad in general; it is only bad for specialist software that you cannot use without deeper understanding, but the expectation is still there.
Commenting on things is from a list of features (to be distinguished from UX/UI) I talked about.
That's just a stupid limitation, and not even a technical one. You could happily send GBs over email. You could also easily filter allowed attachment size by sender on the recipient side, because by the time the attachment size is transmitted, both pieces of information have already been provided.
it is a UI designed to be hard to use
1) UI = a clearly documented way to configure all features and make the software work exactly how you want.
2) UI = load a web page and try to do the thing you wanted to do (in this case communicate with some specific people).
FB is clearly terrible at 1 but pretty alright at 2.
Try to do basic stuff like set up an event with an arbitrary string as a location (eg "my house") -- it can't be done.
Try navigating backwards. Depends on the page but I'd say this fails to take you back to the previous screen at least half the time (sometimes skipping a page, sometimes dumping you on a blank page).
And the infinite scrolling lags like nowhere else, every few posts and ads it stalls out while loading the next page (instead of loading the next several pages at once).
I'm guessing advertisers never actually look at the site to see if it's worth spending as much as they do.
Then people expect accounting software to be just: log in, click one or two buttons.
IME, people cannot even articulate what they want when they know what they want, let alone when they don't even understand what they want in the first place.
As far as I'm aware, most large open GitHub projects use tags for that kind of classification. Would you consider that too clunky?
It all stems from the fact that all issues are in this one large pool rather than there being a completely separate list with already vetted stuff that nobody else can write into.
> With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.
Translation: sure, you can make this work by piling automation on top. But that doesn't make it a good system to begin with, and it won't really produce a robust result either. I'd really rather have a better foundation to start with.