
The fate of "small" open source

83 points
46 comments

Mood: thoughtful

Sentiment: mixed

Category: tech

Key topics: open source, sustainability, software development

Debate intensity: 60/100

The author reflects on the challenges faced by small open-source projects and their potential fate in the current ecosystem.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 9h

Peak period: 24 comments (Day 1)

Avg / period: 13.5

Comment distribution: 27 data points (based on 27 loaded comments)

Key moments

  1. Story posted: 11/16/2025, 7:21:13 PM (2d ago)
  2. First comment: 11/17/2025, 4:03:19 AM (9h after posting)
  3. Peak activity: 24 comments in Day 1, the hottest window of the conversation
  4. Latest activity: 11/18/2025, 1:09:42 PM (20h ago)


Discussion (46 comments)
Showing 27 comments of 46
Waterluvian
2d ago
1 reply
AI offering the solution for a small problem that probably doesn’t deserve yet another dependency suggests to me that there’s a middle ground that we’ve failed to sufficiently cover: how to socialize code snippets that you’re meant to just inline into your project. Stack Overflow is probably the closest we’ve gotten to a generalized solution and it doesn’t exactly feel like a good one.

I came across this once before in the form of a react hooks library that had no dependency to install. It was just a website and when you found the hook you wanted you were meant to paste it into your project.
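The pattern the comment describes can be sketched with a hypothetical utility: something small enough that you would paste it into your project rather than install a package (this is an illustrative example, not a snippet from the hooks library mentioned above).

```javascript
// A typical "inline it yourself" snippet: group an array of items
// by a key function. Small enough to copy into a codebase and adapt,
// rather than pull in as a dependency.
function groupBy(items, keyFn) {
  const groups = {};
  for (const item of items) {
    const key = keyFn(item);
    (groups[key] ??= []).push(item); // create the bucket on first use
  }
  return groups;
}
```

Once the snippet lives in your codebase, you own it: you can rename it, trim it, or specialize it, which is exactly the trade-off against installing a maintained package.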

jcgl
1d ago
1 reply
GitHub gists are probably the best technical option. They come with versioning (I think), and have comment threads.
Waterluvian
1d ago
And a decent API as well.

I think what’s missing is some amount of organization to make them more discoverable.

p0w3n3d
2d ago
4 replies

  Given that some 80% of developers are now using AI in their regular work, blob-util is almost certainly the kind of thing that most developers would just happily have an LLM generate for them. Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
Letting an LLM write utility code is a sword that cuts both ways. You often end up with throw-away code that is unproven and requires maintenance. There's no guarantee that the blob-util, toString, or whatever the AI generates won't fail on some edge cases. That's why, e.g., Java has Apache Commons, which is perceived as an industry standard nowadays.
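For context, here is the kind of one-screen helper being discussed: a base64/bytes conversion of the sort blob-util provides. This is a sketch of that style of utility, not blob-util's actual API, and its comment notes an edge case of exactly the kind the commenter is warning about.

```javascript
// Convert a base64 string to a Uint8Array and back, using the
// atob/btoa globals (available in browsers and Node 16+).
// Edge case an LLM draft can miss: building the binary string
// character by character is O(n^2)-ish and slow for large inputs.
function base64ToBytes(base64) {
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

function bytesToBase64(bytes) {
  let binary = "";
  for (const b of bytes) {
    binary += String.fromCharCode(b);
  }
  return btoa(binary);
}
```

The code looks trivially correct, which is the point of the thread: whether it fails on large blobs or unusual encodings only shows up once it has been used in many contexts.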
jaapz
2d ago
1 reply
Exactly. Assuming blob-util is a utility library that has been in use for quite a while by many people in many different contexts, hasn't seen many changes, and just "works", IMHO the risk of weird bugs is a lot larger with LLM-generated code. Code generated by LLMs often has the problem that it seems logical but contains weird bugs that aren't immediately obvious.
p0w3n3d
1d ago
Agreed. The code available in e.g. Apache commons is not perfect, but it's seasoned and has the edge cases documented
safety1st
1d ago
1 reply
This mostly sounds like a good thing to me from a utilitarian standpoint. Getting all your utility classes from somewhere like npm and creating dependencies on 20 different people and organizations who may or may not maintain their software has been a security nightmare with many highly public examples. If a LLM writes a utility class for me then my supply chain is smaller, meaning less surface area to attack plus I probably benefit from some form of security through obscurity for whatever non-trivial amount that's worth. "Downside" is I don't have some rando, probably unpaid labor out there updating a piece of my app for me...
philipov
1d ago
1 reply
Your supply chain has superficially fewer suppliers, but it isn't smaller. Counting suppliers that way treats heterogeneous entities as equivalent: ChatGPT is a bigger attack surface than 20 individuals.
safety1st
1d ago
1 reply
Your supply chain is smaller in the sense that every person or organization you obtain code from is similar to a vendor, just an unpaid one. They are a separate entity your business depends on.

If we replace code written by 20 of those organizations with code written by ChatGPT, we've gone from 20 code "vendors" we don't know much about who have no formal agreement to speak of with us, to 1 tools vendor which we can even make an enterprise agreement with and all that jazz.

So whatever else the outcome may be, from this perspective it reduces uncertainty to quit using random npm packages and generate your utility classes with ChatGPT. I think this is the conclusion many businesses may reach.

dbalatero
1d ago
What enterprise agreement would you make with OpenAI that would make you feel better about the supply chain? It seems to me you just get stochastic output that may or may not be battle-tested code, without guarantees either way.
m463
1d ago
Actually, this sounds like something someone who wrote a really small utility that is surprisingly (to him) used by a lot of people would say.

But the benefit he provided is significantly more than he realizes/acknowledges.

stevage
1d ago
2 replies
>Many software developers will argue that asking a candidate to reverse a binary tree is pointless

Is "reversing a binary tree" actually a thing, or is this a cute kind of "rocket surgery" phrase intentionally mixing reversing a linked list and searching a binary tree?

nitnelave
1d ago
I think it's a reference to the Google interview problem that the author of Homebrew (IIRC) failed. They were quite upset about it, since they had proved their worth through their famous open-source contributions but got rejected in a LeetCode-like interview.
stingraycharles
1d ago
It’s probably a mistake on the author’s end, but the point comes across anyway.

I can only imagine "reversing" a binary tree would mean changing the “<“ comparison in nodes to “>”, which would be a useless exercise.
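For reference, the interview question being alluded to is usually phrased as "invert a binary tree", which means mirroring it by swapping each node's children. A minimal sketch (hypothetical node shape with `value`, `left`, `right` fields):

```javascript
// Mirror a binary tree in place by recursively swapping
// each node's left and right children.
function invert(node) {
  if (node === null) return null;
  const left = node.left;
  node.left = invert(node.right);
  node.right = invert(left);
  return node;
}
```

This is closer to reversing the left/right order of every level than to flipping a comparison operator, which is why the garbled phrasing still evokes the famous problem.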

willtemperley
2d ago
LLMs give us the opportunity to work on more complex projects and gain fuller understanding of the problem space and concepts. Or create tons of slop. Take your pick.
mb2100
23h ago
I totally agree that the goal should be teaching. But I do wonder which code someone is more likely to actually read and understand: the LLM-generated one in their own codebase, or the one hidden in some npm package they installed to just ship things. I honestly don’t know. Careless devs probably won’t read either, as long as it seems to work.

I’ve been trying to encourage forking my libraries, or have people just copy them into their codebase and adapt them, e.g. https://github.com/mastrojs/mastro/ (especially the “extension” libs.) But it’s an uphill battle against the culture of convenience over understanding.

layer8
1d ago
This seems to be specific to the JavaScript ecosystem, and to some extent orthogonal to AI coding. Micro-libraries with trivial-ish functionality are generally more difficult to justify to add as a dependency. I’m sure AI coding agents will eventually learn as well that there’s a balance to strike between implementing stuff yourself and the use of libraries.
hamdouni
2d ago
The main point isn't about dependencies but about losing the mindset of learning from small domain problems.
strogonoff
2d ago
Chances are, even if you deliberately and strategically choose to work on an OSS project that you are positively sure an LLM can’t just spit out on command, it will be capable of doing so by the time you are close to completion. In that sense, one has to either not be inclined to ask “what’s the point” or have a bit of a gambling mentality in order to work on anything substantial.

That’s not automatically a problem, however. The problem is that even if you do come up with a really cool idea that an LLM is not capable of autocompleting, and you license it under a copyleft license (to ensure the project survives and volunteer contributors’ work is not adopted and extinguished by some commercial product), it will get incorporated into training datasets regardless of the licensing. Thereafter the LLM will be capable of spitting it out, and its large corporate operator will be able to monetise your code, allowing anyone with money to build a commercial product based on it.

lsferreira42
20h ago
Now, way more than before, I value my feed collection, because I know all these blogs are human-written. Some of them get one post a year, but it is always good quality and human-written.
NoSalt
1d ago
Bleak
eviks
2d ago
> I suppose some people would see this as progress: fewer dependencies, more robust code (even if it’s a bit more verbose), quicker turnaround time than the old “search npm, find a package, read the docs, install it” approach.

Why would randomized code be more robust? Also, how is searching/reading the docs slower than checking the correctness of a randomized function?

jasonjmcghee
1d ago
> it’s a future where we prize instant answers over teaching and understanding

It doesn't have to be this way. Coupled with verification (to mitigate hallucination), LLMs can help so much with personal education.

tanin
2d ago
Right now I tend to not use an external library unless the code would be large e.g. http server and/or the library is extremely popular.

Otherwise, writing it myself is much better. It's more customizable and often much smaller in size. This is because a library has to generalize, and that comes with bloat.

Using AI to help write it is great, because I should understand the code anyway, whether AI writes it, I write it, or it comes from an external library.

One example recently is that I built a virtual list myself. The code is much smaller and simpler compared to other popular libraries. But of course it's not as generalizable.
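The core of a minimal virtual list can be sketched in a few lines: given the scroll position and a fixed row height, compute which rows need to be rendered. This is a sketch of the general windowing idea, not the commenter's actual implementation; the function name and the `overscan` parameter are illustrative choices.

```javascript
// Compute the inclusive range of rows to render for a fixed-height
// virtual list. `overscan` renders a few extra rows above and below
// the viewport so fast scrolling doesn't show blank gaps.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}
```

Everything else in a virtual list (absolute positioning, a spacer sized to `totalRows * rowHeight`, re-running this on scroll) hangs off this calculation, which is why a hand-rolled version can stay so much smaller than a general-purpose library that must also handle variable heights, grids, and horizontal scrolling.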

kunley
2d ago
The level of AI apologism in this article is depressing.
dominicrose
1d ago
Node.js's core library is purposefully minimal. Purposeful or not, it's still an issue. Whether you add many small things to your project using an LLM or npm, it still requires work, and the annoying thing is that you'll have to do it for every new Node.js project.

Node.js is very good for IO and it has decent performance even for CPU-intensive work considering it's a dynamic language, but it would sure be nice to have a rich core library like Ruby or Clojure has.

The fact that ClojureScript can do it proves that it's doable even for front-end javascript (using advanced optimisations).

19 more comments available on Hacker News

ID: 45947639 | Type: story | Last synced: 11/16/2025, 9:42:59 PM
