The Bloat of Edge-Case First Libraries
Posted 3 months ago · Active 3 months ago
Source: 43081j.com · Tech story · High profile
Debate: heated, mixed sentiment (80/100)
Key topics
JavaScript
Library Design
Type Checking
Software Development
The article discusses the bloat caused by edge-case-first libraries in JavaScript, sparking a debate about the trade-offs between defensive programming and code simplicity.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 107 comments in 0-12h
Avg per period: 20.9
Comment distribution: 146 data points
Based on 146 loaded comments
Key moments
1. Story posted: Sep 20, 2025 at 10:09 PM EDT (3 months ago)
2. First comment: Sep 20, 2025 at 11:20 PM EDT (1h after posting)
3. Peak activity: 107 comments in 0-12h (hottest window of the conversation)
4. Latest activity: Sep 27, 2025 at 10:02 PM EDT (3 months ago)
ID: 45319399 · Type: story · Last synced: 11/20/2025, 6:51:52 PM
It leads to less code, and more generalizable code. Maybe the caller does know what they're doing, who am I to say they can't.
What happens is you get an error. So you immediately know something is wrong.
JavaScript goes the extra mile to avoid throwing errors.
So you have `3 > "2"` succeeding in JavaScript, while it's an exception in Python. This behavior leads to hard-to-catch bugs in the former.
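To make the contrast concrete, here is a small runnable sketch. The comparison is routed through `any` because TypeScript itself would reject the literal form at compile time:

```typescript
// The comparison TypeScript would reject at compile time (error TS2365)
// still "succeeds" in plain JavaScript via implicit coercion. Routing the
// value through `any` reproduces the JS behavior:
const s: any = "2";
console.log(3 > s); // true: "2" is coerced to the number 2
// Python instead raises: TypeError: '>' not supported between
// instances of 'int' and 'str'
```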
Standard operators and methods have runtime type checks in Python, and that's what the examples in the article are replicating.
Javascript was a prototype that was supposed to be refined and cleaned. Then management said "we will ship it now" and it was shipped as it was.
At the time, they thought it would be used for small things, minor snippets on web pages. Webapps as we have them were not a thing yet.
Don’t most languages have the same problem? Even C or Rust have escape hatches that let you override compile-time type checking and pass whatever gibberish you want to a function. How is Typescript any worse?
I’ll pull off the cleanest, nicest generic constraints on some component that infers everything perfectly and doesn’t allow invalid inputs, just for a coworker to throw @ts-nocheck on the entire file. It hurts
In my experience nobody writes Typescript without checking the types. Unlike Python for example where it's not uncommon to have broken types. (And it's way more of a mess in general.)
So? Language is irrelevant here. What matters is the contract, the agreement that operates through time and space between caller and callee on behavior and expectations. Contracts exist outside of the type system. All a type system can do is enforce a subset of the contractual requirements.
For example, log(0) is undefined. If you write a function to compute log(x), you can (and should) document the function as having unspecified behavior when x is zero.
That's the contract.
The contract holds whether the function is spelled function log(x) { ... } or function log(x: number) { ... }.
In the second example, we have the type system enforce the "is a number" rule, but at least in TypeScript, you can't use the type system to check that x is positive. In some languages you can, but even in those languages, programs have invariants the type system can't express.
Likewise, you could define a version of log that accepted, e.g., spelled numbers, like log("three"). You can express this weird contract with static or dynamic typing.
The nasty thing is that if early in the log library's history, before the author wrote down the contract or even had TypeScript, someone had written log("three") and gotten a type error, the log library's author might have "fixed" the "bug" by checking for spelled numbers as inputs. That would have been an expansion of the function's contract.
The log library author probably wasn't thinking about the long-term downsides of this expansion. He could have, even when our log() was pure JavaScript, just said "The contract is: the input is a positive number. Your code is buggy."
But our hypothetical log library author didn't have the experience or good judgement to say no, so we ended up with the kind of "corner case bloat" the article's author describes.
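The "contract expansion" the comment describes can be sketched in a few lines. The `SPELLED` table and both function names here are my own, for illustration only:

```typescript
// The original contract: x is a positive number; behavior is
// unspecified for x <= 0.
function log(x: number): number {
  return Math.log(x);
}

// The "expanded" contract the comment warns about: quietly accepting
// spelled-out numbers like "three". Same math, bloated contract that
// the author is now stuck supporting forever.
const SPELLED: Record<string, number> = { one: 1, two: 2, three: 3 };
function logExpanded(x: number | string): number {
  const n = typeof x === "string" ? SPELLED[x] : x;
  return Math.log(n);
}
```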
export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}
That is just adding an extra jump and entry on the callstack where you could just have done:
Math.min(Math.max(value, min), max);
Where you need it.
(The example used was probably just for illustration though)
i disagree with this particular example - it's actually a good use case for being a utility maths function in a library.
I mean, the very `min()` and `max()` functions you used in the illustration could also have been used as examples, if you apply the same logic!
In terms of DRY and cleanliness, yes, "clamp" sounds awesome. But in terms of practicality and quick understanding? Math.min with Math.max is such a frequent pattern that the brain immediately understands what the code is trying to do, as opposed to the exact meaning of "clamp", at least for me.
It may be just me though, clamp is not a word I frequently hear in English, and I see it in code sometimes, but not frequently enough to consciously register what it does as fast in my brain. Despite seeing code for probably 8h+ a day for the past 10 years.
If it was in std lib of JS, maybe then it would be better.
Like there is some sort of balance line for how frequently something is used and whether it's REALLY worth abstracting. If it's in the stdlib, use it; if it's not, use it only if it's truly used pretty much everywhere.
I'm also coming from being a Go programmer where the native library is very barebones so a lot of custom utility methods like this are inevitable.
C++: https://en.cppreference.com/w/cpp/algorithm/clamp.html
.NET/C#: https://learn.microsoft.com/en-us/dotnet/api/system.math.cla...
Java: https://docs.oracle.com/en/java/javase/24/docs/api/java.base...
Ruby: https://ruby-doc.org/core-2.4.0/Comparable.html#method-i-cla...
Rust: https://doc.rust-lang.org/stable/std/primitive.f64.html#meth...
Are there 1000 people running 100 CI pipelines/day where downloads aren’t cached?
Do they cache to shared storage and worry about concurrency? Do they only have local storage in various differing states?
For example, this is a completely empty rust crate, which I started years ago and never released, and it's still downloaded multiple times per day... https://crates.io/crates/ruble
Stuff like Docker that defaults to downloading things from Internet repos is kinda scary. There isn’t always a clear dividing line between the thing as a “tool” and the thing as a “repo”.
At least with “git”, I know that I’m not going to end up cloning an artifact from an Internet repo just because I made a small typo…
https://github.com/rust-lang/crater
https://github.com/deviceplug/btleplug
fwiw the downloads are probably automated crates.io indexers.
(Pip used to be right up there next to setuptools. It's still pretty popular — https://pypistats.org/packages/pip — but uv has clearly changed the equation.)
That is a really weird definition of clamp. I would expect to consistently get either min, max, or an error.
I bet Rust doesn't use your weird idea... Yep. Fully sane implementation:
https://docs.rs/num/latest/num/fn.clamp.html
Oh, look, somebody just re-discovered static typing.
The article also skipped over the following related topics:
Amusingly, nowhere in the original article is it mentioned that the article is only about JavaScript.
Languages should have compile time strong typing for at least the machine types: integers, floats, characters, strings, and booleans. If user defined types are handled as an "any" type resolved at run time, performance is OK, because there's enough overhead dealing with user defined structures that the run time check won't kill performance.
(This is why Python needs NumPy to get decent numeric performance.)
It seems like the point of the article was to not do that though, contrary to my own opinion, and I just wonder why...
It seems like your approach is just trying to ignore programmer errors, which is rarely a good idea.
It is a special type of madness if we're supporting a reliance on implementation specific failure modes of the clamp function when someone calls it with incoherent arguments.
But it makes it harder for the developer to recognize that the code is buggy. More feedback to the developer allows them to write better code, with fewer bugs.
Your argument could be made in the same way to claim that static typing is bad; because the caller should be calling it with the right types of values in the first place.
But the feedback is unrelated to the bug, the bug here is that the programmer doesn't understand what the word "clamp" means and is trying to use the function in an incorrect way. Randomly throwing an exception on around 50% of intervals doesn't help them understand that, and the other 50% of the time they're still coding wrong and not getting any feedback. I'm not against the clamp function doing whatever if people want it to, it can make coffee and cook pancakes when we call it for all I care. But if it just clamps that is probably better. It isn't a bug if I call clamp and don't get pancakes. It also isn't a bug if I call clamp and it remains silent on the fact that one argument is larger than another one.
Feedback has to be relevant. It'd be like having a type system that blocks an argument that isn't set to a value. If the programmer provides code that has bugs, it'll give them lots of feedback. But the bug and the error won't be related, and it is effectively noise.
To ensure only valid intervals are supported at the type system level, the function could perhaps be redefined as:
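The original snippet isn't shown in the thread; one possible sketch uses a hypothetical `Interval` class whose constructor validates the invariant once, so clamp itself never has to re-check it:

```typescript
// Hypothetical Interval type: constructing one validates min <= max
// a single time, up front.
class Interval {
  constructor(readonly min: number, readonly max: number) {
    if (min > max) throw new RangeError("Interval requires min <= max");
  }
}

// clamp can now assume the bounds are coherent.
function clamp(value: number, range: Interval): number {
  return Math.min(Math.max(value, range.min), range.max);
}
```

Invalid intervals now fail at construction, so by the time clamp runs, a > b simply can't happen.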
Of course, you need to deal with the distinction between closed and open intervals. Clamping really only makes sense for closed ones.

> An interval where a > b is usually a programming error.
If you want it to be, sure. Anything can be a programming error if the library author feels like it. We may as well put all sorts of constraints on clamp, it is probably an error if the caller uses a large number or a negative too. It is still bad design in a theoretical sense - the clamp function throws an error despite there being an obvious non-error return value. It isn't hard to meaningfully clamp 2 between 4 and 3.
Personally I just write JS like a typed language. I follow all the same rules as I would in Java or C# or whatever. It's not a perfect solution and I still don't like JS but it works.
To detect this at compile time, you would need either min and max to be known at compile time, or a type system that supports value-dependent types. None of the popular languages supports this. (My language named 'Bau', which is not popular of course, supports value-dependent types to avoid array-bounds checks.)
If you're going to be smug, at least do it when you're on the right side of the technology. The problem the article describes has nothing to do with the degree of static typing a language might have. You can make narrow, tight, clean interfaces in dynamic languages; you can make sprawling and unfocused ones in statically-typed languages.
The problem is one of mindset --- an insufficient appreciation, as I'd put it, of the beauty of parsimony. Nothing to do with any specific type system or language.
Untyped languages force developers into a tradeoff between readability and safety that exists only to a much lesser degree in typed languages. Different authors in the ecosystem will make that tradeoff in a different way.
Some libraries take a `TryFrom<RealType>` as input, instead of RealType. Their return value is now polluted with the Error type of the potential failure.
This is a pain to work with when you’re passing the exact type, since you basically need to handle an unreachable error case.
Functions should take the raw types which they need, and leave conversion to the call site.
What I find annoying about the pattern is that it hinders API exploration through intellisense ("okay, it seems I need a XY, how do I get one of them"), because the TryFrom (sort of) obscures all the types that would be valid. This problem isn't exclusive to Rust though, very OO APIs that only have a base class in the signature, but really expect some concrete implementation are similarly annoying.
Of course you can look up "who implements X"; it's just an inconvenient extra step.
And there is merit to APIs designed like this - stuff like Axum in Rust would be significantly more annoying to use if you had to convert everything by hand. Though often this kind of design feels like a band-aid for the lack of union types in the language.
I remember writing Python and Perl where functions largely just assumed you passed them the correct types (with isolated exceptions where it may have made sense), years before JavaScript was anything but a browser language for little functionality snippets. It's a dynamic-language antipattern for every function to constantly and defensively check all of its input for type correctness, because despite being written for nominal "correctness", it's fragile, inconsistent between definitions, often wrong anyhow, slow, and complicates every function it touches, to the point that it essentially eliminates the advantages of a dynamic language in the first place.
Dynamic languages have to move some responsibility for being called with correct arguments to the caller, because checking the correctness of the arguments correctly is difficult and at times simply impossible. If the function is called with the wrong arguments and blows up, you need to be blaming the caller, not the called function.
I observe that in general this seems to be something that requires a certain degree of programming maturity to internalize: Just because the compiler or stack trace says the problem is on line 123 of program file X, does not mean the problem is actually there or that the correct fix will go there.
```
export function clamp(value: number | string, min: number | string, max: number | string): number {
  if (typeof value === 'string' && Number.isNaN(Number(value))) {
    throw new Error('value must be a number or a number-like string');
  }
  if (typeof min === 'string' && Number.isNaN(Number(min))) {
    throw new Error('min must be a number or a number-like string');
  }
  if (typeof max === 'string' && Number.isNaN(Number(max))) {
    throw new Error('max must be a number or a number-like string');
  }
  if (Number(min) > Number(max)) {
    throw new Error('min must be less than or equal to max');
  }
  return Math.min(Math.max(Number(value), Number(min)), Number(max));
}
```
It's not about performance, it's about separating concerns: the caller handles the not-a-number case, so the callee doesn't have to.
Then you add optional runtime checking of the contract, preferably in a different build mode (or a different version, e.g. "my-lib1.2-debug"), to get sensible diagnostics quickly in tests and canary deployments. The checks are redundant by definition. Defensive programming.
Some contracts are more difficult to express, though. The worst kind involve sequences of behavior, e.g. "start() must have been called," or "must have been obtained from a previous call to 'allocate()' on the same object."
Even value parameter constraints can be tricky. "The input must be sorted."
I haven't studied type systems, but I like the idea of values picking up attributes as they pass through functions. Imagine a "sort" function that bestows its output with the "sorted" type attribute. But then what happens when you append to that value? Is it still sorted?
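One way to sketch this "values picking up attributes" idea in TypeScript is a branded type; the pattern below is illustrative, not something from the thread:

```typescript
// A "branded" array type: the brand exists only at compile time.
type Sorted<T> = T[] & { readonly __brand: "sorted" };

// Only a function like this can bestow the brand.
function sortNumbers(xs: readonly number[]): Sorted<number> {
  return [...xs].sort((a, b) => a - b) as Sorted<number>;
}

// binarySearch demands proof of sortedness in its signature.
function binarySearch(xs: Sorted<number>, target: number): number {
  let lo = 0, hi = xs.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (xs[mid] === target) return mid;
    if (xs[mid] < target) lo = mid + 1; else hi = mid - 1;
  }
  return -1;
}
```

This also answers the append question, at least mechanically: appending with ordinary array operations yields a plain number[], so the brand is lost and the checker forces you to re-sort (or cast) before calling binarySearch again.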
At some point it's convenient to be permissive with the parameter's static type and to constrain the caller with a contract.
To me it sounds like you are using a poorly-designed language
Not language specific. A C example that comes to mind is checking every non-nullable pointer parameter in every function for NULL and reporting an error instead of just letting the program crash. (Please, don't do that: just let contract violations produce crashes.)
The article's author describes a problem of experience and spirit, not a language wart.
The nullable pointer you’ve given is another great example of a language wart. And one that’s often highlighted as a specific criticism of Go.
If your language requires you to manually track function contracts then that’s a problem with the language itself. Not the developer.
"
Note the consistent user interface and error reportage. Ed is generous enough to flag errors, yet prudent enough not to overwhelm the novice with verbosity."Yes, having a library named is-number looks very stupid until you look at the state of javascript in 2014. Look at issue is-number#1[0] if you’re interested.
The library is-arrayish exists because array-like objects are actually a thing in javascript.
About is-regexp: the author mentions that their library supports cross-realm values because it’s useful, but then says that it’s an edge case that most libraries don’t need to care about? The whole reason that the library exists is to cover the edge cases. If not needed, yes the consumers of the library would have just been using the simple instanceof RegExp check.
If you’re arguing that there are consumers of those libraries that are wrong, the post might at least make sense – but the case presented here is that the writer of the clamp function is stupid, not the other way around. Having a function that determines whether a string is a number is not stupid; it’s importing that function and creating a clamp function with the wrong type signature that’s stupid. Especially when it’s 2025 and TypeScript is universal.
All of the libraries that are mentioned are like 10 years old at this point. I don’t think we have to beat the dead horse one more time.
[0]: https://github.com/jonschlinkert/is-number/issues/1
So are you saying the author updated the implementation and added deprecation warning?))
And the author is not "stupid". More like "strategic". A popular npm package = money. This is why everybody falls over themselves to write leftpads and stuff.
As far as I understand, the author wrote the clamp function by themselves. There is no clamp library that the author is arguing against.
In fact, it seems the library ‘clamp’ in npm is a library that does exactly what the author wants – no validation, assuming that the value is a number, and just returning a number.[0]
[0]: https://github.com/hughsk/clamp/blob/master/index.js
At some point someone went "let's decouple as much as we can! A library should be just a single function!" and we've spent a lot of time since then showing why that's quite a bad idea
The lack of types perhaps inspires some of these functions masquerading as libraries, but they're often trivial checks that you could (and probably should) do inline in your less pointless functions, if needed.
JS exploded in popularity while Internet Explorer was still around, before the ES6 cleanup of the language. JS had lots of gotchas where seemingly obvious code didn't work correctly, and devs weren't keeping up with all the dumb hacks needed for even basic things. Working around IE6's problems used to be a whole profession (quirksmode.org).
Browsers didn't have support for JS modules yet, and HTTP/1.1 couldn't handle many small files, so devs needed a way to "bundle" their JS anyway. Node.js happened to have a solution, while also enabling code reuse between client and server, and the micro libraries saved developers from having to deal with JS engine differences and memorize all the quirks.
Splitting it into a library per quirk seems like an unforced error in that context.
One could have excused it as being a way to keep the code size down, but if you use npm you also usually use a build step which could drop the unused parts, so it doesn't really hold water
The Apache runtime isn't sent over the network every time it's used, but JS in the browser is.
The JS ecosystem has several fix-everything-at-once libraries. However, JS is very dynamic, so even when "compiled", it's very hard to remove dead code. JS compiling JS is also much slower than C compiling C. Both of these factors favor tiny libraries.
Abstractions in JS have a higher cost. Even trivial wrappers add overhead before JIT kicks in, and even then very few things can be optimized out, and many abstractions even prevent JIT from working well. It's much cheaper to patch a few gaps only where they're needed than to add a foundational abstraction layer for the whole app.
I'm not sure the dependency tree madness actually translates to smaller code in the end either, given the bloat in the average web app... but to be fair it would be perfectly plausible that javascript developers opted for micro-libraries motivated by performance, even if it wasn't the effect of that decision.
As in, "how do I check if a string starts with a shebang" should result in some code being pasted in your editor, not in a new dependency. There is obviously a complexity threshold where the dependency becomes the better choice, but wherever it is it should be way higher than this.
I mean, people have been writing these simple functions over and over for decades when they just needed one or two things that importing a library wasn’t needed. I wasn’t aware there was a gap to be filled.
Senior engineer: "you shouldn't have to use llm to do simple things" Their boss: "hey I need you to write this thing in Go for performance reasons" Senior engineer: "you shouldn't make me learn new languages, js is plenty performant for this task just install a new server. why are they making us use all these new tools? everything is new again, blah blah excuses excuses"
vs.
Boss: "hey can you write this in go" Vibe Coder: "write this loop to ignore the first semicolon in this response", and then proceeds to move on with life.
It seems like when people evaluate the merits of what a code generation AI agent should and shouldn't do, they leave out a lot of their implicit assumptions about how a project should work and their own theory of mind around coding.
You don't need an "is it a number" package in Java, because the function cannot be passed a string; the language actually has a type system. All this slop has come about because meme developers refuse to accept the truth: programming languages have proper syntax for a reason.
A lot of times you have something in Java which could be
and I’m going to argue that often that’s good enough, but a lot of people think it isn’t, so they might put 20 lines of code above that line to head off that exception, or they might put it in a try-catch block [1] to respond to it or wrap it. Those unhappy paths bloat your code the same way as the guards in the blog post, make it unclear how the happy path works at a glance (a place for bugs to hide in the happy path), and obviously create opportunities for bugs in the unhappy path.

I’ll argue that (1) error handling is a function of the application as a whole or of a “unit of work” (e.g. database transaction), so unless you’re at the top-level method of a unit of work you should “let it throw”, (2) try-finally is preferable to try-catch when it comes to cleaning up and tearing down, and (3) it is fair to wrap and rethrow exceptions if you want the unit-of-work manager to have more context to know what to do, or to provide a better error message.
In theory, you could spend time writing and maintaining error preempting and handling code and get better error messages. However that code isn’t always correct. One funny thing about AI agent programming for me is that if I ask Junie to code me some tests it works harder at testing unhappy paths than I would.
Yes, static typing would help with some of this, but very much not all.
Starting to think that JS based front end is a trashfire top to bottom. Not this framework vs that framework...the entire thing
Yeah. A bit.
Have a little faith that they validate the input and give the right min and max before they call your function, or let them fail so they learn.
Functions near the core can assume more and code less defensively than those near the edge.
Libraries shouldn’t be edge-case first? What? /What kind/ of libraries? You really can’t imagine a library where it makes sense to code very defensively?
Also the post doesn’t even use “edge case” correctly. Definitionally you can’t have an edge case before you’ve defined the acceptable input. In their clamp function the edge case would be something like a large number on the border of graduating from long to BigInt or something. Edge case does not mean invalid input. Edge case is a rare/extreme input that is valid.
It'd be a lot more productive to ask the authors of the (normally bigger, user-facing) libraries that you use nowadays to avoid using these libraries with deep and outdated dependencies. e.g. I stopped using Storybook because of this, but I'm happy to see that they've been working on cleaning up their dependency tree and nowadays it's much better:
https://storybook.js.org/blog/storybook-bloat-fixed/
So now some people just do it by habit.
Honestly the only place having code like that I see being useful is at the controller level inside an api or other untrusted input validation for an application.
If JS’s dynamic types worked like Ruby’s or Python’s then you wouldn’t think to build a library that tells you if something is a regex.
I don't understand this. Python lacks static typing, but it also doesn't have a need for the kind of type-checking that is-number and is-array perform — it's how Python's dynamic typing already works. What JavaScript is missing is strong typing: i.e. it's "fault tolerant" by performing implicit conversions everywhere and minimizing the chance of control flow ever being disrupted by an exception, presumably intended to maximize the chance that the user sees some kind of DOM update rather than the page ceasing to function. Python sidesteps these issues by just raising `TypeError` instead, and by making proper consideration of exception handling an expected part of the programmer's job. The boilerplate in TFA's opening example is essentially what the Python runtime already does.
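A few of JavaScript's "fault tolerant" implicit conversions, with values typed `any` so the TypeScript checker doesn't intervene (a sketch of the behavior the comment describes):

```typescript
// JavaScript quietly converts rather than disrupting control flow:
const n: any = "5";
console.log(n * 2); // 10: the string is coerced to a number
console.log(1 + n); // "15": the number is coerced to a string
// Python instead raises TypeError for 1 + "5"; the boilerplate in the
// article's opening example is what Python's runtime already does.
```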
Similarly, Python has a large standard library, but it's irrelevant to the problems described. The Python standard library doesn't solve the problem of determining whether something is "a number" or "an array"; trying to use it that way does. The Python standard library doesn't solve the problem of determining whether something is "a regex"; if anything, JS' functionality here is more native because you don't need the standard library to create a regex.
As for the "pascalcase" example, the standard library isn't really helping Python there, either. There are multiple third-party libraries for string-casing operations in Python; there are some things you can do first-party but they're almost all methods of the built-in string type rather than part of the standard library. Aside from some useful constants like "a string with all the ASCII letters in it", the `string` standard library implements two failed attempts at string formatting routines that weren't popular even when they did solve real problems, and a "capwords" function with subtly different semantics from the "title" method of strings. Which has existed since at least 2.0.
(Granted, Python documentation has separate "library" and "language" categories, and classifies the built-in types and their methods as "library". But when people talk about the importance of a "large standard library" in a language, I generally understand that they're thinking of code that has to be explicitly pulled in.)