The Uselessness of "fast" and "slow" in Programming
Key topics
The article argues that the terms 'fast' and 'slow' are meaningless in programming without context; the discussion explores the nuances of performance optimization and how it relates to human perception and use case.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 6d after posting
- Peak period: 48 comments in the 156-168h window
- Avg / period: 22
- Based on 66 loaded comments
Key moments
- Story posted: Nov 11, 2025 at 8:41 PM EST (about 2 months ago)
- First comment: Nov 18, 2025 at 6:46 AM EST (6d after posting)
- Peak activity: 48 comments in the 156-168h window (hottest stretch of the conversation)
- Latest activity: Nov 18, 2025 at 9:32 PM EST (about 2 months ago)
Yes, because there's usually context. To use his cgo example, cgo is slow compared to C->C and Go->Go function calls.
In web development, arguing about Go->Go vs CGo->Go call times is probably inconsequential to most requests, and if you want to speed up your application I would look elsewhere for performance gains first.
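To make that concrete, here is a minimal, hypothetical timing sketch of CGo-call overhead versus a plain Go call (the trivial C function and the iteration count are made up; absolute numbers vary a lot by platform and Go version):

```go
// Hypothetical microbenchmark: time N trivial cgo calls against N pure-Go calls.
// Build and run with cgo enabled (the default on most platforms): go run main.go
package main

/*
static int add(int a, int b) { return a + b; }
*/
import "C"

import (
	"fmt"
	"time"
)

// addGo is the pure-Go equivalent of the C function above.
func addGo(a, b int) int { return a + b }

func main() {
	const n = 5_000_000

	start := time.Now()
	for i := 0; i < n; i++ {
		_ = C.add(1, 2) // every call crosses the Go->C boundary
	}
	cgoTotal := time.Since(start)

	start = time.Now()
	sum := 0
	for i := 0; i < n; i++ {
		sum += addGo(1, 2) // plain Go call, almost certainly inlined
	}
	goTotal := time.Since(start)

	fmt.Printf("cgo:     %v total, %v per call\n", cgoTotal, cgoTotal/n)
	fmt.Printf("pure Go: %v total, %v per call (sum=%d)\n", goTotal, goTotal/n, sum)
}
```

The point is not the exact ratio, only that "slow" here means some nanoseconds of call overhead, which may or may not matter depending on how often you cross the boundary.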
For good, an example that perhaps I should lift into the essay itself is probably more useful than an explanation. A year or two back, there was some article about the details of CGo or something like that. In the comments there was someone being quite a jerk about how much faster and better Python's C integration was. This person made several comments and was doing the whole "reply to literally everyone who disagrees" routine, insulting the Go designers, etc., until finally someone put together the obvious microbenchmark and lo, Go was something like 25% faster than Python. Not blazingly faster, but being faster at all rather wrecked the thesis. Nor did it particularly matter that "this was a microbenchmark and those don't prove anything": the belief was clearly that CGo was something like an order of magnitude slower, if not more, so even a single microbenchmark where Go won was enough to make the point.
While the being-a-jerk bit was uncalled for, I don't blame the poster for the original belief. Go programmers refer to CGo as "slow". Python programmers refer to their C integration as "fast". From those characterizations, it is a plainly obvious conclusion that Python's integration is faster than Go's.
Only someone being far, far more careful with their uses of "fast" and "slow" than I am used to seeing in programming discussions would pick up on the mismatch in contexts there. As such, I don't think that's a particularly good context. People who use it do not seem to have a generally unified scale of "fast" and "slow" that is even internally consistent, but rather a mishmash of relatively inconsistent time scales (and that's not "relatively inconsistent" as in "sort of inconsistent" but "inconsistent relative to each other"), thus making "fast" and "slow" observably useless for comparing between any of them.
For useful, I would submit to you that unless you are one of the rare exceptions that we read about with those occasional posts where someone digs down to the very guts of Windows to issue Microsoft a super precise bug report about how it is handling semaphores or something, no user has ever come up to you and said that your software is fast or slow because it uses CGo, or any equivalent statement in any other language. That's not an acceptance criterion of any program at a user level. It doesn't matter if "CGo is slow" if your program uses it twice. The default context you are alluding to is a very low level engineering consideration at most but not something that is on its own fast or slow.
A good definition of fast or slow comes from somewhere else, and maybe after a series of other engineering decisions may work its way down to the speed of CGo in that specific context. 99%+ of the time, the performance of the code will not get worked down to that level. We are blinded by the exceptions when it happens but the vast majority of the time it does not.
By this, I mean it is an engineering mistake, albeit a very common one, to obsess in this "default context" about whether this technology or that technology is fast or slow. Programmers do this all the time. It is a serious, potentially project-crashing error. You need to start in the contexts that matter and work your way down as needed, only rarely reaching this level of detail at all. As such, this "default context" should be discarded out of your mind; it really only causes you trouble and failure as you ignore the contexts that really matter.
People typically live only once, so I want to make the best use of my time. Thus I would prefer to write (prototype) in Ruby or Python before considering moving to a faster language (though often it is not worth it; at home, if a Java executable takes 0.2 seconds to delete 1000 files and the Ruby script takes 2.3 seconds, I really don't care, even more so as I may be multitasking with tons of tabs open in KDE Konsole anyway; for a company doing business, speed may matter much more).
It is a great skill to be able to maximize for speed. Ideally I'd love to have that in the same language. I haven't found one that really manages to bridge the "scripting" world with the compiled world. Every time someone tries it, the language is just awful in its design. I am beginning to think it is just not possible.
But why not simply write the code that needs to be fast in C and then call it from Ruby?
Because often that's a can of worms, and because people are not as good with C as they think they are, as evidenced by plenty of CVEs and the famous example of quadratic performance degradation while parsing a JSON file when the GTA V game starts -- something a fan had to decompile and fix themselves.
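The reported root cause of that GTA loading bug was per-token work that re-scanned the whole remaining input (a per-token sscanf effectively running strlen over the rest of the buffer), so total cost grew with tokens times input length. A hypothetical Go sketch of that accidentally-quadratic shape, not the actual C code involved:

```go
// Contrast an accidentally-quadratic tokenizer with a single-pass one.
// This only illustrates the shape of the bug, not the real parser.
package main

import (
	"fmt"
	"strings"
	"time"
)

// countTokensQuadratic re-scans everything left in the input on every token.
func countTokensQuadratic(data string) int {
	count := 0
	for len(data) > 0 {
		_ = strings.Count(data, ",") // stand-in for "measure the rest of the buffer"
		i := strings.IndexByte(data, ',')
		if i < 0 {
			break
		}
		data = data[i+1:]
		count++
	}
	return count
}

// countTokensLinear walks the input once.
func countTokensLinear(data string) int {
	return strings.Count(data, ",")
}

func main() {
	data := strings.Repeat("12345,", 20_000) // ~120 KB, 20k tokens

	start := time.Now()
	fmt.Println(countTokensQuadratic(data), time.Since(start))

	start = time.Now()
	fmt.Println(countTokensLinear(data), time.Since(start))
}
```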
For scripting these days I tend to author small Golang programs if bash gets unwieldy (which it quickly does; get to 100+ lines of script and you start hitting plenty of annoyances). Seems to work really well, plus Golang has community libraries that emulate various UNIX tools and I found them quite adequate.
But back to the previous comments, IMO both bash and Ruby are quite fine for basic scripting... if you don't care about startup time. I do care in some of my workflows, hence I made scripts that pipe various Golang / Rust programs to empower my flow. But again, for many tasks this is not needed.
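As an illustration of the kind of small Go program meant here, a hypothetical throwaway that replaces a find-and-delete one-liner (the directory argument and the .tmp extension are made up for the example):

```go
// Delete every *.tmp file under a directory, the sort of job that starts as a
// shell one-liner and grows error handling until a real program is nicer.
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	root := "."
	if len(os.Args) > 1 {
		root = os.Args[1]
	}

	removed := 0
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.IsDir() && filepath.Ext(path) == ".tmp" {
			if err := os.Remove(path); err != nil {
				return err
			}
			removed++
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk:", err)
		os.Exit(1)
	}
	fmt.Printf("removed %d .tmp files under %s\n", removed, root)
}
```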
I actually hadn't heard this story. Is the gamedev world still averse to using proper modern C++ with basic tools like std::map and std::vector (never mind whatever else they've added since C++11 or so when I stopped paying attention)? Or else how exactly did they manage to mess it up? (And how big are the JSON files that this really matters, anyway?)
> IMO both bash and Ruby are quite fine for basic scripting... if you don't care about startup time.
`time bash -c ''` is only a couple of milliseconds on my system. The slow thing about a bash script, usually, is how much of the work you do by shelling out to other commands.
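A rough, hypothetical way to see that cost (numbers vary wildly per system) is to compare spawning an external process per item against doing the equivalent work in-process:

```go
// Sketch: process spawns are what dominate shell-style scripts,
// not the interpreter's own startup time.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const n = 200

	// Shelling out: run `echo hello` as a separate process n times.
	start := time.Now()
	for i := 0; i < n; i++ {
		if err := exec.Command("echo", "hello").Run(); err != nil {
			panic(err)
		}
	}
	fmt.Printf("%d external processes: %v\n", n, time.Since(start))

	// Doing the equivalent string work in-process.
	start = time.Now()
	var sb strings.Builder
	for i := 0; i < n; i++ {
		sb.WriteString("hello\n")
	}
	fmt.Printf("%d in-process writes:  %v (%d bytes built)\n", n, time.Since(start), sb.Len())
}
```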
There were HN discussions at the time as well.
In this case I believe they should have just vendored a small JSON parser in the game. It's a big blob of .exe and .dll files, why not? Found it weird but then again, we have all made mistakes that we later cringed at.
What is your definition of "everything"? It seems to not include computation on a thing known as a computer.
But there is stuff where you immediately know you want a proper program, and it's likely to be related to things like pure algorithms, protocols, and binary file formats.
From the things that have been coming out since YJIT has been in development, and that the core team members have been showing, that's not necessary. Methods written in pure Ruby outperform C libraries called from Ruby due to a variety of factors.
Yes, thank you! Worth emulating.
By comparison:
> A characteristic of these systems spanning so many orders of magnitude is that it is very frequently the case that one of the things your system will be doing is in fact head-and-shoulders completely above everything else your system should be doing, and if you have a good sense of your rough orders of magnitudes from experience, it should be generally obvious to you where you need to focus at least a bit of thought about optimization, and where you can neglect it until it becomes an actual problem.
If you hit a button that's supposed to do something (e.g. "send email" or "commit changes") and the page loads too fast, say in 20ms, a lot of users panic because they think something is broken.
So if the dialog closes in 20ms it likely means the message was queued internally by the email client, and then I would be worried that the queue will not be processed for whatever reason.
The file copy dialog in modern windows versions also has (had) this weird disconnect between the progress it's reporting and what it's actually doing. Seems very clear one thread is copying and one is updating the UI, and the communication between the two seems oddly delayed and inaccurate.
The progress reporting is very bizarre and sometimes the copying doesn't seem to start immediately. It feels markedly flakey.
For example, having a faster-spinning progress wheel makes users feel like the task is completed faster even if the elapsed time remains unchanged.
I disagree with that as the choice of framework doesn't impact just the request/response lifecycle but is crucial to the overall efficiency of the system because they lead the user down a more or less performant path. Frameworks are not just HTTP servers.
Choosing a web framework also marries you to a language, hence the upper ceiling of your application will be tied to how performant that language is. Taking the article's example, as your application grows and more and more code is in the hot path you can very easily get into a space where your requests that took 50ms now take 500ms.
https://madnight.github.io/benchmarksgame/go.html
When functions are first-class objects, so much of the GoF book just dissipates into the ether. You don't have a "Command pattern", you just have a function that you pass as an argument. You don't have a "Factory pattern", you just have a function that you call to create an instance of some other thing. (And you don't even think about whether your "factory" is "abstract".) And so on and so forth. There is value in naming what you're doing, but that kind of formalism is an architectural smell — a warning that what you're doing could be more obvious, to the point where you don't notice doing it for long enough to think of a name.
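A tiny, hypothetical Go sketch of that point (the type and function names are invented for the example):

```go
// With first-class functions, "Command" and "Factory" are just values you pass around.
package main

import "fmt"

// A "command" is simply a function value you can store and invoke later.
type Command func()

// A "factory" is simply a function that builds the thing you need.
type BufferFactory func(size int) []byte

func runAll(cmds []Command) {
	for _, c := range cmds {
		c()
	}
}

func main() {
	cmds := []Command{
		func() { fmt.Println("send email") },
		func() { fmt.Println("commit changes") },
	}
	runAll(cmds)

	var newBuffer BufferFactory = func(size int) []byte { return make([]byte, size) }
	fmt.Println("buffer of", len(newBuffer(64)), "bytes")
}
```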
Though there are a lot of unfortunate "truths" in Java programming that seem to encourage malignant abstraction growth, such as "abstractions are free" and "C2 will optimize that for you". It's technically kinda mostly true, but you write better code if you think polymorphism is terribly slow, like the C++ guys tend to do.
Here's the current benchmarks game website —
https://benchmarksgame-team.pages.debian.net/benchmarksgame/
Here are some startup warmup measurements —
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
This sort of complicated analysis doubles as another example of the difficulty of context-free "fast" and "slow" labels. Is Go "fast"? For a general programming language, yes, though not the fastest. If you reserve "fast" for C/C++/Rust, then no it is not fast. Is it fast compared to Python, though? Yes, it'll knock your socks off if you're a Python programmer even with just a single thread, let alone what you can do if you can get some useful parallel processing going.
Just say no.
I hate having to mull over the pros and cons of Rust for the 89th time when I know that if I make a service in Golang I'll be called in 3 months to optimise it. But multiple times now I have swallowed the higher complexity and initial slow building curve of Rust just so I don't have to go debug the mess that a few juniors left while trying to be clever in a Golang codebase.
In other contexts I'm a huge proponent of validating that your language is fast enough. There's a constant low-level flow of "we switched from X to Y and got Z% speedup" articles on HN, and while the details of the articles are often quite interesting, with rare exceptions I find that I read them and conclude that it was an error to ever have used X in the first place and the team should have been able to tell immediately that it wasn't going to be a good solution. Exceptions include "we were a startup at the time and experienced rather substantial growth" and the rare cases where technology X is just that much faster for some particular reason... though probably not being a "scripting language" as nowadays I'm not particularly convinced they're all that much faster to develop with past the first week, but something more like "X had a high-level but slow library that did 90% of what we needed but when we really, really needed that last 10% we had no choice but to spend person-years more time getting that last 10% of functionality, so we went with Y anyhow for the speed".
> There's a constant low-level flow of "we switched from X to Y and got Z% speedup" articles on HN, and while the details of the articles are often quite interesting, with rare exceptions I find that I read them and conclude that it was an error to ever have used X in the first place and the team should have been able to tell immediately that it wasn't going to be a good solution.
The language X was probably a good solution at first. Then the company started to increase its product surface or acquired enterprise customers. Now you have new workloads that were not considered at first and the language is no longer suited for it.
Most likely a decision is made to not introduce a second language to the company just for these workloads as that complicates hiring and managing, so you stay with language X and try to make do.
This isn't really a case of startup growing pains, just that software itself cannot know ahead of time every application it'll have. You can choose a "mostly fast for all use cases" language and bet that your applications will fit those general use cases; this means you win small but also lose small.
I would contest even that. Most of the time it's a fight or flight response by the devs, meaning that they just go with whatever they are most comfortable with.
In the previous years I made good money from switching companies away from Python and Rails, to Elixir and Golang. The gains were massive and maintainability also improved a lot.
Of course, this is not advocacy for these technologies in particular. Use the right tool for the job is a popular adage for good reasons. But my point is: people don't use the right tool for the job as often as many believe. Mostly it's gut feelings and familiarity.
And btw I am not trashing on overworked CTOs opting for Python because they never had the time to learn better web tech. I get their pain and sympathise with it a lot. But the failing of most startup CTOs that I observed was that they fell into control mania and micro-management as opposed to learning to delegate and trust. Sadly that too is a function of being busy and overworked so... I don't know. I feel for them but I still want better informed tech decisions being made. At my age and experience I am extremely tired and weary of seeing people make all the same mistakes every time.
It’s just a better fit when you’re not quite sure what you’re building. So calling them better web tech assumes a lot about the development process that isn’t guaranteed.
As said though, I don't judge people who go by familiarity. But one should keep an open mind, and learning some of the modern languages (like Elixir) is much less work than many believe.
A better web tech in this case refers to having the potential to scale far above what Python can offer + have a very good developer experience. To me those two are paramount.
It's also true that many projects will never hit that point. For those Python is just fine. But in recent years I prefer to cover my bases, and I have not been disappointed by any of the 3 PLs above.
Sure Elixir is fine to work with, the thing is in the back of my mind I’m thinking it’s way more likely to end up embedding Python libraries in Elixir code than the reverse. It’s those little bits of friction that I’m avoiding because the start of a project is play. Soon enough the perfectionist side of me may get involved, until then the goal is to maximize fun so I actually start.
In 1999 I'd agree with you completely, but static languages have gotten a lot better.
There are some cases where libraries may entirely dominate such a discussion, e.g., if you know Ruby on Rails and you have a problem right up its alley then that may blow away anything you can get in a static language. But for general tasks I find static languages get an advantage even in prototyping pretty quickly nowadays. What extra you pay generally comes back many times over in the compiler's instant feedback.
And for context, I would have called Python my favorite language from about 2000 to probably 2015 or so. This preference wasn't from lack of familiarity. Heck, I don't even regret it; if I had 2000 to 2015 to do all over again I'm not sure I'd do much differently. Static languages kind of sucked for a good long time.
> It is completely normal for web requests to need more than 5 milliseconds to run. If you’re in a still-normal range of needing 50 milliseconds to run, even these very slow frameworks are not going to be your problem.
The thing is that it apparently does make a huge difference. At least doing CRUD web stuff, my calibration for Scala (so pretty high-level functional code) is to expect under 1 ms of CPU time per request. The only time I've ever seen something in the 50-100 ms range was when working with Laravel.
I'm unlikely to get bottlenecked on well written and idiomatic code in a slower framework. But I'm much more likely to accidentally do something very inefficient in such a framework and then hit a bottleneck.
I also think the difference in ergonomics and abstraction are not that huge between "slow" and "fast" frameworks. I don't think ASP.NET Core for example is significantly less productive than the web frameworks in dynamic languages if you know it.
Even if you find a slow function that constitutes 20% of the runtime and optimize the living hell out of it to cut its execution time by 20%, guess what: your program is now only about 4% faster.
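That figure is just Amdahl's-law arithmetic; a quick hypothetical check:

```go
// Amdahl's-law arithmetic behind the "about 4%" figure above.
package main

import "fmt"

func main() {
	const fraction = 0.20 // share of total runtime spent in the hot function
	const cut = 0.20      // portion of that function's own time optimized away

	// New total runtime relative to the old one: 0.80 + 0.20*0.80 = 0.96.
	newTotal := (1 - fraction) + fraction*(1-cut)

	// Overall speedup: 1/0.96 - 1, roughly 4%.
	speedup := 1/newTotal - 1
	fmt.Printf("new runtime: %.2f of original, overall speedup: %.1f%%\n", newTotal, speedup*100)
}
```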
But a lot of software engineering goes into building tools, libraries, frameworks, and systems, and even "application" code may be put to uses very distant from the originally envisioned one. And in these contexts, performance relative to the "speed of light" - the highest possible performance for a single operation - can be a very useful concept. Something "slow" that is 100x off the speed of light may be more than fast enough in some circumstances but a huge problem in others. Something "very fast" that is 1.01x the speed of light is very unlikely to be a big problem in any application. And this is true whether the speed of light for the operation in question is 1ns or 1min.
I honestly don't know if async makes this easier or harder. It makes it easier to write sections of code that may have to wait for several things. It seems to make it less likely to write code that will kick off several things that can then be acted on independently when they arrive.
Pretty often you have a hot path that looks like a matmul routine that does X FMAs, a physics step that takes Y matmuls, a simulation that takes Z physics steps, an optimizer that does K simulations. As a result, estimating performance across 10 orders of magnitude is just adding the logs of 4 numbers, which pretty well works out as “Count up the digits in XYZK, don’t get to 10” which is perfectly manageable to intuit
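A back-of-the-envelope version of that estimate (all counts invented for illustration):

```go
// The total operation count is the product of the per-level counts, so its
// order of magnitude is just the sum of their digit counts ("add the logs").
package main

import (
	"fmt"
	"math"
)

func main() {
	X := 1e4 // FMAs per matmul
	Y := 1e2 // matmuls per physics step
	Z := 1e2 // physics steps per simulation
	K := 1e1 // simulations per optimizer run

	total := X * Y * Z * K
	digits := math.Log10(X) + math.Log10(Y) + math.Log10(Z) + math.Log10(K)

	// 4 + 2 + 2 + 1 = 9, so roughly 1e9 FMAs: on the order of a second of
	// work for one core, which is the quick "don't get to 10" sanity check.
	fmt.Printf("total ~ 1e%.0f (%.3g FMAs)\n", digits, total)
}
```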