The Spherical Cows of Programming
Key topics
The article 'The Spherical Cows of Programming' examines the trade-off between simple models and real-world complexity in programming; the HN thread debates the value of abstraction and how far simplifying assumptions can be pushed before they stop describing real systems.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 42m after posting
- Peak period: 32 comments in 0-6h
- Avg / period: 6.6
Based on 46 loaded comments
Key moments
- 01. Story posted: Oct 19, 2025 at 11:18 AM EDT (3 months ago)
- 02. First comment: Oct 19, 2025 at 12:00 PM EDT (42m after posting)
- 03. Peak activity: 32 comments in 0-6h, the hottest window of the conversation
- 04. Latest activity: Oct 21, 2025 at 5:50 PM EDT (3 months ago)
Alas, 'tis not meant to be.
> To be kind, we’ve spent several decades twisting hardware to make the FP spherical cow work “faster”, at the expense of exponential growth in memory usage, and, some would argue, at the expense of increased fragility of software.
There is not one iota of support for functional programming in any modern CPU.
Doesn't make any point very coherently, but it's not exclusively about FP, though that gets mentioned a lot.
Well, for sure, a core tenet of computer science is that all models of computation are equally powerful in what inputs they can map to what outputs, if you set aside every other detail.
An executive is retiring. He's been very fond of horse races, but has been very responsible throughout the years. Now with some free time on his hands, he spends more time than ever at the tracks and collects large amounts of data. He takes his data, along with his conviction that he's certainly onto something, to a friend in research at a nearby university. He convinces his friend to take a look at his data and find a model they can use to win at betting. After many delays, and the researcher becoming more disheveled over months of work, he returns to the retired executive to explain his model. He begins "if we assume all the horses are identical and spherical..."
What does that mean in the context of the comment you reply to - which includes the literal quote about "twisting hardware to make the FP spherical cow work faster"? The article may not be exclusively about FP, but nobody said it was.
I’m genuinely curious if anyone can derive a consistent definition of what the author thinks a spherical cow is.
Spherical cows are about simplifying assumptions that lead to absurd conclusions, not simplified models or simplified notation in general.
Calling functional programming a spherical cow, when what you mean is that automatic memory management is a simplifying assumption, is such a gross sign of incompetence that nobody should keep reading the rest of the blog.
The joke, as I recall it, was about a physics student who brags that he can predict the winner of any horse race, so long as all of the horses are perfectly spherical, perfectly elastic horses.
I'm actually not sure where cows came in, but maybe there's a different version of the joke out there.
The joke works because, when you do mechanics, you generally start modelling any problem with a lot of simplifying assumptions. In particular, that certain things are particles: spherical and uniform.
It's not an idiom for beautiful simplicity.
There aren't any commonly accepted conclusions from spherical cows because the bit is the punch line. It's a joke a physics 101 student makes when slogging through problems that assume away any real-world complexity, and thus any applicability.
Spherical cows, in the real world, are pedagogical tools first, approximations second, and mis-applied models by inexperienced practitioners third.
“Hello World” is a spherical cow. Simplifying assumptions about data are spherical cows. (And real dairy farmers implicitly assume flat cows when using square feet to determine how much room and grazing area they need per head.)
This is a great example of why rewrites are often important, in both English essays and blogs as well as in software development. Don't get wedded to an idea too early, and if evidence starts piling up that you're going down a bad path, be fearless and don't be afraid of a partial or even total rewrite from the ground up.
Assuming pointing at a problem counts as a nugget.
But there is a ton of support for speeding up imperative, serial programs (aka C code) with speculative execution, out-of-order execution, etc.
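To make that concrete, here is a minimal sketch (mine, not from the thread or the article): the same serial, imperative loop over the same data, timed before and after sorting. Once the comparison becomes predictable, branch prediction and speculative execution speed it up dramatically. Exact numbers vary by CPU, and at higher optimization levels the compiler may turn the branch into branchless code and hide the effect.

```cpp
// Sketch: timing one serial loop over unsorted vs. sorted data, to show how
// branch prediction / speculative execution help plain imperative code.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

static std::uint64_t conditional_sum(const std::vector<int>& v) {
    std::uint64_t sum = 0;
    for (int x : v)
        if (x >= 128)          // the branch the predictor speculates on
            sum += x;
    return sum;
}

int main() {
    std::vector<int> data(1 << 24);
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    for (int& x : data) x = dist(rng);

    auto time_it = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        volatile std::uint64_t s = conditional_sum(data);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << label << ": "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms (sum=" << s << ")\n";
    };

    time_it("unsorted (branch mispredicts)");
    std::sort(data.begin(), data.end());
    time_it("sorted   (branch predicts well)");
}
```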
I agree that code tends to be overrepresented; we don't 'data golf'. Even non-async, dataflow-oriented programs are much easier to follow, which happens to play exceptionally well with FP.
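As a small illustration of the dataflow style being described (my own sketch, assuming a C++20 compiler; the data and names are made up), here is the same computation written as an imperative loop and as a pipeline of stages:

```cpp
// Sketch: the same computation as a stateful imperative loop and as a
// dataflow-style pipeline (C++20 ranges). The pipeline reads as
// "what flows into what", which is the point being made above.
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> readings = {3, -1, 4, -1, 5, 9, -2, 6};

    // Imperative version: state mutated step by step.
    int total_imperative = 0;
    for (int r : readings) {
        if (r < 0) continue;          // drop invalid readings
        total_imperative += r * r;
    }

    // Dataflow-style version: a pipeline of stages.
    auto pipeline = readings
        | std::views::filter([](int r) { return r >= 0; })
        | std::views::transform([](int r) { return r * r; });

    int total_pipeline = 0;
    for (int v : pipeline) total_pipeline += v;

    std::cout << total_imperative << " == " << total_pipeline << "\n";
}
```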
But for the classic ALU, I can’t think of anything. Anything that helps FP was probably meant to help with text processing and loops in general.
To the best of my understanding, the author describes the structured imperative programming style used since the 70s as "functional" because most languages used since the 70s offer functions. If so, it makes sense to describe hardware as optimized for what the author calls "functional programming", since hardware has long been optimized for C compilers. It also makes sense to describe callbacks, async/then, and thread-safety as extensions of this definition of "functional programming", because yes, they're extensions of structured imperative programming.
There are a few other models of programming, including what people actually call functional programming, or logical programming, or synchronous programming, or odder beasts such as term rewriting, digraphs, etc. And of course, each of them has its own tradeoffs.
But all in all, I don't feel that this article has anything to offer to readers.
https://dataverse.jpl.nasa.gov/dataset.xhtml?persistentId=hd...
Even by the standards of Substack, TFA is an extraordinarily poor blog post.
[1] https://www.cs.princeton.edu/~appel/papers/ssafun.pdf
The wrongly assumed "O(1) memory access" (or worse, wrongly assumed O(1) data structure access even when the data structure actually isn't O(1)) showed up more frequently in my experience. And I still don't understand how we keep writing code that assumes a "fast, reliable network" when we're reminded every day that the network is neither fast nor reliable.
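A rough illustration of the first point (my own sketch, not from the comment): both loops below perform the same number of nominally O(1) array reads, but the random-order walk defeats the cache and prefetcher, so the constant hidden in that O(1) differs by roughly an order of magnitude on typical hardware.

```cpp
// Sketch: N "O(1)" array reads in sequential order vs. shuffled order.
// Same work, very different wall-clock time once the array exceeds cache.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;                 // ~16M elements, well past L3
    std::vector<std::uint32_t> data(n, 1);

    std::vector<std::uint32_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0u);         // 0, 1, 2, ...

    auto walk = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        std::uint64_t sum = 0;
        for (std::uint32_t i : idx) sum += data[i]; // each read is "O(1)"
        auto t1 = std::chrono::steady_clock::now();
        std::cout << label << ": "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms (sum=" << sum << ")\n";
    };

    walk("sequential order");
    std::shuffle(idx.begin(), idx.end(), std::mt19937{42});
    walk("random order    ");
}
```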
On x86. x86 and x86-64 usually do a good job of enforcing memory access order as if the instructions were executed sequentially. Other architectures, not so much. On PowerPC you had to put explicit memory barriers guaranteeing a certain access order with the 'eieio' instruction. ARM processors feature similar weak memory order, but the Apple M1 and later feature a special mode that guarantees strong x86-64 memory order, to make emulation for programs written for Intel Macs easier.
But even then, if you're using any kind of compiler, either this is successfully hidden from you... or you've hit some UB in C or C++ and you're in nasal demon territory anyway, no?
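For readers following along, here is a minimal sketch of the message-passing pattern these two comments are circling (my own example, using standard C++ atomics, not code from the thread). With release/acquire ordering the reader is guaranteed to see the payload once it sees the flag, on strongly and weakly ordered CPUs alike; weaken the orderings, or make the payload a plain non-atomic int, and the program may still appear to work on x86-64's strong memory model while being broken, or outright UB, elsewhere.

```cpp
// Sketch: publish a value with a release store, consume it with an acquire load.
// The release/acquire pair is what a fence/barrier (e.g. PowerPC's eieio,
// ARM's dmb) enforces on weakly ordered hardware.
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int>  payload{0};
std::atomic<bool> ready{false};

void producer() {
    payload.store(42, std::memory_order_relaxed);
    ready.store(true, std::memory_order_release);   // publish: nothing above may sink below
}

void consumer() {
    while (!ready.load(std::memory_order_acquire))  // wait until published
        ;                                           // busy-wait is fine for a toy example
    std::cout << payload.load(std::memory_order_relaxed) << "\n";  // guaranteed to print 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```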
https://en.wikipedia.org/wiki/Spherical_cow
You have to be absolutely certain you've got the right cows in the right trailer, for one thing.
(I used to know someone with an ostensibly Hindu coworker who she caught eating a burger. His response to being caught was to say that Indians aren’t reincarnated as American cows.)
The general idea is that if you live in a desert then - as with Jewish dietary restrictions - eating stuff like shellfish and pigs that don't naturally lend themselves to being eaten unrefrigerated a hundred miles from the sea in 40°C weather is not a good idea. But if it's roughly the same climate as Scotland and you know what you're doing, it's all just fine, it's far more haraam to starve yourself to death when you can eat a big bowl of lentil and ham soup.