How to Stop Functional Programming (2016)
Source: brianmckenna.org
Key topics: Functional Programming, Code Readability, Team Management
A developer shared an anecdote about being asked to simplify their functional programming code, sparking a discussion on the trade-offs between code readability and programming paradigms.
Snapshot generated from the HN discussion
Discussion activity: very active. 120 comments loaded; first comment 36m after posting; peak of 69 comments in the 0-6h window; average of 13.3 comments per period.
Key moments
1. Story posted: Sep 21, 2025 at 10:55 AM EDT
2. First comment: Sep 21, 2025 at 11:31 AM EDT (36m after posting)
3. Peak activity: 69 comments in the first six hours
4. Latest activity: Sep 23, 2025 at 8:27 PM EDT
ID: 45323297 · Type: story · Last synced: 11/20/2025, 5:57:30 PM
Code review or pair programming might help here, to learn the team’s common idioms.
1. Global type inference.
2. Implicit syntax (no parentheses around function calls, no commas between arguments, no semicolons ending statements/expressions, etc.)
3. Currying & point free style.
4. Tendency toward very deeply nested expressions. The gap between `let foo =` and its actual value can often be hundreds of lines.
I'm sure you can write FP code that avoids these issues and is easy to follow, but it doesn't seem like people do that in practice.
Rust avoided all of these issues fortunately.
(Oh I forgot about monads.)
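To make the point-free/currying concern concrete, here is a small Python sketch (Python stands in for an ML-family language; the `compose` helper and the word-splitting pipeline are invented for illustration) contrasting a composed pipeline with named intermediates:

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Point-free-ish style: concise, but the reader must infer the
# intermediate type at every step of the chain.
words = compose(sorted, str.split, str.lower)

# The same pipeline with named intermediates: more lines, but each
# step's value can be named, inspected, and stepped through.
def words_explicit(text):
    lowered = text.lower()
    tokens = lowered.split()
    return sorted(tokens)

assert words("B a C") == words_explicit("B a C") == ["a", "b", "c"]
```

Both definitions are equivalent; the thread's disagreement is about which one a future maintainer decodes faster.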
Conversely, a lot of code written in imperative languages would be clearer and/or less bug-prone if it avoided mutable state and used persistent data structures.
I wish there was a mainstream, high performance language that made both styles equally ergonomic.
Unironically, C++.
> Unironically, C++.
At best C++ falls under "equally unergonomic".
Compare to most "functional" languages, in which prepending an item to a list and ending up with immutable references to the old and new lists is almost the defining feature of the language.
Dynamic arrays / vectors / slices or whatever you want to call them are probably the most important or fundamental non-plain-old-data data structure, and in a purely immutable environment these are essentially impossible (or have awful performance characteristics).
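A minimal sketch of that defining feature, transliterated into Python (the `Cons` type here is hypothetical, not any particular library): prepending is O(1), and the old list remains valid because the tail is shared rather than copied.

```python
from typing import NamedTuple, Optional

class Cons(NamedTuple):
    """One cell of a persistent (immutable) singly linked list."""
    head: int
    tail: Optional["Cons"]

def prepend(x, lst):
    # Build a new head cell; the entire existing list becomes the tail.
    return Cons(x, lst)

def to_list(lst):
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

old = prepend(2, prepend(3, None))   # the list [2, 3]
new = prepend(1, old)                # the list [1, 2, 3]

assert to_list(old) == [2, 3]        # old list is untouched
assert to_list(new) == [1, 2, 3]
assert new.tail is old               # tail is shared, not copied
```

The trade-off the parent describes is real, though: random access on such a list is O(n), which is why array-like workloads fit poorly.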
I think everyone should take a shot at writing a non-trivial functional program to see the benefit. Once you understand what makes it great, you can apply what you've learned to the majority of OOP/impure languages.
I've been solving business problems with code for decades. I love pure, composable functions, they make my job easier. So do list comprehensions, and sometimes, map and filter. Currying makes sense.
But for the life of me, no forum post or FP tutorial that I could find explained monads in clear language. I've googled "what is a monad" once a year, only to get the vague idea that you need monads to handle IO.
I wondered if my brain was broken, but now I'm wondering if most FP adherents are simply ineffective communicators: they've got an idea in their head but can't/won't express it in a way that others after them can understand. In other words, the exact same reason why TFAuthor was corrected by his employer.
FP can be pragmatic as well. You’re going to glue up monad transformers, use lenses like there’s no runtime cost, and compute whatever you need in days, but at least you know it works. Maybe there’s accidentally quadratic behavior in lifting or lenses, but that’s by design. The goal is to just throw software at things as fast as possible, as correctly as possible.
Lenses abstract properties in a composable manner. How is this problem horrible?
> FP can be pragmatic as well. You’re going to glue up monad transformers, use lenses like there’s no runtime cost, and compute whatever you need in days, but at least you know it works. Maybe there’s accidentally quadratic behavior in lifting or lenses, but that’s by design. The goal is to just throw software at things as fast as possible, as correctly as possible.
Any abstraction can be used inappropriately. Slavish adherence to an approach in spite of empirical evidence is a statement about those making decisions, not the approach itself.
In other words:
Lenses are exactly glue for throwing software at things as fast as possible as correctly as possible. A poor tool.
The very need for lenses often indicates that the data model has been designed in a way that's hostile to direct, ergonomic manipulation. A glued up steampunk contraption, a side-effect of throwing software at everything as fast as possible. Invented in a language environment where they can't be efficient.
Monad transformers: lift has quadratic complexity and runtime cost. Not really composable. Similar to effect systems in other languages, control flow becomes very unclear, depending on the order of application.
Lenses and monad transformers are just a nice trick that you shouldn't ever learn.
But I agree with your last statement: many of these libraries are just poor craftsmen giving us new tools that they made for problems we never want to have.
It's similar to dependency injection, why would anyone need a topological sort over dependencies and an automatic construction of these dependencies? Is it so hard to invoke functions in the right sequence? Sounds like you've made a program with too many functions and too many arguments. (or in oop, too many classes with too much nesting and too many constructor args)
These tools are "pragmatic". Given the mess that will naturally arise due to poor craftsmanship, you'll have these nice tools to swim well in an ocean overflowing with your own poop.
I see the value of lenses from a different perspective, in that they can generalize algorithms by abstracting property position within an AST such that manipulation does not require ad hoc polymorphism. For example, if there exists an algorithm which calculates the subtotal of a collection of product line items, lenses can be used to enable its use with both a "wish list" and a "purchase order."
Another thing they cleanly solve is properly representing a property value change with copy-on-write types. This can get really ugly without lenses in some languages.
I respect your take on them though and agree their definitions can be cumbersome if having to be done manually.
For the wishlist and purchase order, just think of the boilerplate you have to write to get one computation for variable data shapes, compared to doing what you want twice.
Copy-on-write types are easy if they are shallow, I'd question why they're so deep that you need inefficient lens composition to modify a deep value. We've already invented relational structures to deal with this. I'm assuming you care about history so copy-on-write is important and is not purely an exercise in wasteful immutability.
This example is simple enough to not have to use lenses for sure. Another example which may better exemplify appropriate lens usage is having properties within REST endpoint payloads used to enforce system-specific security concerns. Things like verifying an `AccountId` is allowed to perform the operation or that domain entities under consideration belong to the requestor.
> Copy-on-write types are easy if they are shallow, I'd question why they're so deep that you need inefficient lens composition to modify a deep value. We've already invented relational structures to deal with this. I'm assuming you care about history so copy-on-write is important and is not purely an exercise in wasteful immutability.
While being able to track historical changes can be quite valuable, using immutable types in a multi-threaded system eliminates having to synchronize mutations (thus eliminating the possibility of deadlocks) and the potential of race conditions. This greatly simplifies implementation logic (plus verification of same) while also increasing system performance.
The implication of using immutable types which must be able to reflect change over time is most easily solved with copy-on-write semantics. Lenses provide a generalization of this functionality in a composable manner. They also enable propagation of nested property mutations in these situations such that the result of a desired property change is a new root immutable instance containing same. Add to this the ability to generalize common functionality as described above and robust logic can be achieved with minimal duplication.
It is for these and other reasons I often find making solutions with immutable types and lenses very useful.
"Flatten" takes a container of type M<M<T>> and returns a container of type M<T>. So a List<List<Int>> becomes a List<Int>.
Now comes the trick: combine "map" and "flatten" to get "flatMap". So if you have a M<T> and a function T->M<U>, you use "map" to get an M<M<U>> and "flatten" to get an M<U>.
So why is this useful? Well, it lets you run computations which return all their values wrapped in weird "container" types. For example, if "M" is "Promise", then you can take a Promise<T> and an async function T->Promise<U>, and use flatMap to get a Promise<U>.
M could also be "Result", which gets you Rust-style error handling, or "Optional", which allows you to represent computations that might fail at each step (like in languages that support things like "value?.a?.b?.c"), or a list (which gets you a language where each function returns many different possible results, so basically Python list comprehensions), or a whole bunch of other things.
So: Monads are basically any kind of weird container that supports "flatMap", and they allow you to support a whole family of things that look like "Promise<T>" and async functions, all using the same framework.
Should you need to know this in most production code? Probably not! But if you're trying to implement something fancy that "works a bit like promises, or a bit like Python comprehensions, or maybe a bit like Rust error handling", then "weird containers with flatMap" is a very powerful starting point.
(Actual monads technically need a bit more than just flatMap, including the ability to turn a basic T into a Promise<T>, and a bunch of consistency rules.)
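The flatten/flatMap construction above can be sketched in Python, using plain lists as the container `M` (the helper names are my own, not a standard API):

```python
from itertools import chain

def flatten(nested):
    """M<M<T>> -> M<T>, for lists: List[List[T]] -> List[T]."""
    return list(chain.from_iterable(nested))

def flat_map(f, xs):
    """Combine map and flatten: f has type T -> List[U]."""
    return flatten(map(f, xs))

assert flatten([[1, 2], [3]]) == [1, 2, 3]

# Each step returns its results wrapped in the container, and flat_map
# chains the steps without the nesting piling up into List[List[...]].
assert flat_map(lambda n: [n, n * 10], [1, 2]) == [1, 10, 2, 20]
```

Swap "list" for Promise, Result, or Optional and the same `flat_map` shape gives you the other examples in the comment.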
You've highlighted here the part that would actually explain the purpose of a monad, but not explained it. You don't need a monad abstraction to have things with monadic properties, and indeed often reality isn't quite perfectly shaped to theory, so forcing your object to fit the abstraction can be costly. One very obvious cost is that you no longer get descriptive names of what the monadic bind does; you have to infer it from what you know of the type.
The one thing a monad abstraction definitely gives you is the ability to write code that's generic over all monads. This is weird because this almost never happens outside of strongly functional languages.
If you'll forgive linking to a decade-old Reddit post, I've talked about this before.
https://reddit.com/r/programming/comments/4t6a6q/functional_...
(also, parent for context: https://reddit.com/r/programming/comments/4t6a6q/comment/d5f...)
Please note that I was trying to explain what a monad is, to somebody who wanted to understand. I also suggested that people writing typical code shouldn't actually need to know this in order to do their jobs:
> Should you need to know this in most production code? Probably not! But if you're trying to implement something fancy that "works a bit like promises, or a bit like Python comprehensions, or maybe a bit like Rust error handling", then "weird containers with flatMap" is a very powerful starting point.
Monads are an incredibly stripped-down mathematical structure ("container-like things that support flatMap"). And as such, people who design certain types of programming languages or libraries may benefit from being aware of monads. At least for languages with closures. Surprisingly few languages can actually support monads as a first-class abstraction in the language, because to make first-class monads nice you need a certain kind of type system. Which often isn't worth it.
Where I probably differ from your opinion is that I think implementing "almost monads" like JavaScript promises is very often a mistake. The few places where JavaScript promises break the monad laws are almost all nasty edge cases and obscure traps for the unwary. Similarly, if you implement list comprehensions that break the monad laws, most likely you just get awful list comprehensions.
There are exceptions. Rust has a lot of "almost monads", but this is mostly because Rust function types are a mess thanks to the zoo of Fn, FnOnce and FnMut. Rust would be a simpler and easier-to-learn language if Future<...> actually followed the monad laws. But in this case, it sadly wasn't possible, and I would argue that Rust is worse for having so many "almost monads."
This may all make more sense if you knew my tastes in programming languages, which is "languages where all the parts fit together cleanly with no surprising edge cases that prevent 'obvious' things from working." One way you can accomplish this is to have some kind of simple mathematical structure underlying your language, and to avoid adding dozens of features that almost follow clean rules. C++ never took this approach, and so C++ library designers need to be aware of all sorts of interactions between weird corner cases.
So another way of summarizing my argument is "If you have list comprehensions that somehow don't follow the monad laws, then you're going to confuse users and permanently add technical debt to your language. Make sure it's actually worth it."
You also mention list comprehensions forming a monad. I think this is also a good illustration of the difference. In Haskell the structure of a type tells you the dependency tree of a computation, so it's not a problem that Monad maps the type to itself. Your list monad is just a bunch of thunks pointing to each other either way. In imperative languages, types describe what has been reified and how it's organised in memory. In these, a list monad is an actively bad abstraction; you almost always want to distinguish the stateful computational pipeline (eg. an iterator) from the source and target storage (eg. an array). Neither of those are monadic for good reason.
The thing that always makes FP concepts click for me is seeing them explained in a language that isn't Haskell or similar.
I don't know why people are so obsessed with using Haskell for tutorials. Its syntax is a nightmare for newcomers and you can do FP in so many other languages.
Explaining monads in JavaScript or C# might show the mechanics but will not show why anyone would actually want to use them, since it just results in overly convoluted code.
I was very confused about what they were until I saw an article similar to the one I linked, and then I realized that I had actually been using monads all along, I just didn't know they were called that. I think a lot of developers are in the same boat.
Another phrase to search for is "railway oriented programming"[0], which is essentially composition of monadic functions (Kleisli arrows)[1]. Still, there are many introductions in a lot of languages based on the term above.
HTH
EDIT:
Something to keep in mind is that the term "monad" identifies a mathematical concept having well-defined behaviors, not a specific construct. So searching for "what is a monad" is akin to searching for "what is happiness."
0 - https://medium.com/geekculture/better-architecture-with-rail...
1 - https://bartoszmilewski.com/2014/12/23/kleisli-categories/
Monads are a pattern for function composition. In most languages it is verbose compared to alternatives (like map and filter), but Haskell has some syntactic sugar (do-notation) which makes it a nice succinct way of chaining functions.
The problem with monad tutorials is they try to explain monads as some abstract, universal concept disconnected from practical code. But try to explain functions or classes the same way and it would also be incomprehensible for someone unfamiliar with the concept.
It is not due to ill will; it is just that Haskell fans tend to have a background in mathematics and therefore default to the most abstract explanation possible, whereas software developers prefer to start with something practically useful and then generalize from there.
You probably wouldn't understand ‘map’ either, if it was explained to you by a Haskellite, since they tend to explain functions in terms of their type signatures rather than what they actually do.
Monads should be explained through practical code examples in Haskell since this is the context where it makes sense.
/s
Monads are, in my head, just a wrapper around a type. Or a box another type is inside. For example we can have an Int and we can put the Int in a box like Maybe<Int>.
Imagine Python code that gets a value from a function that returns an Int or None, then another function that takes an Int and returns an Int or None, then another function that takes an Int and returns an Int or None. How hellish is that to handle? If not None, if not None, if not None, ad nauseam...
In Haskell I could use the Maybe monad's bind (>>=) to pass the Int value to the next function or short-circuit on Nothing, avoiding all the boilerplate.
Other Monads are like the State monad - a box that contains some variables I want to maintain over functions.
Or an IO monad to handle network calls / file calls.
It's probably not a perfect analogy, but I work with functional languages and it tends to hold up for my beginner/intermediate level.
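The Int-or-None chain described above can be roughed out in Python itself (the function names are invented; `bind` plays the role of the monadic pass-along):

```python
def bind(value, f):
    """Apply f to value, short-circuiting if value is already None."""
    return None if value is None else f(value)

def parse(s):                 # str -> int | None
    return int(s) if s.isdigit() else None

def half(n):                  # int -> int | None
    return n // 2 if n % 2 == 0 else None

def result(s):
    # Instead of "if not None, if not None, ..." at every step:
    return bind(bind(parse(s), half), half)

assert result("8") == 2
assert result("6") is None    # half(3) fails mid-chain
assert result("x") is None    # parse fails at the first step
```

Haskell's do-notation is essentially syntax sugar for nesting `bind` calls like this.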
One question you could ask yourself is: how could you reproduce list comprehensions without special syntax?
Another way to view monads is by following the types. With map, you can chain pure functions from one type to another to be applied on lists (hence the (in -> out) -> ([in] -> [out]) ). How would you do that chaining with a function from one type to another but wrapped in a list ( (in -> [out]) -> ([in] -> [out]) )?
Then you can think about how it could be applied to other types than lists, for example, nullable/option types, result types, async/promise types, and more hairy types that implement state, reading from an environment, etc...
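One possible answer to the comprehension question, sketched in Python: a multi-generator comprehension can be reproduced with nothing but a `flat_map` helper (the name is mine, not a standard library API).

```python
def flat_map(f, xs):
    """Apply f (which returns a list) to each x and concatenate results."""
    out = []
    for x in xs:
        out.extend(f(x))
    return out

# The sugared version:
pairs_sugar = [(x, y) for x in [1, 2] for y in ["a", "b"]]

# The same thing as nested flat_map calls, no special syntax needed:
pairs_desugared = flat_map(
    lambda x: flat_map(lambda y: [(x, y)], ["a", "b"]),
    [1, 2],
)

assert pairs_sugar == pairs_desugared
assert pairs_sugar == [(1, "a"), (1, "b"), (2, "a"), (2, "b")]
```

This is the same desugaring Haskell applies to do-notation, with `flat_map` in the role of `>>=`.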
This demands an equivalent article which takes the same form as TFA.
A coworker complains about monads, so the manager insists all monads be taken out.
So Brian gets to work on the codebase, removing all Lists, Optionals, Streams, Functions, Parsers, Transactions, ...
Maybe it’s related, maybe it’s not.
It's not 'functional programming' that makes the code unreadable, but overly long chains of array-processing functions. Sometimes a simple for-loop which puts all operations that need to happen on an array item into the loop-body is indeed much more readable.
It's very similar to applicative style in FP. Conceptually, method chaining is equivalent to nested function application, it just comes with syntax sugar for specifying the `self` parameter.
As soon as you stop calling it "method chaining" and start calling it "function composition".
If you chain together a bunch of methods ('.' operator) in an OO setting, that's called a "fluent interface". It's a sign of good design and is to be commended.
If you compose a bunch of functions ('.' operator) in an FP setting, it's an unreadable mess, and you will receive requests to break it into separate assignment statements and create named variables for all the intermediate states.
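For what it's worth, the two styles usually compute exactly the same thing; here is a small Python illustration (toy data, invented for the example) of a chained pipeline versus the equivalent loop with a named accumulator:

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Chained/composed style: each stage transforms the whole stream.
chained = sorted(x * x for x in data if x % 2 == 0)

# Equivalent for-loop: every operation on an item lives in one loop body,
# and the intermediate state has a name you can inspect in a debugger.
looped = []
for x in data:
    if x % 2 == 0:
        looped.append(x * x)
looped.sort()

assert chained == looped == [4, 16, 36]
```

Which one gets called "fluent" and which one "an unreadable mess" is, as the parent notes, mostly a matter of which community is reviewing it.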
This is becoming increasingly true as the years pass and the number of times I've had to drop whole mature architectures and reimplement something worse because some engineer's feefees got hurt can no longer be counted on my fingers & toes.
When writing code you have the motto "don't make me think" in mind, but how do you know the maximum level of trickiness readers can handle? Techniques and idioms that are familiar when it is your main programming language are not familiar to someone using the language on the side.
In any case, neither code nor comments should be tutorials. To a reasonable extent, it is up to the reviewer to do their homework or just ask. Then, based on that interaction, you can add a comment or a parenthetical, or decompress the code a bit. But not to the point of "dumbing down" things, because that is a downward spiral.
I disagree with this phrasing. We’re engineering after all. The entire job is thinking. If someone doesn’t want to think, then they shouldn’t be a programmer.
Readability matters, though. I try to have a narrative structure in my code, so it leads the reader along. Formatting matters. And documentation helps. If you want to introduce an unfamiliar idiom that might be more functional, good, but document it. Share it with the team and talk about it. I know that writing and reading documentation is usually seen as a waste of time unless you’re doing it for AI, but I’ve seen it work well in multiple teams. In my experience, the teams with poor docs have the worst code.
It's difficult to explain how to write well, but bad writing and bad computing systems typically impose far greater cognitive burden on readers than might have been necessary. There is an art to software engineering.
Functional programming does involve idioms, though I would say no more than imperative programming, or OOP, or some other paradigm. One of the overarching themes of FP is to reduce the cognitive footprint to only the essential properties of the problem to be solved.
A novice artist can produce a recognizable figure using many pencil strokes. It takes a master to produce a compelling likeness with only the lines that are necessary.
> I disagree with this phrasing. We’re engineering after all. The entire job is thinking.
Well, sure. The implied full phrase - more technically-correct, but less pithily-quotable - would be something like "don't make me think unnecessarily; let me spend my thoughts productively. If you've already spent brainpower on figuring something out, explain it to me directly and clearly rather than forcing me to go through the same discovery process"
So - yes, if someone doesn't want to think _at all_, they shouldn't be a programmer; but if someone has an aversion to being forced to solve a problem that someone else has already solved, they likely have the right "shoulders-of-giants" mindset.
(For any potential pedants - yes, there are some practices you simply have to work through before understanding dawns, which cannot be explained directly. Still, though - the explanation should aim to minimize unnecessary thought-requirements so the student can get straight to learning)
You might choose to add comments and let the logic unfold in a less succinct way in order to improve readability and understandability.
You might also consider your colleagues’ limited cognitive reserves, some of which could be spent on more important issues.
* If it's "write once, run a few times, discard", go ahead and throw together whatever you want
* If it's "write once, run a bazillion times at the very core of your logic with very few changes over time", optimize for efficiency over legibility
* If it's going to be written and rewritten and evolved and tweaked many times, optimize for readability and flexibility
In engineering, thinking is inevitable but also costly and error-prone, and the more thinking an engineer needs to do the higher the costs, the higher the risks of errors. We should be striving to minimize the thinking required every step of the way, even if we’ll never get it down to zero; that’s the only way to keep it from spiraling towards infinity.
But like you say, if you've got an idiom that's actually useful (like programming in a functional style to avoid action-at-a-distance side effects), then document it so the whole team can get used to it and stop having to think about it.
On the one hand, this stance seems common sense: why overcomplicate things? Let's just do what's commonly understood and move on.
On the other hand, writing code that the most junior in your team can understand leads to mediocre code. In the end, it crumbles under its own weight.
Most of what we master now was difficult to understand at first. Should we abandon generics because "it's difficult"? Should we ditch static typing because "we're not used to it"?
Maybe.
Or, maybe, this is the right tool that can take us to the next step. So we pause and think at this new approach to understand it then master it.
Just remember that, at some point, the "dumbass" thing was pushing sticks into clay tablets.
Functional programming may not be the future. But dismissing other's code because it is not done "the way we are used to" is stupid.
And I've worked at a place where the imperative code was crap. Should we abandon all imperative code because I had that experience?
People can write bad code in any paradigm - that's not what decides how we should code.
Nobody says we should throw away all imperative code because of one poorly implemented code base. But for some reason I see that targeted at FP all the time.
Sorry but this is absolutely false in almost all regular corporate software jobs.
As i get greyer, I find myself spending much more time planning before I write any code now; what I would previously have run into headfirst and solved with a bunch of advanced language features and complicated logic usually now ends up being implemented in much less and simpler code.
This is the sign of a senior developer, in my book.
Writing code anyone finds hard to understand is a marker of inexperience to me.
But, at some point, spreadsheets crumble and become unmaintainable. And so does "basic" code.
Should people keep on using spreadsheets? Depends.
Case in point: a ledger, like a spreadsheet but simpler, hasn't 'crumbled' in a hundred years despite being as basic as it gets.
Simple is good when it's fit for purpose.
Sounds like your idea of simple is just bad code that is unfit for its purpose. They're not the same thing.
Yet, for many reasons, your bank does not run your account with a spreadsheet.
> Sounds like your idea of simple is just bad code that is unfit for its purpose
Not quite. Simple is good. Do simple. But some things cannot be achieved with familiar tools. Sometimes it is simpler to use an abstraction that is difficult to grasp initially than to write piles of code "the way we are used to" because we are afraid of "making others think".
Please make me think, but only if it is worth it.
Disagree on a couple levels. First, while the overall architecture of the system might be complex, there shouldn't be any one single function that isn't readily readable by a junior new hire. Complexity is in the system, there should be no complexity in small sections of code.
Second, the ultimate level of mastery is to be able to build simplicity. Any newbie can write a convoluted mess that nobody can understand.
Every so often I encounter some dev who is not familiar with SQL. So instead of writing a single, relatively "simple" SQL request that does the job db-side (say with a few joins and coalesce but no sub-query), they write several requests and then process the result in Python.
The argument? "SQL is complicated, it is simpler to just do it in Python."
Is it, though? I guess it depends on the request and the exact processing we are talking about, but in many cases I suspect it is because that person is not familiar with SQL.
What "simple" means is not so simple to define. Sometimes it means "easy for me".
We need to have a basic level of competency.
And functional programming snorts these long chains like lines of coke, and when you add some random curried functions and people who write in a pointless^H point-free style, it's easier than folks like to admit to end up with write-only code.
I've never found myself with a block of code that could be expressed as a flatMap and thought it would be easier to understand by expressing it as such. I'm perfectly capable of writing functional code but after reading it back I almost always throw it out for something imperative because it's clearer as to what's really going on. The code is going to be executed in an order at the end of the day, why make it harder to see what it will be?
Why not? To take the example from the article:
Why not add a comment like: Perhaps a note should also be added explaining why the domain model opted for u.departments and not for something more natural like u.company.departments. The comment should also explain why we do not filter for duplicates here. I insert many such comments into my own code when I use a rare language feature or a somewhat peculiar element of the domain model. This makes refactoring my own code much easier, faster and more bug-resistant. When composing code, it is generally simple and quick to add such a comment, as it is merely a matter of writing down the result of my immediate thought process.
Consider the case where two database fields should either both be NULL or both must have a value. Instead of 'amount' and 'currency' and documenting the relation, should we really name the fields 'amount_if_currency_is_not_null' and 'currency_if_amount_is_not_null'?
And is it really sufficient to define 'amount' as INTEGER without any documentation and then demand that everyone new to the project and not familiar with financial calculations immediately understands that the value is given in the lowest unit of the associated currency, i.e. cents for the US dollar vs. one hundred millionth of a bitcoin vs. the Icelandic króna (which has no subdivision)?
Good luck to find a short name that makes that obvious.
You name the fields currency and amount and have the business logic rejecting invalid pairs.
You should use a currency type for the other issue, not an integer.
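A sketch of that business-logic check in Python (the `Price` type and its field names are invented for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Price:
    """amount is in the currency's smallest unit (e.g. cents for USD)."""
    amount: Optional[int]
    currency: Optional[str]

    def __post_init__(self):
        # Invariant: both fields are set, or both are None (NULL).
        if (self.amount is None) != (self.currency is None):
            raise ValueError("amount and currency must be set together")

Price(1999, "USD")        # ok
Price(None, None)         # ok: no price recorded yet

try:
    Price(1999, None)     # rejected: amount without a currency
except ValueError:
    pass
```

With the invariant in the type, the field names can stay short and the relationship is documented exactly once, at the point where it is enforced.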
But only the author(s) can capture why it exists via commenting.
This 100%
When I am doing MBSE, and I discuss what level of detail is required in my abstract models one of my senior coworkers would remind me to “know your audience”. It has helped me and I use the same phrasing when I am coaching my teams now.
What gave FP a bad rep is, I guess, Haskell (and the "pure functional approach w/ monad transformer stacks").
Every shop I worked at the devs were already at a level that they'd appreciate FP code: easier to read, refactor and test.
The tendency is also towards FP: see the features in recent Java/C# versions. And languages that have gained popularity recently (Kotlin, Rust) are more FP'ish than their predecessors (Java and C++, respectively).
That's it. You've been told. No more OOP.
The manager has figured out what's good for the business and you figure that listening is what's good for your job.
Don't remember the exact title, sorry, but the description I used above is close, IIRC.
"Casey Muratori – The Big OOPs: Anatomy of a Thirty-five-year Mistake – BSC 2025" https://youtu.be/wo84LFzx5nI
But if your point is there aren't any written articles about stuff like this, I agree. If they're out there, they're a bit outside the mainstream.
Yeah. I don't want to work with those people. Did I quit? Was I fired? I can't really remember. But ultimately, I moved on to a better environment.
When someone pays me to write code for them, they get to call the shots; even if I think their judgement sucks.
If they want to hire incompetent programmers, that can't understand even halfway-advanced code, then that's their prerogative, and I need to suck it up, and play by their rules; which may include the need for me to write code as if I just started yesterday, because it will need to be maintained by folks that, um, just started yesterday.
I still have to advocate for minimizing mutable state. At least since then, popular languages and frameworks are putting in a ton of work to make functional programming look more like the imperative spaghetti that is still prevalent and taught.
Imperative programming style has many advantages over functional for some problems. Functional programming style has many advantages over imperative for some problems.
The only clearly 'wrong' approach is codebases where you can look at the code and determine a specific developer on the team wrote feature x because it fundamentally looks completely different from the other sections.