Functions Are Asymmetric
Posted 3 months ago · Active 3 months ago
elbeno.com · Tech · story
Key topics
Programming
Functional Programming
Software Design
The article 'Functions Are Asymmetric' discusses the asymmetry in functions, where inputs and outputs are not treated equally, sparking a discussion on the implications for programming and software design.
Snapshot generated from the HN discussion
Discussion Activity
First comment: 59m after posting
Peak period: 13 comments in 96-108h
Avg / period: 4.8 comments
Comment distribution: 29 data points (based on 29 loaded comments)
Key moments
01. Story posted: Oct 12, 2025 at 3:41 AM EDT (3 months ago)
02. First comment: Oct 12, 2025 at 4:40 AM EDT (59m after posting)
03. Peak activity: 13 comments in 96-108h (the hottest window of the conversation)
04. Latest activity: Oct 18, 2025 at 3:37 AM EDT (3 months ago)
ID: 45556142 · Type: story · Last synced: 11/20/2025, 12:41:39 PM
Mathematicians have been packing all this stuff nicely for a couple of centuries now; maybe we could use more of their work in mainstream computing, and it could also be a nice opportunity to get more people to appreciate math and structure.
Something that has side effects all over the place should just not be called a function, but something else, maybe "procedure" would be an appropriate, clear term.
C++ sort of has this, with const.
In this case it's almost the opposite of most programming languages. In, say, Ruby or Java, any function or method can do anything: write to stdout, throw exceptions, access the network, mutate global state, etc. In Haskell, by default a function can only do calculations and return the result. All the other things are still possible, but you have to encode them in the type of the function.
EDIT: The annotations you mention with regard to identity elements etc. do exist, but they live mostly on the data structures rather than on the functions that operate on those data structures.
[1] https://github.com/stefan-hoeck/idris2-algebra
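The Haskell discipline described above (effects advertised in the return type) can be roughly approximated in TypeScript. This is a minimal sketch, not Haskell's actual IO machinery: the `IO`, `double`, and `greet` names here are hypothetical, chosen only for illustration. The idea is that an effectful computation is a deferred thunk, so its type tells you an effect is involved.

```typescript
// Hypothetical IO<A>: a deferred computation whose type advertises an effect.
type IO<A> = () => A;

const logLines: string[] = [];

// Pure by its type: number in, number out, nothing else.
const double = (n: number): number => n * 2;

// Effectful by its type: IO<void> says "running this performs an effect".
const greet = (name: string): IO<void> =>
  () => { logLines.push(`hello, ${name}`); };

const action = greet("world"); // builds the action; no effect performed yet
action();                      // the effect happens only here
```

Unlike Haskell, nothing in TypeScript stops `double` from sneaking in a side effect; the convention is only as strong as the codebase's discipline.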
I recall there being at least SDL2 bindings for Idris (not to mention those for Haskell, which also has game libraries), and some linear algebra libraries in those languages (complete with verification), but probably not particularly extensive.
They are not the most practical choice if you do not need verification, but if you would like to use languages like that, they are available and usable with additional effort and drawbacks. I wish they were more mature and had better infrastructure, too, but that would take people pushing them to that point.
Rather, if you define (in your mind) a function of three variables, the compiler makes that a function of one variable that returns another function. And that return-value function takes one variable and returns a third function. And that third function takes one variable and returns the result of the triadic function you intended to write.
That's why the type of a notionally-but-not-really triadic function is a -> b -> c -> d and not (a, b, c) -> d. It's a function of one variable (of type a) whose return value is of type b -> c -> d.
https://ocaml.org/docs/values-and-functions#defining-functio...
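The curried shape described above can be sketched in TypeScript (the `add3` names here are hypothetical, for illustration only): each application consumes one argument and returns the next function, mirroring the type a -> b -> c -> d.

```typescript
// Curried: one argument at a time, mirroring the type a -> b -> c -> d.
const add3 = (a: number) => (b: number) => (c: number): number => a + b + c;

const step1 = add3(1);   // type: (b: number) => (c: number) => number
const step2 = step1(2);  // type: (c: number) => number
const result = step2(3); // 6

// The genuinely different "tupled" shape, (a, b, c) -> d:
const add3Tupled = ([a, b, c]: [number, number, number]): number => a + b + c;
```

Both compute the same sum, but their types differ, which is exactly the distinction the comment above draws.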
It's a bit harder to tell the situation in SML, because it isn't really an actively used language. "ML for the Working Programmer" (lol) appears to idiomatically represent them as functions of a single tuple parameter:
https://www.cl.cam.ac.uk/~lp15/MLbook/PDF/chapter2.pdf
Tuple is not special, though. Functions accept a single argument of any type.
To use the argument in the function body you can name it, or you can use any valid pattern to destructure it and bind parts of it to local variables.
Tuple is just one of the valid patterns. Coincidentally, it looks like argument list with positional arguments in many other languages. You can also use a record, which makes it look like "keyword arguments". You can also use patterns of custom types.
All the above is still about the single "argument" case, the single value that is "physically" passed to the function. Pattern matching is what makes it possible to bind parts of that value to multiple local variables in the body of the function.
OCaml has tuples like all the other MLs. Thus given a function f with the signature (int * int) -> int, invocation is written as something like:
f (1, 2)
So yes, they do accept tuples. This isn't weird, and is the standard way to write a multi-argument function in most of the ML family. In the fifth-stage Baudrillardian mess that OCaml has become in 2025 it might be weird, since at some point implicit currying was added to the language and became the default idiom. But personally I hate implicit currying and avoid it whenever possible.
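The ML convention of one tuple argument, destructured by a pattern, can be approximated in TypeScript with a single tuple parameter (the function `f` below is illustrative):

```typescript
// f takes ONE argument, which happens to be a pair, destructured in place.
const f = ([a, b]: [number, number]): number => a + b;

// The call site passes a single tuple value, much like f (1, 2) in ML.
const pair: [number, number] = [1, 2];
const r1 = f(pair);   // pass the tuple by name
const r2 = f([1, 2]); // or write it inline, which looks like a positional call
```

As the comment notes, the inline form coincidentally resembles an ordinary positional argument list, even though only one value is "physically" passed.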
> Functions map members of a set A to members of a set B.
> Something that has side effects all over the place should just not be called a function
Leibniz defined a function as a quantity that depends on some geometry, like a curve. Bernoulli later defined it as a quantity that results from a variable. The Latin word "functio" means process, implying not a mapping but an arbitrary sequential performance. Mathematicians are prone to taking words from elsewhere, either twisting their meaning or inventing wholly new meaning out of thin air, all according to their whimsy for their own particular needs. I do not think a reasonable case can be made that we have to respect ZFC's narrow conception of a function when we do not live in a ZFC world.
True but one benefit of those guys is that they actually define what they mean in a formal way. "Programmers" generally don't. There is in fact some benefit in having consistent names for things, or if not at least a culture in which concepts have unambiguous definitions which are mandated.
Sometimes, yes. The big stuff is usually exhaustively formally defined down to the axioms. But the further you get from the largest, most well-trodden ground... the woods grow dark quite fast. Math, especially on the cutting edge, is intuitive like anything else and filled with hand-waves. Even among the exhaustively defined results, there are plenty that only achieved exhaustiveness thanks to later work.
> "Programmers" generally don't.
On the contrary, all programming languages are formal grammars. I think the best way I can underline the difference is that mathematicians primarily use formal grammars for communication, to share meanings, and almost exclusively deal with meanings that are very well-defined. Programmers, on the other hand, are often more concerned with some other pressing matter, usually architecting something unfathomably massive with a minute fraction of the man-hours used to construct ZFC, often dealing with far fuzzier things and with outright contradictory axioms over which they have no control.
They are as different as trophy truck rally and formula 1. As someone who lives in both worlds, I'm endlessly disappointed by shitflinging and irrational superiority contests between the two as though they even live in the same dimension.
> There is in fact some benefit in having consistent names for things
There is, in some contexts to some ends. Those are important and influential contexts and ends, and so the relevant math should be studied and well understood on an intuitive level. But they form a minority in both fields. I've known many mathematicians outside of programming contexts, and none of them have any grasp of category theory, type theory, the lambda calculus, etc. They might have heard of category theory, but they look at it with the same suspicion as you might expect from some fringe theoretical physics framework.
There is also the problem that these "consistent names" are built on a graveyard of previous conception. Mathematics is a history of mental frameworks, as are all ancient fields. As I underlined in my previous post, the formalization of mathematics gutted the idea of the function and filled it with straw. There's nothing wrong with how functions are defined within ZFC mind you. I just vehemently disagree with projecting it as a universal context, showing it any degree of favoritism.
Well, a function can't be the full Cartesian product A × B unless set B has cardinality 1. It's perfectly coherent to view a function as a set of tuples, but it's not legal for that set to contain two tuples (a, b) and (a, c) where b ≠ c.
> In my dream PL syntax a function call would be a function name followed by a tuple, and that tuple would be no different than the tuples you would use in any other part of the program (and so you could use all the tuple manipulation library goodies).
This already exists. For example, that's how `apply` works in Common Lisp.
https://www.lispworks.com/documentation/HyperSpec/Body/f_app...
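TypeScript's spread syntax gives a similar apply-style call: the function name is followed by a tuple that is an ordinary first-class value, usable with normal array/tuple operations. The `volume` function and `dims` tuple below are hypothetical, for illustration only:

```typescript
// An ordinary three-argument function...
const volume = (w: number, h: number, d: number): number => w * h * d;

// ...called with a tuple, apply-style, via spread.
const dims: [number, number, number] = [2, 3, 4];
const v = volume(...dims);

// The tuple is a first-class value, so tuple manipulation works on it:
const doubled = dims.map((x) => x * 2) as [number, number, number];
const v2 = volume(...doubled);
```

The `as` assertion is needed because `Array.prototype.map` widens the tuple to `number[]`; a stricter codebase might use a fixed-length mapping helper instead.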
So PRQL (prql-lang.org) is kind of like that, with the limitation that control flow is limited to the List Monad bind, i.e. the tuples from one step are piped to the function call in the next step one at a time producing 0..* result tuples and the resulting multiset is flatmapped. At the moment it just transpiles to SQL but a couple of months ago I was exploring different Lambda Calculi and how to extend this to a more general PL. Alas, that won't take shape until AI is at the level that it can write that code for me. I guess LINQ and similar Language Integrated Query Languages already provide this functionality.
P.S. Writing the above made me think that it's not quite what you asked for; in the PRQL case each function receives an implicit `this` argument which is the tuple I was thinking of. However the function can also take other arguments, including keyword arguments. Those are arbitrary. I guess they are implicitly ordered and could be represented as a tuple as well. What would you see as the benefit of that?
> (and so you could use all the tuple manipulation library goodies)
Other than indexing into tuples, I can't really think of anything else, at least for single tuples. I initially thought of something like `zip(*args)` but that's only really useful when you have list of tuples or tuple of lists and then you're back in PRQL land. Indexing into tuples is also brittle and does not produce self-documenting code so I prefer the PRQL and SQL namedtuples/structs where fields are referenceable by name.
I have this suspicion that PRQL functions are parameterised natural transformations but my Category Theory at that level is too rusty to check without extra work. If that's the case though then having the explicit function arguments be simple values feels justified to me since they're just indexing families of related transformations and are not the primary data being transformed (if that makes sense?).
// In TypeScript
const [a, b, c] = foo(d, e, f)

You could even pass this to itself:

foo(...foo(d, e, f))
Also one definition of a function is a map from a domain to a range. There’s nothing that forbids multiple values, or is there?
No, bijective functions are symmetric.
Injectivity is enough to invert a function on its image; you also need surjectivity for the inverse to be defined on the whole codomain.
> Even though they may take any number of arguments, they must each have one return value. [...] This is true of all functions.
Mathematical functions take one value and produce one value. This is true of all mathematical functions.
Programming functions can be modelled this way by treating multiple input arguments as a single product, and non-scalar outputs containing multiple values as a single product.
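That modelling can be made concrete in TypeScript with a hypothetical `divmod`: two inputs collapsed into one product (a tuple), two outputs collapsed into another.

```typescript
// Two inputs, two outputs, modelled as one product in and one product out.
const divmod = ([a, b]: [number, number]): [number, number] =>
  [Math.floor(a / b), a % b];

const [q, r] = divmod([17, 5]); // q = 3, r = 2
```

Under this view the function is a single-input, single-output mapping, exactly as in the mathematical definition.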
Programming functions don't even have to return one value: an uninhabited ("bottom") return type has zero values, meaning the function can't meaningfully return. Arguably, programming functions must take at least one value as input; otherwise, how could they be called?
In that sense, programming functions are asymmetric: It rarely makes sense to write a function you can't call, but it often makes sense to write a function that never returns.
When does it make sense to write a function that you can't call? When the point of the function is to prove something as a result of being compiled. The value lies in the compilation, not in being called.
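One concrete instance of this, sketched in TypeScript (the `Shape` type and helper names are illustrative): a function whose argument type is `never` is intended never to be called at runtime. Its value is that the program compiles only when the switch is exhaustive.

```typescript
// Illustrative union type; the point is the default branch below.
type Shape =
  | { kind: "circle"; r: number }
  | { kind: "square"; s: number };

// A function meant never to be called: its parameter type `never` has no
// values. Reaching it at runtime would indicate a logic error.
function assertUnreachable(x: never): never {
  throw new Error("unreachable");
}

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle": return Math.PI * shape.r * shape.r;
    case "square": return shape.s * shape.s;
    // Add a Shape variant without handling it here and this stops compiling:
    default: return assertUnreachable(shape);
  }
}
```

The proof lives in the compilation step: if every case is handled, `shape` is narrowed to `never` in the default branch, so the call type-checks.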
A better title for this article: Some Functions Are Asymmetric.
Which is less of a profound insight.
If some functions are asymmetric, then functions (as a class of things) are not symmetric.
Just as some rectangles are squares, still leaves rectangle as a class/type as transpose asymmetric. I.e. width and height are not constrained to be equivalent, which is required for transpose symmetry.
Not really arguing for or against your comment. Just noting that it’s easy for multiple valid arguments to pass each other without connecting due to slight differences in what people mean with the same words.
Ah, those steeped in functional programming might say, maybe this is the wrong way to look at it. Because if we represent the functions with explicit continuations, we can say that they take N arguments and pass N arguments to their continuations, and then they are symmetric.
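A minimal continuation-passing sketch in TypeScript (the `divmodCPS` name is hypothetical): instead of returning, the function hands its results to a continuation `k`, so two values go "out" just as two values came in.

```typescript
// CPS: results are passed to a continuation instead of returned, so the
// function hands N values to k just as it received N values itself.
const divmodCPS = (
  a: number,
  b: number,
  k: (q: number, r: number) => void
): void => {
  k(Math.floor(a / b), a % b); // two values "out", symmetric with two in
};

let quotient = 0;
let remainder = 0;
divmodCPS(17, 5, (q, r) => { quotient = q; remainder = r; });
```

The asymmetry of `return` disappears: both the call and the continuation are plain multi-argument applications.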
It seems like this has fertile overlap with Scheme and the (concurrent computation) Actor model.
Of course, I can imagine the Execution control library authors know full well about those, with existing C++ goals and designs making that a bridge too far.
[1]: https://simon.peytonjones.org/verse-calculus/
Monty Python fan detected :D