Think in Math, Write in Code (2019)
Posted 2 months ago · Active about 2 months ago
jmeiners.com · Tech story · High profile
calm / mixed · Debate · 80/100
Key topics: Programming, Mathematics, Software Development
The article 'Think in math, write in code' argues that mathematics is a more effective tool for thinking about computation than programming languages, sparking a discussion on the relationship between math and code.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 4d after posting
Peak period: 35 comments in 96-108h
Avg / period: 17.8
Comment distribution: 71 data points (based on 71 loaded comments)
Key moments
- 01 Story posted: Nov 9, 2025 at 7:03 AM EST (2 months ago)
- 02 First comment: Nov 13, 2025 at 1:27 PM EST (4d after posting)
- 03 Peak activity: 35 comments in 96-108h (hottest window of the conversation)
- 04 Latest activity: Nov 15, 2025 at 12:09 AM EST (about 2 months ago)
ID: 45864954 · Type: story · Last synced: 11/20/2025, 5:42:25 PM
In my experience the issue is sometimes that Step 1 doesn't even take place in a clear cut way. A lot of what I see is:
Or even: ... Or even: ... Or even: ... :-(

IMO, this last popular approach gets things completely backwards. It assumes there is no need to think about the problem beforehand, to identify it, to spend any amount of time thinking about what needs to happen on a computer for that problem to be solved... you just write down some observable behaviors and begin reactively trying to implement them. Huge waste of time.
The point also about "C-style languages being more appealing" is well taken. It's not so much about the language in particular. If you are able to sit down and clearly articulate what you're trying to do, understand the design tradeoffs, which algorithms and data structures are available, which need to be invented... you could do it in assembly if it was necessary, it's just a matter of how much time and energy you're willing to spend. The goal becomes clear and you just go there.
I have an extensive mathematical background and find this training invaluable. On the other hand, I rarely need to go so far as carefully putting down theorems and definitions to understand what I'm doing. Most of this happens subliminally somewhere in my mind during the design phase. But there's no doubt that without this training I'd be much worse at my job.
I think you misunderstand this approach.
The point of writing the tests is to think about the desired behaviour of the system/module you are implementing, before your mind gets lost in all the complexities that necessarily arise during the implementation.
When you write code, and hit a wall, it’s super easy to get hyper-focused on solving that one problem, and while doing so: lose the big picture.
Writing tests first can be a way to avoid this, by thinking of the tests as a specification you think you should adhere to later, without having to worry about how you get there.
For some problems, this works really well. For others, it might not. Just don’t dismiss the idea completely :)
In fact, right now I'm doing exactly Test/Implement because I don't know how else to solve the problem. But this is a last resort. Only because the first few attempts failed and I must solve this problem have I resorted to grinding out individual cases. The issue is that I have my back against the wall and have to solve a problem I don't understand. But as I progress, eventually I will understand the problem, and then my many cases are going to get dramatically simplified or even rewritten.
But all the tests I've created along the way will stay...
Not that Implement/Test can't work. As frustrating as it is, "just do something" works far better than many alternatives. In particular, with enough places doing it, somebody may succeed.
Edit: to everyone responding that there are trade mags - yes SWE has those too (they're called developer conferences). In both categories, someone has to invite you to speak. I'm asking what compels Joe Shmoe SWE to pontificate on things they haven't been asked by anyone to pontificate on.
There are literally industry publications full of these.
I don't think that this is true. The vast majority of technical math publications, for example, are reviewed, but not invited. And expository, and even technical, math is widely available in fora without any refereeing process (and consequent lack of guarantee of quality).
(But I think it does apply more generally. We refer to it as Computer Science, it is often a branch of Mathematics both historically and today with some Universities still considering it a part of their Math department. Some of the industry's biggest role models/luminaries often considered themselves mathematicians first or second, such as Turing, Church, Dijkstra, Knuth, and more.)
do we really have to retread this? unless you are employed by a university to perform research (or another research organization), you are not a computer scientist or a mathematician or anything else of that sort. no more so than an accountant is an economist or a carpenter is an architect.
> The article you are complaining about starts from the presumption that software
reread my comment - at no point did i complain about the article. i'm complaining that SWEs have overinflated senses of self which compel them to write such articles.
Doing math or science is the criterion for being a mathematician or scientist, not who employs you how.
I wish there was a block button for the "overinflated senses of self".
To refute GP's point more broadly -- there is a lot in /applied/ computer science (which is what I think the harder aspects of software engineering really are) that was and is done by individuals in software just building in a vacuum, with open source holding tons of examples.
And to answer GP's somewhat rhetorical question more directly - none of those professions are paid to do open-ended knowledge work, so the analogy is extremely strained. You don't necessarily see them post on blogs (as opposed to LinkedIn/X, for example), but: investors, management consultants, lawyers, traders, and corporate executives all write a ton of this kind of long-form content that is blog post flavored all the time. And I think it comes from the same place -- they're paid to do open-ended knowledge work of some kind, and that leads people to write to reflect on what they think seems to work and what doesn't.
Some of it is interesting, some of it is pretty banal (for what it's worth, I don't really disagree that this blog post is uninteresting), but I find it odd to throw out the entire category even if a lot of it is noise.
> no more so than an accountant is an economist or a carpenter is an architect
I know many accountants who would claim you can't be a good accountant without being an economist. Arguably that's most of the course load of an MBA in a nutshell. I don't know a carpenter who would claim to be an architect; carpentry usually happens after the architecture has been done. But I know plenty of carpenters who claim to be artists and/or artisans (depending on how you see the difference), who take pride in their craft and understand the aesthetic underpinnings.
> reread my comment - at no point did i complain about the article. i'm complaining that SWEs have overinflated senses of self which compel them to write such articles.
You chose which article to post your complaint to. The context of your comment is most directly complaining about this specific article. That's how HN works. If you didn't read the article and feel like just generically complaining about the "over-inflated senses of self" in the software industry, perhaps you should be reading some forum that isn't HN?
My takes are:
1) There are a lot of IT workers in the world, and they're all online natives. So naturally they will discuss ideas, problems, etc. online. It is simply a huge community, compared to other professions.
2) Programming specifically is for many both a hobby and a profession. So being passionate about it compels many people to discuss it, just like others will do about their own hobbies.
3) Software is a very fast-moving area, and very diverse, so you will get many different takes on the same problems.
4) Posting is cheap. The second you've learned about something, like static vs dynamic typing, you can voice your opinion. And the opinions can range from beginners to CS experts, both with legit takes on the topic.
5) It is incredibly easy to reach out to other developers, with the various platforms and aggregators. In some fields it is almost impossible to connect / interact with other professionals in your field, unless you can get past the gatekeepers.
And the list goes on.
The Internet is absolutely full of this. This is purely your own bias; try looking for any of the trades you mentioned. You will find videos, podcasts, and blogs within minutes.
People love talking about their work, no matter their trade. They love giving their opinions.
[0] https://www.google.com/search?q=blaking+magazine
[1] https://www.google.com/search?q=mechanics+magazines
[2] https://dentistry.co.uk/dentistry-magazine-january-2023-digi...
In practice, I find it much more productive to start with a computational solution - write the algorithm, make it work, understand the procedure. Then, if there's elegant mathematical structure hiding in there, it reveals itself naturally. You optimize where it matters.
The problem is math purists will look at this approach and dismiss it as "inelegant" or "brute force" thinking. But that's backwards. A closed-form solution you've memorized but don't deeply understand is worse than an iterative algorithm you've built from scratch and can reason about clearly.
Most real problems have perfectly good computational solutions. The computational perspective often forces you to think through edge cases, termination conditions, and the actual mechanics of what's happening - which builds genuine intuition. The "elegant" closed-form solution often obscures that structure.
I'm not against finding mathematical elegance. I'm against the cultural bias that treats computation as second-class thinking. Start with what works. Optimize when the structure becomes obvious. That's how you actually solve problems.
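To make that concrete, here is a toy sketch (my own hypothetical example, not from the article or this thread) of starting with the obvious iterative computation and letting the closed form emerge once the structure is visible:

```python
# A toy illustration (hypothetical example): start with the obvious
# iterative computation, and only afterwards notice the closed form.
import math

def payments_until_cleared_iterative(balance, payment):
    """Count fixed payments needed to clear a zero-interest balance."""
    months = 0
    while balance > 0:
        balance -= payment
        months += 1
    return months

def payments_until_cleared_closed_form(balance, payment):
    # After watching the loop run, the structure is plain: ceil(balance / payment).
    return math.ceil(balance / payment)

assert payments_until_cleared_iterative(1000, 300) == 4
assert payments_until_cleared_closed_form(1000, 300) == 4
```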
Every single YouTube video from tom7 [0] or 3blue1brown [1] does way more to transmit the fascination of mathematics.
[0]: https://www.youtube.com/@tom7
[1]: https://www.youtube.com/@3blue1brown
I could relate how he described the mathematical experience to what I feel is happening in my head/brain when I do programming.
The top-down (mathematical) approach can also fail, in cases where there's not an existing math solution, or when a perfectly spherical cow isn't an adequate representation of reality. See Minix vs Linux, or OSI vs TCP/IP.
But I think the Sudoku example is less about top-down vs bottom-up and more about dogmatic adherence to abstractions (OOP in that case). Jeffries wasn't just using a 'hacker' approach - he was forcing everything through an OOP lens that fundamentally didn't fit the problem structure.
But yes, same issue can happen with the 'mathematical' approach - forcing "elegant" closed-form thinking onto problems that are inherently messy or iterative.
IMO, the mathematical approach is essentially always better for software; nearly every problem that the industry didn’t inflict upon itself was solved by some egghead last century. But there is a kind of joy in creating pointless difficulties at enormous cost in order to experience the satisfaction of overcoming them without recourse to others, I suppose.
Just get a proof of the open problem no matter how sketchy. Then iterate and refine.
But people love to reinvent the wheel without caring about abstractions, resulting in languages like Python being the de facto standard for machine learning.
Sidenote: I code fluid dynamics stuff (I'm trained in computer science, not at all in physics). It's funny to see how the math and physics deeply affect the way I code (and not the other way around). Math and physics laws feel inescapable, and my code usually has to be extremely accurate to handle these laws correctly. When debugging that code, thinking math/physics first is usually the way to go, as it lets you narrow down the (code) bug more quickly. And if all fails, then it's usually back to the math/physics drawing board :-)
https://www.youtube.com/watch?v=ltLUadnCyi0
Personally, I find a mix of all three approaches (programming, pen and paper, and "pure" mathematical structural thought) to be best.
That said, I mostly agree with you, and I thought I'd share an anecdote where a math result came from a premature implementation.
I was working on maximizing the minimum value of a set of functions f_i that depend on variables X. I.e., solve max_X min_i f_i(X).
The f_i were each cubic, so F(X) = min_i f_i(X) was piecewise cubic. X was of dimension 3xN, with N arbitrarily large. This is intractable to solve directly: F being non-smooth (its derivatives are discontinuous), you can't just throw it at Newton's method or gradient descent, and non-differentiable optimization was out of the question due to cost.
To solve this, I'd implemented an optimizer that moved one variable x at a time, so that F(x) became a 1d piecewise cubic function that I could globally maximize with analytical methods.
This was a simple algorithm where I intersected graphs of the f_i to figure out where they're minimal, then maximize the whole thing analytically section by section.
In debugging this, something jumped out: coefficients corresponding to second and third derivative were always zero. What the hell was wrong with my implementation?? Did I compute the coefficients wrong?
After a lot of head scratching and going back and forth with the code, I went back to the scratchpad, looked at these functions more closely, and realized they're cubic in all the variables jointly, but linear in any single variable. This should have been obvious, as each was the determinant of a matrix whose columns or rows depended linearly on the variables. Noticing this would have been first-year math curriculum.
This changed things radically, as I could now recast my maxmin problem as a Linear Program, which has very efficient numerical solvers (e.g. Dantzig's simplex algorithm). These give you the global optimum to machine precision and are very fast on small problems. As a bonus, I could actually move three variables at once rather than just one, as those were separate rows of the matrix. Or I could even move N at once, as those were separate columns. This could beat all the differentiable-optimization-based approaches people had been using (which rely on regularizations of F) on all counts: quality of the extrema and speed.
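For anyone curious what that recasting looks like, here is a minimal sketch of the standard max-min-to-LP trick being described. It is not the commenter's actual code; the data, variable names, and use of scipy.optimize.linprog are illustrative assumptions, and it assumes each f_i is affine in the block of variables being moved (which is exactly the per-variable linearity noticed above):

```python
# Max-min reformulation as a linear program (illustrative sketch):
#   maximize  t
#   subject to  a_i . x + b_i >= t   for every i
# scipy.linprog minimizes, so we negate the objective and rewrite the
# constraints as  t - a_i . x <= b_i.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 1.0, -2.0],
              [-1.0,  0.5],
              [ 0.3,  1.0]])          # rows a_i (made-up data)
b = np.array([0.0, 1.0, -0.5])        # offsets b_i

n_vars = A.shape[1]
# Decision vector z = [x_1, ..., x_n, t]; minimize -t to maximize t.
c = np.zeros(n_vars + 1)
c[-1] = -1.0

# Constraint rows: [-a_i, 1] . z <= b_i
A_ub = np.hstack([-A, np.ones((A.shape[0], 1))])
b_ub = b

# Box bounds on x keep this toy problem bounded; t is left free.
bounds = [(-10.0, 10.0)] * n_vars + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal min_i f_i(x):", -res.fun)   # t equals min_i f_i(x*) at the optimum
print("optimal x:", res.x[:n_vars])
```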
The end result is what I'd consider one of the few things in my PhD thesis that wasn't busy work: an actual novel result that brings something useful to the table. Whether it has been adopted at all is a different matter, but I'm satisfied with my result, which, in the end, is mathematical in nature. It still baffles me that no one had stumbled on this simple property despite the compute cycles wasted on solving this problem, which coincidentally is often cited as one of the main reasons the overarching field is still not as popular as it could be.
From this episode, I deduced two things. Firstly, the right a priori mathematical insight can save a lot of time otherwise spent designing misfit algorithms and then implementing and debugging them. I don't recall exactly, but this took me about two months or so, as I tried different approaches. Secondly, the right mathematical insight can be easy to miss. I had been blinded by the fact that no one had solved this problem before, so I assumed it must have a hard solution. Something as trivial as this was not even imaginable to me.
Now I try to be a little more careful and not jump into code right away when meeting a novel problem, and at least consider if there isn't a way it can be recast to a simpler problem. Recasting things to simpler or known problems is basically the essence of mathematics, isn't it?
Elegance is not first. First is rough. Solving by math sounds much like what you describe. I find my structures, put them together, and find the interactions. Elegance comes after cleaning things up. It's towards the end of the process, not the beginning. We don't divine math just as you don't divine code. I'm just not sure how you get elegance from the get go.
So I find it weird that you criticize a math first approach because your description of a math approach doesn't feel all that accurate to me.
Edit: I do also want to mention that there's a correspondence between math and code. They aren't completely isomorphic because math can do a lot more and can be much more arbitrarily constructed, but the correspondence is key to understanding how these techniques are not so different.
Can you give an example of how you "linguistically" approach a problem?
I mean, even in math, descriptions of problems are written in natural language, but they have to be precise.
I completely disagree with that assumption.
Any function call that proceeds to capture logic, e.g. data from real-life systems, drones, or robots in logistics: you will often see they proceed in a logic chain. Sometimes they use a DSL, be it in Rails, or older DSLs such as the Sierra game logic and others.
If you have a good programming language, it is basically like "thinking" in that language too. You can also see this in languages such as C, and in the creation of git. Now I don't think C is a particularly great language for higher abstractions, but the assumption that "only math is valid and any other instruction to a machine is pointless" is simply flat out wrong. Both are perfectly valid and fine; they just usually operate on a different level. My brain is more comfortable with ruby than with C, for instance. I'd rather want languages to be like ruby AND fast, than have to adjust down towards C or assembly.
Also, the author neglects that you can bootstrap in language xyz to see if a specific idea is feasible. That's what happened with many languages.
It is that Mathematics is far more general and uses a myriad of notations developed over hundreds of years and adapted to various sub-fields/domains/models as necessary. This makes it far more flexible and powerful than any programming language. That is why Multi-Paradigm languages became a thing i.e. there is a need for programming languages to provide a larger set of computation models which can then be exploited by the programmer to map his domain models (mathematical or not).
For example; why do many(most?) programmers have difficulty in transcribing algorithms given in pseudocode to their favourite language? Simply because they have not understood the algorithm at the fundamental mathematical level but have only picked up the patterns through which it is expressed in their language. Note that this is the default way our brain works and how we manage real-world complexity without really understanding everything (satirically phrased as "monkey see, monkey do"). But we can use mathematical methods and reasoning to minimize going off the rails because it forces us to make explicit our assumptions, definitions and proofs which is at the heart of problem-solving. So we use all the mathematical tools we have at hand to structure and solve a problem and only later map it to our programming language. But note that as we gain more experience this mapping becomes intuitive and we can directly think and express it in our favourite programming language.
See also my comment here - https://news.ycombinator.com/item?id=45934301
Some References:
Notation as a tool of thought by Kenneth Iverson - https://dl.acm.org/doi/10.1145/358896.358899
Predicate Logic as Programming Language by Robert Kowalski - https://www.researchgate.net/publication/221330242_Predicate...
Here's hoping my recognising the issue will soften the blow this time! Mayhaps this comment might save someone else from a similar fate.
If you want to put this to test, try formulating a React component with autocomplete as a "math problem". Good luck.
(I studied maths, if anyone is questioning where my beliefs come from, that's because I actually used to think in maths while programming for a long time.)
Scientific consensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive at an answer, you introduce a concept that captures a class of problems and use that. Very beneficial for dealing with purely mathematical problems, an absolute disaster for quick problem solving IMO.
No, it’s exactly what the author is writing about. Just check his example, it’s pretty clear what he means by “thinking in math”
> Scientific consensus in math is Occam's Razor, or the principle of parsimony. In algebra, topology, logic and many other domains, this means that rather than having many computational steps (or a "simple mental model") to arrive at an answer, you introduce a concept that captures a class of problems and use that.
I don’t even know what you mean by this.
If we look at older code we'll actually see a similar thing. People were limited in line lengths, so the exact same thing happened. That's why there are still conventions like 80-character text width, though now those serve as a style guide rather than a hard rule. It also helps that we can auto-complete variables.
Your criticism is misplaced.
I've written some nasty numerical integration code (in C, using for loops), for example. I'm not proud of it, but it solved my issue. I remember at the time thinking surely there must be a better way for computers to solve integrals.
I think what helps is to take the time to sit down and practice going back and forth. Remember, math and code are interchangeable. All the computer can do is math. Take some code and translate it to math, take some math and translate it to code. There's easy things to see like how variables are variables, but do you always see what the loops represent? Sums and products are easy, but there's also permutations and sometimes they're there due to lack of an operator. Like how loops can be matrix multiplication, dot products, or even integration.
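A small worked example of that back-and-forth (mine, with arbitrary numbers): the same quantity written first as a loop and then as the sum / dot product it denotes.

```python
# The same computation as a loop, as a dot product, and as a matrix-vector
# product (arbitrary example data).
import numpy as np

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])

# Loop form: accumulate w_i * x_i one term at a time.
total = 0.0
for wi, xi in zip(w, x):
    total += wi * xi

# Math form: total = sum_i w_i x_i = w . x
assert np.isclose(total, np.dot(w, x))

# Scaled up: a nested loop over rows is a matrix-vector product,
# y_j = sum_i A_ji x_i.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
y_loop = np.array([sum(A[j, i] * x[i] for i in range(3)) for j in range(2)])
assert np.allclose(y_loop, A @ x)
```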
I highly suggest working with a language like C or Fortran to begin with and code that's more obviously math. But then move into things that aren't so obvious. Databases are a great example. When you get somewhat comfortable try code that isn't obviously math.
The reason I wouldn't suggest a language like Python is that it abstracts too much. While it's my primary language now, it's harder to make that translation because you have to understand what's happening underneath, or be working with a different mathematical system, and in my experience not many engineers (or many people outside math majors) are familiar with abstract algebra and beyond, so these formulations are more challenging at first.
For motivation, the benefits are that you can switch modes for when a problem is easier to solve in a different context. It happens much more than you'd think. So you end up speaking like Spanglish, or some other mixture of languages. I also find it beneficial that I can formulate ideas when out and about without a computer to type code. I also find that my code can often be cleaner and more flexible as it's clearer to me what I'm doing. So it helps a lot with debugging too
Side note: with computers, don't forget about statistics and things like Monte Carlo integration. We have GPUs these days, and that massive parallelism can often make slower algorithms faster :). A lot of computational code out there isn't written for the modern massively parallel environment we have today. Just some food for thought. You might find some fun solutions, but also be careful of rabbit holes lol
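A hedged sketch of the Monte Carlo idea from that side note (my example; the integrand and sample count are arbitrary): estimate a definite integral as the average of the integrand at uniformly random points, a computation that is embarrassingly parallel and so maps well onto GPUs.

```python
# Monte Carlo estimate of the integral of exp(-x^2) over [0, 1]:
# E[f(U)] for U ~ Uniform(0, 1) equals the integral, so the sample mean
# of f at random points approximates it.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.exp(-x * x)   # integrand, chosen arbitrarily

n = 1_000_000
samples = rng.uniform(0.0, 1.0, size=n)
estimate = f(samples).mean()

print(estimate)   # ~0.7468, close to the known value sqrt(pi)/2 * erf(1)
```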
I love the logical aspect and the visualization aspect like writing down a formula and then visualizing/imagining a graph of all possible curves which that formula represents given all possible values of x or z. You can visualize things that you cannot draw or even render on a computer.
I also enjoy visualizing powers and logarithms. Math doesn't have to be abstract. To me, it feels concrete.
My problem with math is all to do with syntax and syntax reuse in different contexts; even the way mathematicians describe problems seems ambiguous to me... IMO, the way engineers describe problems is clearer.
Sometimes I feel like those who are good at math are kind of biased towards certain assumptions. Their bias makes it easier for them to fill in gaps in mathematical language and symbolism... But I would question whether this bias, this approach to thinking is actually a universally good thing in the grand scheme of things. Wouldn't math benefit from more neurodiversity?
I remember at school, I struggled in maths at some points because I could see multiple interpretations of certain statements and as the teacher kept going deeper, I felt like I had to execute a tree search algorithm in my mind to figure out what was the most plausible interpretation of the lesson. I did much better at university because I was learning from books and so I could pause and research every time I encountered an ambiguous statement. I went from failing school math to getting distinction at university level maths.
Why is a program needed? What constraints lead to the existence of that need? Why didn't human interactions need a program or thinking in math? Why do computers use 0s and 1s? You need to start there and systematically derive other concepts that are tightly linked and have a purpose driven by the pre-existing context.
I also disagree with the broader implication that the languages of programming and mathematics (i.e., logic) are inherently distant. On the contrary, they share deep structural isomorphisms as evidenced by the Curry–Howard correspondence.
I think a corollary to this is that we should teach math with code.
> Another limitation of programming languages is that they are poor abstraction tools
> Programming languages are implementation tools for instructing machines, not thinking tools for expressing ideas
Machine code is an implementation tool for instructing machines (and even then there's a discussion to be had about designing machines with instruction sets that map more neatly to the problems we want to solve with them). Everything we've built on top of that, from assembly on up, is an attempt to bridge the gap from ‘thinking tools for expressing ideas’.
The holy grail of programming languages is a language that seamlessly supports expressing algorithms at any level of abstraction, including or omitting lower-level details as necessary. Are we there yet? Definitely not. But to give up on the entire problem and declare that programming languages are inherently unsuitable for idea expression is really throwing the baby out with the bathwater.
As others in the comments have noted, it's a popular and successful approach to programming today to just start writing code and seeing where the nice structure emerges. The feasibility of that approach is entirely thanks to the increasing ability of programming languages to support top-down programming. If you look at programming practice in the past, when the available implementation languages were much lower-level, software engineering was dominated by high-level algorithm design tools like flowcharts, DRAKON, Nassi–Shneiderman diagrams, or UML, which were then painstakingly compiled by hand (in what was considered purely menial work, especially in the earlier days) into computer instructions. Our modern programming languages, even the ‘low-level’ ones, are already capable of higher levels of abstraction than the ‘high-level’ algorithm design tools of the '50s.
Was it about how to design a profitable algorithm? Was it about how to design the bot? Was it about understanding whether the results from the bot were beneficial?
If it's that, wouldn't I just backtest the algorithm to see the profit changes on real historical data?
> Definition: We say that the merchant rate is favorable iff the earnings are non-negative for most sets of typical purchases and sales. r'(t) is favorable iff e(P, S) >= 0.
If I understand the definition correctly, this is likely even wrong, because you could have an algorithm that is slightly profitable 90% of the time, but 10% of the time loses everything.
A correct solution, to me, is to simulate large numbers of trades on data as realistic as you can possibly get, and then consider the overall sum of the results, not the ratio of positive to negative trades.
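A quick toy simulation of that point (hypothetical payoff numbers, not real trading data): a strategy can win 90% of its trades and still lose money overall, which is why the summed result matters and the win ratio doesn't.

```python
# 90% of trades gain a little, 10% lose a lot: positive win rate,
# negative expected value (made-up numbers for illustration).
import numpy as np

rng = np.random.default_rng(42)

n_trades = 100_000
wins = rng.random(n_trades) < 0.90          # ~90% winners
pnl = np.where(wins, +1.0, -15.0)           # small gains, occasional big loss

print("win rate: ", wins.mean())            # ~0.90
print("total P&L:", pnl.sum())              # strongly negative
# Expected value per trade: 0.9 * 1 + 0.1 * (-15) = -0.6
```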
I think if you carefully read the author, many of you might be surprised that you're already using math as your frame of thinking. The symbols and rigor can help, but mathematical thinking is all about abstraction. It is an incredibly creative process. But I think sometimes we're so afraid of abstraction that we just call it by different names. Everything we do in math or programming is abstract; it's not like the code is real. There are different levels of abstraction and different types of abstraction, but all these things have their uses and advantages in different situations.
He explained this in his first book Elements of Programming (now freely available at https://www.elementsofprogramming.com/) and then simplified the basic ideas into the above book. In his interviews he often mentions George Chrystal's Algebra books as foundational. These are the ideas that he used to implement STL in C++.
Also related (maybe?) is Paul Halmos and Steven Givant's book Logic as Algebra. MAA review at https://old.maa.org/press/maa-reviews/logic-as-algebra
1 more comment available on Hacker News