Look Out for Bugs
Posted 4 months ago · Active 4 months ago
Source: matklad.github.io · Tech · story
Tone: calm/mixed · Debate intensity: 60/100
Key topics
Programming
Debugging
Code Quality
The article 'Look Out for Bugs' discusses the importance of carefully reading code to identify bugs, sparking a discussion on the practicality and effectiveness of this approach among developers.
Snapshot generated from the HN discussion
Discussion Activity
- First comment: 4d after posting
- Peak period: 19 comments in 84-96h
- Avg / period: 9.3
- Comment distribution: 28 data points
Based on 28 loaded comments
Key moments
- 01 Story posted: Sep 4, 2025 at 10:59 AM EDT (4 months ago)
- 02 First comment: Sep 8, 2025 at 7:16 AM EDT (4d after posting)
- 03 Peak activity: 19 comments in 84-96h (hottest window of the conversation)
- 04 Latest activity: Sep 9, 2025 at 3:29 AM EDT (4 months ago)
ID: 45128055 · Type: story · Last synced: 11/20/2025, 12:53:43 PM
It's a lot easier and better to use profiling in general, but that doesn't mean I never read code and think "hmm, that's going to be slow".
I'm not saying you can't spot naive performance pitfalls. But how do you spot cache misses reading the code?
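A minimal sketch of why cache misses are hard to spot by reading: the two loops below are textually almost identical and compute the same sum, but one walks memory sequentially while the other strides across it, touching a different cache line on nearly every access. (The names and sizes here are illustrative; in CPython, interpreter overhead largely masks the effect, which is far more pronounced in a compiled language.)

```python
import time
from array import array

N = 1_000
# One flat row-major buffer standing in for an N x N matrix of doubles.
matrix = array("d", range(N * N))

def sum_rows() -> float:
    # Row-major walk: adjacent iterations touch adjacent memory.
    total = 0.0
    for i in range(N):
        base = i * N
        for j in range(N):
            total += matrix[base + j]
    return total

def sum_cols() -> float:
    # Column-major walk: each access strides N elements ahead,
    # so consecutive reads land in different cache lines.
    total = 0.0
    for j in range(N):
        for i in range(N):
            total += matrix[i * N + j]
    return total

t0 = time.perf_counter(); r = sum_rows(); t1 = time.perf_counter()
c = sum_cols(); t2 = time.perf_counter()
assert r == c  # same numbers summed, just in a different order
print(f"row-major: {t1 - t0:.3f}s, column-major: {t2 - t1:.3f}s")
```

Both functions are "obviously" linear scans in the source; only knowledge of the memory layout reveals that one of them thrashes the cache.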
This sounds like every technical job interview.
Your perception is still warranted. It was clear enough to me what all of that meant, but I was well aware that static is an awkward, highly overloaded term, and I already had the sense that all this boilerplate is a negative.
Sounds crazy, but I usually end up doing that, anyway, as I work.
Another tip that has helped me is to add documentation to inline code after it's written (I generally add some, but not much, inline as I write it; most of my initial documentation is headerdoc). The process of reading the code helps cement its functionality in my head, and I also find bugs, just like he mentions.
[0] https://writingsolidcode.com/
> Sounds crazy, but I usually end up doing that, anyway, as I work.
This doesn't sound crazy to me. On the contrary, it sounds crazy not to do it.
How many bugs do we come across where we ask rhetorically, "Did this ever work?" It becomes obvious that the programmer never ran the code, otherwise the bug would have shown up immediately.
1. One was soft real-time, and stepping through the code in a debugger would first mean having a robust way to play back data input on a simulated time tick. Doing it on live sensor data would mean the code saw nonsense.
2. One requires a background service to run as root. Attaching a debugger to that is surely possible but not any fun.
3. Attaching a debugger to an Android app is certainly possible, but I have never practiced it, and I'm not sure if it can be done "in the field". Suppose I have a bug that only reproduces under certain conditions that are hard to simulate.
You're not wrong. But maybe bugs build up when programmers don't want to admit that we aren't really able to run our code, and managers don't want to give us time to actually run it.
Writing Solid Code is over 30 years old, and has techniques that are still completely relevant, today (some have become industry standard).
Reading it was a watershed in my career.
The author doesn't grasp how much of what they've written amounts to flexing their own outlier intelligence; they must sincerely believe the average programmer is capable of juggling a complex 500 line program in their heads.
You can manipulate values in a debugger to make it go down any code path you like.
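For concreteness, here is what that looks like with Python's pdb (the function and status code are made up for illustration): break before the branch, overwrite the value, and the rare path runs without you having to reproduce the rare input.

```python
def handle_response(status: int) -> str:
    # Rare branch: only reached when the server returns 503.
    if status == 503:
        return "retry"
    return "ok"

# Forcing the rare path in pdb, without a misbehaving server:
#   $ python -m pdb app.py
#   (Pdb) break handle_response
#   (Pdb) continue
#   (Pdb) status = 503      # overwrite the argument before the branch
#   (Pdb) continue          # execution now takes the retry path
```

The same trick works in gdb/lldb for compiled code; the point is that the debugger lets you exercise paths your test inputs never reach.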
Now some people might be able to fit more than Miller's number (7±2) there and juggle concepts with 20 interconnected entities, but this is mostly done by people who have this as their main work / business logic.
These articles mix up same-form dimensional mapping like audio or visual with distinct data; it's similar to why it's easy to replicate audio and images, but not olfactory / smell. Your nose picks up millions of different molecules, and each receptor locks onto a certain one.
Thinking you can find general rules here is exactly why LLMs seem to work but can never be inductive - they map similarities in higher dimensional space, not reasoning. And the same mix up happens here: You map this code to a space that feels home to you, but it will not apply to reading another purpose software outside your field, a different process pipeline, language or form.
If your assumption were correct, all humans would need to train on is reading assembly, and then magically all bugs would resolve!
Maybe if you want to understand code with both hemispheres, map it to a graph; but trying to make strategies from spatial recognition work for code is like trying to make sense of your traffic law rules by the length of their paragraphs.
There's a well-known quote: "Make the program so simple, there are obviously no errors. Or make it so complicated, there are no obvious errors." A large application may not be considered "simple" but we can minimize errors by making it a sequence of small bug-free commits, each one so simple that there are obviously no errors. I first learned this as "micro-commits", but others call it "stacked diffs" or similar.
I think that's a really crucial part of this "read the code carefully" idea: it works best if the code is made readable first. Small readable diffs. Small self-contained subsystems. Because obviously a million-line pile of spaghetti does not lend itself to "read carefully".
Type systems certainly help, but there is no silver bullet. In this context, I think of type systems a bit like AI: they can improve productivity, but they should not be used as a crutch to avoid reading, reasoning, and building a mental model of the code.
Interestingly there's a post from the last day arguing that "Making invalid states unrepresentable" is harmful[0], which I don't think I agree with. My experience is that bugs hide in crevices created by having invalid states remain representable, and are often caused by the increased cognitive load of not having small reasoning scopes. In terms of reading code to find bugs, having fewer valid states and fewer intersections of valid state makes this easier. With well-defined and constrained interfaces you can reason about more code because you need to keep fewer facts in your head.
electric_muse's point in a sibling comment "The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection." is a good case study in this too. Having poorly scoped state boundaries means this reasoning is hard; here too, making invalid states unrepresentable and interfaces constrained helps.
0: https://news.ycombinator.com/item?id=45164444
12 more comments available on Hacker News