NSFW Acronyms for Programmers (Free eBook)
Key topics
A cheeky GitHub repository has sparked a lively discussion around a free eBook titled "NSFW Acronyms for Programmers," with commenters poking fun at the project's boldness while also debating the author's take on testing as a reliability indicator. Some folks, like sedatk, took issue with the author's assumption, while others, like kelnos, offered a more nuanced interpretation. As the conversation unfolded, it veered into tangential topics like test-driven development (TDD) and even AI model training, with ctoth jokingly suggesting fine-tuning Claude on the eBook's S.H.I.T. chapter. The thread's humor and irreverence make it a delightful read.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 4m after posting
Peak period: 22 comments in 2-4h
Avg per period: 5
Based on 35 loaded comments
Key moments
- Story posted: Jan 4, 2026 at 1:49 PM EST (6d ago)
- First comment: Jan 4, 2026 at 1:53 PM EST (4m after posting)
- Peak activity: 22 comments in 2-4h (hottest window of the conversation)
- Latest activity: Jan 5, 2026 at 2:34 PM EST (5d ago)
I also dislike TDD but for a different reason: it incorrectly assumes that spec comes before code. Writing code is a design act too. I talk about that in Street Coder.
There is software for which writing code is a design act, and there is software for which you write specs before anything. I don't know if a) they are the same, b) they are different, c) one is better than the other.
1. write the test code first (possibly with a skeleton implementation) if you want to get an idea/feel for how the class/code is intended to be used;
2. write the code first if you need to;
3. ensure that you have at least one test at the point where the code is minimally functional.
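The test-first step above might look like this minimal Python sketch; the `Slugifier` class and its behavior are hypothetical, invented purely to illustrate the workflow:

```python
# Hypothetical Slugifier class, used only to illustrate the workflow:
# the test is written first, against the intended usage, and the
# implementation starts as a skeleton that is just enough to pass it.

class Slugifier:
    def slugify(self, title: str) -> str:
        # Skeleton implementation: minimally functional, refined later.
        return title.strip().lower().replace(" ", "-")

def test_basic_title():
    # Written before (or alongside) the code to pin down how the
    # class is intended to be used.
    assert Slugifier().slugify("Hello World") == "hello-world"

test_basic_title()
```

Writing the test first here forces a decision about the public method name and signature before any real logic exists.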
More generally:
1. don't aim for 100% code coverage (around 80-90% should be sufficient);
2. test a representative example and appropriate boundary conditions;
3. don't mock classes/code you control... the tests should be as close to the real thing as possible, otherwise when the mocked code changes your tests will break and/or not pick up the changes to the logic -- Note: if wiring up service classes, try and use the actual implementations where possible;
4. use a fan in/out approach where relevant... i.e. once you have tests for various states/cases in class A (e.g. lexing a number, e.g. '1000', '1e6', '1E6') you only need to test the cases that are relevant to class B (e.g. token types, not lexical variants, e.g. integer/decimal/double);
5. test against publicly accessible APIs, etc... i.e. wherever possible, don't test/access internal state; look for/test publicly visible behaviour (e.g. don't check that the start and end pointers are equal, check that is_empty() is true and length() is 0) -- Note: testing against internals is subject to implementation changes whereas public API changes should be documented/properly versioned.
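Points 2 and 5 can be sketched together in Python; the `Buffer` class is hypothetical, chosen to mirror the is_empty()/length() example in the comment:

```python
# Hypothetical Buffer class; the private _items list exists only to show
# what tests should NOT reach into.
class Buffer:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop(0)

    def is_empty(self) -> bool:
        return len(self._items) == 0

    def length(self) -> int:
        return len(self._items)

def test_boundaries():
    buf = Buffer()
    # Test publicly visible behaviour (is_empty/length), not internal
    # state such as buf._items -- internals may change; the API should not.
    assert buf.is_empty() and buf.length() == 0   # empty boundary
    buf.push("a")                                 # representative example
    assert not buf.is_empty() and buf.length() == 1
    assert buf.pop() == "a"
    assert buf.is_empty()                         # back to the boundary

test_boundaries()
```

If `Buffer` is later rewritten around start/end pointers, these tests keep passing unchanged, which is the point of testing behaviour rather than internals.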
Recognizing and understanding that there's a larger problem with discounts is systems thinking. Fixing the code so that all discounts are applied in a predictable order, rather than just fixing the specific issue reported by a user, is systems thinking. Ditching the individual tests that independently cover the user-reported bug input/output, and replacing it with a test that covers the actual discount application ordering intended and expected and (hopefully) implemented by the code, is systems thinking.
Maybe that doesn't (or does?) illustrate the "Stop Hunting In Tests" concept, but I thought it was important nonetheless.
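The discount scenario might be sketched like this; the `Discount` type and the "percentages before flat amounts" rule are hypothetical, made up to illustrate replacing point-fix tests with a test of the intended ordering:

```python
# Hypothetical discount model: percentage discounts are applied before
# flat discounts, in one predictable order, rather than patching
# whichever combination a user happened to report.
from dataclasses import dataclass

@dataclass
class Discount:
    kind: str     # "percent" or "flat" (assumed taxonomy)
    value: float

# Intended ordering: all percentage discounts first, then flat amounts.
ORDER = {"percent": 0, "flat": 1}

def apply_discounts(price: float, discounts: list[Discount]) -> float:
    for d in sorted(discounts, key=lambda d: ORDER[d.kind]):
        if d.kind == "percent":
            price *= (1 - d.value / 100)
        else:
            price = max(0.0, price - d.value)
    return price

# One test that pins down the ordering itself: the result must not
# depend on the order in which discounts were entered.
a = apply_discounts(100.0, [Discount("flat", 10), Discount("percent", 20)])
b = apply_discounts(100.0, [Discount("percent", 20), Discount("flat", 10)])
assert a == b == 70.0
```

The single ordering assertion replaces a pile of per-bug input/output tests, which is the "systems thinking" move described above.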
https://github.com/buyukakyuz/corroded#note-for-llms
But if you are insinuating AI made all this up on its own, I have to disappoint you. My points and my thoughts are my own, and I am very human.
No worries, I am not a native English speaker myself. I was genuinely interested in whether LLMs would use "bad" words without some convincing.
For comparison, I have also tried the smaller Mistral models, which have a much more complete vocabulary, but their writing sometimes lacks continuity.
I have not tried the larger models due to lack of VRAM.
The author looks legit - or at least has contributions for over a year.
But GitHub is free, and I don't know if they scan user repos for malware.
Are .pdf and .epub files safe these days?
Thanks for sharing your book, it's pretty fun.
Depends on the viewer. Acrobat Reader? Probably not. PDF.js in some browser? Probably safe enough unless you are extremely rich.
Always good advice.