Code Review Can Be Better
Posted 5 months ago · Active 5 months ago
tigerbeetle.com · Tech · Story · High profile
calm · positive
Debate: 40/100
Key topics
Code Review
Git
Software Development
The article discusses ways to improve code review processes, and the discussion revolves around various tools, workflows, and best practices for effective code review.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 47m after posting
Peak period: 62 comments (6-12h)
Avg / period: 17.8
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
01. Story posted: Aug 20, 2025 at 7:10 PM EDT (5 months ago)
02. First comment: Aug 20, 2025 at 7:57 PM EDT (47m after posting)
03. Peak activity: 62 comments in the 6-12h window, the hottest stretch of the conversation
04. Latest activity: Aug 24, 2025 at 8:13 PM EDT (5 months ago)
ID: 44967469 · Type: story · Last synced: 11/20/2025, 8:23:06 PM
VSCode is open source, and there are plenty of IDEs...
I guess I'm just focused on different lock-in concerns than you are.
I suspect that since this is possible with VSCode/GitHub, it's probably extensible to other providers' editors.
I didn't get why they stick with the requirement that the review is a single commit. To keep the git-review implementation simple?
I wonder if an approach where every reviewer commits their comments/fixes to the PR branch directly would work as well as I think it would. One might not even need any additional tools to make it convenient to work with. This idea seems like a hybrid of the traditional GitHub flow and the way Linux development is organized via mailing lists and patches.
I've had team members edit a correction as a "suggestion" comment, and I can approve it to be added as a commit on my branch.
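For reference, a "suggestion" is just a fenced block in the review comment whose body replaces the commented line when accepted; GitHub and GitLab both support this syntax (the replacement line below is only an illustration):

```suggestion
echo 'must start with a clean tree!' >&2
```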
Yeah, that is pretty weird. If 5 people review my code, do they all mangle the same review commit? We don't do that with code either; it feels like it's defeating the point.
Review would need to be commits on top of the reviewed commit. If there are 5 reviews of the same commit, then they all branch out from that commit. And to address them, there is another commit which also lives beside them. Each commit-change process becomes a branch, with stacked commits being branches chained on top of one another. Each of the commits in those chained branches then has comment commits attached. Those comment commits could even form chains if a discussion is happening. Then, when everybody is happy, each branch gets squashed into a single commit and those then get rebased onto the main branch.
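A rough sketch of that topology in plain git, assuming review comments ride along as empty commits (all names and SHAs here are hypothetical):

```
git checkout -b review/alice abc1234   # abc1234 is the reviewed commit
git commit --allow-empty -m "review: extract this into a helper?"
git checkout -b review/bob abc1234     # a second review branches independently
git commit --allow-empty -m "review: possible off-by-one in the loop"
# Once everyone is happy, the change branch is squashed and rebased onto main.
```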
You likely want to make new commits for that though to preserve the discussions for a while. And that's the crux: That data lives outside the main branch, but needs to live somewhere.
To be fair you don't know if one line change is going to absolutely compromise a flow. OSS needs to maintain a level of disconnect to be safe vs fast.
I was on the lookout for the best "pre-commit" review tool and zeroed in on Magit, gitui, and Sublime Merge.
I am not an Emacs user, so I'll have to learn this.
I suggest `git-precom` for conciseness.
This is eerily similar to how I review large changes that do not have a clear set of commits. The real problem is working with people who don't realize that if you don't break work down into small, self-contained units, everybody else is going to have to do it individually. Nobody can honestly say they can review tons of diffs to a ton of files and truly understand what they've reviewed.
The whole is more than just the sum of the parts.
```
review () {
    if [[ -n $(git status -s) ]]; then
        echo 'must start with clean tree!'
        return 1
    fi
}
```

...as a PR review tool in neovim. It's basically VSCode's diff tool UI-wise, but it integrates with vim's inbuilt diff mode.
Also, `git log -p --function-context` is very useful for less involved reviews.
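For example, to read recent changes to one file with whole-function context (the path is illustrative):

```
# -p shows each commit's patch; --function-context widens every hunk
# to the enclosing function, which makes skimming much easier.
git log -p --function-context -- src/parser.c
```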
When we started graphite.dev years ago that was a workflow most developers had never heard of unless they had previously been at FB / Google.
Fun to see how fast code review can change over 3-4yrs :)
One thing I've found at $DAYJOB is that I have to set the PR's "base" branch to "main" before I push updated commits (and then switch it back to the parent after), otherwise CI thinks my PR contains everything on main and goes nuts emailing half the company to come review it.
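If you're on GitHub, a sketch of that dance with the `gh` CLI (the PR number and branch names are made up):

```
gh pr edit 1234 --base main            # point the PR at main before pushing
git push
gh pr edit 1234 --base parent-feature  # restore the real parent afterwards
```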
I've played with git town which is great for what it is.
But at $DAYJOB we are now all on graphite and that stacking is super neat. The web part is frustratingly slow, but they got stacking working really well.
The worst offender is a Slack notification[0] that deep-links into a PR I need to review.
It loads in stages, and the time from click to first diff is often so frustratingly long that I end up copying the PR ID and going to GitHub instead.
Sometimes I give up while Graphite is still loading and use the shortcut C-G to go to GitHub.
The second issue might be the landing page. I love what it shows compared to GitHub, but it's often slow to display loading blocks for things that haven’t even changed. Reloads are usually fast after that — until sometime later, maybe a day, when it slows down again.
I don't know why it feels worse than Linear, even though there are clearly many similarities in how it's supposed to load.
The guest instance isn't so much about loading speed as usage speed.
When I submit a stack of PRs, I get a nice carousel to fill in PR titles/descriptions and choose where to publish each PR. What’s missing for me there is access to files and diffs, so I can re-review before publishing. I often end up closing it and going back to the PR list instead.
[0] Thank God for those! You've made them much more useful than GitHub's. Also, the landing page is far more helpful in terms of what’s displayed.
And I very much appreciate both the ambition and results that come from making it interop with PRs; it's a nightmare problem and it's pretty damned amazing it works at all, let alone most of the time.
I would strongly lobby for a prescriptive mode where Graphite initializes a repository with hardcore settings that would allow it to make more assumptions about the underlying repo (merge commits, you know the list better than I do).
I think that's what could let it be bulletproof.
It seems non-obvious that you would have to prohibit git commands in general, they're already "buyer beware" with the current tool (and arcanist for that matter). Certainly a "strict mode" where only well-behaved trees could interact with the tool creates scope for all kinds of performance and robustness optimizations (and with reflog bisecting it could even tell you where you went off script).
I was more referring to how the compromises that gt has to make to cope with arbitrary GitHub PRs seem a lot more fiddly than directly invoking git, but that's your area of expertise and my anecdote!
Broad strokes I'm excited for the inevitable decoupling of gt from GitHub per se, it was clearly existential for zero to one, but you folks are a first order surface in 2025.
Keep it up!
I'd recommend giving it a try to see what it's like. The `gt`/onboarding tour is pretty edifying and brief.
It's likely that you'll find that `gt` is "enabling" workflows that you've already found efficient solutions for, because it's essentially an opinionated and productive subset of git+github. But it comes with some guardrails and bells and whistles that makes it both (1) easier for devs who are new to trunk-based dev to grok and (2) easier for seasoned devs to do essentially the same work they were already doing with fewer clicks and less `git`-fu.
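For a sense of the loop, here's a rough sketch; these command names are an assumption from memory of Graphite's CLI and may differ by version:

```
gt create -m "add feature flag"   # create a new stacked branch + commit
gt create -m "gate new behavior"  # stack another change on top
gt submit --stack                 # open or update PRs for the whole stack
gt sync                           # pull main and restack after merges
```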
What can be a very nice experiment to try something new can easily become a security headache to deal with.
Frequent, small changes are really a good practice.
Then we have things like trunk-based development and continuous integration.
I think stacked PRs are a symptom of the issues the underlying workflow (feature branches with blocking reviews) has.
Stacked pull requests can be an important tool to enable “frequent, small changes” IMO.
Sure, I can use a single pull request and a branch on top of that, but then it's harder for others to leave notes on the future, WIP steps.
A common situation is that during code review I create a few alternative WIP changes to communicate to a reviewer how I might resolve a comment; they can do the same, and share it with me. Discussion can fork to those change sets.
Gerrit is much closer to my desired workflow than GitHub PRs.
But, to me, "creating a few alternative WIP changes to communicate to a reviewer" indicates an issue with code reviews. I don't think code reviews are the time to propose alternative implementations, even if you have a "better" idea, unless the code under review is broken.
The //actually better// workflows stacking enables are the same sort of workflows that `git add -p`, `git commit --fixup` and `git rebase` enable, just at a higher level of abstraction (PRs vs commits).
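For the unfamiliar, the commit-level version of that workflow looks like this (the SHA is a placeholder):

```
git add -p                       # stage only the hunks that belong together
git commit --fixup=abc1234       # mark this commit as a fix for abc1234
git rebase -i --autosquash main  # fold the fixup into its target commit
```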
You can "merge as a stack" as you imply, but you can also merge in sub-chunks, or make a base 2-3 PRs in a stack that 4 other stacks build on top of. It allows you to confidently author the N+1th piece of work that you'd normally "defer" doing until after everything up to N has been reviewed.
An example: I add a feature flag, implement a divergent behavior behind a feature flag gate, delete the feature flag and remove the old behavior. I can do this in one "stack", in which I deploy the first two today and the last one next week.
I don't have to "come back" to this part of the codebase a week from now to implement removing the flag, I can just merge the last PR that I wrote while I had full context on this corner.
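A minimal sketch of that stack with plain git (branch names hypothetical); the first two merge today, the last one next week:

```
git checkout -b add-flag main              # PR 1: introduce the feature flag
# ...commit the flag...
git checkout -b gate-behavior add-flag     # PR 2: new behavior behind the flag
# ...commit the gated behavior...
git checkout -b remove-flag gate-behavior  # PR 3: delete flag + old code path
# ...commit the removal; merge this one next week...
```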
In theory you can do all of this stuff with vanilla git and GitHub. In non-stacking orgs, I'd regularly be the only person doing this, because I was the only one comfortable enough with git (and stacking) for it to not be toooo big a burden to my workflow. Graphite (and other stacking tools) make this workflow more accessible and intuitive to people, which is a big net win for reviewers imo.
Empirically this is not true if you also control for review quality. If your code review is a rubber stamp, then sure, mega PRs win, because you put up a PR and then merge. But why review then?
However, code review quality goes up when you break things down into smaller commits because the code reviewer can sanity check a refactor without going over each line (pattern matching) while spending more time on other PRs that do other things.
And if you are breaking things down, then stacked PRs are definitely faster in merges to master per unit of time. I introduced Graphite to my team, and whereas before we struggled to land a broken-down PR of ~5 commits in one week, we'd regularly land 10+ commit stacks every few days. Most of the changes in a larger body of work got approved and merged quickly (often the commit order isn't even important, so you can reorder the small commits), conditional approvals (i.e. cleanups needed) didn't require further follow-ups from the reviewer, and longer-discussion PRs could stay open without blocking progress, letting both developer and reviewer focus their attention there.
Additionally, graphite is good about automatically merging a group of approved small individual commits from a larger set of changes automatically without you babysitting which is infinitely easier than managing this in GitHub and merging 1 commit, rebasing other PRs after a merge etc.
Those are the only models I can think of, and it's weird to advocate for a variable-time asynchronous process in the middle of your code or review loops. Seems like you're just handicapping your velocity for no reason.
Stacked PRs are precisely about factoring out small changes into individually reviewable commits that can be reviewed and landed independently, decoupling reviewer and developer while retaining good properties like small commits that the reviewer is going to do a better job on, larger single purpose commits that the reviewer knows to spend more time on without getting overwhelmed dealing with unrelated noise, and the ability to see relationships between smaller commits and the bigger picture. Meanwhile the developer gets to land unobtrusive cleanups that serve a broader goal faster to avoid merge conflicts while getting feedback quicker on work while working towards a larger goal.
The only time stacked commits aren't as useful is for junior devs who can't organize themselves well enough to understand how to do this well (it's an art you have to intentionally practice) and don't generally have a good handle on the broader scope of what they're working towards.
But combine it with TDD & pairing and it becomes a license to deliver robust features at warp speed.
So I’m really hoping something like Graphite becomes open-source, or integrated into GitHub.
https://abhinav.github.io/git-spice/
Best AI code review, hands down. (And I’ve tried a few.)
I'm not sure there's even a tech solution to this class of problems; it comes down to culture. LGTMs exist because they satisfy the "letter of the law" but not the spirit. It's a classic bureaucracy problem combined with classic engineer problems. It feels like there are simple solutions, but LGTMs are a hack: you try to solve this by requiring reviews, and LGTMs are just a hack around that. Fundamentally, you just can't measure the quality of a review[0].

Us techie types and bureaucrats have a similar failure mode: we like measurements. But a measurement of any kind is meaningless without context. Part of the problem is that businesses treat reviewing as a second-class citizen. It's not "actual work", so it shouldn't be given preference, which excuses the LGTM-style reviews. Us engineers are used to looking at metrics without context and get lulled into a false sense of security, or convince ourselves that we can find a tech solution to this stuff.

I'm sure someone's going to propose an LLM reviewer, and hey, it might help, but it won't address the root problems. The only way to get good code reviews is for them to be done by someone capable of writing the code in the first place. Until LLMs can do all the coding, they won't make this problem go away, even if they can improve upon the LGTM bar. But that's barely a bar; it's sitting on the floor.

The problem is cultural. Code reviews are just as essential to the process as writing the code itself. You'll notice that companies that do good code review already treat them that way. Then it is about making this easier to do! Reducing friction is something that should happen and that we should work on, but you could make it all trivial and it still wouldn't make code reviews better if they aren't treated as first-class citizens.

So while I like the post and think the tech here is cool, you can't engineer your way out of a social problem. I'm not saying "don't solve engineering problems that exist in the same space", but I'm making this comment because I think it is easy to ignore the social problem by focusing on the engineering problem(s). I mean, the engineering problems are magnitudes easier lol. But let's be real, avoiding this and similar problems only adds debt. I don't know what the solution is[1], but I think we need to talk about it.

[0] Then there's the dual of LGTM: code reviews that exist and are detailed but petty and overly nitpicky. This is also hacky, but in a very different way. It is a misunderstanding of what review (or quality control) is. There's always room for criticism, as nothing you do, ever, will be perfect. But finding problems is the easy part; the hard part is figuring out which problems are important and how to properly triage them. It doesn't take a genius to complain, but it does take an expert to critique. That's why this dual can be even more harmful: it slows progress needlessly and encourages the classic nerdy petty bickering over inconsequential nuances or over unknowns (as opposed to important nuances and known unknowns). If QC sees its job as finding problems, and/or their bosses measure performance by how many problems they find, there's a steady-state solution where the devs write code with intentional errors that QC can pick up on: QC fulfills its metric of finding issues, and the issues are easy to fix. This also matches the letter but not the spirit. It's why AI won't be able to step in without having the capacity to write the code in the first place, which solves the entire problem by making it go away (even if agents are doing this process).

[1] Nothing said here actually presents a solution. Yes, I say "treat them as first-class citizens", but that's not a solution. Anyone claiming that this, or anything similar, is a solution is refusing to look at all the complexities that exist. It's as obtuse as saying "creating a search engine is easy: all you need to do is index all (or most) of the sites across the web." There's so much more to the problem. It's easy to oversimplify these types of issues, which is a big part of why they still exist.
I've been out of the industry for a while but I felt this way years ago. As long as everybody on the team has coding tasks, their review tasks will be deprioritized. I think the solution is to make Code Reviewer a job and hire and pay for it, and if it's that valuable the industry will catch on.
I would guess that testing/QA followed a similar trajectory where it had to be explicitly invested in and made into a job to compete for or it wouldn't happen.
As for prioritization... isn't it enough knowing that other people are blocked on your review? That's what incentivizes me to get to the reviews quickly.
I guess it's always going to depend a lot on your coworkers and your organization. If the culture is more about closing tickets than achieving some shared goal, I don't know what you could do to make things work.
If your job description is reviewing the codebase and every change that goes into it, you will be actively engaged. Whoever the most fervent auditor of new packages/libraries is on the team, they're probably de facto doing this role. Whoever has the deepest knowledge actually, just let them observe/edit.
I also think there's benefits to review being done by devs. They're already deep in the code and review does have a side benefit of broadening that scope. Helping people know what others are doing. Can even help serve as a way to learn and improve your development. I guess the question is how valuable these things are?
AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.
Educational establishments MUST prioritize teaching code review skills, and other high-level leadership skills.
Debatable; with the same experience, it depends on the language, existing patterns, code base, base prompts, and complexity of the task.
For human written code, shape correlates somewhat with correctness, largely because the shape and the correctness are both driven by the human thought patterns generating the code.
LLMs are trained very well at reproducing the shape of expected outputs, but the mechanism is different than humans and not represented the same way in the shape of the outputs. So the correlation is, at best, weaker with the LLMs, if it is present at all.
This is also much the same effect that makes LLMs convincing purveyors of BS in natural language, but magnified for code: people are used to others bluffing with shape in natural language, whereas churning out high-volume, well-shaped, crappy-substance code is not a particularly useful skill for humans to develop, and so is not frequently encountered. Prior to AI code, reviewers simply weren't faced with it much.
If you're going to use AI, you have to be even more diligent and self-review your code; otherwise you're being a shitty teammate.
AI assisted commits on my team are "precise".
It's also caused an uptick in inbound to dev tooling and CI teams since AI can break things in strange ways since it lacks common sense.
But it is just as unable to properly reason about anything slightly more complex as it is when writing code.
There just haven't been as many resources poured into improving AI code review as there have been into writing code.
And in the end the whole paradigm itself may change.
So where are your 3 startups?
I find that interesting. That has always been the case at most places my friends and I have worked at that have proper software engineering practices, companies both very large and very small.
> AI can already write very good code. I have led teams of senior+ software engineers for many years. AI can write better code than most of them can at this point.
I echo @ZYbCRq22HbJ2y7's opinion. For well defined refactoring and expanding on existing code in limited scope they do well, but I have not seen that for any substantial features especially full-stack ones, which is what most senior engineers I know are finding.
If you are really seeing that, then I would worry either about the quality of those senior+ software engineers or about the metrics you are using to assess the efficacy of AI vs. senior+ engineers. You don't have to show us any code: just tell us how you objectively came to that conclusion and what framework you used to compare them.
> Educational establishments MUST prioritize teaching code review skills
Perhaps more is needed but I don't know about "prioritizing"? Code review isn't something you can teach as a self-contained skill.
> and other high-level leadership skills.
Not everyone needs to be a leader and not everyone wants to be a leader. What are leadership skills anyway? If you look around the world today, it looks like many people we call "leaders" are people accelerating us towards a dystopia.
Shockingly, the best code review tool I've ever used was Azure DevOps.
Javascript at scale combined with teams that have to move fast and ship features is a recipe for this.
At least it's not Atlassian.
You might be thinking of Fisheye/Crucible, which were acquisitions, and suffered the traditional fate of being sidelined.
(You are 100% correct that Stash/Bitbucket Server has also been sidelined, but that has everything to do with their cloud SaaS model generating more revenue than selling self-hosted licenses. The last time I used it circa 2024, it was still way faster than Bitbucket Cloud though.)
Source: worked at Atlassian for a long time but left a few years ago.
I use it every day and don't have any issues with the review system, but to me it's very similar to github. If anything, I miss being able to suggest changes and have people click a button to integrate them as commits.
So I'm back to liking dev-ops and github code reviews identically!
When I started my career, no one did code review. I'm old.
At some point, my first company grew; we hired new people and started to offshore. Suddenly, you couldn't rely on developers having good judgement... or at least being responsible for fixing their own mess.
Code review was a tool I discovered and made mandatory.
A few years later, everyone converged on GitHub, PRs, and code review. What we were already doing now became the default.
Many, many years later, I work with a 100% remote team that is mostly experienced, and 75% or more of our work is writing code that looks like code we've already written. Most code review is low value. Yes, we do catch issues in review, especially with newer hires, but it's not obviously worth the delay of a review cycle.
Our current policy is to trust the author to opt-in for review. So far, this approach works, but I doubt it will scale.
My point? We have a lot of posts about code review and related tools and not enough about whether to review and how to make reviews useful.
I think it's easy to add processes under the good intention of "making the code more robust and clean", but I've never heard anyone discuss what this process costs in terms of the team's efficiency.
I'm not a fan of automatic syntax formatting but you can have some degree of pre-commit checks.
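As a minimal sketch of such a check, assuming a `make lint` target exists in the repo (hypothetical), a pre-commit hook could look like this:

```
#!/bin/sh
# .git/hooks/pre-commit: refuse the commit if lint fails.
# Bypass with `git commit --no-verify` when needed.
make lint || {
    echo "lint failed; commit aborted" >&2
    exit 1
}
```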
(There should be breakglass mechanisms to bypass code reviews, sure. Just the default should always be to require reviews)
1. It's easy to optimise for talented, motivated people in your team. You obviously want this, and it should be the standard, but you also want it to be the case that somebody who doesn't care about their work can't trash the codebase.
2. I find even people just leaving 'lgtm' style reviews for simple things, does a lot to make sure folks keep up with changes. Even if there's nothing caught, you still want to make sure there aren't changes that only one person knows about. That's how you wind up with stuff like, the same utility functions written 10 times.
The owner is allowed to make changes without review.
GitLab enables this - make the suggestion in-line which the original dev can either accept or decline.
It always seems as if the code review is the only time when all stakeholders really get involved and start thinking about a change. There may be some discussion earlier on in a Jira ticket or meeting, and with some luck someone even wrote a design spec, but there will still often be someone from a different team or distant part of the organization who only hears about the change when they see the code review. This includes me: I often only notice that some other team implemented something stupid because I suddenly get a notification that someone posted a code review for a part of the code that I watch for changes.
Not that I know how to fix that. You can't have everyone in the entire company spend time looking at every possible thing that might be developed in the near future. Or can you? I don't know. That doesn't seem to ever happen anyway. At university in the 1990s, in a course about development processes, there weren't only code reviews but also design reviews. That isn't something I ever encountered in the wild (in any formal sense), but I don't know if even a design review process would be able to catch all the things you would want to catch BEFORE starting to implement something.
Because in the software engineering world there is very little engineering involved.
That being said, I also think that the industry is unwilling to accept the slowness of a proper engineering process, for various reasons, including the non-criticality of most software and the possibility of amending bugs and errors on the fly.
Other engineering fields enjoy no such luxuries: the bridge either holds the train or it doesn't, you either nailed the manufacturing plant or there's little room for fixing, the plane's engine either works or it doesn't.
Different stakes and patching opportunities lend to different practices.
Writing code is the design phase.
You don't need a design phase to do design.
Will drop link to relevant video later.
However, many, probably half, of the people I work with, and most of those I've worked with overall for the last 25+ years (since after I dropped out), have an engineering degree. Especially the younger ones: this century there has been more focus on getting a degree, and fewer seem to drop out early to get a job like many of us did in my day.
So when American employers insist on giving me titles like "software engineer" I cringe. It's embarrassing really, since I am surrounded by so many that have a real engineering degree, and I don't. It's like if I dropped out of medical school and then people started calling me "doctor" even if I wasn't one, legally. It would be amazing if we could find a better word so that non-engineers like me are not confused with the legally real engineers.
And proper software developement definitely has engineering parts. Otherwise titles are just labels.
As an aside, I find your example of "doctor" amusing, because it's overloaded: many consider the term a synonym for physician, and that causes confusion with other types of doctors.
Definitely, making software can be engineering; most of the time it is not, not because of the nature of software but because of the characteristics of the industry and culture that surround it. And the argument in this article is not convincing (15 not-very-random engineers is not much to support the argument from "family resemblance").
Software is clearly different than "hardware", but it doesn't mean that other industries do not use experiment and iteration.
This is the talk on real software engineering: https://www.youtube.com/watch?v=RhdlBHHimeM
In the context of software vs other sub-disciplines, the big difference is in the cost of iterating and validating. A bridge has very high iteration cost (generally, it must be right first time) and validation is proven over decades. Software has very low iteration cost, so it makes much more sense to do that over lots of upfront design. Validation of software can also generally be implemented through software tools, since it's comparatively easy to simulate the running environment of the software.
Other disciplines like electronics live a little closer to a bridge, but it's still relatively cheap to iterate, so you tend to plan interim design iterations to prove out various aspects.
No, the big difference is that in the Engineering disciplines, engineers are responsible end-to-end for the consequences of their work. Incompetence or unethical engineers can and regularly do lose their ability to continue engineering.
It's very rare that software developers have any of the rigour or responsibilities of engineers, and it shows in the willingness of developers to write and deploy software which has real-world costs. If developers really were engineers, they would be responsible for those downstream costs.
That is by definition not engineering.
> Equally, there's plenty of examples of software where careful processes are in place to demonstrate exactly the responsibilities you discuss.
Software engineering of course exists, but 99%+ of software is not engineered.
I'm not sure the generally accepted definition of engineering makes any reference to taking responsibility: https://dictionary.cambridge.org/dictionary/english/engineer...
Way too general to be useful. By that definition the store clerk is an engineer (tool: cash register; problem solved: my lack of gummy bears), as are janitors swinging mops or automotive techs changing oil.
Engineering is applied science.
By that standard, doctors and hair stylists are also engineers, as are some chimps and magpies. I don't think it's a useful definition, it's far too broad.
People forget that software is used in those other disciplines. CFD, FEA, model-based design etc. help to verify ideas and design without building any physical prototype and burning money in the real lab.
You can do some strain and stress analysis on a virtual bridge to get a high degree of confidence that the real bridge will perform fine. Of course, then you need to validate it at all stages of development, and at the end perform final validation under weight.
The thing is that people building engines, cars, planes, sensors, PCBs, and bridges actually do so, largely because they are required to. If you give them the freedom not to, many of them will spare themselves the effort. And they understand the principles of the things they are working on. No one requires any of that from someone who glued together a few NPM packages with a huge JS front-end framework, and such a person may not even know anything about how HTTP works, how the browser handles the DOM, etc. It's like having a mechanical engineer who doesn't even understand the basic principles of dynamics.
There are industries that deal with the software (i.e. controls design) that have much higher degree of quality assurance and more validation tools, including meaningful quantitative criteria, so it clearly is not a matter of software vs hardware.
No, it really isn't. I don't know which amateur operation you've been involved with, but that is really not how things work in the real world.
In companies that are not entirely dysfunctional, each significant change to the system involves a design phase, which often includes reviews from stakeholders and involved parties, such as security reviews and data protection reviews. These tend to happen before any code is even written. This doesn't rule out spikes, but their role is to verify and validate requirements and approaches, and to allow new requirements to emerge to feed back into the actual design process.
The only place where cowboy coding has a place is in small refactoring, features and code fixes.
You need a high level design up-front but it should not be set in stone. Writing code and iterating is how you learn and get to a good, working design.
Heavy design specs up-front are a waste of time. Hence, the agile manifesto's "Working software over comprehensive documentation", unfortunately the key qualifier "comprehensive" is often lost along the way...
On the whole I agree that writing code is the design phase. Software dev. is design and test.
Yes, you need a design that precedes code.
> Writing code and iterating is how you learn and get to a good, working design.
You are confusing waterfall-y "big design upfront" with having a design.
It isn't.
This isn't even the case in hard engineering fields such as aerospace where prototypes are used to iterate over design.
In software engineering fields you start with a design and you implement it. As software is soft, you do not need to pay the cost of a big design upfront.
I do not and I have explained it.
> In software engineering fields you start with a design and you implement it
And part of my previous comment is that this "waterfall-y" approach in which you design first and implement second does not work and has never worked.
> you do not need to pay the cost of a big design upfront
Exactly, and not only that but usually requirements will also change along the way. The design can change and will change as you hit reality and learn while writing actual, working code. So keep your design as a high-level initial architecture then quickly iterate by writing code to flesh out the design.
Software is often opposed to "traditional engineering" but it is actually the same. How many experiments, prototypes, and iterations go into building a car or a rocket? Many. Engineers do not come up with the final design up front. The difference is that this is expensive, while in software we can iterate much more, much quicker, and for free to get to the final product.
Nowhere did anyone claim you need the full final design up front. For cars/rockets, how many of those experiments, prototypes, and iterations had designs? All of them. You never see a mechanical engineer walk out to the shop and just start hammering on a pile of slop until it sort of looks like a car.
> The difference is that this is expensive, while in software we can iterate much more, much quicker, and for free to get to the final product.
If you have no design to meet how do you judge the output of an iteration or know you have arrived at the final product?
I think you mean "requirements" here instead of "design".
No. This is exactly what you are getting wrong. Requirements are constraints that guide the design. The design then is used to organize, structure, and allocate work, and determine what code needs to be written.
You should review the sources of your confusions and personal misconceptions, as you deny design and then proceed to admit there is design.
> And part of my previous comment is that this "waterfall-y" approach in which you design first and implement second does not work and has never worked.
Nonsense. "Big design upfront" works, but is suboptimal in software development. That's why it's not used.
"Big design upfront" approaches are costly as it requires know-how and expertise to pull off, which most teams lack, and it assumes requirements don't change, which is never the case.
Once you acknowledge that requirements will change and new requirements will emerge, you start to think of strategies to accommodate them. In software development, unlike in any hard engineering field, the primary resource consumed is man-hours. This means that, unlike in hard engineering fields, a software development process can go through total rebuilds without jeopardizing their success. Therefore in software development there is less pressure to get every detail right at the start, and thus designs can be reviewed and implementations can be redone with minimal impact.
> Exactly, and not only that but usually requirements will also change along the way. The design can change and will change as you hit reality and learn while writing actual, working code.
Yes.
But you do need a design upfront, before code is written. Design means "know what you need to do". You need to have that in place to create tickets and allocate effort. It makes no sense at all to claim that writing code is the design stage. Only in amateur pet projects this is the case.
The "some math" is used in engineering fields in things like preliminary design, sizing, verification&validation, etc. To a lesser degree, "some math" can be used in the design stages of software development projects. For example, estimating the impact of micro services tax in total response times to verify if doing synchronous calls can work vs doing polling/messaging. Another example is estimating max throughput per service based on what data features in a response and how infrastructure is scaled. This is the kind of things that you do way before touching code to determine if the expected impact of going with a particular architecture vs another that mitigates issues.
> In software, the logical details are the finished product. The math is what you're trying to make.
You're confused. The design stage precedes writing any code, let alone the finished product. Any remotely complex work, specially if it involves architecture changes, is preceded by a design stage where alternatives are weighed and validated, and tradeoffs are evaluated.
To further drive the point home, in professional settings you also have design reviews for things like security and data protection. Some companies even establish guidelines such as data classification processes and comparative design to facilitate these reviews.
> If you've actually thought through all of the details, you have written the software (if only in your head). If you haven't thought through all of the details and only figured out a high level design, you've still written some software (essentially, stubbing out some functionality, or leaving it as a dependency to be provided. However you want to think of it).
You're confusing having a design stage with having a big design upfront. This is wrong.
The purpose of the design stage is to get the necessary and sufficient aspects right from the start, before resources are invested (and wasted) in producing something that meets requirements. No one cares what classes or indentation style you use to implement something. The ultimate goal is to ensure the thing is possible to deliver, what it actually does and how it does it, and if it is safe enough to use. You start writing code to fill in the details.
https://www.youtube.com/watch?v=RhdlBHHimeM
Rich Hickey agrees it's a part of it, yes. https://www.youtube.com/watch?v=c5QF2HjHLSE
Now there's official support and tooling for reviews (at least in IDEA, but probably in the others too), where you also get in-line highlighting of changed lines, comments, status checks, etc...
I feel sorry for anyone still using GitHub itself (or GitLab or whatever). It's horrible for anything more than a few lines of changes here and there.
This is a pretty cool tool for it: https://github.com/sindrets/diffview.nvim
On the branch that you are reviewing, you can do something like this:
```
:DiffviewOpen origin/HEAD...HEAD
```
https://youtu.be/Qscq3l0g0B8
More often than not, it either doesn't exist or turns into a kind of architecture fetishism that the lead devs/architects picked up from conferences, or spaceship enterprise architecture.
Even without this garbage, it feels so much better than arguing about SOLID, clean code, hexagonal architecture, member functions being prefixed with an underscore, explicit types or not, ...
I'm not convinced that review comments as commits make things easier, but I think storing them in git in some way is a good idea (e.g. git annotations, or in commit messages after merge, etc.).
82 more comments available on Hacker News