Pre-Commit Hooks Are Broken
Key topics
The debate around pre-commit hooks being "fundamentally broken" sparked a lively discussion, with many developers weighing in on their experiences and perspectives. Some commenters, like nrclark and Mic92, praised the original article and shared their own tips for effective pre-commit workflows, including using tools like git-absorb to simplify commit management. However, others, like tharkun__, pushed back against the idea of enforcing specific workflows or hooks, arguing that developers should have control over their own branches and commits. A key takeaway from the discussion is that while pre-commit hooks can be useful, they should be seen as client-side validation, with CI serving as the more trustworthy server-side validation, as darkwater astutely pointed out.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion; first comment 4h after posting
- Peak period: 65 comments in the 0-12h window
- Average per period: 14.5
- Based on 160 loaded comments

Key moments
- Story posted: Dec 26, 2025 at 10:45 PM EST (9 days ago)
- First comment: Dec 27, 2025 at 3:03 AM EST (4h after posting)
- Peak activity: 65 comments in 0-12h, the hottest window of the conversation
- Latest activity: Jan 2, 2026 at 11:38 AM EST (2d ago)
I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.
Instead, put what you want in an optional pre-push hook and also put it into an early CI/CD step for your pull request checker. You'll get the same end result but your fussiest developers will be happier.
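A minimal sketch of that split, assuming the repo keeps one shared check script (the script path is a placeholder) that both the optional hook and the CI step invoke:

```bash
#!/bin/sh
# .git/hooks/pre-push -- opt-in: each developer installs this themselves;
# nothing in the repo enables it automatically.
# It runs the exact script the PR pipeline runs as its first step, so a
# clean pre-push predicts a clean CI run.
set -e
./scripts/checks.sh   # hypothetical script, shared verbatim with CI
```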
If I or someone else bases something off anything but master, that's on them to rebase and keep up to date.
But until that PR is open? Totally with you. There is no obligation to "preserve history" up until that point.
I regularly work with Github, Bitbucket, and Gitlab. Everything I said applies except for the fact that I said "PR" instead of "MR". But yes, you're right. I'm highlighting a specific, albeit extremely popular, workflow.
In any case, my comment just reflects on the fact that you had a series of patches that you could not squash or rebase. It stuck.
And the fact that I see many people use the abbreviation "PR" for something that is merely a patch or diff.
I'm in a camp that prefers single rebased commits as units of change, "stacked diffs" style.
GitHub in particular was annoying with this style but is definitely getting better. It's still not great at dealing with actual stacks of diffs, but I can (and do) work around that by keeping the stack locally and only pushing commits that apply directly to the main branch.
It's fairly common to consider working and PR branches to be "unpublished" from a mutability point of view: if I base my work on someone else's PR, I'm going to have to rebase when they rebase. Merging to `main` publishes the commit, at which point it's immutable.
Working with JJ, its default behaviour is to consider parents of a branch that's not owned by you to be immutable.
And with git, you can even make anything that happens on the dev machines mandatory.
Anything you want to be mandatory needs to go into your CI. Pre-commit and pre-push hooks are just there to lower CI churn, not to guarantee anything.
(With the exception of people accidentally pushing secrets. The CI is too late for that, and a pre-push hook is a good idea.)
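A crude pre-push screen along those lines (the patterns are illustrative; a dedicated scanner like gitleaks is far more thorough, but even a grep catches the common accidents before they reach the server):

```bash
#!/bin/sh
# .git/hooks/pre-push -- git feeds "<local-ref> <local-sha> <remote-ref>
# <remote-sha>" lines on stdin, one per ref being pushed.
zeros="0000000000000000000000000000000000000000"
while read -r local_ref local_sha remote_ref remote_sha; do
    [ "$local_sha" = "$zeros" ] && continue        # deleting a ref: nothing outgoing
    if [ "$remote_sha" = "$zeros" ]; then
        range="$local_sha"                         # new branch: scan the whole ref
    else
        range="$remote_sha..$local_sha"            # only the commits being pushed
    fi
    if git log -p "$range" | grep -E 'AKIA[0-9A-Z]{16}|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY'; then
        echo "pre-push: possible secret in outgoing commits; aborting." >&2
        exit 1
    fi
done
exit 0
```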
s/can/can't?
You will save your org a lot of pain if you do force it, the same as when you force a formatting style rather than letting everyone do as they please.
You can discuss changing it if some parts don't work, but consistency lowers failures, every time.
We’re a game studio with less technical staff using git (art and design) so we use hooks to break some commands that folks usually mess up.
Surprisingly most developers don’t know git well either and this saves them some pain too.
The few power users who know what they’re doing just disable these hooks.
And sometimes I just want to commit work in progress so I can more easily backtrack my changes. These checks are better on pre-push, and definitely should be on the PR pipeline, otherwise they can and will be skipped.
Anyway, thanks for giving me some ammo to make this case.
One benign example of something that can break after merge even if each branch individually passes pre-merge. In less benign cases it will be your branch merged to main with actual bugs in the code.
That's one reason not to allow "unclean merges", and to require incoming branches to be rebased up to date before they can be merged to the main branch.
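A concrete, hypothetical instance of the benign case:

```bash
# main      defines helper(); CI green.
# branch A  renames helper() -> util() everywhere; CI green.
# branch B  adds a brand-new call to helper(); CI green.

git merge A        # fine
git merge B        # no textual conflict: B touched lines A never edited
# ...but main no longer builds: B's new call targets a function A just
# renamed away. Only re-running the checks on the merged result (CI on
# main, or forcing B to rebase and re-test) catches it.
```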
You do you, but I find rebasing my branch on main instead of merging makes me scratch my head way less.
Be prepared to have your PR blocked tho.
One key requirement in my setup is that every hook is hermetic and idempotent. I don’t use Rust in production, so I can’t comment on it in depth, but for most other languages—from clang-format to swift-format—I always download precompiled binaries from trusted sources (for example, the team’s S3 storage). This ensures that the tools run in a controlled environment and consistently produce the same results.
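A sketch of how that pinning step can look (the URL, version, and digest are placeholders):

```bash
#!/usr/bin/env bash
# fetch-tool.sh -- download a pinned formatter binary and verify it,
# so every machine runs the exact same bytes.
set -euo pipefail

VERSION="1.2.3"                                   # placeholder
URL="https://tools.example.com/clang-format-${VERSION}-linux-x86_64"  # placeholder
SHA256="deadbeef..."                              # placeholder: real digest goes here
DEST="$HOME/.cache/hooks/clang-format-${VERSION}"

if [ ! -x "$DEST" ]; then
    mkdir -p "$(dirname "$DEST")"
    curl -fsSL "$URL" -o "$DEST.tmp"
    echo "${SHA256}  $DEST.tmp" | sha256sum -c -   # abort on digest mismatch
    chmod +x "$DEST.tmp" && mv "$DEST.tmp" "$DEST"
fi
exec "$DEST" "$@"
```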
Also, if most developers are using one editor, configure that editor to run format and auto-fix lint errors. That probably cleans up the majority of unexpected CI failures.
Otherwise, I agree: your project cannot rely on any checks running on the dev machine with git.
I prefer to be able to push instantly and get feedback async, because by the time I've decided I'm done with a change, I've already run the tests for it. And like I said, my editor is applying formatting and lints, so those fail more rarely.
But, if your pre-push checks are fast (rather than ~minutes), I can see the utility! It sucks to get an async failure for feedback that can be delivered quickly.
I'm a fan of pre-commit/push hooks, but they have to be fast. At <dayjob> our suite of pre-commit hooks are <50ms and pre-push hooks are <5s. They get regularly reviewed for performance, and if anything can't be made faster, slow pre-commit hooks will get ejected to pre-push, and slow pre-push hooks will get ejected to the regular CI suite.
... and cos most people using git will have to take a second if the hook comes back with "hey, your third commit is incorrect, you forgot the ticket number"
But if I intend to squash and merge, then who cares about intermediate state.
This is a really interesting perspective. Personally I commit code that will fail the build multiple times per day. I only care that something builds at the point it gets merged to master.
(I'm assuming you're not squashing when merging, else it's pretty much the same workflow)
Local wip commits didn't come to mind at all
Hell, even most WIP commits will pass the tests (e.g. tests are not yet added for the new code), so I'd run them then too.
`git stash` is always an option :) but even if you want to commit it, you can always undo (or `--amend`) the commit when you get back to working. I personally am also a big fan of `git rebase -i` and all the things it allows me to fix up in the history before merging (rebasing) in to the main branch.
I usually put it on a branch, even if this project otherwise does all its development on the main branch. And I commit it without running precommits, and with a commit message prefix "WIP: ". If it's on a branch you can even push it to not lose work if your local machine breaks/is stolen.
When it's time to get it into the main branch I rebase to squash commits into working ones.
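That flow, as a minimal sketch (the branch name is hypothetical):

```bash
git switch -c wip/parser-rework        # hypothetical branch name
git commit --no-verify -m "WIP: half-done, does not build"
git push -u origin wip/parser-rework   # off-machine backup
# ...repeat commit/push as you go...

# when it's ready for the main branch:
git rebase -i main                     # squash/fixup the WIPs into working commits
```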
Now, if my final commit history of say 3 commits all actually build at each commit? For personal projects, no. Diminishing returns. But in a collaborative environment: How fun will it be for future you, or your team mates, to run bisect if half the commits don't even build?
I have this workflow because it's so easy to add a feature, breaking 3 tests, to be fixed later. And formatting is bad. And now I add another change, and I just keep digging and one can end up in a "oh no, how did I end up here?" state where different binaries in the tree need to be synced to different commits to even build.
> I feel like insisting on atomic commits in your local checkout defeats the entire purpose of using a tool like git.
WIP commits is hardly the only benefit of git or other DVCS over things like subversion.
I AM squashing before merging. Pre-commit hooks run on any commit on any branch, AFAIK. In any serious repo I'd never be committing to master directly.
Thanks - this is the first example of a pre-commit hook that I can see value in.
To put it more bluntly, pre-commit hooks are pre-commit hooks, exactly what it says on the tin. Not linting hooks or checking hooks or content filters. Depending on what exactly you want to do, they may or may not be the best tool for the job.
To put it even more bluntly, if you are trying to enforce proper formatting, pre-commit hooks are absolutely the wrong tool for the job, as hooks are trivially bypassable, and not shared when cloning a repo, by design.
The `prepare-commit-msg` hook is a better place to do that as it gives the hook some context about the commit (is the user amending an existing commit etc.)
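For the ticket-ID case, a sketch of such a hook (the branch naming convention is an assumption):

```bash
#!/bin/sh
# .git/hooks/prepare-commit-msg -- prepend the ticket ID taken from the
# branch name; assumes branches named like feature/ABC-123-description.
MSG_FILE="$1"
SOURCE="$2"   # message|template|merge|squash|commit

# Merges, squashes, and amends already carry their own context.
case "$SOURCE" in merge|squash|commit) exit 0 ;; esac

TICKET=$(git symbolic-ref --short HEAD 2>/dev/null | grep -oE '[A-Z]+-[0-9]+' | head -n1)
if [ -n "$TICKET" ] && ! grep -q "$TICKET" "$MSG_FILE"; then
    printf '%s %s' "$TICKET" "$(cat "$MSG_FILE")" > "$MSG_FILE"
fi
```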
> To put it even more bluntly, if you are trying to enforce proper formatting, pre-commit hooks are absolutely the wrong tool for the job, as hooks are trivially bypassable, and not shared when cloning a repo, by design.
They aren't a substitute for server post-receive hooks but they do help avoid having pushes rejected by the server.
The pre commit script (https://github.com/ThomasHabets/rustradio/blob/main/extra/pr...) triggers my executor which sets up the pre commit environment like so: https://github.com/ThomasHabets/rustradio/blob/main/tickbox/...
I run this on every commit. Sure, I have probably gone overboard, but it has prevented problems, and I may be too picky about not having a broken HEAD. But if you want to contribute, you don't have to run any pre commit. It'll run on every PR too.
I don't send myself PRs, so this works for me.
Of course I always welcome suggestions and critique on how to improve my workflow.
At least nothing is stateful (well, it caches build artefacts), and aside from "cargo deny" there are no external deps.
My git rebase workflow often involves running `git rebase -x "cargo clippy -- --deny=warnings"`. This needs a full checkout to work and not just a single file input
https://github.com/andrewaylett/dotfiles/blob/7a79cf166d1e7b...
What I really want is some way within jj to keep track of which commits have been checked and which are currently failing, so I can template it into log lines.
The intended future solution is `jj run` (https://docs.jj-vcs.dev/latest/design/run/), which applies similar ideas to more general commands.
Don't do that, just don't.
I too was about to become a war criminal.
feature/{first initial} {last initial} DONOTMERGE {yyyy-MM-dd-hh-mm-ss}
Yes, the branch name literally says do not merge.
I commit anything and everything. Build fails? I still commit. If there is a stopping point and I feel like I might want to come back to this point, I commit.
I am violently against any pre commit hook that runs on all branches. What I do on my machine on my personal branch is none of your business.
I create new branches early and often. I take upstream changes as they land on the trunk.
Anyway, this long winded tale was to explain why I rebase. My commits aren't worth anything more than stopping points.
At the end, I create a nice branch name and there is usually only one commit before code review.
Rebasing is kind of a shorthand for cherry-picking, fixing up, rewording, squashing, dropping, etc., because these things don't make sense in isolation.
Too often, merging is understood only as bringing the changes from there to here. It can be useful, especially if you have release-candidate branches and hotfixes and you want to keep a trace of that process. But I much prefer rebasing and/or squashing PRs onto the main branch.
And in the feature branches/merge requests, I don’t merge, only rebase. Rebasing should be the default workflow. Merging adds so many problems for no good reason.
There are use cases for merging, but not as the normal workflow.
With rebasing, there could be a million times the branch was rebased and you would have no idea when and where something got broken by hasty conflict resolution.
When conflicts happen, rebasing is equivalent to merging, just at the commit level instead of at branch level, so in the worst case, developers are met with conflict after conflict, which ends up being a confusing mental burden on less experienced devs and certainly a ”trust the process” kind of workflow for experienced ones as well.
If you want to keep track of what commits belongs to a certain pr, you can still have an empty merge commit at the end of the rebase. Gitlab will add that for you automatically.
The ”hasty conflict resolution ” makes a broken merge waaaay harder to fix than a broken rebase.
And rebasing makes you take care of each conflict one commit at a time, which makes it order by magnitudes easier to get them right, compared to trying to resolve them all in a single merge commit.
Having a ”fix broken merge” commit makes it explicit that there was an issue that was fixed.
Rebase sometimes seems like an attempt at saving face.
Even if you do it properly, the workflow is erasing history of that conflict existing and needing to be resolved. It leaves no trace of what has been worked on, when, and by whom.
1. Do some work on a file; commit 1 to branch A.
2. Meanwhile, in another branch B created off main, someone else commits changes to the same part of the same file.
3. Branch B gets merged to main.
4. Now rebase branch A onto main.
5. The rebase stops at commit 1 due to a conflict between main and branch A.
6. Fix the conflict and continue. This erases commit 1 and creates a new commit 1' in which the conflict never existed. History has been rewritten.
7. The rebase completes; branch A now contains different commits than before, so it must be force-pushed to the remote if it already exists there. The protocol has resistance against changing history.
8. Merge branch A to main.
No commit in main now contains any information that there was a conflict that was fixed.
Had a pull request workflow been used, the ”merge main to A” merge commit message would detail which files were conflicting. No such commit is made when using a rebase workflow, chasing those clean fast-forward merges.
> whether it’s better to rebase a work branch onto the main branch, or to pull the changes from the main branch to the work branch.
The problem with this is that the latter has an infinitely higher chance of resulting in criss-cross merges than the former (0).
I know that worst case isn't all that common or everyone would be scared of rebases, but I've seen it enough that I have a healthy disrespect of rebase heavy workflows and try to avoid them when given the option/in charge of choosing the tools/workflows/processes.
In git, the merge (and merge commit) is the primitive and rebase a higher level operation on top of them with a complex but not generally well understood cache with only a few CLI commands and just about no UI support anywhere.
Like I said, because the rerere cache is so out-of-sight/out-of-mind, problems with it become weird and hard to debug. The situations I've seen have been truly rebase-heavy workflows with multiple "git flow" long-running branches and sometimes even cherry-picking between them. (Generally the same sorts of things that create "criss-cross merge scenarios".) Rebased commits start to bring in regressions from other branches. Rebased commits start to break builds randomly. If what is getting rebased is a long-running branch, you probably don't have eyes on every commit, so finding where these hidden merge regressions happen becomes a full-branch bisect: you can't just focus on merge commits because you don't have them anymore, and every commit is a candidate for a bad merge in a rebased branch.
Personally, I'd rather have real merge commits where you can trace both parents and the code not from either parent (conflict fixes), and you don't have to worry about ghosts of bad merges showing up in any random commit. Even the worst "criss-cross merge" commits are obvious in a commit log and I've seen have had enough data to surgically fix, often nearly as soon as they happen. rerere cache problems are things that can go unnoticed for weeks to everyone's confusion and potentially a lot of hidden harm. You can't easily see both parents of the merges involved. You might even have multiple repos with competing rerere caches alternating damage.
But also yes rerere cache problems are so generally infrequent that it might also take weeks of research, when it does happen, just to figure out what the rerere cache is for, that it might be the cause of some of your "merge ghosts" haunting your codebase, and how to clean it.
Obviously, by the point where you are rebasing git-flow-style long-running branches and using frequent cherry-picks, you're in a rebase-heavy workflow that is painful for other reasons, and maybe that's an even heavier step beyond "rebase heavy" to some. But because the rerere cache is involved to some degree in every rebase, once you stop trusting it, it can be hard to trust any rebase-heavy workflow again. Like I said, personally I like the integration history/logs/investigable diffs that real merge commits provide, and I prefer tools like `--first-parent` when I need "linear history" views/bisects.
... for code, honestly no idea
(Hint: --no-merges, --merges)
But even if I wasn't using Gerrit, sometimes it's the easiest way to fix branches that are broken or to restructure your work in a clearer way.
The overall project history though, the clarity of changes made, and that bisecting reliably works are important to me.
Or another way; the important unit is whatever your unit of code review is. If you're not reviewing and checking individual commits, they're just noise in the history; the commit messages are not clear and I cannot reliably bisect on them (since nobody is checking that things build).
What if I've only staged one part of a file, but the pre-commit hook fails on the unstaged portions? That should be fine, since I'm not committing or pushing those changes.
[1]: https://pre-commit.com/
> hooks shouldn’t be run during a rebase
The pre-commit framework doesn't run hooks during a rebase.
> hooks should be fast and reliable
The pre-commit framework does its best to make hooks faster (by running them in parallel if possible) and more reliable (by allowing the hook author to define an independent environment the hook runs in), however it's of course still important that the hooks themselves are properly implemented. Ultimately that's something the hook author has to solve, not the framework which runs them.
> hooks should never change the index
As I read it, the author says hooks shouldn't change the working tree but the index instead, and that's what the pre-commit framework does if hooks modify files.
Personally I prefer configuring hooks so they just print a diff of what they would've changed and abort the commit, instead of letting them modify files during a commit.
Correct. I'm saying that hook authors almost never do this right, and I'd rather they didn't even try and moved their checks to a pre-push hook instead.
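For the diff-and-abort preference above, a sketch using the pre-commit framework's local hooks (the tool choice is illustrative):

```yaml
# .pre-commit-config.yaml -- run the formatter in check mode so it
# reports what it *would* change and fails, rather than editing files
# mid-commit.
repos:
  - repo: local
    hooks:
      - id: rustfmt-diff
        name: rustfmt (diff only, no writes)
        entry: cargo fmt -- --check   # prints a diff and exits non-zero
        language: system
        types: [rust]
        pass_filenames: false
```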
> They tell me I need to have "proper formatting" and "use consistent style". How rude.
> Maybe I can write a pre-commit hook that checks that for me?
git filter is made for that. It works. There are still caveats (it will format the whole file, so you might end up committing formatting fixes to code that isn't your own).
Pre-commit is not for formatting your code. It's for checking whether the commit is correct: whether the message has a ticket ID, or whether the files pass even basic syntax validation.
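A sketch of that filter setup (the formatter choice is illustrative):

```bash
# .gitattributes
#   *.c  filter=fmt
#
# One-time setup per clone (filters live in config, not repo contents):
git config filter.fmt.clean  'clang-format --assume-filename=%f'
git config filter.fmt.smudge cat

# From now on, `git add file.c` stages the formatted version while the
# working tree keeps whatever you actually typed -- hence the caveat
# above: the staged diff can include formatting fixes to lines you
# never touched.
```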
> Only add checks that are fast and reliable. Checks that touch the network should never go in a hook. Checks that are slow and require an update-to-date build cache should never go in a hook. Checks that require credentials or a running local service should never go in a hook.
If you can do that, great! If you can't (say it's something like a CI/CD repo with a bunch of different languages involved, and not every dev has a setup to check everything locally), having to override it so it doesn't run, maybe twice a year, is still preferable to committing non-working code. We run local checks for the stuff that makes sense (checking YAML correctness, or decoding encrypted YAMLs with the user's key so they also get checked), but the ones that don't make sense locally go remote. It's faster: a few ms of RTT doesn't matter when you can leverage a big server CPU to run the checks faster.
Bonus points: it makes the pain point, interactive rebases, faster, because you can cache the output for a given file hash globally, so existing commits take milliseconds to check at most during a rebase.
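The caching idea, sketched (the paths and check command are placeholders):

```bash
#!/usr/bin/env bash
# cached-check.sh <file> -- hypothetical: skip the expensive check when
# this exact file content has already passed once.
set -euo pipefail
FILE="$1"
CACHE_DIR="${HOME}/.cache/hook-results"
mkdir -p "$CACHE_DIR"

HASH=$(sha256sum "$FILE" | cut -d' ' -f1)
STAMP="$CACHE_DIR/$HASH"

[ -e "$STAMP" ] && exit 0   # this content already passed; rebases hit this path
yamllint "$FILE"            # placeholder for the real check
touch "$STAMP"              # record the pass, keyed by content hash
```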
> Don't set the hook up automatically. Whatever tool you use that promises to make this reliable is wrong. There is not a way to do this reliably, and the number of times it's broken on me is more than I can count. Please just add docs for how to set it up manually, prominantly featured in your CONTRIBUTING docs. (You do have contributing docs, right?)
DO set it up automatically (or as much as possible; we have a script that adds the hooks and sets the repo defaults we use). You don't want a new developer to have to spend half a day setting up some git nonsense only to get it wrong. And once you change it, just rerun the script.
Pre-push might address some of the pain points, but it doesn't address the biggest: it puts the developer in a "git hole" if they have something wrong in a commit, because while pre-commit will just cancel the commit until the dev fixes it, with pre-push they now need to dig out the knowledge of how to edit or undo existing commits.
This knowledge is a crucial part of effective use of git everyday, so if some junior dev has to learn it quick it's doing them a favor.
I almost always have a "this cicd must pass to merge" job, that includes linting etc, and then use squash commits exclusively when merging.
My coworker did that the other day and I'm deciding how to respond.
I'm particular about formatting, and it doesn't always match group norms. So I'll reformat things to my preferred style while working locally, and then reformat before pushing. However, I may have several commits locally that then get curated out of existence prior to pushing.
If you're using `pre-commit` the tool, not merely the hook, you can also use something like https://github.com/andrewaylett/pre-commit-action to run the tool in CI. It's a really good way to share check definitions between local development and CI, meaning you've shifted your checks to earlier in the pipeline.
I use Jujutsu day-to-day, which doesn't even support pre-commit hooks. But the tooling is still really useful, and making sure we run it in CI means that we're not relying on every developer having the hooks set up. And I have JJ aliases that help pre-commit be really useful in a JJ workflow: https://github.com/andrewaylett/dotfiles/blob/7a79cf166d1e7b...
I've been much much happier just having a little project specific script I run when I want to do formatting/linting.
I'm perfectly happy to have the CI fail if I forget to run the CI locally, which is rare but does happen. In that case I lose 5 minutes or whatever because I have to go find the branch and fix the CI failure and re-push it. The flip side of that is I rarely lose hours of work, or end up painting myself in a corner because commit is too expensive.
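Such a script can be tiny; a hypothetical version for a Rust project, matching the checks mentioned elsewhere in this thread:

```bash
#!/usr/bin/env bash
# check.sh -- hypothetical project script: the same steps CI runs,
# invoked by hand whenever you want, never wired into git.
set -euo pipefail
cargo fmt -- --check
cargo clippy -- --deny=warnings
cargo test
```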
For my part, I find the “local history” feature of the JetBrains IDEs gives me automatic checkpoints I can roll back to without needing to involve git. On my Linux machines I layer in ZFS snapshots (Time Machine probably works just as well for Macs). This gives me the confidence to work throughout the day without needing to compulsively commit.
Having 25 meaningless “wip” commits does not help with that. It’s fine when something is indeed a work in progress. But once it’s ready for review it should be presented as a series of cleaned up changes.
If it is indeed one giant ball of mud, then it should be presented as such. But more often than not, that just shows a lack of discipline on the part of the creator. Variable renames, whitespace changes, and other cosmetic things can be skipped over to focus on the meat of the PR.
From my own experience, people who work in open source and have been on the review side of large PRs understand this the best.
Really the goal is to make things as easy as possible for the reviewer. The simpler the reviews process, the less reviewer time you’re wasting.
But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.
I think you must be kind of senior and think that you can just insist that everyone be less efficient and work in a weird way so you can feel more comfortable?
It's not really hand curation if you're deliberate about it from the get-go. It's certainly not eating up 80% of anyone's time.
Structuring code and writing useful commits is a skill to develop, just like writing meaningful tests. As a first step, use `git add -p` instead of `git add .` or `git commit -a`, as shown below. As an analogy, many junior devs will just test everything, even stuff that doesn't make a lot of sense, and jumble it all together. It takes practice to learn how to structure that stuff better, and it isn't done by writing a ton of tests and then curating them after the fact.
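A hypothetical session showing the difference, with one file carrying two unrelated edits:

```bash
git add -p src/parser.c     # 'y' the hunks for the fix, 'n' the rest,
                            # 's' to split a hunk that mixes both
git commit -m "parser: reject unterminated string literals"

git add -p src/parser.c     # now stage the leftover cleanup hunks
git commit -m "parser: rename tmp to scratch for clarity"
```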
> I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?
Your personal productivity should only be one consideration. The long-term health of the project (i.e., maintenance) and the impact on other people's efficiency also must be considered. And efficiency isn't limited to how quickly features ship. Someone who ships fast but makes it much harder to debug issues isn't a top performer. At least, in my experience. I'd imagine it's team, company, and segment-dependent. For OSS projects with many part-time contributors, that history becomes really important because you may not have the future ability to ask someone why they did something a particular way.
It is too hard to get someone to look at a PR, so you are smuggling multiple 'related' but not interdependent features into one PR as individual commits so you can minimize the number of times you have to get someone to hit "approve", which is the limiting resource.
In your situation then I believe your way of working is a rational adaptation, but only so far as you lack the influence to address the underlying organizational/behavioral dysfunction.
https://news.ycombinator.com/newsguidelines.html
Repeatedly, you've been dismissive and insulting. It's not conducive to productive conversation. Your characterization of what I do or how I work is wrong. You latched on to some small part you thought would let you "win" and ran with it. If you actually care, I do a lot of open source work so you can find exactly how I work. Naturally, you can't see what I do in private, but I assure you it's not significantly different.
I aim to ship reasonably complete functionality. The "V" in "MVP" means it needs to be viable, not just minimal. Shipping some part that doesn't work standalone isn't useful to anyone. Yes, the PR is smaller, but now the context for that work is split over multiple PRs, which may not be reviewed by the same people. No one really has the full picture beyond me, which I guess is a good way to get my PRs rapidly approved, but a terrible way to get feedback on the overall design.
I don't work with you, so I don't particularly care how you work. But you're incorrect that any model other than the one you employ is niche and not how software is written. Elsewhere you dismissed the examples of large open source projects as being unique. But you'll find substantially smaller ones also employ a model closer to what I've described.
If you’re working on something and a piece of it is clearly self contained, you commit it and move on.
> I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?
You can work however you like. But when it’s time to ask someone else to review your work, the onus is on you to clean it up to simplify review. Otherwise you’re saying your time is more valuable than the reviewer’s.
I do this. Also I do not spend 80% of my time doing it. It's not hard, nor is it time consuming.
I've been on a maintenance team for years and it's also been a massive help here, in our svn repos where squashing isn't possible. Those intermediate commits with good messages are the only context you get years down the line when the original developers are gone or don't remember reasons for something, and have been a massive help so many times.
I'm fine with manual squashing to clean up those WIP commits, but a blind squash-merge should never be done. It throws away too much for no good reason.
CI also doesn't necessarily help here - you have to have tests for all possible edge cases committed from day one for it to prevent these situations. It may be a month or a year or several years later before you hit one of the weird cases no one thought about.
Calling svn part of the problem is also kind of backwards - it has no bearing on the code quality itself, but I brought it up because it was otherwise forcing good practice because it doesn't allow you to erase context that may be useful later.
Over the time I've been here we've migrated from Bugzilla to Fogbugz to Jira, from an internal wiki to ReadTheDocs to Confluence, and some of these hundreds of repos we manage started in cvs, not svn, and are now slowly being migrated to git. Guess what? The cvs->svn->git migrations are the only ones that didn't lose any data. None of the Bugzilla cases still exist and only a very small number were migrated from FogBugz to Jira. Some of the internal wiki was migrated directly to Confluence, but ReadTheDocs are all gone. Commit messages are really the only thing you can actually rely on.
Let's just be Bayesian for a minute. If an organization can't figure out how to get off of svn, a dead and dying technology, within 15-20 years of it being basically dead in most of tech, then it's probably not going to be nimble in other ways. Probably it's full of people who don't really do any work.
I'm surprised by how confident you are that things simply aren't done this way considering the number of high-profile users of workflows where the commit history is expected to tell a story of how the software evolved over time.
I'm surprised by how confident you are when you can only name projects you've never worked on. I wanted to find a commit of yours to prove my point, but I can't find a line of code you've written.
Presumably, a branch is a logical segment of work. Otherwise, just push directly master/trunk/HEAD. It's what people did for a long time with CVS and arguably worked to some extent. Using merge commits is pretty common and, as such, that branch will get merged into the trunk. Being able to understand that branch in isolation is something I've found helpful in understanding the software as a whole.
> Caring about the history of a branch is weird, I think your approach is just not compatible with how people work.
I get that you disagree with me, but you could be less dismissive about it. Work however you want; I'm certainly not stopping you. I just don't want your productivity to come at the expense of mine. And I offered up other potential (and, IMHO, superior) solutions from both developer and system tools.
I suppose what type of project you're working on matters. The "treat git like a versioned zip file" approach of squashed merges works reasonably well for SaaS applications because you rarely need to roll anything back. However, I've found a logically structured history indispensable when working on long-lived projects, particularly in open source. It's how I'm able to dig into a 25-year-old OSS tool and be reasonably productive with it.
To the point I think you're making: sure, I care what changed, and I can do that with `diff`. But, more often if I'm looking at SCM history I'm trying to learn why a change was made. Some of that can be inferred by seeing what other changes were made at the same time. That context can be explicitly provided with commit messages that explain why a change was made.
Calling it incompatible with how people work is a pretty bold claim, given the practice of squash merging loads of mini commits is a pretty recent development. Maybe that's how your team works and if it works for you, great. But, having logically separate commits isn't some niche development practice. Optimizing for writes could be useful for a startup. A lot of real world software requires being easy to maintain and a good SCM history shines there.
All of that is rather orthogonal to the point I was trying to add to the discussion. We have better tools at our disposal than running `git commit` every 15 minutes.
So when I open a PR, I'll have a branch with a gajillion useless commits, and then curate them down to a logical set of commits with appropriate commit messages. Usually this is a single commit, but if I want to highlight some specific pieces as being separable for a reviewer, it'll be multiple commits.
The key point here is that none of those commits exist until just before I make my final push prior to a PR.
But, if you're really worried about losing 15 minutes of work, I think we have better tools at our disposal, including those that will clean up after themselves over time. Now that I've been using ZFS with automatic snapshots, I feel hamstrung working on any Linux system just using ext4 without LVM. I'm aware this isn't a common setup, but I wish it were. It's amazing how liberating it is to edit code, update a config file, install a new package, etc. are when you know you can roll back the entire system with one simple command (or, restore a single file if you need that granularity). And it works for files you haven't yet added to the git repo.
I guess my point is: I think we have better tools than git for automatic backups and I believe there's a lot of opportunity in developer tooling to help guard against common failure scenarios.
One nifty feature is that commits don't need messages, and also it'll refuse (by default) to push commits with no message. So your checkpoint commits are really easy to create, and even easier to avoid pushing by mistake.
Most common is I'm switching branches. Example use case: I'm working locally, and a colleague has a PR open. I like to check out their branch when reviewing as then I can interact with their code in my IDE, try running it in ways they may not have thought of, etc.
Another common reason I switch branches is that sometimes I want to try my code on another machine. Maybe I'm changing laptops. Maybe I want to try the code on a different machine for some reason. Whatever. So I'll push a WIP branch with no intention of it passing any sort of CI/CD just so I can check it out on the other machine.
The throughline here is that these are moments where the current state of my branch is in no shape, way, or form intended as an actual valid state. It's just whatever state my code happened to be in before I needed to save it.
The key thing (that several folk have pointed out) is that CI runs the canonical checks. Using something like pre-commit (the tool) makes it easier to at least vaguely standardise making sure that you can run the same checks that CI will run. Having it run from the pre-commit hook fits nicely into many workflows, my own pre-JJ workflow included.
Lefthook with glob+stage_fixed for formatters makes one of the issues raised a complete non-issue.
I'll write an in-depth post about it maybe within the next week or so; I've been diving into these in my hobby projects for a year or so.
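A sketch of that Lefthook setup (the glob and formatter are illustrative):

```yaml
# lefthook.yml -- stage_fixed re-stages files the command rewrote, so
# the formatter's fixes land in the same commit instead of dirtying
# the worktree.
pre-commit:
  commands:
    format:
      glob: "*.go"
      run: gofmt -w {staged_files}
      stage_fixed: true
```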
Make test cases all green locally before pushing, but not in a way that interferes with pushing commits. Then, upload all of the proposed PRs you want in a review state, but then the linear system of record is backed by an automated testing / smoke test process before they land "auto-fast-forwarded" in a mostly uncontrolled manner that doesn't allow editing the history directly. Git, by itself, is too low-level for shared, mostly single-source-of-truth yet distributed dev. Standardization and simplicity are good, and so is requiring peer review of code before it's accepted for existing, production, big systems.