Everything as Code: How We Manage Our Company in One Monorepo
Key topics
The radical idea of managing an entire company in a single monorepo has sparked a lively debate, with some converts, like giancarlostoro, enthusiastically embracing the approach after discovering the power of tools like Claude. While some commenters, like emzo, appreciate the convenience of making atomic changes across the stack, others, such as valzam and yearolinuxdsktp, raise valid concerns about the challenges of ensuring backwards compatibility and handling client-server interactions during rollouts. As the discussion unfolds, a consensus emerges that robust testing and system design are crucial to mitigating these risks, with aylmao arguing that the problem should be tackled at the system level rather than relying on developer discipline.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 8m after posting
- Peak period: 124 comments in 0-12h
- Avg / period: 16
- Based on 160 loaded comments
Key moments
- Story posted: Dec 30, 2025 at 3:05 PM EST (11 days ago)
- First comment: Dec 30, 2025 at 3:13 PM EST (8m after posting)
- Peak activity: 124 comments in 0-12h, the hottest window of the conversation
- Latest activity: Jan 5, 2026 at 3:43 PM EST (5d ago)
I guess I could work with either option now.
And if it's not, it breaks everything. This is an assumption you can't make.
I think it’s better to always ask your devs to be concerned about backwards compatibility, and sometimes forwards compatibility, and to add test suites if possible to monitor for unexpected incompatible changes.
However there's a big difference between development and releases. You still want to be able to cut stable releases that allow for cherrypicks for example, especially so in a monorepo.
Atomic changes are mostly a lie when talking about cross-API calls, e.g. a frontend talking to a backend. You should always define some kind of stable API.
Even if we squash it into main later, it’s helpful for reviewing.
Other than that, it's pretty much up to you how you write commit messages.
I can spend hours OCDing over my git branch commit history.
-or-
I can spend those hours getting actual work done and squash at the end to clean up the disaster of commits I made along the way so I could easily roll back when needed.
But also, rewriting history only works if you haven't pushed code and are working as a solo developer.
It doesn't work when the team is working on a feature in a branch and we need to be pushing to run and test deployment via pipelines.
Weird, it works fine in our team. `--force-with-lease` lets me push again, and the most common type of branch is per-dev and short-lived.
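A minimal sketch of that per-dev, short-lived branch flow (the branch layout is assumed, not from the thread):

```sh
git fetch origin
git rebase origin/main          # replay local commits on top of the updated main
git push --force-with-lease     # refuses to overwrite remote work you haven't fetched
```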
Also, rebasing is just so fraught with potential errors - every month or two, the devs who were rebasing would screw up some feature branch with work they needed, and would look to me to fix it for some reason. Such a time sink for so little benefit.
I eventually banned rebasing, force pushes, and mandated squash merges to main - and we magically stopped having any of these problems.
The Linux kernel manages to do it for 1000+ devs.
My history ends up being:
- add feature x
- linting
- add e2e tests
- formatting
- additional comments for feature
- fix broken test (ci caught this)
- update README for new feature
- linting
With a squash it can boil down to just "added feature x", with the smaller changes listed in the commit description.
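A sketch of what that squash can look like on the command line (branch and feature names are hypothetical; hosted platforms offer the same thing as a "squash and merge" button):

```sh
git checkout main
git merge --squash feature-x    # stage the combined changes without committing
git commit -m "Add feature X" \
           -m "Smaller changes (linting, e2e tests, README update) summarised here"
```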
It's just too bad not enough graphical UIs default to `--first-parent` and a drill-down like approach over cluttered "subway graphs".
Where logical commits (also called atomic commits) really shine is when you're making multiple logically distinct changes that depend on each other. E.g. "convert subsystem A to use api Y instead of deprecated api X", "remove now-unused api X", "implement feature B in api Y", "expose feature B in subsystem A". Now they can be reviewed independently, and if feature B turns out to need more work, the first commits can be merged independently (or if that's discovered after it's already merged, the last commits can be reverted independently).
If after creating (or pushing) this sequence of commits I need to fix linting/formatting/CI, I'll put the fixes in a fixup commit for the appropriate commit and meld them using a rebase. That takes about 30s to do manually, and can be automated using tools like git-absorb. However, in reality I don't need to do this often: breaking bigger tasks into logical chunks is something I already do, as it helps me stay focused, and I add tests and run linting/formatting/etc before I commit.
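Spelled out, that fixup-and-meld workflow looks roughly like this (the SHAs are placeholders; the git-absorb call follows that tool's documented usage):

```sh
git commit --fixup=<sha-of-logical-commit>   # record the lint/CI fix against its target commit
git rebase -i --autosquash <base>            # git reorders and squashes the fixup automatically
# or let git-absorb pick the target commits from the staged hunks:
git absorb --and-rebase
```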
And yes, more or less the same result can be achieved by creating multiple MRs and using squashing; but usually that's a much worse experience.
Every commit is reviewed individually. Every commit must have a meaningful message, no "wip fix whatever" nonsense. Every commit must pass CI. Every commit is pushed to master in order.
No information loss, and every commit is valid on its own, so cherry-picks maintain the same level of quality.
So one branch had 40x "Deploy to Dev" commits. And those got merged straight into the repo.
They added no information.
When I am ready to make my PR I delete my remote feature branch and then squash the commits. I can use all my granular commit comments to write a nice verbose comment for that squashed commit. Rarely I will have more than one commit if a user story was bigger than it should be. Usually this happens when more necessary work is discovered. At this stage each larger squashed commit is a fully complete change.
The audience for these commits is everyone who comes after me to look at this code. They aren't interested in seeing that it took me 10 commits to fix a test that only fails in a GitHub Actions runner. They want the final change with a descriptive commit description. Also, if they need to port this change to an earlier release as a hotfix, they know there is basically a single commit to cherry-pick. They don't need to go through that dev commit history to track it all down.
I don’t like squashing on the PR merge. But I do like squashing things together into a tight set of independent commits (ideally 1). This keeps the dev in control of what makes it into the mainline history.
- You need to remove trash commits that appear when you need to rerun CI.
- You need to remove commits with that extra change you forgot.
- You want to perform any other kind of rebase to clean up messages.
I assume in this thread some people mean squashing from the perspective of a system like GitLab where it's done automatically, but for me squashing can mean simply running an interactive (or fixup) rebase and leaving only the important commits that provide meaningful information to the target branch.
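For illustration, that kind of manual clean-up before merging might look like this (commit messages invented):

```sh
git rebase -i origin/main
# in the editor that opens, keep only the meaningful commits:
#   pick  a1b2c3  Add rate limiting to the export endpoint
#   fixup d4e5f6  forgot the config change
#   drop  789abc  retrigger CI
```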
Serious question, what's going on here?
Are you using a "trash commit" to trigger your CI?
Is your CI creating "trash commits" (because build artefacts)?
It's harder to debug as well (this 3000-line commit has a change causing the bug... best of luck finding it AND why it was changed that way in the first place).
I, myself, prefer that people tidy up their branches such that their commits are clear on intent, and then rebase into main with a merge commit at the tip (meaning that you can see the commits AND where the PR began/ended).
git bisect is a tonne easier when you have that
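For example, with a merge commit at the tip of each PR, the first-parent view and bisect stay readable (the SHAs are placeholders):

```sh
git log --first-parent --oneline main            # one line per merged PR
git bisect start --first-parent <bad> <good>     # bisect over PR merges only (Git 2.29+)
```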
Is there overhead to creating a branch?
I'm using a monorepo for my company across 3+ products and so far we're deploying from stable release to stable release without any issues.
Canary/Incremental, not so much
But (in my mind) even a front end is going to get told it is out of date/unusable and needs to be upgraded when it next attempts to interact with the service. In my mind at least, that means it will have to upgrade, which isn't "atomic" in the strictest sense of the word, but it's as close as you're going to get.
The moment you have two production services that talk to each other, you end up with one of them being deployed before the other.
Hell, you lose "atomic" asset updates the moment you serve HTML that has URLs in it.
Consider switching from <img src=kitty.jpg> to <img src=puppy.jpg>. If, for example, you delete kitty.jpg from the server and upload puppy.jpg, then change the HTML, you can have a client holding a URL to kitty.jpg while kitty.jpg is already gone. Generally, anything you published needs to stay alive long enough to "flush out the stragglers".
Same thing applies to RPC contracts.
Same thing applies to SQL schema changes.
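The safe ordering implied here, sketched with placeholder commands (the point is the sequence, not the tooling):

```sh
cp puppy.jpg /srv/static/    # 1. publish the new asset alongside the old one
deploy_html_change           # 2. roll out the HTML that now references puppy.jpg
# 3. only once cached pages have flushed out the stragglers:
rm /srv/static/kitty.jpg
```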
IMO, monorepos are much easier to handle. Monoliths are also easier to handle. A monorepo monolith is pretty much as good as it gets for a web application. Doing anything else will only make your life harder, for benefits that are so small and so rare that nobody cares.
If you have a bajillion services and they're all doing their own thing with their own DB and you have to reconcile version across all of them and you don't have active/passive deployments, yes that will be a huge pain in the ass.
So just don't do that. There, problem solved. People need to stop doing micro services or even medium sized services. Make it one big ole monolith, maybe 2 monoliths for long running tasks.
A monorepo only allows you to reason about the entire product as it should be. The details of how to migrate a live service atomically have little to do with how the codebase migrates atomically.
This seems like simply not following the rules of having a monorepo, because the DB cluster is not running the version in the repo.
Being 17 versions behind is an extreme example, but always having everything run the latest version in the repo is impossible, if only because deployments across nodes aren't perfectly synchronised.
Adding new APIs is always easy. Removing them not so much since other teams may not want to do a new release just to update to your new API schema.
Cherry picks are useful for fixing releases or adding changes without having to make an entirely new release. This is especially true for large monorepos which may have all sorts of changes in between. Cherry picks are a much safer way to “patch” releases without having to create an entirely new release, especially if the release process itself is long and you want to use a limited scope “emergency” one.
Atomic changes - assuming this is related to releases as well, it’s because the release process for the various systems might not be in sync. If you make a change where the frontend release that uses a new backend feature is released alongside the backend feature itself, you can get version drift issues unless everything happens in lock-step and you have strong regional isolation. Cherry picks are a way to circumvent this, but it’s better to not make these changes “atomic” in the first place.
We use Unleash at work, which is open source, and it works pretty well.
My philosophy is that we shouldn't be relying on long-lived feature branches that get merged in when a feature is "done." Instead, the goal should be to get code in `main` as soon as possible. Even if it's not done — maybe has some "not implemented" errors — it's now in your main branch and everyone else needs to keep it in mind while working on their code.
However, it's fine to keep this code dormant in production until it's ready. This type of "feature flagging" is worth building yourself, and frankly can often be as simple as env vars / a json config file / etc.
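A sketch of the env-var flavour of this, with hypothetical variable and binary names:

```sh
FEATURE_NEW_BILLING=false ./server   # production: the code ships but the path stays dormant
FEATURE_NEW_BILLING=true  ./server   # developers and staging exercise the new path
```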
--
I think there's another type of feature flag, however. This is where you want to either A/B test or just slowly rollout a change to existing functionality, and both code paths are complete. You just want to be able to show it to certain users and not others, and measure its impact.
This sort of tooling is fairly difficult to get right yourself. It's not rocket science, but you'll probably want it to be adjustable without releases (so it needs a persistence layer) and by non-engineers (so it needs an admin UI).
Note I don't include reporting in here because I have come to believe that's a separate responsibility—you should be logging product metrics in general, and this is just an extension to tag the requests / units of work the feature flag variants exposed.
--
I agree with you that working primarily against main with small, incremental PRs that don't fully implement a feature (but are disabled in production) is _far_ superior to long-lived release branches. The world doesn't stop moving just because you are developing a feature, and this ensures your work doesn't incur a painful, risky rebase at the end (or get thwarted by someone else's refactoring). Build this mechanism yourself.
But if you want to be able to show feature variants to cohorts and measure their outcomes, use something off the shelf.
Finally, it's entirely possible that this tool could be used for both use cases. It then becomes reasonable to use it as such. But I think it's important to realize these are two very different use cases that appear similar at face value.
Feature flags are a good idea, but they require a lot of discipline and maintenance. In practice, they tend to be overused, and provide more negatives than positives. They're a complement, but certainly not a replacement for VCS branches, especially in monorepos.
Can you explain this comment? Are you saying to develop directly in the main branch?
How do you manage the various time scales and complexity scales of changes? Task/project length can vary from hours to years and dependencies can range from single systems to many different systems, internal and external.
The complexity comes from releases. Suppose you have a good commit 123 where all your tests pass for some project; you cut a release and deploy it.
Then development continues until commit 234, but your service is still at 123. Some critical bug is found, and fixed in commit 235. You can't just redeploy at 235 since the in-between may include development of new features that aren't ready, so you just cherry pick the fix to your release.
It's branches in a way, but _only_ release branches. The only valid operations are creating new releases from head, or applying cherrypicks to existing releases.
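A sketch of that flow, reusing the commit numbers from the example above (branch and tag names are hypothetical):

```sh
git checkout -b release-1.0 123   # cut the release at the known-good commit
# ...development continues on main; the critical fix lands as commit 235...
git cherry-pick 235               # apply only the fix to the release branch
git tag v1.0.1                    # ship the patched release
```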
So you can say that you have short-lived development branches that are always rebased on main. Along with the release branch and cherry-pick process, the workflow you describe is quite common.
They don’t do code reviews or any sort of parallel development.
They’re under the impression that “releases are complex and this is how they avoid it” but they just moved the complexity and sacrificed things like parallel work, code reviews, reverts of whole features.
What there isn't, is long lived feature branches with non-integrated changes.
And you've personally done this for a larger project with significant amount of changes and a longer duration (like maybe 6 months to a year)?
I'm struggling to understand why you would eliminate branches? It would increase complexity, work and duration of projects to try to shoehorn 2 different system models into one system. Your 6 month project just shifted to a 12 to 24 month project.
In my experience development branches vastly increase complexity by hiding the integration issues until very late when you try to merge.
Either way, I still don't understand how you can reasonably manage the complexity, or what value it brings.
Example:
main - current production - always matches exactly what is being executed in production, no differences allowed ever
production_qa - for testing production changes independent of the big project
production_dev_branches - for developing production changes during big project
big_project_qa_branch - tons of changes, currently being used to qa all of the interactions with this system as well as integrations to multiple other systems internal and external
big_project_dev_branches - as these get finalized and ready for qa they move to qa
Questions:
When production changes and project changes are in direct conflict, how can you possibly handle that if everyone is just committing to one branch?
How do you create a clean QA image for all of the different types of testing and ultimately business training that will need to happen for the project?
In general, all new code gets added to the tip of main, your only development branch. Then, new features can also be behind feature flags optionally. This allows developers to test and develop on the latest commit. They can enable a flag if they are interested in a particular feature. Ideally new code also comes with relevant automated tests just to keep the quality of the branch high.
Once a feature is "sufficiently tested" whatever that may mean for your team it can be enabled by default, but it won't be usable until deployed.
Critically, there is CI that validates every commit, _but_ deployments are not strictly performed from every commit. Release processes can be very varied.
A simple example: we decide to create a release from commit 123, which has some features enabled. You grab the code, build it, run automated tests, and generate artifacts like server binaries or assets. This is a small team with no strict SLAs, so it's okay to trust automated tests and deploy right to production. That's the end; commit 123 is live.
As another example, a more complex service may require more testing. You do the same first steps, grab commit 123, test, build, but now deploy to staging. At this point staging will be fixed to commit 123, even as development continues. A QA team can perform heavy testing, fixes are made to main and cherry picked, or the release dropped if something is very wrong. At some point the release is verified and you just promote it to production.
So development is always driven from the tip of the main branch. Features can optionally be behind flags. And releases allow for as much control as you need.
There's no rule that says you can only have one release or anything like that. You could have 1 automatic release every night if you want to.
Some points that make it work in my experience are:
1. Decent test culture. You really want to have at least some metric for which commits are good release candidates.
2. You'll need some real release management system. The common tools available like to tie together CI and CD, which is not the right way to think about it IMO (e.g. your GitHub CI makes a deployment).
TL;DR: multiple releases; use flags or configuration for the different deployments. They could even all be cut from the same commit, or from different ones.
But how would you create that QA environment when it involves thousands of commits over a 6 month period?
It will be highly dependent on the kind of software you are building. My team in particular deals with a project that cuts "feature complete" releases every 6 months or so, at that point only fixes are allowed for another month or so before launch, during this time feature development continues on main. Another project we have is not production critical, we only do automated nightlies and that's it.
For a big project, typically it involves deploying to a fully functioning QA environment so all functionality can be tested end to end, including interactions with all other systems internal to the enterprise and external. Eventually user acceptance testing and finally user training before going live.
Ideally you'd do the work in your hotfix branch and merge it to main from there rather than cherry-picking, but I feel that way mostly because git isn't always great at cherry-picking.
We build a user-friendly way for non-technical users to interact with a repo using Claude Code. It's especially focused on markdown, giving red/green diffs on RENDERED markdown files which nobody else has. It supports developers as well, but our goal is to be much more user friendly than VSCode forks.
Internally we have been doing a lot of what they talk about here, doing our design work, business planning, and marketing with Claude Code in our main repo.
For example, I can have one prompt writing Playwright tests for happy paths while another prompt is fixing a bug of duplicated rows in a table caused by a missing SQL JOIN condition.
What does this mean in context of downloadable desktop apps?
At some point, you will have many teams. And one of them _will not_ be able to validate and accept some upgrade. Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team. Yes, this happens at slightly larger orgs. I've seen it many times.
And since you have to design your changes to be backwards compatible already, why not leverage a gradual roll out?
Do you update your app lock-step when AWS updates something? Or when your email service provider expands their API? No, of course not. And you don't have to lock yourself to other teams in your org for the same reason.
Monorepos are hotbeds of cross contamination and reaching beyond API boundaries. Having all the context for AI in one place is hard to beat though.
> you will have the old system using the old schema and the new system using the new schema unless you design for forwards-backwards compatible changes
Of course you design changes to be backwards compatible. Even if you have a single node and have no networked APIs. Because what if you need to rollback?
> Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team.
This is an organizational issue not a tech issue. Who gives that one team the power to hold back large changes that benefit the entire org? You need a competent director or lead to say no to this kind of hostage situation.
If my code has to be backwards compatible to survive the deployment, then having the code in two different repos isn’t such a big deal, because it’ll all keep working while I update the consumer code.
Multiple repos shouldn't depend on a single shared library that needs to be updated in lockstep. If they do, something has gone horribly wrong.
This isn't to say monorepos are bad, though, but they're clearly naive about some things:
> No sync issues. No "wait, which repo has the current pricing?" No deploy coordination across three teams. Just one change, everywhere, instantly.
It's literally impossible to deploy "one change" simultaneously, even with the simplest n-tier architecture. As you mention, a DB schema is a great example. You physically cannot change a database schema and application code at the exact same time. You either have to ensure backwards compatibility or accept that there will be an outage while old application code runs against a new database, or vice-versa. And the latter works exactly up until an incident where your automated DB migration fails due to unexpected data in production, breaking the deployed code and causing a panic as on-call engineers try to determine whether to fix the migration or roll back the application code to fix the site.
To be a lot more cynical; this is clearly an AI-generated blog post by a fly-by-night OpenAI-wrapper company and I suspect they have few paying customers, if any, and they probably won't exist in 12 months. And when you have few paying customers, any engineering paradigm works, because it simply does not matter.
We have a monorepo; we use a server framework with automated code generation for API clients for each service, derived from OpenAPI.json. One change cascades into many changes. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) to compute which services need to be rebuilt. We may just not be at scale—thank God. We're a small team.
> We may just not be at scale—thank God. We're a small team.
It's perfectly acceptable for newer companies and small teams to not solve these problems. If you don't have customers who care that your website might go down for a few minutes during a deploy, take advantage of that while you can. I'm not saying that out of arrogance or belittlement or anything; zero-downtime deployments and maintaining backwards compatibility have an engineering cost, and if you don't have to pay that cost, then don't! But you should at least be cognizant that it's an engineering decision you're explicitly making.
The alternative of every service being on their own version of libraries and never updating is worse.
And monorepo or not, bad software developers will always run into this issue. Most software will not have 'many teams'. Most software is written by a lot of small companies doing niche things. Big software companies with more than one team, normally have release managers.
My tip: use architecture unit tests for external-facing APIs. If you are a smaller company, 24/7 doesn't have to be the thing; just communicate this to your customers. But overall, if you run SaaS software and still don't know how to do zero-downtime deployment in 2025/2026, just keep doing whatever you're doing, because man, come on...
The people who say polyrepos cause breakage aren't doing it right. When you depend across repos in a polyrepo setup, you should depend on specific versions of things across repos, not the git head. Also, ideally, depend on properly installed binaries, not sources.
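A sketch of what pinning across repos can look like (package names and repo paths are hypothetical):

```sh
npm install @acme/billing-client@2.4.1        # consume a built, versioned artifact
# or, if sources must be vendored, pin an exact tag rather than a moving branch:
git -C vendor/billing-client fetch --tags
git -C vendor/billing-client checkout v2.4.1
```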
To be fair, this problem is not solved at all by monorepos. Basically, only careful use of gRPC (and similar technology) can help solve this… and it doesn’t really solve for application layer semantics, merely wire protocol compatibility. I’m not aware of any general comprehensive and easy solution.
In a polyrepo environment, either:
- B upgrades their endpoint in a backward compatible fashion
OR
- B releases a new version of their API at /api/2.0 but keeps /api/1.0 active and working until nothing depends on it anymore
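In the second option, both versions stay reachable during the migration window (URLs are hypothetical):

```sh
curl https://b.example.com/api/1.0/orders   # existing consumers keep working
curl https://b.example.com/api/2.0/orders   # new consumers opt in when ready
```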
Never expose your storage/backend type. Whenever you do, any consumers (your UI, consumers of your API, whatever) will take dependencies on it in ways you will not expect or predict. It makes changes somewhere between miserable and impossible depending on the exact change you want to make.
A UI-specific type means you can refactor the backend, make whatever changes you want, and have it invisible to the UI. When the UI eventually needs to know, you can expose that in a safe way and then update the UI to process it.
It's tempting to return a db table type but you don't have to.
Of course, it’s still a pretty rough and dirty way to do it. But it works for small/demo projects.
It's definitely not amazing, code generation in general will always have its quirks, but protobuf has some decent guardrails to keep the protocol backwards-forwards compatible (which was painful with Avro without tooling for enforcement), it can be used with JSON as a transport for marshaling if needed/wanted, and is mature enough to have a decent ecosystem of libraries around.
Not that I absolutely love it but it gets the job done.
Having the company website in the same repo means you can find branding material and company tone from blogs, meaning you can generate customer slides and video demos.
56 more comments available on Hacker News