Using the Expand and Contract Pattern for Schema Changes
Posted 2 months ago · Active about 2 months ago
prisma.io · Tech · story
Key topics
Database Migrations
Schema Changes
Devops
The article discusses the 'expand and contract pattern' for making schema changes in databases without downtime, and the discussion revolves around its implementation, variations, and related challenges.
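For readers who want the pattern's shape before diving into the thread, here is a minimal Alembic-style sketch of the three phases. The `users` table and the `first_name`/`last_name` to `full_name` change are invented for illustration and are not taken from the article; each phase would normally ship as its own revision, with application deploys in between.

```python
# Hypothetical Alembic migration bodies for the three phases; in practice each
# would live in its own revision file and be deployed separately, with
# application releases between them.
from alembic import op
import sqlalchemy as sa


def upgrade_expand() -> None:
    # Expand: add the new column alongside the old ones, nullable so that
    # not-yet-upgraded writers keep working unchanged.
    op.add_column("users", sa.Column("full_name", sa.Text(), nullable=True))


def upgrade_migrate() -> None:
    # Migrate: backfill rows written before the application started dual-writing.
    op.execute(
        "UPDATE users SET full_name = first_name || ' ' || last_name "
        "WHERE full_name IS NULL"
    )


def upgrade_contract() -> None:
    # Contract: once nothing reads or writes the old columns, tighten and drop.
    op.alter_column("users", "full_name", nullable=False)
    op.drop_column("users", "first_name")
    op.drop_column("users", "last_name")
```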
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion (42 comments)
First comment: 10d after posting
Peak period: 33 comments on Day 10
Avg / period: 10.5
Key moments
- Story posted: Oct 31, 2025 at 4:06 PM EDT (2 months ago)
- First comment: Nov 10, 2025 at 7:59 AM EST (10d after posting)
- Peak activity: 33 comments on Day 10, the hottest window of the conversation
- Latest activity: Nov 13, 2025 at 6:31 AM EST (about 2 months ago)
I call it the migration sandwich. (Nothing to do with the cube rule).
A piece of bread isn't a sandwich and a single migration in a tool like alembic isn't a "sandwich" either. You have a couple layers of bread with one or several layers of toppings and it's not a sandwich until it's all done.
People get a laugh out of the "idiot sandwich meme" and we always have a good conversation about what gnarly migrations people have seen or done (72+ hours of runtime, splitting to dozens or more tables and then reconstructing things, splitting things out to safely be worked on in the expanded state for weeks, etc).
I had never heard it called "expand and contract" before reading this article a few years ago.
What does everyone else call these?
Versioning is a concept where each version lives in non-intersecting time intervals.
This pattern is entirely focused on the fact that your structures' lifetimes must have non-empty intersections. It's close to the opposite.
Is it? Node.js publishes "Current", "LTS" and "Maintenance" versions, and there's always a reasonable time interval during which consumers typically upgrade from eg Maintenance to newer LTS or even Current. From the publishing side, that's very similar to "expand and contract", in temporarily expanding what's supported to include Current, and dropping support for oldest versions leaving Maintenance. It's continuous instead of ad hoc, and there are more than 2 versions involved, but the principle is basically the same (at least if you squint).
Though I guess if you're talking strictly about schema management strategies, then yeah, "versioning" might be very different from "expand contract", as you noted.
The main difference I see is one of focus: here the focus is on migrating a database without downtime or excessive global locks; keeping multiple versions of the schema is a detail.
Things can have similarities without being the same. Also not being unique is not a moral failure.
I think I am failing to understand your stance.
Patterns are necessarily reductionist, like any sort of comparison of things that aren't 100% similar.
There is value in recognizing patterns. They're useful for comprehension and memory.
I guess we disagree on the reasonable interpretations of this sentence.
But I've used green/blue and A/AB/B before to discuss the same thing.
“In programming, the strangler fig pattern or strangler pattern is an architectural pattern that involves wrapping old code, with the intent of redirecting it to newer code.”
https://en.wikipedia.org/wiki/Strangler_fig_pattern
The rest of the sequencing details follow from this idea.
What’s in the article, I know as the Strangler Fig Pattern.
The strangler fig pattern is mostly concerned with migrating from old software to new software, for example from a monolith to microservices. But I guess you can also apply it to database schemas.
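For anyone who hasn't seen the pattern in code, a minimal sketch of the "wrapping old code and redirecting it to newer code" idea; all names here are invented for illustration:

```python
# Minimal strangler-fig facade; every name here is invented. Callers depend on
# the facade only, so the routing rule can shift traffic to the new code path
# gradually and the legacy path can be deleted once nothing reaches it.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PricingFacade:
    legacy_price: Callable[[str], float]   # old implementation being strangled
    new_price: Callable[[str], float]      # its replacement
    use_new_path: Callable[[str], bool]    # rollout rule: flag, percentage, allowlist...

    def price(self, sku: str) -> float:
        if self.use_new_path(sku):
            return self.new_price(sku)
        return self.legacy_price(sku)
```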
> For column-level changes, this often means adding new columns to a table that have the characteristics you want while leaving the current columns as-is.
I think what makes it confusing is that their diagrams depict a completely separate schema, but what they describe is really just altering the existing schema.
[0]: https://stripe.com/blog/online-migrations
https://martinfowler.com/bliki/ParallelChange.html
Indeed, this pattern, in particular, is extremely useful in environments where you are trying to make changes to one part of a system while multiple deploys are happening across the entire system, or where a change requires updating a large number of clients that you don't directly control or that operate in a loosely connected fashion.
So, regardless of whether AWS RDS is your underlying database technology, plan to break these steps up into individual deployment steps. I have, in fact, done this with systems deployed over AWS RDS, but also with systems deployed to on-prem SQL Server and Oracle, to NoSQL systems (this is especially helpful in those environments), to IoT and mobile systems, to data warehouse and analysis pipelines, and on and on.
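To make the "multiple deploys in flight" point concrete, here is a hedged sketch of the dual-write and tolerant-read code that lives in the application during the expanded state; the `users` columns and the DB-API-style `db` handle are assumptions, not taken from the article or the comment:

```python
# Application-side sketch of a parallel change: while both column shapes exist,
# writers fill both and readers prefer the new one. The users table, its
# columns, and the DB-API-style `db` handle are all invented for illustration.
def save_user(db, user_id: int, first_name: str, last_name: str) -> None:
    # Dual write: old columns keep not-yet-upgraded readers working,
    # the new column lets upgraded readers stop depending on them.
    db.execute(
        "UPDATE users SET first_name = %s, last_name = %s, full_name = %s "
        "WHERE id = %s",
        (first_name, last_name, f"{first_name} {last_name}", user_id),
    )


def display_name(row: dict) -> str:
    # Tolerant read: prefer the new column, fall back while the backfill runs.
    return row.get("full_name") or f"{row['first_name']} {row['last_name']}"
```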
Like you essentially define the steps in a Temporal-like workflow, and then it does all the work of expanding, verifying, and contracting.
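A real Temporal workflow would get durability and retries from the SDK; as a rough stand-in (deliberately not the Temporal API), the orchestration described here looks something like this:

```python
# Hand-rolled stand-in for a Temporal-style workflow (not the Temporal SDK):
# each phase is an idempotent step, and a failed step is retried rather than
# restarting the whole migration. The step callables are placeholders.
import time
from typing import Callable, Sequence


def run_migration(steps: Sequence[tuple[str, Callable[[], None]]],
                  max_attempts: int = 3,
                  backoff_seconds: float = 5.0) -> None:
    for name, step in steps:
        for attempt in range(1, max_attempts + 1):
            try:
                step()
                break
            except Exception as exc:
                if attempt == max_attempts:
                    raise RuntimeError(f"step {name!r} failed") from exc
                time.sleep(backoff_seconds * attempt)


# Usage sketch (the callables would wrap the real work):
# run_migration([
#     ("expand",   apply_expand_migration),
#     ("backfill", backfill_new_column),
#     ("verify",   assert_old_and_new_agree),
#     ("contract", apply_contract_migration),
# ])
```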
This is the interesting part where the article's process matters: how do you make incompatible changes without breaking clients?
I feel like maybe they should invest more R&D in their migrations technology? The ORM is pretty great.
I like the prisma schema first way of specifying my models too. It’s pretty intuitive and readable, it centralizes all my models in one place.
The migration system could be more advanced but does the job. Multiple production projects I worked heavily on use it.
Overall I think it’s very well designed software
2) Migration involves the problem of mixing a migration write with an actual live, in-flight mutation. Cassandra would solve this with additional per-cell write-time tracking or a migrated-vs-new mutation flag.
3) And then you have deletes. So you'll need a tombstone mechanism, because if a live delete of a cell value is overwritten by a migrated value, then data that was deleted comes back to life.
https://ris.utwente.nl/ws/portalfiles/portal/275963001/PDEng...
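A hedged sketch of the merge rules implied by points 2 and 3 above; the cell layout is invented and is not Cassandra's actual storage model:

```python
# Invented cell layout, not Cassandra's storage model: the merge keeps a live,
# newer write over a migrated one (point 2) and never lets a migrated value
# overwrite a tombstone, so deleted data cannot come back to life (point 3).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Cell:
    value: Optional[str]      # None means tombstone (the cell was deleted)
    write_time_us: int        # per-cell write timestamp, microseconds
    from_migration: bool      # True if produced by the migration copier


def merge(existing: Optional[Cell], incoming: Cell) -> Cell:
    if existing is None:
        return incoming
    # Point 3: a tombstone always beats a migrated value.
    if existing.value is None and incoming.from_migration:
        return existing
    # Point 2: otherwise last-write-wins on the per-cell write time.
    return incoming if incoming.write_time_us > existing.write_time_us else existing
```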
The only thing I would add is minor and major version changes, so it's clear how the different stages are labeled and how you track when you're ready to backfill.