$50 PlanetScale Metal Is GA for Postgres
Key topics
The Postgres community is abuzz about PlanetScale's new $50 Metal offering, with commenters hailing it as an "incredible deal for indiehackers." However, some users expressed concerns about the lack of single-instance deploys and potential noisy neighbor issues, to which PlanetScale's team responded that they've engineered protections against resource over-commitment and that CPU will be the bottleneck, not IOPS. As users dug deeper, they uncovered details about the underlying AWS EC2 instances and resource allocation, with PlanetScale revealing they run on dedicated instance types like r6id and i4i. The discussion highlights the trade-offs between high availability, durability, and cost, and why this new offering is generating excitement among developers.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 51s after posting
- Peak period: 59 comments in 0-3h
- Avg / period: 11.5 comments
Based on 69 loaded comments
Key moments
- Story posted: Dec 15, 2025 at 11:11 AM EST (18 days ago)
- First comment: Dec 15, 2025 at 11:12 AM EST, 51s after posting
- Peak activity: 59 comments in the 0-3h window, the hottest stretch of the conversation
- Latest activity: Dec 17, 2025 at 1:12 PM EST (16 days ago)
Even to take a case in point where durability is irrelevant - people building caches in Postgres (so as to only have one datastore / not need Redis as well). Not a big deal if the cache blows up - just force everyone to log in again. Would love to see the vendor reduce complexity on their end and pass through the savings to the customer.
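The "cache in Postgres" idea from the comment above can be sketched as a key/value table with a TTL. This is a minimal illustration using SQLite (stdlib) as a stand-in for Postgres; in Postgres you would likely use an `UNLOGGED` table (writes skip the WAL, and losing the data on a crash is fine since a cache is rebuildable). The `cache_set`/`cache_get` names are illustrative, not a real library's API.

```python
import sqlite3
import time

# SQLite stands in for Postgres here. In Postgres this would be
# CREATE UNLOGGED TABLE cache (...) so cache writes skip the WAL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT, expires_at REAL)"
)

def cache_set(key, value, ttl_seconds=300):
    # Upsert the entry with an absolute expiry timestamp.
    conn.execute(
        "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
        (key, value, time.time() + ttl_seconds),
    )

def cache_get(key):
    # Only return entries that have not yet expired.
    row = conn.execute(
        "SELECT value FROM cache WHERE key = ? AND expires_at > ?",
        (key, time.time()),
    ).fetchone()
    return row[0] if row else None

cache_set("session:42", "alice")
print(cache_get("session:42"))  # alice
print(cache_get("missing"))     # None
```

If the cache table is wiped, the worst case is exactly what the comment describes: sessions vanish and users log in again.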
If your or another customer's workload grows and needs to size up, we launch three whole new database servers of the appropriate size (whether that's more CPU+RAM, more storage, or both), restore the most recent backups there, catch up on replication, and then orchestrate changing the primary.
Downtime when you resize typically amounts to needing to reconnect i.e. it's negligible.
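Since the claim above is that resize downtime amounts to reconnecting, the client-side handling can be sketched as a generic reconnect-and-retry wrapper. This is a hedged sketch, not any specific driver's API: `with_reconnect`, `connect`, and the demo fakes are all illustrative names.

```python
import time

def with_reconnect(operation, connect, retries=5, backoff=0.1):
    """Run operation(conn); on connection failure, reconnect and retry.

    During a failover/resize window the only "downtime" the app sees
    is the time spent reconnecting.
    """
    conn = connect()
    for attempt in range(retries):
        try:
            return operation(conn)
        except ConnectionError:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
            conn = connect()
    raise RuntimeError(f"gave up after {retries} retries")

# Demo with a fake connection that fails once, as it might while
# the primary is being switched.
state = {"fails_left": 1}

def fake_connect():
    return "conn"

def fake_query(conn):
    if state["fails_left"] > 0:
        state["fails_left"] -= 1
        raise ConnectionError("server going away")
    return "ok"

print(with_reconnect(fake_query, fake_connect))  # ok
```

Most mature Postgres drivers and pools give you some version of this behavior out of the box; the point is only that a short primary switch looks like one failed query, not an outage.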
Would be curious to know what the underlying AWS EC2 instance is.
Is each DB on a dedicated instance?
If not, are there per-customer iops bounds?
From what I can tell, the "Metal" offering runs on nodes with directly attached NVMe rather than network-attached storage. That means there isn't a per-customer IOPS cap; they actually market it as "unlimited I/O" because you hit CPU before saturating the disk. The new $50 M-class clusters are essentially smaller versions of those nodes with adjustable CPU and RAM in AWS and GCP.
RE: EC2 shapes, it's not a shared EBS volume but a dedicated instance with local storage. BUT you'll still want to monitor capacity since the storage doesn't autoscale.
ALSO this pricing makes high-throughput Postgres accessible for indie projects, which is pretty neat.
So in the M-10 case, wouldn't this actually be somewhat misleading as I imagine hitting "1/8 vCPU" wouldn't be difficult at all?
You can get a lot more out of that CPU allocation with the fast I/O of a local NVMe drive than from the slow I/O of an EBS volume.
Just want to add that you don't necessarily need to invest in fancy disk-usage monitoring as we always display it in the app and we start emailing database owners at 60% full to make sure no one misses it.
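The 60%-full alert threshold mentioned above is easy to replicate locally if you do want your own monitoring. A minimal sketch using the stdlib, assuming a single filesystem to watch; the path and threshold are illustrative, not PlanetScale's internals.

```python
import shutil

# Alert once the filesystem crosses 60% full, mirroring the
# threshold described in the comment above.
ALERT_THRESHOLD = 0.60

def should_alert(path="/"):
    # shutil.disk_usage returns (total, used, free) in bytes.
    usage = shutil.disk_usage(path)
    fraction_used = usage.used / usage.total
    return fraction_used >= ALERT_THRESHOLD, fraction_used

alert, used = should_alert("/")
print(f"{used:.0%} full, alert={alert}")
```

On local-NVMe instances where storage does not autoscale, this is the number worth watching.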
Wouldn't this introduce additional latency among other issues?
Yes and no. In my AWS account I can explicitly pick an AZ (us-east-2a, us-east-2b or us-east-2c), but Availability Zones are not consistent between AWS accounts.
See https://docs.aws.amazon.com/ram/latest/userguide/working-wit...
I ask because we see it more often than not, and for that situation sharding the workload is the best answer. Why have one MySQL instance responding to requests when you could have 2, 4, 8 ... 128 MySQL instances responding as a single database instance? They also have the ability to vertically scale each of the shards in that database as needed.
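The sharding idea in the comment above boils down to deterministic key routing: a key always maps to the same one of N shards, so many instances can answer as a single logical database. A minimal sketch of hash-based routing; real systems like Vitess also handle resharding and query routing, which this does not show.

```python
import hashlib

NUM_SHARDS = 8

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    # Hash the key and take it modulo the shard count, so the
    # same key always routes to the same shard.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

print(shard_for("user:1001"))
print(shard_for("user:1001"))  # same key -> same shard, always
```

Each shard can then be scaled vertically on its own, which is the property the comment is pointing at.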
That's $54,348/year, not including the cost of benefits, not including stock compensation. Let's say you reserve 20% for benefits and that comes out to $43,478.40 in salary.
Besides the benefit of not needing the management / communication overhead of hiring somebody, do you know any DBAs willing to take a full-time job for $43,478.40 in salary?
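The arithmetic in the comparison above checks out; here it is worked through explicitly. The 20% benefits share is the comment's own assumption, and $54,348/yr is the figure the commenter is comparing against, not the $50/mo database price.

```python
# Reserve 20% of the annual figure for benefits; the rest is salary.
total_cost = 54_348.00    # annual figure from the comment
benefits_share = 0.20     # commenter's assumption
salary = total_cost * (1 - benefits_share)
print(f"${salary:,.2f}")  # $43,478.40
```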
The reality is that most databases are tiny as shit and most apps can tolerate the massive latency that the cloud-provider DBs offer.
It is why it is sorta funny we are rediscovering that non-network-attached storage is faster.
Also, this is a shared server, not a truly dedicated one like you'd get with bare-metal providers. So calling it "Metal" might be a misleading marketing trick, but if you want someone to always blame and don't mind overpaying for that comfort, then the managed option might be the right thing.
Apparently there are people who find this offering compelling. The lack of value is quite stunning to me.
- Aurora storage scales with your needs, meaning that you don't need to worry about running out of space as your data grows.
- Aurora will auto-scale CPU and memory based on the needs of your application, within the bounds you set. It does this without any downtime, or even dropping connections. You don't have to worry about choosing the right CPU and memory up-front, and for most applications you can simply adjust your limits as you go. This is great for applications that are growing over time, or for applications with daily or weekly cycles of usage.
The other Aurora option is Aurora DSQL. The advantages of picking DSQL are:
- A generous free tier to get you going with development.
- Scale-to-zero and scale-up, on storage, CPU, and memory. If you aren't sending any traffic to your database it costs you nothing (except storage), and you can scale up to millions of transactions per second with no changes.
- No infrastructure to configure or manage, no updates, no thinking about replicas, etc. You don't have to understand CPU or memory ratios, think about software versions, think about primaries and secondaries, or any of that stuff. High availability, scaling of reads and writes, patching, etc. is all built-in.
You're still sharing NVMe I/O, CPU, memory bandwidth, etc. Not having a VM isn't really the point.
How do cross-data-center nodes work?
https://planetscale.com/blog/postgres-18-is-now-available
asking for a friend that liked this space