Replacing a $3000/mo Heroku Bill with a $55/mo Server
Posted 2 months ago · Active 2 months ago
Source: disco.cloud · Tech story · High profile
Tone: calm/mixed · Debate score: 70/100
Key topics: Cloud Computing, Cost Optimization, Self-Hosting
A company replaced their $3000/month Heroku bill with a $55/month server, sparking discussion on the trade-offs between convenience, cost, and maintenance responsibilities.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 10m after posting
Peak period: 88 comments in 0-3h
Average per period: 11.4 comments
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Oct 21, 2025 at 4:28 PM EDT (2 months ago)
2. First comment: Oct 21, 2025 at 4:38 PM EDT (10m after posting)
3. Peak activity: 88 comments in the 0-3h window, the hottest stretch of the conversation
4. Latest activity: Oct 23, 2025 at 2:53 PM EDT (2 months ago)
ID: 45661253 · Type: story · Last synced: 11/22/2025, 11:17:55 PM
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
Lots of conversation & discussion about self-hosting / cloud exits these days (pros, cons, etc.) Happy to engage :-)
Cheers!
Would be great to have a comparison on the main page of Disco
I am building https://github.com/openrundev/openrun/. Main difference is that OpenRun has a declarative interface, no need for manual CLI commands or UI operations to manage apps. Another difference is that OpenRun is implemented as a proxy, it does not depend on Traefik/Nginx etc. This allows OpenRun to implement features like scaling down to zero, RBAC access control for app access, audit logs etc.
Downside with OpenRun is that it does not plan to support deploying pre-packaged apps, and there is no Docker Compose support. Streamlit/Gradio/FastHTML/Shiny/NiceGUI apps for teams are the target use case. Coolify has the best support and catalog of pre-packaged apps.
https://news.ycombinator.com/item?id=44292103
https://news.ycombinator.com/item?id=44873057
I'd say the main differences are that we 1) offer a more streamlined CLI and UI rather than extensive app/installation options and 2) have an API-key-based system that lets team members collaborate without having to manage SSH access/keys.
Generally speaking, I'd say our approach and tooling/UX tends to be more functional/pragmatic (like Heroku) than one with every possible option.
The load average in htop is measured in CPU cores, not as a percentage. So if you have 8 CPU cores like in your screenshot, a load average of 0.1 means 10% of one core - just 1.25% (10% / 8) of total CPU capacity. Even better :).
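To make the arithmetic concrete, here is a quick shell check (a sketch; assumes a Linux host with /proc and awk available):

```bash
# compute the 1-minute load average as a percentage of total core capacity
load=$(cut -d' ' -f1 /proc/loadavg)   # e.g. 0.10
cores=$(nproc)                        # e.g. 8
awk -v l="$load" -v c="$cores" 'BEGIN { printf "%.2f%% of total CPU capacity\n", 100 * l / c }'
```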
Cool blog! I've been having so much success with this type of pattern!
Interesting project. Do you have any screenshots of the UI of Disco?
Oh, there’s actually this tutorial that shows a tiny preview of it:
https://disco.cloud/docs/deployment-guides/meilisearch
Thanks for the reminder!
Hosting staging envs in pricey cloud envs seems crazy to me but I understand why you would want to because modern clouds can have a lot of moving parts.
I'd still like a staging + prod, but keeping the dev environments on a separate beefy server seems smart.
What I'm referring to - and what the article is referring to - is the default assumption that the cloud is more performant than running directly on hardware, even for the most basic to intermediate loads.
It's very easy to end up paying per-transaction cloud prices that are greatly inflated compared to what the work actually costs and how many CPU cores it actually uses at any given time.
It offloads things like:
- Power usage
- Colo costs
- Networking (a big one)
- Storage (SSD wear / HDD pools)
- etc.
It is a long list, but what it doesn't allow you to do is make trade-offs like spending way less and accepting downtime if your switch dies, etc.
For a staging env these are things you might want to do.
It's fun the first time, but becomes an annoying faff when it has to be repeated constantly.
In Heroku, Vercel and similar, you git push and you're running. On a Linux server you set up the OS, the server authentication, the application itself, the systemctl jobs, the reverse proxy, the code deployment, the SSL key management, the monitoring, etc.
I still do prefer a linux server due to the flexibility, but the UX could be a lot better.
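As a sketch of what just one of those steps (the systemctl job) looks like by hand - the unit name, user, and paths below are hypothetical placeholders, not from the article:

```bash
# write a minimal systemd unit for the app and start it on boot
sudo tee /etc/systemd/system/myapp.service > /dev/null <<'EOF'
[Unit]
Description=myapp web service
After=network.target

[Service]
User=myapp
WorkingDirectory=/srv/myapp
ExecStart=/srv/myapp/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
```

Multiply that by the reverse proxy, TLS, deploys, and monitoring, and the "annoying faff" adds up - which is the gap the PaaS tools and the automation discussed below try to close.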
And the overlap between what Nix does and what the 'cloud' does for you is only partial. (Eg it can still make sense to use Nix in the cloud.)
Certainly true, but there are a whole lot of tools to automate those operations so that you aren't doing them constantly.
Ansible basically automates the workflow of: log in to host X, do step Z (if Y is not already present). It has broad support for distros and OSes. It's mostly imperative and can be used like a glorified task runner.
Salt lets you mostly declaratively describe the state of a system. It comes with an agent/central-host system for distributing this configuration from the central host to the minions (push).
Puppet is also declarative and also comes with an agent/central host system but uses a pull based approach.
Specialized/exotic options are also available, like mgmt or NixOS.
Actually I am looking for tools to automate DevOps and security for self-hosting
It is in general the simplest of these systems to get started with and you should be able to incrementally adopt it. There is also a plethora of free online resources available for it.
Ansible-Lockdown is another excellent example of how Ansible can be used to harden servers via automation.
Ansible can also do that, on top of literally anything else you could want - network configuration, infrastructure automation, deployment pipelines, migrations, anything. As always, that flexibility can be a blessing or a curse, but I think Ansible manages it well because it's so KISS.
RedHat's commercial Ansible Automation Platform gives you more power for when you need it, but you don't need it starting out.
The person you're replying to mentioned a self-hosting use case, so this probably isn't relevant for that, but Ansible can also be configured for a pull approach, which is useful for scaling.
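For a feel of how little Ansible it takes to cover the basics mentioned above, here is a minimal sketch (assumes an inventory file listing hosts under a `web` group and working SSH access; package and group names are illustrative):

```bash
# ad-hoc: install nginx and make sure it is running on every host in the "web" group
ansible web -i inventory.ini -b -m apt -a "name=nginx state=present update_cache=yes"
ansible web -i inventory.ini -b -m systemd -a "name=nginx state=started enabled=yes"

# the same steps belong in a playbook once they stop being one-offs:
#   ansible-playbook -i inventory.ini site.yml
# and the pull approach mentioned above runs from the node itself, e.g. via cron:
#   ansible-pull -U https://example.com/your-config-repo.git site.yml
```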
I have to ask - do scripts not work for you?
When I had to do this back in 2005 it was automated with 3 main steps:
1. A preseed (IIRC) Debian installation disc (all the packages I needed were installed at install time), and
2. Which included a first-boot bash script that retrieved pre-compiled binaries from our internal ftp site, and
3. A final script that applied changes to the default config files and ran a small test to ensure everything started.
Zero human interaction after powering a machine on with the disc in the drive.
These days I would do it even better (systemd configs, Nix perhaps, text files (such as systemd units) can be retrieved automagically after boot, etc.).
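A rough modern equivalent of that first-boot script might look like this (purely illustrative - the artifact URL, paths, and health-check port are placeholders, not the original 2005 setup):

```bash
#!/usr/bin/env bash
# first-boot provisioning sketch: fetch prebuilt binaries, drop in config, smoke test
set -euo pipefail
curl -fsSL -o /tmp/myapp.tar.gz https://artifacts.example.internal/myapp/latest.tar.gz
mkdir -p /opt/myapp
tar -xzf /tmp/myapp.tar.gz -C /opt/myapp                       # pre-compiled binaries + unit file
install -m 0644 /opt/myapp/conf/myapp.conf /etc/myapp.conf     # apply default config overrides
install -m 0644 /opt/myapp/conf/myapp.service /etc/systemd/system/myapp.service
systemctl daemon-reload
systemctl enable --now myapp
curl -fsS http://localhost:8080/health                         # smoke test: fail loudly if the service isn't up
```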
No. It covered setting up all the applications needed as well (nginx, monitoring agent, etc), installing keys/credentials.
What did parent mention that can't be covered by the approach I used?
Sure you can script all the things into 3 steps, just like you can draw an owl with a couple circles.
Maintain, maybe. The setup for everything extra can be scripted, and included a few packages I had to build from source myself because there was no binary download.
I'm not a PaaS user, and I encourage people to avoid vendor lock-in and be in control of their own destiny. It takes work though, and you need to sweat the details if you care about reliability and security, which continue to be problem areas for more DIY solutions.
If people aren't willing to put in the work, I'd rather they stick to the managed services so they don't contribute to eroding the already abysmal trust of the industry at large.
Cloud is easy until it's not. For 90% of us, maybe we don't need multi-region with hot and cold storage.
For those that need it, it's necessary.
I bet you could figure out `apt install nginx` and a basic config pretty quickly, definitely faster than a web dev could learn game programming. “What do you mean, I have to finish each loop in 16 msec?”
Configuring a web server is a low-difficulty task that should be within reach of any good software developer with 3 days to study it. It's absurd for a developer who needs a web server to insist on paying a large rent and ceding control to some 3rd party instead of just doing it.
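For anyone skeptical, the "apt install nginx and a basic config" path really is small - a sketch, with the domain and upstream port as placeholders:

```bash
sudo apt install -y nginx
# minimal reverse-proxy site: forward port 80 traffic to the app on 127.0.0.1:8000
sudo tee /etc/nginx/sites-available/myapp > /dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -sf /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
sudo nginx -t && sudo systemctl reload nginx   # validate the config before reloading
```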
It’s a lot cheaper than me learning to bake as well as he does—not to mention dedicating the time every day to get my daily bread—and I’ll never need bread on the kind of scale that would make it worth my time to do so.
But the cloud is different. None of the financial scale benefits are passed on to you. You save serious money running it in-house. The arguments around scale have no validity for the vast, vast majority of use cases.
Vercel isn't selling bread: they're selling a fancy steak dinner, and yes, you can make steak at home for much less, and if you eat fancy steak dinners at fancy restaurants every night you're going to go broke.
So the key is to understand whether your vendors are selling you bread, or a fancy steak dinner, and to not make the mistake of getting the two confused.
I wonder, though—at the risk of overextending the metaphor—what if I don’t have a kitchen, but I need the lunch meeting to be fed? Wouldn’t (relatively expensive) catering routinely make sense? And isn’t the difference between having steak catered and having sandwiches catered relatively small compared to the alternative of building out a kitchen?
What if my business is not meaningfully technical: I’ll set up applications to support our primary function, and they might even be essential to the meat of our work. But essential in the same way water and power are: we only notice it when it’s screwed up. Day-to-day, our operational competency is in dispatching vehicles or making sandwiches or something. If we hired somebody with the expertise to maintain things, they’d sit idle—or need a retainer commensurate with what the Vercels and Herokus of the world are charging. We only need to think about the IT stuff when it breaks—and maybe to the extent that, when we expect a spike, we can click one button to have twice as much “application.”
In that case, isn’t it conceivable that it could be worth the premium to buy our way out of managing some portion of the lower levels of the stack?
In practice, there are two situations where cloud makes sense:
1. You infrequently need to handle traffic that unpredictably bursts to a large multiple of your baseline. (Consider: you can overprovision your baseline infrastructure by an order of magnitude before you reach cloud costs.)
2. Your organization is dysfunctional in a way that makes provisioning resources extremely difficult, but cloud can provide an end run around that dysfunction.
Note that both situations are quite rare. Most industries that handle that sort of large burst are very predictable: event management knows when a client will be large and provisions ticket-sales infra accordingly, e-commerce knows when the big sale days will be, and so on. In the second case, whatever organizational dysfunction made the cloud appealing will likely wrap itself around the cloud initiative as well.
Water is cheap, yes. Salt isn't all that cheap, but you only need a little bit.
> [...] and I’ll never need bread on the kind of scale that would make it worth my time to do so.
If you knead bread by hand, it's a very small-scale affair. Your physique and time couldn't afford you large-scale bread making. You'd need a big special mixer and a big special oven for that. And you'd probably want a temperature- and moisture-controlled room just for letting your dough rise.
https://postmates.com/store/restaurant-depot-4538-s-sheridan...
I blush to admit that I do from time to time pay $21 for a single sourdough loaf. It’s exquisite, it’s vastly superior to anything I could make myself (or anything I’ve found others doing). So I’m happy to pay the extreme premium to keep the guy in business and maintain my reliable access to it.
It weighs a couple of pounds, though I’m not clear how the water weight factors in to the final weight of a loaf. And I’m sure that flour is fancier than this one. I take your point—I don’t belong in the bread industry :)
(Similarly to how you pay Amazon or Google etc not just for the raw cloud resources, but for the system they provide.)
I grew up in Germany, but now live in Singapore. What's sold as 'good' sourdough bread here would make you fail your baker's training in Germany: huge holes in the dough and other defects. How am I supposed to spread butter over this? And Mischbrot, a mixture of rye and wheat, is almost impossible to find.
So we make our own. The goal is mostly to replicate the everyday bread you can buy in Germany for cheap, not to hit any artisanal highs. (Though they are massively better IMHO than anything sold as artisanal here.)
Interestingly, the German breads we are talking about are mostly factory made. Factory bread can be good, if that's what customers demand.
See https://en.wikipedia.org/wiki/Mischbrot
Going on a slight tangent: with tropical heat and humidity, non-sourdough bread goes stale and moldy almost immediately. Sourdough bread can last for several days or even a week without going moldy in a paper bag on the kitchen counter outside the fridge, depending on how sour you go. If you are willing to toast your bread, going stale during that time isn't much of an issue either.
(Going dry is not much of an issue with any bread here--- sourdough or not, because it's so humid.)
Of course, the difference between sourdough and anything else is astonishing, I just can't comprehend someone charging $21 for it!
also skills, some people just bake better than others
It's actually not too bad, if you look at the capital cost of a bread factory amortised over each loaf of bread.
The equipment is comparatively more expensive for a home baker who only bakes perhaps two loaves a week.
Some skills are required, but it's really not that hard once you learn the technique and have done it a few times.
Wait, what? Salt is literally one of the cheapest of all materials per kilogram that exists in all contexts, including non-food contexts. The cost is almost purely transportation from the point of production. High quality salt is well under a dollar a pound. I am currently using salt that I bought 500g for 0.29 euro. You can get similar in the US (slightly more expensive).
This was a meme among chemical engineers. Some people complain in reviews on Amazon that the salt they buy is cut with other chemicals that make it less salty. The reality is that there is literally nothing you could cut it with that is cheaper than salt.
But sure, it's cheap otherwise. Point granted.
One way or another, salt is not a major driver of cost in bread, because there's relatively little salt in bread. (If there's 1kg of flour, you might have 20g of salt.)
I think this is partly responsible for the increased popularity of SQLite as a backend. It's super simple, and Litestream for recovery isn't that complicated.
Most apps don't need 5 9s, but they do care about losing data. Eliminate the possibility of losing data, without paying tons of $ to also eliminate potential outages, and you'll get a lot of customers.
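A minimal sketch of that SQLite + Litestream setup (the bucket, region, and database path are assumptions, not from the comment):

```bash
# continuously replicate the SQLite file to S3-compatible storage
sudo tee /etc/litestream.yml > /dev/null <<'EOF'
dbs:
  - path: /var/lib/myapp/app.db
    replicas:
      - type: s3
        bucket: myapp-backups
        path: app.db
        region: us-east-1
EOF
litestream replicate -config /etc/litestream.yml
# on a fresh machine, recover the latest copy before starting the app:
#   litestream restore -config /etc/litestream.yml /var/lib/myapp/app.db
```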
Is it mostly developer insecurity, or mostly tech leadership insecurity?
You get X resources in the cloud and know that a certain request/load profile will run against it. You have to configure things to handle that load, and are scored against other people.
Things like Lambda do fit in this model, but they are too inefficient to model every workload.
Amazon lacks vision.
* The big caveat: If you don't incur the exact same devops costs that would have happened with a linux instance.
Many tools (containers in particular) have cropped up that have made things like quick, redundant deployment pretty straightforward and cheap.
Cloud isn't worth it until suddenly it is because you can't deploy your own servers fast enough, and then it's worth it until it exceeds the price of a solid infrastructure team and hardware. There's a curve to how much you're saving by throwing everything in the cloud.
As cloud marches on it continues to seem like a grift.
Breaking into a home is relatively easy.
And unless you live in the US and are willing to actually shoot someone (with all the paperwork that entails, as well as physical and legal risks), the fact is that you can't actually stop a burglary.
It used to be called 3 laptops, a power scrubber, and a backup battery, if you wanted to go self-hosting things. If you were fancy, you had two servers.
The cloud costs includes everything.
As an example: my Macbook Pro from 2015 had 16 GiB RAM, and that's what my MacBook Air from 2025 also has.
Oh, and the new machine has unified RAM. The old machine had a bit of extra RAM in the GPU that I'm not counting here.
As far as I can tell, the new RAM is a lot faster. That counts for something. And presumably also uses less power.
Simplicity is uncomfortable to a lot of people when they're used to doing things the hard way.
Today the smallest, and even large, AWS machines are a joke - comparable to anything from a mobile phone from 15 years ago to a terrible laptop today - and about three to six months of rent costs as much as buying the hardware outright.
If you're on the cloud without getting a 75% discount, you will save money and headcount by doing everything on prem.
Quick question: how long would it take to provision and set up another server if this one dies?
But to provision a new server, as these are "stateless" (per 12 Factor) servers, it's just 1) get a VPS 2) install Docker+Disco using our curl|sh install script 3) authorize github 4) deploy a "project" (what we call an app), setting the env vars.
All in all ~10 minutes for a new machine.
[0] https://github.com/gregsadetsky/example-flask-site/blob/main...
Re Load balancing for example, Disco is built on top of Docker Swarm, so you can add nodes (ie machines) to scale horizontally - `disco nodes:add root@<ip>`
For monitoring/alerting, we offer some real time cpu/memory metrics (ie `docker stats`) and integrate with external syslog services.
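For reference, the kind of per-container snapshot `docker stats` gives you (standard Docker CLI, shown here in non-streaming mode):

```bash
# one-shot CPU/memory snapshot for every running container
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```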
Do you have specific use cases in mind which current PaaS providers satisfy? Would you say that these kinds of concerns are what's holding you back from leaving Heroku or others (and are you considering leaving because of price, support, etc.)? Cheers
How do I harden the server, back it up, etc? Basically the layer below Disco, to go beyond running it as a "toy"
This is not a dig at Disco, I run into the same issue with virtually any other self-hosted PaaS I could find.
Our philosophy is built on the "cattle, not pets" [0] and 12-factor [1] app methodologies. To some extent, the Disco server itself should be treated as disposable.
Disco runs your applications, which are just deployments of your code (ie git pulls). There's nothing on the server itself to back up. If a server were to die, you'd spin up a new one, run the install.sh script, and redeploy your apps in about 15 minutes.
For application data, our stance is that you should use a dedicated, managed database provider for prod workloads. While we can run a "good enough" Postgres as noted, we treat that as a dev/staging tool. Disco handles the stateless application layer; you should entrust your critical stateful data to a service that specializes in that.
Finally, re: security, we recommend a fresh Ubuntu 24.04 LTS server, which handles its own OS security updates. Disco only exposes the necessary web and SSH ports, so the attack surface is minimal by default.
[0] https://cloudscaling.com/blog/cloud-computing/the-history-of...
[1] https://12factor.net/
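To complement that answer, a minimal hardening pass on a fresh Ubuntu host usually boils down to something like this (a sketch using common defaults; the ports and tools are assumptions, not Disco's official guidance):

```bash
# enable automatic security updates and close everything except SSH and web traffic
sudo apt install -y unattended-upgrades ufw
sudo dpkg-reconfigure -f noninteractive unattended-upgrades
sudo ufw default deny incoming
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
```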
Which means that if they want to test what it will look like running in the cloud for prod, they are either going to need a pre-prod environment or go YOLO.
We used to be on Heroku and the cost wasn't just the high monthly bill - it was asking "is this little utility app I just wrote really worth paying $15/month to host?" before working on it.
This year we moved to a self-hosted setup on Coolify and have about 300 services running on a single server for $300/month on Hetzner. For the most part, it's been great and let us ship a lot more code!
My biggest realization is that for an organization like us, we really only need 99% uptime on most of our services (not 99.99%). Most developer tools are around helping you reach 99.99% uptime. When you realize you only need 99%, the world opens up.
Disco looks really cool and I'm excited to check it out!
(Just remember to take regular backups now, so that when this 5 year deal expires you don’t get into the same situation again :-)
We know of two similar cases: a bootcamp/dev school in Puerto Rico that lets its students deploy all of their final projects to a single VPS, and a Raspberry Pi that we've set up at the Recurse Center [0] which is used to host (double checking now) ~75 web projects. On a single Pi!
[0] https://www.recurse.com/
> Even with all 6 environments and other projects running, the server's resource usage remained low. The average CPU load stayed under 10%, and memory usage sat at just ~14 GB of the available 32 GB.
If you can fit them all on a 4 cpu / 32gb machine, you can easily forgo them and run the stack locally on a dev machine. IME staging environments are generally snowflakes that are hard to stand up (no automation).
$500/month each is a gross overpayment.
Not if you're running with external resources of specific type, or want to share the ongoing work with others. Or need to setup 6 different projects with 3 different databases at the same time. It really depends on your setup and way of working. Sometimes you can do local staging easily, sometimes it's going to be a lot of pain.
Especially when I go look at the site in question (idealist.org) and it seems to be a pretty boring job board product.
As for the staging servers, for each deployment, it was a mix of Performance-M dynos, multiple Standard dynos, RabbitMQ, a database large enough, etc. - it adds up quickly.
Finally, Idealist serves ~100k users per day - behind the product is a lot of boring tech that makes it reliable & fast. :-)
That's more than 1/3 of the cost of a developer there.
That will save you maybe a week of a person's work to set things up and half a day every couple of months to keep it running. Rounding way up.
Not free, it became a productivity boost.
You now have a $35k annual budget for the maintenance, other overhead, and lost productivity. What do you spend it on?
> The team also took on responsibility for server monitoring, security updates, and handling any infrastructure issues themselves
For a place that’s paying devs $150k a year that might math out. It absolutely does not for places paying devs $250k+ a year.
One of the great frustrations of my mid career is how often people tried to bargain for more speed by throwing developers at my already late project when what would have actually helped almost immediately was more hardware and tooling. But that didn’t build my boss’ or his bosses’ empires. Don’t give me a $150k employee to train, give me $30k in servers.
Absolutely no surprise at all when devs were complicit with Cloud migrations because now you could ask forgiveness instead of permission for more hardware.
But I've migrated plenty of companies off custom deployment setups to PaaS and told many CEOs simply what OP above has shared. Even a part-time DevOps engineer is still $60,000 a year, and that can buy us a LOT on PaaS. Using PaaS you can have effectively zero DevOps; I've also trained non-technical people on how to scale their own servers if no devs are around, because you just have a web-based UI slider.
I consider myself a developer who cares more about the business, risk, profits and runway. A lot of developers don't share this mentality (which is fine btw - you always need engineers who like engineering for engineering's sake), but in meetings you will have a hard time beating me in an argument if you try to say that running servers ourselves would be "cheaper", or even faster, safer, or more stable. (Obviously not in all situations, but in most, for modern CRUD web apps that don't require complicated compute setups.)
I'm probably being overly antagonistic, forgive me for that, though highly recommend questioning the real cost of running your own setups.
Volunteering to be preempted by broken systems more often is a sucker’s bet. Solve problems so they stay solved. It’s more work now, but reduces the interest rate on past work so you can get new things done.
Just something to consider if you are in a professional environment before switching your entire infra: maintenance cost is expensive. I strongly suggest throwing man-days into your cost calculation.
To prevent security vulnerabilities, the team will need to write some playbooks to auto-update your machines regularly, hoping for no breaking changes - or instead write a pipeline for immutable OS image updates. And it often means testing on an additional canary VM first.
Scaling up the VM from a compute point of view is not that straightforward either, and will require, depending on the provider, either downtime or migrating the entire deployment to a new instance.
Scaling from a disk size point of view, you will need to play with filesystems.
And depending on the setup you are using, you might have to manage Let's Encrypt, authentication and authorization, secrets vaults, etc. (here at least Disco manages the SSL certs for you).
Only if those man-days actually incur a marginal cost. If it's just employees you already have spending their time on things, then it's not worth factoring in because it's a cost you pay regardless.
It's precisely why we moved from a self-hosted demo environment server to Heroku - the developers who had both the skills to manage a server and enough seniority to have access across all the different projects could bring in more by building.
This part can be outsourced to a PaaS company, so that the company engineers can be focused on what is the company actually making money from.
If you are small enough, you are not going to be truly affected by downtime. If you are just a little bigger, a single hot spare is going to be sufficient.
The place where you get dinged is heavy growth in personnel and bandwidth. You end up needing to solve CPU bound activities quicker because it hurts the whole system. You need to start thinking about sticky round robin load balancing and other fun pieces.
This is where the cloud can allow you to trade money for velocity. Eventually, though, you will need to pay up.
That said, the average SaaS can go a long way with a single server per product.
For example, the "Bridging the Gap: Why Not Just Docker Compose?" section is a 1:1 copy of the points in the "Powerful simplicity" on the landing page - https://disco.cloud/
And this blog post is the (only) case study that they showcase on their main page.
- ...
I'm kidding :-)
Our library is open source, and we're very happy and proud that Idealist is using us to save a bit of cash. Is it marketing if you're proud of your work? :-) Cheers
Marketing should be marketing and clearly so. Tech blogs are about sharing information with the community (Netflix Tech blog is a good example) NOT selling something. Marketing masquerading as a tech blog is offputting to a lot of people. People don't like being fooled with embedded advertising and putting ad copy into such pieces is at best annoying.
https://netflixtechblog.com/
This seems like a good idea to have plentiful dev environments and avoid a bad pricing model. If your production instance is still on Heroku, you might still want a staging environment on Heroku since a Hetzner server and your production instance might have subtle differences.
Dokku can be an option if you need to maintain Heroku-style endpoints.
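For context, the Heroku-style workflow Dokku preserves is roughly this (the host and app name are placeholders):

```bash
# create the app on the Dokku host, then deploy with a plain git push
ssh dokku@server.example.com apps:create myapp
git remote add dokku dokku@server.example.com:myapp
git push dokku main
```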
396 more comments available on Hacker News