Self-Hosting Is Being Enshittified
Key topics
The notion that self-hosting is being "enshittified" has sparked a lively debate, with some commenters questioning the connection between the author's concerns about DRAM prices and the actual state of self-hosting software. While one commenter defended the article as a year-in-review with a scattering of topics, others chimed in with their own takes, ranging from the inevitability of "enshittification" to the freedom that self-hosting affords, namely choosing when to update software. As one commenter wryly noted, self-hosting's appeal lies in its flexibility, but others saw it as a form of "maximalism" that expects open-source developers to be perpetually beholden to users. The discussion highlights the complexities and trade-offs inherent to self-hosting in today's tech landscape.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 53m after posting
- Peak period: 109 comments (Day 1)
- Avg / period: 20
Based on 120 loaded comments
Key moments
- 01 Story posted: Dec 28, 2025 at 9:00 PM EST (13 days ago)
- 02 First comment: Dec 28, 2025 at 9:54 PM EST (53m after posting)
- 03 Peak activity: 109 comments in Day 1 (the hottest window of the conversation)
- 04 Latest activity: Jan 7, 2026 at 7:21 AM EST (3d ago)
That’s about all I’ll say though, not my article.
Less cutely, this is an interesting topical site/newsletter => https://selfh.st
That's what we say it's about. But it's really about open source devs being our slaves forever. Get to work, Mattermost! (whip crack)
> Mattermost Entry gives small, forward-leaning teams a free self-hosted Intelligent Mission Environment to get started on improving their mission-critical secure collaborative workflows. Entry has all features of Enterprise Advanced with the following server-wide limitations and omissions:
https://docs.mattermost.com/product-overview/editions-and-of...
Sounds like some kind of parody of enterprise software.
I saw "we're happy to pay for it" and thought they were paying for it. They're not, yet.
> What This Means for Existing Deployments
> Paid Customers: No action required—your deployments are unaffected.
[1] https://forum.mattermost.com/t/mattermost-v11-changes-in-fre...
You also realistically can't fork things unless multiple people do, and they all stay interested in the fork.
I assumed that they were being forced by the copyright mafia, but they’re perfectly capable of making these decisions on their own.
Yes, no more DynDNS free accounts... but you can still use afraid.org, or maybe Cloudflare Tunnels?
And in some cases nowadays you can get away with
docker-compose up
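To make that concrete, here is a minimal sketch of such a one-file stack; the service (Jellyfin), image, port, and paths are assumptions picked for illustration, not anything prescribed in the thread:

```yaml
# docker-compose.yml, a hypothetical single-service setup
services:
  jellyfin:
    image: jellyfin/jellyfin:latest   # media server, stands in for any app
    ports:
      - "8096:8096"                   # web UI, exposed on the LAN
    volumes:
      - ./config:/config              # settings survive container upgrades
      - /mnt/media:/media:ro          # media library, mounted read-only
    restart: unless-stopped
```

Then `docker-compose up -d` starts it, and `docker-compose pull && docker-compose up -d` updates it, on your schedule rather than a vendor's.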
And are some of those things, like MinIO and Mattermost, complaints about the free tier or complaints about self-hosting? I can't tell.
Running an OS with a GUI in 16-32 MB of RAM...
Memory management for programs...
If the source code is available for you to fork, modify, and maintain as you see fit, what's the complaining really about?
I think co-management is going to be the next paradigm.
Unless you have a heavy-duty pipe to your prem, you're just risking all kinds of headaches, and you're going to have to put your stuff behind Cloudflare anyway; if you're doing that, why not use a VPS?
It's just not practical for someone to run a little blog or app that way.
Take file storage: Some folks find Google Drive and similar services unpalatable because they can and will scan your content. Setting up Nextcloud or even just using file sharing built into a consumer router is pretty easy.
You don't need to rely on Cloudflare, either. Some routers come with VPN functionality or can have it added.
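For the VPN route, a sketch of how small the moving parts are, assuming plain WireGuard rather than any particular router firmware; keys, addresses, and port are placeholders:

```ini
# /etc/wireguard/wg0.conf, hypothetical home-server side
[Interface]
PrivateKey = <server-private-key>
Address = 10.8.0.1/24
ListenPort = 51820

[Peer]
# a roaming laptop or phone
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

`wg-quick up wg0` brings it up, and the only thing facing the internet is a single UDP port.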
The self-hosting most people talk about when they talk about self-hosting is very practical.
...today.
If you're self-hosting, do you need 640K of RAM?
You can buy a "lightly used" Dell OptiPlex with 8 GB of RAM for like $40, which will cover all your self-hosting needs today.
For bulk storage / logs I toss in a few of whatever the cheapest drives are right now.
Have lots of spares for failures… haven’t had one failure since 2019.
For example, Micron didn't think about any alternatives for the consumer retail market. They just dumped it entirely.
"Plex added a paid license for remote streaming, a feature that was previously free. And then Plex decided to also sell personal data — I sure love self-hosted software spying on me."
How is it "self-hosted" if it's "remote streaming?" And if you're hosting it, you can throttle any outgoing traffic you want. Right?
The only other examples are Mattermost and MinIO... which I don't know much about, but again: Aren't you in control of your own host?
This article is lame. How about focusing on back-ends that pretend to support self-hosting but make it difficult by perpetuating massive gaps in their documentation (looking at you, Supabase)?
You host the Plex service with your media library. Plex allows you to stream without opening up your firewall to others. Not sure how it works exactly, because I never hosted it myself.
It relies on their hosted services/infrastructure. I avoid Plex for that reason. I just host my media with nginx + indexing enabled. Wireguard for creating the tunnel between the server-client and Kodi as the frontend to view the media (you can add an indexed http server as a media source).
Works great. No transcoding like Plex, but that's less of an issue nowadays when hardware-accelerated decoders are common for H.264 and H.265.
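A rough sketch of that nginx share, under the assumption that it listens only on the WireGuard address so nothing is exposed publicly; the paths, address, and subnet are examples:

```nginx
# Media share with directory indexing, reachable only over the tunnel
server {
    listen 10.8.0.1:8080;    # WireGuard interface address only
    root /srv/media;
    autoindex on;            # the "indexing enabled" part that Kodi browses

    location / {
        allow 10.8.0.0/24;   # tunnel peers
        deny all;
    }
}
```

Kodi then takes http://10.8.0.1:8080/ as an indexed HTTP media source.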
Only if you want it to. Your local Plex server is always available on port 32400 - which can be opened up for others as well. But using Plex’s authentication is more convenient, of course.
That's one way of enshittifying, but what the article talks about is nonetheless very important.
People rely on projects being open source (or rather: _hosted on github_) as some sort of mark of freedom from shitty features and burdensome monetization.
As the examples illustrate, the pattern of capturing users with a good offering and then subsequently squeezing them for money can very easily be done by open source software with free licenses. The reason for that is that source code being available is not, alone, enough to ensure not getting captured by adversarial interests.
What you ALSO need is people willing to put in the work to create a parallel fork that continuously keeps the enshittification at bay: someone who rolls a distribution with a massive amount of ever-decaying patches, increasingly large amounts of workarounds, etc. Or, alternatively, a "final release" style fork that enters maintenance mode and only ever backports security fixes. Either of those is a huge amount of work, and it's not even certain that people will find that fork on their own rather than just assume "things are like that now".
Given that the code's originating corporation can and will eagerly throw whole teams of people at disabling such efforts, the counter-efforts would require the same amount of free labor to be successful - or even larger, given that it's easy to wreck things for the code's originator but it's difficult to fix them for the restoration crew.
This pattern, repeated in many projects over the decades since GPLv2 and MIT were written, shows that merely being free and open source is not a complete anti-enshittification measure for the end user. What is actually necessary is a societal measure: a safety net made up of developers dedicated to the conservation of important software, capable of correcting any stupid decisions made by pointy-haired managers. There are some projects like this (e.g. Apache, and many more), but they are not all-encompassing, and many projects that are important to people are without such a safety net.
Enshittification also usually implies that switching to an alternative is difficult (usually because creating a competing service is near impossible because you'd have to get users on it). That flaw doesn't really apply to self hosting like it does with centralized social media. You can just switch to Jellyfin or Garage or Zulip. Migration might be a pain, but it's doable.
You can't as easily stop using LinkedIn or GitHub or Facebook, etc.
That's probably not how you should interpret it. Self hosting as a whole is still a vastly better option. But if there is a significant enough public movement towards it, you can expect it to be targeted for enshittification too. The incidents related to Plex, MinIO and Mattermost should be taken as warning signals about what this may escalate into in the future. Here are the possible problems I foresee.
1. The situation with Plex, MinIO and Mattermost can be expected to happen more frequently. Past a certain point, the pain of frequent migration becomes untenable. MinIO is a great example: even the crowd on HN hadn't considered an alternative until then. Some of us learned about Garage, RustFS and Ceph S3 for the first time and were debating each of their pros and cons. It's telling how lengthy that discussion was.
2. There is a gradual nudge to move everything to the cloud and then monetize it. The mandatory online account for Win11, the monetization of GitHub self-hosted runners (now suspended after backlash, I think) and the cloudification of MS Office are good examples. You can expect a similar attempt on self-hosted applications. Of course, most of our self-hosted software is currently open source. But if these big companies decide to embrace, extend and extinguish it, I'm not sure the market will be prudent enough to stick with the FOSS options. Half of HN was fighting me a few days back when I suggested that we should strive to push the market towards serviceable modular hardware.
3. FOSS projects developed under companies are always at a higher risk of being hijacked or going rogue. To be clear, I'm not against that model. For example, I'm happy with Zulip's development and monetization model: ethical, generous and not too pushy. But Mattermost shows where that can go wrong. Sure, they're open source. But there are practical difficulties in easily overriding such issues.
4. At one time, we were expecting small form-factor headless computers (plug computers [1]) like the SheevaPlug and FreedomBox to become ubiquitous. That should still be an option, though I'm not sure where it's headed, given the current RAM situation. But even if they make a comeback, it's very likely that OEMs will lock them down like smartphones today and make it difficult for you to exercise your choice of servers, if not outright restrict them. (If anybody wants to argue that normal people will never consider it, remember what smartphones were like before the iPhone: we had BlackBerry, which was used only by a niche crowd.)
[1] https://en.wikipedia.org/wiki/Plug_computer
But the biggest thing I'm worried about is hardware prices too.
So I want to ask: is there any hardware (especially RAM) whose price isn't increasing insanely? Perhaps refurbished or auctioned servers?
What is the best way right now to get hardware that's bang for the buck? Should we even buy hardware now, or wait 3-4 years for factory production to rise and the AI bubble to crash? I definitely think RAM prices will fall off very steeply (it's almost a cycle in the RAM business).
I'm not sure, but buying up small amounts of compute feels like a decent idea if you are doing anything computationally expensive. And of course, if you have something like Plex, then I suppose you have to expand on the storage side and not so much on the RAM side (perhaps some encoding/decoding could be RAM-intensive, but I don't know).
I had fallen for the rumour that ASUS was ramping up chip production or something to save hardware, but it turned out to be fake, so I'm not sure how to respond. But some hardware company should definitely seize this opportunity.
A TinyMiniMicro https://www.servethehome.com/introducing-project-tinyminimic... used PC is more than adequate for most workloads (except for local AI and if you want to have a huge amount of storage). Last time I checked the prices were in the ballpark of $100/$150 for a working machine.
New machines with an Intel N-series CPU are in a similar ballpark.
He is really excited about this project; he brought me newspaper clippings the other day showing that my idea has potential, which is nice. I have given him the task of working his contacts in our small city for hardware, auctions and rentals, and of finding out more about cheap starting specs, since I don't want us to invest a lot in hardware up front but rather reinvest the profits and maintain clear transparency.
Do you think we should postpone this idea for 3-4 years? (I'm leaning that way.) Honestly, I would love to build my own software, and I'm thinking that within those years I can explore more pain points of other providers and build a list of the features I like (if you know of any, please let me know, as I'm still making the list).
I'm not aiming at AI workloads at all, just simple compute (even low-end compute to start).
Power consumption isn't that much of an issue, I think.
Honestly, I'm thinking we should wait out this cycle of rising hardware prices so we can buy at the start of the next cycle. But I'm curious whether NUCs would be good enough for my workload, so I can point my father toward them; I'm not that expert on the hardware side, so I'd really appreciate it if you could tell me more about them and their best use cases.
I saw from your article that Chick-fil-A uses Intel NUCs to run their Kubernetes clusters, so I'm assuming they can be good enough for my use case as well?
If you ask these people, you need to buy expensive hardware and build your own datacenter at home.
I have been hosting all my services on a single Intel NUC from 10 years ago, with an RPi 5 as backup for critical services like DNS.
That's it.
You'll truly be amazed at how much stuff you can actually run on very little hardware if you only have between 2 and 5 users like in a family.
Also, MinIO was always an enterprise option. It was never meant for home use. Just use SeaweedFS, Garage or the like if you really want S3.
Sidenote: You do not need S3 in your house. Just use the filesystem.
What are you putting in the VM, another Linux kernel? Why? Yeah, then you need to account for 4 GB to ~8 GB of extra RAM per VM.
I don't have RAID, though I do back up to my NAS at my parents'.
But honestly, an NVMe drive is basically like a CPU: it's either dead on arrival or it will just run forever.
Seems like a waste to me.
Back up your Docker config and your data; that's what you actually need. The rest is just available online if you ever need it.
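In that spirit, a minimal sketch of such a backup; the paths and remote host are assumptions, and the point is only that compose files plus bind-mounted data are the whole backup set:

```sh
#!/bin/sh
# Back up only what can't be re-downloaded: compose files and app data.
# Images and containers are reproducible from the registry, so skip them.
DEST="backup-$(date +%F).tar.gz"
tar czf "$DEST" "$HOME/compose" /srv/appdata
rsync -a "$DEST" backup-host:/backups/   # e.g. the NAS at the parents' place
```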
>Besides sometimes you need to run software that is not available on linux.
Really, like what?
There are some use cases for a VM over a container: sometimes you want better isolation (my public-facing webserver runs in one), or a different OS for some reason (I run an OS X VM because it's the only way to test a site in Safari).
But yeah I just restrict my webserver in an unprivileged container. Though my site is static and accepts no input whatsoever.
Containers also have some advantages for device passthrough: I have my Intel iGPU added into one for Immich and Frigate; you can't do that with a VM unless you detach the whole GPU from the system.
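For reference, the container-side passthrough amounts to handing the iGPU's device nodes to the container; a hedged compose fragment, where the image tag and service name are just examples:

```yaml
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    devices:
      - /dev/dri:/dev/dri   # Intel iGPU render nodes, enables VAAPI/QSV
```

With a VM, by contrast, the whole GPU would have to be detached from the host via PCI passthrough.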
You don't need ECC.
You absolutely don't need Proxmox; containers are good enough.
It takes a while before building a proper home server makes sense.
RAID 1 or RAID 6 makes sense, but it's absolutely not a tipping point.
For me personally, I built my "data centre" as cheaply as possible, but there are a few requirements the computers you're using would not meet: the storage server must use ZFS with ECC. I started this around a decade ago and only spent ~$300 at the time (reusing an old PSU and case, I think).
Many requirements of a data centre can be relaxed in a home-lab setting (uptime, performance, etc.), but I would never trade data integrity for a tiny bit of savings. Sadly, this is a criterion that many, including some of those building very sophisticated home clusters, don't set as a priority.
It looks to me, and I could be wrong, like many "homelabbers" upgraded from hoarding DVDs to hoarding Docker containers or whatever.
Yes, fully agree with this, and I have a similar setup. I even started with WSL on the default Windows install, hoping to switch to Linux later, but didn't have much need for it. My only gripe is that Tailscale seems to be (rarely) flaky on Windows.
Better to start with something small and cheap, see if it solves your needs, and then upgrade if needed. Don't overcomplicate things based on what others do.
Goals are vastly different too. For some it's about hosting a few services to be free from company slop, for others it's a way to practice devops: clustering, containers, complex networking.
Seeing someone recommend Proxmox or FreeNAS to a beginner who just wants to share family photos from an old laptop is wrong in so many ways...
For your home, no, you don't need it. But if you're setting up a remote backup, e.g. at your parents' / in-laws' / children's / summerhouse / whatever, S3 can help cut down on network traffic by offloading checksum calculations to the remote server. It won't help (much) with making backups, but verifying them will be much faster, since you don't have to transfer everything back home to verify it like with SMB.
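A sketch of that verification flow with the stock AWS CLI against an S3-compatible server; the bucket and key names are made up, and note that the ETag-equals-MD5 shortcut only holds for single-part, non-KMS uploads:

```sh
# Ask the remote server for its stored checksum: no object data transferred
aws s3api head-object --bucket family-backup --key photos.tar.gz \
    --query ETag --output text
# Compare against the local copy
md5sum photos.tar.gz
# Multipart uploads need a part-aware comparison instead
```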
1. A peer-to-peer model of decentralization like BitTorrent, instead of the client-server model. Local web UIs (like Transmission's) may be served locally (either host-only or LAN-only) as frontends for these apps. Consider this the 'last-mile connectivity', if you will.
2. Applications resistant to outages. Obviously, home servers can't be expected to be always online; they may even be running on your regular desktop. But you shouldn't lose the utility of the service just because it goes offline. A great example of this is email: servers can wait up to 2 days for the destination server to show up before declaring a delivery failure, and even rejections are handled with retries minutes later.
3. The applications should be able to deal with dynamic IPs and NATs. We will probably need a cryptographic identity mechanism and a way to translate that into a connection to the correct end node. But most of these technologies exist today.
4. E2E encrypted and redundant storage and distribution servers for data that must absolutely be online all the time. Nostr relays seem like a good example.
The Solid and Nostr projects embody many of these ideas already. It just needs a bit more polish to feel natural and intuitive. One way to do it is to have a local daemon that acts as a gateway, cache and web-ui to external data.
I apologize if it was confusing. I was suggesting the exact opposite. It's not about how to build a mini enterprise cluster. It's about how to change the service infrastructure to suit the small computers we usually find at homes, without any modifications. I'm suggesting a more fundamental change.
> I have reliable electricity and internet at home, though.
It isn't too bad where I'm at, either. But sadly, that isn't the practical situation elsewhere. We need to treat power and connectivity as random and intermittent.
When choosing software that I run in my “homelab” I lean towards community developed projects first. They may not always have as high quality as the ones offered by commercial entities but they’re just safer for the long term and have no artificial limits (Plex). I used to be a happy Plex customer (I have Plex Pass) but several years ago I had enough of their bullshit, switched to Jellyfin and couldn’t be happier!
Over the years I have been streaming all my movies and shows to as many people as I want.
Plex added HDR support for transcoding, live subtitle syncing, and more.
Especially the subtitle syncing is fantastic. It completely solved the problem.
... not a week later, they announced they were getting rid of that feature.
Then they forced everybody into having username accounts and changed things so I couldn't just visit my media server's address directly.
I also leverage Plex for live TV but they still don't support most OTA HD channels for licensing reasons.
Then they got rid of "watch together", which family and friends have heavily used over the years (re-implementing it is the second most requested feature right now in their suggestions).
Now they have the new pricing model where you must have Plex Pass or some other subscription if you want to watch stuff stored on your own media server from outside your local network.
It's getting frustrating: despite people begging for certain things (e.g. watch together), they seem to just ignore what people are asking for and focus on weird stuff like sharing your watch history with random people or trying to turn Plex into a social media platform.
Oh yes it is. I was already self-hosting stuff back in 2000, and it was very hard. Then came Docker, and it is very simple now.
Sure, "very simple" means different things to different people, but if you self-host you already need to know a lot.
This is somewhat similar to amateur electronics: you used to do 100% yourself from scratch; now you have boards and can start in a much simpler way.
But it does almost seem like there is a squeeze on general-purpose computing from all sides, including the homelab. DRAM and SSD prices are just the latest addition. There's also Win 11 requiring a TPM, which is not a bad thing by itself, but which will almost certainly take away the ability to run arbitrary OSes on PCs 5-10 years down the line. Or you'd still be able to boot them, but nothing will run without a fully trusted chain from TPM -> secure boot -> browser.
The same thing has been happening with the proliferation of enterprise SaaS apps
> Even old hardware isn't safe: DDR4 prices are also affected, so that tiny ThinkCentre M720 won't save us.
Most of my home infrastructure is DDR2 or DDR3. It’s plenty fast for quite a lot of things. I really don’t care whether some background operation takes five minutes or an hour. I rather care how little energy and heat that machine produces.
Minimal, ~<$100 hardware (a Dell, Acer, etc. mini or SFF PC).
Install your Linux distro, use your preferred containers, or just run things as services on the OS.
There's no ads, no ai, no layer of shite, it can't be "enshittified" unless you're doing the shittifying.
Also, forking is an option; you can always use AI to keep it current.