More Random Home Lab Things I've Recently Learned
Posted 3 months ago · Active 3 months ago
chollinger.com · Tech story · High profile
Tone: calm, positive
Debate score: 40/100
Key topics
Homelab
Proxmox
Raspberry Pi
The author shares their recent experiences and learnings from their home lab setup, sparking a discussion among commenters about various aspects of homelabbing, including hardware choices and software configurations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 7d after posting
- Peak period: 115 comments (Day 8)
- Avg / period: 18.6
- Comment distribution: 130 data points (based on 130 loaded comments)
Key moments
- 01 Story posted: Oct 6, 2025 at 9:02 AM EDT (3 months ago)
- 02 First comment: Oct 13, 2025 at 8:49 AM EDT (7d after posting)
- 03 Peak activity: 115 comments in Day 8 (hottest window of the conversation)
- 04 Latest activity: Oct 18, 2025 at 11:54 AM EDT (3 months ago)
ID: 45490938 · Type: story · Last synced: 11/20/2025, 9:01:20 PM
The Dell is essentially the main machine that runs everything we actually use - the other hardware is either used as redundancy or for experiments (or both). I got the Pi from a work thing and this has been a fun use case. Not that I necessarily recommend it...
VMs can add a lot of complexity that you don't really need or want to manage.
And (perhaps unadmitted) lots of people bought Pis and then searched for use cases for them.
One advantage over the used market is that you can easily keep getting the exact same one over and over again.
Not really, no. Not if all you need is "a computer". They're more interesting if you want a cheap ARM computer, or a tiny computer, or a server that sips power. Although for that last one they're also kind of dubious. If you can somehow get away with running what you need off a phone, that's way better value.
>No space left on device.
>In other words, you can lock yourself out of PBS. That’s… a design.
Run PBS in an LXC with the base on a ZFS dataset with dedup & compression turned off. If it bombs, you can increase the disk size in Proxmox & reboot it. Unlike VMs, you don't need to do anything inside the container to resize the FS, so this generally works as a fix.
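For reference, a minimal sketch of that fix from the Proxmox host. The CTID (201) and the amount of extra space are hypothetical; `pct resize` grows an LXC disk and its filesystem from the host, with nothing to do inside the container:

```shell
# Grow the PBS container's root disk by 10 GiB (CTID 201 is an assumed example).
cmd="pct resize 201 rootfs +10G"
if command -v pct >/dev/null 2>&1; then
  $cmd && pct reboot 201   # reboot mirrors the fix described above
else
  echo "would run: $cmd"   # not on a Proxmox host; show the command only
fi
```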
>PiHole
AGH (AdGuard Home) is worth considering because it has built-in DoH.
>Raspberry Pi 5, ARM64 Proxmox
Interesting. I'm leaning more towards k8s for integrating pis meaningfully
DDR4 anything is becoming very expensive right now because manufacturers have been switching over to DDR5.
On the plus side I have a lot of non-ECC DDR4 sticks that I'm dumping into the expensive market rn
Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.
>Imo, Raspberry Pis haven't been cost competitive general compute devices for a while now unless you want GPIO pins.
I have a bunch of rasp 4Bs that I'll use for a k8s HA control plane, but yeah, outside of that they're not ideal. Especially with the fragility of SD cards instead of NVMe (unless you buy the silly HAT thing).
And Raspberry Pi 4s can actually boot from NVME via a USB enclosure.
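A quick way to check whether a Pi 4 will try the USB enclosure at boot. The `BOOT_ORDER` digits are tried right-to-left, so `0xf41` means SD card (1), then USB (4), then loop (f); this assumes a reasonably recent EEPROM:

```shell
# Inspect the Pi 4 bootloader's boot order (the NVMe enclosure appears as USB).
want="BOOT_ORDER=0xf41"
if command -v vcgencmd >/dev/null 2>&1; then
  vcgencmd bootloader_config | grep BOOT_ORDER
  # To change it: sudo rpi-eeprom-config --edit   (set BOOT_ORDER=0xf41)
else
  echo "on a Pi, expect something like: $want"
fi
```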
And if you want GPIO pins, I'd imagine that for a lot of those applications you'd be better served with an ESP32, and that a Raspberry Pi is essentially overkill for many of those use cases.
The Venn diagram for where the pi makes sense seems smaller than ever these days.
And then you get all the advantages of the x86 ecosystem, more modularity, etc.
Heck, I wouldn’t be surprised if the base model M series Mac mini is competitive so long as you can get Asahi Linux to do what you need.
Maybe five years from now we will see ARM or RISC-V mini PCs further narrow the Venn diagram for raspberry pi systems.
I often use an Arduino plugged into a spare USB port. There's a whole lot of GPIO-pin-related projects that suit 5V better than 3.3V, and Arduino IO pins are practically unbreakable compared to ESP32. I've got Arduinos that still work fine after accidentally connecting 12V directly to IO pins. I've had ESP32s (and RasPis) give up the ghost just from looking at the IO pins while thinking about 12V.
Technitium has all the bells and whistles along with being cross platform.
https://technitium.com/dns/
And don’t get me started on if you intend to run any storage solutions like Rook-Ceph on cluster.
but proxmox and kubernetes are overkill, imo, for most homelab setups. setting them up is a good learning experience but not necessarily an appropriate architecture for maintaining a few mini PCs in a closet long term.
you can ignore the gatekeeping.
proxmox is great, though. It's worth running it even if you treat it as nothing more than a BMC.
There’s a lot of overlap between “I run a server to store my photos” and “I run a bunch of servers for fun”, which has resulted in annoying gatekeeping (or reverse gatekeeping) where people tell each other they are “doing it wrong”, but on Reddit at least it’s somewhat being self-organized into r/selfhosted and r/homelab, respectively.
It's funny. I did this (before it really became a more mainstream hobby, this was early 00s), but now that I work in ops I barely even want to touch a computer after work.
It’s the SUV that has off-road tires but never leaves the pavement, the beginner guitarist with an arena-ready amp, the occasional cook with a $5k knife. No judgment, everyone should do what they want, but the discussions get very serious even though the stakes are low.
People will build a huge multinode cluster in their basement with Raspberry PIs, and benchmark it to point out performance issues that they absolutely can't live with and so they are off to buy new SSDs or whatever. It's a hobby, but it's shaped like someone's actual job.
Which are all pretty useful considering my day job is a software engineer.
Many of these things have been directly applicable at work, e.g. when something weird happens in AWS, or we have a project using obscure Docker features.
I’ve had one or two machines running serving stuff at home for a couple decades [edit: oh god, closer to 2.5 decades…], including serving public web sites for a while, and at no point would I have thought the term “home lab” was a good label for what I was doing.
I’d classify myself in the former camp, with a small server that runs TrueNAS and serves media, runs VMs and a few apps I use for work - and a Unifi network for my home and security cameras and VPN.
I’m positive there are people with very similar setups who call it a homelab. I don’t really experiment with mine, it’s set up and works in the background for months on end.
Surely people have had 'homelabs' for longer than VMs and containers have been mainstream?
http://www.trygve.com/servers_quarters.html
http://www.trygve.com/house.html (scroll down)
You know when you know.
I personally enjoy the big machines (I've also always enjoyed meaninglessly large benchmark numbers on gaming hardware) and enterprise features, redundancy etc. (in other words, over-engineering).
I know others really enjoy playing with K8s, which is its own rabbit hole.
My main goal - apart from the truly useful core services - is to learn something new. Sometimes it's applicable to work (I am indeed an SWE larping as a sysadmin, as another commenter called out :-) ), sometimes it's not.
I hadn't heard about mealie yet, but sounds like a great one to install.
I had to switch to VM because of that, passing through the GPU.
In my book, that’s a homelab, it's just a small one (an efficient one?...)
I was able to put everything on a fanless zotac box with a 2.5" sata SSD, and it has served well for many years. (and QUITE a bit less electricity, even online 24/7)
The Proxmox Backup Server is the killer feature for me. Incremental and encrypted backups with seamless restoration for LXC and VMs has been amazing.
I also wanted to back up my big honking zpool of media, but it isn't economical to store 10+ TB offsite when the data isn't really that critical.
Yeah I don't backup any of my media zpool. It can all be replaced quite easily, not worth paying for the backup storage.
I'm not even using the features beyond the recipes yet, but i'm already very happy that i can migrate my recipes from google docs to over there
You can also distill recipes down. I find a lot of good recipes online that have a lot of hand-holding within the steps which I can just eliminate.
I don’t get why people use VMs for stuff when there’s docker.
Thanks!
especially useful if you want multiple of those, and also helpful if you don't want one of them anymore.
Outside of that:
Docker & k8s are great for sharing resources, VMs allow you to explicitly not share resources.
VMs can be simpler to backup, restore, migrate.
Some software only runs in VMs.
Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
For my setup, I have a handful of VMs and dozens of containers. I have a proxmox cluster with the VMs, and some of the VMs are Talos nodes, which is my K8s cluster, which has my containers. Separately I have a zimaboard with the pfsense & reverse proxy for my cluster, and another machine with pfsense as my home router.
My primary networking is done on dedicated boxes for isolation (not performance).
My VMs run: plex, home assistant, my backup orchestrator, and a few windows test hosts. This is because:
- The windows test hosts don't containerise well; I'd rather containerise them.
- plex has a dedicated network port and storage device, which is simpler to set up this way.
- Home Assistant uses a specific USB port & device, which is simpler to set up this way.
- I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s. These are the services where small transient or temporary issues would impact the whole household.
Also note, I don't use the proxmox container support (I use talos) for two reasons. 1 - I prefer k8s to manage services. 2 - the isolation boundary is better.
Better how? What isolation are we talking about, home-lab? Multi-tenant environments for every family member?
> Some software only runs in VMs.
Like OS kernels and software not compiled for host OS?
> Passing through displays, USB devices, PCI devices, network interfaces etc. often works better with a VM than with Docker.
Insane take because we're talking about binding something from /dev/ to a namespace, which is much easier and faster than any VM pass-through even if your CPU has features for that pass-through.
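The "bind a /dev node into a namespace" approach the comment is describing, as a sketch; `/dev/ttyUSB0` is a hypothetical serial dongle, not a device from the original post:

```shell
# Hand a single device node to a container instead of a whole VM passthrough.
dev=/dev/ttyUSB0
if [ -e "$dev" ] && command -v docker >/dev/null 2>&1; then
  docker run --rm --device "$dev:$dev" alpine ls -l "$dev"
else
  echo "would run: docker run --rm --device $dev:$dev alpine ls -l $dev"
fi
```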
> plex has a dedicated network port and storage device, which is simpler to set up this way.
Same, but my plex is just a systemd unit and my *arrs are in nspawn container also on its own port (only because I want to be able to access them without authentication on the overlay network).
> I don't want plex, home assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s.
Hosting Plex in k8s is objectively wrong, so you're right there. I don't see how adding Proxmox into the picture helps versus those services being systemd units. If they run on the same node, you're not getting any fault tolerance, just adding another thing that can go wrong (Proxmox).
Defining "works better" as quicker, simpler to set up, more intuitive, or similar... I'd still argue passing through a port rather than a device "works better".
E.g., I essentially gave up trying to pass a Google Coral through to a container. When connected, it shows up as one vendor+device ID, then once you push the firmware+model to it it reconnects with a different vendor+device ID.
Saying "anything plugged in (or not plugged in) to this USB port is this VM's problem" is quite easy to set up, handles disconnecting and reconnecting as you would expect, is resilient against whatever weird stuff the device does, upgrading or replacing the device, etc.
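On Proxmox, that port-based binding is one command. The VMID (100) and port path (bus 1, port 1.2) are hypothetical; because the mapping is by port rather than by vendor:device ID, the Coral changing its ID after the firmware push doesn't matter:

```shell
# Bind whatever is plugged into USB bus 1, port 1.2 to VM 100.
cmd="qm set 100 -usb0 host=1-1.2"   # find the port path first with: lsusb -t
if command -v qm >/dev/null 2>&1; then
  $cmd
else
  echo "would run: $cmd"
fi
```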
Exactly. The "insane take" (if it's ever reasonable to say that) is to take on the burden of all the management logic oneself when it's trivially avoidable. We will hopefully see better container orchestration UX compete with the long-established VM hypervisors in this respect.
I disable the high availability stuff I don’t use that otherwise just grinds away at disks because of all the syncing it does.
It has quirks to work through, but at this point for me dealing with it is fairly simple, repeatable and most importantly, low effort/mental overhead enough for my few machines without having to go full orchestration, or worse, NixOS.
I also use K8s at work, so this is a nice contrast to use something else for my home lab experiments. And tbh, I often find that if I want something done (or something breaks), muscle-memory-Linux-things come back to me a lot quicker than some obscure K8s incantation, but I suppose that's just my personal bias.
Several of my VMs (which are very different than containers, obviously - even though I believe VMs on K8s _can_ be a thing...) run (multiple) docker containers.
Whatever uses that storage usually runs in a Docker inside an LXC container.
If I need something more isolated (think public facing cloudflare) - that's a separate docker in another network routed through another OPNSense VM.
Desktop - VM where I passed down a whole GPU and a USB hub.
Best part - it all runs on a fairly low power HW (<20W idle NAS plus whatever the harddrives take - generally ~5W / HDD).
You need to be careful with this one.
The USB spec goes up to 15W (3A) for its 5V PD profiles, and the standard way to get 25W would be to use the 9V profile. I assume the Pi 5 lacks the necessary hardware to convert a 9V input to 5V, and, instead, the Pi 5 and its official power supply support a custom, out-of-spec 25W (5A) mode.
Using an Apple charger gets you the standard 15W mode, and, on 15W, the Pi can only offer 600mA for accessories, which may or may not be enough to power your NVMe. Using the 25W supply, it can offer 1.6A instead, which gives you plenty more headroom.
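The arithmetic behind the two modes, using the figures from the comment above (the accessory budgets are the firmware-reported limits mentioned there):

```shell
# USB power budget for a Pi 5 under the two supply modes.
std_ma=3000        # 15 W mode: 5 V * 3 A (in-spec USB PD 5 V profile)
pi_ma=5000         # 25 W mode: 5 V * 5 A (the Pi's out-of-spec supply)
acc_15w_ma=600     # accessory current allowed at 15 W
acc_25w_ma=1600    # accessory current allowed at 25 W
extra_ma=$((acc_25w_ma - acc_15w_ma))
echo "extra accessory headroom on the 25W supply: ${extra_ma} mA"
```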
Those setups always read as pure "home-lab" because they're too small or MacGyvered together for anything but the smallest businesses... where they would be overkill anyway.
Sometimes it's people running 2-3 node k8s cluster to run a few static workloads. You're not going to learn much about k8s from that, but you will waste CPU cycles on running the infra.
Nah, I get enough of that from being by the pool, managing my home lab.
I find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).
The biggest bottleneck is I/O performance, since I rely on SAS drives (running full VMs has a lot of disk overhead) rather than SSDs, but I cannot justify the expense of upgrading to SSDs, not to mention NVMe.
> Those setups always read as pure "home-lab" because they're too small or MacGyvered together for anything but the smallest businesses... where they would be overkill anyway.
That is a core part of the hobby. You do some things very enterprise-y and over-engineered (such as redundant PSUs and UPSs), while simultaneously using old hard drives that rely on your SMART monitor and pure chance to work (to pick 2 random examples).
I also re-use old hardware that piles up around the house constantly, such as the Pi. I commented elsewhere that I just slapped an old gaming PC into a 4U case since I want to play/tinker with/learn from GPU passthrough. I would not do this for a business, but I'm happy to spend $200 for a case and rails and stomach an additional ~60W idle power draw to do such. I don't even know what exactly I'll be running on it yet. But I _do_ know that I know embarrassingly little about GPUs, X11, VNC, ... actually work and that I have an unused GTX 1080.
Some of this is simply a build-vs-buy thing (where I get actual usage out of it and have something subjectively better than an off the shelf product), others is pure tinkering. Hacking, if you will. I know a website that usually likes stuff like that.
> You're not going to learn much about k8s from that
It's possible you and I learn things very differently then (and I mean this a lot less snarky than it sounds). I built Raft from scratch in Scala 3 and that told me a lot about Raft and Scala 3, despite being utterly pointless as a product (it's on my website if you care to read it). I have the same experience with everything home lab / Linux / networking - I always learn something new. And I work for a networking company...
Building k8s from scratch, you're going to learn how to build k8s from scratch. Not how to operate and/or use k8s. Maybe you will learn some configuration management tool along the way unless your plan is to just copy-paste commands from some website into terminal.
> I find horizontal scaling with many smaller cores and lots of memory more impactful for virtualization workloads than heavy single core performance (which, fwiw, is pretty decent on these Xeon Golds).
Yeah, if you run a VM for every thing that should be a systemd service, it scales well that way.
Also, an engineer with experience? I'm calling out over-engineering for the sake of over-engineering.
"should be" according to your goals. "should not be" according to mine:
1. run untrusted code in a reasonably secure way. i don't care how many github stars it has, i'm not rawdogging it. nor is my threat model mossad, so it doesn't have to be perfect. but systemd's default security posture is weak, hardening it is highly manual (vs. "run a VM"), and must be done per service to do properly (allowlist syscalls, caps, directory accesses, etc.).
2. minimize cost. it's orders of magnitude less costly for most people with the skills to run a homelab to:
if you want to optimize for "learn how to configure systemd", "learn how to hyperoptimize cpu usage", or whatever it may be, then great. if other people aren't, they're not necessarily wrong; they may be choosing different tradeoffs. understanding this is an essential step in maturing as an engineer and human being. i truly mean this as encouragement, not rebuke. otherwise i wouldn't have paid the relatively high cost in time to write it, after all :)

I think this is one of the main reasons why Raspberry Pi has such a strong representation in homelabs, including my own.
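For context on the "highly manual" hardening mentioned in point 1, it looks roughly like the drop-in below. The directives are real systemd options, but the selection is a hypothetical sketch and each allowlist has to be tuned per service; the file is written to the current directory here just so it can be inspected:

```shell
# On a real system this would live at
# /etc/systemd/system/<service>.service.d/override.conf
cat > ./override.conf <<'EOF'
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
CapabilityBoundingSet=
SystemCallFilter=@system-service
EOF
echo "directives written: $(grep -c = ./override.conf)"
```

`systemd-analyze security <unit>` can then score how much of this a given service still leaves open, which is a decent way to see how weak the defaults are.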
I'd bet you're in the minority. People use Pi because it lets them assemble a cluster for under 200 bucks.
Start with k3s, configure ingress/services/deployment, install ingress controller for cluster wide routing, install service meshes (istio, cilium), write a controller, mess around with gateway API. The possibilities are endless. Stop bitching and maybe have a real argument instead of shitting on homelabbers
That is my argument, you're not learning much if your cluster 2-3 machines and a few RPis.
> Stop bitching and maybe have a real argument instead of shitting on homelabbers
Why so rude? I have a homelab myself, it's just for running things and not LARPing as sysadmin.
300 watts is a lot. ...I didn't use to pay attention, but power costs keep rising.
I recently opted for a i7 F (65w base) over an i7 K (125w base) even with the 15% performance hit.
YMMV, I'm not saying it's enough for every use case. The CPU will transcode my 1080p media using QSV at ~500 FPS. I don't have enough users to saturate that using Jellyfin.
Did you do the original encoding yourself (to ensure everything is optimized for your rig)?
Edit: did some searching, probably you mean the N95 Intel CPU as the basis for a mini pc, not actually a system called N95. That thing is 30% faster than my server's cpu which has <10% occupancy, and about half of it comes from the 1 Windows VM that I need for a legacy application I should really be getting rid of. It's also very recent, you can use way older CPUs from hardware people are throwing away that use similarly little power instead of buying a new product
I agree about finding old hardware to use. Sometimes it is the way to go.
I ran old Dell laptops from the recycle bin at work as servers for some time. I ran old gaming machines at one point as well.
Those laptops used more power than this thing, put out more heat, and were slower.
I was just saying that people don't always need dual 150W Xeons monsters for homelab.
Both the "Hacker" and "Cheap Bastard" ethos are better served by ... just about any cheap x86 stuff--especially used.
What am I missing?
https://github.com/gitbls/sdm
can help with this issue.
...turns out you don't need local storage at all: if you already run a NAS you can bootp the RasPi over the network!
This also makes backups super easy.
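A sketch of what that network boot can look like, assuming a NAS that can run dnsmasq and NFS; the address and paths below are illustrative, not from the comment:

```shell
# --- on the NAS (all values hypothetical) ---
# /etc/exports: one root filesystem per Pi
#   /srv/nfs/raspi 192.168.1.0/24(rw,no_root_squash,no_subtree_check)
#
# dnsmasq: answer the Pi's netboot request and serve its firmware over TFTP
#   dhcp-range=192.168.1.0,proxy
#   enable-tftp
#   tftp-root=/srv/tftp
#
# cmdline.txt served to the Pi: mount the NFS export as the root filesystem
#   root=/dev/nfs nfsroot=192.168.1.10:/srv/nfs/raspi,vers=3 rw ip=dhcp rootwait
```

Since the root filesystems live on the NAS, backing up a Pi reduces to snapshotting its export, which is presumably what makes the backups "super easy" here.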