Knocker, a Knock Based Access Control System for Your Homelab
Posted 2 months ago · Active 2 months ago · Source: github.com
Key topics: Security, Homelab, Networking
The 'Knocker' project, a knock-based access control system for homelabs, sparks debate on the effectiveness and security of port knocking as a security measure.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 54m after posting. Peak period: 147 comments in Day 1. Average per period: 38.5. Based on 154 loaded comments.
Key moments
- Story posted: Oct 22, 2025 at 4:37 AM EDT (2 months ago)
- First comment: Oct 22, 2025 at 5:31 AM EDT (54m after posting)
- Peak activity: 147 comments in Day 1 (the hottest window of the conversation)
- Latest activity: Oct 31, 2025 at 4:14 AM EDT (2 months ago)
ID: 45666327 · Type: story · Last synced: 11/20/2025, 8:56:45 PM
The idea itself sounds fun though
Regardless, what benefits would this have over WireGuard?
Great for prototyping, really bad for exposing anything of any value to the internet.
(Not anti-AI, just pro-sensible)
Also the "If you're Anti-AI please don't use this." is pretty funny :D I guess I must be "Anti-AI" when I think this kind of code is wild to rely on.
Is it because the AI can generate code that looks like it was made by a competent programmer, and is therefore deceiving you?
But whatever the reason, I think that if we use it as a way to shame the people who do tell us, then we can be assured that willingness to disclose it going forward will be pretty abysmal.
As humans we segment functionality and by nature avoid extra work as much as possible, meaning that reading someone else's code, even someone less competent's, makes sense: you can see the intention.
With LLM code everything is mixed together with no rhyme or reason, and unless separately specified, old, useless functionality won't be cleaned out just because it is no longer used.
Also, people who use LLMs to vibe-code bigger things usually aren't capable of reviewing what is going on in the first place. If you are dangerous enough yourself to write a bigger piece of software, you probably do know something about the problem on a deeper level and can test it.
I don't really see shaming. If you vibe-code something and you are proud of it, good for you, but LLMs currently are not capable of creating good software.
I must be Doing It Wrong(TM), because my experience has been pretty negative overall. Is there like a FAQ or a HOWTO or hell even a MAKE.MONEY.FAST floating around that might clue me in?
1. Make prototype
2. Magic happens here
3. Make lots of $$$
"Great for prototyping" only makes it easier to get to step 2, but done correctly, it certainly does that.
As proven by the nice app I have running on my laptop, but probably won't make any money from.
And it was very useful because
- I realized it wouldn't sell well anyway,
- but it did scratch my itch.
I'm pro security. The gall to put something out there, pretend its being vibe-coded is not a big deal, and possibly expose hundreds of people to security issues. Jesus.
Edit: should have mentioned I am a bootcamp grad, not just throwing random shade.
I gate access to my homelab using Wireguard.
Wireguard is widely deployed across the world, and has been worked on for years.
No random new repo that was vibe coded can measure up in the slightest to that.
<https://news.ycombinator.com/item?id=39898061>
IPv6 of course.
> or is it just not important
Port knocking isn't a security feature anyway.
The likelihood of someone on the same network as you noticing your service and trying to hack it before the TTL expires again is IMO quite low.
This is without taking into account that the services themselves have their own security and login processes, getting a port open doesn't mean the service is hacked.
Tailscale is just an added unnecessary external dependency layer (& security attack surface) on top of vanilla Wireguard. And in 2025 it's easier to run vanilla Wireguard than it's ever been.
The selling point of Tailscale is that they simplify Wireguard UX by adding a proprietary control server - this adds complexity to the stack (extra component) but simplifies user experience (Tailscale run the control server for you).
Headscale seems like it's complicating the stack (adding an extra component) as well as complicating the user experience (you have to maintain two components yourself now instead of just the one Wireguard instance).
Granted I presume the Headscale control server might simplify management of your Wireguard instance but... you're still maintaining the control server yourself.
I was speaking more to doing it all in-house, versus outsourcing things to Tailscale, a third party not fully under one's control, even if they act on behalf of the user. I think I largely agree with what you said.
Buying hardware is an investment (& not something everyone can do) but I've really never understood the point of the control server from the perspective of an open-source self-hoster (for a business like Tailscale it makes sense as it introduces an element of control, user dependency & likely analytics of some value).
There's still a lot that can be done to improve Wireguard's UX but I think the Asus example proves it can be done well. Headscale seems to be doing the worst of both worlds (promoting an architecture & user-flow of a proprietary closed-source competitor, while still requiring CLI setup & instance maintenance). For example, it seems to me like it would be better for them to wrap Wireguard directly & integrate with the actual Wireguard mobile app instead of having people install proprietary Tailscale app on their phones to use your own open-source self-hosted control server.
I would agree that stock WireGuard is going to have the fewest dependencies, and I don’t mean to nitpick or be disagreeable because I do agree with you, that fewer third party dependencies is usually better than more.
The Asus-Merlin firmware is also nice, though the stock Asus firmwares have gotten pretty good and work for most folks for many use cases. I think VLAN config and tagging support might be one of the only features I wanted that stock Asus firmware didn’t handle when I used them last.
However, while you can never really trust anything you run with internet access, I feel there's a fundamental line between an explicitly cloud-dependent service like Tailscale (e.g. a Tailscale control server outage incident would impact your home server access) compared to a fully self-hosted service that may or may not phone home if you don't put preventative measures in front of it, but will continue to function fine if you do put said measures in place.
The Asus mobile app is another potential concern but the Merlin browser UI is fine for most purposes.
This is why I mentioned Headscale in the first place. It’s not for everyone or every use case, but it’s good that it exists, on the whole.
Not only do you need to manually manage the keys for each device and make sure they're present in every other device's configuration, but plain Wireguard also cannot punch through NATs and firewalls without any open ports like Tailscale can, as far as I know.
Combine that with the fact that networking issues can be some of the hardest to diagnose and fix, and something like Tailscale becomes a no-brainer. If you prefer using plain Wireguard instead, that's fine, and I still use it too for some more specific use cases, but trying to argue that Tailscale is entirely unnecessary is just wrong.
I could be wrong, but I think Tailscale just does what you can do with WireGuard's `PersistentKeepalive`. It lets a WireGuard client periodically ping another to keep the NAT mapping open.
Tailscale handles this, and can establish a direct connection between two machines without either of them needing an open port listening for new connections.
There's an article on their website that explains how they do it: https://tailscale.com/blog/how-nat-traversal-works
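For reference, keeping a NAT mapping alive with plain WireGuard is a one-line peer setting. A minimal client-side sketch, with placeholder keys and addresses:

```ini
# Client-side [Peer] sketch; keys, endpoint, and addresses are placeholders
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# Send a keepalive every 25 seconds so the NAT mapping stays open
PersistentKeepalive = 25
```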
Tailscale is great if it meets your requirements, & it probably does for most - I wasn't arguing that at all. Only that it won't be an option for everyone: in particular a non-tiny subset of home server hosters.
I know there's plenty of HA integrations that require some cloud service but the core application is very offline-friendly...
Everybody's got their own set of beliefs and understandings, and they get to decide how they want their homelab to work.
For me, tailscale fits in just right. Others can come to their own conclusion based on how they feel about networking and points of failure and depency and all that.
To an untrained eye, the wording here could be construed to imply that this is more secure than a VPN. Might be worth a reword to clarify why one might prefer it over a VPN.
I created this because I always have a VPN on my devices, and I can't have tailscale running with that, in addition to tailscale killing my battery life on android.
1- In the 90s, when security was whatever
2- In modern days, as a way to keep your logs squeaky clean (although you get 99% of the way there with custom ports)
3- As a cute warm up exercise that you code yourself with what's available in your system. (iptables? a couple of python scripts communicating with each other?)
It's not a security mechanism, and downloading external dependencies or code (especially if vibecoded) is a net loss (by a huge margin).
It's also a waste of time to overengineer for the reasons noted above, I've seen supposedly encrypted port knocking implementations. It feels as if someone had a security checklist and then a checklist for that checklist.
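The warm-up-exercise idea mentioned above is easy to sketch. Below is a minimal, hypothetical Python knock tracker; the class name, port sequence, and window are invented for illustration, and a real daemon would feed it connection attempts (e.g. from iptables logs) and open the firewall on success:

```python
import time

# Hypothetical values for illustration; a real deployment picks its own.
KNOCK_SEQUENCE = (7000, 8000, 9000)   # secret sequence of ports
KNOCK_WINDOW = 10.0                   # seconds allowed to finish the sequence

class KnockTracker:
    """Tracks each source IP's progress through a secret port sequence."""

    def __init__(self, sequence=KNOCK_SEQUENCE, window=KNOCK_WINDOW):
        self.sequence = sequence
        self.window = window
        self.progress = {}  # src_ip -> (next_index, time_of_first_knock)

    def hit(self, src_ip, port, now=None):
        """Record a connection attempt; return True once the sequence completes."""
        now = time.monotonic() if now is None else now
        idx, started = self.progress.get(src_ip, (0, now))
        if idx > 0 and now - started > self.window:
            idx, started = 0, now            # too slow: start over
        if port == self.sequence[idx]:
            idx += 1
            if idx == len(self.sequence):
                self.progress.pop(src_ip, None)
                return True                  # caller would now open the firewall
            self.progress[src_ip] = (idx, started)
        else:
            self.progress.pop(src_ip, None)  # wrong port resets everything
        return False
```

A wrong port or an expired window resets that source's progress, which is the whole of the "security" such a scheme provides.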
But it works very well as an additional layer of security. Sec nerds often scoff at "security through obscurity", but it is a very valid strategy. Running sshd on a random high port is not inherently more secure, but it avoids the vast majority of dumb scanners that spam port 22, which is why all my systems do that. Camouflage is underrated, yet wildly effective. You can see how well it works in nature.
In any case, this is not a port knocking solution anyway, as I mentioned in another comment.
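The random-high-port trick described above is a two-line change (the port number here is arbitrary):

```
# /etc/ssh/sshd_config (excerpt) -- the port number is arbitrary
Port 58222
# Moving the port is camouflage, not a substitute for key-only auth
PasswordAuthentication no
```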
This is vibe-coded security through obscurity, i.e. quite useless. Use Tailscale or a self-hosted VPN.
Apologies in advance if I'm missing something obvious here, but are you saying an IP allow list is not a standard security practice? If so I'd appreciate further explanation.
IPv6 is slowly growing in popularity. Google stats are close to 50%. If your ISP has IPv6, you might be accessing Hacker News with IPv6 since they added support recently.
This is what it feels like when people use AI for everything.
AI is not good at telling you the best solution, but it will tell you that you can build it yourself, since that approach is what AI is good at.
Using self hosted vpn, cloudflare zero trust or Tailscale is the easiest way to go.
I self host extensively and have multiple self hosted VPN(OpenVPN and WireGuard) along with Tailscale and cloudflare protecting my infra.
And it will not work on mobile if you already use another VPN.
...now I'll have to make this myself.
TIL that that has a name.[1] All I ever knew it as was "the knock from Roger Rabbit".
[1]https://en.wikipedia.org/wiki/Shave_and_a_Haircut
Your network authentication should not be a fun game or series of Rube Goldberg contraptions.
As a side note I just happen to be reading a book at the moment that contains a fairly detailed walkthrough of the procedure required to access the Russian SVRs headquarters in New York in 1995.
Think of this as an analogue version and in no way a perfect analogy but it does include a step that has more or less the same security properties as this… anyways here’s a relevant quote:
“After an SVR officer passed through various checkpoints in the mission’s lower floors, he would take an elevator or stairs to an eighth-floor lobby that had two steel doors. Neither had any identifying signs.
One was used by the SVR, the other by the GRU. The SVR’s door had a brass plate and knob, but there was no keyhole. To open the door, the head of the screw in the lower right corner of the brass plate had to be touched with a metal object, such as a wedding ring or a coin.
The metal would connect the screw to the brass plate, completing an electrical circuit that would snap open the door's bolt lock and sometimes shock the person holding the coin. The door opened into a small cloakroom. No jackets or suit coats were allowed inside the rezidentura because they could be used to conceal documents and hide miniature cameras.
SVR officers left their coats, cell phones, portable computers, and all other electronic devices in lockers. A camera videotaped everyone who entered the cloakroom. It was added after several officers discovered someone had stolen money from wallets left in jackets. Another solid steel door with a numeric lock that required a four-digit code to open led from the cloakroom into the rezidentura.
A male secretary sat near the door and kept track of who entered, exited, and at what times. A hallway to the left led to the main corridor, which was ninety feet long and had offices along either side. ”
Excerpt from Comrade J by Pete Earley
As another funny side note… I once discovered years ago that the North Koreans had a facility like this that they used to run a bunch of financing intelligence operations using drugs in Singapore where I was at the time and thought it would be funny to go and visit. It was in a business complex rather than a dedicated diplomatic facility from memory. But as I recall it was a similar scenario of unmarked door with no keyhole.
“Port knocking” et al were most definitively not.
OpenVPN is basically 1000 configuration options and magic incantations wearing a trenchcoat, and if you get any of them wrong the whole thing crumbles (or worse, appears to work but is not secure).
Use-cases:
1. helps auto-ban hosts doing port-scans or using online vulnerability scanners
2. helps reduce further ingress for a few minutes as the hostile sees the site is "down". Generally, try to waste as much of a problem user's time as possible, as it changes the economics of breaking networked systems.
3. the firewall rule-trigger delay means hostiles have a harder time guessing which action triggered an IP ban. If every login attempt costs 3 days, folks would have to be pretty committed to breaking into a simple website.
4. keeps failed login log noise to a minimum, so spotting actual problems is easier
5. Easier to forensically analyze the remote packet stream when doing a packet dump tap, as only the key user traffic is present
6. buys time to patch vulnerable code when zero day exploits hits other hosts exposed services
7. most administrative ssh password-less key traffic should be tunneled over SSL web services, and thus attackers have a greater challenge figuring out if dynamic service-switching is even active
People that say it isn't a "security policy" are somewhat correct, but are also naive when it comes to the reality of dealing with nuisance web traffic.
Fail2ban is slightly different in that it is for setting up tripwires for failed email logins, and known web-vulnerability scanners etc. Then whispering that IP ban period to the firewall (must override the default config.)
Finally, if the IP address for some application login session changes more than 5 times an hour, one should also whisper a ban to the firewalls. These IP ban rules are often automatically shared between groups to reduce forum spam, VoIP attacks, and problem users. Popular cloud-based VPN/proxies/Tor-exit-nodes run out of unique IPs faster than most assume.
Have a nice day, =3
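The last heuristic above (ban a session whose IP changes more than 5 times an hour) can be sketched in a few lines of Python; the class name and thresholds here are invented for illustration, and a real setup would feed the verdict to the firewall:

```python
from collections import defaultdict, deque

class SessionIPWatch:
    """Flags login sessions whose source IP changes too often."""

    def __init__(self, max_changes=5, window=3600.0):
        self.max_changes = max_changes   # allowed IP changes per window
        self.window = window             # sliding window in seconds
        self.history = defaultdict(deque)  # session_id -> deque of (time, ip)

    def observe(self, session_id, ip, now):
        """Record a request; return True if the session should be banned."""
        hist = self.history[session_id]
        if not hist or hist[-1][1] != ip:      # only count actual IP changes
            hist.append((now, ip))
        while hist and now - hist[0][0] > self.window:
            hist.popleft()                     # drop events outside the window
        changes = len(hist) - 1                # the first IP seen is not a change
        return changes > self.max_changes
```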
Don’t waste resources putting lipstick on the pig.
"Don’t waste resources putting lipstick on the pig."
I would never kink-shame someone that ignored the recent CVE-2025-48416, that proved exposing unprotected services is naive =3
But I see you’ve backpedaled to this being about log noise, not security.
One may believe whatever they like, as both our intentions are clear friend.
Have a wonderful day =3
The roving spam it blocks are not threats, and stolen credentials aren't going to be detected by it.
99.98% of hostile traffic simply reuse already published testing tools, or services like Shodan to target hosts.
One shouldn't waste resources guessing the motives behind problem traffic. =3
Your services should simply be unreachable over anything but wireguard (or another secure VPN option).
At some point, the idealism of white-listed peers and VPN will fail due to maintenance service costs. Two things may be true at the same time, friend. =3
https://www.poetry.com/poem/101535/the-blind-men-and-the-ele...
- You should be using WireGuard.
- “Port knocking” is pointless theater.
IPSec is simply a luxury unavailable on some LANs =3
However, even with all those choices, “port knocking” still wouldn’t be a solution for anything.
[edit]
Are you just searching for random WireGuard CVEs now?
CVE-2024-26950 was a *local-only* DoS and potential UaF requiring privileged access to wireguard netlink sockets.
<edit>
Firewall administrative network port traffic priority is important for systems under abnormal stress.
Open source tools are good at actually doing the job, as long as it's a programmer type of job. We've known how to do unbreakable encryption for decades now. Even PGP still hasn't been broken. Wireguard is one of those solutions in the "so simple it has obviously no bugs" category - that's actually what differentiates it from protocols like OpenVPN.
Think about the recent satellite listening talk at DEFCON and how that massive data leak could have been prevented by even just running your traffic through AES with a fixed key of the CEO's cat's name on a Raspberry Pi, but that's a non-corporate solution and so not acceptable to a corporation, who will only ever consider enabling encryption if it comes with a six figure per year license fee which is what the satellite box makers charged for it. Corporations, as a rule, are only barely competent enough to make money and no more.
I don't like or trust OpenVPN. I'd sooner expose OpenSSH itself, which has really a pretty stunning security track record.
The biggest weakness in VPN is client-side cross-network leaks.
IPSec is simply a luxury if the LAN supports it, but also an administrative nightmare for >5k users. =3
A lot of VPN installations are simply done wrong, and it only takes 1 badly configured client or cloud side-channel to make it pointless. IPSec is not supported on a lot of LANs, and 5k users would prove rather expensive to administer.
Also, GnuPG Kyber will not be supported by VPN software anytime soon, but it would be super cool if it happens. =3
I had some additional logic that gave me a really easy but unintuitive way to tell, with an incredibly high degree of confidence, the difference between a bot and a human-on-keyboard scenario, and for what it's worth I think that is the specific thing that makes it worth the effort.
If I have reasons to suspect it’s a bot I just drop the request and move on with my day. The signal to noise ratio isn’t worth it to me.
So we made coffee-money wasting spammers time, and attacks stayed rudimentary. =3
Personally I use fwknop for port knocking; because the knock is a single encrypted packet, it doesn't suffer from replay attacks. But it still serves the same niche.
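The idea behind single-packet authorization can be approximated conceptually: authenticate one payload with a shared key, and reject stale timestamps and reused nonces to stop replays. A toy Python sketch follows; this is not fwknop's actual wire format, and the key and limits are invented:

```python
import hashlib
import hmac
import os
import time

# Invented values for illustration; fwknop's real format is different.
KEY = b"example-shared-secret"
MAX_AGE = 30.0  # seconds before a knock payload is considered stale

def make_knock(src_ip, now=None):
    """Build a one-shot knock payload: nonce | timestamp | source | HMAC tag."""
    now = time.time() if now is None else now
    nonce = os.urandom(8).hex()
    msg = f"{nonce}|{now}|{src_ip}".encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return f"{nonce}|{now}|{src_ip}|{tag}"

_seen_nonces = set()  # nonces already accepted (replay protection)

def verify_knock(payload, src_ip, now=None):
    """Check the HMAC, the source, freshness, and that the nonce is unused."""
    now = time.time() if now is None else now
    try:
        nonce, ts, ip, tag = payload.split("|")
    except ValueError:
        return False
    msg = f"{nonce}|{ts}|{ip}".encode()
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    if ip != src_ip or now - float(ts) > MAX_AGE or nonce in _seen_nonces:
        return False
    _seen_nonces.add(nonce)
    return True
```

Replaying a captured payload fails on the nonce check, which is exactly the weakness of plain port-sequence knocking that this design avoids.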
Hence the cargo cult.
Also, by collecting data on the IP addresses that are triggering fail2ban, I can identify networks and/or ASes that disproportionately host malicious traffic and block them at a global level.
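Aggregating ban data like this is straightforward. A small Python sketch, grouping banned addresses by /24 as a crude stand-in for a real AS lookup (the function name and threshold are invented; a real setup would map addresses to ASes via a whois or GeoIP database):

```python
from collections import Counter
from ipaddress import ip_address, ip_network

def hot_networks(banned_ips, prefix=24, threshold=3):
    """Group banned IPv4 addresses by /prefix and return networks at or over
    the threshold, as {network: ban_count}."""
    counts = Counter(
        ip_network(f"{ip_address(ip)}/{prefix}", strict=False)
        for ip in banned_ips
    )
    return {str(net): n for net, n in counts.items() if n >= threshold}
```

Networks that clear the threshold are candidates for a blanket firewall block rather than per-address bans.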
It's possible that some compliance regimes exist that mandate keeping logs of all unsuccessful authentication attempts. There's surely a compliance regime out there that mandates every possible permutation of thing.
But the far more common permutation, like we see with NIST, is that the organization has to articulate which logs it keeps, why those logs are sufficient for conducting investigations into system activity, and how it supports those investigations.
> The need to limit unsuccessful logon attempts and take subsequent action when the maximum number of attempts is exceeded applies regardless of whether the logon occurs via a local or network connection. Due to the potential for denial of service, automatic lockouts initiated by systems are usually temporary and automatically release after a predetermined, organization-defined time period.
https://csf.tools/reference/nist-sp-800-53/r5/ac/ac-7/
The IDP will have some settings for max fails before lockout, and apply it by counting.
I'm not totally following what Fail2Ban has to do with Wireguard. Are we talking strictly about homelabs you don't expose to the internet?
Because I have a homelab I can connect to with Wireguard. That's great. But there are certain services I want to expose to everybody. So I have a VPS that can connect to my homelab via Wireguard and forward certain domain traffic to it.
That's a safe setup in that I don't expose my IP to the internet and don't have to open ports, but I could still be DDOS'd. Would it not make sense for me to use Fail2Ban (or some kind of rate limiting) even if I'm using Wireguard? I can still be DDOS'd.
Logging both successful and failed requests is important for troubleshooting my systems, especially the client-facing ones (a subset of which are the only ones that are accessible to the open internet), and failed authentication attempts are just one sort of request failure. Sometimes those failures are legitimate client systems where someone misconfigured something, and the logs allow me to troubleshoot that after the fact. That it can also be fed to fail2ban to block attackers is just another benefit.
> You can't meaningfully characterize attacker traffic this way. They'll come from any AS they want to.
Obviously in a world full of botted computers, IoT devices, etc. it's true that an attacker can hypothetically come from anywhere, but in practice at least from the perspective of a small service provider I just don't see that happen. I'm aware that you are involved with much larger scale operations than I'm likely to ever touch so perhaps that's where our experiences differ. No one's targeting my services specifically, they're just scanning the internet for whatever's out there and occasionally happen to stumble upon one of my systems that needs to be accessible to wherever my clients happen to bring their devices.
Sure, I see random domestic residential ISP addresses get banned from individual servers from time to time, but the organized attacks I see usually come from small hosting providers halfway around the world from my clients. I have on multiple occasions seen fail2ban fire off rapidly sequential IP addresses like xxx.xxx.xxx.1 followed by xxx.xxx.xxx.2 then xxx.xxx.xxx.3, or in other cases a series of semi-random addresses all in the same subnet, which then triggers my network block, and magically they're stopped instead of just moving on to another network. If I were to be packet sniffing on the outside of the relevant firewall I'm sure I'd see another address in the blocked network trying to do its thing, but I've never looked.
Every complex service running is a door someone can potentially break. Even with the most secure and battle-tested service, you never know where someone fucked up and introduced an exploit or backdoor. It has happened too often not to be a concern. The XZ Utils backdoor, for example, was just last year.
> Your network authentication should not be a fun game or series of Rube Goldberg contraptions.
If there is no harm, who cares...
I also find it hard to believe it is engineering malpractice to use one technology over another.
What happens if there is a vulnerability in WireGuard? Or if WireGuard traffic is not allowed in or out of a network due to a policy or security restriction?
Is knocking incredibly weak security through obscurity? Sure, but part of what it does is cut down on log volume.
It's not extra security but it is a little extra efficiency.
Wireguard has something like this built in though, the PresharedKey (which is in addition to the public key crypto, and doesn't reduce your security to the level of a shared-key system). It's still more work to verify that than a port knock however.
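For reference, the preshared key mentioned above is one extra line per peer (values below are placeholders; the same key must be installed on both sides):

```ini
# Existing [Peer] sketch with a preshared key added; values are placeholders
[Peer]
PublicKey = <peer-public-key>
# Generate with `wg genpsk`; set the same value on both peers
PresharedKey = <output-of-wg-genpsk>
AllowedIPs = 10.0.0.2/32
```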
Just skip the plaintext password (the sequence of ports transmitted) and use certificate based auth, as you note below.
Using WireGuard to gate access to a server. It looks like it's a VPN, not an access control mechanism. So I am curious how this works.
The most mundane setup is two peers with each other’s public keys that let each peer talk to the other via the WireGuard link.
IMO, "only wireguard" is too restrictive of a policy - I also trust openssh and nginx to be open to the internet, if configured moderately carefully. Most FOSS servers that are widely deployed on the internet are safe to be deployed on the internet, or we'd know about it. I reviewed something that's not widely deployed on the internet though (Apache Zookeeper) and couldn't convince myself that every code path was properly checking authentication. That would have to go behind a VPN.
Briefly looking at the diagram at the top of the repo, it looks like you "knock" with an API key. Why not just run a reverse proxy in front of (whatever service you're trying to protect) and use the API keys there? To harden further, do some sort of real authentication (PKI, client certs). If you want your logs to look cleaner, install and actually configure fail2ban.
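The reverse-proxy suggestion might look something like this nginx sketch. The header name, key, upstream, and server name are all hypothetical, TLS certificate directives are omitted, and a `map` block would be more idiomatic than `if` in production:

```nginx
# Sketch: only forward requests carrying a known API key header.
server {
    listen 443 ssl;
    server_name lab.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted

    location / {
        # Reject anything without the expected key in X-Api-Key
        if ($http_x_api_key != "change-me-long-random-value") {
            return 403;
        }
        proxy_pass http://127.0.0.1:8080;
    }
}
```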
Because it breaks the clients of most homelab services.
That's what authelia does.
Though this is not technically a "knocker", but a typical token-based auth gateway. I experimented with something similar recently as well, and think it has its use cases.
But I would agree with some of the comments here. If you need to expose many services to the internet, especially if their protocols are not encrypted, then a tunneling/mesh/overlay network would be a better solution. I was a happy tinc user for several years, and WireGuard now fills that purpose well. As much as people use solutions like Tailscale, ZeroTier, etc., I personally don't trust them, and would prefer to roll my own with WG. It's not that difficult anyway.
There's also Teleport, which is more of an identity-aware proxy, and it worked well last time I tried it, but I wouldn't use it for personal use.
My opinion is that being able to filter out noise and false positives from authentication logs allows you to improve your actual security measures.
An other advantage is that it may hide information about your system making it harder for an attacker to target you based on a broad scan without doing some (usually detectable) targeted reconnaissance first. For example imagine someone found a 0-day in one of the services behind the port-knock and is scanning for the vulnerable version.
It does however add another cog in the machine that may break.
Will go into more detail on why I created it in the blog post coming very soon! Just doing the final touches right now.
With xz backdoor owning ssh, I wouldn’t completely trust ssh public key authentication either.
11 more comments available on Hacker News