Show HN: I'm rewriting a web server written in Rust for speed and ease of use
ferron.sh
Reach out to the folks at Kamal. They wrote their own reverse proxy because they thought Traefik was too complex, but they might be very interested in yours: if Ferron is more powerful yet easy to configure, it might solve more of Kamal's problems.
Not affiliated with Kamal at all, just an idea.
I previously founded and sold an AI startup to Spotify; that doesn't actually make me smarter than the average HN user (mostly just luckier), but it probably looks nice in a social proof section.
That depends, of course, on the type of backend (is it limited by other I/O, so that a Caddy bottleneck doesn't matter?).
So, I guess, performance + ease of use. Obviously, Caddy is much more mature though.
> Install with sudo curl bash
The Open Build Service [1] / openSUSE Build Service [2] might help a bit there, though, providing a tool to automate packaging for different distributions.
I will say this, though: it's probably not rational to be okay with blindly running some opaque binary from a website, but then flip out when it comes to running an install script from the same people and domain behind the same software. At least from a security PoV, I don't see how there should be any difference. But it's true that install scripts can be opinionated and litter your system by putting files in unwanted places, so there are nevertheless strong arguments outside of security.
I really like the spirit and simplicity of Ferron; I'll try it out when I have a chance. I've been waiting to gradually throw out nginx for a while now, but nothing has ticked all the checkboxes.
Good luck.
This is great! I started working on a similar project but never had the discipline to sit through all the edge cases.
Maybe I'll start building it on top of ferron!
I would love to have a minimalistic DIY serverless platform where I can compile Rust functions (or anything else, as long as it matches the type signature) to a .so, dynamically load the .so, and run the code when a certain path is hit.
You could even add JS support relatively easily with v8 isolates.
Lots of potential!
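Roughly what I have in mind for the loading part, as a sketch (libloading is just one way to do the dlopen; the `handle` name and signature here are whatever ABI contract the platform would define, not anything that exists today):

    // Hypothetical plugin contract: every compiled .so exports `handle`
    // with this exact C-ABI signature.
    use libloading::{Library, Symbol};

    type HandlerFn = unsafe extern "C" fn(path: *const u8, path_len: usize) -> i32;

    fn call_plugin(lib_path: &str, request_path: &str) -> Result<i32, Box<dyn std::error::Error>> {
        unsafe {
            let lib = Library::new(lib_path)?;                    // dlopen the .so
            let handler: Symbol<HandlerFn> = lib.get(b"handle")?; // resolve the exported symbol
            Ok(handler(request_path.as_ptr(), request_path.len()))
        }
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // e.g. route "/hello" to whatever was compiled into hello.so
        let status = call_plugin("./hello.so", "/hello")?;
        println!("handler returned {status}");
        Ok(())
    }

On the plugin side that would be a cdylib crate exporting a #[no_mangle] pub extern "C" fn handle(...) with the same signature.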
Wishing the best for your concept too!
Each of the 4 charts has data for Ferron and Caddy, but then includes data for lighttpd, Apache, nginx, and Traefik selectively, such that each chart shows exactly four selected servers.
That doesn't inspire confidence.
The problems start even higher up the page, in the "The problem with popular web servers" section, which doesn't inspire confidence either.
From "nginx configs can become verbose" (because nginx is not "just" a web server [1]) to non-sequiturs like "Many popular web servers (including Apache and NGINX) are written in programming languages and use libraries that aren't designed for memory safety. This caused many issues, such as Heartbleed in OpenSSL"
[1] Sidetrack: https://x.com/isamlambert/status/1979337340096262619
Until ~2015, GitHub Pages hosted over 2 million websites on 2 servers with a multi-million-line nginx.conf, edited and reloaded per deploy. This worked incredibly well, with github.io ranking as the 140th most visited domain on the web at the time.
Nginx performance is fine (and probably that's why it's not included in the static page "benchmark")
If this is any indication of the logic put into the application, no memory-safe language will save it from terrible bugs!
From https://www.heartbleed.com
> The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.
Also, a program being memory-safe doesn't mean it's bug-free; other bugs unrelated to memory safety exist (for example, path traversals caused by improper sanitization or validation of input).
Not sure if there is already a pure-Rust TLS implementation. That might be useful for such a case, but it would also make the point moot, since it's just avoiding the risk by not using OpenSSL, not solving the issue of memory bugs being present in third-party libraries.
You can read how Rustls compares to other TLS implementations, when it comes to implementation vulnerabilities, from the Rustls manual: https://docs.rs/rustls/latest/rustls/manual/_01_impl_vulnera...
Summary
URL: https://ferron.sh/docs
Status: 200
Source: Network

Request
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en;q=0.9
Priority: u=0, i
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.6 Safari/605.1.15

Response
Accept-Ranges: bytes
Cache-Control: public, max-age=900
Content-Encoding: br
Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline'; object-src 'none'; img-src 'self' data:; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://analytics.ferron.sh; connect-src 'self' https://analytics.ferron.sh
Content-Type: text/html
Date: Tue, 21 Oct 2025 10:07:46 GMT
ETag: W/"ba17d6fadf70c9f0f3b08511cd897f939b6130afbed2906b841119cd7fe17a39-br"
Server: Ferron
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Vary: Accept-Encoding, If-Match, If-None-Match, Range
X-Content-Type-Options: nosniff
x-ferron-cache: HIT
X-Frame-Options: deny
I did the reverse proxy benchmarks against NGINX when someone opened a GitHub issue about the missing NGINX benchmarks and asked for a comparison. It turned out that, yeah, Ferron is close to NGINX's reverse proxy performance.
On Linux you can allow your program to bind to those ports even without running the program itself as root.
https://superuser.com/questions/710253/allow-non-root-proces...
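For example, with file capabilities (the path is just an example; point it at wherever the binary actually lives):

    sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/ferron

After that, the binary can bind to ports 80/443 without running as root. If it runs under systemd, AmbientCapabilities=CAP_NET_BIND_SERVICE in the unit file achieves the same thing.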
2) Even if it were, I’m not going to do so while evaluating an unknown program.
"Download ffmpeg here: sudo bash -c ..."
And then the installation script from our malicious site installs ffmpeg just fine, plus some stuff you have no idea about. And you never know that you've just been hacked.
There’s not really a difference between curl piped to bash and installing packages from a third-party package repository that the distro maintainers have no involvement with.
(Yes, I know the latter has built-in benefits for automatic updates, but that's not going to protect you on initial installation, and its benefits can be replicated in a more portable way by any other auto-update mechanism with a similar amount of effort.)
((And if you have the patience to set up a custom repository, you can still simplify the initial installation process with a "curl | bash" script.))
You can have 10-step instructions for users to add your PGP signing key and install your APT repository, but what difference does it make? None at all. A malicious website will copy your instructions and replace the signing key and the repository URL with their own.
Installation via package managers (Debian/Ubuntu), using a repo provided by Ferron
https://ferron.sh/docs/installation/debian
Installation as a Docker container
https://ferron.sh/docs/installation/docker
And more.
This poses a similar security risk to executing the "curl-sudo-bash".
My personal opinion is that curl pipe to bash is not much worse than any other third-party binary installation method. (Third-party meaning from a source other than the package repo of your distro, or other than brew, or other than some kind of official App Store like thing on your platform.)
If this project isn’t already available in the official package repos of various distros, it eventually will be. The more cautious among us will probably want to wait until then.
For me personally, I wouldn’t have any big concerns about the curl-pipe-bash install method on some of my servers. On my personal laptop (macOS), I’d probably rather build it from source (which is also an available method, since it’s open source).
The best bet would be using official distro repositories...
But for now, I'm providing the .deb packages so that people can easily install Ferron on Debian and the like.
But this is overblown. What’s your threat model here? You’re downloading a random thing from the internet and executing it. 99% of people are on single-user machines, so root access doesn’t add much: you’re screwed just by executing the thing if it’s malicious. Doing this is no worse than installing and running a random .deb, or running npm install.
2. Actually, no, I will fight you on this: unless you're actively trying to break them, Docker, Nix, Flatpak, or any of their ilk will trivialize updates and give you guaranteed uninstallation, and going full container will absolutely let you lock down exactly what an application is capable of touching or leaving behind (easy with Podman/Docker; varies with Flatpak).
Read https://www.joelonsoftware.com/2006/12/09/simplicity/ and ask yourself if you are truly solving anyone's problem or if you are just looking for a way to rationalize the amount of time you are spending on a hobby.
The complete opposite. It's OP that's trying to "optimize the web server for reverse proxying and static file serving", when what we have out there is more than enough.
> or you're wasting time
"Wasting time" is not a problem. If OP is doing working on things because it brings them pleasure and they are hoping to learn from it, more power for them. What bugs me about these types of posts is when people are set on the "build a better mouse trap" mentality and want others to validate them.
It may sound "harsh" to you, but if I came up asking for "any type of feedback" when I'm trying to figure out if the idea is worth persuing, I'd be pretty upset if I kept chasing an invisible dragon because the community was more concerned about "hurting my feelings" instead of being upfront and give some warning like this might be interesting to you but it's not solving any real pain point. Keep that in mind when deciding if work on this will be worthwhile.
I have optimized it so that it's faster than the original server I had been working on.
> (...) giving some warning like: this might be interesting to you, but it's not solving any real pain point. Keep that in mind when deciding if work on this will be worthwhile.
If you feel the project isn't solving a real pain point for you, you don't have to use it! I was showcasing my web server to interested people on Hacker News.
The problem space of "web servers to serve static files and reverse proxy" is fairly small; how many differing solutions and designs would be required to satisfy your idea of "as many as possible"?
At what cost? For what benefit?
Again: if OP wants to work on this because they take joy in it, fine. But be honest about it (to themselves and to others) instead of coming up with all sorts of rationalizations and biased comparisons when talking about the alternatives.
Here are my learnings:
* TLS (HTTPS) can be easily enabled by default, but it requires certificates. This requires a learning curve for the application developer but can be automated away from the user.
* The TLS certs will not be trusted by default until they are added to the OS and browser trust stores. In most cases this can be fully automated. This is most simple in Windows, but Firefox still makes use of its own trust store. Linux requires use of a package to add certs to each browser trust store and sudo to add to the OS. Self signed certs cannot be trusted in OSX with automation and requires the user to manually add the certs to the keychain.
* Everything executes faster when WebSockets are preferred over HTTP. An HTTP server is not required to run a WebSocket server, allowing them to run in parallel. If the server is listening for the WebSocket handshake message and determines the connection to instead be HTTP, it can allow both WebSocket and HTTP support from the same port. (A rough sketch of this appears after this list.)
* Complete user configuration and preferences for an HTTP or WebSocket server can be a tiny JSON object, including proxy and redirection support by a variety of addressable criteria. Traffic redirection should be identical for WebSockets and HTTP, both from the user's perspective and in the internal execution.
* The server application can come online in a fraction of a second. New servers coming online will also take just milliseconds, if not for the time spent on certificate creation.
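To make the dual-protocol point concrete, here is a rough blocking-I/O sketch of the "one port, both protocols" idea (plain Rust std, nothing production-grade; a real server would buffer the request head it reads here and replay it to whichever handler takes over):

    use std::io::{BufRead, BufReader};
    use std::net::{TcpListener, TcpStream};

    // Reads the request head and reports whether it asks for a WebSocket upgrade.
    fn is_websocket_upgrade(stream: &TcpStream) -> std::io::Result<bool> {
        let mut reader = BufReader::new(stream);
        let mut line = String::new();
        loop {
            line.clear();
            // Blank line ends the request head; EOF or no Upgrade header means plain HTTP.
            if reader.read_line(&mut line)? == 0 || line == "\r\n" {
                return Ok(false);
            }
            let lower = line.to_ascii_lowercase();
            if lower.starts_with("upgrade:") && lower.contains("websocket") {
                return Ok(true);
            }
        }
    }

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:8080")?;
        for stream in listener.incoming() {
            let stream = stream?;
            if is_websocket_upgrade(&stream)? {
                // hand the connection (plus the already-read head) to the WebSocket handler
            } else {
                // hand it to the HTTP handler
            }
        }
        Ok(())
    }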
> TLS (HTTPS) can be easily enabled by default, but it requires certificates. This requires a learning curve for the application developer but can be automated away from the user.
Yeah, these certificates can be obtained from Let's Encrypt automatically.
> Everything executes faster when WebSockets are preferred over HTTP. An HTTP server is not required to run a WebSocket server, allowing them to run in parallel. If the server is listening for the WebSocket handshake message and determines the connection to instead be HTTP, it can allow both WebSocket and HTTP support from the same port.
Oh, seems like an interesting observation!
I’m working on an open-source project myself (AI-focused), and I’ve been exploring efficient ways to serve streaming responses — so I’d love to hear more about how your server handles concurrency or large responses.
> Did you benchmark against other Go web servers like Caddy or fasthttp?
I have already benchmarked Ferron against Caddy! :)
> so I’d love to hear more about how your server handles concurrency or large responses.
Under the hood, Ferron uses Monoio asynchronous runtime.
From Monoio's GitHub repository (https://github.com/bytedance/monoio):
> Moreover, Monoio is designed with a thread-per-core model in mind. Users do not need to worry about tasks being Send or Sync, as thread local storage can be used safely. In other words, the data does not escape the thread on await points, unlike on work-stealing runtimes such as Tokio.

> For example, if we were to write a load balancer like NGINX, we would write it in a thread-per-core way. The thread local data does not need to be shared between threads, so the Sync and Send do not need to be implemented in the first place.
Ferron uses an event-driven concurrency model (provided by Monoio), with multiple threads being spread across CPU cores.
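To sketch the thread-per-core shape in the abstract (this is not Ferron's or Monoio's actual code, just an illustration of why the per-thread state never needs to be Send or Sync): each core gets its own thread, its own event loop, and its own non-Send state.

    use std::cell::Cell;
    use std::rc::Rc;
    use std::thread;

    fn main() {
        let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
        let mut handles = Vec::new();
        for core in 0..cores {
            handles.push(thread::spawn(move || {
                // In a real thread-per-core server, a single-threaded async runtime
                // (Monoio in Ferron's case) would run its own accept/event loop here.
                // Rc<Cell<...>> is !Send, but that's fine: it never leaves this thread.
                let requests_served = Rc::new(Cell::new(0u64));
                requests_served.set(requests_served.get() + 1);
                println!("core {core}: served {} request(s)", requests_served.get());
            }));
        }
        for handle in handles {
            handle.join().unwrap();
        }
    }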
"Point a subdomain named ferrondemo of your domain name (for example, ferrondemo.example.com) to either: CNAME demo.ferron.sh or A 194.110.4.223"
This is really strange and makes my Spidey sense tingle. If the goal is just to point your domain at this server, it should not require DNS-based validation; plain HTTP is fine. DNS, sure, if you want to optimize the reverse proxy, because it would also be possible to do it via HTTP-based validation for every subdomain separately. If you just need some www server quickly, pointing your domain at some other dude's domain is not the way to go.
This feels weird.
Yeah. But as I said before, people installing Ferron on their servers don't need to use this demo. Oh, and people using this demo don't need to install Ferron on their servers.
I just added two notices to this demo:
> Note: After completing the demo, it's recommended to delete the subdomain you have just created to prevent security issues.
> This demo setup is optional and exists only to demonstrate automatic TLS functionality. You do not need to point any subdomain to demo servers for normal use of Ferron.
I think we do not understand each other. Look at this: https://letsencrypt.org/docs/challenge-types/#http-01-challe...
If this is for a demo, DNS is even less necessary. Just code your web server to serve the challenge for that particular URI, and you are done. I don't think you need a wildcard cert for the demo; a fixed subdomain is fine. Also, with the HTTP method you don't have to wait for DNS to propagate.
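To be concrete about what "serve the challenge for that particular URI" means: the CA fetches http://<domain>/.well-known/acme-challenge/<token> and expects the key authorization back. A toy sketch of just that serving part (the token and thumbprint values are placeholders an ACME client would fill in; real code would go through the server's normal request handling):

    use std::collections::HashMap;
    use std::io::{Read, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // An ACME client would register real token -> key-authorization pairs here.
        let mut challenges = HashMap::new();
        challenges.insert("exampleToken".to_string(), "exampleToken.exampleThumbprint".to_string());

        // Port 80, which is where the CA's validation requests arrive.
        let listener = TcpListener::bind("0.0.0.0:80")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut buf = [0u8; 4096];
            let n = stream.read(&mut buf)?;
            let request = String::from_utf8_lossy(&buf[..n]);
            // Request line looks like: GET /.well-known/acme-challenge/<token> HTTP/1.1
            let path = request.split_whitespace().nth(1).unwrap_or("/");

            let response = match path.strip_prefix("/.well-known/acme-challenge/") {
                Some(token) if challenges.contains_key(token) => {
                    let body = &challenges[token];
                    format!("HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}", body.len(), body)
                }
                _ => "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n".to_string(),
            };
            stream.write_all(response.as_bytes())?;
        }
        Ok(())
    }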
"It's written in Rust so it's memory safe!"
By the way, the rewrite is still in Rust.
Which seems like interesting UX.
Maybe just write an nginx config generator instead?
It's also interesting that the actual config looks quite a lot like nginx config.