SSH3: Faster and rich secure shell using HTTP/3
Posted 3 months ago · Active 3 months ago
Source: github.com · Tech story · High profile
Debate: heated, mixed (80/100)
Key topics
SSH
Http/3
Networking
Security
Protocol Design
The SSH3 project aims to create a faster and more secure SSH protocol using HTTP/3, but the community is divided on its merits and naming convention.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 34m
Peak period: 92 comments (0-6h)
Avg / period: 17.8
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- Story posted: Sep 27, 2025 at 10:27 AM EDT (3 months ago)
- First comment: Sep 27, 2025 at 11:01 AM EDT (34m after posting)
- Peak activity: 92 comments in 0-6h, the hottest window of the conversation
- Latest activity: Oct 1, 2025 at 2:54 PM EDT (3 months ago)
ID: 45395991 · Type: story · Last synced: 11/22/2025, 11:00:32 PM
Of course you need to wait for ACKs at some point though, otherwise they would be useless. That's how we detect, and potentially recover from, broken links. They are a feature. And HTTP3 has that feature.
Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen. The use case of SSH (long lived connections with shorter lived channels) is vastly different from the short lived bursts of many connections that QUIC was intended for. My best guess is that it could go both ways, depending on the actual implementation. The devil is in the details, and there are many details here.
Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called "long fat links" in network lingo, and is not what TCP was built for. Look at pages like this Linux Tuning for High Latency networks: https://fasterdata.es.net/host-tuning/linux/
There is also the HPN-SSH project which increases the buffers of SSH even more than what is standard. It is seldom needed anymore since both Linux and OpenSSH have improved, but can still be useful.
SSH multiplexes multiple channels on the same TCP connection which results in head of line blocking issues.
> Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called "long fat links" in network lingo, and is not what TCP was built for.
Not really, no. OpenSSH has a 2 MB window size (in the 2000s, 64K); even with just ~gigabit speeds it only takes around 10-20 ms of latency to start being limited by the BDP.
But it's still irrelevant here; specifically called out in README:
> The keystroke latency in a running session is unchanged.
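The window-vs-RTT arithmetic in the comments above is easy to check. A minimal sketch, assuming the ~2 MB OpenSSH channel window mentioned there (the numbers are illustrative, not queried from any implementation):

```python
# Throughput of a window-limited stream is capped at window / RTT.

def max_throughput_bits(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on throughput (bits/s) for a fixed receive window."""
    return window_bytes * 8 / rtt_seconds

window = 2 * 1024 * 1024          # assumed ~2 MiB channel window
for rtt_ms in (10, 20, 100):
    gbps = max_throughput_bits(window, rtt_ms / 1000) / 1e9
    print(f"RTT {rtt_ms:3d} ms -> at most {gbps:.2f} Gbit/s")
```

At 10 ms this caps out around 1.7 Gbit/s and at 20 ms around 0.8 Gbit/s, which matches the claim that a ~gigabit link becomes window-limited after only 10-20 ms of latency.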
The YouTube and social media eras made everyone so damn dramatic. :/
Mosh solves a problem. tmux provides a "solution" for some that resolves a design decision that can impact some user workflows.
I guess what I'm saying here is, if you NEED mosh, then running tmux is not even a hard ask.
1. High latency, maybe even packet-dropping connections;
2. You’re roaming and don’t want to get disconnected all the time.
For 2, sure tmux is mostly okay, it’s not as versatile as the native buffer if you use a good terminal emulator but whatever. For 1, using tmux in mosh gives you an awful, high latency scrollback buffer compared to the local one you get with regular ssh. And you were specifically talking about 1.
For read-heavy, reconnectable workloads over high latency connections I definitely choose ssh over mosh or mosh+tmux and live with the keystroke latency. So saying it’s a huge downside is not an exaggeration at all.
From my stance, and where I've used mosh has been in performing quick actions on routers and servers that may have bad connections to them, or may be under DDoS, etc. "Read" is extremely limited.
So from that perspective and use case, the "huge downside" has never been a problem.
Not a scroll back buffer workflow issue.
Filtering inbound UDP on one side is usually enough to break mosh, in my experience. Maybe they use better NAT traversal strategies since I last checked, but there's usually no workaround if at least one network admin involved actively blocks it.
This SSH window size limit is per ssh "stream", so it could be overcome by many parallel streams, but most programs do not make use of that (scp, rsync, piping data through the ssh command), so they are much slower than plain TCP as measured eg by iperf3.
I think it's silly that this exists. They should just let TCP handle this.
https://github.com/libfuse/sshfs/issues/300
No, unfortunately it's necessary so that the SSH protocol can multiplex streams independently over a single established connection.
If one of the multiplexed streams stalls because its receiver is blocked or slow, and the receive buffer (for that stream) fills up, then without window-based flow control, that causes head-of-line blocking of all the other streams.
That's fine if you don't mind streams blocking each other, but it's a problem if they should flow independently. It's pretty much a requirement for opportunistic connection sharing by independent processes, as SSH does.
In some situations, this type of multiplexed stream blocking can even result in a deadlock, depending on what's sent over the streams.
Solutions to the problem are to either use window-based flow control, separate from TCP's, or to require all stream receive buffers to expand without limit, which is normally unacceptable.
HTTP/2 does something like this.
I once designed a protocol without this, thinking multiplexing was enough by itself, and found out the hard way when processes got stuck for no apparent reason.
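A minimal sketch of the window-based flow control described above, with per-stream credit so one stalled receiver can't block its siblings. All names and numbers here are illustrative, not from any real implementation:

```python
# Per-stream (window-based) flow control layered over a shared reliable
# connection, in the spirit of SSH channels / HTTP/2 streams.

class Stream:
    def __init__(self, window: int):
        self.send_window = window      # credits granted by the peer

    def can_send(self, n: int) -> bool:
        return n <= self.send_window

    def on_send(self, n: int):
        assert self.can_send(n)
        self.send_window -= n

    def on_window_update(self, n: int):
        # Receiver grants more credit once it has drained its buffer.
        self.send_window += n

# Two streams share one connection. If stream A's receiver stalls, A simply
# runs out of credit and stops; stream B keeps its own window and keeps flowing.
a, b = Stream(window=4), Stream(window=4)
a.on_send(4)                           # A exhausts its window
print(a.can_send(1), b.can_send(4))    # -> False True
```

The key property is that exhausting A's window blocks only A; without the per-stream window, A's unread bytes would sit in the shared receive buffer and stall B too.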
* Give users a config option so I can adjust it to my use case, like I can for TCP. Don't just hardcode some 2 MB (which was even raised to this in the past, showing how futile it is to hardcode it because it clearly needs adjustments to people's networks and ever-increasing speeds). It is extremely silly that within my own networks, controlling both endpoints, I cannot achieve TCP speeds over SSH, but I can with nc and a symmetric encryption piped in. It is silly that any TCP/HTTP transfer is reliably faster than SSH.
* Implement data dropping and retransmissions to handle blocking -- like TCP does. It seems obviously asking for trouble to want to implement multiplexing, but then only implement half of the features needed to make it work well.
When one designs a network protocol, shouldn't one of the first sanity checks be "if my connection becomes 1000x faster, does it scale"?
Or, better but more difficult, it should track the dynamic TCP window size, from the OS when possible, combined with end-to-end measurements, and ensure the SSH mux channel windows grow to accommodate the TCP window, without growing so much they starve other channels.
To your second point, you can't do data dropping and retransmission for mux'd channels over a single TCP connection. After data is sent from the application to the kernel socket, it can't be removed from the TCP transmission queue, will be retransmitted by the kernel socket as often as needed, and will reach the destination eventually, provided the TCP connection as a whole survives.
You can do mux'd data dropping and retransmission over a single UDP connection, but that's basically what QUIC is.
https://github.com/rapier1/hpn-ssh
If you use the former without the latter, you'll inevitably have head-of-line blocking issues if your connection is bandwidth or receiver limited.
Of course not every SSH user uses protocol multiplexing, but many do, as it can avoid repeated and relatively expensive (in terms of CPU, performance, and logging volume) handshakes.
[0]: https://github.com/mobile-shell/mosh/issues/98
Also, HTTP/3 must obviously also be using some kind of acknowledgements, since for fairness reasons alone it must be implementing some congestion control mechanism, and I can't think of one that gets by entirely without positive acknowledgements.
It could well be more efficient than TCP's default "ack every other segment", though. (This helps in the type of connection mentioned above; as far as I know, some DOCSIS modems do this via a mechanism called "ack compression", since TCP is generally tolerant of losing some ACKs.)
In a sense, the win of QUIC/HTTP/3 in this sense isn’t that it’s not TCP (it actually provides all the components of TCP per stream!); it’s rather that the application layer can “provide its own TCP”, which might well be more modern than the operating system’s.
https://github.com/crazyscot/qcp
> The stream multiplexing capabilities of QUIC allow reducing the head-of-line blocking that SSHv2 encounters when multiplexing several SSH channels over the same TCP connection
....
> Each channel runs over a bidirectional HTTP/3 stream and is attached to a single remote terminal session
[0] https://www.ietf.org/archive/id/draft-michel-remote-terminal...
The former kind of multiplexing addresses functionality, the latter performance.
Not that I've ever noticed this being an issue (no matter how much we complain, internet here is pretty decent)
Edit: seeing as someone downvoted your hour-old comment just as I was adding this first reply, I guess maybe they 'voted to disagree'... Would be nice if the person would comment. It wasn't me anyway
I use ssh everywhere, maybe establish 200+ SSH sessions a day for my entire career of 20 years and never once have I thought “I wish establishing this connection was faster”
There are a lot of automation use cases for SSH where connection setup time is a significant impediment; if you’re making dozens or hundreds of connections to hundreds or thousands of hosts, those seconds add up.
HTTP/3 (and hopefully this project) does not have this problem.
The reason stated for using HTTP/3 and not QUIC directly makes sense, with little downside - you can run it behind any standard HTTP/3 reverse proxy, under some subdomain or path of your choosing, without standing out to port scanners. While security through obscurity is not security, there's no doubt that it reduces the CPU overhead that many scanners might incur if they discover your SSH server and try a bunch of login attempts.
Running over HTTP/3 has an additional benefit: it becomes harder to block. If your SSH traffic just looks like you're on some website with lots of network traffic, e.g. Google Meet, then it becomes a lot harder to block it without blocking all web traffic over HTTP/3. Even if you do that, you could likely still get a working but suboptimal emulation over HTTP/1 CONNECT.
As long as said proxy supports upgrading an HTTP CONNECT to a bi-directional connection. Most that I know of do, but it may require additional configuration.
Another advantage of using http/3 is it makes it easier to authenticate using something like oauth 2, oidc, saml, etc. since it can use the normal http flow instead of needing to copy a token from the http flow to a different flow.
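For illustration, once an OAuth2/OIDC flow has produced an access token, carrying it in the normal HTTP flow is just a standard Authorization header. A sketch with a placeholder URL and token (nothing here is SSH3's actual API):

```python
# Sketch: attaching an OAuth2 access token to an ordinary HTTP request.
# The endpoint and token are hypothetical stand-ins.
import urllib.request

token = "example-access-token"     # would come from an OAuth2/OIDC flow
req = urllib.request.Request(
    "https://ssh3.example.com/ssh3-endpoint",   # hypothetical endpoint
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))  # -> Bearer example-access-token
```

The point being made above is that no separate token-copying step is needed: the same header scheme every HTTP client and reverse proxy already understands carries the credential.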
But both HTTP/2 and QUIC (the "transport layer" of HTTP/3) are so general-purpose that I'm not sure the HTTP part really has a lot of meaning anymore. At least QUIC is relatively openly promoted as an alternative to TCP, with HTTP its primary usecase.
If you've ever used Wi-Fi in an airport, or in some hotels and serviced work suites around the world, you will notice that Apple Mail can't send or receive emails. It is probably some company-wide policy to first block port 25 (that is even the case with some hosting providers), all in the name of fighting spam. Pretty soon, 143, 587, 993, 995... are all blocked. Guess 80 and 443 are the only ones that can go through any firewalls nowadays. It is a shame really. Hopefully v6 will do better.
So there you go. And now the EU wants to do ChatControl! Please stop this nonsense; listen to the people who actually know tech.
People were (wisely) blocking port 25 twenty years ago.
A network admin can reasonably want to have the users of their network not run mail servers on it (as that gets IPs flagged very quickly if they end up sending or forwarding spam), while still allowing mail submission to their servers.
Is it because it is hard to detect what type of request is being sent? Stream vs non-stream, etc.?
20 years ago (2005) STARTTLS was still widely in use. Clients can be configured to call it when STARTTLS isn't available. But clients can also be served bogus or snake oil TLS certs. Certificate pinning wasn't widely in use for SMTP in 2005.
Seems STARTTLS is deprecated since 2018 [1]
Quote: For email in particular, in January 2018 RFC 8314 was released, which explicitly recommends that "Implicit TLS" be used in preference to the STARTTLS mechanism for IMAP, POP3, and SMTP submissions.
[1] https://serverfault.com/questions/523804/is-starttls-less-sa...
Blocking ports 587, 993, 995 etc. is indeed silly.
It's not like we see a lot of downsides from the world having collectively agreed on TCP/IP over IPX/SPX or DECnet or X.25. Or that the Linux kernel is everywhere.
If you are designing a protocol, unless you have a secret deal with telcos, I suggest you masquerade it as something like HTTP so that it is more difficult to slow down your traffic.
So your super speedy HTTP SSH connection then ends up being slower than if you just used ssh. Especially if your http traffic looks rogue.
At least when it's its own protocol you can come up with strategies to work around the censorship.
There is not only censorship, but traffic shaping when some apps are given a slow lane to speed up other apps. By making your protocol identifiable you gain nothing good.
kind of like if a random person created an (unaffiliated) hacker news 2.0 website.
Looking at you, teams who run Zscaler with tls man in the middle attack mode enabled.
Host *.internal.example.com
in the SSH client config would make everything in that domain hop over that hop server. It's one extra connection - but with everything correctly configured that should be barely noticeable. Auth is also proxied through.
EDIT: Looking at the relevant RFC [1] and the OpenSSH sshd_config manual [2], it looks like the answer is that the protocol supports having the jump server decide what to do with the host/port information, but the OpenSSH server software doesn't present any relevant configuration knobs.
[1]: https://www.rfc-editor.org/rfc/rfc4254.html#section-7.2
[2]: https://man7.org/linux/man-pages/man5/sshd_config.5.html
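For reference, the kind of client configuration the comment describes might look like this (hostnames are illustrative):

```
# ~/.ssh/config -- illustrative hostnames
Host jump.example.com
    User alice

Host *.internal.example.com
    ProxyJump jump.example.com
```

With this, any `ssh box.internal.example.com` transparently routes through the jump host, and authentication to the final host is carried end-to-end through the proxied connection.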
What am I missing?
"It is often the case that some SSH hosts can only be accessed through a gateway. SSH3 allows you to perform a Proxy Jump similarly to what is proposed by OpenSSH. You can connect from A to C using B as a gateway/proxy. B and C must both be running a valid SSH3 server. This works by establishing UDP port forwarding on B to forward QUIC packets from A to C. The connection from A to C is therefore fully end-to-end and B cannot decrypt or alter the SSH3 traffic between A and C."
More or less, maybe but not automatically like you suggest, I think. I don't see why you couldn't configure a generic proxy to set it up, though.
https://github.com/openbsd/src/commits/master/
> SSH3 is probably going to change its name. It is still the SSH Connection Protocol (RFC4254) running on top of HTTP/3 Extended connect, but the required changes are heavy and too distant from the philosophy of popular SSH implementations to be considered for integration. The specification draft has already been renamed ("Remote Terminals over HTTP/3"), but we need some time to come up with a nice permanent name.
A better 'working name' would be something like sshttp3, lol. Obviously not the successor to SSH2
Non-doers are the bottom rung of the ladder, don't ever forget that :).
I've seen very little do that. Probably just HTTP, and it's using a slash specifically to emphasize a big change.
Having SSH in the name helps developers quickly understand the problem domain it improves upon.
I meant this in jest but now that I think about it, it actually could be a decent name (?)
SSHTP/3 "Secure Shell Transfer Protocol Version 3"
or even:
SSHP/3 "Secure Shell Protocol Version 3"
pronounced: shoop
Pronounced "Shoe"
qsh might be taken by QShell
https://en.m.wikipedia.org/wiki/Qshell
There's a whole GitHub issue where the name was bikeshedded to death.
Or h3s for HTTP 3 Shell?
H3rs for http3 remote shell?
Why not just SSH/QUIC, what does the HTTP/3 layer add that QUIC doesn’t already have?
That way, when you need to use sed for editing text containing it, your pattern can be more interesting:
so, maybe SSHoQ or SoQ
soq reads better for the CLI I suppose.
You’ll see when the logs drop!
ssh is not a shell and ssh is not a terminal, so please everybody stop suggesting name improvements that more deeply embed that confusion.
back in the day, we had actual terminals, and running inside was our shell which was sh. then there was also csh. then there was the idea of "remote" so rsh from your $SHELL would give you a remote $SHELL on another machine. rsh was not a shell, and it was not a terminal. There were a whole bunch of r- prefixed commands, it was a family, and nobody was confused, these tools were not the thing after the r-, these tools were just the r- part.
then it was realized that open protocols were too insecure so all of the r- remote tools became s- secure remote tools.
http is a network protocol that enables other things and gets updated from time to time, and it is not html or css, or javascript; so is ssh a network protocol, and as I said, not a shell and not a terminal.
just try to keep it in mind when thinking of new names for new variants.
and if somebody wants to reply that tcp/ip is actually the network protocol, that's great, more clarification is always good, just don't lose sight of the ball.
It's still largely SSH2, but runs on top of HTTP/3.
https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
However, it can also use HTTP mechanisms for authentication/authorization.
With ssh everybody does TOFU or copies host fingerprints around, vs https where setting up letsencrypt is a no-brainer and you're a weirdo if you even think about self-signed certs. Now you can do the same with ssh, but do you?
For authentication, ssh relies on long lived keys rather than short lived tokens. Yes, I know about ssh certificates but again, it’s a hassle to set up compared to using any of a million IdP with oauth2 support. This enables central place to manage access and mandate MFA.
Finally, you better hope your corporate IT has not blocked the SSH port as a security threat.
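For comparison with the letsencrypt case, the mechanics of SSH host certificates are small even if the tooling around them is less turnkey. A sketch using ssh-keygen, with all file names and hostnames as placeholders:

```shell
# Illustrative sketch: a tiny SSH host-certificate setup with ssh-keygen.

ssh-keygen -q -t ed25519 -N '' -f ssh_ca      # 1. create a CA keypair
ssh-keygen -q -t ed25519 -N '' -f host_key    # 2. stand-in for a host's key
ssh-keygen -q -s ssh_ca -I host.example.com -h \
    -n host.example.com host_key.pub          # 3. sign it -> host_key-cert.pub

# 4. Clients then trust the CA instead of TOFU fingerprints:
echo "@cert-authority *.example.com $(cat ssh_ca.pub)" >> known_hosts.example
```

Once the `@cert-authority` line is distributed to clients, any host presenting a certificate signed by that CA is accepted without per-host fingerprint prompts; the hassle the comment mentions is mostly the distribution and rotation around these few commands.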
Listing all the deficiencies of something, and putting together a thing that fixes all of them, is the kind of "designed by committee" project that everyone hates. Real progress requires someone to put together a quick project, with new features they think are useful, and letting the public decide if it is useful or not.
Firstly, I love the satirical name of tempaccount420, I was also just watching memes and this post is literally me (ryan gosling)
As I was also thinking about this literally yesterday, being a bit delusional in hoping to create a better ssh using http/3 or some other minor improvement, because I made a comment about tor routing and linking it to things like serveo; I was thinking of enhancing that idea or something lol.
Actually, it seems that I have already starred this project but had forgotten about it; this is primarily the reason why I star GitHub projects, and it might be where I got the inspiration for HTTP/3 with SSH in the first place.
Seems like a really great project (I think)
Now, one question that I have is could SSH be made modular in the sense that we can split the transport layer apart from SSH as this project does, without too much worries?
Like, I want to create an SSH-ish something with, say, something like iroh as the transport layer; are there any libraries or resources which can do something like that? (I won't do it for iroh, but I always like mixing and matching, and I am thinking of some different ideas like SSH over matrix/xmpp/signal too; the possibilities could be limitless!)
103 more comments available on Hacker News