Streaming Compression Beats Framed Compression
Posted 7 days ago · last active 2 days ago · Source: bou.ke · Hacker News story
There is RFC 9220 [2], which runs WebSockets over QUIC (which is UDP-based). But it is still expected to expose a stream of bytes to the WebSocket layer, which preserves the ordering guarantee.
[1]: https://datatracker.ietf.org/doc/html/rfc6455
[2]: https://datatracker.ietf.org/doc/rfc9220/
1) Using a single compression context for the whole stream means you have to keep that context in memory on both the client and the server for the lifetime of the connection. This can have a nontrivial memory cost, especially at high compression levels. (Don't set the compression window any larger than it needs to be!)
2) Using a single context also means you can't decompress one frame without having read the whole stream that preceded it. This rules out some useful optimizations when you're "fanning out" messages to many recipients: if you compress each message individually, you can compress it once and send the same compressed bytes to every recipient.
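To make the trade-off concrete, here is a minimal sketch using Python's zlib (raw DEFLATE, roughly what permessage-deflate uses); the messages and the fan-out loop are made up for illustration:

    import zlib

    messages = [b'{"op":"tick","price":101}', b'{"op":"tick","price":102}']

    # (1) One context per connection: each frame depends on everything sent
    #     before it, so both ends must keep the context alive.
    stream = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, -15)
    frames = [stream.compress(m) + stream.flush(zlib.Z_SYNC_FLUSH) for m in messages]

    # (2) Per-message compression: no shared state, so one compressed payload
    #     can be fanned out to every recipient as-is.
    def compress_once(msg: bytes) -> bytes:
        c = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, -15)
        return c.compress(msg) + c.flush(zlib.Z_FINISH)

    payload = compress_once(messages[0])
    for recipient_socket in []:          # placeholder recipient list
        recipient_socket.send(payload)   # same bytes for everyone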
It sounds like these contexts should be cleared when they hit a certain memory limit, or reset periodically, e.g. every N messages. Is there another way to manage the memory cost?
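One common pattern, sketched here with Python's zlib (the interval and window size are arbitrary choices, not anything from the article), is to fix the memory up front via the window size and memLevel and then swap in a fresh context every N messages, which is roughly what permessage-deflate's no-context-takeover mode does per message:

    import zlib

    N = 1000                              # reset interval (arbitrary)
    WINDOW_BITS, MEM_LEVEL = -9, 1        # 512 B raw-DEFLATE window, minimal memLevel

    def new_context():
        # Memory for a DEFLATE context is fixed at creation by wbits/memLevel.
        return zlib.compressobj(6, zlib.DEFLATED, WINDOW_BITS, MEM_LEVEL)

    comp, sent = new_context(), 0

    def send_compressed(msg: bytes) -> bytes:
        global comp, sent
        sent += 1
        if sent % N == 0:
            comp = new_context()          # drop the shared dictionary; the peer
                                          # must reset its decompressor in lockstep
        return comp.compress(msg) + comp.flush(zlib.Z_SYNC_FLUSH)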
Many compression standards include memory limits to guarantee compatibility, and the older the standard, the lower that limit is likely to be. If the standards didn't dictate this, DVD publishers could release a disc that needed a 4 MB decompression window, and it would fail to play on players that only had 2 MB of memory; setting a standard and following it avoids that.
https://tintin.mudhalla.net/protocols/mccp/
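The window-size compatibility point above can be reproduced directly with Python's zlib; the sizes are illustrative, a toy stand-in for the DVD-player scenario:

    import zlib

    data = b"some moderately repetitive payload " * 50_000   # ~1.7 MB

    # Encoder assumes the full 32 KiB zlib window (wbits=15)...
    blob = zlib.compress(data, 9)

    # ...a decoder that only budgets a 512-byte window refuses to play it.
    try:
        zlib.decompressobj(wbits=9).decompress(blob)
    except zlib.error as err:
        print("decoder rejected stream:", err)   # "invalid window size"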
We used a streaming compression format that was originally designed for IBM tape drives.
It was fast as hell, worked really well, was gentle on the CPU, and made it easy to control memory usage.
In the early 2000s, on a modest 2-proc AMD64 machine, we ran out of Fast Ethernet well before we felt any CPU pressure.
We got hit by the SOAP mafia during Longhorn: we couldn't convince the web-services crowd to adopt it, and instead they made us enshittify our "2 bytes length, 2 bytes msgtype, structs-on-the-wire" speed demon with their XML crap.
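For reference, a minimal sketch of that kind of framing in Python; the endianness, field order, and payload layout are assumptions for illustration, not the original format:

    import struct

    HEADER = struct.Struct("<HH")        # uint16 length, uint16 msgtype

    def frame(msgtype: int, payload: bytes) -> bytes:
        return HEADER.pack(len(payload), msgtype) + payload

    def unframe(buf: bytes):
        length, msgtype = HEADER.unpack_from(buf)
        return msgtype, buf[HEADER.size:HEADER.size + length]

    # The payload itself is just a fixed-layout struct on the wire.
    TICK = struct.Struct("<IdI")         # made-up fields: id, price, qty
    wire = frame(0x0001, TICK.pack(42, 101.25, 500))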
https://www.ietf.org/archive/id/draft-ietf-httpbis-compressi...