A File Format Uncracked for 20 Years
Posted about 2 months ago · Active about 2 months ago
landaire.net · Tech · story
Tone: calm, positive · Debate intensity: 20/100
Key topics
Reverse Engineering
File Formats
Game Development
A 20-year-old file format has been cracked, showcasing the dedication and expertise of reverse engineers, and sparking nostalgia and appreciation for the community.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement · First comment: 5h after posting · Peak period: 8 comments in 12-18h · Avg per period: 3.3
Comment distribution: 13 data points (based on 13 loaded comments)
Key moments
- Story posted: Nov 6, 2025 at 9:10 PM EST (about 2 months ago)
- First comment: Nov 7, 2025 at 2:39 AM EST (5h after posting)
- Peak activity: 8 comments in 12-18h, the hottest window of the conversation
- Latest activity: Nov 9, 2025 at 1:51 AM EST (about 2 months ago)
ID: 45842851 · Type: story · Last synced: 11/20/2025, 12:08:29 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
For example, one format I use simply concatenates multiple files into a single one; I use it to group video-timeline seeker images into one file, and it is faster than using an archive or tar/gzip. Another is a format that concatenates AES-GCM chunks into a single file, which tolerates interrupted writes and also supports seeking and streaming reads.
These things are quite useful, but there is no general-purpose tool for them (like gzip/tar). Usually some specific functionality is needed, so they always have to be written from scratch.
And yeah, like I said, random access is possible but you have to write your own "driver" for it.
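Not the commenter's actual code, but a minimal Python sketch of what such a "driver" could look like: concatenate the payloads, append a small index, and seek into it for random access. The layout (payloads, then a JSON index, then an 8-byte trailer holding the index size) and the function names are assumptions for illustration only.

```python
# Minimal sketch of a "concatenate files + index" container with a custom
# random-access "driver". Layout and names are hypothetical.
import json
import struct
from pathlib import Path

def pack_files(paths, out_path):
    """Concatenate files and append an index of name -> (offset, size)."""
    index = {}
    with open(out_path, "wb") as out:
        for p in map(Path, paths):
            data = p.read_bytes()
            index[p.name] = (out.tell(), len(data))
            out.write(data)
        index_bytes = json.dumps(index).encode("utf-8")
        out.write(index_bytes)
        out.write(struct.pack("<Q", len(index_bytes)))  # trailer: index size

def read_entry(container_path, name):
    """Random access: read one entry without scanning the whole file."""
    with open(container_path, "rb") as f:
        f.seek(-8, 2)                                   # jump to the trailer
        (index_size,) = struct.unpack("<Q", f.read(8))
        f.seek(-(8 + index_size), 2)                    # jump to the index
        offset, size = json.loads(f.read(index_size))[name]
        f.seek(offset)
        return f.read(size)
```

For example, `pack_files(["a.jpg", "b.jpg"], "thumbs.bin")` followed by `read_entry("thumbs.bin", "b.jpg")` pulls a single image back out without reading the rest of the container.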
I did something like this when I was moving my files onto a new computer like 25 years ago, and all I had was a floppy drive. Just continuously dump the data onto a floppy until space runs out and ask for another one until there are no more files.
I've been authoring IFF/LBM and PCX encode/decode libraries recently because the existing implementations are half-assed: they cherry-pick a few features rather than supporting these formats robustly.
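For readers unfamiliar with these formats: IFF (the container ILBM/LBM images live in) is a sequence of tagged chunks, and the chunk walk is where partial implementations tend to slip (big-endian sizes, the pad byte for odd lengths). A rough sketch of a top-level chunk parser, not taken from the commenter's libraries:

```python
# Rough sketch of walking top-level chunks in an IFF FORM container (the
# structure ILBM/LBM files use): 4-byte chunk ID, big-endian 32-bit size,
# payload, and a pad byte when the size is odd. Not a full decoder.
import struct

def parse_iff(data: bytes):
    """Return (form_type, [(chunk_id, payload), ...]) for an IFF FORM file."""
    if data[:4] != b"FORM":
        raise ValueError("not an IFF FORM file")
    (form_size,) = struct.unpack(">I", data[4:8])
    form_type = data[8:12]                  # e.g. b"ILBM" for LBM images
    chunks, pos, end = [], 12, 8 + form_size
    while pos + 8 <= end:
        chunk_id = data[pos:pos + 4]
        (size,) = struct.unpack(">I", data[pos + 4:pos + 8])
        chunks.append((chunk_id, data[pos + 8:pos + 8 + size]))
        pos += 8 + size + (size & 1)        # chunks are padded to even sizes
    return form_type, chunks

# Usage: form_type, chunks = parse_iff(open("image.lbm", "rb").read())
```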
> The CPU wasn't terribly slow for the time, but wasting cycles would have been noticed.
> Compressing data means you save space on the disc...
While wasting cycles isn't a good thing, it's even worse to waste those cycles by not using them at all because you are waiting on slow media.
And while you could invent a compressed format for every asset type you have, it would be much easier to just compress the whole thing and let the compressor do the magic.
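A toy illustration of that point (not from the thread): a general-purpose compressor can exploit redundancy across assets when they are compressed as one blob, which per-asset compression never sees.

```python
# Toy demo: near-duplicate assets compressed one by one vs. as a single blob.
# The data is synthetic; the point is only that cross-asset redundancy is
# invisible to per-asset compression.
import os
import zlib

shared = os.urandom(4096)                                # content common to several assets
assets = [shared + bytes([i]) * 64 for i in range(4)]    # near-duplicate assets

per_asset = sum(len(zlib.compress(a)) for a in assets)
one_blob = len(zlib.compress(b"".join(assets)))

print(f"compressed one by one: {per_asset} bytes")
print(f"compressed as a whole: {one_blob} bytes")        # far smaller here
```

The trade-off is random access: a whole-file stream has to be decompressed from the front unless it is split into blocks or given a seek table.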
NB: I still somewhat remember the original SC, and it felt like "the future is now" with all those glorious shadows and the sunlight blooming.