Hacker News SQLite
hackerbook.dosaygo.com
Go to this repo (https://github.com/DOSAYGO-STUDIO/HackerBook): you can download it. BigQuery -> ETL -> npx serve docs - that's it. 20 years of HN arguments and beauty, can be yours forever. So they'll never die. Ever. It's the unkillable static archive of HN and it's in your hands. That's my year-end gift to you all. Thank you for a wonderful year, have a happy and wonderful 2026. Make something of it.
Question - did you consider tradeoffs between duckdb (or other columnar stores) and SQLite?
So you can dump e.g. all of Hacker News in a single multi-GB Parquet file somewhere and build a client-side JavaScript application that can run queries against that without having to fetch the whole thing.
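As a hedged sketch of that idea (not something this project uses): with @duckdb/duckdb-wasm you can point the browser at a remote Parquet file and let DuckDB pull only the row groups and columns a query touches over HTTP range requests. The Parquet URL and the "by" column below are assumptions for illustration.

```ts
import * as duckdb from '@duckdb/duckdb-wasm';

// Spin up DuckDB-WASM in a worker using the jsDelivr-hosted bundles.
async function openDuckDB(): Promise<duckdb.AsyncDuckDB> {
  const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
  // Wrap the cross-origin worker script so `new Worker()` accepts it.
  const workerUrl = URL.createObjectURL(
    new Blob([`importScripts("${bundle.mainWorker!}");`], { type: 'text/javascript' })
  );
  const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), new Worker(workerUrl));
  await db.instantiate(bundle.mainModule, bundle.pthreadWorker);
  URL.revokeObjectURL(workerUrl);
  return db;
}

// Query a multi-GB Parquet file without downloading all of it.
async function topCommenters(parquetUrl: string) {
  const db = await openDuckDB();
  const conn = await db.connect();
  const result = await conn.query(`
    SELECT "by" AS author, count(*) AS items
    FROM read_parquet('${parquetUrl}')
    GROUP BY 1 ORDER BY 2 DESC LIMIT 20
  `);
  await conn.close();
  return result;
}
```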
It would be an interesting experiment to add a DuckDB backend.
duckdb is a 54M dynamically-linked binary on amd64
sqlite3 is a 1.7M static binary
DuckDB is a 6yr-old project
SQLite is a 25yr-old project
duckdb reads Parquet or DuckDB
sqlite3 reads SQL
DuckDB is OLAP
SQLite is OLTP
It's different in that it's tailored to analytics: among other things, storage is columnar, and it can run directly off common data-analytics file formats.
It has transparent compression built in and supports natural language queries. https://buckenhofer.com/2025/11/agentic-ai-with-duckdb-and-s...
"DICT FSST (Dictionary FSST) represents a hybrid compression technique that combines the benefits of Dictionary Encoding with the string-level compression capabilities of FSST. This approach was implemented and integrated into DuckDB as part of ongoing efforts to optimize string storage and processing performance." https://homepages.cwi.nl/~boncz/msc/2025-YanLannaAlexandre.p...
Doesn't scream columnar database to me.
I had to run a test for myself. Using sqlite2duckdb (no research, first search hit) on randomly picked shard 1636, the sqlite.gz was 4.9MB, but the duckdb.gz was 3.7MB.
The uncompressed sizes favor SQLite, which does not make sense to me, so I'm not sure if DuckDB keeps around more statistics information: uncompressed sqlite is 12.9MB, duckdb 15.5MB.
BUT I did try to push the entire 10GB of shards to GitHub (no LFS, no thanks, money), and after the 20 minutes of compressing objects etc., "remote hung up unexpectedly".
To be expected I guess. I did not think GH Pages would be able to do this. So I have been repeating:
wrangler pages deploy docs --project-name static-news --commit-dirty=true
on changes. First-time CF Pages user here, much impressed!
It's super simple, really, far less impressive than what you've built there.
Listen, that was nice. That's really cool, actually. I encourage you to do it.
I've been taking frequent "offline-only-day" breaks to consolidate whatever I've been learning, and Kiwix[0] has been a great tool for reference (offline Wikipedia, StackOverflow and whatnot).
[0] https://kiwix.org/en/the-new-kiwix-library-is-available/
> 20 years of HN arguments and beauty, can be yours forever. So they'll never die. Ever. It's the unkillable static archive of HN and it's in your hands
I'm really sorry to have to ask this, but this really feels like you had an LLM write it?
Ooh, I used “sequential”, ooh, I used an em dash. ZOMG AI IS COMING FOR US ALL
Also for reference: “this shortcut can be toggled using the switch labeled 'Smart Punctuation' in General > Keyboard settings.”
Always write what you want, however you want to write it. If some reader somewhere decides to be judgemental because of — you know — an em dash or an X/Y comparison or a complement or some other thing that they think pins you down as a bot, then that's entirely their own problem. Not yours.
They observe the reality that they deserve.
Ending with a question mark doesn’t make your sentence a question. You didn’t ask anything. You stated an opinion and followed it with a question mark.
If you intended to ask a question about the text being written by AI, no, you don’t have to ask that.
I am so damn tired of the “that didn’t happen” and the “AI did that” people when there is zero evidence of either being true.
These people are the most exhausting people I have ever encountered in my entire life.
As someone reskilling into being a writer, I really do not think that is "good writing".
But it didn’t read as LLM-generated, IMO.
I wonder if there's something like this going on here. I never thought it was LLM on first read, and I still don't, but when you take snippets and point at them, it makes me think maybe they are.
Thank you btw
I have a much simpler database: https://play.clickhouse.com/play?user=play#U0VMRUNUIHRpbWUsI...
I did something similar. I built a tool[1] to import the Project Arctic Shift dumps[2] of reddit into SQLite. It was mostly an exercise to experiment with Rust and SQLite (HN's two favorite topics). If you don't build an FTS5 index and import without WAL, importing every reddit comment and submission takes a bit over 24 hours and produces a ~10TB DB.
SQLite offers a lot of cool JSON features that would let you store the raw JSON and operate on that, but I eschewed them in favor of parsing only once at load time. That also lets me normalize the data a bit.
I find that building the DB is pretty "fast", but queries run much faster if I immediately vacuum the DB after building it. The vacuum operation is actually slower than the original import, taking a few days to finish.
[1] https://github.com/Paul-E/Pushshift-Importer
[2] https://github.com/ArthurHeitmann/arctic_shift/blob/master/d...
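The importer itself is Rust, but here is a rough sketch of the same bulk-load pattern in Node with better-sqlite3 (the table schema is hypothetical): journaling off during the load, no FTS5 index yet, and VACUUM at the end.

```ts
import Database from 'better-sqlite3';

const db = new Database('reddit.db');

// Trade durability for import speed: no WAL/journal, no fsyncs during the bulk load.
db.pragma('journal_mode = OFF');
db.pragma('synchronous = OFF');

// Hypothetical normalized table; no FTS5 index is built at this stage.
db.exec(`CREATE TABLE IF NOT EXISTS comments (
  id INTEGER PRIMARY KEY,   -- decoded reddit id
  author TEXT,
  created_utc INTEGER,
  body TEXT
)`);

const insert = db.prepare(
  'INSERT OR IGNORE INTO comments (id, author, created_utc, body) VALUES (?, ?, ?, ?)'
);
// Wrap batches in a transaction so each row is not its own implicit transaction.
const insertBatch = db.transaction((rows: [number, string, number, string][]) => {
  for (const row of rows) insert.run(...row);
});

// ... stream the Arctic Shift dumps and call insertBatch(rows) per chunk ...

// Rewrites the whole file; on a ~10TB DB this is slower than the import itself,
// but the commenter reports much faster queries afterwards.
db.exec('VACUUM');
```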
> The VACUUM command may change the ROWIDs of entries in any tables that do not have an explicit INTEGER PRIMARY KEY.
I take that to mean SQLite does something to organize rows by rowid, and that this reorganization is doing most of the work.
Reddit post/comment IDs are 1:1 with integers, though expressed in a different base that is more friendly to URLs. I map decoded post/comment IDs to INTEGER PRIMARY KEYs on their respective tables. I suspect the vacuum operation sorts the tables by their reddit post ID, and something about this sorting improves table scans, which in turn helps build indices quickly after standing up the DB.
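For illustration, assuming the URL-friendly base is base36 (as reddit uses), that mapping is a one-liner in either direction:

```ts
// Decode a reddit-style base36 id (e.g. "1hxyz9") to the integer used as
// the INTEGER PRIMARY KEY, and encode it back for URLs.
function decodeId(id: string): number {
  return parseInt(id, 36);
}

function encodeId(n: number): string {
  return n.toString(36);
}

// decodeId("zzzz") === 1679615; encodeId(1679615) === "zzzz"
```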
I watched it in the browser network panel and saw it fetch:
https://hackerbook.dosaygo.com/static-shards/shard_1636.sqlite.gz
https://hackerbook.dosaygo.com/static-shards/shard_1635.sqlite.gz
https://hackerbook.dosaygo.com/static-shards/shard_1634.sqlite.gz
As I paginated to previous days.
It's reminiscent of that brilliant sql.js VFS trick from a few years ago: https://github.com/phiresky/sql.js-httpvfs - only that one used HTTP range headers, while this one uses sharded files instead.
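A hedged sketch of the sharded-file approach using sql.js (the site itself may use a different SQLite WASM build; the shard URL matches the ones above, and the table name is a guess):

```ts
import initSqlJs from 'sql.js';

// Fetch one gzipped shard and open it fully in memory.
// Assumes the .gz is served as-is, i.e. without Content-Encoding: gzip.
async function openShard(url: string) {
  const res = await fetch(url);
  if (!res.ok || !res.body) throw new Error(`fetch failed: ${res.status}`);
  const gunzipped = res.body.pipeThrough(new DecompressionStream('gzip'));
  const bytes = new Uint8Array(await new Response(gunzipped).arrayBuffer());
  const SQL = await initSqlJs({ locateFile: f => `https://sql.js.org/dist/${f}` });
  return new SQL.Database(bytes);
}

// const db = await openShard('https://hackerbook.dosaygo.com/static-shards/shard_1636.sqlite.gz');
// db.exec('SELECT count(*) FROM items');  // "items" is a hypothetical table name
```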
The interactive SQL query interface at https://hackerbook.dosaygo.com/?view=query asks you to select which shards to run the query against, there are 1636 total.
https://github.com/simonw/sqlite-s3vfs
This comment was helpful in figuring out how to get a full Git clone out of the heritage archive: https://news.ycombinator.com/item?id=37516523#37517378
https://simonwillison.net/2021/May/2/hosting-sqlite-database...
From what I see in GitHub in your copy of the repo, it looks like you don’t have the tags.
Do you have the tags locally?
If you don’t have the tags, I can push a copy of the repo to GitHub too and you can get the tags from my copy.
git push --tags origin
Sure, the LLM fills in all the boilerplate and makes an easy-to-use, reproducible tool with loads of documentation, and credit for that. But is it not more accurate to say that Simon is absurdly efficient, LLM or sans LLM? :)
https://just.billywhizz.io/sqlite/demo/#https://raw.githubus...
I’ve used it in production to self-host Australia-only maps on S3. We generated a single ~900 MB PMTiles file from OpenStreetMap (Australia only, up to Z14) and uploaded it to S3. Clients then fetch just the required byte ranges for each vector tile via HTTP range requests.
It’s fast, scales well, and bandwidth costs are negligible because clients only download the exact data they need.
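The underlying trick is just HTTP range requests against a static file; a minimal sketch (the URL is hypothetical):

```ts
// Fetch only the bytes [start, end] of a large static file on S3 or any
// server that supports Range; 206 means the server honored the range.
async function fetchRange(url: string, start: number, end: number): Promise<Uint8Array> {
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (res.status !== 206) throw new Error(`expected 206 Partial Content, got ${res.status}`);
  return new Uint8Array(await res.arrayBuffer());
}

// e.g. read the first 16 KiB of a PMTiles archive (header plus root directory).
// const head = await fetchRange('https://example-bucket.s3.amazonaws.com/australia.pmtiles', 0, 16383);
```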
I want something like a db with indexes
LanceDB has a similar mechanism for operating on remote vector embeddings/text search.
It’s a fun time to be a dev in this space!
But, when using this on the frontend, are portions of files fetched specifically with HTTP range requests? I tried to search for it but couldn't find details.
I believe that there are also indexing opportunities (not necessarily via e.g. hive partitioning) but frankly I am kinda out of my depth on it.
Disclaimer: I work at GitHub but am unaffiliated with that blog.
There is also a file format to optimize this https://cogeo.org/
This is my VFS: https://github.com/ncruces/go-sqlite3/blob/main/vfs/readervf...
And using it with range requests: https://pkg.go.dev/github.com/ncruces/go-sqlite3/vfs/readerv...
And having it work with a Zstandard compressed SQLite database, is one library away: https://pkg.go.dev/github.com/SaveTheRbtz/zstd-seekable-form...
But, also, SQLite caches data; you can simply increase the page cache.
But you can use it (e.g.) in a small VPS to access a multi-TB database directly from S3.
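On the page-cache point, a small sketch of the relevant pragmas, assuming Node with better-sqlite3 (the VFS linked above is Go, but the pragmas are the same from any binding):

```ts
import Database from 'better-sqlite3';

const db = new Database('hn.db', { readonly: true });

// Negative cache_size is in KiB: this asks SQLite for roughly 256 MiB of page cache,
// so repeated reads of hot pages never go back to the (possibly remote) file.
db.pragma('cache_size = -262144');

// Optionally let SQLite mmap up to 1 GiB of the file for read-heavy workloads.
db.pragma('mmap_size = 1073741824');
```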
Where did you get the 22GB figure from? On the site it says:
> 46,399,072 items, 1,637 shards, 8.5GB, spanning Oct 9, 2006 to Dec 28, 2025
The HN post title (:
How was the entirety of HN stored in a single SQLite database? In other words, how was the data acquired? And how does the page load instantly if there's 22GB of data having to be downloaded to the browser?
- 1. download_hn.sh - bash script that queries BigQuery and saves the data to *.json.gz
- 2. etl-hn.js - does the sharding and ID -> shard map, plus the user stats shards.
- 3. Then either npx serve docs or upload to CloudFlare Pages.
The ./toool/s/predeploy-checks.sh script basically runs the entire pipeline. You can do it unattended with AUTO_RUN=true
The sequence of shards you saw when you paginated to days is facilitated by the static manifest, which maps HN item ID ranges to shards. Since IDs are increasing and a pretty good proxy for time (an "HN clock"), we can also map the shards that we divided by ID to the time spans their items cover. An in-memory table sorted by time is created from the manifest on load, so we can easily look up which shard we need when you pick a day.
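Roughly, that lookup could look like the sketch below; the field names are assumptions, not the actual static-manifest schema.

```ts
interface ShardSpan {
  shard: number;      // e.g. 1636 -> shard_1636.sqlite.gz
  minId: number;      // lowest HN item id in the shard
  maxId: number;      // highest HN item id in the shard
  startTime: number;  // effective span start (unix seconds)
  endTime: number;    // effective span end (unix seconds)
}

// Manifest sorted by startTime at load; binary search for the shard covering time t.
function shardForTime(manifest: ShardSpan[], t: number): ShardSpan | undefined {
  let lo = 0, hi = manifest.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const s = manifest[mid];
    if (t < s.startTime) hi = mid - 1;
    else if (t > s.endTime) lo = mid + 1;
    else return s;
  }
  return undefined;
}
```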
Funnily enough, this system was thrown off early on by a handful of ID/timestamp outliers in the data: items with weird future timestamps (offset by a couple of years), or null timestamps. To cleanse our pure data of this noise and restore proper adjacent-in-time shard cuts, we just did a 1st/99th-percentile grouping and discarded the outliers, leaving shards with sensible 'effective' time spans.
Sometimes we end up fetching two shards when you enter a new day, because some items' comments exist "cross-shard". We needed another index for that, and it lives in cross-shard-index.bin, which is just a list of 4-byte item IDs that have children in more than one shard (2 bytes), which occurs when people have the self-indulgence to respond to comments a few days after a post has died down ;)
Thankfully HN imposes a 2-week horizon for replies, so there aren't that many cross-shard comments (those living outside the 2-3 day span of most recent shards). But I think there are still around 1M or so, IIRC.
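As a hedged guess at what reading that index looks like, assuming 6-byte little-endian records of a 4-byte item ID followed by a 2-byte value (the real layout may differ):

```ts
// Parse cross-shard-index.bin into a map of item id -> 2-byte value.
// Record layout here is an assumption, not the project's documented format.
function parseCrossShardIndex(buf: ArrayBuffer): Map<number, number> {
  const view = new DataView(buf);
  const index = new Map<number, number>();
  for (let off = 0; off + 6 <= buf.byteLength; off += 6) {
    const itemId = view.getUint32(off, true);
    const extra = view.getUint16(off + 4, true);
    index.set(itemId, extra);
  }
  return index;
}
```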
Best locally of course to avoid “I burned a lake for this?” guilt.
For example, one of the most useful applications of video over text is appliance or automotive repair, but the ideal format would be an article interspersed with short video sections, not a video with a talking head and some ~static shaky cam taking up most of the time as the individual drones on about mostly unrelated topics or unimportant details. Yet you can’t skip past it in case there is something actually pertinent covered in that time.
I've produced a few videos, and I was shocked at how difficult it was to be clear. I have the same problem with writing, but at least it's restricted in a way video making isn't. There are so many ways to make a video about something, and most of them are wrong!
22 GB is uncompressed; compressed, the entire thing is about 9 GB.
Nonetheless, random access history is cool.
Guess it's common knowledge that SharedArrayBuffer (SQLite wasm) does not work in FF due to cross-origin attack mitigations (I just found out ;).
Once the initial chunk of data loads, the rest loads almost instantly on Chrome. Can you please fix the GitHub link (currently 404)? I would like to peek at the code. Thank you!
But when I go back to the 26th, none of the shards will load; they error out.
Using Windows 11, FF 146.0.1
Since you tested it, it seems it's just a me problem. And thanks for fixing the GitHub link.
It would be nice for the thread pages to show a comment count.
Such a DB might be entertaining to play with, and the threadedness of comments would be useful for beginners to practise efficient recursive queries (more so than the StackExchange dumps, for instance).