Data-at-Rest Encryption in DuckDB
DB encryption is useful if you have multiple things that need separate ACLs and encryption keys, but if it's one app and one DB, there is no need for it.
> This allows for some interesting new deployment models for DuckDB, for example, we could now put an encrypted DuckDB database file on a Content Delivery Network (CDN). A fleet of DuckDB instances could attach to this file read-only using the decryption key. This elegantly allows efficient distribution of private background data in a similar way like encrypted Parquet files, but of course with many more features like multi-table storage. When using DuckDB with encrypted storage, we can also simplify threat modeling when – for example – using DuckDB on cloud providers. While in the past access to DuckDB storage would have been enough to leak data, we can now relax paranoia regarding storage a little, especially since temporary files and WAL are also encrypted.
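For concreteness, here's a minimal sketch of that deployment model using the DuckDB Python API. The file name, key, and table are invented; the `ENCRYPTION_KEY` and `READ_ONLY` `ATTACH` options are the ones DuckDB's encryption support documents, but treat this as an illustration rather than a recipe.

```python
# Minimal sketch of the "encrypted file on shared storage" model.
# File name, key, and schema are made up for illustration.
import duckdb

con = duckdb.connect()  # in-memory "host" connection

# Writer side: create and populate an encrypted database file.
con.execute("ATTACH 'background.duckdb' AS enc (ENCRYPTION_KEY 'secret-key')")
con.execute("CREATE TABLE enc.facts AS SELECT 42 AS answer")
con.execute("DETACH enc")

# Reader side: each instance in the fleet attaches the same file
# read-only with the decryption key. In the blog's CDN scenario the
# path would be an https:// URL served through the httpfs extension.
con.execute(
    "ATTACH 'background.duckdb' AS shared "
    "(READ_ONLY, ENCRYPTION_KEY 'secret-key')"
)
print(con.execute("SELECT answer FROM shared.facts").fetchall())
```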
Comparing it to a naive approach (encrypting an entire database file in a single shot and loading it all into memory at once) is always going to make competent work seem "amazing".
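For contrast, the naive scheme looks roughly like this, sketched with the third-party `cryptography` package (the names and the 12-byte nonce layout are illustrative). Because the whole file is a single AES-GCM message, a reader must download and decrypt all of it into memory before touching a single row; page-level encryption avoids exactly that.

```python
# The naive whole-file scheme: one AES-GCM message per database file.
# No random access, no page-level reads -- decryption is all-or-nothing.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(path: str, key: bytes) -> bytes:
    nonce = os.urandom(12)            # 96-bit nonce, standard for AES-GCM
    with open(path, "rb") as f:
        plaintext = f.read()          # entire database file in memory
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_blob(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # all or nothing

key = AESGCM.generate_key(bit_length=256)
blob = encrypt_file("mydb.duckdb", key)
db_bytes = decrypt_blob(blob, key)    # full plaintext resident in RAM
```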
I say this not to shit on DuckDB (I see no reason to shit on them); rather, I think it's important that we as professionals have realistic standards that we expect _ourselves_ to hit. Work we view as "amazing" is work we allow ourselves not to be able to replicate. But this is not in that category, and therefore, you should hold yourself to the same standard.
I run a small company and needed to budget a solid chunk of time next year to dig into improving this component of our system. I respect your perspective on holding high standards, but I do think it's worth getting excited about and celebrating reliable, performant software that demonstrates consistent competence.
i.e., running it like a normal database, and getting to take advantage of all of its goodies.
Where you store the .duckdb file will make a big difference in performance (e.g. S3 vs. Elastic File System).
But I'd take a good look at DuckLake as a better multiplayer option. If you store `.parquet` files in blob storage, it will be slower than `.duckdb` on EFS, but if you have largish data, EFS gets expensive.
We[2] use DuckLake in our product and we've found a few ways to mitigate the performance hit. For example, we write all data into DuckLake in blob storage, then create analytics tables and store them on faster storage (e.g. GCP Filestore). You can have multiple storage methods in the same DuckLake catalog, so this works nicely (rough sketch below).
0 - https://www.definite.app/blog/duck-takes-flight
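A rough sketch of that split, again via the Python API. The bucket, mount point, and tables are invented; the `ducklake:` `ATTACH` form with a `DATA_PATH` option follows the DuckLake extension's documented usage as I understand it, so check the current docs before relying on it. The comment notes a single DuckLake catalog can mix storage locations; this sketch approximates the split with a separate attached database on the fast mount.

```python
# Rough sketch: cold data in a DuckLake catalog backed by blob storage,
# hot analytics tables in a plain DuckDB file on a fast filesystem.
import duckdb

con = duckdb.connect()
con.execute("INSTALL ducklake")
con.execute("LOAD ducklake")

# Lake side: data files land in (cheap, slower) blob storage.
con.execute(
    "ATTACH 'ducklake:lake_metadata.ducklake' AS lake "
    "(DATA_PATH 'gs://my-bucket/lake/')"
)
con.execute("CREATE TABLE lake.events AS SELECT * FROM 'events.parquet'")

# Hot side: derived tables on a faster mount (e.g. a Filestore volume),
# rebuilt from the lake on whatever cadence the product needs.
con.execute("ATTACH '/mnt/filestore/analytics.duckdb' AS hot")
con.execute(
    "CREATE OR REPLACE TABLE hot.daily_counts AS "
    "SELECT date_trunc('day', ts) AS day, count(*) AS n "
    "FROM lake.events GROUP BY 1"
)
```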
SqliteMultipleCiphers has been around for ages and is free: https://utelle.github.io/SQLite3MultipleCiphers/
And Turso Database supports encryption out of the box: https://docs.turso.tech/tursodb/encryption