Prima Veritas
=== Prima Veritas OSS — Hash Check (iris) ===
normalized → MATCH
  Expected: EF28EA082C882A3F9379A57E05C929D76E98899E151A6746B07D8D899644372F
  Actual:   EF28EA082C882A3F9379A57E05C929D76E98899E151A6746B07D8D899644372F
kmeans → MATCH
  Expected: DA96D0505BCB1A5A2B826CEB1AA7C34073CB88CB29AE1236006FA4B0F0D74C46
  Actual:   DA96D0505BCB1A5A2B826CEB1AA7C34073CB88CB29AE1236006FA4B0F0D74C46
Hashcheck PASSED — outputs match golden hashes.
---------
Next step is probably benchmarking this against sklearn: accuracy comparison, plus the performance hit from all the rounding operations. Anyone here working in maritime auditing, medical data, or other regulated stuff - would you actually use something like this? Trying to figure out if crypto-verifiable analytics is solving a real problem or just a cool technical exercise.
Re: whether this is useful beyond being a cool exercise:
sklearn: Yeah, sklearn is obviously faster and great for day-to-day work. The reason this project doesn’t use it is that even with fixed seeds, sklearn can still produce different results across machines due to BLAS differences, CPU instruction paths, etc. Here the goal isn’t speed, it’s to make sure the same dataset always produces the exact same artifacts everywhere, down to the byte.
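The "rounding operations" mentioned upthread are the usual way to get there. A hypothetical sketch of that idea (the project's actual quantization scheme may differ): quantize every float to a fixed decimal precision before serializing, so tiny cross-platform drift collapses to identical bytes.

```javascript
// Hypothetical sketch: quantize floats before serializing so that
// sub-ulp cross-platform drift (BLAS kernels, FMA, instruction
// ordering) disappears, and equal logical results serialize to
// byte-identical strings. Number.prototype.toFixed is spec-defined,
// so the decimal string it produces is the same on every engine.
function quantize(x, decimals = 9) {
  return Number(x.toFixed(decimals));
}

function canonicalize(centroids, decimals = 9) {
  // Stable array order + quantized values → deterministic JSON,
  // which is what ultimately gets hashed.
  return JSON.stringify(centroids.map(c => c.map(v => quantize(v, decimals))));
}
```

One caveat worth noting: this only absorbs drift smaller than half of the last retained digit, so values that land exactly on a rounding boundary can still flip; in practice you pick a precision coarse enough that the algorithm's drift never reaches it.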
Where that matters: A few examples from my world:
Maritime/industrial auditing: a lot of equipment logs and commissioning data get “massaged” early on. If later analysis depends on that data, you need a way to prove the ingest + transformations weren’t affected by the environment they ran on.
Medical/regulatory work: clinical models frequently get blocked because the same run on two different machines gives slightly different outputs. Determinism makes it possible to freeze analytics for compliance.
Any situation where you have to defend an analytical result (forensics, safety investigations, audits, etc). People assume code is reproducible, but floating-point libraries, OS updates, and dependency drift break that all the time.
So yeah, sklearn is better if you just want clustering. This is more like a “reference implementation” you can point to when you need evidence that the result wasn’t influenced by hardware or environment.
Happy to answer questions if anyone’s curious.
Ran on:
• Laptop A (Node 22)
• Laptop B (Node 18)
• Mobile SSH terminal → Docker
All producing bit-for-bit identical outputs.
Feedback or reproducibility tests welcome.