Processing Strings 109x Faster Than Nvidia on H100
Posted 4 months ago · Active 3 months ago
Source: ashvardanian.com (story)
Key topics
GPU Optimization
String Processing
Performance Engineering
The author presents StringZilla, a library that processes strings 109x faster than Nvidia on H100 GPUs, sparking discussion on its potential applications and optimizations.
Snapshot generated from the HN discussion
Key moments
- Story posted: Sep 19, 2025 at 2:24 PM EDT (4 months ago)
- First comment: Sep 23, 2025 at 9:11 AM EDT (4 days after posting)
- Peak activity: 23 comments in the 84-96h window, the hottest stretch of the conversation
- Latest activity: Sep 24, 2025 at 7:53 AM EDT (3 months ago)
ID: 45304807 · Type: story · Last synced: 11/20/2025, 4:38:28 PM
If you want to run the benchmarks yourself, you can. First, get rebar [1]. Then run it from the root of the `memchr` repository [2].

See also: https://github.com/BurntSushi/memchr/discussions/159

[1]: https://github.com/BurntSushi/rebar
[2]: https://github.com/BurntSushi/memchr
There is definitely no AVX-512 support on my CPU. Which is also true for most of my users. I don't bother with AVX-512 for that reason.
Another substantial population of my users are on aarch64, which memchr has optimizations for. I don't think StringZilla does.
This is exactly my issue with targeting AVX-512. It isn't just absent on "older AVX2-only CPUs." It's also absent on many "newer AVX2-only CPUs." For example, the i9-14900K. I don't think any of the other newer Intel CPUs have AVX-512 either. And historically, whether an x86-64 CPU supported AVX-512 at all was hit or miss.
AVX-512 has been around for a very long time now, and it has just never been consistently available.
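The availability problem described above is why portable libraries dispatch on CPU features at runtime rather than assuming AVX-512 at compile time. A minimal sketch of that pattern (my illustration, not memchr's or StringZilla's actual internals):

```rust
// Pick the widest SIMD path the running CPU actually reports.
// Feature names and tiers here are illustrative.
fn pick_simd_path() -> &'static str {
    #[cfg(target_arch = "x86_64")]
    {
        if std::is_x86_feature_detected!("avx512bw") {
            return "avx512"; // e.g. Zen 4/5, Ice Lake+ server parts
        }
        if std::is_x86_feature_detected!("avx2") {
            return "avx2"; // the common baseline; an i9-14900K lands here
        }
    }
    #[cfg(target_arch = "aarch64")]
    {
        if std::arch::is_aarch64_feature_detected!("neon") {
            return "neon";
        }
    }
    "scalar" // portable fallback
}

fn main() {
    println!("dispatching to: {}", pick_simd_path());
}
```

The cost of the check is paid once (or cached), so the dispatch overhead is negligible next to the work each kernel does.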
But it's fair to say that I'm mostly focusing on datacenter/supercomputing hardware, on both the x86 and Arm side.
But realistically, is there any real-world situation where one would use this? What niche, industry, or need would benefit from this, where the dependency and setup costs are worth it? Strings just seem like a long-solved non-issue.
Namely, if you look at DeepMind's AlphaFold 1 and 2, the bulk of compute time is spent outside of PyTorch, running sequence alignment. Historically, with BLAST. More recently, in other labs, with some of my code :)
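For context on that workload: sequence alignment is a dynamic-programming kernel, and the GigaCUPS figures quoted elsewhere in this thread count DP cell updates per second. A minimal scalar Needleman-Wunsch sketch (my illustration of the algorithm family, not StringZilla's kernel; scoring parameters are arbitrary):

```rust
// Global alignment score via Needleman-Wunsch.
// Each (i, j) cell update is one "CUP"; SIMD/GPU kernels race to do
// billions of these per second.
fn needleman_wunsch(a: &[u8], b: &[u8], matched: i32, mismatch: i32, gap: i32) -> i32 {
    let n = b.len();
    // Only the previous DP row is needed, so keep two rolling rows.
    let mut prev: Vec<i32> = (0..=n as i32).map(|j| j * gap).collect();
    let mut curr = vec![0i32; n + 1];
    for (i, &ca) in a.iter().enumerate() {
        curr[0] = (i as i32 + 1) * gap;
        for (j, &cb) in b.iter().enumerate() {
            let sub = prev[j] + if ca == cb { matched } else { mismatch };
            curr[j + 1] = sub.max(prev[j + 1] + gap).max(curr[j] + gap);
        }
        std::mem::swap(&mut prev, &mut curr);
    }
    prev[n]
}

fn main() {
    // Identical 7-byte sequences: seven matches, score 7.
    println!("{}", needleman_wunsch(b"GATTACA", b"GATTACA", 1, -1, -1));
}
```

The quadratic cell count is exactly why alignment dominates pipelines like AlphaFold's input stage and why it rewards heavy SIMD/GPU optimization.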
What excites me in this release is the quality of the new hash functions. I’ve built many over the years but never felt they were worth sharing until now. Having two included here was a personal milestone for me, since I’ve always admired how good xxHash and aHash are and wanted to build something of similar caliber.
The new hashes should be directly useful in databases, for example improving JOIN performance. And the fingerprinting interfaces based on 52-bit modulo math with double-precision FMA units open up another path. They aren’t easy to use and won’t apply everywhere, but on petabyte-scale retrieval tasks they can make a real impact.
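To make the "52-bit modulo math with double-precision FMA units" idea concrete: an f64 mantissa represents integers exactly up to 2^53, so a polynomial fingerprint whose intermediate products stay below that bound can run entirely on FMA hardware without rounding error. A toy sketch under my own assumptions (base 257 and the Mersenne prime 2^31 - 1 are my choices for illustration, not StringZilla's actual parameters):

```rust
// Exact modular polynomial hashing in f64.
// With h < 2^31 and base 257, h * 257 + b < 2^40, well inside the
// 52-bit mantissa, so every mul_add (an FMA) is exact.
const P: f64 = 2_147_483_647.0; // 2^31 - 1, a Mersenne prime

fn fingerprint(data: &[u8]) -> u64 {
    let mut h = 0.0f64;
    for &b in data {
        h = h.mul_add(257.0, b as f64) % P; // fused multiply-add, then reduce
    }
    h as u64
}

fn main() {
    println!("{}", fingerprint(b"hello world"));
}
```

The appeal is hardware economics: FP64 FMA throughput is enormous on datacenter parts, so fingerprints computed this way can ride units that integer hash loops never touch.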
A suggestion: in the comparison table under the "AES and Port-Parallelism Recipe," it would be great to include "streaming support" and "stable output" (across OS/arch) as columns.
Also, something to beware of: some hash libraries claim to support streaming via the Hasher interface but actually return different results in streaming and one-shot mode (and have different performance profiles). I'm on mobile so I can't check at the moment, but I'm about 80% sure gxhash has at least one of these problems, which prevented me from using it before.
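The property being described is easy to test: a well-behaved hasher must produce the same digest whether the input arrives in one `write` call or split across many. A sketch of that sanity check using std's `DefaultHasher` (my code; it says nothing about gxhash specifically):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// One-shot: all bytes in a single write.
fn one_shot(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(data);
    h.finish()
}

// Streaming: the same bytes fed in small chunks.
fn streamed(data: &[u8], chunk: usize) -> u64 {
    let mut h = DefaultHasher::new();
    for piece in data.chunks(chunk) {
        h.write(piece);
    }
    h.finish()
}

fn main() {
    let data = b"streaming consistency check";
    // std's SipHash-based DefaultHasher buffers internally, so this holds;
    // the comment above claims some third-party hashers fail it.
    assert_eq!(one_shot(data), streamed(data, 5));
    println!("streaming matches one-shot");
}
```

Running this check across several chunk sizes in a library's test suite catches the divergence before it silently corrupts, say, a persisted hash index.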
One micro-question on the editing: why are numbers written with an apostrophe (') as the thousands separator [1]? I know it is used for this purpose in Switzerland and that many programming languages support it. It just seems very strange for English text, where typically a comma (,) would be used, of course.
[1]: https://en.wikipedia.org/wiki/Decimal_separator#Digit_groupi...
[2]: https://en.wikipedia.org/wiki/Apostrophe#Miscellaneous_uses_...
A digit separator for increased readability of long numbers was first introduced by Ada (1979-06), which used the underscore. This usage matched the original reason for introducing the underscore into the character set, which had been done by PL/I (1964-12) to increase the readability of long identifiers while avoiding the ambiguity caused by using the hyphen for that purpose, as COBOL had previously. (Many Lisps retained the COBOL usage of the hyphen because they, like COBOL, do not normally write arithmetic expressions with infix operators.)
Most programming languages that have added a digit separator have followed Ada, by using the underscore.
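Rust is one such language following the Ada convention; the separator is purely lexical and has no effect on the value:

```rust
fn main() {
    // Underscore as digit separator, as in Ada; stripped by the lexer.
    let cells_per_second: u64 = 900_000_000_000; // 900 GigaCUPS, written readably
    assert_eq!(cells_per_second, 900000000000);
    println!("{cells_per_second}");
}
```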
35 years later, C++ should have done the same, and I hate whoever on the committee thought otherwise, since the choice causes completely unnecessary compatibility problems, e.g. when copying a big initialized array between program sources written in different languages.
There was a flawed argument against the underscore: that it could have caused parsing problems in some weird legacy programs. But those were no more difficult to solve than avoiding the parsing errors caused by the legacy use of the apostrophe in character constants (i.e., forbidding the digit separator as the first character in a number is enough to ensure unambiguous parsing).
First, it turned out that StringZilla scales further, to over 900 GigaCUPS on roughly 1000-byte inputs on an Nvidia H100. Moreover, since the algorithm is not memory bound, the same performance should be accessible on lower-end hardware; no HBM is needed.
Second, I’ve finally transitioned to Xeon 6 Granite Rapids nodes with 192 physical cores and 384 threads. On those, the Ice Lake+ kernels currently yield over 3 TeraCUPS, 3x the current Hopper kernels.
The most recent numbers are already in the repo: https://github.com/ashvardanian/StringWa.rs