Image Dithering: Eleven Algorithms and Source Code (2012)
Key topics: Image Processing, Dithering Algorithms, Graphics
The post discusses various image dithering algorithms and their source code, sparking a thoughtful discussion among commenters about the techniques, applications, and nuances of dithering in different contexts.
Snapshot generated from the HN discussion
Discussion Activity
First comment: 12h after posting
Peak period: 15 comments in the 72-84h window
Average per period: 4.4
Based on 31 loaded comments
Key moments
- Story posted: Oct 24, 2025 at 3:38 PM EDT
- First comment: Oct 25, 2025 at 3:09 AM EDT (12h after posting)
- Peak activity: 15 comments in the 72-84h window
- Latest activity: Oct 29, 2025 at 5:37 PM EDT
ID: 45698323 · Type: story · Last synced: 11/20/2025, 5:33:13 PM
Christoph Peters’s free blue noise textures are the most commonly used, for people who can’t be bothered running void and cluster themselves: https://momentsingraphics.de/BlueNoise.html
What's important to appreciate is that dithering digital audio should only ever be performed when preparing a final export for distribution, and even then, only for bit-perfect copies. You shouldn't dither when the next step is a lossy codec. Encoders for AAC and Opus accept high bit depth originals, because their encoded files don't have a native "bit depth". They generate and quantise (compress) MDCT coefficients. When these encoded files are decoded to 16-bit PCM during playback, the codec injects "masking noise" which serves a similar function to dither.
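In the final-export case the comment describes, dithering before truncation usually means adding triangular-PDF (TPDF) noise of about one LSB before rounding. A minimal numpy sketch of that idea (the function name and scaling are illustrative, not from any particular library):

```python
import numpy as np

def dither_to_16bit(samples, rng=None):
    """Quantize float samples in [-1, 1] to 16-bit PCM with TPDF dither.

    The sum of two uniform variables gives a triangular PDF spanning
    roughly +/-1 LSB; adding it before rounding decorrelates the
    quantization error from the signal.
    """
    rng = rng or np.random.default_rng()
    scale = 32767.0  # one LSB at 16 bits corresponds to 1/scale in float
    tpdf = rng.random(samples.shape) - rng.random(samples.shape)
    quantized = np.round(samples * scale + tpdf)
    return np.clip(quantized, -32768, 32767).astype(np.int16)
```

The dither perturbs each sample by at most one quantization step, so the output never strays more than a couple of LSBs from plain rounding.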
But there's another form of audio dithering that uses error diffusion (like TFA describes) rather than adding noise. If you use a single-bit ADC but sample much faster than the Nyquist rate and keep track of your errors with error diffusion, you preserve all the audio information in the original with a similar number of bits as, e.g., a 16-bit ADC sampled at Nyquist, but with the additional benefit that your sampling noise is pushed above the audible range, where it can be filtered out with an analog lowpass filter.
This is one-dimensional dithering but in the audio world it's called Sigma-Delta modulation or 1-bit ADC.
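The error-feedback loop described above can be sketched in a few lines; this is a first-order modulator only, whereas real converters use higher-order loops and heavily oversampled input:

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma modulation: encode x (values in [-1, 1])
    as a +/-1 bitstream via error feedback (noise shaping)."""
    out = np.empty(len(x))
    integrator = 0.0
    feedback = 0.0  # previous 1-bit output
    for i, sample in enumerate(x):
        integrator += sample - feedback  # accumulate the running error
        feedback = 1.0 if integrator >= 0 else -1.0
        out[i] = feedback
    return out
```

Averaging (lowpassing) the bitstream recovers the input level: a constant input of 0.5 produces +1 three quarters of the time, so the mean of the output is 0.5.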
A few years ago I got annoyed with this and made a little web-component that attempts to make really sharp 1-bit dithered images by rendering the image on the client to match whatever display device the user has.
https://sheep.horse/2023/1/improved_web_component_for_pixel-...
Demo here: https://scrawl-v8.rikweb.org.uk/demo/filters-027.html
I'm only dropping it in here because the marketing site for the plugin demonstrates a lot of really interesting, full-color, wide spectrum of use-cases for different types of dithering beyond what we normally assume is dithering.
[0] https://www.doronsupply.com/product/dithertone-pro
Are there existing techniques that do this sort of thing? I'm imagining something like doing a median filter on the image, run clustering on the pixels in the colorspace, and then shift/smudge clusters towards "convenient" points in the colorspace, e.g. the N points of the quantized palette and the N^2 points halfway between each pair. Then a partial-error-diffusion alg like atkinson smooths out the final result.
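The partial error diffusion named at the end, Atkinson's algorithm, propagates only 6/8 of the quantization error to six neighbors and deliberately drops the rest. A minimal 1-bit sketch, assuming grayscale input in [0, 1]:

```python
import numpy as np

def atkinson_dither(gray):
    """Atkinson error diffusion to 1 bit (only 6/8 of the error diffused)."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Neighbors receiving 1/8 of the error each (6/8 diffused in total).
    taps = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]
    for y in range(h):
        for x in range(w):
            out[y, x] = img[y, x] >= 0.5
            err = (img[y, x] - out[y, x]) / 8.0
            for dy, dx in taps:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err
            # note: 2/8 of the error is intentionally discarded
    return out
```

Dropping part of the error is what gives Atkinson output its characteristic lighter highlights and darker shadows compared to full diffusion like Floyd-Steinberg.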
The only catch is that generating blue noise is a roughly O(n^2) algorithm. It's not feasible to generate on the fly, so in practice you just pregenerate a bunch of blue-noise textures and tile them.
If you google 'pregenerated blue noise' you find plenty of them: https://momentsingraphics.de/BlueNoise.html
Why can't you create blue noise from walking a Hilbert curve and placing points randomly with a minimum and maximum threshold?
the naive algorithm is O(n^2) where n is the number of pixels in an image. tiling and sampling pregenerated noise is O(n), so that's what most people use. the noise can be generated on the fly using an FFT-based algorithm, though it still needs to be applied iteratively, so you'd typically end up with O(k n log n) where 10 <= k <= 100.
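The O(n) tile-and-sample approach is just a per-pixel threshold lookup into the pregenerated texture. A sketch, assuming a grayscale image and noise texture both in [0, 1] (the random offset is the trick mentioned further down for hiding the tiling):

```python
import numpy as np

def tile_dither(gray, texture, rng=None):
    """Threshold a grayscale image against a tiled noise texture.

    The (pregenerated) texture is tiled across the image with a random
    offset, and a pixel turns on when it exceeds the local threshold.
    """
    rng = rng or np.random.default_rng()
    h, w = gray.shape
    th, tw = texture.shape
    oy, ox = rng.integers(th), rng.integers(tw)  # random tile offset
    ys = (np.arange(h)[:, None] + oy) % th
    xs = (np.arange(w)[None, :] + ox) % tw
    return (gray > texture[ys, xs]).astype(np.uint8)
```

This works identically for Bayer matrices and blue-noise textures; only the contents of `texture` change.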
this has been neat stuff to read up on. my favorite nugget of learning: blue noise is white noise that's been run through a high pass filter a few times. the result of a high pass filter is the same as subtracting the result of a low pass filter from the original signal. blurring is a low pass filter for images. since blue noise is high frequency information, blurring a noised-up image effectively removes the blue noise, so the result looks like a blurred version of the original even though it contains only a fraction of the original's information.
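That recipe can be demonstrated directly: high-pass = original minus low-pass, with a box blur standing in as the low-pass filter. This is only an illustration of the frequency argument, not true void-and-cluster blue noise:

```python
import numpy as np

def highpass_noise(shape, passes=3, rng=None):
    """Rough 'blue-ish' noise: repeatedly high-pass filter white noise."""
    rng = rng or np.random.default_rng(0)
    noise = rng.random(shape)
    for _ in range(passes):
        # 3x3 box blur with wrap-around borders as the low-pass filter
        blurred = sum(np.roll(np.roll(noise, dy, 0), dx, 1)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        noise = noise - blurred  # high-pass = original - low-pass
    return noise
```

Because the remaining energy is all high-frequency, blurring the result kills most of its variance, exactly as the comment describes.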
You don't need real noise, it is enough to have a single texture that is a bit bigger than the input image and then randomly offset and rotate it. If that random offset is random enough (so not pseudorandom with a low periodicity), nobody will ever notice.
Memory has gotten cheaper while latency deadlines are still deciding over how much you can do realtime. That means cheap tricks like this are not an embarrassing workaround, but sometimes the smart choice.
A lot of blue noise references: https://gist.github.com/pixelmager/5d25fa32987273b9608a2d2c6...
There also exist pseudo blue noise generators, e.g.: https://observablehq.com/@fil/pseudoblue https://www.shadertoy.com/view/3tB3z3
That's the kind of thing I use dithering for on the BBC Micro, because it's such a cheap technique. Here's a thread directly comparing it to Bayer-like dithering https://hachyderm.io/@bbcmicrobot@mastodon.me.uk/11200546490... or here, faking the Windows XP desktop https://hachyderm.io/@bbcmicrobot@mastodon.me.uk/11288651013...
[1] https://en.wikipedia.org/wiki/Color_difference [2] https://dithermark.com
Important for lo-fi displays and printing etc
I do think that well dithered images looked better in some texts than colour images which had more wow but were more distracting.
2016 (199 points, 61 comments) https://news.ycombinator.com/item?id=11886318
2017 (125 points, 53 comments) https://news.ycombinator.com/item?id=15413377
https://doodad.dev/dither-me-this/
Basically, great dithering in color instead of B/W.
on topic: https://surma.dev/things/ditherpunk/ is a great companion read to the subject
https://nelari.us/post/quick_and_dirty_dithering/ is the best quick introduction to the technique that I've seen. There's a more comprehensive introduction at https://momentsingraphics.de/BlueNoise.html. https://bartwronski.com/2016/10/30/dithering-part-three-real... also demonstrates it, comparing it to other dithering algorithms.
Ulichney introduced blue noise to dithering in 01988 as a refinement of "white-noise dithering", also known as "random dithering", where you just add white noise before thresholding: https://cv.ulichney.com/papers/1988-blue-noise.pdf. Ulichney's paper is also a pretty comprehensive overview of dithering algorithms at the time, and he also makes some interesting observations about high-pass prefiltering ("sharpening", for example with Laplacians). Error-diffusion dithering necessarily introduces some low-pass filtering into your image, because the error that was diffused is no longer in the same place, and high-pass prefiltering can help. He also talks about the continuum between error-diffusion and non-error-diffusion dithering, for example adding a little bit of noise to your error-diffusion algorithm.
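The "random dithering" baseline described above really is a one-liner: adding uniform white noise before a fixed 0.5 threshold is equivalent to thresholding each pixel against a fresh random value. A sketch, assuming grayscale input in [0, 1]:

```python
import numpy as np

def random_dither(gray, rng=None):
    """'Random dithering': threshold against uniform white noise.

    Equivalent to adding white noise before a fixed 0.5 threshold; the
    expected density of on-pixels equals the local gray level.
    """
    rng = rng or np.random.default_rng(0)
    return (gray > rng.random(gray.shape)).astype(np.uint8)
```

Swapping the white-noise field for a tiled blue-noise texture is the whole difference between this and the blue-noise dithering discussed in the linked articles.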
But Ulichney is really considering blue noise as an output of conventional error-diffusion algorithms; as far as I can tell from a quick skim, nowhere in his paper does he propose using a precomputed blue-noise pattern in place of the white-noise pattern for "random dithering". That approach has really only come into its own in recent years with real-time raytracing on the GPU.
An interesting side quest is Georgiev and Fajardo's abstract "Blue-Noise Dithered Sampling" from SIGGRAPH '16 http://web.archive.org/web/20170606222238/https://www.solida..., sadly now memory-holed by Autodesk. Georgiev and Fajardo attribute the technique to the 02008 second edition of Lau and Arce's book "Modern Digital Halftoning", and what they were interested in was actually improving the sampling locations for antialiased raytracing, which they found improved significantly when they used a blue-noise pattern to perturb the ray locations instead of the traditional white noise. This has a visual effect similar to the switch from white to blue noise for random dithering. They also reference a Ulichney paper from 01993, "The void-and-cluster method for dither array generation," which I haven't read yet, but which certainly sounds like it's generating a blue-noise pattern for thresholding images.
Lau, Arce, and Bacca Rodriguez also wrote a paper I haven't read about blue-noise dithering in 02008, saying, "The introduction of the blue-noise spectra—high-frequency white noise with minimal energy at low frequencies—has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray," suggesting that blue-noise dithering was already well established in inkjet-land long before it became a thing on GPUs.
Maxime Heckel has a nice interactive WebGL demo of different dithering algorithms at https://blog.maximeheckel.com/posts/the-art-of-dithering-and..., with mouse-drag orbit controls, including white-noise dithering, ordered dithering, and blue-noise dithering. Some of her examples are broken for me.
It's probably worth mentioning the redoubtable https://surma.dev/things/ditherpunk/ and the previous discussion here: https://news.ycombinator.com/item?id=25633483.