Why JPEG XL Ignoring Bit Depth Is Genius (and Why AVIF Can't Pull It Off)
Key topics
The article argues that JPEG XL's decision to use a fixed 32-bit floating-point representation for image data is a genius move, sparking a discussion on the trade-offs between bit depth, compression, and hardware implementation.
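A toy sketch of the core idea, not taken from the article itself: samples at any input bit depth normalize onto the same floating-point domain, so nothing downstream has to branch on the source depth.

```python
import numpy as np

def to_float(samples: np.ndarray, bit_depth: int) -> np.ndarray:
    """Map integer samples of any bit depth onto one common float32 domain."""
    return samples.astype(np.float32) / (2**bit_depth - 1)

# 8-, 10-, and 12-bit encodings of the same mid-grey all land near 0.5,
# so encoder logic never needs to know what the source bit depth was.
print(to_float(np.array([128]), 8),
      to_float(np.array([512]), 10),
      to_float(np.array([2048]), 12))
```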
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 53 comments in 0-12h
- Average per period: 14.3 comments
- Based on 57 loaded comments
Key moments
- Story posted: Oct 27, 2025 at 4:17 AM EDT
- First comment: Oct 27, 2025 at 5:21 AM EDT (1h after posting)
- Peak activity: 53 comments in 0-12h, the hottest window of the conversation
- Latest activity: Oct 31, 2025 at 5:07 PM EDT
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
Maybe there's a reason they're not bothering with supporting JPEG XL besides misplaced priorities or laziness.
Google cited insufficient improvements, which is a rather ambiguous statement. Mozilla seems more concerned with the attack surface.
For the same reason it would be good if a future revision of PDF/A would include JPEG XL, since PDF/A doesn't really have any decent codecs for low-loss (but not lossless) compression (e.g. JPEG sucks at color schematics/drawings, and lossless is impractically big for them). It did get JP2, but support for that is quite uncommon.
Mozilla is more than willing to adopt it. They just won't adopt the C++ implementation. They've already put into writing that they're considering adopting it when the Rust implementation is production-ready.
https://github.com/mozilla/standards-positions/pull/1064
> To address this concern, the team at Google has agreed to apply their subject matter expertise to build a safe, performant, compact, and compatible JPEG-XL decoder in Rust, and integrate this decoder into Firefox. If they successfully contribute an implementation that satisfies these properties and meets our normal production requirements, we would ship it.
That is a perfectly clear position.
Seems to be under very active development.
That's a perfectly reasonable stance.
- https://github.com/libjxl/jxl-rs
- https://github.com/tirr-c/jxl-oxide
- https://github.com/etemesi254/zune-image
Etc. You can wait for 20 or so years "just to be sure", or start doing something. Mozilla sticks to option A here by not doing anything.
zune also uses jxl-oxide for decode. zune has an encoder, and they are doing great work, but their encoder is not thread-safe, so it's not viable for Mozilla's needs.
And there's work already being done to properly integrate JXL implementations into Firefox, but frankly, things take time.
If you are seriously passionate about seeing JPEG XL in Firefox, there's a really easy solution: contribute. More engineering hours put towards a FOSS project tend to see it come to fruition faster.
Some links from my notes:
https://www.phoronix.com/news/Mozilla-Interest-JPEG-XL-Rust
https://news.ycombinator.com/item?id=41443336 (discussion of the same GitHub comment as in the Phoronix site)
https://github.com/tirr-c/jxl-oxide
https://bugzilla.mozilla.org/show_bug.cgi?id=1986393 (land initial jpegxl rust code pref disabled)
In case anyone is curious, here is the benchmark I did my reading for:
https://op111.net/posts/2025/10/png-and-modern-formats-lossl...
BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393
That was just the context for some reading I did to understand where we are now.
> BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393
That is one of the links I shared in my comment (along with the bug title in parentheses). :-)
And for larger files especially, the benefits of actually having progressive decoding pushed me even more in favour of JPEG XL. Doubly so when you can provide variations in image size just by halting the bit flow arbitrarily.
What is that in terms of bpp? Because according to Google Chrome, 80-85% of the images we deliver have a bpp of 1.0 or above. I don't think most people realise that.
And in most if not all circumstances, JPEG XL performs better than AVIF at bpp 1.0 and above, as tested by professionals.
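For reference, bpp is just the compressed size in bits divided by the pixel count. A minimal sketch of that arithmetic (the file size and dimensions below are made up):

```python
def bits_per_pixel(file_size_bytes: int, width: int, height: int) -> float:
    """Compressed bits spent per pixel: file size in bits / pixel count."""
    return file_size_bytes * 8 / (width * height)

# Hypothetical example: a 2000x1000 image compressed to 250 KB
print(bits_per_pixel(250_000, 2000, 1000))  # -> 1.0 bpp, right at the threshold above
```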
So 2^32 bit depth? 4 bytes seems like overkill.
Sorry, I missed that. How is the "floating point" stored in .jxl files?
Float32 has to be serialized one way or another per pixel, no?
However, I wonder if floating-point is necessary, or even the best to use compared to using 32-bit fixed-point. The floating-point format includes subnormal numbers that are very close to zero, and I'd think that could be much more precision than needed. Processing of subnormal numbers is extra slow on some processors and can't always be turned off.
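To put rough numbers on that concern (a minimal sketch, not from the thread; the fixed-point format is a hypothetical unsigned Q0.32 covering [0, 1)):

```python
import numpy as np

f32 = np.finfo(np.float32)

smallest_normal = float(f32.tiny)                    # ~1.18e-38
smallest_subnormal = float(np.float32(2.0 ** -149))  # ~1.40e-45
fixed_point_step = 1.0 / 2**32                       # ~2.33e-10 (hypothetical Q0.32)

print(smallest_normal, smallest_subnormal, fixed_point_step)
# Near zero, float32 with subnormals resolves values dozens of orders of
# magnitude finer than a 32-bit fixed-point scale would; for pixel data that
# extra precision is unlikely to ever be visible, which is the commenter's
# point about subnormals being more precision than needed (and slow to
# process on some CPUs).
```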
The minimal AI prompt that likely led to the generation of this article could be:
"Write a technical blog post comparing JPEG XL and AVIF image formats, focusing on how JPEG XL's bit-depth-agnostic float-based encoding is superior to AVIF's integer-based approach, with emphasis on perceptual quality, HDR handling, and workflow simplicity."
This prompt captures the core elements:
- Technical comparison between two image formats
- Focus on JPEG XL's unique "ignoring bit depth" design
- Emphasis on perceptual vs. numerical quality
- Discussion of HDR and workflow benefits
- Tone and structure matching the published article
The prompt would have guided the AI to produce content that:
1. Explains the technical difference in encoding approaches
2. Demonstrates why JPEG XL's method is better
3. Provides real-world implications for users
4. Maintains the author's voice and technical depth
5. Follows the article's structure and emphasis on "perceptual intent" over bit precision
Soon enough the AI will invent a format for communicating with instances of itself or other AIs so that they can convey information that a client AI can translate back to the user's personal consumption preferences. Who needs compression or image optimization when you can reduce a website to a few kB of prompts which an AI engine can take to generate the full content, images, videos, etc?
Tenish years ago we had slop / listicles already, and thankfully our curated internet filters helped us avoid them (but not the older generation, who came across them through Facebook and the like). But now they're back, and thanks to AI they don't need people who actually know what they're talking about to write articles aimed at e.g. the HN audience (because the people who know what they're talking about refuse to write slop... I hope).
Here's something you know. It's actually neither adjective 1 nor adjective 2—in fact, completely mundane realization! Let that sink in—restatement of realization. Restatement. Of. Realization. The Key Advantages: five-element bulleted list with pithy bolded headings followed by exactly zero new information. Newline. As a surprise, mild, ultimately pointless counterpoint designed to artificially strengthen the argument! But here's the paradox—okay, I can't do this anymore. You get the picture.
Everything after the first "Not" is superfluous and fairly distinctively so. Same general pattern. It's hard to describe the pattern here in words, but the whole thing is sort of a single stimulus for me. At the very least, notice again the repetition of the thing being argued against, giving it different names and attributes for no good semantic reason, followed by another pithy restatement of the thesis. This kind of upbeat, pithy, quotable punchline really is something frontier LLMs love to generate, as is the particular form of the statement. You can also see the latter in forms like "The conflict is no longer political—it's existential." I know I said I wouldn't comment on little tics and formatting and other such smoking guns, but if I never have to see this godforsaken sequence of characters again…
The crudest is downsampling the chroma channel, which makes no sense whatsoever for digital formats.
It also seems like a very CPU-centric design choice. If you implement a hardware en/decoder, you will see a stark difference in cost between one which works on 8/10 bits and one which works on 32 bits. Maybe this is motivated by the intended use cases for JPEG XL? Or maybe I've missed the point of what JPEG XL is?
However, when decoding an 8-bit-quality image as 10-bit or 12-bit, won't this strategy just fill the two least significant bits with noise?
I don't know if JPEG XL constrains solutions to be smooth.
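Not from the thread, but to make the question concrete: here are two common deterministic ways an 8-bit sample can be widened to 10 bits. Neither adds information, so whether the extra bits read as "noise" depends on what the decoder reconstructs between the original 8-bit levels.

```python
def rescale_8_to_10(v8: int) -> int:
    """Exact rescale from [0, 255] to [0, 1023]: round(v8 * 1023 / 255)."""
    return round(v8 * 1023 / 255)

def replicate_8_to_10(v8: int) -> int:
    """Bit replication: shift left two places and copy the top bits into the new LSBs."""
    return (v8 << 2) | (v8 >> 6)

for v in (0, 1, 128, 254, 255):
    print(v, rescale_8_to_10(v), replicate_8_to_10(v))
# Both mappings are deterministic, so the new LSBs carry no extra detail;
# the open question above is whether a float-domain decode lands smoothly
# between the original 8-bit levels or merely jitters around them.
```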
Did some sample exports comparing JXL 8-bit lossless vs JPG, and JXL was quite a bit bigger. Same when doing a lossy 100 or 99 comparison of both. When setting JXL to 80% or 70% I see noticeable savings, but I had thought the idea was JXL at essentially full quality for much smaller sizes.
To be fair, the 70% does look very similar to 100%, but then again JPEG 70% vs 100% also look very similar on an Apple XDR monitor. At 70% or 80% on both JPEG and JPEG XL I do see visual differences in areas like shoes where there is mesh.
JXL comes with lots of compatibility challenges: while things were picking up with Apple's adoption, momentum seems to have halted since, with apps like Evoto and Topaz not adding support, among many others. And Apple's support is still not complete, with no progress on that. So unless Chrome does a 180 again, I think AVIF and JXL will both end up stagnating and most people will stick with JPG. For TIFF, though, I noticed significant savings with lossless JXL compared to TIFF, so that would be a good use case, except TIFFs are more likely to be edited by third-party apps that most likely won't support the format.
The OP article is talking about lossy compression.
When comparing lossy compression, note that quality settings are not a "percent" of anything; each is just an arbitrary scale that depends on the encoder implementation and gets mapped to encoder parameters (e.g. quantization tables) in some arbitrary way. So lossy "80%" is certainly not the same thing between JPEG and JXL, or between Photoshop and ImageMagick, etc.
The best way to compare lossy compression performance is to encode an image at a quality that is acceptable for your use case (according to your eyes), and then, for the various codecs/encoders, look for the lowest file size you can get while still keeping that acceptable quality.
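A minimal sketch of that workflow, assuming `cjxl` and `avifenc` are installed (the `-q` flags and the source file name are assumptions; adjust to whatever your encoder versions accept):

```python
import os
import subprocess

SOURCE = "input.png"  # hypothetical source image

# Command templates per format; flags are illustrative, not authoritative.
ENCODERS = {
    "jxl":  lambda q, out: ["cjxl", SOURCE, out, "-q", str(q)],
    "avif": lambda q, out: ["avifenc", "-q", str(q), SOURCE, out],
}

for fmt, build_cmd in ENCODERS.items():
    for q in (90, 80, 70, 60, 50):
        out = f"test_q{q}.{fmt}"
        subprocess.run(build_cmd(q, out), check=True)
        print(f"{out}: {os.path.getsize(out)} bytes")

# The byte counts alone decide nothing: view the candidates side by side and
# keep the smallest file whose artifacts you can't see at your viewing size.
```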