Google Revisits JPEG XL in Chromium After Earlier Removal
Key moments
- Story posted: Nov 23, 2025 at 1:05 AM EST
- First comment: Nov 23, 2025 at 1:43 AM EST (38m after posting)
- Peak activity: 18 comments in Hour 8, the hottest window of the conversation
- Latest activity: Nov 23, 2025 at 9:02 PM EST
https://www.theregister.com/2025/11/10/another_chance_for_jp...
I'm saying this to explain how JPEG XL support in PDF isn't a silver bullet. Browsers already support image formats in PDF that are not supported outside of PDF.
This test renders correctly in Firefox, in any case: https://sources.debian.org/data/main/p/pdf2djvu/0.9.18.2-2/t...
To be really fair, on Windows:
- H.264 is the only guaranteed (modern-ish) video codec (HEVC, VP9, and AV1 are not built-in unless the device manufacturer bothered to add them)
- JPEG, GIF, and PNG are the only guaranteed (widely-used) image codecs (HEIF, AVIF, and JXL are likewise not built-in)
- MP3 and AAC are the only guaranteed (modern-ish) audio codecs (Opus is another module)
... and all of them were already widely used when Windows 7 was released (before the modern codecs), so downloadable modules are apparently now the modern Windows Method™ for codecs.
Note on pre-8 HEVC support: the codec (when not in VLC or other software bundling its own codecs) often comes from that bundled CyberLink Blu-ray player, not from the OS itself.
blink-dev mailing list
https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...
Tracking Bug (reopened)
The patch at the end of that thread uses a C++ implementation so it is a dead end.
Every year, I go on a rant about how my camera can take HDR images natively, but the only way to share these with a wider audience is to convert them to a slideshow and make a Rec.2020 HDR movie that I upload to YouTube.
It's absolutely bonkers to me that we've all collectively figured out how to stream a Hollywood movie to a pocket device over radio with a quality exceeding that of a typical cinema theatre, but these multi-trillion market cap corporations have all utterly failed to allow users to reliably send a still image with the same quality to each other!
Any year now, maybe in the 2030s, someone will get around to a ticket that is currently at position 11,372 down the list, below thousands of internal bullshit items that nobody needed done, rearranging a dashboard nobody has ever opened, or whatever, and finally let computers be used for images. You know, utilising the screen, the only part billions of users ever look at, with their human eyes.
I can't politely express my disgust at the ineptitude, the sloth, the foot dragging, the uncaring unprofessionalism of people that get paid more annually than I get in a decade, who are all too distracted making Clippy 2.0 instead of getting right the most utterly fundamental aspect of consumer computing.
If I could wave a magic wand, I would force a dev team from each of these companies to remain locked in a room until this was sorted out.
Indeed. I tried every possible export format from Adobe Lightroom including JPG + HDR gainmaps, and it looks... potato.
With a narrow gamut like sRGB it looks only slightly better than JPG, but with a wider gamut you get terrible posterization. People's faces turn grey and green and blue skies get bands across them.
Meanwhile my iPhone creates 10-bit Dolby Vision video.
I think Ultra HDR (and Apple's take on it, ISO 21496-1) make a lot of sense in a scenario where shipping alternate formats/codecs is not viable because renderer capabilities are not known or vary, similarly to how HDR was implemented on Blu-Ray 4K discs with the backwards-compatible Dolby Vision profiles.
It's also possible to do what Apple has done for HEIC on iOS: Store the modern format, convert to the best-known supported format at export/sharing time.
I understand some of this frustration, but really you just have to use ffmpeg to convert it to a web format (which can even be done by ffmpeg.js running in a service worker if you have CPU to spare) and spell <img as <video muted autoplay playsinline, which is only a little annoying.
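The conversion trick above can be sketched as the ffmpeg invocation it implies. A minimal sketch, assuming ffmpeg is on PATH; the file names and encoder settings are illustrative assumptions, and the function only assembles the argv list:

```python
# Sketch: build the ffmpeg command that wraps one still frame in a short
# H.264 clip, playable via <video muted autoplay playsinline>.
# File names, duration, and settings are illustrative assumptions.
def still_to_video_cmd(src: str, dst: str, seconds: int = 1) -> list[str]:
    return [
        "ffmpeg", "-y",
        "-loop", "1",           # repeat the single input frame
        "-i", src,
        "-t", str(seconds),     # clip duration
        "-c:v", "libx264",      # the one video codec Windows guarantees
        "-pix_fmt", "yuv420p",  # broadest player compatibility
        dst,
    ]

print(still_to_video_cmd("photo.png", "photo.mp4"))
```

Embedding the result with `<video muted autoplay playsinline src="photo.mp4">` then stands in for the `<img>` tag.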
> I can't politely express my disgust at the ineptitude, the sloth, the foot dragging, the uncaring unprofessionalism of people that get paid more annually then I get in a decade who are all too distracted making Clippy 2.0 instead of getting right the most utterly fundamental aspect of consumer computing.
hear hear
> If I could wave a magic wand, I would force a dev team from each of these companies to remain locked in a room until this was sorted out.
i can think of a few better uses for such a wand...
Doesn't work for sharing images in text messages, social media posts, email, Teams, Wikipedia, etc...
What am I missing?
Sure, me too! I can take a HDR P3 gamut picture with my iPhone and share it with all my friends and relatives... that have iPhones.
What I cannot do is take a picture with a $4000 Nikon DSLR and share it in the same way... unless I also buy a Mac so I can encode it in the magic Apple-only format[1] that works... for Mac and iOS users. I have a Windows PC. Linux users are similarly out in the cold.
This situation is so incredibly bad that I can pop the SD card of my camera into a reader plugged into my iPhone, process the RAW image on the iPhone with the Lightroom iPhone app in full, glorious HDR... and then be unable to export the HDR image onto the same device for viewing because oh-my-fucking-god-why!?
[1] They claim it is a standards-compliant HEIF file. No, it isn't. That's a filthy lie. My camera produces a HDR HEIF file natively, in-body. Everything opens it just fine, except all Apple ecosystem devices. I suspect the only way to get Apple to budge is to sue them for false advertising. But... sigh... they'll just change their marketing to remove "HEIF" and move on.
They support only a very specific subset of it, in a particular combination.
Some Apple apps can open third-party HEIC-in-HEIF files, and even display the image correctly, but if you try anything more "complex", it'll start failing. Simply forwarding the image to someone else will result in thumbnails looking weirdly corrupted, brightness shifting, etc...
I've even seen outright crashes, hangs, visible memory corruption, etc...
I bet there's at least one exploitable security vulnerability in this code!
This has been supported by both Apple and third party apps for over a decade. I’ve implemented it myself.
Actual HDR needs at least 10 bits per channel and a modern display with peak brightness far in excess of traditional monitors. Ideally over 1,000 nits compared to typical LCD brightness of about 200.
You also don't need "three pictures". That was a hack used for the oldest digital cameras that had about 8 bits of precision in their analog to digital converters (ADC). Even my previous camera had a 14-bit ADC and in practice could capture about 12.5 bits of dynamic range, which is plenty for HDR imaging.
Lightroom can now edit and export images in "true" HDR, basically the same as a modern HDR10 or Dolby Vision movie.
The problem is that the only way to share the exported HDR images is to convert them to a movie file format, and share them as a slide show.
There is no widely compatible still image format that can preserve 10-bit-per-channel colours, wide-gamut, and HDR metadata.
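The movie-slideshow workaround mentioned above can be made concrete. A hedged sketch of the ffmpeg argv for turning a single 10-bit still into an HDR10 clip, assuming an ffmpeg build with libx265; the PQ/BT.2020 flag values are the standard HDR10 signaling, but a real pipeline may need further mastering metadata:

```python
# Sketch: assemble an ffmpeg command that turns one 10-bit still into a
# short HDR10 (PQ, BT.2020) HEVC clip -- the "slideshow" workaround above.
# File names and duration are illustrative assumptions.
def hdr_still_to_video_cmd(src: str, dst: str, seconds: int = 5) -> list[str]:
    return [
        "ffmpeg", "-y",
        "-loop", "1", "-i", src,
        "-t", str(seconds),
        "-c:v", "libx265",
        "-pix_fmt", "yuv420p10le",     # keep 10 bits per channel
        "-color_primaries", "bt2020",  # wide gamut primaries
        "-color_trc", "smpte2084",     # PQ transfer function (HDR10)
        "-colorspace", "bt2020nc",     # BT.2020 non-constant luminance
        dst,
    ]

print(hdr_still_to_video_cmd("photo_16bit.png", "slide.mp4"))
```

The point of the sketch is how much video-side machinery a single still image has to drag along just to survive the trip to another screen.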
I wrote camera firmware. I’ve implemented HDR at both the firmware level and, later, at the higher client level when devices became faster. You’re either overloading terminology to the point where we are just talking past each other, or you’re very confused.
In the Apple Silicon era, the MacBook Pro has a 1,000 nit display, with peak brightness at 1,600 nits when displaying HDR content.
Affinity Studio [1] also supports editing and exporting "true" HDR images.
I have no way, in general, to share those photos with you, not without knowing ahead of time what software you’re using. I’ll also have to whip up a web server with custom HTML and a bunch of hacks to encode my images that will work for you but not my friends with Android phones or Linux PCs.
Huh? Safari seems to render HDR JPEG XLs without any issues these days (e.g. [1]), and supports wide gamut in even more formats as far as I remember.
The problem is that I can't, in general widely share a HDR image and have it be correctly displayed via ordinary chat applications, social media, email, or what have you. If it works at all, it only works with that One Particular Format in One Specific Scenario.
If you disagree, find me something "trivial", such as a photo sharing site that supports HDR image uploads where those images are viewable as wide-gamut HDR on mobile devices, desktops, etc... without any endpoint ever displaying the image incorrectly, such as very dark, very bright, or with shifted colors.
You act like this is some kind of mistake or limit of technology, but really it's an obvious intentional business decision.
Under late stage capitalism, it'd be weird if this wasn't the case in 2026.
Address the underlying issue, or don't be surprised by the race to the bottom.
On one hand, there have been (and still are!) several competing HDR formats for videos (HDR10+, Dolby Vision, "plain" HLG, Dolby Vision in HLG, etc.), and it took years for a winner to pull ahead – that race just started earlier, and the set of stakeholders is different (and arguably a bit smaller) than that for still images.
On the other hand, there are also several still image HDR formats competing with each other right now (JPEG with gain map metadata, i.e. Ultra HDR and ISO 21496-1, Apple's older custom metadata, HEIF, AVIF, JPEG XL...), and JPEG XL isn't the clear winner yet.
Format wars are messy, and always have been. Yes, to some extent they are downstream of the lack of a central standardization body, but there's no anti-HDR cabal anywhere. If anything, it's the opposite – new AV formats requiring new hardware is just about the best thing that can happen to device manufacturers.
But until now, the position of other Googlers (in the Chrome team) was that they didn't want to have JPEG XL support in Chrome. And that changed now. Which is a big deal.
But JPEG-XL is being quite widely used now, from PDF, medical imaging, and lossless in-camera capture, to being evaluated at different stages of cinema and artist production workflows. Hopefully the Rust decoder will be ready soon.
And from the wording, it seems to imply Google Chrome will officially support anything from AOM.
Isn't JPEG-XL a lossy codec?
I sure hope they came up with a good, clear system to distinguish them.
I was thinking that too, but on the other hand, even a lossless file can't guarantee that its contents aren't the result of going through a lossy intermediate format, such as a screenshot created from a JPEG.
There are so many reasons why it's almost hard to know where to begin. But it's basically the same reason why it's helpful for some documents to end in .docx and others to end in .xlsx. It tells you what kind of data is inside.
And at least for me, for standard 24-bit RGB images, the distinction between lossy and lossless is much more important than between TIFF and PNG, or between JPG and HEIC. Knowing whether an image is degraded or not is the #1 important fact about an image for me, before anything else. It says so much about what the file is for and not for -- how I should or shouldn't edit it, what kind of format and compression level is suitable for saving after editing, etc.
After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
There's a good reason Microsoft Office documents aren't all just something like .msox, with an internal tag indicating whether they're a text document or a spreadsheet or a presentation. File extensions carry semantic meaning around the type of data they contain, and it's good practice to choose extensions that communicate the most important conceptual distinctions.
This is an arbitrary distinction, for example then why do mp3 and ogg (vorbis) have different extensions? They're both lossy audio formats, so by that requirement, the extension should be the same.
Otherwise, we should distinguish between bitrates with different extensions, e.g. mp3128, mp3192, etc.
Filenames also of course try to indicate technical compatibility as to what applications can open them, which is why .mp3 and .ogg are different -- although these days, extensions like .mkv and .mp4 tell you nothing about what's in them, or whether your video player can play a specific file.
At the end of the day it's just trying to achieve a good balance. Obviously including the specific bitrate in a file extension goes too far.
But how can you know that from the fact that it's currently losslessly encoded? People take screenshots of JPEGs all the time.
> After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
That is a useful distinction in my view, and there's some precedent for solutions, such as how Office files containing macros have an "m" added to their file extension.
And in my case I'm usually dealing with a known workflow. I know where the files originally come from, whether .raw or .ai or whatever. It's very useful to know that every .jpg file is meant for final distribution, whereas every .png file is part of an intermediate workflow where I know quality won't be lost. When they all have the same extension, it's easy to get confused about which stage a certain file belongs to, and accidentally mix up assets.
Be careful to ascribe reason to established common practices; it can lead to tunnel vision. Computing is filled with standards which are nothing more than “whatever the first guy came up with”.
https://en.wikipedia.org/wiki/Appeal_to_tradition
Just because metadata is useful doesn’t mean it needs to live in the filename.
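A concrete illustration of that point: for most formats the content type already lives in the file itself as fixed signature bytes, so nothing forces it into the filename. A minimal sketch (the function name and return labels are my own; the signatures are the published magic numbers for PNG, JPEG, and the two JPEG XL framings):

```python
# Sketch: identify an image format from its leading "magic" bytes rather
# than its file extension. Signature constants are the published ones for
# PNG, JPEG, and JPEG XL (bare codestream vs ISOBMFF container).
_SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "png"),
    (b"\xff\xd8\xff", "jpeg"),
    (b"\x00\x00\x00\x0cJXL \r\n\x87\n", "jxl-container"),
    (b"\xff\x0a", "jxl-codestream"),
]

def sniff_image(data: bytes) -> str:
    """Return a format label for the given leading bytes, or 'unknown'."""
    for magic, name in _SIGNATURES:
        if data.startswith(magic):
            return name
    return "unknown"

print(sniff_image(b"\xff\x0a" + b"\x00" * 16))  # a bare JXL codestream
```

Note that this tells you the container, not whether the pixels inside were losslessly encoded; that distinction needs format-specific header parsing.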
Unfortunately we have been through this discussion, and the authors of JPEG-XL strongly disagree with this. I understand where they are coming from, but like you I think it would have been easier to have the two separated in naming and extensions.
While lossy codecs are hard to compare and up for debate, JPEG-XL is actually better as a lossless codec in terms of compression ratio and compression complexity. There is only one other codec that beats it, but it is not open source.
Its developer is also developing HALAC (High Availability Lossless Audio Compression). He recently released the source code for the first version of HALAC. And I don't think anyone cared.
I don't think that's strictly true.
The conventional reporting has been that JXL works better at regular web sizes, but AVIF starts to edge out at very low quality settings.
However, the quality per size between the two is so close that there are comparisons showing JXL winning even where AVIF is supposed to outperform JXL. (e.g. https://tonisagrista.com/blog/2023/jpegxl-vs-avif/)
Even at the point where AVIF should shine, when low bandwidth is important, JXL supports progressive decoding (AVIF is still trying to add this), so the user will see the image sooner with JXL than with AVIF.
---
There is one part where AVIF does beat JXL hands down, and that's animation (which makes sense considering AVIF comes from the modern AV1 video codec). However, any time you would want an animation in a file, you're better off just using a video codec anyway.
But then the question is should we even be presenting this level of quality. Or is it enough. I guess that is a different set of questions.
But at what cost? From the benchmarks below, the encoding/decoding cost is much higher for those advanced video codecs, so wouldn't they be unsuitable for various lower-powered devices?
Also, can we expect "near max potential" with AV2/near future or is it an ever-unachievable goal that shouldn't stop adding "non-max" codecs?
https://res.cloudinary.com/cloudinary-marketing/image/upload...
https://cloudinary.com/blog/time_for_next_gen_codecs_to_deth...
There is no reason to block its adoption.
> The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from Microsoft Store. The format is also confirmed for use in PDF documents.
glad those folks didn't listen to "the format is dead since the biggest browser doesn't support it" (and shame on Firefox for not doing the same)