When a Video Codec Wins an Emmy
Key topics
The AV1 video codec's Emmy win has sparked a lively discussion about its adoption and potential impact on image formats. Commenters are weighing in on AVIF, the image format built on AV1, with some noting its "way wider browser adoption" as a major advantage, while others point out that browser support for specific decoders is still a limiting factor. The conversation also touches on comparisons with JPEG-XL and the potential for future advancements with AV2. As one commenter quipped, "it doesn't matter that AVIF uses the same container for AV1 or AV2 based encoding, if the browsers don't have the right decoder for it then they can't decode it."
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 39m after posting
- Peak period: 54 comments in 108-120h
- Avg / period: 13.4
Based on 94 loaded comments
Key moments
- Story posted: Dec 5, 2025 at 12:15 PM EST (about 1 month ago)
- First comment: Dec 5, 2025 at 12:54 PM EST, 39m after posting
- Peak activity: 54 comments in 108-120h, the hottest window of the conversation
- Latest activity: Dec 12, 2025 at 7:17 PM EST (25 days ago)
I wish adoption were better. When will Wikipedia support AVIF?
An example of this is MP4: browsers can decode video encoded with H264 in MP4 containers, but not H265, even though it uses the same container. The container and the codec are separate things; they're related, but they aren't the same.
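To make the distinction concrete, here is a minimal sketch using ffprobe (part of FFmpeg) to read which codec sits inside a container; the file names are hypothetical.

```python
# Inspect which codec is inside a container with ffprobe (ships with FFmpeg).
import json
import subprocess

def video_codec(path: str) -> str:
    """Return the codec name of the first video stream, e.g. 'h264' or 'hevc'."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", "-select_streams", "v:0", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["streams"][0]["codec_name"]

# Both files use the same .mp4 container, but a browser without an HEVC
# decoder can only play the first one. (File names are hypothetical.)
print(video_codec("clip_h264.mp4"))  # -> 'h264'
print(video_codec("clip_h265.mp4"))  # -> 'hevc'
```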
Who still uses patent-encumbered codecs, and why?
So it takes a long time to transition to a new codec: new devices need to ship with support for your new codec, and then you have to wait until old devices get lifecycled out before you can fully drop support for old codecs.
Video is naturally large. You've got all the pixels in a frame, tens of frames every second, and however many bits per pixel. All those frames need to be decoded and displayed in order and within fixed time constraints. If you drop frames or deliver them slowly, no one is happy watching the video.
If you stick to video that can be decoded effectively on a general-purpose CPU with no acceleration, you're never going to keep up with the demands of actual users. It's also going to use a lot more power than an ASIC purpose-built to decode the video. And if you reach for the beefiest CPU to handle higher-quality video under some power envelope, your costs are going to increase, making the whole venture untenable.
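Rough arithmetic shows the scale of the problem. A quick sketch (the 5 Mbit/s figure is an assumed typical stream bitrate, not a measurement):

```python
# Back-of-the-envelope: raw 1080p30 video vs. a typical compressed stream.
width, height = 1920, 1080   # 1080p frame
bits_per_pixel = 24          # 8-bit RGB, ignoring chroma subsampling
fps = 30

raw_bps = width * height * bits_per_pixel * fps
print(f"raw 1080p30: {raw_bps / 1e9:.2f} Gbit/s")   # ~1.49 Gbit/s

typical_stream_bps = 5e6     # ~5 Mbit/s, an assumed typical H264 stream
print(f"required compression: {raw_bps / typical_stream_bps:.0f}:1")  # ~300:1
```

Closing a roughly 300:1 gap in real time, frame after frame, is why purpose-built decode silicon matters so much.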
If that's a correct guess, I think the biggest reason is actually hardware support. When you have pirated movies, where are you going to play them? A TV. Your TV or TV box very likely supports H265, but very few have AV1 support.
Then the choice is apparent.
Scene rules say to start with --crf 17 at 1080p, which is a pretty low CRF (i.e. it results in high bitrates): https://scenerules.org/html/2020_X265.html
AV1 would most likely result in slower encodes that look worse.
So, the apparent preference could simply be 5+ years more time to do hardware-assisted transcoding.
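For reference, a minimal sketch of what an encode following the scene rules linked above might look like with FFmpeg's libx265; the paths, preset, and audio handling are assumptions, not part of the ruleset.

```python
# Sketch of a high-quality x265 encode at CRF 17, in the spirit of the
# scene rules linked above. Input/output paths are hypothetical.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "source_1080p.mkv",
    "-c:v", "libx265",
    "-crf", "17",        # low CRF -> high quality, high bitrate
    "-preset", "slow",   # slower preset -> better compression efficiency
    "-c:a", "copy",      # leave audio untouched
    "output_1080p.x265.mkv",
], check=True)
```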
The image processing industry is similar, but not as mature. I hated dealing with patents when I was writing image processing stuff.
> In 1997, both Ritchie and Thompson were made Fellows of the Computer History Museum, "for co-creation of the UNIX operating system, and for development of the C programming language."
> On April 21, 1999, Thompson and Ritchie jointly received the National Medal of Technology of 1998 from President Bill Clinton for co-inventing the UNIX operating system and the C programming language
https://en.wikipedia.org/wiki/Dennis_Ritchie#Awards
I think that's also good ;) Ritchie and Thompson also received a Turing Award, not for the C language but for UNIX and OS development in general.
https://en.wikipedia.org/wiki/Technology_and_Engineering_Emm...
https://theemmys.tv/tech/
It was for standardising widescreen switching signals, in the early 2000s that was a big issue because each company had a different interpretation of what the flags meant. Thus when you were watching TV you would often get the wrong behaviour and distorted pictures. A small group of us sat down and agreed what the proper behaviour should be. Then every other TV standards body in the world adopted it.
I never did get a statue.
https://en.wikipedia.org/wiki/Primetime_Engineering_Emmy_Awa...
https://en.wikipedia.org/wiki/National_Academy_of_Television...
YouTube has used VP8 since 2010. Openly licensed video codecs were in use through the mid-2010s.
In 2010 the majority of (YouTube and other) videos were still served as H.264, because the majority of playback devices were smartphones without VP8 decoding capabilities (iOS, for example, didn't support VP8 until iOS 12 in 2019).
I find this extremely difficult to believe. In 2010 the only widely used smartphone would have been the iPhone. The Motorola Droid was the first widely marketed Android device in the US and was only launched in late 2009.
No, major browsers didn't support VP8 back then, and among the remaining devices (appliances other than PCs running those browsers) the majority of video playback devices were already smartphones, which didn't support VP8 in 2010.
Apologies for the lack of clarity.
It's why the h264ify extension existed; forcing H264 was, at the time, a large part of the reason Safari had vastly superior battery life.
Chrome didn't get stable VP8 support until September 2010; other browsers added it in 2011.
They can be as aggressive as they want; when opening a video, the client and server agree on a codec both support, and in 2010 that codec wasn't VP8.
The context is that in mid-2010 the majority of the codecs used on the web were based on a closed licensing system, which is objectively true based on the provided information.
Your statement that Google enabled and enforced the codec prior to HW-decoding support isn't wrong because of that; your overall attitude toward dealing with the information is.
Reason: there was also no widespread VP8 HW decoding in most devices in 2011 and 2012. Mobile chipset vendors (Qualcomm, Samsung, TI, ...) only added VP8 HW decoding starting with their 2012 premium-tier chipsets, so VP8 was SW-decoded on many devices in the market well into ~2014.
But in mid-2010 (!!) there was no browser able to handle VP8 even in software, and no meaningful embedded device supported the codec either.
This allowed me to create a custom PowerPoint theme/template that captures the essence of a particular brand.
> AOMedia is working on the upcoming release of AV2. It will feature meaningfully better compression than AV1, much higher efficiency for screen/graphical content, alpha channel support, and more.
That's all well and good, but please make AV1 as widespread as H264, so that I can import it into every editing program instead of having Adobe Premiere Pro complain that it doesn't know the format (I personally prefer DaVinci Resolve, but my editor is on Adobe). I think AV1 is great, but I'd like support for it across the board: on every device (hardware decoding and encoding), as well as in Kdenlive, Resolve, all the other editors, and everything else on the software side.
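Until that happens, the usual workaround is transcoding to an edit-friendly intermediate first; a hypothetical FFmpeg invocation (file names and profile choice are illustrative assumptions):

```python
# Sketch: transcode AV1 footage to ProRes so an NLE that lacks AV1 support
# can import it. Paths and the ProRes profile are illustrative choices.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "clip_av1.mkv",
    "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ intermediate
    "-c:a", "pcm_s16le",                     # uncompressed audio for editing
    "clip_prores.mov",
], check=True)
```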
Right now, I have a drive filled with H264 content that I can hook up to any old hotel TV and play back. It's gonna be a while before I switch to AV1. And is H264 by now largely out of patent anyway?
Almost, for the AVC High profile. Wikimedia tracks all the known patents here: https://meta.wikimedia.org/wiki/Have_the_patents_for_H.264_M...
Of course, this distinction is moot, since I've yet to see a (consumer) video source that provides fixed-framerate footage. If anyone wants to explain why, I'm all ears. As a result, I habitually re-encode everything before taking it into a video editor as a precaution, and once you're doing that, capping the GOP length is a no-brainer (see the sketch after the footnotes).
[0] Non-linear editor. If you're wondering what a linear editor is, please watch https://www.youtube.com/watch?v=AEMdmnNbCZA
[1] It's actually possible to do lossless editing at GOP boundaries, though I don't know if any NLEs would try doing this.
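A minimal sketch of that pre-editing normalization pass, assuming FFmpeg; the frame rate, CRF, and GOP cap below are illustrative choices:

```python
# Normalize variable-framerate footage to constant 30 fps and cap GOP length
# before importing into an NLE. Values and paths are illustrative.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "phone_clip.mp4",
    "-r", "30",                       # force constant 30 fps output
    "-c:v", "libx264", "-crf", "18",  # visually near-lossless re-encode
    "-g", "30",                       # keyframe at least once per second
    "-c:a", "copy",
    "phone_clip_cfr.mp4",
], check=True)
```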
https://www.youtube.com/watch?v=_cnQv8JCsX4
It's how I got started, indeed by watching this programme when it was on telly.
I still vividly remember what a clusterfk H264 support on mobile devices was just ten years ago, circa 2010-2015. The AVC spec was published in 2003 and the High profiles were standardised in 2005, but they were universally supported only from ~2015. I personally had a 2011 Tegra 2 tablet which supported H264, but not the High profiles.
What made it in?
* Multi-Symbol Entropy Coder
* Chroma from Luma
* CDEF filter (directional dering filter)
What didn't make it in?
* Lapped Transform
* Use of vector quantization for residuals (aligning the vectors)
AV1 powers approximately 30% of Netflix viewing
https://news.ycombinator.com/item?id=46155135
https://www.w6rz.net/DCP_1235.JPG
The original DirecTV encoder was MPEG-1 at 704x480 using eight CL4000 chips. Then in 1995, when the MPEG-2-capable CL4010 was finished, the encoders were upgraded to MPEG-2 (frame-only encoding). They were upgraded again to a 12-chip AFF (Adaptive Field/Frame) encoder when the firmware was completed.
https://www.w6rz.net/videorisc.png