Decoding Netflix's AV1 Streams
Posted 3 months ago · Active 3 months ago
singhkays.com · Tech · story
skeptical · mixed
Debate: 70/100
Key topics
AV1 Video Codec
Netflix Streaming
Video Compression
The article analyzes Netflix's AV1 streams, sparking discussion on the codec's effectiveness, device support, and video quality, with some commenters questioning the author's findings and others praising the analysis.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 16m after posting
Peak period: 52 comments in 0-3h
Avg / period: 8.4
Comment distribution: 76 data points (based on 76 loaded comments)
Key moments
1. Story posted: Oct 1, 2025 at 2:36 PM EDT (3 months ago)
2. First comment: Oct 1, 2025 at 2:52 PM EDT (16m after posting)
3. Peak activity: 52 comments in 0-3h (hottest window of the conversation)
4. Latest activity: Oct 3, 2025 at 3:23 PM EDT (3 months ago)
ID: 45441447 · Type: story · Last synced: 11/20/2025, 6:56:52 PM
It'd be great to hear from someone at Netflix about the unexpected Bojack Horseman results. I'd bet that Netflix just isn't yet taking advantage of AV1 features designed especially for this kind of animation and synthetic content.
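For what it's worth, AV1 does have tools aimed at exactly this kind of content: palette mode, intra block copy, and a screen-content tune in libaom. A minimal sketch of what turning them on could look like, assuming an ffmpeg build with libaom-av1, that these aomenc option names pass through -aom-params, and made-up file names:

```python
import subprocess

# Sketch: re-encode an animated clip with libaom-av1's synthetic/screen-content
# tools enabled. Assumes ffmpeg is built with libaom-av1 and that these aomenc
# options (tune-content, enable-palette, enable-intrabc) are accepted via -aom-params.
subprocess.run([
    "ffmpeg", "-i", "bojack_clip.mkv",
    "-c:v", "libaom-av1",
    "-crf", "30", "-b:v", "0",  # constant-quality mode
    "-aom-params", "tune-content=screen:enable-palette=1:enable-intrabc=1",
    "av1_screen_tools.mkv",
], check=True)
```

Whether Netflix's production pipeline (they've written about SVT-AV1) exposes the same knobs is a separate question, so treat this purely as an illustration of the kind of feature the parent comment is referring to.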
TVs have horrible UIs and are generally ad-ridden garbage. I won't use anything Android-based for the same reason, plus it's slow.
Also, 6 GHz Wi-Fi would be nice. I had to run a cable to my 2 because the 5 GHz airspace where I am is too crowded to stream high-quality movies via Infuse without occasional hitching, and seeking suffers the same way. Meanwhile, my iPhone gets 2.9 Gbps of goodput with solid jitter on 6 GHz Wi-Fi.
There are probably some updates coming to the HDR standards, but for me at least, the current one already covers what my TV can do.
Also, apps sometimes seem to assume "if hardware decode isn't available, don't serve AV1." As silly as that is given the CPU power in the Apple TV, at least that problem would go away with hardware support, and they'd stop trying to serve a "compatible" SDR H.264 stream. Despite internet pessimism, more efficient codecs sometimes raise quality too, rather than just delivering "the same quality at less bandwidth."
I'd bet money that when TVs are advertised at 120 FPS, they're really 119.88 FPS, so there's no judder showing 23.976 FPS and the other NTSC-offset display rates.
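The divisibility argument is easy to sanity-check with exact fractions (nothing here is specific to any particular TV; it's just the 1000/1001 NTSC-offset arithmetic):

```python
from fractions import Fraction

film = Fraction(24000, 1001)         # "23.976" fps film on NTSC-offset timing
panel_ntsc = Fraction(120000, 1001)  # a "119.88 Hz" panel
panel_true = Fraction(120, 1)        # a true 120 Hz panel

print(panel_ntsc / film)  # 5        -> each frame held for exactly 5 refreshes, no judder
print(panel_true / film)  # 1001/200 -> 5.005 refreshes per frame, so the cadence must vary
```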
This isn't a completely unreasonable decision, since the current 2022 model's software AV1 decode apparently can sustain 4K for as little as 45 minutes before thermal throttling kicks in (although it handled 1080p content fine in my test).
In terms of AV1 support, YouTube often serves 4K only as AV1, so that's an issue for people.
Personally, I'd love to see an Apple TV that was great for gaming. New Apple processors have hardware ray tracing and decent gaming performance.
I think it's also likely that Apple will try to make an Apple TV that supports next-gen Siri and on-device AI stuff. Yes, you can complain about Apple's AI delays, but Apple is probably looking toward an Apple TV that can run their AI models.
In some ways, the answer to "what benefit would a new <insert-thing> have?" is that sometimes we don't know until we have it and people start using it.
I don't think new graphics hardware solves the problem. Beyond the friction of the unit not shipping with a controller, tvOS lacks good discovery for games and there is no ad infrastructure comparable to mobile. Most game developers aren't looking to invest in small, closed platforms with bad discovery. It's hard enough to make money on Apple's mobile platforms.
While the percentages look scary, it's only a slight difference (60 kbps!) and still around 1 Mbps average, but with a significant quality boost (very crisp lines and near-perfect quality). I bet Netflix could encode at nearly half that bitrate and stay similar to HEVC in quality, but I'm pleased they seem to have made a good tradeoff here.
It's actually quite amazing the quality that AV1 delivers at such low bitrates across the board. I've said it before, but AV1 is almost magical, which I think is behind the lack of enthusiasm for VVC/H.266; is anyone even using that? I've yet to actually see it in the wild.
https://aomedia.org/press%20releases/AOMedia-Announces-Year-...
I think they're technical tests as much as anything, but they're interesting to see.
While the subject matter is interesting, I feel like obviously synthetic content falls into the "that which was not worth writing is not worth reading either" trap.
If the author's tone just naturally happens to be extremely ChatGPT-esque, I apologize in advance.
> The data shows it’s not just an incremental improvement; it’s a demolition.
Complete with extra bold to emphasize the second half, sigh.
I keep getting a paragraph or two into something, read one of those terrible "It's not just [word], it's massive hyperbole!" sentences, see that there are several more in subsequent paragraphs, and can't continue.
However bad the author's original writing that generated this output was, it can't have been as awful as this.
Rather than this inflated slop that looks like I'm trying to hit a word count in a paper, where one sentence becomes 15 useless ones.
Edit: This is not so much commentary on AI as it is that the core of your post is a few tables. Just post the tables and a sentence or two of conclusion, and that's all! It is so tedious to read through dozens of paragraphs of autogenerated, unnecessary nonsense that contributes nothing of value to the data.
https://singhkays.com/archives/
But on the bright side I've got a large backlog of ideas now just waiting to be turned into pixels
https://news.ycombinator.com/item?id=45446589
LLMs seem to love putting stuff in bold; that's an immediate red flag for me.
I think what stands out to me is this cartoonishly punchy, faux-dramatic framing.
That, and specialist terms that seem to be thrown in there in an empty way, just to signal subject-matter expertise that’s not even expected of a DIYer’s experiment report:
> It’s a multi-decade, billion-dollar street fight over bytes and pixels, waged in the esoteric battlegrounds of DCT blocks and entropy coding
I like using real idioms that have percolated through culture ('birds of a feather', 'white elephant', 'nip in the bud', etc.), not stupid contrivances.
As someone who sweated through hours and hours of English essay-writing in school, LLM output that is misrepresented as genuine human writing is annoying and highly disrespectful of the reader's time and effort. The moment I saw the stupid, contrived headers and dozens of emojis, I closed the tab.
I refuse to waste my time reading the output of a matrix multiplication done in some server farm when I could just do that myself.
https://news.ycombinator.com/item?id=45446589
> uses slopbot 9000 to explode his point into ten times the "prose"
> mfw
To be transparent, I was experimenting with a more "punchy," narrative style to weave in some wit and humor. I didn't want the writing to feel dry and was aiming for a flow that was more entertaining. In retrospect, I clearly overshot the mark and ended up with something that feels inauthentic and distracts from the main point.
The experiment and the data are what I was most excited to share, and the writing shouldn't obscure that. Based on this feedback, I'll revise the article to be more direct, cut the fluff, and let the numbers do the talking.
Seriously, I appreciate the reality check! This is a great lesson in "know your audience." :)
… and so I'll continue to stick with AVC, thanks! :-)
One of the quite expensive paid plans, as the free one has to have "Created with Datawrapper" attribution at the bottom. I would guess they've vibe-coded their way to a premium version without paying, as the alternative is definitely outside individual people's budgets (>$500/month).
Presumably nothing jumped out at the author as being worse, but come on: how can you have a whole section on why AV1's regression on Bojack is actually a good thing because the quality is way higher, and then not show any quality comparisons?
I was really impressed with my setup, but even after disabling content blockers (Firefox Focus) and turning off Mullvad's free DNS proxy service, still nothing!
Perhaps the author turned the ads off since you visited?
Also, if anyone was wondering where AV1 stands in comparison to VP8 and VP9: I just looked it up after a few years of not paying attention, and I gather Google donated VP8 and VP9 to the Alliance for Open Media (AOMedia) in 2015, which then created AV1 and released it in 2018.
https://news.ycombinator.com/item?id=45446589
For mobile, I don't know who outside of Netflix is delivering AV1. If anyone is, I expect them to be leveraging hardware AV1 decoders for battery life instead of employing a software-only solution like dav1d. That said, I think Netflix was using dav1d where it had a benefit (e.g., low-quality cellular networks).
https://netflixtechblog.com/netflix-now-streaming-av1-on-and...
Pretty sure YouTube forces AV1 with dav1d on devices that don't support hardware decoding, which is how I became aware of the battery problem.
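To make the trade-off concrete, here's a toy sketch of the kind of decision an app might make; this is purely illustrative, not Netflix's or YouTube's actual selection logic:

```python
# Toy illustration of the decode-path trade-off discussed above.
# Purely hypothetical -- not any real app's logic.
def pick_decode_path(has_hw_av1: bool, network_constrained: bool) -> str:
    if has_hw_av1:
        return "AV1 via hardware decoder"      # efficiency win at no battery cost
    if network_constrained:
        return "AV1 via dav1d (software)"      # spend CPU/battery to save bandwidth
    return "HEVC/AVC via hardware decoder"     # skip the software-decode cost

print(pick_decode_path(has_hw_av1=False, network_constrained=True))
# -> AV1 via dav1d (software)
```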
Perhaps the most important point, which I have yet to see anyone raise: those bitrate numbers are appalling! 2-4 Mbps average bitrate? For a service you are paying for? I knew Netflix had gotten bad, but this is worse than I thought. Even some high-end YouTube content does better than that. It should be 8 Mbps minimum, and at these bitrates the difference between H.264 and AV1 won't be so obvious.
> Those bitrate numbers are appalling! 2-4 Mbps average bitrate?
While those may sound low, my guess is that Netflix didn't see enough of a gain in perceptual quality (or VMAF scores) to justify sending more bits down the pipe and increasing their bandwidth bill.
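For scale, the data-per-hour arithmetic at those average bitrates is straightforward (decimal units; real per-title encoding ladders obviously vary):

```python
def gb_per_hour(mbps: float) -> float:
    # Mbit/s * 3600 s -> Mbit, then / 8 -> MB, / 1000 -> GB (decimal units)
    return mbps * 3600 / 8 / 1000

for mbps in (2, 4, 8):
    print(f"{mbps} Mbps ~= {gb_per_hour(mbps):.2f} GB/hour")
# 2 Mbps ~= 0.90, 4 Mbps ~= 1.80, 8 Mbps ~= 3.60 GB/hour
```

Multiply that by hours watched across the whole subscriber base and the incentive to shave bits at constant VMAF is obvious.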
VMAF is better than SSIM or PSNR, but it is still far from perfect, and that's one reason why most torrents are still using HEVC and AVC.
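For anyone who wants to run this kind of comparison themselves, ffmpeg can compute VMAF (and PSNR/SSIM) between an encoded clip and a reference; a minimal sketch, assuming an ffmpeg build with libvmaf and placeholder file names:

```python
import subprocess

# Score an encoded clip against its reference with VMAF.
# Assumes ffmpeg was built with libvmaf; file names are placeholders.
# First input is the distorted/encoded clip, second is the reference.
subprocess.run([
    "ffmpeg", "-i", "encoded.mkv", "-i", "reference.mkv",
    "-lavfi", "libvmaf", "-f", "null", "-",
], check=True)

# PSNR and SSIM work the same way via their own filters: -lavfi psnr / -lavfi ssim
```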
By now, it should be in most devices that aren't outdated even by average standards. And it's worth mentioning that for devices that don't have hardware decoding, dav1d does an excellent job of decoding it on the CPU.
The problem is more with hardware encoding. That's only present in the last generation or two of hardware, and even then, AMD, for example, has an aspect-ratio limitation bug in their AV1 hardware encoder (which requires adding black bands to work around) that's only fixed in RDNA 4. RDNA 4 isn't available in their APUs, so it won't be fixed there until UDNA reaches them (they didn't fix it in RDNA 3.5 chips).
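For anyone who runs into that, the workaround I've seen is to pad the frame up to dimensions the encoder accepts before encoding (reportedly 64-pixel alignment on pre-RDNA 4 parts, so 1920x1080 gets padded to 1920x1088). A rough sketch, assuming an ffmpeg build with the av1_amf encoder (av1_vaapi would be the Linux route) and made-up file names:

```python
import subprocess

# Hypothetical workaround sketch for the pre-RDNA 4 AMD AV1 encoder limitation
# mentioned above: pad the frame height up with black bands (reportedly to a
# multiple of 64, e.g. 1920x1080 -> 1920x1088) before hardware encoding.
# Assumes an ffmpeg build with av1_amf (use av1_vaapi on Linux); file names are made up.
subprocess.run([
    "ffmpeg", "-i", "input_1080p.mkv",
    "-vf", "pad=iw:ceil(ih/64)*64",   # black bands added at the bottom
    "-c:v", "av1_amf",
    "output_av1.mkv",
], check=True)
```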