FFmpeg to Google: Fund Us or Stop Sending Bugs
Posted about 2 months ago · Active about 2 months ago
thenewstack.io · Tech · story · High profile
Debate: heated, mixed (85/100)
Key topics
Open Source
Software Maintenance
Corporate Responsibility
FFmpeg maintainers express frustration with Google's bug reporting practices, sparking debate about the responsibilities of large corporations towards open-source projects they rely on.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 40m after posting
Peak period: 100 comments in 0-6h · Avg per period: 14.5
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Nov 11, 2025 at 1:32 PM EST (about 2 months ago)
2. First comment: Nov 11, 2025 at 2:11 PM EST (40m after posting)
3. Peak activity: 100 comments in 0-6h, the hottest window of the conversation
4. Latest activity: Nov 14, 2025 at 7:05 PM EST (about 2 months ago)
ID: 45891016 · Type: story · Last synced: 11/26/2025, 1:00:33 PM
Now, if Google or whoever really feels that fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.
Then everybody wins.
This is very far from obvious. If Google doesn't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
And occasionally you do see immediate disclosures (see below). This usually happens for vulnerabilities that are time-sensitive or actively being exploited, where users need to know ASAP. It's very context dependent. In this case I don't think that applies, so there's a standard delayed disclosure as a courtesy, giving the project time to fix it first.
Note the word "courtesy". The public interest always overrides considerations for the project's fragile ego after some time.
(Some examples of shortened disclosures include Cloudbleed and the aCropalypse cropping bug, where in each case there were immediate reasons to notify the public / users)
So when the xz backdoor was discovered, you think it would have been better to sit on that quietly and try to both wrest control of upstream away from the upstream maintainers and wait until all the downstream projects had reverted the changes in their copies before making that public? Personally I'm glad that went public early. Yes there is a tradeoff between speed of public disclosure and publicity for a vulnerability, but ultimately a vulnerability is a vulnerability and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available, at least that way I can decide to shut down my Debian boxes.
This bug is almost certainly too obscure to be found and exploited in the time it takes FFmpeg to produce a fix. On the other hand, this vuln being public so soon means any attacker is now free to develop an exploit before a fix is available.
If Google's goal is security, this vulnerability should only be disclosed after it's fixed or after a reasonable time (which, according to the FFmpeg devs, 90 days is not, because they receive too many reports from Google).
But ultimately that's my point. You as an individual do not know who else has access or information about the bug/vulnerability you have found, nor do you have any insight into how quickly they intend to exploit that if they do know about it. So the right thing to do when you find a vulnerability is to make it public so that people can begin mitigating it. Private disclosure periods exist because they recognize there is an inherent tradeoff and asymmetry in making the information public and having effective remediations. So the disclosure period attempts to strike a balance, taking the risk that the bug is known and being actively exploited for the benefit of closing the gap between public knowledge and remediation. But inherently it is a risk that the bug reporter and the project maintainers are forcing on other people, which is why the end goal must ALWAYS be public disclosure sooner rather than later.
Meanwhile the XZ backdoor was 100% meant to be used. I didn't say when, and that doesn't matter; there is a malicious actor with the knowledge to exploit it. We can't say the same about a bug in a 1998 codec that was found by extensive fuzzing, with no obvious exploitation path.
Now, should it be patched? Absolutely. But should the patch be done ASAP at the cost of other, maybe more important, security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable.
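As an aside on what "found by extensive fuzzing" means in practice: the real pipelines run coverage-guided fuzzers (libFuzzer, OSS-Fuzz) against libavcodec directly, but a crude black-box sketch of the idea is easy to write in Python. Everything here is illustrative only; the sample.mp4 seed file, iteration count, and output paths are assumptions:

```python
import pathlib
import random
import subprocess

seed = pathlib.Path("sample.mp4").read_bytes()  # assumed seed file

for i in range(1000):
    data = bytearray(seed)
    # Flip a handful of random bytes per iteration.
    for _ in range(random.randint(1, 8)):
        data[random.randrange(len(data))] = random.randrange(256)
    path = pathlib.Path(f"/tmp/fuzz_{i}.mp4")
    path.write_bytes(data)
    # Decode to the null muxer; a negative return code means ffmpeg was
    # killed by a signal (SIGSEGV etc.), i.e. a crash worth triaging.
    result = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", str(path), "-f", "null", "-"],
        capture_output=True,
    )
    if result.returncode < 0:
        print(f"crash: {path} (signal {-result.returncode})")
```

A coverage-guided fuzzer replaces this blind mutation loop with feedback about which code paths each input reached, which is how parsers for decades-old codecs get exercised at all.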
I fully agree which is why I really don’t understand why everyone is all up in arms here. Google didn’t demand that this bug get fixed immediately. They didn’t demand that everything be dropped to fix a 25 year old bug. They filed a (very good and detailed) bug report to an open source product. They gave a private report out of courtesy and an acknowledgment of the tradeoffs inherent in public bug disclosure, but ultimately a bug is a bug, it’s already public because the source code is public. If the ffmpeg devs didn’t feel it was important to fix right away, nothing about filing a bug report, privately or publicly changes any of that.
I can understand that stance for serious bugs and security vulnerabilities. I can understand such delays for a company with a big market cap to put pressure on them. But these delays are exactly like a demand put on the company: fix it asap or it gets public. We wouldn't have to do this if companies in general didn't need to get publicly pressured into fixing their stuff. Making it public has two objectives: Warn users they may be at risk, and force the publisher to produce a fix asap or else risk a reputation hit.
> If the ffmpeg devs didn’t feel it was important to fix right away, nothing about filing a bug report, privately or publicly changes any of that.
It does change how they report. Had they given more time, or staggered their reports over time, FFmpeg wouldn't have felt pressure to publish fixes ASAP. Even if the devs could say they won't fix it, any public project will want to keep a certain quality level and not let security vulnerabilities go public.
In the end, had these reports been made by random security researchers, no drama would have happened. But if I see Google digging up 25-year-old bugs, is it that much to expect them to provide a patch with it?
I think this is where the disconnect is. To my mind there is no "do this or else" message here, because there is no "or else". The report is a courtesy advance notice of a bug report that WILL be filed, no matter what the ffmpeg developers do. It's not like this is some awful secret that Google is promising not to disclose if ffmpeg jumps to their tune.
Further, the reality is most bug reports are never going to be given a 90 day window. Their site requests that if you find a security vulnerability you email their security team, but it doesn't tell you not to also file a bug report, and their bug report page doesn't tell you not to file anything you think might be a security or vulnerability bug to the tracker. And a search through the bug tracker shows more than a few open issues (sometimes years old) reporting segfault crashes, memory leaks, un-initialized variable access, heap corruption, divide by zero crashes, buffer overflows, null pointer dereferences and other such potential safety issues. It seems the ffmpeg team has no problems generally with having a backlog of these issues, so certainly one more in a (as we've been repeatedly reminded) 25 year old obscure codec parser is hardly going to tank their reputation right?
> In the end, had these reports been made by random security researchers, no drama would have happened.
And now we get to what is really the heart of the matter. If anyone else had reported this bug in this way, no one would care. It's not that Google did anything wrong; it's that Google has money, so everyone is mad that they didn't do even more than they already do. And frankly that attitude stinks. It's hard enough getting corporations to actually contribute back to open source projects, especially when the license doesn't obligate them to at all. I'm not advocating holding corporations to some lesser standard: if the complaint was that Google was shoving unvalidated, and un-validatable, low-effort reports into the bug tracker, or that they actually were harassing the ffmpeg developers with constant followups on their tickets and demands for status updates, then that would be poor behavior we would be equally upset about coming from anyone. But like you said, any other security researcher behaving the same way would be just fine. Shitting on Google for behaving according to the same standards outlined on ffmpeg's own website, because of who they are and not what they've done, just tells other corporations that it doesn't matter if you contribute code and money in addition to bug reports: if you don't meet someone's arbitrary standard based on WHO you are, rather than WHAT you do, you'll get shit on for it. And that's not going to encourage more cooperation and contributions from the corporations that benefit from these projects.
I don't want to discourage anyone from submitting patches, but that does not necessarily remove all (or even the bulk of) the work from the maintainers. As someone who has received numerous patches to multimedia libraries from security researchers, they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know if the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether or not there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.
You can never be sure that you're the only one in the world that has discovered or will discover a vulnerability, especially if the vulnerability can be found by an LLM. If you keep a vulnerability a secret, then you're leaving open a known opportunity for criminals and spying governments to find a zero day, maybe even a decade from now.
For this one in particular: AFAIK, since the codec is enabled by default, anyone who processes a maliciously crafted .mp4 file with ffmpeg is vulnerable. Being an open-source project, ffmpeg has no obligation to provide me secure software or to patch known vulnerabilities. But publicly disclosing those vulnerabilities means that I can take steps to protect myself (such as disabling this obscure niche codec that I'm literally never going to use), without any pressure on ffmpeg to do any work at all. The fact that ffmpeg commits themselves to fixing known vulnerabilities is commendable, and I appreciate them for that, but they're the ones volunteering to do that -- they don't owe it to anyone. Open-source maintainers always have the right to ignore a bug report; it's not an obligation to do work unless they make it one.
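To make "disabling this obscure niche codec" concrete: one low-effort first step is checking whether the decoder is compiled into your build at all. A minimal sketch in Python; the decoder name "sanm" (the LucasArts Smush decoder reportedly involved) is an assumption, so substitute whichever decoder you want to audit:

```python
import subprocess

def decoder_present(name: str) -> bool:
    """Return True if `ffmpeg -decoders` lists the named decoder."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-decoders"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Output lines look like " V....D sanm   LucasArts Smush ...";
    # compare against the name column, not the free-text description.
    return any(line.split()[1:2] == [name] for line in out.splitlines())

# "sanm" is an assumption about the decoder involved here.
if decoder_present("sanm"):
    print("present; consider rebuilding with ./configure --disable-decoder=sanm")
```

FFmpeg's configure script accepts --disable-decoder=NAME, so rebuilding without a codec you never use is a one-flag change.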
Vulnerability research is itself a form of contribution to open source -- a highly specialized and much more expensive form of contribution than contributing code. FFmpeg has a point that companies should be better about funding and contributing to open-source projects that they rely on, but telling security researchers that their highly valuable contribution is not welcome because it's not enough is absurd, and is itself an example of making ridiculous demands for free work from a volunteer in the open-source community. It sends the message that white-hat security research is not welcome, which is a deterrent to future researchers from ethically finding and disclosing vulnerabilities in the future.
As an FFmpeg user, I am better off in a world where Google disclosed this vulnerability -- regardless of whether they, FFmpeg, or anyone else wrote a patch -- because a vulnerability I know about is less dangerous than one I don't know about.
Not publicly disclosing it also carries risk. Library users get the wrong impression that the library has no vulnerabilities, while numerous reported bugs stay invisible because of the project's disclosure policy.
vs. spamming OSS maintainers with slop reports costs Google nothing
If it was slop they could complain that it was wasting their time on false or unimportant reports, instead they seem to be complaining that the program reported a legitimate security issue?
For a human, generating bug reports requires a little labor with a human in the loop, which imposes a natural rate limit on how many reports are submitted, which also imposes a natural triaging of whether it's personally worth it to report the bug. It could be worth it if you're prosocially interested in the project or if your operations depend on it enough that you are willing to pay a little to help it along.
For a large company which is using LLMs to automatically generate bug reports, the cost is much lower (indeed it may be longer-term profitable from a standpoint like marketing, finding product niches, refining models, etc.) This can be asymmetric with the maintainer's perspective, where the quality and volume of reports matter in affecting maintainer throughput and quality of life.
If that's what I have to expect, I'd rather not even interact with them at all.
For instance, I reported to the xorg-bug tracker that one app behaved oddly when I did --version on it. I was batch-reporting all xorg-applications via a ruby script.
Alan Coopersmith, the elderly hero that he is, fixed this not long after my report. (It was a real bug; granted, a super-small one, but still.)
I could give many more examples here. (I don't remember the exact date but I think I reported this within the last 3 years or so. Unfortunately reporting bugs in xorg-apps is ... a bit archaic. I also stopped reporting bugs to KDE because I hate bugzilla. Github issues spoiled me, they are so easy and convenient to use.)
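For what it's worth, the batch smoke test described above takes only a few lines. A rough Python equivalent of the idea (the original was a Ruby script; the directory, name filter, and timeout here are all assumptions):

```python
import subprocess
from pathlib import Path

BIN_DIR = Path("/usr/bin")  # assumed location of the installed X.Org apps

for exe in sorted(BIN_DIR.glob("x*")):  # crude name filter; adjust to taste
    try:
        r = subprocess.run(
            [str(exe), "--version"],
            capture_output=True, text=True, timeout=5,
        )
        if r.returncode != 0:
            print(f"{exe.name}: exit {r.returncode}: {r.stderr.strip()[:80]}")
    except subprocess.TimeoutExpired:
        print(f"{exe.name}: hung on --version")
    except OSError:
        pass  # broken symlink, not executable, etc.
```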
Their choice becomes to:
- maintain a complex fork, constantly integrating from upstream;
- pin to some old version and maybe go through a Herculean effort to rebase when something they truly must have merges upstream;
- or genuinely fork it and employ an expert in this highly specific domain to write what will often end up being parallel features and security patches to mainline FFmpeg.
Or, of course, pay someone already doing OSS work to fix it in mainline. Which is the beauty of open source; that's genuinely the least painful option, and also happens to be the one that benefits the community the most.
But the way I see it, a bug report is a bug report; no matter how small or big the bug or the team, it should be addressed.
I don’t know, I’m not exactly a pillar of the FOSS community with weight behind my words.
As the article states, these are AI-generated bug reports. So it's a trillion-dollar company throwing AI slop over the wall and demanding a 90-day turnaround from unpaid volunteers.
and the ffmpeg maintainers say it's not wanted
so it's slop
I'm not a Google fan, but if the maintainers are unable to understand that, I welcome a fork.
There is a convergence of very annoying trends happening: more and more reports are garbage, found and written using AI, with an impact that is questionable at best; the way CVEs are published and classified is idiotic; and the platforms funding vulnerability research, like Google, are more and more hostile to projects, leaving very little time to actually work on fixes before publishing.
This is leading to more and more open source developers throwing in the towel.
Some of them are not even bugs in the traditional sense of the word but expected behaviours which can lead to insecure side effects.
This was a bug, which caused an exploitable security vulnerability. The bug was reported to ffmpeg, over their preferred method for being notified about vulnerabilities in the software they maintain. Once ffmpeg fixed the bug, a CVE number was issued for the purpose of tracking (e.g. which versions are vulnerable, which were never vulnerable, which have a fix).
Having a CVE identifier is important because we can't just talk about "the ffmpeg vulnerability" when there have been a dozen this year, each with different attack surfaces. But it really is just an arbitrary number, while the bug is the actual problem.
Things which were once managed inside a project now have visibility outside of it. You can justify that however you want, e.g. the need to have an identifier; it doesn't fundamentally change how it impacts the dynamic.
Also, the discussion is not about a specific bug. It's a general discussion regarding how Google handles disclosure in the general case.
The 90 day period is the grace period for the dev, not a demand. If they don't want to fix it then it goes public.
That’s how open source works.
If this keeps up, there won't be anyone willing to maintain the software, due to burnout.
In today's situation, free software is keeping many companies honest. Losing that kind of leverage would be a loss to the society overall.
And the public disclosure is going to hurt the users which could include defense, banks and other critical institutions.
What's the point of just showering these things with bug reports when the same tool (or a similar one) can also apparently fix the problem too?
Imagine you're a humble volunteer OSS developer. If a security researcher finds a bug in your code they're going to make up a cute name for it, start a website with a logo, Google is going to give them a million dollar bounty, they're going to go to Defcon and get a prize and I assume go to some kind of secret security people orgy where everyone is dressed like they're in The Matrix.
Nobody is going to do any of this for you when you fix it.
Doesn't really fit with your narrative of security researchers as shameless glory hounds, does it?
Note FFmpeg and cURL have already had maintainers quit from burnout from too much attention from security researchers.
I don't know if you'd be satisfied with that, but it would certainly allow them to easily write the blog posts you seem to be complaining about, while keeping the load on maintainers minimal. Blog posts appear to be quite infrequent compared to the total number of vulnerabilities they report; around 20 vulnerability reports per year seems like a manageable load for the entire FOSS community to bear. And almost none of those yearly reports would go to ffmpeg, if not literally none, given the Project Zero blog has 0 search results for "ffmpeg" or "libav"; a significant portion of their blog posts aren't even about FOSS at all, but about proprietary software like the operating systems Microsoft and Apple make.
I do think such a thing would be bad for everyone, though (including the ffmpeg developers themselves, to be honest) - Project Zero is good for everyone's security, in my opinion, and even if all FOSS developers were to universally decide to reject all Project Zero reports that don't come with a patch, and Google decided to still not make such patches, people being able to know that these vulnerabilities exist is still a good thing nonetheless - certainly much better than more vulnerabilities being left in for malicious actors to discover and use in zero-day attacks.
I suppose you'd prefer they abandon their projects entirely? Because that's the real alternative at this point.
And maybe it's fine to have AI-generated articles that summarize Twitter threads for HN, but this is not a good summarization of the discussion that unfolded in the wake of this complaint. For one, it doesn't mention a reply from Google security, which you would think should be pretty relevant here.
https://issuetracker.google.com/issues/440183164?pli=1
It's of excellent quality. They've made things about as easy for the FFmpeg devs as possible without actually fixing the bug, which might entail legal complications they want to avoid.
AI was only used to find the bug.
Then there's the bunch of people who make era-defining software, and the other bunch who extract everything they can from it. From customers, transactionally. From the first bunch, pure extraction (slavery, anyone?).
They could adopt a more flexible policy for FOSS though.
Project Zero hasn't reported any vulnerabilities in any software I maintain. Lots of other security groups have, some well respected as well, but to my knowledge none of these "outside" reports were actual vulnerabilities when analyzed in context.
Where did you get that idea?
chill, nobody knows what ffmpeg is
Why is there such weird toxic empathy around ffmpeg?
If this weren't Google but a lone developer, by now he'd be doxxed, fired, and receiving death threats. It's not the first time ffmpeg has lashed out like this.
It’s hard to take any comment seriously that tries to use “slavery” for situations where nobody is forced to do anything for anyone.
Though it's more likely there's just some team tasked with shoving AI into vulnerability hunting.
To me it's okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I'm sure you know better than me that that's not how volunteering works.
Let's say that FFmpeg has a CVSS 10 CVE where a very simple stream can cause an RCE. So what?
We are talking about software commonly deployed by end users to encode their own media, something that rarely comes in untrusted forms. For an exploit to happen, an attacker needs to get a malicious media file out to people who commonly transcode it via FFmpeg. Not an easy task.
This sure does matter to the likes of Google, assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
Looks like Firefox does the same.
I would be shocked if any company working with user-generated video, from the likes of Zoom, TikTok, or YouTube down to small apps everywhere, did not have it in their pipeline somewhere.
One because they are a Rust shop and GStreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations on the source streams at a basal level vs. high-level batch transformations/transcoding.
My point was it would be hard to imagine eschewing ffmpeg completely, not that there is no value for other tools and ffmpeg is better at everything. It is so versatile and ubiquitous it is hard to not use it somewhere.
In my experience there are almost always scenarios in the stack where throwing in ffmpeg for a step is simpler and easier, even if there's no proper language binding, for some non-core step or other.
From a security standpoint that wouldn't matter. As long as it touches data, security vulnerabilities are a concern.
It would be surprising, not impossible, to forgo ffmpeg completely. It would be just like this site being written in Lisp: not something you would typically expect, but not impossible.
It would be surprising to find memory corruption in tar in 2025, but not in ffmpeg.
In this world the user is left vulnerable, because attackers can use published vulnerabilities that the maintainers are too overwhelmed to fix.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
That's a pretty core difference.
I saw another poster say something about "buggy software". All software is buggy.
It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.
xkcd 2347
I fail to see a single Google logo. I also didn't know that Google somehow had a contract with ffmpeg to be their customer.
This bit of ffmpeg is not a Chrome dependency, and likely isn’t used in internal Google tools either.
> Just publishing bug reports by themselves does not make open source projects secure!
It does, especially when you first privately report them to the maintainers and give them plenty of time to fix the bug.
Nobody is against Google reporting bugs, but they use automated AI tooling to spam them and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: is the bug serious enough that users must be warned so they can avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?
In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong, or would you rather have to find a broken codec first?
While it's true that only Google has Google infrastructure, this presupposes that 100% of all published exploits would be findable.
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
[1] https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
Not really. It requires time, ergo money.
Yes? It's in the license
>NO WARRANTY
>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.
Just because software makes no guarantees about being safe doesn’t mean I want it to be unsafe.
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
But yes, things you get for free have no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
And still, we live in a society. We have to use software, bugs or not.
727 more comments available on Hacker News