We Should All Be Using Dependency Cooldowns
Key topics
Regulars are buzzing about the proposal to implement "dependency cooldowns," a strategy of waiting a set period before adopting newly published dependency versions, giving teams time to vet them before integrating. Commenters riff on the potential benefits, including reduced risk of introducing bugs and improved stability, with some sharing their own experiences with dependency management. As the discussion unfolds, a consensus emerges that cooldowns could be a valuable tool, but one that raises important questions about trade-offs between security, stability, and innovation. The idea feels particularly relevant now as software supply chains continue to grow in complexity.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment 59m after posting. Peak period: 82 comments in the 0-6h window. Average per period: 14.5. Based on 160 loaded comments.
Key moments
- Story posted: Nov 21, 2025 at 9:50 AM EST
- First comment: Nov 21, 2025 at 10:49 AM EST (59m after posting)
- Peak activity: 82 comments in 0-6h, the hottest window of the conversation
- Latest activity: Nov 24, 2025 at 6:13 AM EST
The probability of introducing bugs is a function of the amount of development being done. Releasing less often doesn't change that. In fact, under that assumption, delaying releases strictly increases the amount of time users are affected by the average bug.
People who do this tell themselves the extra time allows them to catch more bugs. But in my experience that's a bedtime story, most bugs aren't noticed until after deployment anyway.
That's completely orthogonal to slowly rolling out changes, btw.
(This also doesn't apply to vulnerabilities per se, since known vulnerabilities typically aren't evaluated against cooldowns by tools like Dependabot.)
A sane "cooldown" is just for automated version updates relying on semantic versioning rules, which is a pretty questionable practice in the first place, but is indeed made a lot more safe this way.
You can still manually update your dependency versions when you learn that your code is exposed to some vulnerability that's purportedly been fixed. It's no different than manually updating your dependency version when you learn that there's some implementation bug or performance cliff that was fixed.
You might even still use an automated system to identify these kinds of "critical" updates and bring them to your attention, so that you can review them and can appropriately assume accountability for the choice to incorporate them early, bypassing the cooldown, if you believe that's the right thing to do.
Putting in that effort, having the expertise to do so, and assuming that accountability is kind of your "job" as a developer or maintainer. You can't just automate and delegate everything if you want people to be able to trust what you share with them.
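For illustration, here is a minimal sketch of what such a cooldown-with-manual-override check could look like; the PyPI JSON API usage, the 14-day window, and the "newest upload older than the window" rule are all assumptions for the sake of the example, not a description of any particular tool:

```python
# Minimal sketch of a cooldown check, assuming the public PyPI JSON API
# (https://pypi.org/pypi/<name>/json) and an arbitrary 14-day window.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(days=14)  # illustrative policy, not a recommendation

def latest_cooled_version(package: str) -> str | None:
    """Return the most recently published version older than COOLDOWN."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        releases = json.load(resp)["releases"]
    now = datetime.now(timezone.utc)
    cooled = []
    for version, files in releases.items():
        if not files:  # skip releases with no files (e.g. fully yanked)
            continue
        uploaded = min(
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for f in files
        )
        if now - uploaded >= COOLDOWN:
            cooled.append((uploaded, version))
    # "Newest upload date" is a simplification; a real tool would compare
    # versions properly and still let a human bypass the window for a fix
    # they have reviewed and decided they need right away.
    return max(cooled)[1] if cooled else None

if __name__ == "__main__":
    print(latest_cooled_version("requests"))
```

A real tool would also surface known-critical advisories separately so that the override decision lands on a person's desk rather than being silently automated.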
There's no reason to pretend we live in a world where everyone is manually combing through the source of every dependency update.
I'm reminded of how new Setuptools versions are able to cause problems for so many people, basically because install tools default to setting up isolated build environments using the latest version of whatever compatible build backend is specified (which in turn defaults to "any version of Setuptools"). qv. my LWN article https://lwn.net/Articles/1020576/ .
Except that if everyone does it, the chance of malicious things being spotted in source also drops, by virtue of fewer eyeballs.
It still helps, though, in cases where the maintainer spots it, etc.
The underlying premise here is that supply chain security vendors are honest in their claims about proactively scanning (and effectively detecting + reporting) malicious and compromised packages. In other words, it's not about eyeballs (I don't think people who automatically apply Dependabot bumps are categorically reading the code anyways), but about rigorous scanning and reporting.
I don't think the people automatically updating and getting hit with the supply chain attack are also scanning the code, so I don't think this will impact them much.
If instead, updates are explicitly put on cooldowns, with the option of manually updating sooner, then there would be more eyeballs, not fewer, as people are more likely to investigate patch notes, etc., possibly even test in isolation...
Instead of a period where you don't use the new version, shouldn't we instead be promoting a best practice of not just blindly using a package or library in production? This "cooldown" should be a period of use in dev or QA environments while we take the time to investigate the libraries we use and their dependencies. I know this can be difficult in many languages and package managers, given the plethora of libraries and dependencies (I'm looking at you in particular JavaScript). But "it's hard" shouldn't really be a good excuse for our best efforts to maintain secure and stable applications.
Although, I suppose we've probably updated the patch version.
Dependencies are good. Churn is bad.
This is indeed what's missing from the ecosystem at large. People seem to be under the impression that if a new release of a software/library/OS/application is released, you need to move to it today. They don't seem to actually look through the changes, only doing that if anything breaks, and then proceed to upgrade because "why not" or "it'll only get harder in the future", neither of which feels like a solid choice considering the trade-offs.
While we seem to have already known that staying at the edge of version numbers introduces massive churn and unneeded work, it seems like we're waking up to the realization that it is a security trade-off as well. Sadly, not enough tooling seems to take this into account (yet?).
I had not seen that tool. Thanks for pointing it out.
I think I wouldn't object to "Dependabot on a 2-week delay" as something that at least flags. However working in Go more than anything else it was often the case even so that dependency alerts were just an annoyance if they aren't tied to a security issue or something. Dynamic languages and static languages do not have the same risk profiles at all. The idea that some people have that all dependencies are super vital to update all the time and the casual expectation of a constant stream of vital security updates is not a general characteristic of programming, it is a specific characteristic not just of certain languages but arguably the community attached to those languages.
(What we really need is capabilities, even at a very gross level, so we can all notice that the supposed vector math library suddenly at version 1.43.2 wants to add network access, disk reading, command execution, and cryptography to the set of things it wants to do, which would raise all sorts of eyebrows immediately, even perhaps in an automated fashion. But that's a separate discussion.)
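As a very rough sketch of that capability-diff idea (everything here is an assumption for illustration: the import-to-capability mapping, the hypothetical `vecmath` directories, and the premise that static imports are even a workable proxy; a real capability system would enforce this at runtime or at the permission level):

```python
# Hypothetical heuristic: treat certain stdlib imports as proxies for
# network/process/crypto capabilities and flag ones that newly appear
# between two unpacked releases of a dependency.
import ast
from pathlib import Path

CAPABILITIES = {
    "socket": "network", "ssl": "network", "urllib": "network",
    "subprocess": "command execution", "ctypes": "native code",
    "hashlib": "cryptography",
}

def capabilities_of(tree_root: str) -> set[str]:
    """Collect capability labels implied by imports in a source tree."""
    found = set()
    for path in Path(tree_root).rglob("*.py"):
        try:
            module = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue
        for node in ast.walk(module):
            names = []
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            for name in names:
                root = name.split(".")[0]
                if root in CAPABILITIES:
                    found.add(CAPABILITIES[root])
    return found

# Usage (assumed directory layout for the hypothetical vector math library):
old = capabilities_of("vecmath-1.43.1")
new = capabilities_of("vecmath-1.43.2")
print("capabilities added in the new release:", new - old)
```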
Doing updates on a regular basis (weekly to monthly) seems like a good idea so you don't forget how to do them and the work doesn't pile up. Also, it's easier to debug a problem when there are fewer changes at once.
But they could be rescheduled depending on what else is going on.
This lessens, but doesn't eliminate supply side vulns. You can still get a vulnerable new release if your schedule happens to land just after the vuln lands.
TFA proposes a _delay_ in a particular dependency being pulled in. You can still update every day/hour/microsecond if you want, you just don't get the "new" thing until it's baked a bit.
What would happen from time to time was that an important reason did come up, but the team was now many releases behind. Whoever was unlucky enough to sign up for the project that needed the updated dependency now had to do all those updates of the dependency, including figuring out how they affected a bunch of software that they weren't otherwise going to work on. (e.g., for one code path, I need a bugfix that was shipped three years ago, but pulling that into my component affects many other code paths.) They now had to go figure out what would break, figure out how to test it, etc. Besides being awful for them, it creates bad incentives (don't sign up for those projects; put in hacks to avoid having to do the update), and it's also just plain bad for the business because it means almost any project, however simple it seems, might wind up running into this pit.
I now think of it this way: either you're on the dependency's release train or you jump off. If you're on the train, you may as well stay pretty up to date. It doesn't need to be every release the minute it comes out, but nor should it be "I'll skip months of work and several major releases until something important comes out". So if you decline to update to a particular release, you've got to ask: am I jumping off forever, or am I just deferring work? If you think you're just deferring the decision until you know if there's a release worth updating to, you're really rolling the dice.
(edit: The above experience was in Node.js. Every change in a dynamically typed language introduces a lot of risk. I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update. So although there's a lot of noise with regular dependency updates, it's not actually that much work.)
That's been my experience as well. In addition, the ecosystem largely holds to semver, which means a non-major upgrade tends to be painless, and conversely, if there's a major upgrade, you know not to put it off for too long because it'll involve some degree of migration.
Although this is true, any large ecosystem will have some popular packages not holding to semver properly. Also, the biggest downside is when your `>=v1` depends - indirectly usually - on a `v0` dependency which is allowed to do breaking changes.
Meanwhile, my recent legacy Java project migration from JDK 8 -> 21, plus a ton of dependency upgrades, has been a pretty smooth experience so far.
I don't like Java but sometimes I envy their ecosystem.
Plus you can find an endless stream of experienced devs for it, who are more stable job-wise than those who come and go every 6-12 months. Stability. Top management barely cares about anything else from IT.
I'd prefer to upgrade around the time most of the nasty surprises have already been discovered by somebody else, preferably with workarounds developed.
At the same time, you don't want to be so far back that upgrading uncovers novel migration problems, or issues that nobody else cares about anymore.
For instance if you use a package that provides a calendar widget and your app uses only the “western” calendar and there is a critical vulnerability that only manifests in the Islamic calendar, you have zero reason to worry about an exploit.
I see this as a reasonable stance.
If you break my code I'm not wasting time fixing what you broke, I'm fixing the root cause of the bug: finding your replacement.
I don’t think this is specific to any one language or environment, it just gets more difficult the larger your project is and the longer you go without updating dependencies.
I’ve experienced this with NPM projects, with Android projects, and with C++ (neglecting to merge upstream changes from a private fork).
It does seem likely that dynamic languages make this problem worse, but I don’t think very strict statically typed languages completely avoid it.
Most tooling (e.g. Dependabot) allows you to set an interval between version checks. What more could be done on that front exactly? Devs can already choose to check less frequently.
that's generally true, no?
Of course, waiting a few days/weeks should be the minimum unless there's a CVE (or equivalent) that applies.
In the end, I decided to implement a lightweight PHP 5 relay to translate SQL requests so the MSSQL server could still be accessed. The university staff were quite satisfied with my work. But I really felt guilty for the next guy who will touch this setup. So I still didn't quite make it back onto the release train… as that PHP 5 relay counts.
(Literally, at one place we built a SPA frontend that was embedded in the device firmware as a static bundle, served to the client, and would then talk to a small API server. And because these NodeJS types liked to have libraries reused for server and frontend, we would get endless "vulnerability reports" - but all of this stuff only ever ran in the client's browser!)
The practical problem with this is that many large organizations have a security/infosec team that mandates a "zero CVE" posture for all software.
Where I work, if our infosec team's scanner detect a critical vulnerability in any software we use, we have 7 days to update it. If we miss that window we're "out of compliance" which triggers a whole process that no one wants to deal with.
The path of least resistance is to update everything as soon as updates are available. Consequences be damned.
The solution is to fire those teams.
So it's more of a cost-cutting/cover-your-ass measure than an actual requirement.
Let's say the reg says you're liable for damages caused by software defects you ship due to negligence, giving you broad leeway in how to mitigate risks. The corporate policy then says "CVEs with score X must be fixed in Y days; OWASP best practices; infrastructure audits; MFA; yadda yadda". Finally, the enforcement is done by automated tooling like sonarqube, prisma, dependabot, burpsuite, ... and any finding must be fixed with little nuance, because the people doing the scans lack the time or expertise to assess whether any particular finding is actually security-relevant.
On the ground the automated, inflexible enforcement and friction then leads to devs choosing approaches that won't show up in scans, not necessarily secure ones.
As an example I witnessed recently: a cloud infra scanning tool highlighted that an AppGateway was used as a TLS-terminating reverse proxy, meaning it used HTTP internally. The tool says "HTTP bad", even when it's on an isolated private subnet. But the tool didn't understand Kubernetes clusters, so a public unencrypted ingress, i.e. public HTTP, didn't show up. The former was treated as a critical issue that must be fixed asap or the issue will get escalated up the management chain. The latter? Nobody cares.
Another time I got pressure to downgrade from Argon2 to SHA2 for password hashing because Argon2 wasn't on their whitelist. I resisted that change but it was a stressful bureaucratic process with some leadership being very unhelpful and suggesting "can't you just do the compliant thing and stop spending time on this?".
So I agree with GP that some security teams barely correlate with security, sometimes going into the negative. A better approach would be to integrate software security engineers into dev teams, but that'd be more expensive and less measurable than "tool says zero CVEs".
What you should do instead is talk with them about SLAs and validation. For example, commit to patching CRITICAL within x days, HIGH with y, etc. but also have a process where those can be cancelled if the bug can be shown not to be exploitable in your environment. Your CISO should be talking about the risk of supply chain attacks and outages caused by rushed updates, too, since the latter are pretty common.
Come 2027-12, the Cyber Resilience Act enters full enforcement. The CRA mandates a "duty of care" for the product's lifecycle, meaning if a company blindly updates a dependency to clear a dashboard and ships a supply-chain compromise, they are on the hook for fines up to €15M or 2.5% of global turnover.
At that point, yes, there is a sense in which the blind update strategy you described becomes a legal liability. But don't miss the forest for the trees, here. Most software companies are doing zero vetting whatsoever. They're staring at the comet tail of an oncoming mass extinction event. The fact that you are already thinking in terms of "assess impact" vs. "blindly patch" already puts your workplace significantly ahead of the market.
We had like 1 or 2 crash-patches in the past - Log4Shell was one of them, and blocking an API no matter what in a component was another one.
In a lot of other cases, you could easily wait a week or two for directly customer facing things.
Yes
"Only then do you need to update that specific dependency right away."
Big no. If you do that, it's guaranteed that one day you'll miss a vulnerability that hurts you.
To frame it differently: What you propose sounds good in theory but in practice the effort to evaluate vulnerabilities against your product will be higher than the effort to update plus taking appropriate measures against supply chain attacks.
Nobody is proposing a system that utterly and completely locks you out of all updates if they haven't aged enough. There is always going to be an override switch.
Browsers get a lot of unknown input, so they have to update often.
A Weather app is likely to only get input from one specific site (controlled by the app developers), so it should be relatively safe.
Most of them assume that if they are working on some publicly accessible website, then 99% of the people and orgs in the world are running nothing but publicly accessible websites.
Libraries themselves should perhaps also take a page from the book of Linux distributions and offer LTS (long term support) releases that are feature frozen and include only security patches, which are much easier to reason about and periodically audit.
Limiting the number of dependencies, but then rewriting them in your own code, will also increase the maintenance burden and compile times.
Of course it's up to developers to weigh the tradeoffs and make reasonable choices, but now we have a lot more optionality. Reaching for a dependency no longer needs to be the default choice of a developer on a tight timeline/budget.
In many cases I was able to replace 10s of lines of code with a single function call to a dependency the project already had. In very few cases did I have to add a new dependency.
But directly relevant to this discussion is the story of the most copied code snippet on Stack Overflow of all time [1]. Turns out, it was buggy. And we had more than one copy of it. If it hadn't been for the due diligence effort, I'm 100% certain they would still be there.
[1]: https://news.ycombinator.com/item?id=37674139
Can you cite an example of a moderately-widely-used open source project or library that is pulling in code as a dependency that you feel it should have replicated itself?
What are some examples of "everything libraries" that you view as problematic?
So if you’re adding chalk, that generally means you don’t know jack about terminals.
Pulling in a huge library just to set some colors is like hiring a team of electrical contractors to plug in a single toaster.
I wonder how many devs are pulling in a whole library just to add colors. ANSI escape sequences are as old as dirt and very simple.
Just make some consts for each sequence that you intend to use. That's what I do, and it typically only adds a dozen or so lines of code.
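For reference, the "dozen or so lines" approach looks roughly like this; it's sketched in Python here for consistency, but the escape sequences are the same in any language, and chalk-style terminal-capability detection is deliberately left out:

```python
# Standard ANSI SGR escape sequences, no third-party dependency.
RESET = "\033[0m"
BOLD = "\033[1m"
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
BLUE = "\033[34m"

def colorize(text: str, *codes: str) -> str:
    """Wrap text in the given ANSI codes and reset afterwards."""
    return "".join(codes) + text + RESET

print(colorize("error: something broke", BOLD, RED))
print(colorize("ok", GREEN))
```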
If chalk emits sequences that aren't supported by your terminal, then that's a deficiency in chalk, not the programs that wanted to produce colored output. It's easier to fix chalk than to fix 50,000 separate would-be dependents of chalk.
The problem is also less about the implementation I want, it's about the 10,000 dependencies of things I don't really want. All of those are attack surface much larger than some simple function.
Of course, small libraries get a bad rap because they're often maintained by tons of different people, especially in less centralized ecosystems like npm. That's usually a fair assessment. But a single author will sometimes maintain 5, 10, or 20 different popular libraries, and adding another library of theirs won't really increase your social attack surface.
So you're right about "pull[ing] in universes [of package maintainers]". I just don't think complexity or number of packages are the metrics we should be optimizing. They are correlates, though.
(And more complex code can certainly contain more vulnerabilities, but that can be dealt with in the traditional ways. Complexity begets simplicity, yadda yadda; complexity that only begets complexity should obviously be eliminated)
It's a shame some ecosystems move waaay too fast, or don't have a good story for having distro-specific packages. For example, I don't think there are Node.js libraries packaged for Debian that allow you to install them from apt and use it in projects. I might be wrong.
Web search shows some: https://packages.debian.org/search?keywords=node&searchon=na... (but also shows "for optimizing reasons some results might have been suppressed" so might not be all)
Although probably different from other distros, Arch for example seems to have none.
Using these in commonjs code is trivial; they are automatically found by `require`. Unfortunately, system-installed packages are yet another casualty of the ESM transition ... there are ways to make it work but it's not automatic like it used to be.
A small price to pay for the abundant benefits ESM brings.
In particular, the fact that Typescript makes it very difficult to write a project that uses both browser-specific and node-specific functionality is particularly damning.
Out of curiosity, what would you recommend? And what would be needed to make them work automatically?
I really don't have a recommendation other than another hack. The JS world is hacks upon hacks upon hacks; there is no sanity to be found anywhere.
An ecosystem moving too quickly, when it isn't being fundamentally changed, isn't a sign of a healthy ecosystem, but of a pathological one.
No one can think that JS has progressed substantially in the last three years, yet trying to build any three-year-old project without updates is so hard that a rewrite is a reasonable solution.
Are we talking about the language, or the wider ecosystem?
If the latter, I think a lot of people would disagree. Bun is about three years old.
Other significant changes are Node.js being able to run TypeScript files without any optional flags, or being able to use require on ES Modules. I see positive changes in the ecosystem in recent years.
The point of javascript is to display websites in the browser.
Ask yourself, in the last three years has there been a substantial improvement in the way you access websites? Or have they gotten even slower, buggier and more annoying to deal with?
You are comparing the JS ecosystem with bad project realizations/designs.
> Action vs motion
I think the main difference you mean is the motivation behind changes: is it a (re)action to achieve a measurable goal, is this a fix for a critical CVE, or just some dev having fun and pumping up the numbers?
GP mentioned the recent feature of executing TS, which is a reasonable goal imo, with a lot of beneficial effects down the line, but in the present just another hassle to take care of. So is this a pointless motion or a worthy action? Both statements can be correct, depending on your goals.
I don't follow. JavaScript is a dynamic general purpose programming language. It is certainly not limited to displaying websites, nor it's a requirement for that. The improvements I mentioned in the previous post aren't things you'd get the benefit of inside a web browser.
And after all, isn’t developer velocity (and investor benefits) really the only things that matter???
/sssss
* If everybody does it, it won't work so well
* I've seen cases where folks pinned their dependencies, and then used "npm install" instead of "npm ci", so the pinning was worthless. Guess they are the accidental, free beta testers for the rest of us.
* In some ecosystems, distributions (such as Debian) do additional QA and also apply a cooldown. Now we try to retrofit some of that into our package managers.
Things that everybody does: breathe. Eat. Drink. Sleep. And a few other things that are essential to being alive.
Things that not everybody does: EVERYTHING else.
Quoting from the docs:
> This command installs a package and any packages that it depends on. If the package has a package-lock, or an npm shrinkwrap file, or a yarn lock file, the installation of dependencies will be driven by that [..]
Indeed, this is a complex problem to solve.
And the "it won't work so well" of this is probably a general chilling effect on trying to fix things because people won't roll them out fast enough anyway.
This may seem theoretical, but for example on websites where there are suppliers and customers, there's quite a chilling effect on any mechanism that encourages people to wait until a supplier has positive feedback; there are fewer and fewer people with low enough stakes who are willing to be early adopters in that situation.
What this means is that new suppliers often drop out too quickly, abandon platforms, work around those measures in a way that reduces the value of trust, and worse still there's a risk of bad reviews because of the reviewer's Dunning-Kruger etc.
I think the mechanism is important for people who really must use it, but there will absolutely be side effects that are hard to qualify/correct.
Bottom line those security bugs are not all from version 1.0 , and when you update you may well just be swapping known bugs for unknown bugs.
As has been said elsewhere - sure monitor published issues and patch if needed but don't just blindly update.
These days it seems most software just changes mostly around the margins, and doesn't necessarily get a whole lot better. Perhaps this is also a sign I'm using boring and very stable software which is mostly "done"?
That's balancing the effort and risk of managing it yourself versus the risk and busy work generated from dependency churn.
What evergreen update policies have done is increase development velocity (it's much easier to make breaking changes if you assume everyone is up to date everywhere) - but they also increase churn.
One great example of that is log4shell. If you were still using version 1.0 (log4j 1.x), you were not vulnerable, since the bug was introduced in version 2.0 (log4j 2.x). There were some known vulnerabilities in log4j 1.x, but the most common configuration (logging only to a local file or to the console, no remote logging or other exotic stuff) was not affected by any of them.
I had to point out that the reason our stuff wasn't vulnerable was that we hadn't updated.
It's comparing the likelihood of an update introducing a new vulnerability to the likelihood of it fixing a vulnerability.
While the article frames this problem in terms of deliberate, intentional supply chain attacks, I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.
On the unintentional bug/vulnerability side, I think there's a similar argument to be made. Maybe even SemVer can help as a heuristic: a patch version increment is likely safer (less likely to introduce new bugs/regressions/vulnerabilities) than a minor version increment, so a patch version increment could have a shorter cooldown.
If I'm currently running version 2.3.4, and there's a new release 2.4.0, then (unless there's a feature or bugfix I need ASAP), I'm probably better off waiting N days, or until 2.4.1 comes out and fixes the new bugs introduced by 2.4.0!
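A sketch of that SemVer-based heuristic; the day counts are arbitrary illustrations, and pre-release/build metadata handling is ignored:

```python
# Sketch: smaller version bumps get shorter cooldowns.
# Assumes plain MAJOR.MINOR.PATCH strings; the day counts are made up.
def cooldown_days(current: str, candidate: str) -> int:
    cur = [int(part) for part in current.split(".")[:3]]
    new = [int(part) for part in candidate.split(".")[:3]]
    if new[0] > cur[0]:
        return 30   # major bump: wait longest
    if new[1] > cur[1]:
        return 14   # minor bump
    return 7        # patch bump: shortest wait

print(cooldown_days("2.3.4", "2.4.0"))  # -> 14
```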
> I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.
Yes, absolutely! The overwhelming majority of vulnerabilities stem from normal accidental bug introduction -- what makes these kinds of dependency compromises uniquely interesting is how immediately dangerous they are versus, say, a DoS somewhere in my network stack (where I'm not even sure it affects me).
From their point of view it is a trade-off between volume of vulnerable targets, management impatience and even the time value of money. Time to market probably wins a lot of arguments that it shouldn't, but that is good news for real people.
The cooldown security scheme looks like a kind of inverse "security by obscurity": nobody has spotted a backdoor, therefore we can assume security. This scheme stands and falls with the assumed timelines. Once this assumption tumbles, picking a cooldown period becomes guesswork. (Or another compliance box ticked.)
On the other side, the assumption can very well be sound; maybe ~90% of future backdoors can be mitigated by it. But who can tell. This looks like survivorship bias, because we are making decisions based on the cases we found.
If you tell people that cooldowns are a type of test and that until the package exits the testing period, it's not "certified" [*] for production use, that might help with some organizations. Or rather, would give developers an excuse for why they didn't apply the tip of a dependency's dev tree to their PROD.
So... not complaining about cooldowns, just suggesting some verbiage around them to help contextualize the suitability of packages in the cooldown state for use in production. There are, unfortunately, several mid-level managers who are under pressure to close Jira tickets IN THIS SPRINT and will lean on the devs to cut whichever corners need to be cut to make it happen.
[*] for some suitable definition of the word "CERTIFIED."
The cooldown approach makes the automatic upgrades of the former kind much safer, while allowing for the latter approach when (hopefully rarely) you actually need a fix ASAP.
One of the classic scammer techniques is to introduce artificial urgency to prevent the victim from thinking clearly about a proposal.
I think this would be a weakness here as well: If enough projects adopt a "cooldown" policy, the focus of attackers would shift to manipulate projects into making an exception for "their" dependency and install it before the regular cooldown period elapsed.
How to do that? By playing the security angle once again: An attacker could make a lot of noise how a new critical vulnerability was discovered in their project and every dependant should upgrade to the emergency release as quickly as possible, or else - with the "emergency release" then being the actually compromised version.
I think a lot of projects could come under pressure to upgrade if the perceived vulnerability seems imminent and the only argument for not upgrading is some generic cooldown policy.
If it's "only" technical access, it would probably be harder.
OK, if this is such amazing advice and the entire ecosystem does that: just wait... then what? We wait even more to be sure someone else is affected first?
Every time I see people saying you need to wait to upgrade it is like you are accumulating tech debt: the more you wait, the more painful the upgrade will be, just upgrade incrementally and be sure you have mitigations like 0 trust or monitoring to cut early any weird behavior.
I've seen this plenty of times: v1 of some library has one way of doing things, v2 of that library changes to a new incompatible way, and then v2.1 introduces a few extra changes to make it easier to port from the v1 way. If you wait a while, you have to do less work to update than if you had updated immediately.
One example is Python 3. After the first few Python 3.x releases, a few "useless" features were introduced to make it easier to port code from Python 2.7 (IIRC, things like reintroducing the u'...' syntax for unicode strings, which had been removed by Python 3.0 since normal '...' strings are now always unicode strings).
Not enough to accumulate tech debt, enough to mitigate the potential impact of any supply-chain vulnerability.
There are a lot of companies out there that scan packages and analyze them. Maintainers might notice a compromise because a new release was published that they didn't authorize. Or just during development, by getting all their bitcoin stolen ;)
Stacking up more sub-par tooling is not going to solve anything.
Fortunately this is a problem that doesn't even have to exist, and isn't one that anyone falls into naturally. It's a problem that you have to actively opt into by taking steps like adding things to .gitignore to exclude them from source control, downloading and using third-party tools in a way that introduces this and other problems, et cetera—which means you can avoid all of it by simply not taking those extra steps.
(Fun fact: on a touch-based QWERTY keyboard, the gesture to input "vendoring" by swiping overlaps with the gesture for "benefitting".)
P.S. When I was working at Amazon, I remember that a good number of on-call tickets were about fixing dependencies (most of them about updating the outdated Scala Spark framework; I believe it was 2.1.x or older) and patching/updating OS'es in our clusters. What the team should have done (I mentioned this to my manager) is to create clusters dynamically (do not allow long-lived clusters even if the end users prefer it that way) and upgrade the Spark library. Of course, we had a bunch of other annual and quarterly OKRs (and KPIs) to meet, so updating Spark got the lowest of priorities...
The article does not discuss this tradeoff.
For projects with hundreds or thousands of active dependencies, the feed of security issues would be a real fire hose. You’d want to use an LLM to filter the security lists for relevance before bringing them to the attention of a developer.
It would be more efficient to centralize this capability as a service so that 5000 companies aren’t all paying for an LLM to analyze the same security reports. Perhaps it would be enough for someone to run a service like cooldown.pypi.org that served only the most vetted packages to everyone.
Something like: upgrade once there are N independent positive reviews AND fewer than M negative reviews (where you can configure which people or organisations you trust to audit). And of course you would be able to audit dependencies yourself (and make your review available for others).
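A minimal sketch of that N-positive / fewer-than-M-negative rule; the shape of the review data and the thresholds are assumptions:

```python
# Sketch: only reviews from trusted auditors count toward the decision.
def upgrade_allowed(reviews: list[tuple[str, bool]],
                    trusted: set[str],
                    n_required: int = 2,
                    m_max_negative: int = 1) -> bool:
    """reviews: (reviewer, is_positive) pairs for one release."""
    positives = sum(1 for who, ok in reviews if who in trusted and ok)
    negatives = sum(1 for who, ok in reviews if who in trusted and not ok)
    return positives >= n_required and negatives < m_max_negative

print(upgrade_allowed([("alice", True), ("bob", True)], {"alice", "bob"}))  # True
```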
I know Ubuntu and others do the same but I don't know what they call their STS equivalent.
I've been working on automatic updates for some of my [very overengineered] homelab infra, and one thing that I've found particularly helpful is to generate PRs with reasonable summaries of the updates with an LLM. It basically works by having a script that spews out diffs of any locks that were updated in my repository, while also computing things like `nix store diff-closures` for the before/after derivations. Once I have those diffs, I feed them into claude code in my CI job, which generates a pull request with a nicely formatted output.
One thing I've been thinking about is to look up all of the dependencies that were upgraded and have the LLM review the commits. Often claude already seems to look up some of the commits itself and is able to give a high-level summary of the changes, but only for small dependencies where the commit hash and repository were in the lock file.
It would likely not help at all with the xz utils backdoor, as IIRC the backdoor wasn't even in the git repo, but only in the release tarballs. But I wonder if anyone is exploring this yet?
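A stripped-down sketch of that diff-gathering step, assuming a NixOS-style system profile layout (the generation numbers are placeholders); the LLM-summarization step is omitted rather than guessing at any particular API:

```python
# Collect the closure diff between two system generations so it can be
# pasted into a PR body or handed to an LLM for summarization.
import subprocess

def closure_diff(old_profile: str, new_profile: str) -> str:
    """Return `nix store diff-closures` output between two store paths."""
    result = subprocess.run(
        ["nix", "store", "diff-closures", old_profile, new_profile],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Placeholder generation links; adjust to your actual profiles.
diff = closure_diff("/nix/var/nix/profiles/system-41-link",
                    "/nix/var/nix/profiles/system-42-link")
print(diff)
```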
111 more comments available on Hacker News