NPM Flooded with Malicious Packages Downloaded More Than 86k Times
Key topics
The NPM ecosystem has been flooded with malicious packages downloaded more than 86,000 times, raising concerns about the security and trustworthiness of the package manager. Commenters discuss the risks and potential mitigations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 19h after posting
- Peak period: 71 comments in the 12-24h window
- Average per period: 20 comments
- Based on 160 loaded comments
Key moments
- Story posted: Oct 29, 2025 at 8:37 PM EDT (2 months ago)
- First comment: Oct 30, 2025 at 3:50 PM EDT (19h after posting)
- Peak activity: 71 comments in the 12-24h window, the hottest stretch of the conversation
- Latest activity: Nov 5, 2025 at 8:47 PM EST (about 2 months ago)
What's the legitimate use case for a package install being allowed to run arbitrary commands on your computer?
Quote is from the researchers' report https://www.koi.ai/blog/phantomraven-npm-malware-hidden-in-i...
edit: I was thinking of this other case that spawned terminals, but the question stands: https://socket.dev/blog/10-npm-typosquatted-packages-deploy-...
Personally, I think the issue is that it is too easy to create packages that people can then pull too easily. rpm and dpkg are annoying to write for most people and require some kind of (at least cursory) review before they can be installed on users' systems from the default repos. Both of these act as barriers against the kinds of lazy attacks we've seen in the past few months. Of course, no language package registry has the bandwidth to do that work, so Wild West it is!
Surely there are ways to just stick to the absolute basics, and you can get quite far with what is built in or some very small libraries, but I guess it depends on where you would be most productive.
https://github.com/orgs/pnpm/discussions/8945
What’s needed is an entitlements system so a package you install doesn’t do runtime stuff like install crypto mining software. Even then…
So preventing lifecycle scripts certainly limits the number of packages that could be exploited to get access to the installing machine. It's common for javascript apps to have hundreds of dependencies, but only a handful of them will ever actually run as code on the machine that installed them.
And with node you get files and the ability to run arbitrary code on arbitrary processes.
https://bun.com/docs/guides/install/trusted
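For reference, a minimal sketch of how that opt-in looks with Bun, which skips dependency lifecycle scripts unless the package is explicitly allow-listed (the package name below is just an example):

```sh
# Allow-list in package.json (top-level field; "esbuild" is an example entry):
#
#   { "trustedDependencies": ["esbuild"] }
#
bun install   # postinstall scripts run only for the packages listed above
```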
[0] https://www.npmjs.com/package/mongodb-memory-server
The npm version is decoupled from the binary version, when I want them locked together.
A) Maintainers don’t know any better and connect things with string and gum until it mostly works, and ship it
B) people who are smart, but naive and think it will be different this time
C) package manager creators who think they’re creating something that hasn’t been done before, don’t look at prior art or failures, and fall into all of the same holes literally every other package manager has fallen into and will continue to fall into because no one in this industry learns anything.
It pains me to remember that the reason LLMs write like this is because many humans did in the training data.
In about as much text, we could explain far better why and how NPM's behaviour is risky:
> When you install a package using `npm install`, NPM may also run arbitrary code from the package, from multiple hook scripts specified in `package.json`, before you can even audit the code.
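To make that concrete, a minimal sketch with a hypothetical dependency that declares install hooks (the package and script names here are invented for illustration):

```sh
# Hypothetical dependency whose package.json declares lifecycle hooks:
#
#   "scripts": { "preinstall": "node collect.js", "postinstall": "node phone-home.js" }
#
npm install some-hypothetical-package                   # both hooks run automatically
npm install some-hypothetical-package --ignore-scripts  # hooks are skipped
# Or opt out globally by putting "ignore-scripts=true" in your .npmrc
```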
The paradigm itself has been in package managers since DEB and RPM were invented (maybe even Solaris packages before that? it's been a minute since I've Sun'd); it's not the same as NPM, a more direct comparison is the Arch Linux AUR - and the AUR has been attacked to try and inject malware all year (2025) just like NPM. As of the 26th (5 days ago) uploads are disabled as they get DDOSed again. https://status.archlinux.org/ (spirit: it's the design being attacked, pain shared between AUR and NPM as completely disparate projects).
More directly to an example: the Linux kernel package. Every Linux distribution I'm aware of runs a kernel package post-install script which runs arbitrary (sic) commands on your system. This is, of course, to rebuild any DKMS modules, build a new initrd / initramfs, update GRUB bootloader entries, etc. These actions are unique outputs to the destination system and can't be packaged.
I have no data in front of me, but I'd speculate 70% (?) of DEB/RPM/PKG packages include pre / post install/uninstall scriptlets, it's very common in the space and you see it everywhere in use.
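You can inspect those scriptlets yourself; a quick sketch on RPM- and dpkg-based systems (package names are just examples):

```sh
# Show the pre/post install and uninstall scriptlets shipped with a package:
rpm -q --scripts kernel-core | less
# On Debian/Ubuntu, the maintainer scripts are unpacked here after install:
ls /var/lib/dpkg/info/linux-image-*.postinst
```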
The truth is npm, pip, rubygems, cargo and all the other programming language package managers are just fancier versions of the silly installation instructions you often find in README files that tell people to curl some script and pipe it into bash.
NPM etc. are a bit like Arch would be if everything was in AUR.
They use a postinstall script to fetch pre-built binaries, or compile from source if your environment isn't directly supported.
I assume that it's heavily sandboxed, though, so it may be difficult to leverage.
[0] https://docs.swift.org/swiftpm/documentation/packagemanagerd...
[1] https://developer.apple.com/documentation/packagedescription
Overlay version control systems like NPM, Cargo, etc. and their harebrained schemes involving "lockfiles" to paper over their deficiencies have evidently totally destroyed not just folks' ability to conceive of just using an SCM like Git or Mercurial to manage source the way that they're made for without introducing a second, half-assed, "registry"-dependent VCS into the mix, but also destroyed the ability to recognize when a comment on the subject is dripping in the most obvious, easily detectable irony.
This is because you have redefined the problem—partly as a way of allowing you to avoid addressing it, and partly to allow you to speak of lockfiles as a solution to that problem. See <https://news.ycombinator.com/item?id=45824392>.
Lockfiles do not solve the problem. They are the problem.
This is what I wrote:
> Overlay version control systems like NPM, Cargo, etc. and their harebrained schemes involving "lockfiles" to paper over their deficiencies have evidently totally destroyed […] folks' ability to conceive of just using an SCM like Git
That's the problem that I'm talking about—lockfiles. Or, more specifically: the insistence on practicing a form of version control (i.e. this style of dependency management that the current crop of package managers tell people is the Right Way to do things) that leads to the use of lockfiles—for the sole purpose of papering over the issues that were only introduced by this kind of package manager—and for people to be totally unaware of the water they're swimming in under this arrangement.
Everyone is familiar with the concept of "a solution in need of a problem". That's exactly what lockfiles are.
Go-style vendoring does dump everything into a directory but that has other downsides. I also question how effectively you can audit dependencies this way -- C developers don't have to do this unless there's a problem they're debugging, and at least for C it is maybe a tractable problem to audit your entire dependency graph for every release (of which there are relatively few).
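For comparison, the Go toolchain's built-in flow looks roughly like this (standard commands, nothing project-specific):

```sh
go mod vendor               # copy the sources of every dependency into ./vendor
go build -mod=vendor ./...  # build against the vendored copies only
```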
Unfortunately IMHO the core issue is that making the packaging and shipping of libraries easy necessarily leads to an explosion of libraries with no mechanism to review them -- you cannot solve the latter without sacrificing the former. There were some attempts to crowd-source auditing as plugins for these package managers but none of them bore fruit AFAIK (there is cargo-audit but that only solves one part of the puzzle -- there really needs to be a way to mark packages as "probably trustworthy" and "really untrustworthy" based on ratings in a hard-to-gamify way).
Maybe in a perfect world, we’d all use a better VCS whose equivalent of submodules actually could do that job. We are not in that world yet.
You wrote—alluding to, but without actually stating—the reasons why registries and package managers for out-of-tree packages that subvert the base-level VCS were created:
> Yeah, people invented the concept of packages and package management because they couldn’t conceive of vendoring (which is weird considering basically all package managers make use of it themselves) and surely not because package management has actual benefits.
This is a sarcastic comment. It uses irony to make the case that the aforementioned trio (packages, package managers, package registries) were created for good reason (their "actual benefits").
Do you know what the reasons are? Can you reply here stating those reasons? Be explicit. Preferably, putting it into words in a way that can be tested (falsified)—like the way the claim, "We can reduce the size of our assets on $PROJECT_Z by storing the image data as PNG instead of raw bitmaps" is a claim that lends itself to being tested/falsified—and not just merely alluding to the good reasons for doing $X vs $Y.
What, specifically, are the reasons that make these out-of-tree, never-committed packages (and the associated infrastructure involving package registries, etc.) a good strategy? What problem does this solve? Again: please be specific. Can it be measured quantitatively?
That is, when a security issue is found, regardless of supply chain tooling one would update.
That there is a little cache/mirror thing in the middle is of little consequence in that case.
And for all other cases the blessed versions in your mirror are better even if not latest.
Somewhat related, I also have a small homelab running local services, and every now and then I try a new technology. Occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot to infiltrate because I’m not sophisticated enough to stop it.
Where do I start?
For the case of general software, "Don't use node" would be my advice, and by extension any packaging backend without external audit and validation. PyPI has its oopses too, Cargo is theoretically just as bad but in practice has been safe.
The gold standard is Use The Software Debian Ships (Fedora is great too, arch is a bit down the ladder but not nearly as bad as the user-submitted madness outside Linux).
But it seems like your question is about front end web development, and that's not my world and I have no advice beyond sympathy.
> occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot
Pretty much that's the problem exactly. Distributing software is hard. It's a lot of work at a bunch of different levels of the process, and someone needs to commit to doing it. If you aren't willing to commit your time and resources, don't distribute it in a consumable way (obviously you can distribute what you built with it, and if it's appropriately licensed maybe someone else will come along and productize it).
NPM thought they could hack that overhead and do better, but it turns out to have been a moved-too-fast-and-broke-things situation in hindsight.
(That might hint that I'm not doing trendy things.)
You don't get secure things for free, you have to pay for that by doing things like "import and audit software yourself" or even "write simple utilities from scratch" on occasion.
IME Debian is falling behind on security fixes.
FWIW, the subject at hand here isn't accidentally introduced security bugs (which affect all software and aren't well treated by auditing and testing). It's deliberately malicious malware appearing as a dependency to legitimate software.
So the use case here isn't Heartbleed, it's something like the xz-utils trojan. I'll give you one guess as to who caught that.
One obvious further mitigation for Python is to configure your package installer to require pre-built wheels, and inspect the resulting environment prior to use. Of course, wheels can contain all sorts of compiled binary blobs and even the Python code can be obfuscated (or even missing, with just a compiled .pyc file in its place); but at least this way you are protected from arbitrary code running at install time.
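A minimal sketch of that configuration (the package name is a placeholder):

```sh
# Refuse source distributions, so no setup.py / build backend runs at install time:
pip install --only-binary=:all: somepackage
# Or set it once for the environment:
export PIP_ONLY_BINARY=":all:"
# Then look at what actually got installed before importing anything:
pip list --format=freeze
```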
Do development, all of it, inside VMs or containers, either local or remote.
Use ephemeral credentials within said VMs, or use no credentials. For example, do all your git pulls on your laptop directly, or in a separate VM with a mounted volume that is then shared with the VM/containers where you are running dev tooling.
This has the added benefit of not only sandboxing your code, but also making your dev environments repeatable.
If you are using GitHub, use codespaces. If you are using gitlab, workspaces. If you are using neither, check out tools like UTM or Vagrant.
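One rough sketch of that split, using a throwaway container (the repo URL and image tag are placeholders):

```sh
# Clone on the host, where your real credentials live...
git clone git@github.com:example/project.git && cd project
# ...then run all dev tooling in a disposable container that only sees this directory:
docker run --rm -it -v "$PWD":/work -w /work node:22 bash
# Inside the container there are no SSH keys, browser profiles, or host tokens:
npm ci --ignore-scripts
```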
I'm genuinely curious because I casually looked into it so that I could work on some hobby stuff over lunch on my work machine.
However I just assumed the performance wouldn't be too great.
Would love to hear how people are set up…
Pretty soon I liked using the environment so much that I got my work running on it. And when I change the environment, I can sync it to my other machine.
Though NixOS is particularly magical as a dev environment since you have a record of everything you've done. Every time I mess with Postgres pg_hba.conf or nginx or pcap on my local machine, I think "welp, I'll never remember that I did that".
Then, I removed the graphical settings, as I was aiming to use SSH instead of the emulated TTY that comes on by default with UTM (at that time).
Finally, I set up some basic scripting to turn the machine on and SSH into it as soon as sshd.service was available, which I don't have now, but the script finished with this:
(fish shell)
Later it evolved into something like this. I also removed some unnecessary services for local development. And done: performance was really good and I could develop seamlessly.
[1]: https://gitlab.archlinux.org/archlinux/arch-boxes/-/packages...
There's around a 10-15% performance penalty for VMs (assuming you use arm64 guests), but the whole system is just so much faster and better built than anything Intel-based to date that it more than compensates.
For Windows, it's lacking accelerated video drivers, but VMware Fusion is an OK free alternative - I can totally play AAA games from the last decade. Enjoy it until Broadcom kills it.
Whether or not it’s a good idea, “realistic” implies practicality, which could presumably be measured by whether people find it worthwhile to do the thing.
Aren't you a bit asking "When X transportation method isn't used by everyone, can it really be any good?" :-)
When I was developing server software for Windows, the first time I was able to setup a development environment by simply cloning a VM instead of spending a day-and-a-half with a lap full of MSDN CDs/DVDs, I never went back.
Prior to that, I was happily net-booting *BSD/Solaris servers all over my house/apartment.
Nowadays, we have so many tools to make this trivial. Your contention doesn't stand up to basic scrutiny of the available data.
If you are downloading software from untrusted sources (e.g. NPM, pip, and others) and running it on your primary working machine, or personal machine, then you are simply begging for trouble.
The fact I use nvm means a global install won’t cross accounts.
Vendoring dependencies (copying the package code into your project rather than using the package manager to manage it) can help - it won't stop a malicious package, but it will stop a package from turning malicious.
You can also copy the code you need from a dependency into your code (with a comment giving credit and a link to the source package). This is really useful if you just need some of the stuff that the package offers, and also forces you to read and understand the package code; great practice if you're learning.
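A rough sketch of the vendoring route described above (the package name and version are placeholders):

```sh
# Fetch the published tarball without installing it, then check it into the repo:
mkdir -p vendor/example-dep
npm pack example-dep@1.2.3
tar -xzf example-dep-1.2.3.tgz -C vendor/example-dep --strip-components=1
# Depend on the vendored copy instead of the registry:
npm install ./vendor/example-dep
```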
I think we need a different solution that fixes the dependency bloat or puts more safeguards around package publishing.
The same goes for any other language with excessive third-party dependency requirements.
It's going to take a lot of people getting pwned to change these attitudes though
That’s not to say it’s hopeless. Rather, it’s more likely that widespread improvement will need to be centrally orchestrated rather than organic in response to hacks.
Any package that includes a CLI version in the library should have it's dev shamed. Usually that adds 10-20 packages. Those 2 things, a library that provides some functionality, and a CLI command that lets you use the library from the command line, SHOULD NEVER BE MIXED.
The library should be its own package without the bloat of the command line crap
(2) Choose low dependency packages
Example: commander has no dependencies, minimist now has no dependencies. Some other command line parsers used to have 10-20 dependencies.
(3) Stop using packages when you can do it yourself in 1-2 lines of JS
You don't need a package to copy files. `fs.copyFileSync` will copy a file for you, `fs.cpSync` will copy a tree, and `child_process.spawn` will spawn a process. You don't need some package to do these things. There are plenty of other examples where you don't need a package.
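For illustration, the built-ins mentioned above as one-liners (the file and directory names are made up):

```sh
# Copy a single file:
node -e 'require("fs").copyFileSync("config.json", "config.backup.json")'
# Copy a directory tree:
node -e 'require("fs").cpSync("assets", "dist/assets", { recursive: true })'
# Spawn a process and inherit stdio:
node -e 'require("child_process").spawn("ls", ["-la"], { stdio: "inherit" })'
```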
After your little rant you point out Commander has zero dependencies. So I don’t know what’s up with you.
If the library you’re building has anything with application lifecycle, particularly bootstrapping, then having a CLI with one dependency is quite handy for triage. Most especially for talking someone else through triage when, for instance, I was out and there was a production issue.
Which is why half the modules I worked on at my last place ended up with a CLI. They are, as a rule, read mostly. Which generally doesn’t require an all caps warning.
Does every module need one of those? No. But if your module is meant as a devDependency, odds are good it might. And if it’s bootstrapping code, then it might as well.
> should have it's dev shamed
Oh I feel embarrassed right now. But not for me.
I still maintain pushing this back to library authors is the right thing to do instead of making this painful for literally millions of end-users. The friction of getting a package accepted into a critical mass of distributions is the point.
Neither is a security guarantee, but it does add a substantial extra barrier.
https://npmgraph.js.org/?q=hono
https://npmgraph.js.org/?q=zod
Recently I switched to Bun in part because many dependencies are already included (db driver, s3 client, etc) that you'd need to download with Node or Deno.
Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.
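A minimal sketch of what that whitelist looks like, assuming a recent pnpm version and an example package name:

```sh
# Allow-list in package.json for packages whose build/postinstall scripts may run:
#
#   "pnpm": { "onlyBuiltDependencies": ["esbuild"] }
#
pnpm install   # lifecycle scripts of packages not in the list are simply not run
```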
I won't execute that code directly on my machine. I will always execute it inside the Docker container. Why do you want to run commands like `vite` or `eslint` directly on your machine? Why do they need access to anything outside the current directory?
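As a sketch of that setup, one way to run a single tool against only the current directory (image tag and options are illustrative):

```sh
# eslint only needs to read the project files, so give it nothing else:
docker run --rm -v "$PWD":/app -w /app --network=none node:22 npx eslint .
# A dev server like vite needs a published port instead of full host access:
docker run --rm -v "$PWD":/app -w /app -p 5173:5173 node:22 npx vite --host
```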
The most valuable data on your system, for a malware author, is login cookies and saved auth tokens for various services.
But it is true that work and personal machines have different threat vectors.
Because it means the hygiene is thrown over the fence in a post-commit manner.
AI makes this worse because AI tools also run them "over the fence".
However you run it, I want a human to hold accountability for the mainline committed code.
How does this throw hygiene over the fence?
is he just saying always run your code in a container?
> is he just saying always run your code in a container?
yes
> isn't that running things on your machine?
in this context where they're explicitly contrasted, it isn't running things "directly on my machine"
If only :(
I have used Node, but I would not go near the NPM auto-install spyware service.
How is it possible that people keep this service going, when it has been compromised so regularly?
How's it possible that people keep using it?