GitLab discovers widespread NPM supply chain attack
Key topics: supply chain attack, npm security, GitLab research
Discussion activity (very active)
- Story posted: Nov 27, 2025, 10:36 AM EST
- First comment: 16m after posting (Nov 27, 2025, 10:53 AM EST)
- Peak activity: 108 comments in the 12-24h window (avg 17.8 comments per period)
- Latest activity: Dec 2, 2025, 2:58 AM EST
Just like in the '90s when viruses primarily targeted Windows: it wasn't some magical property of Windows, it was the market of users available.
Also, following this logic, it then becomes survivorship bias: the more attacks they get, the more time researchers spend looking and documenting.
No, it really was Windows.
Also, Windows had the ridiculous default (AutoRun) of immediately running things when a user put in a CD or USB stick - that behaviour led to many infections and is obviously a stupid default option.
I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.
Playing devil's advocate: absent the obvious security issues, it's a brilliant default option from a user experience point of view, especially if the user is not well versed in the subtleties of filesystem management. Put the CD into the tray, close the tray, and the software magically starts; no need to go through the file manager and double-click on an obscurely named file.
It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted. Once CD-R writers became popular, and anyone could and did write their own data CDs, these assumptions no longer held.
> I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.
That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.
It's a stupid default, though. One way around the issue is to present the user with the option to either just open the disc or run the installer, and allow them to change the default if they prefer the less secure option.
> It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted
This allowed Sony BMG to infect so many computers with their rootkit (https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...).
> That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.
A sudoers group is different though as it highlights the difference between what files they are expected to change (i.e. that they own) and which ones require elevated permissions (e.g. installing system software). Earlier versions of Windows did not have that distinction which was a huge security issue.
This can of course be resolved, but here’s the kicker: our own governments equally enjoy this ambiguity to do their own bidding; so no government truly has an incentive to actually improve cross-border identity verification and cybercrime enforcement.
It's just not that effective when the SBOM becomes unmanageable. For example, our JS project at $work has 2.3k dependencies just from npm. I can give you that SBOM (and even include the system deps with Nix), but that won't really help you.
They are only really effective when the size is reasonable.
This kind of large scale attack is perfect advertising for anyone selling protection against such attacks.
Spy agencies have no interest in selling protection.
Take the Jaguar hack: the economic loss is estimated at 2.5bn. Given an average house price in the UK of $300k, that's like destroying ~8,000 homes.
Do you think the public and international response will be the same if Russia or China leveled a small neighborhood even with no human casualties?
Or, in other words: maybe the nature of humans and the inherent pressure of our society to perform, to be rich, to be successful, drives people to do bad things without any state actor behind it?
We should fight this kind of behavior (and defend our privacy) regardless of who's involved, yet our governments in the West have nurtured this narrative of always pointing at Big Tech and foreign actors as scapegoats for anything privacy- or hacking-related.
Also, any cyber attack tracker will show you this is a global issue; if you think there aren't millions of attacks carried out from our own countries, you're not looking hard enough.
CN = Johannes Schindelin, O = Johannes Schindelin, S = Nordrhein-Westfalen, C = DE
Downside is the cost: certificates cost hundreds of dollars per year. There's probably some room to reduce the cost, but not by much. You also run into the issue of someone paying a homeless person $50 to use their identity for cybercrime.
In principle, what's stopping the technique from targeting macOS CI runners which improperly store keys used for notarization signing? Or... is it impossible to automate a publishing step for macOS? Does that always require a human to do a manual thing from their account to get a project published?
The inevitable evolution of such a feature is a button on your repo saying "block all contributors from China, Russia, and N other countries". I personally think that's the antithesis of OSS and therefore couldn't find the value in such a thing.
"easily", not so much...
As in, services can still detect if you're connecting through a VPN, and if you ever connect directly (because you forgot to enable the VPN), your real location might be detected. And the consequences there might not be "having to refresh the page with the VPN enabled", but instead: "find the whole organisation/project blocked, because of the connection of one contributor"
This is why CoMaps is using Codeberg, after its predecessor project (before the fork) got locked by GitHub:
https://news.ycombinator.com/item?id=43525395
https://mastodon.social/@organicmaps/114155428924741370
Moreover, this kind of stuff is also the reason I stopped accessing Imgur:
- if I try without VPN, Imgur stops me, because of the UK's Online Safety Act
- if I try with my personal VPN, I get a 403 error every single time
I'm sure I could get around it by using a different service (e.g. Mullvad), but Imgur is just not important enough for me to bother, so I just stopped accessing it altogether.
I consider this to be a sign that someone is still an amateur, and this is a reason to not use the software and quickly delete it.
If you need a dependency, you can call the OS package manager, or tell me to compile it myself. If you start a network connection, you are malware in my eyes.
Basically any dependency can (used to?) run any script with the developer's permissions on install. JVM and Python package managers don't do this.
Of course, in all ecosystems, once you actually run the code it can do whatever it wants with the permissions of the executing program, but this is another hurdle.
What we really need is a system to restrict packages in what they can do (for example, many packages don't need network access).
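For reference, npm can at least be told not to run dependency lifecycle scripts at all; a minimal sketch via `.npmrc` (note this also disables your own package's install-time scripts, so it's a trade-off):

```
# .npmrc - refuse to run preinstall/install/postinstall scripts from packages
ignore-scripts=true
```

It doesn't give you fine-grained capabilities like "no network access", but it removes the compromise-on-install hurdle discussed above.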
There has been some promising prior research such as BreakApp attempting to mitigate unusual supply-chain compromises such as denial-of-service attacks targeting the CPU via pathological regexps or other logic-bomb-flavored payloads.
That means that not only does the average project have a ton of dependencies, but any given dependency will in turn have a ton of dependencies as well. There are multiplicative effects in play.
To be fair Java has improved a lot over the last few years. I really have the feeling that Java is getting better, while C++ is getting worse.
One package for lists, one for sorting, and down the rabbit hole you go.
Refactoring these isn't always trivial either, so it's a long journey to fully get rid of something like Lodash from an old project.
So just installing a package can get you compromised. If the compromised box contains credentials to update your own packages in NPM, then it's an easy vector for a worm to propagate.
With pip you can use `pip install <package> --only-binary :all:` to only install wheels and fail otherwise.
Would source distributions work as a vector for automated propagation, though? If I'm not mistaken, there's no universal standard for building from source distributions.
1) The availability of the package post-install hook that can run any command after simply resolving and downloading a package[1].
That, combined with:
2) The culture of using version ranges for dependency resolution[2] means that any compromised package can spread with ridiculous speed (and then use the post-install hook to compromise other packages). You also have version ranges in the Java ecosystem, but in my experience using them is not the norm; you get new dependencies when you actively bump the dependencies you are directly using, because everything depends on specific versions.
I'm no NPM expert, but that's the worst offenders from a technical perspective, in my opinion.
[1]: I'm sure it can be disabled, and it might even be now by default - I don't know.
[2]: Yes, I know you can use a lock file, but it's definitely not the norm to actively consider each upgraded version when refreshing the lockfile.
Yep, auto-updating dependencies are the main culprit in malware spreading so fast. I strongly recommend using `save-exact` in npm, and only update your dependencies when you actually need to.
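A minimal sketch of that config, for reference - with this in place, `npm install <pkg>` records an exact version in package.json instead of a `^` range:

```
# .npmrc - pin exact versions when adding dependencies
save-exact=true
```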
The answer is a balance. Use Dependabot to keep dependencies up to date, but configure a dependency cooldown so you don't end up installing anything too new. A seven-day cooldown would keep you from being vulnerable to these types of attacks.
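As a sketch, that cooldown is configured in `dependabot.yml`; the field names below match recent Dependabot docs, but verify against the current documentation:

```
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7  # wait a week before proposing any newly published release
```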
IMO, `ci` should be `install`, `install` should be `update`.
Plus, the install command is reused to add dependencies; that should be a separate command.
`npm install` will always use the versions listed in package-lock.json unless your package.json has been edited to list newer versions than are present in package-lock.json.
The only difference with `npm ci` is that `npm ci` fails if the two are out of sync (and it deletes `node_modules` first).
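A quick side-by-side, for reference:

```
npm ci        # deletes node_modules, installs exactly what package-lock.json
              # says, and errors out if package.json and the lockfile disagree
npm install   # resolves from package.json and may rewrite package-lock.json
```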
* NPM has a culture of "many small dependencies", so there's a very long tail of small projects that are mostly below the radar that wouldn't stand out initially if they get a patch update. People don't look critically into updated versions because there's so many of them.
* Developers have developed a culture of staying up-to-date as much as possible, so any patch release is applied as soon as possible, often automated. This is mainly sold as a security feature, so that a vulnerability gets patched and released before disclosure is done. But it was (is?) also a thing where if you wait too long to update, updating takes more time and effort because things keep breaking.
not chat bots.
In other "communities" you upgrade dependencies when you have time to evaluate the impact.
Last time I did anything with Java, it felt like the use of multiple package repositories, including private ones, was a lot more popular.
Although JavaScript's higher branching factor and potential target count are probably very important factors as well.
The action item for anyone potentially affected: rotate your npm tokens, GitHub PATs, and any API keys that were in environment variables. And if you're like most developers and reused any of those passwords elsewhere... rotate those too.
This is why periodic credential rotation matters - not just after a breach notification, but proactively. It reduces the window where any stolen credential is useful.
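For the npm side specifically, rotation can be scripted with the token subcommands; a minimal sketch (the token ID is a placeholder):

```
npm token list                 # audit which tokens exist and how old they are
npm token revoke <token-id>    # revoke anything you don't recognise
npm token create --read-only   # reissue with the least privilege that works
```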
How does one know one is affected?
What's the point of rotating tokens if I'm not sure whether I've been affected - the new tokens will just be exfiltrated as well.
The first step would be to identify the infection, then clean up, and then rotate tokens.
From what I've read so far (and this definitely could change), it doesn't install persistent malware; it relies on a postinstall script. So new tokens wouldn't be automatically exfiltrated, but if you npm install any of an increasing number of packages, it will happen to you again.
I hate that high profile services still default to plain text for credential storage.
If I just need to `fly secrets set KEY=hunter2` one time for production, I can even copy it from a paper pad. But if it's a key I need to use every time I run a program that I'm developing, it's likely going to end up at least being in my program's shell environment (and thus readable from its /proc/pid/environ). So if I `npm install compromised-package` - even from some other terminal - can't it just `grep -a KEY= /proc/*/environ`?
Or are you saying the programs we hack on should use some kind of locker API to fetch secrets?
Is this true? God, I hope not. If developers don't even follow basic security practices, then all hope is lost.
I'd assume this is stating the obvious, but storing credentials in environment variables or files is a big no-no. Use a security key or at the very least an encrypted file, and never reuse any credential for anything.
"Basic security practices" is an ever expanding set of hoops to jump through, that if properly followed, stop all work in its tracks. Few are following them diligently, or at all, if given any choice.
Places that care about this - like actually care, because of contractual or regulatory reasons - don't even let you use the same machine for different projects or customers. I know someone who often has to carry 3+ laptops on them because of this.
Point being, there's a cost to all these "basic security practices", cost that security practitioners pretend doesn't exist, but in fact it does exist, and it's quite substantial. Until security world acknowledges this fact openly, they'll always be surprised by how people "stubbornly" don't follow "basic practices".
Previously, you had isolated places to clean up after a compromise and you were good to go again. This attack exploits the semi-distributed nature of the ecosystem and attacks it as a whole, and I am afraid this approach will get more sophisticated in the future. It reminds me a little of malicious transactions written into a distributed ledger.
https://en.cppreference.com/w/c/experimental/dynamic/strdup
Allocating on the stack is pretty cheap; it's only a single instruction to move the stack pointer, and the compiler is likely to optimize it away completely. When doing more complicated things, where you don't build the string linearly, allocating on the stack first can likely be cheaper, since the stack memory is likely in cache but a new allocation isn't. It can also make the code easier, since you can first do random stuff on the stack and then allocate on the heap once the string is complete and you know its final size.
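A minimal sketch of that pattern, using a hypothetical `leftpad` as the example: build the result in a variable-length array on the stack, then make one heap copy at the final size with `strdup` (no input validation; a huge `len` would blow the stack, as discussed below):

```
#include <stdlib.h>
#include <string.h>

/* Build the padded string in stack storage, then heap-allocate once at the
 * final size. The caller frees the result. */
char *leftpad(const char *s, size_t len, char pad) {
    size_t slen = strlen(s);
    if (slen >= len)
        return strdup(s);                     /* nothing to pad */
    char buf[len + 1];                        /* VLA, auto (stack) storage */
    memset(buf, pad, len - slen);             /* write the padding */
    memcpy(buf + (len - slen), s, slen + 1);  /* copy string incl. NUL */
    return strdup(buf);                       /* single final allocation */
}
```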
Like the sibling already wrote, that's what strdup does.
> Is it safe to return the duplicate of a stack allocated
Yeah sure, it's a copy.
> wouldn’t the copy be heap allocated anyway?
Yes. I wouldn't commit it like that; it is a naive implementation. But honestly, I wouldn't commit leftpad at all - it doesn't sound like a sensible abstraction boundary to me.
> Not to mention it blows the stack and you get segmentation fault?
Yes and I already mentioned that in my comment.
---
> dynamic array right on the stack
Nitpick: it's a variable-length array and it is auto allocated. Dynamic allocation refers to the heap or something similar, not something the compiler already does for you.
strndup prevents you from overrunning a string, given that you pass it the containing allocation's size correctly. But if you pass something that is not a string, there will be a buffer overrun right there in the first line. Also, what outer allocation?
You use strcpy when you get a string and memcpy when you get an array of char. strncpy is for when you get something that is maybe a string, but also a limited array. There ARE use cases for it, but it isn't for safety.
What it doesn't have is a hashmap type, but in C types are cheap and are created on an ad-hoc basis. As long as it corresponds to the correct interface, you can declare the type any way you like.
There's a reason disclosures are obligatory in academic papers.
Call me a conspiracy theorist, but I start to think these people might be affiliated with GitLab.
GitHub has a massive malware problem as it is and it doesn’t get enough attention.
Imagine the number of things that can go wrong when they try to regulate or introduce restrictions on build workflows for the purpose of making some extra money... lol
The original Java platform is a good example to think about.
And (to put on my Go defender hat), the Go ecosystem doesn't like having many dependencies, in part because of supply chain attack vectors and the fact that Node's ecosystem went a bit overboard with libraries.
The Go modules core to the language are hosted at golang.org.
Module authors have always been free to use their own prefix rather than github.com, even if they host their module on GitHub. If they say their module is example.com/foo and then set their webserver to respond to https://example.com/foo?go-get=1 with <meta name="go-import" content="example.com/foo mod https://github.com/the_real_repository/foo">, then they will leave no hint that it's really hosted at GitHub, and they could host it somewhere else in the future (including at https://example.com directly if they want).
https://go.dev/ref/mod#vcs
Another feature is that Go uses a default proxy, https://proxy.golang.org/, if you don't set one yourself. This means that Google, who control that proxy, can choose to make a request for a package like github.com/foo/bar go somewhere else if, for whatever reason, Microsoft won't honour it any more.
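That default is controlled by the `GOPROXY` environment variable; a sketch of overriding it:

```
# Default behaviour: try Google's proxy, fall back to direct VCS fetches.
go env -w GOPROXY=https://proxy.golang.org,direct
# Or bypass the proxy entirely:
go env -w GOPROXY=direct
```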
The one with 12 competing standards going to 13 competing standards, or something like that.
Meanwhile, I have been using Ruby for 15 years, and it has evolved in a stable way without breaking everything and without having to rewrite tons of libraries. It's not as powerful in terms of performance and I/O, it's not as far-reaching as JS because it doesn't run in the browser, and it doesn't have a TypeScript equivalent, but it's mature and stable, and its strength is that it's human-friendly.
And what's more, people have proposed a standard library through TC39 without success - https://github.com/tc39/proposal-built-in-modules
Of course any large company could create a massive standard library on their own without going through the standards process but it might not be adopted by developers.
In addition to concerns about npm, I'm now hesitant to use the GitHub CLI, which stores a highly privileged OAuth token in plain text in the HOME directory. After the attacker accesses it, they can do almost anything on my behalf; for example, they turned many of my private repos public.
All our tokens should be in a protected keychain, and there are no proper cross-platform solutions for this. gcloud, the AWS SDKs, gh, and other tools just store them in dotfiles.
And the worst thing: AFAIK there is no way to do it correctly on macOS, for example. I'd like to be corrected, though.
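For what it's worth, macOS does ship a keychain CLI that scripts can call; a minimal sketch (the service name and token are illustrative):

```
# Store a token in the login keychain once...
security add-generic-password -a "$USER" -s my-gh-token -w "ghp_exampletoken"
# ...then have tools read it at runtime instead of from a dotfile:
export GH_TOKEN="$(security find-generic-password -a "$USER" -s my-gh-token -w)"
```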
I feel like we are barking up the wrong tree here. The plain-text token thing can't be fixed; we have to protect our computers from malware to begin with. Maybe Microsoft was right to use secure admin workstations (SAWs) for privileged access, but then again, it is too much of a hassle.
This does mean entering your keyring password a lot.
https://en.wikipedia.org/wiki/GNOME_Keyring
Not when you put that keyring's password into the user keyring. I think it is also cached by default.
If all Homebrew "apps" use the same key, then accepting a keyring notification for one app is a lost cause, as it would allow anything vulnerable to RCE to read/write everything?
For a given project, I have a `./creds` directory which is managed with pass and it contains all the access tokens and api keys that are relevant for that project, one per file, for example, `./creds/cloudflare/api_token`. Pass encrypts all these files via gpg, for which I use a key stored on a Yubikey.
Next to the `./creds` directory, I have an `.envrc` which includes some lines that read the encrypted files and store their values in environment variables, like so: `export CLOUDFLARE_API_TOKEN=$(pass creds/cloudflare/api_token)`.
Every time that I `cd` into that project's directory, direnv reads and executes that file (just once) and all these are stored as environment variables, but only for that terminal/session.
This solves the problem of plain-text files, but of course the values remain in the ENV, and something malicious could look for well-known variable names to extract from there. Personally, I try to install things in a new termux tab every time, which is less than ideal.
I'd like to see if and how other people solve this problem
[1]: https://direnv.net/
[2]: https://www.passwordstore.org/
Example: https://github.com/combostrap/devfiles/blob/main/dev-scripts...
It's not completely foolproof, but at least gpg asks for my passphrase only when I run the script.
OTOH, I wouldn't do it, because I don't believe I could implement it securely.
I had a Borg backup script, for example, and 1Password needed me to authenticate to run it.
Authenticating for ssh and git is great.