Shai-Hulud Malware Attack: Tinycolor and Over 40 Npm Packages Compromised
Key topics
Socket:
- Sep 15 (First post on breach): https://socket.dev/blog/tinycolor-supply-chain-attack-affect...
- Sep 16: https://socket.dev/blog/ongoing-supply-chain-attack-targets-...
- StepSecurity: https://www.stepsecurity.io/blog/ctrl-tinycolor-and-40-npm-p...
- Aikido: https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
- Ox: https://www.ox.security/blog/npm-2-0-hack-40-npm-packages-hi...
- Safety: https://www.getsafety.com/blog-posts/shai-hulud-npm-attack
- Phoenix: https://phoenix.security/npm-tinycolor-compromise/
- Semgrep: https://semgrep.dev/blog/2025/security-advisory-npm-packages...
The 'Shai-Hulud' malware attack compromised over 40 NPM packages, including Tinycolor, highlighting the ongoing security risks in the JavaScript ecosystem and sparking discussions on potential solutions and mitigations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 14m after posting
- Peak period: 138 comments in 0-12h
- Average per period: 22.9 comments
Based on 160 loaded comments
Key moments
- 01 Story posted: Sep 16, 2025 at 7:22 AM EDT (4 months ago)
- 02 First comment: Sep 16, 2025 at 7:36 AM EDT (14m after posting)
- 03 Peak activity: 138 comments in 0-12h (hottest window of the conversation)
- 04 Latest activity: Sep 20, 2025 at 2:44 PM EDT (4 months ago)
NPM packages are used by huge Electron apps like Discord, Slack, and VS Code; the holy grail would be to somehow slip something inside one of them.
In our line-of-business .NET app, we have a logger, a database, a unit-testing framework, and a driver for some specialty hardware. We upgrade to the latest version of each external dependency about once per year (every major version) to avoid accruing tech debt. They're all pinned and locally hosted; NuGet exists, but we (like most .NET developers) don't use it to the extent that npm devs do. We read the changelogs (all four of them!) and update manually.
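For illustration, "pinned and locally hosted" in NuGet can look like the sketch below; the package names and versions are just examples. Bracket notation requests an exact version rather than a floating minimum:

```xml
<!-- .csproj fragment: exact versions, no floating ranges like 8.* -->
<ItemGroup>
  <PackageReference Include="Serilog" Version="[4.2.0]" />
  <PackageReference Include="Dapper" Version="[2.1.35]" />
</ItemGroup>
```

Paired with a NuGet.config whose `<packageSources>` section contains `<clear />` plus a single locally hosted feed, nothing resolves from nuget.org directly.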
I understand that the NPM ecosystem works differently from a "batteries included" .NET environment for a desktop app, but it's not just about where the users are. Line-of-business code in .NET and Java apps processes a lot of important data. Slipping a malicious package into PyPI could expose all kinds of juicy, proprietary data, but again, it's less about the existence of a package manager and more about when and how you use it.
> In July 2024, Bittensor users were the victims of an $8 million hack. The Bittensor hack was an example of a supply chain hack using PyPI. PyPI is a site that hosts packages for the Python programming language
https://www.halborn.com/blog/post/explained-the-bittensor-ha...
It's the new pragmatic choice for web apps, and so everyone is using it, from battle-hardened teams to total noobs to people who just don't give a shit. It reminds me of WordPress from 10 years ago, when it was the go-to platform for cheap new websites.
I would argue that is only one of the many issues with the JS/TS/NPM ecosystem. Many of the other problems have been normalized. The constant security issues are highly visible.
Where did you see that number? Maven Central says it has about 18 million packages [1]. Maybe with all versions of those 18 million packages there are about 62 million artifacts?
While the Java ecosystem is vastly larger, in Java (with Maven, Gradle, Bazel, etc.) it is not common to use really small libraries. So you end up with vastly less transitive dependencies in your projects.
[1] https://mvnrepository.com/repos/central
Adding dependencies comes with advantages and downsides. You need to strike a balance between them. External libraries can help implement things that you better don't implement yourself, so the answer is certainly not "no dependencies". But there are downsides and risks, and the risks grow with the number of dependencies.
In the world of NPM, people think those simple truths don't apply to them and the downsides and risks of dependencies can be ignored. Then you end up with thousands of transitive dependencies.
They were wrong, and now they're learning it the hard way.
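A quick way to see how large that transitive tree actually is, sketched with npm's own tooling (flags per npm v8+):

```sh
# List every resolved package path in the tree, dedupe, count:
npm ls --all --parseable | sort -u | wc -l
```

On a typical React or Next.js project this lands in the hundreds to thousands, each entry being code someone else can publish a new version of.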
Node should have shipped "batteries included" after the left-pad incident. There was a boneheaded attachment to a small stdlib, which you could put down to youthful innocence, except that it's been almost 10 years.
The TC39 committee, which controls the design of the JS stdlib, and the Node maintainers basically act like the other one doesn't exist.
NPM was never designed with security in mind. It's a dirty hack that somehow became the most popular package manager.
The dependency hell is a reflection of the massive egos of the people involved in the multiple organizations. Python doesn't have this problem because it's all centralized under one org with a single vision.
It's just javascript being javascript.
The problem with that guy is that the dependencies are useless to everyone except his ego.
Maybe some of these systems have better protection from counterfeiting, and probably they all should. But as the number of packages you use goes up, the surface area does too. As a Node developer the… permissiveness of the culture has always concerned me.
The trick with playing with fire is understanding how fire works, respecting it, and keeping the tricks small. The bigger you go, the more the danger.
Its users don't check who the email is from
The community is very happy to pick up helper libraries, and by the time you get all the way up the tree in a React framework you have hundreds or even thousands of packages.
If you're sensible you can be fine, just like in any other ecosystem, but you're limited, because one wrong package and you've just ballooned your dependency tree by hundreds, which lowers the value of the ecosystem.
Node doesn't have a standard library, and until recently it didn't even have a test runner, which certainly doesn't help.
If you're sensible with Node or Deno* you'll be somewhat insulated from all this nonsense.
*Deno has linting, formatting, testing & a standard library, which is a massive help (and a permission system, so packages can't do whatever they want)
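For illustration, Deno's permission flags look like this; the script name, host, and paths are made up:

```sh
# Deno is deny-by-default: each capability must be granted explicitly.
# This script gets file reads under ./data and one API host, nothing else:
deno run --allow-read=./data --allow-net=api.example.com main.ts
```

Any transitive dependency that tries to read `~/.aws/credentials` or phone home to another host fails with a PermissionDenied error (or triggers an interactive prompt) instead of silently succeeding.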
This. But the problem seems to go way deeper than npm or whatever package manager is used. I mean, why is anyone consuming a package like colors or tinycolor? Do projects really need to drag in a random dependency to handle these use cases?
There will always be packages that for some people are "but why?" but for others are "thank god I don't have to deal with that myself". Sure, colors and whatnot are tiny packages we could probably do without, but what are you really suggesting here? That someone sits and reviews every published package and rejects it if it doesn't fit your ideal?
There's some ignorance in your comment. If you read up on debug & chalk supply chain attack, you'll end up discovering that the attacker gained control of the account through plain old phishing. Through a 2FA reset email, to boot.
What exactly do you expect the likes of Microsoft to do if users hand over their access to third parties? Do you want to fix issues or to pile onto the usual targets?
But the issue isn't just about the “thank god I don't have to deal with that myself” perspective. It's more about asking: do you actually need a dependency, or do you simply want it?
A lot of developers, especially newer ones, tend to blur that distinction. The result is an inflated dependency tree that unnecessarily increases the attack surface for malware.
The "ship fast at all costs" mindset that dominates many startups only makes this worse, since it encourages pulling in packages without much thought to long-term risk.
Why are React devs pulling object utils from lodash instead of reimplementing them?
What leads you to believe React is not well suited to simple ecommerce sites?
2. Extensive ecommerce experience including Disney, Carnival Cruises, Booking, TUI, and some of the European leaders in real estate and professional home building tools among the others.
Strongly disagree. React is not about interactivity, but reactivity. If you have to consume an API and update your app based on the responses, React does all the heavy lifting for you without requiring full page reloads.
On top of that, and as a nice perk, React also gives you all the tools you will ever need to optimize perceived performance.
Claiming that a tool designed for reactive programming is not suited for the happy flow of reactive programming is simply fundamentally wrong.
2. Ecommerces are not highly dynamic pages. They are overwhelmingly static content with an occasional configurator/cart/search. All things that can be embedded with whatever library you like (including React), or even better none at all.
3. SEO and performance are what really matter in ecommerce. The only minor exceptions are shops like Amazon or Airbnb, but that's unrelated to their SEO and performance.
4. I've been writing React and ecommerces using React and similar with millions of daily users for a decade :)
Stuff like intents: "this is a math library; it is not allowed to access the network or filesystem".
At a higher level, you have app sandboxing, like on phones or Apple/Windows store. Sandboxed desktop apps are quite hated by developers - my app should be allowed to do whatever the fuck it wants.
I would have thought it wouldn't be too hard to design a capability system in JS. I bet someone has done it already.
Of course, it's not going to be compatible with any existing JS libraries. That's the problem.
When declaring dependencies, you'd also declare the permissions of those dependencies. So a package like `tinycolor` would never need network or disk access.
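npm has no such field today, so the following is a purely hypothetical package.json sketch of what declaring per-dependency permissions could look like (`some-http-client` is an invented name; `tinycolor2` is the real color library):

```json
{
  "dependencies": {
    "tinycolor2": "^1.6.0",
    "some-http-client": "^2.0.0"
  },
  "permissions": {
    "tinycolor2": [],
    "some-http-client": ["net"]
  }
}
```

The installer or runtime would then refuse network and filesystem access from any module not granted it, so a compromised tinycolor release would have nowhere to exfiltrate to.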
Clojars (run by volunteers, AFAIK) has been doing signatures since forever; not sure why it's so difficult for Microsoft to follow their own yearly proclamation that "security is our top concern".
> The NPM CI tokens that don't require 2fa kind of makes it less useful though
Use OIDC to publish packages instead of having tokens around that can be stolen or leaked https://docs.npmjs.com/trusted-publishers
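A rough sketch of what trusted publishing looks like from GitHub Actions. This assumes the trusted publisher is already configured for the package on npmjs.com and a recent npm CLI with OIDC support; details may differ per setup:

```yaml
name: release
on:
  push:
    tags: ["v*"]
permissions:
  id-token: write   # lets the job mint a short-lived OIDC token
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish   # no long-lived NPM_TOKEN on disk to steal
```

The point is that there is no durable credential anywhere: the registry trusts a short-lived identity token minted per run, so there is nothing for malware on a laptop to exfiltrate.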
Don't let code have access to things it's not supposed to access.
It's actually that simple. If you implemented a function which formats a string, it should not have access to `readFile`, for example.
Retrofitting that into JS isn't possible, though, as the language is way too dynamic; self-modifying code, reflection, etc. mean there's no isolation between modules.
In a less dynamic language it might be as easy as maintaining a whitelist of imports.
The situation gets better in monadic environments (you can't readFile without the IO monad, and you can't call anything which would read it).
In programming languages which are "static" (or, basically, sane), you can identify all imports of a module/library and ban anything which isn't the "pure" part of the stdlib.
If your module needs to work with files, it will receive an object which lets it work with files.
A lot of programming languages implement the object-capability model: https://en.m.wikipedia.org/wiki/Object-capability_model. It doesn't seem to be hard at all. It's just that programmers have a preference for shittier languages, just like they prefer C, which doesn't even have language-level array bounds checking (for lack of a "dynamic array" concept at the language level).
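A minimal TypeScript sketch of the capability-passing style described above; the names are illustrative. Library code never imports `node:fs` itself and can only touch what it is explicitly handed:

```ts
import { readFile } from "node:fs/promises";

// The narrow capability a module may receive:
interface FileReader {
  read(path: string): Promise<string>;
}

// A pure helper needs no capabilities at all:
function formatGreeting(name: string): string {
  return `Hello, ${name}!`;
}

// A config loader declares exactly the capability it needs:
async function loadConfig(fs: FileReader, path: string): Promise<unknown> {
  return JSON.parse(await fs.read(path));
}

// Only the application root holds ambient authority and wires it up:
const realFs: FileReader = { read: (p) => readFile(p, "utf8") };
await loadConfig(realFs, "./config.json");
```

Under this discipline a compromised `formatGreeting`-style utility has no path to the filesystem at all; the type system won't hand it one. JS can't enforce this today because any module can simply `require("fs")` on its own.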
I think it's sort of orthogonal to "pure functional"/monadic: if you have unrestricted imports, you can import some shit like unsafePerformIO, right? You have another level of control, of course (i.e., you just need to ban unsafePerformIO and look for unlicensed IO), but I don't feel like ocap requires Haskell.
This is something I'm trying to polish for my system now, but the idea is: yarn (and bundler and others) needs to talk only to the repositories. That means `yarn install` is only allowed outbound connections to localhost, where a proxy for packages runs. It can only write to tmp, its caches, and the current project's node_modules. It cannot read home files beyond specified ones (like .yarnrc). The alias to yarn strips the cloud credentials. All tokens used for installation are read-only. Then you have to do the same for the projects themselves.
On Linux, SELinux can do this. On Mac, you have to fight a long battle with sandbox-exec, but it's kinda-maybe working. (If it gained "allow exec with specified profile", it would be so much better.)
But you may have guessed from the description so far - it's all very environment dependent, time sink-y, and often annoying. It will explode on issues though - try to touch ~/.aws/credentials for example and yarn will get killed and reported - which is exactly what we want.
But internally? The whole environment would have to be redone from scratch. Right now, package installation will run any code it wants. It will compile extensions with gyp, which is another avenue for custom code to run. The whole system relies on arbitrary code execution and hopes it's secure. (It never will be.) Capabilities are a fun idea, but they would have to be seriously improved and scoped to work here.
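For the macOS side, a sandbox-exec profile can express roughly this policy. The syntax is undocumented and the tool is deprecated, so treat this as a rough sketch only; the paths and the registry port (Verdaccio's default, 4873) are illustrative:

```
;; yarn.sb - rough sketch; real profiles need more allowances
(version 1)
(deny default)
(allow process-exec)
(allow process-fork)
(allow file-read* (subpath "/usr/local") (subpath "/private/tmp"))
(allow file-write* (subpath "/private/tmp")
                   (subpath "/Users/me/project/node_modules"))
;; only the local package proxy is reachable:
(allow network-outbound (remote tcp "localhost:4873"))
```

Run as `sandbox-exec -f yarn.sb yarn install`. In practice you end up granting many more reads (dyld caches, /dev, node itself), which is exactly the time-sink part described above.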
How many tokens do you have lying around in your home directory in plain text, able to be read by anything on your computer running as your user?
Zero? How many developers have plaintext tokens lying around on disk? Avoiding that has been hammered into me by every developer more senior than me since I got involved with professional software development.
The approach also depends on the project. There are a bunch of different approaches, and I don't think there is one that would work for every project; sometimes it requires some wrangling, but that takes 5-10 minutes tops.
Some basic information about how you could make it work with 1Password: https://developer.1password.com/docs/cli/secrets-environment...
Edit: Testing 1Password myself, with the 1Password desktop app and shell integration: if I have authed myself once in the shell, then "spawn" would be able to get all of my credentials from 1Password.
So I'm not actually sure how much better than plaintext that is. Unless you use service accounts there.
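For reference, the 1Password CLI approach looks roughly like this; the vault and item names are made up. The .env file on disk contains only references, never values:

```sh
# .env holds pointers into the vault, not secrets:
#   NPM_TOKEN="op://dev-vault/npm-publish/token"

# `op run` resolves the references and injects them into the child
# process's environment for the duration of the command only:
op run --env-file=.env -- npm publish
```

As the edit above notes, though: once the desktop session is unlocked, anything running as your user can make the same request, which is why service accounts or per-use confirmation prompts matter.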
A repository that an attacker only needs to get access to once, after which they can mount offline attacks against it at their leisure.
A repository that contains the history of changed values, possibly making the latter easier if you used the same encryption secret for rotated values.
This is an awful idea. Use a proper secret management tool you need to authenticate to using OIDC or Passkeys, and load secrets at runtime within the process. Everything else is dangerous.
With that said, it's not impossible some tool leaks their secrets into ~/.local, ~/.cache or ~/.config I suppose.
I thought they were referencing the common approach of adding environment variables with plaintext secrets to your shell config, or as an individual file in $HOME, which has been a big no-no for as long as I can remember.
I guess I'd reword it to "I'm not manually putting any cleartext secrets on disk" or something instead, if we wanted it to be 100% accurate.
Most of them. Mainly on purpose (.env files), but many also accidentally (shell history with tokens in the commands).
I recommend Envie: https://github.com/ilmari-h/envie
It's more convenient than having a bunch of .env.prod, .env.staging files laying around, not to mention more secure.
Your program (or your shell) opens. It runs a program to ask the password manager for a secret. Your password manager prompts you to authorize unsealing the secret. You accept or deny. The secret is passed to the program that asked for it. Works very well with 1Password and tools like git, ssh, etc, or simply exporting the secret to an environment variable, either in a script or bashrc file.
Other programs also support OIDC, such as with git credential helper plugins, or aws sso auth.
The extent to which any of this is actually implemented varies wildly between different OSes, ecosystems and tools. On macOS, docker desktop does quite well here. There's also an app called Secretive which does even better for SSH keys - generating a non-exportable key in the CPU's secure enclave. It can even optionally prompt for login password or fingerprint before allowing the key to be used. It's practically almost as secure as using a separate hardware token for SSH but significantly more convenient.
In contrast, most of the time the only thing protecting the keys in your CI vault from being exfiltrated is that the malware needs to know the specific name / API call / whatever to read them. In plenty of CI systems you don't even need that, because the build script that uses the secrets will read them into environment variables before starting the build proper.
Frankly, our desktop OSes are not fit for purpose anymore. It's nuts that everything I run can instantly own my entire user account.
It's the old https://xkcd.com/1200/ . That's from 2013 and what little (Flatpak, etc.) has changed has only changed for end users - not developers.
I try to avoid JS, as it is a horrible language by design. That does include TS, which at least is usable, but barely, because it's still tied to JS itself.
Still, even I who'd call myself a JavaScript developer also try to avoid desktop applications made with just JS :)
It is full of gotchas that serve no purpose nowadays.
Also remember that it is basically a Lisp wearing a Java skin, originally designed in less than two weeks.
TypeScript is one of the few things that puts up a safety barrier and sane static error checking, making JS bearable to use, but it still has to compile down to how JS works in the end, so it suffers from the same core architectural problems.
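A small example of the barrier in question; the function is invented for illustration. The mistake is caught at compile time instead of becoming NaN somewhere downstream at runtime:

```ts
function scale(value: number, factor: number): number {
  return value * factor;
}

scale(10, 2); // ok

// scale("10", 2);
// ^ error TS2345: Argument of type 'string' is not assignable
//   to parameter of type 'number'.
// Plain JS would happily run this and return "1010" * nothing useful.
```

But as noted, the checks stop at the type level: the emitted JS still carries coercion, prototype mutation, and the rest of the runtime semantics underneath.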
What some people see as a fault, others see as a feature :) For me, that's there to prevent an entire website from breaking because some small widget in the bottom-right corner breaks, for example. Rather than stopping the entire runtime, it just surfaces the error in the developer tools and lets the rest continue working.
Then of course entire web apps crash because of one tiny error somewhere (remember seeing a blank page with just some short error text in black in the middle? Those), but that doesn't mean it's the best way of doing things.
> Also remember that it is basically a Lisp wearing Java skin on top
I guess that's why I like it better than TS, which tries to move it away from that. I mainly do Clojure development day-to-day, and static types hardly ever give me more "safety" than other approaches do. But again, what I do isn't more "correct" than what anyone else does; it's largely based on "it's better for me to program this way".
TS is just too much overhead for the marginal gains.
The issue is that it prevents that, but also lets you send completely corrupt data forward, which can create a horrible cascade of errors down the pipeline, because other components made assumptions about the correctness of the data passed to them.
Such display errors should be caught early in development, should be tested, and should never reach prod instead of being swept under the rug, for anything other than a prototype.
But I agree: going fully functional with dynamic types beats the average JS experience any day. It is just piling more mud onto a giant mudball.
Care to explain why?
My view is this: since you can write plain JS inside TS (just misconfigure tsconfig badly enough), I honestly don’t see how you arrive at that conclusion.
I can just about understand preferring JS on the grounds that it runs without a compile step. But I’ve never seen a convincing explanation of why the language itself is supposedly better.
systems? rust - but it is still far from perfect; too much focus on saving a few keystrokes here and there.
general purpose corporate development? c# - despite the current direction post-.NET 5 of stapling legacy parts of .NET Framework onto .NET Core, it does most things well enough.
scripting, and just scripting? python.
web? there's only one, bad, option and that's js/ts.
most hated ones are, in order: js, go, c++, python.
go is extremely infuriating; there was a submission on HN that perfectly encapsulated my feelings about it after writing it for a while: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...
We fcked up with JS big time, and it's with us forever now.
But apparently they only made it do like 95% of what JS does, so you can't actually replace JS with it. To me it seems like a huge blunder. I don't give a crap about making niche applications a bit faster, but freeing the web from the curse of JS would be absolutely huge. And they basically did it, except not quite. It's so strange to me; why not just go the extra 5%?
Any attempt to bypass this will be perilous.
The only way to remove JS is to create a new browser that doesn't use it. That fragments the web, yes, and probably nobody will use it.
https://github.com/wasm-bindgen/wasm-bindgen https://docs.rs/web-sys/latest/web_sys/
This is also being worked on; in the future, this 5% of glue might eventually disappear entirely:
> Designed with the "Web IDL bindings" proposal in mind. Eventually, there won't be any JavaScript shims between Rust-generated wasm functions and native DOM methods
A Go desktop app rendered with OpenGL: 39MB starting size.
So it's smaller than a GPU-rendered desktop app, and I get to use CSS for styling, which is very powerful.
Unix had a big scare last year because of XZ Utils.
https://en.wikipedia.org/wiki/XZ_Utils_backdoor
I _know_ many don’t. In fact suggesting doing it is a good way to be looked at like a crazy person and be told something like “this is a yes place not a no place.”
We'd have to rely on the developer to notice and to check every line of code they ship, which might be the norm but certainly isn't the case 100% of the time.
Most of my career, Node.js has paid the bills, and I'm very grateful to fate for that; but I have also worked in C/asm/etc. for embedded firmware. Implying that the JS ecosystem is comprised only of terrible devs is classic gatekeeping, holier-than-thou type shit.
You realize my point, right? People are taught not to reinvent the wheel at work (mostly for good reasons), so that's what they do, you and me included.
You aren't going to bother writing HTML and manual DOM manipulation; the people who give you libraries to do so won't bother reimplementing parsers and file watchers; file-watcher authors won't bother reimplementing file-system utils; file-system-util developers won't bother reimplementing structured cloning or event loops; etc., etc.
I myself just the other day had the task of converting HTML to Markdown, because I don't remember whether it was the Jira or the GitHub API that returns comments as HTML. Despite it being mostly a few hours of work that would get us 90% of the way there, everybody was in favor of pulling in a dependency (with its own dependencies) to do it, further exposing our application to those risks.
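For a sense of scale, the "90% in a few hours" version really is small. A deliberately minimal, regex-based sketch (a real implementation should use an HTML parser; tag coverage here is intentionally incomplete):

```ts
// Minimal HTML -> Markdown sketch: handles a handful of tags, ignores
// nesting edge cases, and strips anything it doesn't recognize.
function htmlToMarkdown(html: string): string {
  return html
    .replace(/<strong>([\s\S]*?)<\/strong>/g, "**$1**")
    .replace(/<em>([\s\S]*?)<\/em>/g, "*$1*")
    .replace(/<code>([\s\S]*?)<\/code>/g, "`$1`")
    .replace(/<a\s+href="([^"]*)"[^>]*>([\s\S]*?)<\/a>/g, "[$2]($1)")
    .replace(/<li[^>]*>([\s\S]*?)<\/li>/g, "- $1\n")
    .replace(/<br\s*\/?>/g, "\n")
    .replace(/<\/?(p|ul|ol|div)[^>]*>/g, "\n")
    .replace(/<[^>]+>/g, "")       // drop unhandled tags
    .replace(/\n{3,}/g, "\n\n")    // collapse blank runs
    .trim();
}

// htmlToMarkdown('<p>See <a href="https://x.test">docs</a></p>')
// => "See [docs](https://x.test)"
```

Whether the remaining 10% (tables, nested lists, escaping) justifies a dependency tree is exactly the trade-off being argued about here.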
https://github.com/williamcotton/markdown-to-html-llm
> I myself just the other day had the task of converting HTML to markdown
> you could write an HTML to markdown library in half a day
https://news.ycombinator.com/item?id=45256210
These attacks may just be the final push I needed to take server rendering (without JS) more seriously. The htmx folks convinced me that I can get REALLY far without any JavaScript, and my apps will probably be faster and less janky anyway.
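To illustrate the htmx style: one attribute-driven request, the server returns an HTML fragment, and there is no hand-written JavaScript. The endpoint and pinned version are illustrative:

```html
<script src="https://unpkg.com/htmx.org@2.0.4"></script>

<!-- Clicking POSTs to the server; the returned HTML fragment
     replaces the contents of #cart. -->
<button hx-post="/cart/items" hx-target="#cart" hx-swap="innerHTML">
  Add to cart
</button>
<div id="cart"><!-- server-rendered fragment lands here --></div>
```

The client-side dependency surface shrinks to a single library, though, as the replies below note, htmx itself is still an npm-published project with its own dev dependencies.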
> Server-side-rendering without JavaScript is just back to the stuff Perl and PHP give you.
As well as Ruby, Python, Go, etc.
There is a difference, but it's not an order of magnitude and neither is a true island.
Granted, deciding not to use JS on the server is reasonable in the context of this article, but for the client htmx is as much a js lib with (dev) dependencies as any other.
https://github.com/bigskysoftware/htmx/blob/master/package.j...
https://github.com/vuejs/core/blob/main/package.json
Unless this lack of scrutiny is exclusive to the JavaScript ecosystem, this attack could just as well have happened in Rust or Go.