We All Dodged a Bullet
Key topics
A recent phishing attack on an npm maintainer compromised widely used packages, but the worst consequences were narrowly avoided, highlighting the vulnerabilities in the npm ecosystem and the need for better security measures. The discussion revolves around the attack, its implications, and potential solutions.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2m after posting
- Peak period: 110 comments in 0-6h
- Avg / period: 13.3 comments
- Based on 160 loaded comments
Key moments
- 01 Story posted: Sep 9, 2025 at 11:11 AM EDT (4 months ago)
- 02 First comment: Sep 9, 2025 at 11:13 AM EDT (2m after posting)
- 03 Peak activity: 110 comments in 0-6h, the hottest window of the conversation
- 04 Latest activity: Sep 12, 2025 at 8:46 PM EDT (4 months ago)
I just try to avoid clicking links in emails generally...
Definitely good practice.
The only real solution is to have domain-bound identities like passkeys.
Always manually open the website.
This week Oracle Cloud started enforcing 2FA. And you can be sure I didn't click their e-mail link to do it.
My theory is that if companies start using that workflow in the future, it'll become even _easier_ for users to click a random link, because they'd go "wow! That's so convenient now!"
I don't just generally try, I _never_ click links in emails from companies, period. It's too dangerous and not actually necessary. If a friend sends me a link, I'll confirm it with them directly before using it.
Tons of people think these kinds of micro-dependencies are harmful, and many of them have been saying so for years.
https://e18e.dev/
Micro-dependencies are not the only thing that went wrong here, but hopefully this is a wakeup call to do some cleaning.
Absolutely. A lot of developers work on a large Enterprise app for years and then scoot off to a different project or company.
What's not fun is being the poor Ops staff that have to deal with supporting the library dependencies, JVM upgrades, etc for decades after.
Having to upgrade after falling off the train is a serious drawback of moving fast.
I don't think it'll make things perfect, not by a long shot. But it can make the exploits a lot harder to pull off.
If a package wants to access the filesystem, shell, OS APIs, sockets, etc., those should be permissions you have to explicitly grant in your code.
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
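A minimal sketch of what such explicit grants could look like, written as TypeScript for illustration — npm has no permissions concept like this today, and the permission names and manifest shape here are invented, not an existing feature:

```typescript
// Hypothetical sketch only: npm has no "permissions" concept like this today.
// The permission names and the grant shape are invented for illustration.

type Permission = "fs:read" | "fs:write" | "net" | "child_process" | "env";

interface DependencyGrant {
  name: string;
  allow: Permission[]; // everything not listed here would be denied
}

// A color parser or date library would get an empty allow-list and could never
// reach the network or filesystem, no matter what a compromised release tries.
const grants: DependencyGrant[] = [
  { name: "some-color-parser", allow: [] },
  { name: "some-http-client", allow: ["net"] },
  { name: "some-config-loader", allow: ["fs:read"] },
];

// Packages that need no I/O at all would be the easy, low-risk majority.
console.log(grants.filter((g) => g.allow.length === 0).map((g) => g.name));
```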
we could literally just take Go and categorize on "imports risky package" and we'd have a better situation than we have now, and it would encourage library design that isolates those risky accesses so people don't worry about them being used. even that much should have been table stakes over a decade ago.
and like:
>No language has such an object or such interfaces in its standard library, and in fact “god objects” are viewed as violating good object oriented design.
sure they do. that's dependency injection, and you'd probably delegate it to a dependency injector (your god object) that resolves permissions. plus go already has an object for it that's passed almost everywhere: context.
perfect isn't necessary. what we have now very nearly everywhere is the most extreme example of "yolo", almost anything would be an improvement.
hence: doesn't sound too bad
"truly needs": currently, yes. but that seems like a fairly easy thing to address with library packaging systems and a language that supports that. static analysis and language design to support it can cover a lot (e.g. go is limited enough that you can handle some just from scanning imports), and "you can ask for something you don't use, it just means people are less likely to use your library" for the exceptions is hardly a problem compared to our current "you already have every permission and nobody knows it".
I think I need to read up more on how to deal with (avoiding) changes to your public APIs when doing dependency injection, because that seems like basically what you're doing in a capability-based module system. I feel like there has to be some way to make such a system more ergonomic and make the common case of e.g. "I just want to give this thing the ability to make any HTTP request" easy, while still allowing for flexibility if you want to lock that down more.
On the other hand, it seems about as hard as I was imagining. I take for granted that it has to be a new language -- you obviously can't add it on top of Python, for example. And obviously it isn't compatible with things like global monkeypatching.
But if a language's built-in functions are built around the idea from the ground up, it seems entirely feasible. Particularly if you make the limits entirely around permissions around data communication -- with disk, sockets, APIs, hardware like webcams and microphones, and "god" permissions like shell or exec commands -- and not about trying to merely constrain resource usage around things like CPU, memory, etc.
If a package is blowing up your memory or CPU, you'll catch it quickly and usually the worst it can do is make your service unavailable. The risk to focus on should be exclusively data access+exfiltration and external data modification, as far as I can tell. A package shouldn't be able to wipe your user folder or post program data to a URL at all unless you give it permission. Which means no filesystem or network calls, no shell access, no linked programs in other languages, etc.
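One way to make the "just give it HTTP" case ergonomic can be sketched in ordinary TypeScript: instead of letting a library reach for the global `fetch`, hand it a capability object that can only GET from a host you chose. In real JavaScript the library can still grab the global, so this is a convention rather than enforcement; a capability-safe language would close that hole. The host and function names below are illustrative.

```typescript
// Sketch: a library receives an HTTP capability instead of ambient authority.

type HttpGet = (path: string) => Promise<Response>;

// The application decides what the capability can reach.
function makeHttpGet(allowedHost: string): HttpGet {
  return (path: string) => {
    const url = new URL(path, `https://${allowedHost}`);
    if (url.hostname !== allowedHost) {
      throw new Error(`blocked: ${url.hostname} is not ${allowedHost}`);
    }
    return fetch(url, { method: "GET" });
  };
}

// The library never imports fetch; it only sees what it was handed.
async function fetchLatestVersion(get: HttpGet, pkg: string): Promise<string> {
  const res = await get(`/${pkg}/latest`);
  const body = await res.json();
  return body.version as string;
}

// Wiring happens once, at the top level -- the "god object" moment.
const registryGet = makeHttpGet("registry.npmjs.org");
fetchLatestVersion(registryGet, "chalk").then(console.log);
```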
there are some rather obvious challenges, but a huge amount of the ones I've run across end up looking mostly like "it's hard to add to an existing language" which is extremely understandable, but hardly a blocker for new ones.
Shoutout to Ryan Dahl and Deno, where you write `#!/usr/bin/env -S deno run --allow-net=api.mydomain.example` at the start of your script to accomplish something similar.
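To make that concrete, a minimal runnable sketch — `--allow-net=<host>` is a real Deno flag, while `api.mydomain.example` is a placeholder host carried over from the comment above:

```typescript
#!/usr/bin/env -S deno run --allow-net=api.mydomain.example
// With the flag above, Deno lets this script talk to api.mydomain.example and
// nothing else; any other network, file, or env access fails with a
// permission error (or prompts, depending on how it's run).

const res = await fetch("https://api.mydomain.example/status");
console.log(await res.text());

// This would be rejected, because example.com was never granted:
// await fetch("https://example.com");
```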
In my amateur programming-conlang hobby that will probably never produce anything joyful to anyone other than me, one of those programming languages has a notion of sending messages to "message-spaces" and I shamelessly steal Doug's idea -- message-spaces have handles that you can use to communicate with them, your I/O is a message sent to your main m-space containing a bunch of handles, you can then pattern-match on that message and make a new handle for a new m-space, provisioned with a pattern-matcher that only listens for, say, HTTP GET/HEAD events directed at the API, and forwards only those to the I/O handle. So then when I give this new handle to someone, they have no way of knowing that it's not fully I/O capable, requests they make to the not-API just sit there blackholed until you get an alert "there are too many unread messages in this m-space" and peek in to see why.
It didn't go well. The JVM did its part well, but they couldn't harden the library APIs. They ended up playing whack-a-mole with a steady stream of library bugs in privileged parts of the system libraries that allowed for sandbox escapes.
There’s no reason a color parser, or a date library should require network or file system access.
A different idea: Special stack frames such that while that frame is on the stack, certain syscalls are prohibited. These "sandbox frames" could be enabled by default for most library calls, or even used by developers to handle untrusted user input.
Versus, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with manually managing them. But it's honestly not that much. And in some ways it becomes a boon for maintainability because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and toil associated with responding to them.
And at least with a standardized package manager, the packages are in a standard format that makes them easier to analyze, audit, etc.
should it be higher friction than npm? probably yes. a permissions system would inherently add a bit (leftpad includes 27 libraries which require permissions "internet" and "sudo", add? [y/N]) which would help a bit I think.
but I'm personally more optimistic about structured code and review signing, e.g. like cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment, everyone has to do it themselves all the time and not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough.
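A rough sketch of what consuming such shared reviews could look like on the npm side, in the spirit of cargo-crev — the `reviews.json` feed and its verdict format are entirely hypothetical; only the `package.json` reading is real:

```typescript
// Sketch: check direct dependencies against a (hypothetical) shared review
// feed. reviews.json is invented for illustration, e.g.
// { "chalk": { "reviewer": "some-org", "verdict": "positive" } }
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const reviews = JSON.parse(readFileSync("reviews.json", "utf8"));

for (const name of Object.keys(pkg.dependencies ?? {})) {
  const review = reviews[name];
  if (!review) console.warn(`${name}: no shared review found`);
  else console.log(`${name}: ${review.verdict} (by ${review.reviewer})`);
}
```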
The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to just comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc.
I do agree that C is an especially-bad case for additional reasons though, yeah.
exceptions totally exist, I've seen them too. I just don't think they're enough to move the median away from "total chaotic garbage" regardless of the system
Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire.
Before AI code generation, we would have called that copy-and-paste, and a code smell compared to proper reuse of a library. It's not any better with AI. That's still code you'd have to maintain, and debug. And duplicated effort from all the other code doing the same thing, and not de-duplicated across the numerous libraries in a dependency tree or on a system, and not benefiting from multiple people collaborating on a common API, and not benefiting from skill transfer across projects...
Smells are changing, friend. Now, when I see a program with 20000 library dependencies that I have to feed into a SAST and SCA system and continually point-version-bump and rebuild, it smells a hell of a lot worse to me than something self-contained.
At this point, I feel like I can protect the latter from being exploited better than the former.
I expect that your future CVEs will say otherwise. People outside your organization have seen those library dependencies, and can update them when they discover bugs or security issues, and you can automatically audit a codebase to make sure it's using a secure version of each dependency.
Bespoke AI-generated code will have bespoke bugs and bespoke security issues.
In the C world, anything that is not direct is often a very stable library and can be brought in as a peer dep. Breaking changes happen less often and you can resolve the tree manually.
In NPM, there are so many little packages that even renowned packages choose to rely on them for no obvious reason. It's a severe lack of discipline.
Nixing javascript in the frontend is a harder sell, sadly
Ruby, Python, and Clojure, though? They weren’t any better than my npm projects, being roughly the same order of magnitude. Same seems to be true for Rust.
Same with Java, if you avoid Spring Boot and similar everything-frameworks, which admittedly is a bit of an uphill battle given the state of Java developers.
You can of course also keep dependencies small in javascript, but it's a very uphill fight where you'll have just a few options, and most people you hire are used to including a library (that includes 10 libraries) to not have to do something like `if (x % 2 == 1)`
Just started with golang... the language is a bit annoying but the dependency culture seems OK
Hey that was also on NPM iirc!
Yeah, stop those cute domain names. I never got the memo on Youtu.be, I just had to "learn" it was okay. Of course people started to let their guard down because dumbasses started to get cute.
We all did dodge a bullet, because we've been installing stuff from NPM with reckless abandon for a while.
Can anyone give me a reason why this wouldn't happen in other ecosystems like Python? Because I really don't feel comfortable if I'm scared to download the most basic of packages. Everything is trust.
Passkeys are disruptive enough that I don't think they need to be mandated for everyone just yet, but I think it might be time for that for people who own critical dependencies.
Absolutely not.
https://www.malwarebytes.com/blog/news/2025/08/clickjack-att...
https://thehackernews.com/2025/08/dom-based-extension-clickj...
https://www.intercede.com/the-dangers-of-password-autofill-a...
There's strong evidence that the latter is a more common concern.
You don't have to believe me, read the links.
Same issue with python, rust etc. It’s all very trust driven
The only solution would be to prevent all releases from being applied immediately.
No hardware keys, no new releases.
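A minimal sketch of the "don't apply releases immediately" idea: check how old each resolved version is before accepting it. The registry's `time` field (mapping versions to publish timestamps) is real public npm metadata; the seven-day cooldown and the crude range handling are arbitrary choices for illustration:

```typescript
// Sketch: warn about dependency versions published less than N days ago.
// Uses the public npm registry packument's "time" field; the 7-day cooldown
// is an arbitrary example threshold.
import { readFileSync } from "node:fs";

const COOLDOWN_DAYS = 7;
const pkg = JSON.parse(readFileSync("package.json", "utf8"));

for (const [name, range] of Object.entries<string>(pkg.dependencies ?? {})) {
  const meta = await (await fetch(`https://registry.npmjs.org/${name}`)).json();
  const version = range.replace(/^[^\d]*/, ""); // crude: strip ^ or ~
  const published = meta.time?.[version];
  if (!published) continue;
  const ageDays = (Date.now() - Date.parse(published)) / 86_400_000;
  if (ageDays < COOLDOWN_DAYS) {
    console.warn(`${name}@${version} is only ${ageDays.toFixed(1)} days old`);
  }
}
```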
They have it implemented.
I created an NPM account today and added a passkey from my laptop and a hardware key as secondary. As I have it configured, it asked me for it while publishing my test package.
So the guy either had TOTP or just the pw.
Seems like it should be easy to implement enforcement.
In the Java world, I know there’s been griping from mostly juniors re “why isn’t Maven easy like npm?” (I work with some of these people). I point them to this article: https://www.sonatype.com/blog/why-namespacing-matters-in-pub...
Maven got a lot of things right back in the day. Yes POM files are in xml and we all know xml sucks etc, but aside from that the stodgy focus on robustness and carefully considered change gets more impressive all the time.
Cool, get big enough, become friends with the right people and you can squat an entire name on the internet. What, you're the Nepalese Party for Marxists, you've existed for 70 years and you want to buy npm.np? Nope, tough luck, some random dude pushes shitty javascript packages over there. Sorry for the existing npm.org address too, we're going to expropriate the National Association of Pastoral Musicians. Dare I remind you that the whole left-pad situation was because Kik, the company, stole (with NPM's assistance, because they were big enough and friends with the right people) the kik package?
At least they're paying dozens of millions to buy a shitty-ass .google that no one cares about, because more and more browsers are hiding the URL bar. I'm glad ICANN can use it to buy drinks and hookers instead of being useful.
I think you and I have drastically different ideas about how dramatic a response is warranted by the scenario of needing to buy a domain with a different three letters or maybe even four or more letters before the TLD.
> Dare I remind you that the whole left-pad situation was because Kik, the company, stole (with NPM's assistance because they were big enough and friends with the right people) the kik package?
...and then the package was entirely removed, which would have been preventable by sane policies around making removal just not allow new dependencies to use it. You're also conflating a resource that's ostensibly free and perpetual for people to claim with one that's only rented for fixed periods of time for money.
And then never even did anything with it.
That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.
These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.
Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!
Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.
A week later some executive pushing the training emailed the entire company saying that it was unacceptable that nobody from engineering had logged into the training site and spun some story about regulatory requirements. After lots of back and forth they still wouldn't accept that it obviously looked like a phishing email.
Eventually when we actually did the training, it literally told us to check the From address of emails. I sometimes wonder if it was some weird kind of performance art.
“We got pwned but the entire company went through a certified phishing awareness program and we have a DPI firewall. Nothing more we could have done, we’re not liable.”
[0] ...so the payments serve the social function of enriching your buddy and improving your status in the whole favor economy thing...
Our infra guy then had to argue with them for quite a while to just email from their own domain, and that no, we weren't going to add their cert to our DNS and let a third party spoof us (or however that works, idk). Absolutely shocking lack of self-awareness.
Title: "Expense report overdue - Please fill now"
Subject:
<empty body>
<Link to document trying its best to look like Google's attachment icon but was actually a hyperlink to a site that asked me to log in with my corporate credentials>
---
So like, obviously this is a stupid phishing email, right? Especially as at this time, I had not used my corporate card.
A few weeks later I got the finance team reaching out threatening to cancel my corporate card because I had charges on it with no corresponding expense report filed.
So on checking the charge history for the corporate card, it was the annual tax payment that all cards are charged in my country every year, and finance should have been well aware of. Of course, then the expense system initially rejected my report because I couldn't provide a receipt, as the card provider automatically deducts this charge with no manual action on the card owner's side...
Edit: nvm it seems it's not the case
You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.
> It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link.
Any time I feel like I'm being rushed, I check deeper. It would help if everyone's official communications only came from the most well known domain (or subdomain).
Heuristics like this one should be performed automatically by the email client.
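A sketch of one such automatic check — flag links whose host doesn't fall under the sender's domain. Deliberately simplified: a real client would need the Public Suffix List, handling of link-tracking redirectors, and a DMARC-verified sender identity, and as the surrounding comments note, a lookalike domain the attacker actually owns passes its own check.

```typescript
// Sketch: flag emails whose links point outside the sender's domain.

function baseDomain(hostname: string): string {
  // Naive "last two labels" heuristic; wrong for e.g. .co.uk without a PSL.
  return hostname.split(".").slice(-2).join(".");
}

function suspiciousLinks(fromAddress: string, linkUrls: string[]): string[] {
  const senderDomain = baseDomain(fromAddress.split("@")[1].toLowerCase());
  return linkUrls.filter((link) => {
    const host = new URL(link).hostname.toLowerCase();
    return baseDomain(host) !== senderDomain;
  });
}

// A consistent lookalike domain slips through; a mismatch gets flagged.
console.log(suspiciousLinks("support@npmjs.help", ["https://npmjs.help/login"])); // []
console.log(suspiciousLinks("support@npmjs.com", ["https://npmjs.help/login"])); // flagged
```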
I agree that larger players especially should be proactive and register their names under similar-sounding TLDs to mitigate such phishing attacks, but attacks can't be outright prevented this way.
The sense of urgency is always the red flag.
but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2
ok, maybe there is some browser cache issue, whatever, so you trigger your password manager to provide your auth to the website -- but here, every single password manager would immediately notice that the domain in the browser does not match the domain associated with the auth creds, and either refuse to paste the creds thru, or at an absolute minimum throw up a big honkin' alert that something is amiss, which you'd need to explicitly click an "ignore" button to get past. red flag 3
nobody should be able to publish new versions of widely-used software without some kind of manual review/oversight in the first place, but even ignoring that, if someone does have that power, and they get pwned by an attack like this, with at least 3 clear red flags that they would need to have explicitly ignored/bypassed, then CLEARLY this person cannot keep their current position of authority
The link went to the same domain as the From address. The URL scheme was 1:1 identical to the real npm's.
> but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2
Why wouldn't I be? I don't stay logged into npm at all.
the from: address in every email is an arbitrary and unverified text string that the sender provides, anyone can send an email to anyone else and specify a from: president@whitehouse.gov and that's how it will show up to the recipient
what do you mean by the URL scheme? a URL scheme is the http or https part of it? and for sure the host part of the URL was not the same as the real npm's host part of their URL?
i'm not sure what this comment is trying to accomplish, it parses as FUD
DKIM et al came back clean.
As for URL scheme, I mean the format and layout of URLs - because it was an MITM attack, they matched 1:1.
> As for URL scheme, I mean the format and layout of URLs - because it was an MITM attack, they matched 1:1.
"scheme" is a well-defined domain term that refers to the e.g. `https://` part of a URL/URI -- but that aside, I still don't get what you're saying here? what is "format and layout of URLs" and how does that relate to "mitm attack"?
to cut to the chase, a malicious email maybe contains a link, to a URL, that a victim can click on. but if that link says it goes to `https://npm.org` then it actually does go to `https://npm.org` and there isn't like any special secret way for an email to hijack or mitm that domain or URL resolution. if the link is actually `https://npn.org` then that's a totally different thing, it's not a mitm attack, there is no concept of "format or layout" of that totally different URL "matching 1:1" with `https://npm.org` -- unless you're talking about something totally different to what I'm understanding?
edit: wait are we talking about an email sent from a domain `npmjs.help`? DKIM and DMARC and URL scheme validation don't even enter the picture here, this was no kind of mitm attack by any definition -- "npmjs.help" is clear-as-day a malicious domain, and any email from it a clear-as-day phishing attempt.. ! it's fine, we're all human and etc. but it just underscores the issue here being minimizing blast radius of failures, and not anything related to any specific user/human
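For the terminology being argued over, this is how the platform itself splits a URL apart: the scheme (protocol) and path layout can match the real site exactly while the hostname is the only part that gives the lookalike away (paths below are illustrative, not actual npm routes):

```typescript
// The WHATWG URL parser, available in browsers, Node, and Deno.
const real = new URL("https://www.npmjs.com/login");
const phish = new URL("https://npmjs.help/login");

console.log(real.protocol, phish.protocol); // "https:" "https:"   -- same scheme
console.log(real.pathname, phish.pathname); // "/login" "/login"   -- same layout
console.log(real.hostname, phish.hostname); // "www.npmjs.com" vs "npmjs.help"
```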
1. My email client does the validation of certain integrity and security checks and shows a checkmark next to senders that pass. Since npmjs.help was a domain legitimately owned by the attackers, it passed.
2. The link in the email led to their site at the same domain, most likely performing a MITM between my browser and npm's official servers.
3. You're arguing semantics about "scheme". Please try to understand what I'm attempting to convey: The URLs appeared to match the official npm's site. There was no <a href> trickery. Once I had it in my head (erroneously) that .help was fine, nothing else about the attack stood out as suspicious when it came to the URL or domains.
4. Emails themselves are not MITM attacks, no. I didn't respond to an email with my credentials. I would never do that. But that isn't what I've ever claimed to have happened.
5. The URLs being similar or identical to npm's isn't how they technically achieved the MITM. The URLs being similar was to avoid arousing suspicion.
Hopefully that's explanatory enough.
The domain "npmjs.help" is pretty clearly malicious at a glance, just from the ".help" TLD alone, but yeah as you say
> Once I had it in my head (erroneously) that .help was fine, nothing else about the attack stood out as suspicious when it came to the URL or domains.
well except that presumably you clicked on a npmjs.help link and the new tab ended up at npmjs.com? but yeah it's a tough break, don't mean to needle you, hopefully learning experience
It was obviously good enough.
Snark aside, you only need to trick one person once and you've won.
You should still never click a link in an email like this, but the urgency factor is well done here
No bank, and almost no large corporation, goes directly to artifact/package repos. They all host them internally.
Really, the killer combo would be to have some kind of LLM-based tool that would scan someone's artifactory. Something smart enough to notice that code changed, and there's code for accessing a crypto-wallet, etc. This would be too expensive for npmjs to host for free, but I could see it happening for hosted Artifactory dependencies.
Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.
That, and like others have said... never clicking links in emails.
"Hey, is it still broken? No? Great!"
Better defense would be to delete or quarantine the compromised versions, fail to build and escalate to a human for zero-day defense.
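A sketch of that kind of quarantine tripwire as a CI step, assuming the lockfile is the source of truth — the blocklisted versions below are placeholders, not the actual compromised releases:

```typescript
// CI tripwire sketch: fail the build if the lockfile resolves any package to a
// quarantined version. Versions here are placeholders, not real advisories.
import { readFileSync } from "node:fs";

const BLOCKLIST: Record<string, string[]> = {
  chalk: ["0.0.0-compromised"],
  debug: ["0.0.0-compromised"],
};

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
let bad = false;

for (const [path, info] of Object.entries<any>(lock.packages ?? {})) {
  const name = path.split("node_modules/").pop() ?? "";
  if (BLOCKLIST[name]?.includes(info.version)) {
    console.error(`Blocked: ${name}@${info.version} is quarantined`);
    bad = true;
  }
}

if (bad) process.exit(1); // fail the build and escalate to a human
```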
NPM debug and chalk packages compromised
https://news.ycombinator.com/item?id=45169657
324 more comments available on Hacker News