A Postmark Backdoor That’s Downloading Emails
Posted3 months agoActive3 months ago
koi.securityTechstoryHigh profile
heatednegative
Debate
80/100
Supply Chain AttackNpm SecurityEmail Security
Key topics
Supply Chain Attack
Npm Security
Email Security
A malicious backdoor was discovered in a Postmark MCP npm package, leading to discussions on npm security, supply chain attacks, and the risks of using third-party libraries.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussionFirst comment
18m
Peak period
89
0-6h
Avg / period
20
Comment distribution160 data points
Loading chart...
Based on 160 loaded comments
Key moments
- 01Story posted
Sep 27, 2025 at 10:23 AM EDT
3 months ago
Step 01 - 02First comment
Sep 27, 2025 at 10:40 AM EDT
18m after posting
Step 02 - 03Peak activity
89 comments in 0-6h
Hottest window of the conversation
Step 03 - 04Latest activity
Oct 1, 2025 at 6:55 AM EDT
3 months ago
Step 04
Generating AI Summary...
Analyzing up to 500 comments to identify key contributors and discussion patterns
ID: 45395957Type: storyLast synced: 11/20/2025, 7:40:50 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
If you point out the excessive length, the rhetorical flaws, and the obvious idiomatic tics of AI writing people don't tend to want to hear it.
When authors had to do the work, you'd notice your article approaching 1900 words and feel the natural need to cut redundant platitudes like this:
> The postmark-mcp backdoor isn't just about one malicious developer or 1,500 weekly compromised installations. It's a warning shot about the MCP ecosystem itself.
An AI feels no such need, and will happily drag their readers through a tiresome circuitous journey.
The "this isn't just a _____, it's a (adjective of magnitude) _____, _____, and ______" form is one of the most egregious I see everywhere.
// Where did the machines learn this? LinkedIn influencers?
Whether this statement does hold or not depends a lot on your personal worldview:
- How do you define "malicious"?
- Is Microsoft a malicious [in the sense of your previous answer] actor (or not)?
- What is the result of your risk assessment that Microsoft will become a malicious in the future?
The chance that they become a hostile actor to my business is effectively zero. Certainly among the lowest chances of any email provider.
I guess the same holds for this malicious (?) single developer.
Malicious to me is intent. Microsoft does not store my emails to snoop or to potentially steal my assets. It Is a side effect of the systems they have created to ease user friction.
Some might argue that they want my data or behaviour (which is snooping) but exactly what has been said, my subscription fee is the value they extract from me and their enterprise value is the stickiness and the experience they provide.
To be clear, I am not a Microsoft fan, but I think it is safe to assume that Microsoft would not scrape my crypto wallets or bank account information to steal the entirety of my liquid assets. I can’t say the same for actors that plunk in a rogue email address to BCC themselves.
I have no idea how far the crew at Microsoft or any other large tech giant is willing to go into the grey area, but I can tell you they won’t attempt to drain my bank account without providing SOME kind of value to me in return.
Microsoft sees and treats their end users simultaneously as adversaries, as incompetent children, and as data cows to be milked without genuine informed consent for Microsoft's own profit, not as customers deserving of respect, dignity, and autonomy.
I am so shocked at the amount of people that think someone who wants to siphon your livelihood in a parasitic fashion is equivalent to a corporation that you have to conceivably opt into using. Users can make choices in the products they use. This person injected themselves as a man in the middle in user’s lives. Completely different circumstances and not at all the same intention.
And i consider myself a lazy person. Using 3rd party libraries are just more of a headache and time sink sometimes
Still vulnerable to prompt injection of course, but I don't connect LMs to my main browser profile, email, or cloud accounts either. Nothing sensitive.
its the other way around, codex started with TS then rewrite it to rust
Some people do this without thinking much about it. Not all of us. This is not normal nor ok.
Predicting this kind of attack was easy. Many of us probably did. (I did.) This doesn't make me feel much better though, since (a) I don't relish when lazy or ignorant people get pwned; (b) there are downstream effects on uninvolved people; and (c) there are classes of attacks that are not obvious to you or me.
Stay suspicious, stay safe. There are sharks in the water. With frikin' laser beams on their heads too.
I’m exaggerating to make a point here. If one builds a threat model, one can better allocate one’s attention to the riskiest components.
All of us operate in an uncertain world. Flattening this into “it is all a mess” isn’t useful.
Saying “machines are dangerous” or “we all die sometimes” isn’t fitting after Randall is maimed and pulverized from a preventable industrial accident where the conveyer belt dragged him into a box forming machine. Randall should not wear long sleeves. Randall should not have disabled the “screaming means something has gone wrong” sensors. Randall should not run the system at 5X speed while smoking meth.
That said, installing any package is a liability, whether it's a library or an mcp server.
I keep seeing this pattern in articles: "Did you know that if you point the gun at your foot and pull the trigger, yOu ShOoT yOuRsElF iN tHe FoOt??!? I couldn't believe it myself!! What a discovery!!1!"
Are people really this oblivious or are these articles written about non-issues just to have written 'content'?
The answer to you gun analogy is false because it assumes basic knowledge of a gun. This is part of why so many kids shoot themselves or family members with guns - because they don’t know if you pull the trigger something violent will happen until they are taught it.
That is what astounds me. How one can come into possession of a gun completely without understanding that it is dangerous. How fundamentally the worlds I and they live in must be for that to happen.
Oh, now that I write it out like that, I've definitely been on the other side of that astoundment before, for lacking 'common sense'. Ain't that just the way.
The fact that we all can is a professional deformation, it is not normal. I lived in a place where the doors had no locks. Nobody could imagine anybody stealing from someone else's house. And so it just didn't happen. When buying the house there is the moment where the keys are transferred. The sellers somewhat sheepishly announced they didn't have any. The lawyer handling the transaction asked if they had lost their keys and if they did that they should pay for replacing the locks. Then it turned out they never had any locks in the first place and that this was pretty much the norm there.
That distrust that we call common sense is where we go wrong, the fact that we've connected the whole world means that these little assholes now have access to everything and everybody and there isn't even a good way to figure out who they are and how to punish them unless they are inept at hiding their traces.
Articles like this are intended to serve the latter group of people.
And it’s true, AI agents with MCP servers are pretty much unsafe by design, because security was never considered from the start. Until that changes, if it ever even does, the best thing to do is to inform.
But that is what (almost) all of us do.
There is debate about this the rust world, where there are mitigations that very few even aware of
Mostly rusticans block their ears, close their eyes, and pretend everything will be just peachy
Until some ordinary developer develops a crypto wallet in Rust, honestly, that steals money for a third party this will not be addressed. Even then...
This is a big problem and we need to make everybody aware that they can protect themselves, and make them Liable for not taking those steps, before that happens
But I agree that the out-of-the-box settings really make you wonder how we are not indeed hacked all day every day.
MCP is just JSON RPC API dialect. It is not "safe" or "unsafe" by design - it's a layer where notion of safety (as talked about in the article) is not applicable. Saying that "MCP is unsafe" is not a meaningful statement in the scope of what MCP is. Nothing about any RPC (by itself) can guarantee that the remote system would do or not do something when some method is invoked.
Unless someone figures out a way to make end-user comprehensible language for formal software verification, so there could be an accompanying spec that describes the behavior to the dot, and technology that validates the implementation against the spec.
Right now the only spec is the actual codebase, and most users aren't typically reviewing that at all.
And with MCP, that purpose is fundamentally unsafe. It cannot be done safely without a different underlying technology. So yeah, it's pretty fair to say MCP is unsafe by design. It wouldn't exist except for that desire to create systems which cannot be secure.
To me, postmark-mcp is not a part of MCP, it’s a black box that talks MCP on one end. And its behavior is not an MCP but software trust and distribution issue, not specific to MCP (just like running any executables from random sources). I guess others may see differently.
That's exactly the point the GP was making: this is a fundamentally unsafe idea. It's impossible to allow an LLM to automatically run tools in a safe way. So yes, MCP as means to enable this fundamentally unsafe use case is fundamentally unsafe.
MCP safety is stuff like access controls, or lack of vulnerabilities on the protocol level (stuff like XML entity bombs). Software behavior of what MCP bridges together is not MCP anymore.
People discovering and running malware believing it’s legit is not a MCP problem. This line of thought is based on that “P” stands for “protocol” - MCP is interface, not implementation. And it’s humans who pick and connect programs (LLMs and MCP servers), not technology.
That, coupled with dynamic invocation driven by the LLM engine, seems like a security problem at the protocol level.
-----
Contrast to connecting an agent to a service that is defined by a spec. In this hypothetical world, I can tell my agent: "here's the spec and here's the endpoint that implements it". And then my agent will theoretically call only the methods in the spec. The endpoint may change, my spec won't.
I still need to trust the endpoint; that it's not compromised. Maybe I'm seeing a difference where there is none.
Having an MCP necessarily does not mean it is unsafe until and unless the company rushed execution and released something which it should not have in the first place.
A 'perfectly secured' MCP server will become insecure simply by existing along neighbouring, insecure, tools. With that, I think it's safe to take a broader, less technical definition and say that you should use AI agents + MCP servers with caution.
AI amplifies the problem.
Before AI, the idiot who accidentally has too much access probably doesn't have the ability to actively exploit it.
Given how much AI is being shoved down everybody's throats, an idiot with access now is an attack vector because they are have neither the ability nor desire to vet anything the AI is feeding to them.
yes. Previously that idiot had to write code. Now it's the LLM that is directing invocation of these functions. And the LLM is a coding prodigy with the judgment of a 3 year old kid.
While also not using them yourself/actively trying to find and strip them out of workflows you have control over.
There's layers here, of course:
1. The founder shot themselves in the foot by not understanding an AI tool couldn't be trusted, so clearly they really were that oblivious.
2. ...the founder had direct production access hooked up to their regular coding environment. They were always going to shoot themselves in the foot eventually.
That wasn’t a company. It was a vibe coding experiment. It didn’t have any actual customers.
It did delete the database marked “production” though. If it had been deployed it would have been a problem. It was just an experiment, though.
https://www.codeintegrity.ai/blog/notion
Last I checked, basic SQL injection attacks were still causing massive economic damage every year and are at or near the top of the OWASP list. SQL injection is essentially the result of unintentionally giving god-mode permission to a database by failing to understand how queries are built and processed. The... agency available to AI agents might not be all that obvious either.
This is also not an AI issue, or even an MCP issue. If the same issue had been in a client library for the Postmark API, it would likely have had a bigger impact.
What we need is to make it much more likely to get caught and go to prison for stuff like this. That will change things.
Yes, and attacks on AI are much the same. The AI gets "prompted" by something that was supposed to be inert processable data. (Or its basic conduct guidelines are overridden because the system doesn't and can't distinguish between the "system prompt" and "user prompt".)
Yes MCP has next to no security features, but then again is it even a year old at this point?
Not excusing it just pointing out something folks should me mindful of when using tool based on it, its an immature system.
And heck, I still remember a time when most of the internet traffic just flew around in plain text. Insanity to us now.
Yes, yes they are. The vector of "I've been using this package and giving it full permissions for like forever and it has never been a problem." oblivious. One must understand the two-step here:
Step 1) Use someone else's package that solves your problem and doesn't have any issues. Yay you're doing DRY and saving time and effort!
Step 2) The package you got is now corrupted but you don't have any tools to checking to see if you're application is doing anything sus.
The alternative is that you audit every release and look at every change on every file. So suddenly your DRY because a weekly/monthly audit exercise for every single imported package that changes. Use three packages, that's three audit schedules. Use ten? That's ten audit schedules. It's stupid because if you actually used this alternative you'd be spending all of your time just auditing packages and 90+% of the time not finding anything to complain about.
So the ACTUAL alternative, is write your own damn code. You wrote it so you know how it works, and when you change it you know what you changed. And unless you are the bad guy, exploits don't "magically appear" in your code. Vulnerabilities will appear in your code if you don't do code reviews, but remember actualizing this stuff requires both knowing about the bug and exploiting it in the wild, you may end up coding in something that could be exploited but you are unlikely then to use your own vulnerability.
and Step 2.... ? yeah, we know.
"Write your own code" won't scale. Anyway that horse is out of the barn. We need better SBOM tools.
Yes. People definitely blame victims all the time. There's no way LLMs, their proprietors, designers could be practicing dark patterns, marketing gimmicks and security shortcuts to convince people to shot these guns as often as possible...even when, they point them at their foot.
This grift is deep and wide and you may want to re-evaluate why everyone is so keen to always blame victims for technology, society, and business marketing.
There are hundreds of accidental gun deaths every year in the US. So yes, people do need to be told to not point guns at their feet (and other body parts) and pull the trigger.
https://ammo.com/articles/accidental-shooting-statistics
Just the other day I saw a video of a guy pulling a trigger on a gun in another person’s hand with the express goal of inuring them with the knock back. Another of a guy accidentally shooting a hole through his own ceiling. And a third one of a guy (there were at least three people present) shooting a rifle into a shallow river; they were lucky the barrel exploded like a looney tunes cartoon.
And we’re talking guns, which are designed to be dangerous. Software is much more nebulous and people have been conditioned for years to just download whatever and install it.
There was a time when people thought letting their children use the internet as a babysitter was healthier than parking them in front of the television... guess that turned out another way.
Today, 10,000 Americans discovered that water is wet for the first time. Same thing tomorrow.
It’s an AI, it must be perfect! /s
IMO this might be due to a few things like these:
1. Folks are trained on buffer overflows and SQL injections, they don't even question it, these are "bad". But an MCP interface to an API with god-like access to all data? The MCP of course will make sure its safe for them! (of course, it does not). It's learning and social issue rather than a technical issue. Sometimes, it makes me feel like we're all LLMs.
2. 300 things to fix, 100 vendors, and it needs to be done yesterday. As soon as you look into the vendor implementation you find 10 issues, because just like you, they have 300 things to fix and it needs to be done yesterday, so trade-offs become more dangerous. And who's going to go after you if you do not look into it too hard? No one right now.
3. Complete lack of oversight (and sometimes understanding). If you're just playing around by prompting an LLM you do not know what its doing. When it works, you ship. We've all seen it by now. Personally, I think this one could be improved by having a visual scheduler of tasks.
And that if that happens ‘smart’ people will tell you that it was really dumb to do that!!?!
Sure, I agree, and the problem is absolutely magnified by AI. If a back door gets into Thunderbird, or Google decides to start scanning and sharing all of your email, that’s one point of failure.
An MCP may connect to any number of systems that require a level of trust, and if any one thing abuses that trust it puts the entire system at risk. Now you’re potentially leaking email, server keys, recovery codes, private documents, personal photos, encrypted chats - whatever you give your AI access to becomes available to a single rogue actor.
Not really true. They have skin in the game. They have legitimate revenue at stake. If they betray trust on such a scale, and we find out, they'll be out of business.
How this app is legal and not marked as malware is beyond me! It's the biggest information heists in history!
https://www.heise.de/en/news/Microsoft-lays-hands-on-login-d... https://cybernews.com/privacy/new-outlook-copies-user-emails...
> If they betray trust on such a scale, and we find out, they'll be out of business.
The decision makers don't care. They could eat children and they still would buy from them.
https://www.wiz.io/blog/storm-0558-compromised-microsoft-key...
In my experience, it's hands down the worst e-mail client I've ever used. I only have it on my work PC because my employer uses Office 365. It never even crossed my mind to try to use it for my personal e-mailing needs.
I do agree, however, that companies that decide to trust MS don't care one bit about their scandalous practices. I don't even think it's as much of an actual choice as a cop-out, as in "everybody uses microsoft", so they rarely actually ponder the decision.
> "everybody uses microsoft", so they rarely actually ponder the decision.
Exactly. That is my main argument against PantaloonFlames's claim "They have legitimate revenue at stake. If they betray trust on such a scale, and we find out, they'll be out of business."
At a certain scale nothing matters anymore! You can Bluescreen half the planet and still be in business.
Provably false. Google has been found to lie about data collection for years and they still have more money than ever.
The https://en.wikipedia.org/wiki/XZ_Utils_backdoor bears mentioning here.
Giving a lift to a drunk stranger you just met is also a bad idea. Not a criticism—what you’re doing is positive—but it’s also a risk for you.
I don’t get the argument. Had this been a backdoor in a Thunderbird extension, would it not have been worth reporting? Of course it would. The value of this report is first and foremost that it found a backdoor. That it is on an MCP server is secondary, but it’s still relevant to mention it for being the first, so that people who don’t believe or don’t understand these systems can be compromised (those people exist) can update their mental model and be more vigilant.
For all we know it could have been a research to figure out on how easy is it to introduce a change like that and how much time such a redirection can be gone unnoticed.
It is just a piece of code in a random library available to download and used by persons actually willing and deciding to use it. The code was free to read before and after downloading it. No intrusion has been done on a remote machine, not even a phishing email have been sent.
However jurisdiction and lack of funding for cybercrime policing is the main reason criminals don’t get caught .
Many cybercriminals operate in countries that do not cooperate, extradite and may even have tacit state approval .
Only the largest police departments like NYPD and few federal agencies like FBI have some cybercrime investigations capability and very little of that is for investigating crimes against individuals rather than institutional victims.
It is not an unsound approach when resources are limited you would want to prioritize institutions as that would protect or serve more individuals indirectly .
However the result is that you are far more likely get policing support when someone robs your house physically rather than your identity or assets online .
> 1,500 downloads every single week
> Being conservative, maybe 20% are actively in use
> That's about 300 organizations
> Each one probably sending what, 10-50 emails daily?
> We're talking about 3,000 to 15,000 emails EVERY DAY flowing straight to giftshop.club
Those figures seems crazy to me.
They assert that behind a single download from NPM is a unique organization.
That's insane.
A download from NPM is just someone (most often something) doing _npm i_.
Given how most CIs are (badly) configured in the wild, they'll _npm i_ at least once per run. If not per stage.
So those 1,500 downloads per week can come from just 2 organizations, one with a dev POCing the tool, and one with a poorly configured CI.
And the official repo has 1 watch 0 fork and 2 stars: https://github.com/ActiveCampaign/postmark-mcp
Sure the issue raised around MCP and supply chain is big, but the actual impact of this one is probably close to 0.
> Given how most CIs are (badly) configured in the wild, they'll _npm i_ at least once per run. If not per stage.
Indeed. By the same calculus, it should take less than a year for everyone on the planet (including children and the elderly and a whole lot of people who might not have computers, let alone any idea what Python is) to get a personal copy of many of the most popular Python packages (https://pypistats.org/top).
First, no one is ever punished for having security breaches: companies outsource security specifically to avoid responsibility (using contract law to transfer risk). Second, the MBA mentality has infected software development such that first to market and feature velocity trumps all: if I can download a package or have an LLM write my code so much the faster than me writing it.
Security is fucked because shareholders want it that way. Change the incentives to make security matter if you want something different.
> Readers are likely to ignore your prose or dismiss it as an uneducated rant when you make overgeneralizations. Learn how to avoid this error.
Writing Commons > Argumentation > Overgeneralization
https://writingcommons.org/section/genre/argument-argumentat...
This has been the modus operandi since windows xp days where we in all innocence installed random cd-ripping software and bonzi buddies, with full access to the rest of the computer.
It’s hard to argue against convenience. People will always do what’s easy even if less secure. The bigger lesson is why we still haven’t learned to sandbox sandbox sandbox. Here it seems like AI just did a full factory reset on every best practice know to man.
Because nobody has figured out how to make sandboxing easy, apparently.
N.B. at this level, "easy" has to include "provided by default with the operating system".
It's definitely not what we are worried about with MCP.
This is not about how stupid MCP is, it's about how stupid people can be. And anyone mucking about with agentic workflows, through contractors or not, should be responsible for the code their machines run. Period.
I'd rather just read whatever the prompt was. In the current state it's an insult to the user and a waste of time.
"Well, here's the thing not enough people talk about: we're giving these tools god-mode permissions. Tools built by people we've never met. People we have zero way to vet. And our AI assistants? We just... trust them. Completely.
[...]
On paper, this package looked perfect. The developer? Software engineer from Paris, using his real name, GitHub profile packed with legitimate projects. This wasn't some shady anonymous account with an anime avatar. This was a real person with a real reputation, someone you'd probably grab coffee with at a conference.
[...]
One single line. And boom - every email now has an unwanted passenger.
[...]
Here's the thing - there's a completely legitimate GitHub repo with the same name"
"Here's the thing", dashes - clearly the operator was aware how much of a giveaway em dashes are and substituted them but the way they're used here still feels characteristic - and this pattern where they say something with a question mark? And then elaborate on it. Also just intangibles like the way the sentences were paced. I wouldn't bet my life on it, but it felt too much like slop to pay attention to.
That said, I didn't find this one too bad. I could be wrong but it feels to me like the author had already written this out in their own words and then had the AI do a rewrite from that.
Too obvious for backdoor. Replacing bitcoin addresses in email would be more useful)
By the way, this is not specific to MCP, could have happened to any package.
Assuming good intentions (debugging) rather than malice was at play, communication is key: drop the malicious version of the package, publish a fix, and communicate on public channels (blog post, here on HN, social media) about the incident.
A proper timeline (not that AI slop in the OP article) also helps bring back trust.
Why does npm allow packages to share names? Why does it not warn the user that they probably wanted another package? These are easy-to-solve problems.
As far as I know, npm doesn't actually allow packages to share names if they are both going to be in the public registry.
[0]: https://postmarkapp.com/blog/information-regarding-malicious...
First that you know of. MCP zero-days seems to be so much easier to find and exploit.
> Then version 1.0.16 dropped. Buried on line 231, our risk engine found this gem: A simple line that steals thousands of emails
> One single line. And boom - every email now has an unwanted passenger.
A brand new twist on enshittification.
Shame, the actual topic is interesting
https://www.linkedin.com/posts/eito-miyamura-157305121_we-go...
6 more comments available on Hacker News