We Pwned X, Vercel, Cursor, and Discord Through a Supply-Chain Attack
Key topics
A teenage hacker's $4,000 bug bounty for exposing a supply-chain vulnerability in major companies like Discord and Vercel has sparked a heated debate about the fairness of bug bounty payouts. While some commenters, like FloorEgg, attribute the low payout to supply and demand, others, such as finghin and yieldcrv, argue that the amount is pathetic considering the severity of the vulnerability and the hacker's skill level. The discussion highlights the tension between the value of bug bounty work and the often relatively low financial rewards, with some, like ascorbic, suggesting that the experience and exposure gained can lead to more lucrative opportunities down the line.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 30m after posting
- Peak period: 105 comments in 0-6h
- Avg / period: 14.5 comments
Based on 160 loaded comments
Key moments
- Story posted: Dec 18, 2025 at 2:08 PM EST (15 days ago)
- First comment: Dec 18, 2025 at 2:38 PM EST (30m after posting)
- Peak activity: 105 comments in the 0-6h window (hottest window of the conversation)
- Latest activity: Dec 22, 2025 at 6:08 AM EST (12 days ago)
Pathetic for a senior SE but pretty awesome for a 16-year-old up-and-coming hacker.
That’s a free car. Free computer. Uber eats for months.
And my status with my peers as a hacker would be cemented.
I get that bounty amounts are low vs SE salary, but that’s not at all how my 16yo self would see it.
I agree $4,000 is way too low, but a $400k salary is really high, especially for security work.
So commensurate for approximately 2 days of work, a little high for two hours of work, and a little low for 8 days of work.
It's like a finder's reward elsewhere in life. If you lost your wallet, your immaterial and material loss is quite high, but apart from cash the contents are of way less value to a finder/thief. These types of rewards are meant to manipulate emotions and motivation. Twitter paid these kids each between $1 and $20. That's insulting. As I said elsewhere, bug bounties are PR. And it's bad PR in this case. Black market pricing is the absolute low end for valuation (it's basically the cash value in the wallet example).
I'm twice this kid's age and have been doing this hobby-turned-work as long as they have. I can tell you the work we do is no different. It doesn't matter if you're 16 or 64 or what your credentials are or salary is. We're all hackers. Hacker ethos is judging by skill, not appearance. Welcome to hacker news :P
https://en.wikipedia.org/wiki/Hacker_ethic#The_hacker_ethics item #4
> The submission doesn't say they've even contacted Xitter.
This one doesn't. This one does: https://heartbreak.ing/. Or at least, I presume they meant Twitter when they wrote "some company valued 44 billion".
Sorry that I actually have some experience in the industry
I would think that such a sale makes one inherently not "white hat".
The other three companies mentioned though... yeah, they totally pwned the dependency first and foremost
This kind of widespread XSS in a vulnerable third party component is indeed concerning.
Similar example: there have been two reflected XSS vulns found in Anubis this year, putting any website that uses it and doesn't patch at risk of JS execution on their origin:
https://github.com/TecharoHQ/anubis/security/advisories/GHSA...
https://github.com/TecharoHQ/anubis/security/advisories/GHSA...
The vulnerabilities that command real dollars all have half-lives, and can't be fixed with a single cluster of prod deploys by the victims.
In the end, you are trying to encourage people not to fuck with your shit, instead of playing economic games. Especially a bunch of teenagers who wouldn't even be fully criminally liable for doing something funny.
Could you elaborate on this? I don't fully understand the shorthand here.
What's an example of an existing business process that would make them valuable, just in theory? Why do they not exist for XSS vulns? Why, and in what sense, are they only situational and time-sensitive?
I know you're an expert in this field. I'm not doubting the assertions, just trying to understand them better. If I understand your argument correctly, you're not doubting that the vuln found here could be damaging, only doubting that it could make money for an adversary willing to exploit it?
But for RCE, there's lots of them! RCE vulnerabilities slot into CNE implants, botnets, ransomware rigs, and organized identity theft.
The key thing here is that these businesses already exist. There are already people in the market for the vulnerabilities. If you just imagine a new business driven by XSS vulnerabilities, that doesn't create customers, any more than imagining a new kind of cloud service instantly gets you funded for one.
I wonder what you think of this, re: the disparity between the economics you just laid out and the "companies are such fkn misers!" comments that always arise in these threads on bounty payouts...
I've seen first hand how companies devalue investment in security -- after all, it's an insurance policy whose main beneficiaries are their customers. Sure it's also reputational insurance in theory, but what is that compared with showing more profit this quarter, or using the money for growth if you're a startup, etc. Basically, the economic incentives are to foist the risks onto your customers and gamble that a huge incident won't sink you.
I wonder if that background calculus -- which is broadly accurate, imo -- is what rankles people about the low bounty rewards, especially from companies that could afford more?
Seen through that light, bug bounty programs are engineering services, not a security control. A thing generalist developers definitely don't get about high-end bug bounty programs is that they are more about focusing internal resources than they are about generating any particular set of bugs. They're a way of prioritizing triage and hardening work, driven by external incentives.
The idea that Discord is, like, eliminating their XSS risk by bidding for XSS vulnerabilities from bounty hunters; I mean, just, obviously no, right?
The subtlety here is the difference between people using an exploit (certainly they can) and people who buy exploits for serious money.
An XSS is much harder to exploit quietly (the server can log everything), and can be closed immediately 100% with no long tail. At the push of an update the vulnerability is now worth zero. Someone paying to purchase an XSS is probably intending to use it once (with a large blast radius) and get as much as they can from it in the time until it is closed (hours? maybe days?)
Yes, evidently not.
Just because on average the intelligence agencies or ransomware distributors wouldn't pay big bucks for XSS on Zerodium etc. doesn't mean that's setting the fair, or wise, price for disclosure. Every bug bounty program is mostly PR mitigation. It's bad PR if you underpay for a disclosed vulnerability, which may have ended your business, considering the price of the security audits/practices you cheaped out on.
The lowest tier is $5k. XSS up to $40k.
This is very sad because SVGs often have way smaller file size, and obviously look much better at various scales. If only there was a widely used vector format that does not have any script support and can be easily shared.
The only reliable solution would be an allowlist of safe elements and attributes, but it would quickly cause compat issues unless you spend time curating the rules. I did not find an existing lib doing it at the time, and it was too much effort to maintain it ourselves.
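A minimal sketch of the allowlist idea the comment describes, for illustration only: the tag and attribute lists here are hypothetical stand-ins, nowhere near a production ruleset, and the point is the deny-by-default shape, not the specific names.

```python
# Allowlist-based SVG sanitizer sketch. ALLOWED_TAGS/ALLOWED_ATTRS are
# illustrative placeholders; a real ruleset needs careful curation.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ALLOWED_TAGS = {f"{{{SVG_NS}}}{t}" for t in ("svg", "g", "path", "rect", "circle")}
ALLOWED_ATTRS = {"d", "fill", "stroke", "width", "height", "viewBox", "cx", "cy", "r", "x", "y"}

def sanitize(svg_text: str) -> str:
    root = ET.fromstring(svg_text)

    def clean(elem):
        # Drop children whose tag is not allowlisted (e.g. <script>, <foreignObject>).
        for child in list(elem):
            if child.tag not in ALLOWED_TAGS:
                elem.remove(child)
            else:
                clean(child)
        # Drop non-allowlisted attributes (e.g. onload=, onclick=, href=).
        for attr in list(elem.attrib):
            if attr not in ALLOWED_ATTRS:
                del elem.attrib[attr]

    clean(root)
    return ET.tostring(root, encoding="unicode")
```

Note the compat problem the comment raises: this deny-by-default approach will also strip legitimate-but-unlisted features (gradients, text, filters) until you grow the lists.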
The solution I ended up implementing was having a sandboxed Chromium instance and communicating with it through the dev tools to load the SVG and rasterize it. This allowed uploading SVG files, but it was then served as rasterized PNGs to other users.
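A sketch of that rasterize-then-serve approach under stated assumptions: the binary name (`chromium`) and flag set are assumptions about a typical headless setup, not the commenter's actual dev-tools implementation, and a production deployment would also strip network access and containerize the browser.

```python
# Rasterize an untrusted SVG to PNG via headless Chromium, so only the
# raster output is ever served to other users.
import subprocess

def rasterize_cmd(svg_path: str, png_path: str, size: int = 512) -> list[str]:
    # Headless Chromium can screenshot a local file directly.
    return [
        "chromium",                       # assumed binary name
        "--headless",
        "--disable-gpu",
        f"--window-size={size},{size}",
        f"--screenshot={png_path}",
        f"file://{svg_path}",
    ]

def rasterize(svg_path: str, png_path: str) -> None:
    # Time-box the render so a hostile SVG can't hang the worker.
    subprocess.run(rasterize_cmd(svg_path, png_path), check=True, timeout=30)
```

The trade-off, as the comment notes, is losing the vector benefits entirely: the stored artifact is a fixed-size PNG.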
But you can use an `img` tag (`<img src="evil.svg">`) and that'll basically Just Work, or use a CSP. I wouldn't rely on sanitizing, but I'd still sanitize.
That doesn't help you if evil.svg is hosted on the same domain (with default "Content-Type: image/svg+xml" header).
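When serving user uploads from your own origin is unavoidable, a restrictive CSP on the responses that serve them is a common mitigation. A hedged sketch follows: the header values are standard CSP/HTTP mechanisms, but the helper function itself is illustrative, and note that CSP only matters when the SVG document is navigated to directly (inside an `<img>` tag, scripts never run anyway).

```python
# Defensive response headers for serving untrusted SVG uploads from
# your own origin. Illustrative helper; header values are standard.
def svg_response_headers(force_download: bool = False) -> dict[str, str]:
    headers = {
        "Content-Type": "image/svg+xml",
        # Block script execution even if the file contains <script>/on* handlers,
        # and isolate the document from the serving origin.
        "Content-Security-Policy": "sandbox; script-src 'none'",
        "X-Content-Type-Options": "nosniff",
    }
    if force_download:
        # Forcing download sidesteps same-origin rendering entirely.
        headers["Content-Disposition"] = "attachment"
    return headers
```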
(Yes I'm still salty about Flash.)
I remember my teenage friends creating things with flash in a way that doesn't happen on the modern web.
Honestly I think a lot of the Flash mania is just middle aged nerds fondly remembering their youth. The actual tool was a flash in the pan, and part of a much more complicated history of online content production. And the world is doing just fine without it.
That wasn't the only reason. Flash was also proprietary, and opaque, and single-vendor, among many other problems with it.
Do you allow SVGs to be uploaded anywhere on your site? This is a PSA that you're probably at risk unless you can find the few hundred lines of code doing the sanitization.
Note to Ruby on Rails developers: your Active Storage uploaded SVGs are not sanitized by default.
https://svgo.dev/docs/plugins/removeScripts/
I found this page a helpful summary of ways to prevent SVG XSS: https://digi.ninja/blog/svg_xss.php
Notably, the sanitization option is risky because one sanitizer's definition of "safe" might not actually be "safe" for all clients and usages.
Plus as soon as you start sanitizing data entered by users, you risk accidentally sanitizing out legitimate customer data (Say you are making a DropBox-like fileshare and a customer's workflow relies on embedding scripts in an SVG file to e.g. make interactive self-contained graphics. Maybe not a great idea, but that is for the customer to decide, and a sanitization script would lose user data. Consider for example that GitHub does not sanitize JavaScript out of HTML files in git repositories.)
It’s so regular like clockwork that it has to be a nation state doing this to us.
1: https://owasp.org/www-community/vulnerabilities/XML_External...
Whoever decided it should be enabled by default should be put into some sort of cybersecurity jail.
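Since every XML entity trick (XXE, billion laughs) requires a DOCTYPE declaration, a cheap defense-in-depth check on uploads is to reject any SVG that declares one at all. A sketch, assuming you are already parsing with entity expansion disabled and only want an extra gate in front:

```python
# Defense-in-depth sketch: reject SVG uploads containing a DOCTYPE,
# since XXE and entity-expansion attacks all depend on one.
import re

_DOCTYPE_RE = re.compile(rb"<!DOCTYPE", re.IGNORECASE)

def has_doctype(raw: bytes) -> bool:
    # A DOCTYPE must appear before the root element, so scanning the
    # prolog region is enough.
    return bool(_DOCTYPE_RE.search(raw[:4096]))
```

This rejects a tiny number of legitimate files (some tools still emit an SVG 1.1 DOCTYPE), which is usually an acceptable trade for an upload endpoint.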
In one of my penetration testing training classes, in one of the lessons, we generated a malicious PDF file that would give us a shell when the victim opened it in Adobe.
Granted, it relied on a specific bug in the JavaScript engine of Adobe Reader, so unless they're using a version that's 15 years old, it wouldn't work today, but you can't be too cautious. 0-days can always exist.
There isn't such an implementation for SVG.
PDFs can also contain scripts. Many applications have had issues rendering PDFs.
Don't get me wrong, the folks creating the SVG standard should've used their heads. This is like the 5th time (that I am aware of) this type of issue has happened (and at least 3 of them were Adobe). Allowing executable code in an image/page format shouldn't be a thing.
PDFs at least usually embed the used subset of fonts and contain explicit placement of each glyph. Which is also why editing or parsing text in PDFs is problematic. Although it also has many variations of Standard and countless Adobe exclusive extensions.
The situation with specification is also not great. Just SVG 1.1 defines certain official subsets, but in practice many software pick whatever is more convenient for them.
The SVG 2.0 specification has been in limbo for years, although it seems the relevant working group has recently resumed discussions. Browser vendors are pushing toward synchronizing certain aspects of it with HTML-adjacent standards, which would make fully supporting it outside browsers even more problematic. It's not just polishing little details: many major parts that were in earlier drafts are getting removed, reworked, or put on the backlog.
There are features which are impractical to implement or you don't want to implement outside major web browsers that have proper sandboxing system (and even that's not enough once uploads get involved) like CSS, Javascript, external resource access across different security contexts.
There are multiple different parties involved with different priorities and different threshold for what features are sane to include:
- SVG as a scalable image format for icons and other UI elements in (non-browser-based) GUI frameworks -> anything more complicated than colored shapes/strokes can be problematic
- SVG as document format for Desktop vector graphic editors (mostly Inkscape) -> the users expect feature parity with other software like Adobe Illustrator or Affinity designer
- SVG in Browsers -> get certain parts of SVG features for free by treating it like weird variation of HTML because they already have CSS and Javascript functionality
- SVG as a 2D vector format for CAD and CNC use cases (including vinyl cutters, laser cutters, engravers ...) -> rarely support anything beyond basic shapes and paths
Beside the obviously problematic features like CSS, Javascript and animations, stuff like raster filter effects, clipping, text rendering, and certain resource references are also inconsistently supported.
From Inkscape, unless you explicitly export as plain 1.1-compatible SVG, you will likely get an SVG with some cherry-picked SVG2 features and a bunch of Inkscape-specific annotations. It tries to implement any extra features in a standard-compatible way, so that in theory, if you ignore all the Inkscape-namespaced properties, you would lose some editing functionality but still get the same result. In practice, some SVG renderers can't even do that, and the SVG2 specification not being finalized doesn't help. And if you export as plain 1.1 SVG, some features either lack good backwards-compatibility converters or are implemented as JavaScript, making the files incompatible with anything except browsers, including Inkscape itself.
Just recently GNOME announced work on a new SVG renderer. But everything suggests they plan to implement only the things they need for the icons they draw themselves and the official Adwaita theme, and nothing more.
And that's not even considering the madness of the full XML specification/feature set itself. Certain parts of it are just asking for security problems. At least in recent years some XML parsers have started to ship safer defaults, disabling or not supporting that nonsense. But when you encounter an SVG with such XML, whose fault is it? The SVG renderer for intentionally not enabling insane XML features, or the person who hand-crafted the SVG using them?
That took way too long to be this way. Some old browsers couldn't even get the colors of PNGs correct, let alone the transparency.
I guess the next step is to propose a simple "noscripting" attribute, which if present in the root of the SVG doc inhibits all scripting by conforming renderers. Then the renderer layer at runtime could also take a noscripting option, so the rendering context could force it if appropriate. Surely someone at HN is on this committee, so see what you can do!
Sanitizing is hard to get right by comparison (svgs can reference other svgs) but it's still a good idea.
Firefox has this: svg.disabled in about:config. It doesn't seem to be properly documented, and might cause other problems for the developer (I found it accidentally, and a more deliberate search turns up mainly bug tracker entries.)
I wonder, do people not do this with SVGs?
That SVG can then do things like history.replaceState() and include <foreignObject> with HTML to change the URL shown to the user away from the SVG source and show any web UI it would like.
Anyway, I just set `svg.disabled` in Firefox. Scary world out there.
Didn’t we do this already with Flash? Why would this lesson not have stuck?
270 more comments available on Hacker News