We Ran a Security Bounty for Our Tiny Bootstrap Startup. Here's What Happened
I expected people to try breaking into user data, finding crazy privilege escalations, or discovering ways to dump entire databases. What actually happened was… different.
We got reports about:
- Creative ways to inject JavaScript into HTML through weird embedding edge cases.
- Strange file-upload pitfalls we hadn't thought of.
- Tons of tiny UX bugs filed as "security."
- And the most time-consuming kind: things that worked exactly as designed, but looked like bugs to outsiders.
Example: in our CMS, you can use an "entry pointer" to select related content. A blog editor with no permissions for "products" should still be able to pick products to recommend, but not open, edit, or delete them. To make that UX work, our API returns product titles even if you don't technically have product access. This is by design, and we had to spend hours explaining it again and again to bounty hunters convinced it was a data leak.
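The design above can be sketched roughly like this. This is a minimal illustration, not the startup's real code; the names (`Entry`, `can_read`, `serialize_pointer`) and the permission model are all hypothetical:

```python
# Hypothetical sketch of the "entry pointer" permission model:
# pointer references always expose a title, but full content
# requires explicit read permission on the collection.
from dataclasses import dataclass


@dataclass
class Entry:
    id: str
    collection: str
    title: str
    body: str


def can_read(user_permissions: set[str], collection: str) -> bool:
    """Full read access requires an explicit permission on the collection."""
    return collection in user_permissions


def serialize_pointer(entry: Entry, user_permissions: set[str]) -> dict:
    """Entry pointers always expose the title so editors can pick related
    content, while the body stays hidden without real collection access."""
    data = {"id": entry.id, "title": entry.title}  # title visible by design
    if can_read(user_permissions, entry.collection):
        data["body"] = entry.body  # full content only with permission
    return data


product = Entry(id="p1", collection="products", title="Blue Widget", body="internal copy")
blog_editor_perms = {"posts"}  # note: no "products" permission

print(serialize_pointer(product, blog_editor_perms))  # id and title only, no body
```

Seen in isolation by a bounty hunter, the title in that response looks like a leak; seen with the UX requirement, it is the intended minimum disclosure.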
What we hoped: our existing users would feel encouraged to report issues. What we got: bounty hunters who'd never heard of us, hammering the API. The program didn't really motivate our existing users to report bugs at all. Instead, we suddenly got a wave of people who had literally searched Google for
inurl:security "reward" "cms" bug bounty bug bounty reward $100
and landed on us. No genuine conversations, just transactional hunting. For a few months, 30% of all site visitors came from that one Google query. Our Sentry lit up with random API fuzzing, exceptions everywhere, and our onboarding metrics tanked.
So yes, the bounty helped. We patched dozens of real issues that would have been painful later, paid out ~$5k in total, and several reports led to meaningful fixes. Even with small rewards, we got serious contributions. But the program also pushed us to "fix" things that weren't broken (just to stop the repeat submissions about them), drained time explaining design decisions, and buried real user issues.
We eventually shut it down. For us right now, it’s more valuable to understand how new users onboard, where they get stuck, and what makes them stay.
So, if you're considering a bug bounty program, be ready for bounty hunters to outnumber your real users. If you want signal from your actual users, maybe wait until you can afford the noise.
A bootstrapped startup ran a security bounty program and was surprised by the types of issues reported and the behavior of bounty hunters.

Snapshot of the Hacker News discussion: story posted Sep 8, 2025 at 9:24 AM EDT; first comment 35 minutes later; light discussion overall, with the latest activity on Sep 9, 2025 at 5:17 AM EDT.
Would it have made more sense to separate this testing out to a different instance of your product? That would probably have helped distinguish between real users and bounty hunters.
But yes, I agree even a simple flag (“tester”) on accounts would have gone a long way. At least then we could separate metrics and clean up the noise afterwards without burning so much time. Definitely something I’d do differently if we ever tried again.
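The "tester" flag idea could look something like this. A minimal sketch under assumed conditions: the field name `is_tester`, the referrer heuristic, and the event shapes are all invented for illustration, not taken from the product:

```python
# Hypothetical sketch: flag accounts that arrive via the bounty page or
# bounty-hunting searches, then exclude them from product metrics.

def record_signup(email: str, referrer: str) -> dict:
    """Mark accounts as testers based on where the signup came from."""
    return {
        "email": email,
        # crude heuristic: bounty hunters arrive via the security page
        # or "bug bounty" searches; real users come from elsewhere
        "is_tester": "security" in referrer or "bounty" in referrer,
    }


def onboarding_rate(users: list[dict], activated: set[str]) -> float:
    """Activation rate over real users only, ignoring tester accounts."""
    real = [u for u in users if not u["is_tester"]]
    if not real:
        return 0.0
    return sum(u["email"] in activated for u in real) / len(real)


users = [
    record_signup("alice@example.com", "https://news.ycombinator.com"),
    record_signup("hunter@example.com", "google.com/search?q=bug+bounty+reward"),
]
print(onboarding_rate(users, activated={"alice@example.com"}))  # 1.0
```

With a flag like this in place, the Sentry noise and tanked onboarding metrics described in the article could at least be segmented after the fact instead of polluting every dashboard.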