Hack Club: A story in three acts (a.k.a., the shit sandwich)
Mood: heated
Sentiment: negative
Category: tech
Key topics: Hack Club, data privacy, GDPR, youth programming
The post details a teenager's experiences with Hack Club, highlighting issues with data privacy, security, and organizational practices, sparking a heated discussion about accountability and responsibility in youth-focused tech organizations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 72 comments (Day 1)
Avg / period: 22.5 comments
Based on 90 loaded comments
Key moments
- Story posted: 11/13/2025, 11:31:23 AM (5d ago)
- First comment: 11/13/2025, 12:33:26 PM (1h after posting)
- Peak activity: 72 comments in Day 1 (hottest window of the conversation)
- Latest activity: 11/19/2025, 1:09:28 AM (8h ago)
DEATH handing out swords to kids as Santa in the Hogfather is a funny joke, not an example to follow.
It still renders smoothly though and doesn't go above 40C so I guess it could have been worse.
Oh wait, it's because it is too old to have WebGL support so the background crashed and thus consumed no processing power.
May I suggest you use reader mode to remove the annoying flashing background? If you can get past the annoying UX of the article, it has interesting stories about serious issues.
> i sent formal breach notifications to security@hackclub.com and gdpr@hackclub.com on july 9th. radio silence. nothing. not even an automated "we've received your email" response.
> when i tried talking to HQ staff informally, the responses were... well, shocking doesn't quite cover it. the first intern told me that since hack club is US-based, they're "not held to GDPR," that if fined "nothing compels us to pay it," and that EU people "void your EU protections" by coming to the US.
What? How did we get from (allegedly) informing them about a security vulnerability to them responding "nothing compels us to pay it"? It feels like the author is not being quite as candid in their account of events as one would hope.
If they had instead framed it as "hey, you guys are sharing stuff you probably didn't mean to," the reaction would likely have been different.
From the post:
> then i found this one:
> https://juice.hackclub.com/api/get-roommate-data?email=dont@...
> yep. no auth. just an email parameter. and what did it return?
> full names. emails. phone numbers. flight receipts. all just by passing an email address in a URL.
> i reported it through their security bounty program, made a bug fix pr (because apparently that's how you get things done around here), and maybe made the slight mistake of sharing the vulnerable endpoint in that group chat - which less than 10 people saw, for what that's worth.
The author then proceeds:
> their security bounty program states minimum payouts for this kind of thing start around $150. but exposing passport numbers (which are classed as government documents) should bump it up significantly. apparently "responsible disclosure" means "don't tell anyone, even in a private chat" so they docked the entire payout.
I'm not sure why they're being sarcastic about responsible disclosure. Yes, responsible disclosure absolutely means that you disclose to the vendor before disclosing to anyone else. As someone who works as a penetration tester and security researcher (both at work and in my free time), I see no room for confusion about what responsible disclosure is. Disclosing the vulnerability in public before the vendor has had a chance to fix it, or apparently even triage it, is not "responsible disclosure" or a "slight mistake".
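For readers less familiar with this class of bug, here is a minimal hypothetical sketch, not Hack Club's actual code, of the pattern the quoted endpoint describes: a record lookup keyed only on an attacker-supplied `email` parameter, contrasted with a handler that gates the same lookup on the caller's authenticated session. The route, data-layer, and session-helper names are all invented for illustration.

```typescript
// Hypothetical sketch of the bug class discussed above; not Hack Club's code.
import express from "express";

const app = express();

// Stand-in data layer: a real service would query a database here.
const db = {
  async findRoommateByEmail(email: string) {
    return { email, fullName: "...", phone: "...", flightReceipt: "..." };
  },
};

// Stand-in session lookup: a real service would verify a signed cookie or token.
async function getAuthenticatedUser(req: express.Request): Promise<{ email: string } | null> {
  return null;
}

// Vulnerable pattern: the only "key" is a caller-supplied query parameter,
// so anyone who knows (or guesses) an email address can pull that person's record.
app.get("/api/get-roommate-data", async (req, res) => {
  const email = String(req.query.email ?? "");
  res.json(await db.findRoommateByEmail(email));
});

// Safer pattern: identity comes from the authenticated session, and the handler
// only ever returns the caller's own record.
app.get("/api/my-roommate-data", async (req, res) => {
  const user = await getAuthenticatedUser(req);
  if (!user) return res.status(401).json({ error: "not signed in" });
  res.json(await db.findRoommateByEmail(user.email));
});

app.listen(3000);
```

The exact fix in the author's PR isn't shown in the post; the general point is simply that the lookup has to be gated on a verified identity or an explicit authorization check, not on knowledge of an email address.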
Use reader mode, block JavaScript, or whatever it takes. Give the author a break. They're a teenager. What kind of websites were you making as a teenager? I'm sure it was one of those dark-background websites with MARQUEEs and BLINKs and glaring contrast colors! So give them a break. Behind the annoying UX is an article about serious and appalling privacy and security issues.
Like read this:
> i raised this with chris, who's a full-time staff member (not a teenager), and he insisted that exposing physical addresses and sensitive info was "just a vuln" not a breach. said he's "never heard the term 'data breach' used that way" and... also relied on chatgpt instead of actual legal advice.
Actually, this Chris guy has a point. I wouldn't call it a breach either. It's PII exposure, but it is a serious exposure. So I don't 100% agree with the OP, but the cavalier attitude towards security coming from the staff of a legitimate organization is appalling.
It's just mind-boggling that an organization handling PII has such appalling privacy and security lapses, yet remains arrogantly indignant about it, making bold claims about laws it doesn't understand. Why? Because ChatGPT told them so? The cherry on top is that they are employing teenagers to answer legal questions! Not kidding! Just read the OP! Unbelievable!
At least California defines it as
> unencrypted personal information, as defined, was acquired, or reasonably believed to have been acquired, by an unauthorized person.
Nobody—certainly not any adult staff—at Hack Club relied on ChatGPT for legal advice. Nor do we employ teenagers to answer legal questions; we have actual legal counsel for that! Or in my personal case I ask my wife, who is a law professor, and then she asks ChatGPT (just kidding).
There is too much nonsense in this post to rebut line by line, and these conversations have all been had to death within Hack Club (we put a lot of time into transparently and publicly discussing our programs, problems, and decisions). Here's the short version of this saga:
- The author found a serious vuln in one of our programs introduced by a junior engineer
- We take vulns seriously—especially the serious ones! It was fixed immediately by a senior engineer upon report (within a day?)
- The author insisted that their test of the vuln to access their own address was a data breach, therefore obligating us to notify all 5,000 participants of this "breach" as per GDPR
- We judged this to be prima facie incorrect. A lawyer has since confirmed this judgment.
- It is, in fact, bad practice to notify users for every vulnerability. If this were the norm, you would be inundated with notices from practically every software product you interact with. Almost all of these notices would be virtually non-actionable by the user, and they would wash out the few notices of breaches which are actionable. There is a good reason why the GDPR does not demand notice for vulns; mass notices are reserved for incidents where there is a known exfiltration of a meaningful amount of user data!
- The author was ultimately banned from the community not for their opinions on this matter, but because of a long streak of unrelated conduct issues that culminated in a spree of saying horribly abusive things to multiple other members of the community.
- They have been pursuing a grudge against the organization ever since. They are not a reliable narrator; this post is a fantasy version of events that casts them as a martyred hero.
Hack Club is an oddly-shaped organization with operations that often raise very real security concerns, but these are wrapped up in a complex web of tradeoffs that are very much still evolving as we refine and expand our core infrastructure. We are not Google, and it is a mistake to import reasoning from that kind of environment when analyzing our security/threat model. Nonetheless, privacy/security is something we think about and invest extensively in. In the past year we have started an organization-wide bounty system, moved all PII storage into a central "identity vault", and consulted extensively with a very fancy lawyer who specializes in corporate compliance with the growing raft of online privacy laws around the world. The good news is, according to that lawyer we already do almost everything we need to be compliant; we just need to publish a privacy policy! We are actively iterating on a mostly-finished draft of this document with our counsel, but it is taking time because, well, this stuff is very complicated. We serve or have served teenagers in almost every country, and GDPR is just the most prominent of many laws that are now on the books worldwide.
> - The author was ultimately banned from the community not for their opinions on this matter, but because of a long streak of unrelated conduct issues that culminated in a spree of saying horribly abusive things to multiple other members of the community.
> - They have been pursuing a grudge against the organization ever since. They are not a reliable narrator; this post is a fantasy version of events that casts them as a martyred hero.
Someone who has been acting maliciously against your organization accessed that data. And you think it's fine? They're a teenager. An angry teenager, who is acting out. You honestly believe you can trust they didn't distribute this data or tell anyone else about the problem before you found out about it?
When I was a teenager, someone in my year level gained access to a lot of personal data about a bunch of people in our year level. This was a smart individual who at least somewhat understood the gravity of the situation. But they were also a kid, of course they distributed some of the data — bragging rights and what not.
What about the section titled "the surveillance infrastructure (orpheus engine)" where the teenager claims children's data was intentionally being sent out to third parties, specifically to profile kids? What's that all about?
Look, no one read this article and thought, "Wow, this is a well-written article by a super mature, well-adjusted individual. I'm taking this as gospel." The article is clearly written by an angry teenager. I feel far more invested in this now that I've seen your responses. The way you're handling this, and yourself, is just downright absurd. Stop.
We patched the vulnerability, quickly. We addressed it with the engineer and made clear that this is no joke. We have extensive refactoring happening within our infrastructure to move to a model where this information is handled as much as possible through secure, audited, centralized systems. Is there something else we should be doing?
The crux of the question here was about whether GDPR obligates us to email all 5,000 people signed up for this program about this vulnerability. The two lawyers we have consulted on this have both said no. One of them specifically specializes in privacy compliance. It's not a complicated legal question, the answer is just no.
> The crux of the question here was about whether GDPR obligates us to email all 5,000 people signed up for this program about this vulnerability.
You are just not going to be able to control the narrative like this. Trying to tell someone else what the "crux of the issue is" will not allow you to shift the goal posts. The article described a pattern of issues, and in my previous comment I specifically raised one. No determined individual is going to just leave that thread dangling for you.
> Is there something else we should be doing?
Yes. Obviously. That's the point.
> The crux of the question here was about whether GDPR obligates us to email all 5,000 people signed up for this program about this vulnerability. The two lawyers we have consulted on this have both said no. One of them specifically specializes in privacy compliance.
It's not a great look for the leader of a children's organization to so blatantly flaunt that they lack a moral compass. You're currently interacting with the public, not the legal system. Sure, whether or not you're legally required to inform your kids is relevant. However, the law is quite literally the bare minimum of what you're obligated to do.
No one is reading this thinking, "Oh great, they've done the bare minimum legally required of them." They're thinking, "Wait. Companies notify people of breaches all the time. You apologise, and explain what you're doing to rectify the situation. What have they got to hide? Are they worried they'll get an influx of outrage because this lack of care was something people in the community were already concerned about?" With the context given by the odd parent in this thread, it certainly comes across as the latter.
> It's not a complicated legal question, the answer is just no.
This detracts so much credibility from your communication. There is no lawyer on Earth that will describe this as "not a complicated legal question". No adult that's ever had any communication with a lawyer is going to believe this for a second. Lawyers are notorious for their non-committal attitude toward providing legal advice. Nothing is black and white — it's all grey. So this comes across as:
a. You've never interacted with a lawyer in your life, or
b. You're telling porkies, or at the very least are way too flippant with hyperbole.
I'm not the leader of anything, that would be Zach Latta. He's a much better diplomat than I am, but I am doing my honest best to speak plainly and matter-of-factly to you about a complex situation that frankly requires a lot more context to properly understand than I think is possible to acquire from the information you have.
I'm also not trying to absolve our organization of all sins. We mess up all the time. We are working on many fronts to learn from these experiences and make imperfect systems a little better every day. We make mistakes, we apologize, we do our best to make amends, then we move on to the next mistake. It is the nature of doing new, hard things with real stakes.
> You're currently interacting with the public, not the legal system. Sure, whether or not you're legally required to inform your kids is relevant. However, the law is quite literally the bare minimum of what you're obligated to do.
> No one is reading this thinking, "Oh great, they've done the bare minimum legally required of them." They're thinking, "Wait. Companies notify people of breaches all the time.
This is addressed in the top comment I left. Notifying 5k people about a patched vuln is not "more than the minimum", it's legitimately bad practice. That is not my opinion, it is industry standard practice! Absent any reason to believe there has been a data breach, absent any sort of actionable information, we are not going to send an email to thousands of people.
I call the GDPR question the crux because probably 80% of the thousands of Slack messages sent on this topic were about that question. That was the impasse. Staff considered the issue and concluded that, from a moral, legal, and industry-standard-practice perspective, notifying every user was not the correct decision. Nothing was being hidden; the team logged and discussed the vulnerability publicly within the community from the start. They fixed, disclosed, discussed, learned, and moved on.
> This detracts so much credibility from your communication. There is no lawyer on Earth that will describe this as "not a complicated legal question". No adult that's ever had any communication with a lawyer is going to believe this for a second. Lawyers are notorious for their non-committal attitude toward providing legal advice. Nothing is black and white — it's all grey. So this comes across as:
> a. You've never interacted with a lawyer in your life, or
> b. You're telling porkies, or at the very least are way too flippant with hyperbole.
I am married to a law professor, with whom I lived through three years at Yale Law and three years of PhD/fellowship; I have about as much exposure to law as you can get without it actually being your job. I assure you, uncomplicated legal questions exist.
> - We take vulns seriously—especially the serious ones! It was fixed immediately by a senior engineer upon report (within a day?)
What? From the many, many #meta posts and other sources I cannot back this up.
> - The author was ultimately banned from the community not for their opinions on this matter, but because of a long streak of unrelated conduct issues that culminated in a spree of saying horribly abusive things to multiple other members of the community.
OP did say some bad stuff, but it wasn't a spree and was an isolated incident. I don't agree with her actions, but I see where she was coming from: she didn't feel heard and just wanted to get back at people she saw as having wronged her. She definitely shouldn't have done what she did but it was an isolated incident or two.
> - They have been pursuing a grudge against the organization ever since. They are not a reliable narrator; this post is a fantasy version of events that casts them as a martyred hero.
You'll note that in the article that isn't what she portrays herself as, and she explicitly bookends the article with paragraphs praising the mission and all of the good hackclub has done. Which is it: is she rightly praising the organization and rightly getting angry at it, or is she wrongly praising it and wrongly getting angry?
> Nonetheless, privacy/security is something we think about and invest extensively in.
Based on HQ's HCB, #meta, posts in #hq, and more this is not true in the slightest.
> In the past year we have started an organization-wide bounty system, moved all PII storage into a central "identity vault"
Bounties were addressed in the article, and the last I heard, PII is still massively distributed. If that isn't the case anymore, please actually make a post about it so the community is aware?
> consulted extensively with a very fancy lawyer who specializes in corporate compliance with the growing raft of online privacy laws around the world
That's good but again, make an announcement in hackclub?
> The good news is, according to that lawyer we already do almost everything we need to be compliant; we just need to publish a privacy policy!
The fuck?? No?? If this has happened in the last year, how angry is your lawyer about the numerous vulnerabilities that were pushed, not notified, the underpaid bounties, and more? Oh, and don't forget you TAKING DOWN THE GDPR EMAIL AND NOT DELETING DATA??
> We are actively iterating on a mostly-finished draft of this document with our counsel, but it is taking time because, well, this stuff is very complicated.
I can definitely understand that. I really love hackclub and think the mission is amazing but at the moment I don't feel safe with my data in its hands.
> OP did say some bad stuff, but it wasn't a spree and was an isolated incident. I don't agree with her actions, but I see where she was coming from: she didn't feel heard and just wanted to get back at people she saw as having wronged her. She definitely shouldn't have done what she did but it was an isolated incident or two.
If I remember correctly, she admitted that her ban was justified. But also, she didn't just do "some bad stuff", she did a lot of it - there's even a recent #meta thread referencing this exact post.
> You'll note that in the article that isn't what she portrays herself as, and she explicitly bookends the article with paragraphs praising the mission and all of the good hackclub has done. Which is it: is she rightly praising the organization and rightly getting angry at it, or is she wrongly praising it and wrongly getting angry?
Nuance does exist.
> That's good but again, make an announcement in hackclub?
Zach did.
> The fuck?? No?? If this has happened in the last year, how angry is your lawyer about the numerous vulnerabilities that were pushed, not notified, the underpaid bounties, and more? Oh, and don't forget you TAKING DOWN THE GDPR EMAIL AND NOT DELETING DATA??
I'm more inclined to trust Chris than an anon account that straight up denies that internal conversations happened. You also seem to be regurgitating posts from earlier without seeing Chris's context.
I think I've read through the #meta post you're referencing (and commented in it), but it still wasn't a spree, and it wasn't a lot. Cite your sources as well.
> > That's good but again, make an announcement in hackclub?
> Zach did.
Where?
> I'm more inclined to trust Chris than an anon account that straight up denies that internal conversations happened. You also seem to be regurgitating posts from earlier without seeing Chris's context.
Well yeah, I'm on a throw away as I don't want to be deanon'd. If you really want to talk contact https://hackclub.slack.com/team/U09Q734PGUU, it's an alt I have. Where did I deny internal conversations as well? And wdym regurgitating posts without Chris' context? I literally broke his reply down point-by-point?
Headline really buries the lede: this is the issue, not some missing ToS boilerplate.
The map is not the territory, the security policy is not the security.
- This person has also used their access to attempt to extort the admins and their Airtable data, demanding a bounty payment for access they were previously given.
- In her arguments about the program leads earning higher bounties, they had said that they both did bounties for Coinbase and Google, neither of which is a non-profit.
- Many of her arguments are flawed in other ways.
Theo (yes the ffmpeg guy) also commented on it in a livestream, and I would just point to that:
> This feels really in the weeds of something we are not supposed to see externally. It is a lot of writing for what seems like clueless people doing backend
However there's still no excuse for these problems if they are describing it correctly. When you're storing the home address of thousands of users, (1) you shouldn't do that at all for this type of organisation and (2) you should be very careful to protect it and (3) the first several times it gets stolen, you should think harder about whether your protection is working and there should never be a several+1th time.
As the parent of a Hack Clubber, a lot of what is said here rings true to our experience with the Hack Club leadership.
It's a really long article so he only seemed to read a few paragraphs about the security vulnerability and then said the line while scrolling too fast to read all of the other points. Can't blame him, not going to lie.
I am the Chris cited in the piece. We have actual legal counsel that we go to for legal advice! However, that's not what was being sought here. In this conversation, the question on the table was "What is a data breach?" according to common convention (setting aside the more technical question of what it means specifically in the context of GDPR). The author contended that a single address record—her own record, IIRC—retrieved as a test of an unsecured endpoint counts as a data breach, and therefore that we are legally obligated under GDPR to email all 5,000 participants about it. My contention was/is that a data breach implies exfiltration of a meaningful amount of data. This was a vulnerability, which we patched within about a day, but we had no reason to believe there was a breach by any definition. I pointed to a few sources to demonstrate the consensus definition of "data breach", and one of them was Gemini (or "Omniscient Robot God", as I called it in the conversation).
There are real issues touched on in this post, but the author is not a reliable narrator and they are flattening a very complex issue into a narrative that centers themself as the hero. In reality, this user was banned from our community for a long string of conduct violations, culminating in repeated incidents of saying horribly abusive things to other teenagers. They have been pursuing a grudge against the organization ever since.
I'm not at all surprised that people are trying to program young teenage minds to think hackathons are a good pathway to advancing one's tech skills / career. Nor am I surprised to hear all of the sketchy behavior surrounding this organization and their leadership. It all fits very nicely together.
This sort of cavalier attitude is going to get them in trouble; I'm honestly surprised that this hasn't already gotten them into trouble. Hack Club has enough money that they can easily be a worthwhile target if any of their decisions turns out badly.
I'm going to be a bit oblique here because I don't want HC to take this out on my child, but at one of the HC events, the "figure it out for yourselves" approach led to our child making decisions and taking actions that could very easily have turned life-threatening. Another situation led to our child being "ditched" in a foreign city, unsure how to get ahold of anyone on the ground to help.
Hack Club is a great idea, and I'm glad it exists, but I do think that the way it is currently organized is going to end badly.
I hadn't heard of Hack Club until this very story, so forgive my ignorance, but what exactly happened here? According to their website, it seems to be a community for teenage programmers who build open source projects together, sometimes during events. Looking around at the types of events they host, nothing really looks life-threatening at all? I'm not doubting your experience, just curious how a bunch of programmers could end up in a life-threatening situation during those sorts of events.
Truly, if they're forcing children to sit and code for 3 days straight someone should call the police this moment.
Are you saying they're lying or are wrong about this? They seem to have personal experience with it, and I'm assuming they're not outright lying, but I do think it sounds strange that they would let children sit and code for 3 days straight.
But that's beside the point - they provide rooms, plenty of food and snacks, workshops, and activities to do during breaks. Organizers are on-site at all times, and there is a live hotline for parents or kids to call at any time. "sit and code for 3 days straight" is a gross mischaracterization.
Here's an example of an event hosted: https://www.youtube.com/watch?v=uXWMr0gdLJA
This event was a camp-out. They had tents for the campers, but it was, in my kid's view, a free-for-all. Like a "go figure out the tent situation," and my child couldn't figure out the tent situation, so decided to sleep outside. And woke up with a bunch of bugs (I don't remember exactly what; leeches sticks in my mind). So they decided they'd caffeinate the rest of the event and not sleep.
(edit: Typo fix)
I addressed the post itself in another comment (https://news.ycombinator.com/reply?id=45921428&), so I'll skip that part.
I would really like to know more about these incidents at HC events. We have a lot of very complex tradeoffs within hack club involving security/privacy/safety for exactly the reasons you identified (ie, giving teenagers a very high level of agency/responsibility in running programs). However, staff try to be extremely conscious of these tradeoffs and highly attentive to the realistic risk vectors that come about in our operations.
No teenager will ever (ever!) have anything 'taken out' on them by myself or anyone else that works here. Any time things go wrong or almost go wrong, we just want to know so we can manage that risk in the future. If you are willing to share, please reach out at cwalker@hackclub.com
Please take what's said here with a grain of salt. This is the same person who attempted to extort Hack Club out of thousands by using an airtable token they previously had (all tokens have since been examined as to whether they are truly necessary).
> another asked: "if you found a security vulnerability within hackclub, severe or major, given how they have currently handled reports so far, would YOU report it and go through the same process and payouts that previous people have experienced?"
> the answer from most people was a resounding no.
Popular request is for the program to be expanded. I don't know about the "resounding no".
> teenagers are positioned as "independent contractors" to avoid employment protections, holiday pay, and wage floors. this isn't "scrappy nonprofit" energy - it's child exploitation dressed up as opportunity.
It isn't a full-time job.
> email compliance failures
Recently, email sending has been revamped, and there are tools to subscribe to individual mailing lists.
Criticism isn't ever censored - there's anonymous reporting, a public forum channel for feedback (which only has temporary threadlocks upon very inflammatory or irrelevant discussion), and you can discuss it anywhere else within the Slack.
I could keep going, but the raw truth is that this misses a lot of context for independent observers.
I could be wrong, but I don't think that was OP.
> Popular request is for the program to be expanded. I don't know about the "resounding no".
Do a poll then. I for one agree with that and don't think that most people would report it.
> > teenagers are positioned as "independent contractors" to avoid employment protections, holiday pay, and wage floors. this isn't "scrappy nonprofit" energy - it's child exploitation dressed up as opportunity.
> It isn't a full-time job.
It quite literally is?
> Recently, email sending has been revamped, and there are tools to subscribe to individual mailing lists.
That I'll give you. They did recently revamp that and make it functional.
> Criticism isn't ever censored - there's anonymous reporting, a public forum channel for feedback (which only has temporary threadlocks upon very inflammatory or irrelevant discussion), and you can discuss it anywhere else within the Slack.
Not true. Thread locks are often for 6 months to a year and the posts often aren't even inflammatory, just anti-HQ.
If you do want to actually talk more, contact me on my alt at https://hackclub.slack.com/team/U09Q734PGUU.
I suspect the things this author is critiquing and the internal resistance to it is DIRECTLY related to the wonderful things this org can do and how it operates.
I'm of the belief that you can't truly love a thing without loving its mother. This applies to orgs as it does all creatures undergoing evolutionary processes. If you do straddle this belief tension, you perhaps love something other than the thing you thought you loved. And this other thing you love will eventually take shape under your care and watch. Which is nice, that "what we put our attention on grows".[1]
So obviously, you are permitted to love a thing and take issue with its incubating process/culture, but I would suggest you're the site of contradiction that has some explaining to do. If you win and change the process of the thing you love, the thing you love is on a new path toward being something else. And maybe that's fine. A new seed will grow in the empty space. People probably need to have a thing to love that looks like the thing you loved. It will be back.
But there's some other healthy dissonance here that the author isn't grasping. I would say this to them: You are the bringer of the end of what you love, not its saviour. It's all good -- these transitions happen, and in a more zen sense, it can come to pass without [my] judgement. But just please understand your role. You're not a hero, you're a death. Maybe a healthy one, but a death all the same. The thing you love perhaps won't survive your care.
To be clear, I have very mixed feelings. The critiques are valid, but I wish I could acknowledge them without compulsion to demand an action. I think orgs that work like this need to stay small, only scale horizontally (inspiring/supporting other sister orgs to grow), and resist any central/vertical scaling that brings you under the rules and norms that they are desperately trying to steer clear of, but are now accountable to (according to our shared societal values).
[1]: http://adriennemareebrown.net/2012/08/09/giftingmyattention/
This post should not be taken seriously because the implication is wrong: Hack Club is compliant with data protection rules and is very careful with student data. Unlike almost everywhere else teenagers hang out on the internet, Hack Club does NOT monetize or sell student data or allow advertising to young people.
During one of our many summer programs, we had a situation where some students’ info was accessible publicly by mistake, and as soon as it was reported, we fixed it. No one accessed it and we apologized. You GOT us, ok? It happens, and the young programmer responsible feels really bad about the fact that it keeps getting brought up in new and twisted ways.
We work around the clock with a fully trained staff to make sure that there won’t be any problems and to address them immediately if they come up. As I’ve stated in the past, this original post is from a disgruntled student who was banned for really ugly behavior, and yet they continue, and it's sad to see it getting amplified here.
Like it or not, I feel like account logins, PII, and payment stuff will have to be handled by big central orgs. Ideally, I would like that to be a competent open-source government service. For now it is big companies like Google that can offer their SSO to other sites in an accessible manner.
I get it, some people dislike the appearance, but c'mon, this is HN. If we can use vi(1) on an 80-column terminal, reading an HTML page is not an impossible task.
Just to be clear: I didn't post this on Hacker News myself, and I'm not trying to present myself as high and mighty or as some kind of villain. I'm just someone who documented what I observed, made mistakes along the way, and wanted to share my perspective on the discussion that's happening here.
On data exposure:
Chris said "The short answer is no" when asked if kids' data was exposed. From my perspective, the Neighbourhood API exposed thousands of users' full legal names through an unauthenticated endpoint. There was also the Juice vulnerability that exposed passport numbers, flight receipts, phone numbers, and addresses. A log file with minors' PII was pushed to a public Git repository. The Orpheus Engine code is publicly available on GitHub and shows data being sent to third parties.
Whether this meets the technical GDPR definition of "breach" is a legal question I'm not qualified to answer definitively. But the data was accessible to unauthorised parties, which is what I documented.
On ChatGPT legal advice:
Chris said "nobody relied on ChatGPT for legal advice." I have screenshots of a teenage intern using ChatGPT to answer GDPR compliance questions. Whether that counts as "relying on ChatGPT for legal advice" or just using it as a reference tool is a matter of interpretation. I was concerned about a teenager making legal determinations using AI tools, but I can see how others might view this differently.
On the timeline:
Chris said the vulnerability was "fixed immediately... within a day." From my perspective, it was reported on July 3rd and wasn't fixed until after I made it public. Other community members have also questioned this timeline. I may be wrong about this - I'm just sharing what I observed.
On the ban:
Chris is right that I said horrible things to people. I was in a terrible mental state at the time - Chris was involved in my mental health crises on other occasions beforehand (he called an ambulance to my house). That doesn't excuse my behavior, and I've taken accountability for it. I included this context because I felt it was relevant, but I understand why others might see it as making excuses.
On DSARs and privacy policy:
I mentioned in the article that I sent DSARs (data subject access requests) that went unanswered for months. Chris didn't address this in his response, so I'm not sure what the current status is. I also noted that there's still no privacy policy after 3+ months of promises. Chris mentioned they're "actively iterating" on one, which may be true - I'm just sharing what I observed up to when I was banned.
I also mentioned that the GDPR email address was removed after I raised concerns. Other community members have confirmed this happened. I'm not sure why it was removed or if it's been replaced with something else.
On forced de-anonymisation:
There was a recent incident where a student (who had already bought flights) was told they needed to reveal their identity to get an explanation for why their Parthenon (an in-person event, see https://athena.hackclub.com) invite was revoked. They complied and revealed their identity publicly, but still didn't receive an explanation.
Christina Asquith (Hack Club's COO) responded by accusing them of lying, showing "bad faith," making "false accusations," and "harassing staff." She said "Character matters at hack club" and refused to work with them anymore after they posted in the #meta channel (which is specifically for community feedback). When the student tried to handle it privately first, they got one response and then were ghosted. After they revealed their identity and asked directly for an explanation, Christina still refused to provide one, saying the reason "will not be released" and that "no amount of info will ever be enough for them to stop arguing."
The student later described feeling like they were "talking to a stone wall that showed no emotion" and that they only got help from people who weren't part of the organizing team. Christina has also publicly stated she's "less likely to reply" to anonymous posts and has a problem with people not putting their names behind questions.
For context: Hack Club has a bot called Prox2 that allows community members to post anonymously in the #meta channel (a channel for feedback and concerns). This was created specifically to allow people to raise concerns without fear of retribution, especially given the power imbalance between adults in leadership positions and teenagers in the community. However, staff can refuse to engage unless people reveal themselves, which undermines the purpose of having an anonymous posting system. I'm not sure if this is official policy or just Christina's personal preference, but it's concerning when combined with claims that "no teenager will ever have anything taken out on them."
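To make the mechanism concrete, here is a minimal hypothetical sketch of how this kind of anonymous-relay bot generally works. This is not the actual Prox2 code; the `/anon` command name, the `#meta` channel constant, and the use of Slack's Bolt framework are all assumptions for illustration.

```typescript
// Hypothetical sketch of an anonymous-relay Slack bot; not the actual Prox2 code.
import { App } from "@slack/bolt";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,          // bot token with chat:write scope
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

const FEEDBACK_CHANNEL = "#meta"; // invented constant for the feedback channel

// A user runs e.g. `/anon my concern here`; only the text is forwarded.
app.command("/anon", async ({ command, ack, client }) => {
  await ack({ response_type: "ephemeral", text: "Posted anonymously." });

  // command.user_id is deliberately not forwarded, so the channel sees only the
  // bot as the author. Whoever operates the bot (and its logs) can still see it.
  await client.chat.postMessage({
    channel: FEEDBACK_CHANNEL,
    text: command.text,
  });
});

(async () => {
  await app.start(3000);
})();
```

That last comment is the structural weakness being described: the anonymity only goes as far as the bot's operator and the staff's willingness to engage with posts made this way.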
On multiple issues:
Chris focused his response on the Neighbourhood vulnerability, but the article documented multiple issues (Juice, the Git log file, Orpheus Engine, etc.). I understand he can't address everything, but I wanted to note that the article covered a pattern of issues, not just one incident.
I also noticed that all of these vulnerabilities that I reported came from the same person (Thomas). In Chris's response, he referred to this person as a "junior engineer," but in Hack Club's Slack and other communications, this person's title was "Capability Changing Events Lead." I'm not sure why the title changed in Chris's post, but I thought it was worth noting. This person is still working at Hack Club, and from what I observed, there didn't seem to be much accountability or consequences for the repeated security issues. I may be wrong about this - I'm just sharing what I observed.
On the "lawyer" claim:
Chris mentioned that Hack Club has consulted with "a very fancy lawyer who specializes in corporate compliance." From my perspective, I haven't seen evidence of this legal work - there's still no published privacy policy, no designated DPO (Data Protection Officer), no named compliance contact, and no data-retention policy. I'm not saying the lawyers don't exist - I'm just noting that the community hasn't seen any tangible output from this legal consultation yet. Maybe it's all happening behind the scenes, but from the outside it's hard to tell.
On the pattern of response:
I've noticed that concerns raised in the community sometimes don't get responses for a while, and then when people speak up publicly, staff engage more actively. Other community members have described similar experiences where they felt ignored until they raised things publicly. I'm not saying this is intentional - it could just be that staff are busy and public posts get more attention. But from the perspective of people raising concerns, it can feel like the only way to get a response is to make things public, which isn't ideal for anyone.
On the article:
I tried to be clear that I'm not trying to be a hero or villain - just document what happened. The article starts and ends with praise for Hack Club's mission. Other community members (VEBee, rlmineing_dead) have corroborated some of my points, but I'm sure I got things wrong too. The Orpheus Engine code is public if people want to verify that part themselves.
I wrote the article because I thought these issues were important to document, but I'm sure there are perspectives and context I'm missing. I'm not asking anyone to take my word for it - the code is public, the vulnerabilities are documented, and people can verify things themselves.
I want to be clear: Hack Club has done a lot of good. It's helped thousands of teenagers learn to code, build projects, and find community. Many of my friends came from Hack Club, and I'm genuinely grateful for the opportunities it gave me. That's why I care about these issues - because I want Hack Club to be better, not because I want to tear it down. The problems I've documented are real, but so is the positive impact Hack Club has had on many people's lives.
The title doesn't make it sound bad.
I mean, besides lawyers, who cares if some legal document is missing? You can respect privacy without a privacy policy; plenty of people do.
Here, it seems the actual problem is that there is no adult in the room, literally. Just kids that are completely clueless about how to care about personal data. Here, "no privacy policy" doesn't just mean "we dislike paperwork", it means "we are letting kids play with personal data without adult supervision".
Report them, you say? Many DPCs, such as the Irish DPC, take a very lax approach to the regulation; just ask Max Schrems, he's been at this for years. I think the EU and the regulators do not have the resources to enforce the law, so while there are requirements to protect customer data, nothing bad happens if you don't. Just check the top of HN as I write this [1]: "Checkout.com hacked, refuses ransom payment, donates to security labs". Will anyone be arrested, charged, fined, or otherwise penalized? Nope, not a chance. I 100% guarantee absolutely nothing will happen as a result of this article. GPT makes it so easy to capture user data these days, and people will just willingly hand it over.
The truth is, you should always be very careful what data you hand out. Use an alias, use privacy tools, always be wary, and check if they have a privacy policy; check to see if it works (make a dummy account, file a GDPR request; if there's no reply, be wary).
If they are not serious about privacy, stop, think, and act accordingly. While it is a disgrace what these individuals have done, individuals need to take personal responsibility, just as in the real world: would you trust a random stranger giving you pills? Hopefully not!