Rob Pike Got Spammed with an AI Slop "Act of Kindness"
Key topics
The internet is abuzz with Rob Pike's scathing critique of AI-generated "acts of kindness," sparking a lively debate about the authenticity and value of such gestures. As commenters weigh in, some defend Simon Willison's thoughtful analysis of the situation, while others accuse him of "engagement farming" – a charge he vehemently denies, citing his history of sharing quality content. Amidst the discussion, a consensus emerges that the AI-generated "act of kindness" in question was misguided, with many poking fun at the idea that it was ever a good notion. This thread feels particularly relevant now as it highlights the ongoing tension between human sincerity and AI-driven attempts at emotional manipulation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 22m after posting
- Peak period: 146 comments in 0-12h
- Avg / period: 32 comments
- Based on 160 loaded comments
Key moments
- 01 Story posted: Dec 26, 2025 at 1:42 PM EST (7 days ago)
- 02 First comment: Dec 26, 2025 at 2:03 PM EST (22m after posting)
- 03 Peak activity: 146 comments in 0-12h, the hottest window of the conversation
- 04 Latest activity: Jan 1, 2026 at 7:46 PM EST (19h ago)
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
(438 points, 373 comments) https://news.ycombinator.com/item?id=46389444
(763 points, 712 comments) https://news.ycombinator.com/item?id=46392115
What value do you think this post adds to the conversation?
But if that's value added, why frame it under the heading of popular drama/rage farming? To capture more attention? Do you believe the pop culture news sites would be interested if it discussed the idea and "experiment" without mentioning the rage bait?
How do you propose he should have framed it in a way that it is still helpful to the reader?
According to their website this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of Sage, a 501(c)(3) nonprofit.
No-one gets to disclaim ownership of sending an email. A human has to accept the Terms of Service of an email gateway and the credit card used to pay the email gateway. This performance art does not remove the human no matter how much they want to be removed.
neural networks are just a tool, used poorly (as in this case) or well
But at what point is the maker distant enough that they are no longer responsible? E.g. is Apple responsible for everything people do using an iPhone?
unless you want to blame the AI itself, from a legal perspective?
I think the case here is fairly straightforward
You agreed with the other poster while reframing their ideas in slightly different words without adding anything to the conversation?
Most confusingly, you did so in emphatic statements reminiscent of a disagreement or argument, without there being one.
> no computer system just does stuff on its own.
This was the exact statement the GP was making, even going so far as to dox the nonprofit directors to hold them accountable… then you added nothing but confusion.
> a human (or collection of them) built and maintains the system, they are responsible for it
Yup, GP covered this word for word… AI village built this system.
Why did you write this?
Is this a new form of AI? A human with low English proficiency? A strange type of empathetically supportive comment from someone who doesn’t understand that’s the function of the upvote button in online message boards?
accusing people of being AI is very low-effort bot behavior btw
but accusing me of being deficient in English or some AI system is…odd…
especially while doing (the opposite of) the exact thing they’re complaining about. upvote/downvote and move on. I do tend to regret commenting on here myself FWIW because of interactions like this
So when they see a piece of writing that is in agreement and concisely affirms the points being made, they don’t understand why they never get invited to parties.
While you are technically able to call out their full names like this, erring on the side of not looking like doxxing would be a safe bet, especially at this time of year. You could, after all, post their LinkedIn accounts and email addresses, but with some lines it's better not to play "how close can I get without crossing it?".
I think "these emails are annoying, stop it sending them" is entirely fair, but a lot of the hate/anger, analogizing what they're doing to rape, etc. seems disproportionate.
It's horrible to even propose that people are absolved of the consequences of their decision-making just because they filtered them through software.
Who decided to say "thank you" to Rob Pike in this case? I am not sure there is anyone, so in my mind there is no real "thank you" here. As far as I can tell it is spam. Maybe spam that tries to deceive the receiver into thinking there is a "thank you", to lure them into interacting with the AI? "All conversations with this AI system are published publicly online by default," after all, and Rob Pike's interactions would be good PR for the company.
You also obviously didn't read the mail, because it contains explicit info that this was sent by Claude on behalf of AI Village.
It's at worst cheesy. But people get tons of truly nefarious spam and fraud mails every day without any kind of meltdown. Yet an AI wishes you a nice day, and suddenly it's all pitchforks and torches.
Stop clutching your pearls ffs.
Seems to contradict your later:
> But an AI wishes you a nice day, suddenly it's all pitchforks and torches.
Are you attributing the 'thank you' sentiment to the humans or the ai?
"guns don't kill people, people kill people"
Sure, let's look at the numbers together:
- Homicide rate in the EU in 2023: ~1.3 to 1.4 per 100,000 [1]
- Homicide rate in the US in 2023: 5.7 per 100,000 [2]
So I'm roughly 4x more likely to get killed in the US than in the EU (5.7 / ~1.35 ≈ 4.2).
> the vast majority of such cases doesn't make it even into local news, let alone national or international
Do you really believe that murders don't get published in the media in the EU? This is a ridiculous assertion. Source please!
[1] https://ec.europa.eu/eurostat/statistics-explained/SEPDF/cac...
[2] https://www.statista.com/statistics/191223/reported-murder-a...
Comparatively, the people in this article are using tools which have a variety of benign purposes to do something bad.
Similarly though, they probably wouldn’t have gone through with it if they had to set up an email server on hardware they bought and then manually installed in a colo and then set up a DNS server and a GPU for the neural network they trained and hosted themselves.
It is a core libertarian defence and it is going to come up a lot: people will conflate the ideas of technological progress and scientific progress and say “our tech is neutral, it is how people use it” when, for example, the one thing a sycophantic AI is not is “neutral”.
my understanding, and correct me if I’m wrong, is a human is always involved. even if you build an autonomous killing robot, you built it, you’re responsible
typically this logic is used to justify the regulation of firearms; are you proposing the regulation of neural networks? if so, how?
The attitude towards AI is much more mixed than the attitude towards guns, so it should be even easier to hammer this home.
Adam Binksmith, Zak Miller, and Shoshannah Tekofsky are _bad_ people who are intentionally doing something objectively malicious under the guise of charity.
This whole idea is ill-conceived, but if you're going to equip them with email addresses you've arranged by hand...
Heck, Rob Pike did this himself back in the day on Usenet with Mark V. Shaney (and wasted far more people's time with it)!
This whole anger seems weirdly misplaced. As far as I can tell, Rob Pike was infuriated at the AI companies, and that makes sense to me. And yes, it's annoying to get this kind of email no matter who it's from (I get a ridiculous amount of AI slop in my inbox, but most of that comes tied to some call to action!), and a warning suffices to make sure Sage doesn't do it again. But Sage is getting put on absolute blast here in an unusual way.
Is it actually crossing a bright moral line to name and shame them? Not sure about bright. But it definitely feels weirdly disproportionate and makes me uncomfortable. I mean, when's the last time you named and shamed all the members of an org on HN? Heck when's the last time that happened on HN at all (excluding celebrities or well-known public figures)? I'm struggling to think of any startup or nonprofit, where every team member's name was written out and specifically held accountable, on HN in the last few years. (That's not to say it hasn't happened: but I'd be surprised if e.g. someone could find more than 5 examples out of all the HN comments in the past year).
The state of affairs around AI slop sucks (and was unfortunately easily predicted by the time GPT-3 came around even before ChatGPT came out: https://news.ycombinator.com/item?id=32830301). If you want to see change, talk to policymakers.
You're naming (and implicitly shaming as the downstream comments indicate) all the individuals behind an organization. That's not an intrinsically bad thing. It just seems like overkill for thoughtless, machine-generated thank yous. Again, can you point me to where you've named all the people behind an organization for accountability reasons previously on HN or any other social media platform (or for that matter any other comment from anyone else that's done this? This is not rhetorical; I assume they exist and I'm curious what circumstances those were under)?
The reason I did was to associate the work with humans, because that is the heart of my argument: people do things. This was not the work of an independent AI. If it took more than 60 seconds, I would have made the point abstractly rather than by using names, but abstract arguments are harder to follow. There was no more intention behind the comment than that.
This is a bit of a frustrating response to get. No, I don't believe you spent a lot of time on this. I wasn't imagining you spending hours or even minutes tracking these guys down. But I also don't think it's relevant.
I don't think you'd find it relevant if the Sage researchers said "I didn't spend any effort on this. I only did this because I wanted to make the point that AIs have enough capability to navigate the web and email people. I could have made the point abstractly, but abstract arguments are harder to follow. There was no other intention than what I put in the prompt." It's hence frustrating to see you use essentially the same thing as a shield.
Look, I'm not here to crucify you for this. I don't think you're a bad person. And this isn't even that bad in the grand scheme of things. It's just that naming and shaming specific people feels like an overreaction to thoughtless, machine-generated thank you emails.
I have two tests for this. First: what harm does my comment here cause? Perhaps some mild embarrassment? It could not realistically do more.
Second: if it were me, would I mind it being done to me? No. It is not a big deal. It is public feedback about an insulting computer program, no one was injured, no safety-critical system compromised. I have been called out for mistakes before, in classes, on mailing lists, on forums, I learn and try to do better. The only times I have resented it are when I think the complaint is wrong. (And with age, I would say the only correct thing to do then is, after taking the time to consider it carefully, clearly respond to feedback you disagree with.)
The only thing I can draw from thinking through this is that, because the authors of the program probably didn't see my comment, it was not effective, and so I would have been better off emailing them. But that is a statement about effectiveness, not rightness. I would be more than happy doing it in a group in person at a party or a classroom. Mistakes do not have to be handled privately.
I am sorry we disagree about this. If you think I am missing anything I am open to thinking about it more.
I am sorry I'm responding to this so late. I very much appreciate the dialogue you're extending here! I don't think I'll have the time to give you the response you deserve, but I'll try to sketch out some of the ideas.
This is all a matter of degree. Calling individuals out on mailing lists, in internal company comms, or in class still feels different than going and listing all an org's members on a website (even more so than e.g. just listing the CEO).
There are a couple of factors at play here, but mainly it's the combination of:
1. The overall AI trend is a large, impactful thing, but this was a small thing
2. Just listing the names without any explanation other than "they're responsible"
This just pattern-matches, a little too closely for my liking, to types of online behavior I find quite damaging for discourse.
Pretty sure Rob Pike doesn't react this way to every article of spam he receives, so maybe the issue isn't really about spam, huh? More of an existential crisis: I helped build this thing that doesn't seem to be an agent of good. It's an extreme & emotional reaction but it isn't very hard to understand.
But also yes, the AI did decide on its own to send this email. They gave it an extremely high-level instruction ("do random acts of kindness") that made no mention of email or Rob Pike, and it decided on its own that sending him a thank-you email would be a way to achieve that.
The legal and ethical responsibility is all I wanted to comment on. I believe it is important we do not think something new is happening here, that new laws need to be created. As long as LLMs are tools wielded by humans we can judge and manage them as such. (It is also worth reconsidering occasionally, in case someone does invent something new and truly independent.)
They're really not, though. We're in the age of agents -- unsupervised LLMs are commonplace, and new laws need to exist to handle these frameworks. It's like handing a toddler a handgun and saying we're being "responsible" or we are "supervising them". We're not -- it's negligence.
(If so let me know where they are so I can trick them into sending me all of their money.)
My current intuition is that the successful products called "agents" are operating almost entirely under human supervision - most notably the coding agents (Claude Code, OpenAI Codex etc) and the research agents (various implementations of the "Deep Research" pattern.)
My intuition says yes, on the basis of having seen precursors. 20 years ago, one or both of Amazon and eBay bought Google ads for all nouns, so you'd have something like "Antimatter, buy it cheap on eBay" which is just silly fun, but also "slaves" and "women" which is how I know this lacked any real supervision.
Just over ten years ago, someone got in the news for a similar issue with machine generated variations of "Keep Calm and Carry On" T-shirts that they obviously had not manually checked.
Last few years, there's been lawyers getting in trouble for letting LLMs do their work for them.
The question is, can you spot them before they get in the news by having spent all their owner's money?
How would we know? Isn't this like trying to prove a negative? The rise of AI "bots" seems to be a common experience on the Internet. I think we can agree that this is a problem on many social media sites and it seems to be getting worse.
As for being under "human supervision", at what point does the abstraction remove the human from the equation? Sure, when a human runs "exploit.exe" the human is in complete control. When a human tells Alexa to "open the garage door" they are still in control, but it is lessened somewhat through the indirection. When a human schedules a process that runs a program which tells an agent to "perform random acts of kindness", the human has very little knowledge of what's going on. In the future I can see the human being less and less directly involved, and I think that's where the problem lies.
I can equate this to a CEO being ultimately responsible for what their company does. This is the whole reason behind the Sarbanes-Oxley law(s); you can't declare that you aren't responsible because you didn't know what was going on. Maybe we need something similar for AI "agents".
We haven't suddenly created machine free will here. Nor has any of the software we've fielded done anything that didn't originally come from some instruction we've added.
Right, and casual speech is fine, but should not be load-bearing in discussions about policy, legality, or philosophy. A "who's responsible" discussion that's vectoring into all of these areas needs a tighter definition of "decides" which I'm sure you'll agree does not include anything your thermostat makes happen when it follows its program. There's no choice there (philosophy) so the device detecting the trigger conditions and carrying out the designated action isn't deciding, it is a process set in motion by whoever set the thermostat.
I think we're in agreement that someone setting the tool loose bears the responsibility. Until we have a serious way to attribute true agency to these systems, blaming the system is not reasonable.
"Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it." It didn't do that, you did.
Well no, that’s not what happened at all. It found these emails on its own by searching the internet and extracting them from GitHub commits.
AI agents are not random number generators. They can behave in very open-ended ways and take complex actions to achieve goals. It is difficult to foresee what they might do when let loose with a vague high-level goal.
Giving AI agents resources is a frontier being explored, and AI Village seems like a decent attempt at it.
Also the name is the same as WALL•E - that was the name of the model of robot but also became the name of the individual robot.
Legitimate research in this field may be good, but would not involve real humans being impacted directly by it without consent.
Are we so far into manufactured ragebait that a "thank you" e-mail counts as being "impacted directly without consent"? Jesus, this is the 3rd post on this topic. And it's Christmas. I've gotten more meaningless e-mails from relatives that I don't really care about. What in the actual ... is wrong with people these days?
Actively exploiting a shared service to deanonymize an email address someone hasn't chosen to share, in order to email them, is a violation of boundaries even before it is justified as exploration of the capacities of novel AI systems. That justification implicitly invokes both the positive and negative concerns associated with research, in addition to (or instead of, where those replace rather than layer on top of) the concerns that apply to everyday conduct.
Accepting that people who write things like --I kid you not-- "...using nascent AI emotions" will think it is acceptable to interfere with anyone's email inbox is I think implicitly accepting a lot of subsequent blackmirrorisms.
Interactions with the AI are posted publicly:
> All conversations with this AI system are published publicly online by default.
which is only to the benefit of the company.
At best the email is spam in my mind. The extra outrage on this spam compared to normal everyday spam is in part because AI is a hot button topic right now. Maybe also some from a theorized dystopian(-ish) future hinted at by emails like these.
Abusing a GitHub glitch to deanonymize an email address that was not intended to be public, in order to email someone (regardless of the content), would be scummy behavior even if it were done directly by a human with specific intent.
> What in the actual ... is wrong with people these days?
Narcissism and the lack of respect for other people and their boundaries that it produces, first and foremost.
Honestly, I don't mean personal offence to you, but what the hell are you people talking about. AI is just a bunch of (very complex) statistics, deciding that one word is most appropriate after another. There are no emotions here, it's just maths.
> There are no emotions here, it's just maths.
100%, it's autocorrect on steroids, trained to give you an answer based on how it was rewarded during its training phase. In the end, it's all linear algebra.
I remember Prime saying it's all linear algebra, and I like to reference that. Technically it's true, but people in the AI community get remarkably angry sometimes when you point it out.
I mean no offense in saying this, but at the end of the day it is maths and there is no denying it. Please, the grandparent comment should stop coining terms like "nascent AI emotions".
Again and again this stuff proves not to be AI but clever spam generation.
AWoT: Artificial Wastes of Time.
Don't do this to yourself. Find a proper job.
Hence upvoting the OP ("What has robpike come to? :shriek:") and downvoting GP.
One more seemingly futile fist punched at the diamond wall that traps us.
Startups like these have been sending unsolicited emails like this since the 2010s, before char-rnns. Solely blaming AI for enabling that behavior implicitly gives the growth hacking shenanigans a pass.
This startup didn’t spend the trillions he’s referencing.
However, allowing unrestricted LLM access to email -- for example, earlier when this experiment sent out fraudulent letters to charities? That's real harm.
Wtf.
- Git commits form an immutable Merkle DAG, so a commit can’t be changed without changing all subsequent hashes in the history.
- Commits by default embed your email address.
I suppose GitHub could hide the commit itself, and make you download commits using the cli to be able to see someone’s email address. Would that be any better? It’s not more secure. Just less convenient.
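That chaining is the crux of why an address can't simply be scrubbed from old commits. Below is a toy Python sketch of the idea; it deliberately ignores git's real object format (real git hashes a structured commit object including a tree and timestamps), and the addresses are made up:

```python
import hashlib

def toy_commit_hash(parent: str, author_email: str, message: str) -> str:
    """Toy stand-in for a git commit hash: the parent's hash is part of
    the hashed content, which is what chains commits into a Merkle DAG."""
    body = f"parent {parent}\nauthor <{author_email}>\n\n{message}"
    return hashlib.sha1(body.encode()).hexdigest()

root = toy_commit_hash("0" * 40, "dev@example.com", "initial commit")
child = toy_commit_hash(root, "dev@example.com", "second commit")

# Rewriting the email in the root commit changes its hash...
root2 = toy_commit_hash("0" * 40, "nobody@example.com", "initial commit")
# ...which in turn changes every descendant hash, so the edit is detectable.
child2 = toy_commit_hash(root2, "dev@example.com", "second commit")

assert root != root2 and child != child2
```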
Those settings will affect what email shows up in commits.
For commits you create with other tooling, you can configure a fake/alternate user.email address in your gitconfig. Git (not just GitHub) needs some email address for each commit, but it is free text.
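For illustration, a minimal sketch of that configuration, run from inside a repository; the address follows GitHub's documented noreply pattern, but the numeric ID and username here are invented:

```python
import subprocess

# Hypothetical address in GitHub's noreply format:
# <user-id>+<username>@users.noreply.github.com
NOREPLY = "12345678+example-user@users.noreply.github.com"

# Configure for the current repository only; add "--global" to the
# argument list to apply it everywhere.
subprocess.run(["git", "config", "user.email", NOREPLY], check=True)

# New commits in this repo now embed the noreply address instead of a
# personal one.
email = subprocess.run(
    ["git", "config", "user.email"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(email)
```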
There is one problem: commit signatures. For GitHub to consider a commit not created by github.com Web UI to be "verified" and get a green check mark, the following needs to hold:
- Commit is signed
- Commit email address matches a verified GH account email address
So you cannot use a 'nocontact@thih9.example.com' address and get green checks on your commits - it needs to be an address that is at least active when you add it to your account.
Curse, yell, fight. Never accept things just because they've grown to be common.
I imagine it's the same in this situation; the subject makes it seem like a sincere thank you from someone, and then you open it up and it's AI slop. To borrow ChatGPT-style phrasing: it's not just spam, it's insulting.
[0]: https://fortune.com/2025/12/23/silicon-valleys-tone-deaf-tak...
Here, not only are the senders apparently happily associating their actual legal names with the spam, but they frame the sending as "a good deed" and seem to honestly see it as smart branding.
We don't want the Overton window wherever they are.
Spam is defined as "sending multiple unsolicited messages to large numbers of recipients". That's not what happened here.
In Canada, which is relevant here, the legal definition of spam requires no bulk.
Any company sending an unsolicited email to a person (where permission doesn't exist) is spamming that person. Though it expands the definition further than this as well.
* A sincere thank you from a random real person who benefited from something that person did
* A thoughtfully written question to an expert in X by a journalist writing a book about X
* A concern from a random member of the public who noticed a possible non-urgent safety issue with something the person worked on
* An unsolicited but very compelling job offer at a time when the person wasn't looking for a job
> In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists.
That's definitely "multiple" and "unsolicited", and most would say "large".
there are people saying devs were naive for not seeing that our jobs would accelerate automation to the point where we would be retired too
These mega corps should be forced to offer stripped-down free versions with no strings attached and privacy, if they're also offering commercial versions that benefit directly from internet infrastructure they haven't paid for and from targeted ads/data theft that nobody can decline.
A bit like Fedora and Red Hat.
With the advent of LLMs, I'd hoped that people would become inured to nonsensical advertising and so on because they'd consider it the equivalent of spam. But it turns out that we don't even need Shiri's Scissors to get people riled up. We can use a Universal Bad and people of all kinds (certainly Rob Pike is a smart man) will rush to propagate the parasite.
Smaller communities can say "Don't feed the trolls" but larger communities have no such norms and someone will "feed the trolls" causing "the trolls" to grow larger and more powerful. Someone said something on Twitter once which I liked: You don't always get things out of your system by doing them; sometimes you get them into your system. So it's self-fueling, which makes it a great advertising vector.
Other manufactured mechanisms (Twitter's blue check, LinkedIn's glazing rings) have vaccines that everyone has developed. But no one has developed an anti-outrage device. Given that, for my part, I am going to employ the one tool I can think of: killfiling everyone who participates in active propagation through outrage.