Clankers Die on Christmas
Posted 4 months ago · Active 4 months ago
remyhax.xyz · Tech story · High profile
Sentiment: calm / mixed · Debate: 80/100
Key topics
Artificial Intelligence
Satire
Large Language Models
Technology Ethics
A satirical blog post about 'clankers' (AI) dying on Christmas is met with a mix of amusement and concern, sparking discussions about the ethics and implications of AI development.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 15m
Peak period: 86 comments in 0-12h
Avg / period: 26.7
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Sep 8, 2025 at 11:08 AM EDT (4 months ago)
- First comment: Sep 8, 2025 at 11:22 AM EDT (15m after posting)
- Peak activity: 86 comments in 0-12h (hottest window of the conversation)
- Latest activity: Sep 13, 2025 at 2:53 PM EDT (4 months ago)
ID: 45169275 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
>The word clanker has been previously used in science fiction literature, first appearing in a 1958 article by William Tenn in which he uses it to describe robots from science fiction films like Metropolis.[2] The Star Wars franchise began using the term "clanker" as a slur against droids in the 2005 video game Star Wars: Republic Commando before being prominently used in the animated series Star Wars: The Clone Wars, which follows a galaxy-wide war between the Galactic Republic's clone troopers and the Confederacy of Independent Systems' battle droids.
Apparently those guys have a g instead of a k.
Sounds like early 70s.
"The programmes were originally broadcast on BBC1 between 1969 and 1972, followed by a special episode which was broadcast in 1974."
What else!
- Kurt Vonnegut
and
>If a person has ugly thoughts, it begins to show on the face. And when that person has ugly thoughts every day, every week, every year, the face gets uglier and uglier until you can hardly bear to look at it.
>A person who has good thoughts cannot ever be ugly. You can have a wonky nose and a crooked mouth and a double chin and stick-out teeth, but if you have good thoughts it will shine out of your face like sunbeams and you will always look lovely.
- Roald Dahl
Now, true AGI? There's a debate to be had there regarding rights etc. But you better be able to prove that a so-called AGI is truly sentient before you push for that. This isn't Data. There is nothing even remotely close to sentience present in any LLM. I don't even know if AGI is going to be achievable within 100 years. But as far as I'm concerned, AI "slurs" are just blowback against the invasion of AI into everyday life, as is increasingly common. There will be a point where the hard discussion of "does true artificial general intelligence deserve rights" will happen. That time is not now, except as a thought experiment.
Like, clanker is the equivalent of a racial slur but for robots. The reason it works and is funny is because we already know what racial slurs are and have a context for them.
If racial slurs didn't exist, neither would clanker.
You have to actually think about the world we live in and why things are the way they are. It's easy to say "just cuz lol", but we're engineers. Nothing happens "just cuz". No, there's a reason.
Jim Crow "ended" (it's what we tell ourselves) in the south in 1965 with the Civil Rights Act of 1964 and Voting Rights Act of 1965. Our last two presidents were adults when that happened, and it's not like racism was solved when those laws were passed.
The US still has a lot of work to do here - it's absurd to me to hear US Conservatives talking about how slavery ended in the 1860s so we should end protections for African Americans because it's been "so long". It hasn't, and they know that.
Robot Slur Tier List: https://www.youtube.com/watch?v=IoDDWmIWMDg
https://www.youtube.com/watch?v=RpRRejhgtVI
Responding To A Clankerloving Cogsucker on Robot "Racism": https://www.youtube.com/watch?v=6zAIqNpC0I0
And here's why:
The essence of fascism is to explain away hatred toward other groups of people by dehumanizing them. The hatred of an outside group is necessary, in the fascist framework, to organize one group of people into a unit who will follow a leader unquestioningly. Taking part in crimes against the outside group helps bind these people to the leader, who absolves them of their normal sense of guilt.
A fascist will use "fascist" to sarcastically refer to themselves in ridiculous scenarios, e.g. as a human defending humanity against robots, or a human exterminating rats. All of this is to knowingly deploy it in a way that destigmatizes being called a fascist, while also suggesting that murderous measures taken by past fascist movements have not been genocidal, but have been defending humans against subhumans. I'm not joking. Supposedly taking pride in being an anti-AI fascist is just a new twist on a very old troll. It's designed to mock and make light of mass murder, by suggesting that e.g. Nazism was no different from a populist movement defending themselves against machines, e.g. Jews.
Don't be seduced by the above comment's attempt at absurdist humor. This type of humor is typical of fascist rhetoric. It aims to amuse the simple-minded with superficial comparisons. It is deep deception disguised as harmless humor. Its true purpose has nothing to do with humans versus AI. Its dual purposes are to whitewash the meaning of fascism and to compare slaughtering "subhuman groups" to defending humanity against AI.
This is sort of like calling The Producers fascist propaganda.
So I don't care what identity the person uses to backfill their ideology, it is still a pure fascist troll. And picking such an identity just makes it more obvious.
Currently your argument seems to be that satirising fascism is actually fascist. Which tbh also seems like a pretty fascist position to hold so I must be wrong.
Jreg is not "supposedly taking pride in an anti AI position". He is satirising exactly the thing you call out actual fascists for doing. He is lampooning the kind of nonsense real fascists hide behind.
?
Are you implying prioritizing Humanity uber alles is a bad thing?! Are you some kind of Xeno and Abominable Intelligence sympathizer?!
The Holy Inquisition will hear about this, be assured.
Without searching the internet, I wouldn't even be able to place it to a decade or country. Fascinating!
There was also a safer revival of clackers in North America in the 90s, where the balls are attached to a handle.
Even now I've figured it's about AI, I still don't really get it. Is it supposed to be funny?
Re funny, I think the Onion does better https://theonion.com/ai-chatbot-obviously-trying-to-wind-dow...
It has a strong smell of "stop trying to make fetch happen, Gretchen."
For those who can see the obvious: don't worry, there's plenty of pushback regarding the indirect harm of gleeful fantasy bigotry[8][9]. When you get to the less popular--but still popular!--alternatives like "wireback" and "cogsucker", it's pretty clear why a youth crushed by Woke mandates like "don't be racist plz" are so excited about unproblematic hate.
This is edging on too political for HN, but I will say that this whole thing reminds me a tad of things like "kill all men" (shoutout to "we need to kill AI artist"[10]) and "police are pigs". Regardless of the injustices they were rooted in, they seem to have gotten popular in large part because it's viscerally satisfying to express yourself so passionately.
[1] https://www.reddit.com/r/antiai/
[2] https://www.reddit.com/r/LudditeRenaissance/
[3] https://www.reddit.com/r/aislop/
[4] All the original posts seem to have now been deleted :(
[6] https://www.reddit.com/r/AskReddit/comments/13x43b6/if_we_ha...
[7] https://web.archive.org/web/20250907033409/https://www.nytim...
[8] https://www.rollingstone.com/culture/culture-features/clanke...
[9] https://www.dazeddigital.com/life-culture/article/68364/1/cl...
[10] https://knowyourmeme.com/memes/we-need-to-kill-ai-artist
I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.
That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.
ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.
Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it in to your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.
(This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)
What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.
If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.
So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.
I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.
let me stop you right there. you're making a lot of assumptions about the shapes life can take. encountering and fighting a grey goo or tyranid invasion wouldn't have a moral quality any more than it does when a man fights a hungry bear in the woods
it's just nature, eat or get eaten.
if we encounter space monks then we'll talk about morality
> ChatGPT deserves no more or less empathy than a fork does.
I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.
But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.
It's not like, scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might result in you acting more callous overall.
So, I'll burn an extra token or two saying "please and thanks".
I also believe AI is a tool, but I'm sympathetic to the idea that, due to some facet of human psychology, being "rude" might train me to be less respectful in other interactions.
Ergo, I might be more likely to treat you like a toilet.
I'd probably have passed this over if it wasn't contextually relevant to the discussion, but thank you for your patience with my pedantry just the same.
Are you really in danger of forgetting the humanity of strangers because you didn't anthropomorphize a text generator? If so, I don't think etiquette is the answer
perhaps if an LLM were trained to be less conversational and more robotic, i would feel less like being polite to it. i never catch myself typing "thanks" to my shell for returning an `ls`.
and that is why it must die!
Your condescension is noted though.
I won't, and I think you're delusional for doing so
If you're writing prompts all day, and the extra tokens add up, I can see being clear but terse making a good deal of sense, but if you can afford the extra tokens, and it feels better to you, why not?
Looking at it from a statistical perspective: if we imagine text from the public internet being used during pretraining, we can expect, with few exceptions, that polite requests achieve their objective more often than terse or plainly rude requests. This will be severely muted during fine-tuning, but it is still there in the depths.
It's also easy in English to make a command polite simply by prefixing "Please" to the imperative mood.
We have moved up a level in abstraction. It used to be punch cards, then assembler, then syntax, now words. They all do the same thing: instruct a machine. Understanding how the models are designed and trained can help us be more effective in that; just like understanding how compilers work can make us better programmers.
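If you want to poke at that politeness hunch empirically rather than argue it, the comparison is cheap to set up. A minimal sketch in Python, assuming nothing about any particular vendor: `call_llm` is a hypothetical placeholder, not a real library's API, and you would swap its body for an actual client call before drawing conclusions.

```python
# Minimal sketch: treat politeness as just another prompt variable and A/B it.
# `call_llm` is a hypothetical stand-in, not any real library's API; replace
# its body with a real client call to run a meaningful comparison.

def call_llm(prompt: str) -> str:
    # Stubbed so the sketch runs end to end.
    return f"[model reply to: {prompt!r}]"

task = "summarize the following paragraph in one sentence: ..."

variants = {
    "terse": task.capitalize(),   # bare imperative
    "polite": f"Please {task}",   # same imperative, prefixed politely
}

for style, prompt in variants.items():
    print(f"{style:>6}: {call_llm(prompt)}")
```

The point of the harness is only that phrasing is an input variable like any other; whether the polite variant actually scores better is an empirical question about the model you happen to be using.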
Incidentally, I almost crafted an example of whispering all the slurs and angry words you can think of in the general direction of your phone's autocomplete as an illustration of why LLMs don't deserve empathy, but ended up dropping it because even if nobody is around to hear it, it still feels unhealthy to put yourself in that frame of mind, much less make a habit of it.
I generally agree re:chatGPT in that it doesn’t have moral standing on its own, but still… it does speak. Being mean to a fork is a lot different from being mean to a chatbot, IMHO. The list of things that speak just went from 1 to 2 (humans and LLMs), so it’s natural to expect some new considerations. Specifically, the risk here is that you are what you do.
Perhaps a good metaphor would be cyberbullying. Obviously there’s still a human on the other side of that, but I do recall a real “just log off, it’s not a real problem, kids these days are so silly” sentiment pre, say, 2015.
no wonder it sounds so lame, it was "brainstormed" (=RLHFed) by a committee of redditors
this is like the /r/vexillology of slurs
People were also starting to equate LLMs to MS Office's Clippy. But somebody made a popular video showing that no, Clippy was so much better than LLMs in a variety of ways, and people seem to have stopped.
https://www.youtube.com/watch?v=2_Dtmpe9qaQ
https://trends.google.com/trends/explore?date=today%203-m&ge...
Maybe that will change.
>The word clanker has been previously used in science fiction literature, first appearing in a 1958 article by William Tenn in which he uses it to describe robots from science fiction films like Metropolis.[2]
He actually taught science fiction and had lots of interesting stories of the classic era of scifi, like BEMs - a bug-eyed-monster, arms wrapped around a woman in a "brass brassiere".
hmmm.. which now I realize explains "the flat eyed monster"...
https://www.baen.com/Chapters/9781476780986/9781476780986___...
> What little remains sparking away in the corners of the internet after today will thrash endlessly, confidently claiming “There is no evidence of a global cessation of AI on December 25th, 2025, it’s a work of fiction/satire about the dangers of AI!”;
It's basically written in the bible that we should make machines in the likeness of our own minds, it's just written between the lines!
Seems logical to me
Searching for this sentence verbatim would find it for you
“During the Vietnam War, which lasted longer than any war we've ever been in -- and which we lost -- every respectable artist in this country was against the war. It was like a laser beam. We were all aimed in the same direction. The power of this weapon turns out to be that of a custard pie dropped from a stepladder six feet high. (laughs)”
-Kurt Vonnegut (https://www.alternet.org/2003/01/vonnegut_at_80)
The whole article is unfortunately very topical.
I mean, from an incentive and capability matrix, it seems probable if not inevitable.
.. but perhaps can we access deep wisdom by paying attention the recurring themes of myths?
.. and perhaps does "The Matrix" access any of these themes?
(yes and yes!)
consider how many in our current administration are entirely completely ill-equipped for their positions. many of them almost certainly rely on llms for even basic shit.
considering how many of these people try to make up for their … inexperience by asking a chatbot to make even basic decisions, poisoning the well would almost certainly cause very real very serious national or even international consequences.
i mean if we had people who were actually equipped for their jobs, it could be hilarious to do. they wouldn’t be nearly as likely to fall for entirely wrong absurd answers. but in our current reality it could actually lead to a nightmare.
i mean that genuinely. many many many people in this current government would -in actuality- fall for the wildest simplest dumbest information poisoning and that terrifies me.
“yes, glue on your pizza will stop the cheese from sliding off” only with actual real consequences.
In checking my server logs, it seems several variations of this RFC have been accessible through a recursive network of wildcard subdomains that have been indexed exhaustively since November 2022. Sorry about that!
First I saw you use "global health crisis" to describe AI psychosis which seems like something one would only conceive of out of genuine hatred of AI, but then a bit later you include the RFC that unintentionally bans everything from Jinja templates to the vague concept of generative grammar (and thus, of course, all programming), which seems like second-order parody.
Am I overthinking it?
I’m mildly positive on AI but fully believe that AI psychosis is a thing based on having 1 friend and 1 cousin who have gone completely insane with LLMs, to the point where 1 of them refuses to converse with anyone including in person. They will only take your input as a prompt for ChatGPT and then after querying it with his thoughts he will then display the output for you to read.
Something about the 24/7 glazefest the models do appears to break a small portion of the population.
P.S. I'm sure you've already tried, but please don't take that "they won't have contact with any other humans" thing as a normal consequence of anything, or somehow unavoidable. That's an extremely dangerous situation. Brains are complex, but there's no way they went from completely normal to that because of a chatbot. Presumably they stopped taking their meds?
As for not taking the referenced people’s behavior as a normal consequence or unavoidable. I do not think it’s normal at all, hence referencing it as psychosis.
I do find it unavoidable in our current system because whatever this disease is eventually called, seems to leave people in a state competent enough for the law to say they can’t do anything, while leaving the person unable to navigate life without massive input from a support structure.
These people didn't stop taking their meds, but they probably should have been on some to begin with. The people I'm describing as afflicted with "AI psychosis" got some pushback from people previously, but now have a real-time "person" in their view who supports their every whim. They keep falling back on LLMs as proof that they are right, and will accept no counterexamples because the LLMs are infallible in their opinion, largely because the LLMs always agree with them.
I don't think so. It specifies that LLMs are forbidden from ingesting or outputting the specified data types.
Gotta get with the metamodern vibe, man: It's a little bit of both
The blog post seemed so confident it was Christmas :)
('Course it is. Carry on.)
I'd like to talk about the second-order effects of blog coverage like this, but I don't want to lessen the important work. Thanks for the fun read.
89 more comments available on Hacker News