Key Takeaways
But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.
Like horoscopes, only they're not actually that bad: roll a D20, and on a set of numbers known only to the DM (and varying with domain and task length) you get a textbook answer; on the rest you get convincing nonsense.
This nails it. This is the fundamental problem with using AI material. You are outsourcing thinking in a way where the response is likely to look very correct without any actual logic or connection to truth.
If you just say, "here is what the LLM said," and that turns out to be nonsense, you can say something like, "I was just passing along the LLM response, not my own opinion."
But if you take the LLM response and present it as your own, at least there is slightly more ownership over the opinion.
This is kind of splitting hairs but hopefully it makes people actually read the response themselves before posting it.
"People are responsible for the comments that they post no matter how they wrote them. If you use tools (AI or otherwise) to help you make a comment, that responsibility does not go away"
People will still do it, but now they're doing it intentionally in a context where they know it's against the guidelines, which is a whole different situation. Staying up late to argue the point (and thus add noise) is obviously not going to work.
I'd prefer the guideline to allow machine translation, though, even when done with a chatbot. If you are using a chatbot intentionally with the purpose of translating your thoughts, that's a very different comment than spewing out the output from a prompt about the topic. There's some gray area where they fuzz together, but in my experience they're still very different. (Even though the translated ones set off all the alarm bells in terms of style, formatting, and phrasing.)
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
The one exception for me though is when non-native English speakers want to participate in an English language discussion. LLMs produce by far the most natural sounding translations nowadays, but they imbue that "AI style" onto their output. I'm not sure what the solution here is because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.
I don't think it is likely to catch on, though, outside of culturally multilingual environments.
It can if the platform has built-in translation with an appropriate disclosure, for instance on Twitter or Mastodon.
You post in your own language, and the site builds a translation for everyone, but they can also see your original etc.
I think building it as a forum feature rather than a browser feature is maybe worthwhile.
It should be an intentional place you choose, and probably niche, not generic in topic like Reddit.
I'm also open to the thought that it's a terrible idea.
I also suspect that automatically translating a forum would tend to attract a far worse ratio of high-effort to low-effort contributions than simply accepting posts in a specific language. For example, I'd expect programmers who don't speak any English to have on average a far lower skill level than those who know at least basic English.
We heavily use connected translating apps and it feels really great. It would be such a massive PITA to copy every message somewhere outside, translate it, and then bring it back.
Now, discussions usually follow the sun, and when someone who doesn't speak, say, Portuguese wants to join in, they usually just use English (sometimes German or Dutch).
We know it's not perfect but it works. Without the embedded translation? It absolutely wouldn't.
I also made pretty heavy use of a Telegram channel with a similar setup, and it was even better, with transparent auto-translation.
1. An automatic translation feature.
2. Being able to submit an "original language" version of a post in case a translation is bad/unavailable.
The only problem I see with a post containing multiple copies of content in different languages would be malicious usage: Someone out to deliberately sow confusion/outrage or to evade moderation by making the moderator-read version tame and the other version incendiary.
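As a rough sketch of how those two pieces could fit together (the field names and the machine_translate() hook below are made up for illustration, not any real platform's API): keep the original verbatim, let the author attach their own versions in other languages, and fall back to machine translation with a visible disclosure.

```python
from dataclasses import dataclass, field


def machine_translate(text: str, source_lang: str, target_lang: str) -> str:
    """Hypothetical hook for whatever backend the forum plugs in (DeepL, an LLM, ...)."""
    raise NotImplementedError


@dataclass
class Post:
    author: str
    original_lang: str
    original_text: str  # always stored verbatim and viewable on demand
    author_translations: dict[str, str] = field(default_factory=dict)  # author-supplied versions
    machine_cache: dict[str, str] = field(default_factory=dict)

    def view(self, reader_lang: str) -> tuple[str, str]:
        """Return (text, disclosure_label) for a reader's language."""
        if reader_lang == self.original_lang:
            return self.original_text, "original"
        if reader_lang in self.author_translations:
            # An author-supplied version beats the machine one (covers bad/unavailable translations).
            return self.author_translations[reader_lang], "translated by the author"
        if reader_lang not in self.machine_cache:
            self.machine_cache[reader_lang] = machine_translate(
                self.original_text, self.original_lang, reader_lang
            )
        return self.machine_cache[reader_lang], "machine translated"
```

The disclosure label and the link back to the original are the important parts; the rest is incidental.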
When I search for something in my native tongue it is almost always because I want the perspective of people living in my country having experience with X. Now the results are riddled with reddit posts that are from all over the world instead.
I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.
Some AI translation is so good now that I do think it might be a better option. If they try to write in English and mess up, the information is just lost; there's nothing I can do to recover the real meaning.
Just use a spell checker and that's it; you don't need LLMs to translate for you if your goal is to learn the language.
It's common enough that it must be a literal translation difference between German and English.
The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!
(while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)
It occasionally messes up, but not by hallucinating, usually grammar salad because what I put into it was somewhat ambiguous. It’s also terrible with genders in Romance languages, but then that is a nightmare for humans too.
Pat pat, bot.
The objective of that model, however, is quite different to that of an LLM.
The remaining thing to watch out for is that some LLMs do not, by default, translate accurately, due to hallucination and summarization tendencies.
* Check with language-pairs you are familiar with before you commit to using one in situations you are less familiar with.
* Always proofread if you are at all able to!
Ultimately you should be responsible for your own posts.
The big difference? I could easily prompt the LLM with "I'd like to translate the following into language X. For context, this is a reply to their email on topic Y, and Z is female."
Doing even a tiny bit of prompting will easily get you better results than Google Translate. Some languages have words with multiple meanings, and the context of the sentence/topic is crucial. So is gender in many languages! You can't provide any hints like that to Google Translate, especially if you are starting with an ungendered language like English.
The only time I'd use Google Translate in 2025 is if I'm using my phone offline, or translating very long text and I'm worried about an LLM hallucinating due to a large context window.
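For what it's worth, wrapping that kind of context-carrying prompt takes only a few lines. Here is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and example context are illustrative, and any chat-style LLM API would work the same way:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment


def translate_with_context(text: str, target_lang: str, context: str) -> str:
    """Translate `text`, letting the model use `context` (topic, genders,
    register) to disambiguate in ways a plain translator can't."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    f"Translate the user's message into {target_lang}. "
                    f"Context: {context} Reply with the translation only."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


# e.g.
# translate_with_context(
#     "Thanks for the update, I'll follow up with her tomorrow.",
#     "Portuguese",
#     "This is a reply to an email on topic Y; 'her' refers to Z, who is female.",
# )
```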
If it was just a translation, then that adds no value.
https://jampauchoa.substack.com/p/writing-with-ai-without-th...
TL;DR: Ask for a line edit, "Line edit this Slack message / HN comment." It goes beyond fixing grammar (because it improves flow) without killing your meaning or adding AI-isms.
However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I have as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.
I've written blog articles using HTML and asked LLMs to change certain HTML structure, and they ALSO tried to change the wording.
If a user doesn't speak a language well, they won't know whether their meanings were altered.
It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.
Luckily I've not found a lot of that here. What I do find has usually been downvoted plenty.
Maybe we could have a new flag option that becomes visible to everyone once a comment gets enough "AI" votes, so you could skip reading it.
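The mechanics of that flag would be tiny; the hard part is the social calibration. A minimal sketch, with the threshold and field names made up:

```python
AI_FLAG_THRESHOLD = 5  # made-up number; would need tuning per community


def ai_badge_visible(comment: dict) -> bool:
    """Show a 'flagged as AI' badge once enough readers have voted it as AI,
    so others can skip it without the comment being removed outright."""
    return comment.get("ai_flag_votes", 0) >= AI_FLAG_THRESHOLD
```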
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experience in the data it ingested. The things that are unique will not surface because they aren't seen enough times.
Your value is not in copy-pasting. It's in your experience.
If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that it uncritically agrees with nonsense from the user, sycophancy-squared is even more objectionable. It's worth taking the effort to avoid falling into this trap.
I find the second paragraph contradictory: either you fear that I would agree with random stuff that the AI writes, or you believe that the sycophant AI is writing what I believe. I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
> I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?
Because the AI will happily argue either side of a debate; in both cases the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-based one will merely be longer.
Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?
Generally, if your point holds up under polishing and Kimi pressure, by all means post it on HN, I'd say. [1]
Other LLMs do tend to be more gentle with you, but if you ask them to be critical or to steelman the opposing view, they definitely can be.
E.g., ask an LLM to read the view of the person you're replying to, and ask it to steelman their arguments. Now consider whether your point is still defensible, or what kinds of sources or data you'd need to bolster it.
Because I'm interested in hearing your voice, your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.
"I asked an $LLM and it said" is very different than "in my opinion".
Your opinion may be supported by any sources you want as long as it's a genuine opinion (yours), presumably something you can defend.
It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.
(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)
I think this is different than on HN or other message boards; it's not really used by people to hedge here. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.
Every time this happens to me at work one of two things happens:
1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're often embarrassing themselves because I can see how haphazard their LLM use was.
2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.
I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.
If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.
Well now you're putting words in my mouth.
If you make it against the rules to cite AI in your replies then you end up with people masking their AI usage, and you'll never again be able to encourage them to do the legwork themselves.
The point of asking on a public forum is to get socially relatable human answers.
Only that I'm not the one who posted the original question. I DID google (well, DDG) it, and the results led me to someone asking the same question as me, but it only had that one useless reply.
Most often I see these answers under posts like "what's the longest river on earth?" or "is Bogota the capital of Venezuela?"
Like. Seriously. It often takes MORE time to post this sort of lazy question than to actually look it up. Literally paste their question into $search_engine and get 10 of the same answers on the first page.
Actually, sometimes telling a person like this "just Google it" is beneficial in two ways: it helps the poster develop/train their own search skills, and it may gently nudge someone else into trying that approach first, too, while at the same time slowing the rise of extremely low-effort/quality posts.
But sure, sometimes you get the other kind. Very rarely.
But now people are vomiting ChatGPT responses instead of linking to ChatGPT.
When someone says: "Source?", is that kinda the same thing?
Like, I'm just going to google the thing the person is asking for, same as they can.
Should asking for sources be banned too?
Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.
But: Just because it's easy doesn't mean you're allowed to be lazy. You need to check all the sources, not just the ones that happen to agree with your view. Sometimes the ones that disagree are more interesting! And at least you can have a bit of drama yelling at your screen at how dumb they obviously are. Formulating why they are dumb, now there's the challenge - and the intellectual honesty.
But yeah, using LLMs to be more intellectually rigorous: Totally a thing.
IMO, HN commenters used to at least police themselves more and provide sources in their comments when making claims. It was what used to separate HN and Reddit for me when it came to response quality.
But yes it is rude to just respond "source?" unless they are making some wild batshit claims.
This is neither the mechanism nor the goal of human communication, not even on the internet.
The argument is that the information it generated is just noise, and not valuable to the conversation thread at all.
I think it's a very valid question to ask the AI "which coding language is most suitable for you to use and why" or other similar questions.
You could reply with "Hey you could ask [particular LLM] because it had some good points when I asked it" but I don't care to see LLM output regurgitated on HN ever.
The top story on here for 2 days has been “Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now”
I could have typed that into an LLM myself too
https://news.ycombinator.com/item?id=46204895
when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone and it's not easy to paste a PDF into an LLM.
When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.
If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.
I don't think this is a good example personally.
The story was being upvoted and on the front page, but with no substantive comments, clearly because nobody understood what the significance of the paper was supposed to be.
I mean, HN comments are wrong all the time too. But if an LLM summary can at least start the conversation, I'm not really worried if its summary isn't 100% faithful.
But I'm not usually reading the comments to learn, it's just entertainment (=distraction). And similar to images or videos, I find human-created content more entertaining.
One thing to make such posts more palatable could be if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their understanding.
If I'm looking for entertainment, HN is not exactly my first stop... :P
Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!
Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice whether there's a disclaimer or not. These do not get caught quickly, and someone clicking on the link will likely generate ad revenue that incentivizes people to continue doing it.
LLM comments without a disclaimer should be avoided, and submitted articles written by an LLM should be flagged ASAP to avoid abuse, since by the time someone clicks the link it's too late.
People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious, not far behind is the endless "this sounds like AI" style of cynical jeering. People need to display how world-weary and jaded they are, expressing their malcontent with the rise of AI.
And yes, I used an em dash above. I've always been a heavy user of the punctuation (being a scattered-brain with lots of parenthetical asides and little ability to self-edit) but suddenly now it makes my comments bot-like and AI-suspect.
I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much less capable of sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into thinking they've got a unique gift, when really they are no better than rolling dice.
Tbh the comments in the topic shouldn't be completely banned. As someone else said, they have a place for example when comparing LLM output or various prompts giving different hallucinations.
But most of them are just reputation chasing by posting a summary of something that is usually below the level of HN discussion.
When "sounds AI generated" is in the eye of the beholder, this is an utterly worthless differentiation. I mean, it's actually a rather ironic comment given that I just pointed out that people are hilariously bad at determining if something is AI generated, and at this point people making such declarations are usually announcing their own ignorance, or alternately they're pathetically trying to prejudice other readers.
People now simply declare opinions they disagree with as "AI", in the same way that people think people with contrary positions can't possibly be real and must be bots, NPCs, shills, and so on. It's all incredibly boring.
Just like those StackOverflow answers - before "AI" - that came in 30 seconds on any question and just regurgitated in a "helpful" sounding way whatever tutorial the poster could find first that looked even remotely related to the question.
"Content" where the target is to trick someone into an upvote instead of actually caring about the discussion.
My objection to AI comments is not that they are AI per se, but they are noise. If people are sneaky enough that they start making valuable AI comments, well that is great.
I do wish people wouldn’t do it when it doesn’t add to the conversation but I would advocate for collective embarrassment over a ham-fisted regex.
In a discussion of RISC-V and whether it can beat ARM, someone just posting "ChatGPT says X" adds absolutely nothing to the discussion but noise.
"I googled this" is only helpful when the statistic or fact they looked up was correct and well-sourced. When it's a reddit comment, you derail into a new argument about strength of sources.
The LLM skips a step, and gets you right to the "unusable source" argument.
Still, I will fight for the point that someone actually doing the legwork, even via a search engine and a reasonable evaluation of a few sources, is often quite a valuable contribution. Sometimes even if it is done to discredit someone else.
Obligatory xkcd https://xkcd.com/810/