It's Insulting to Read AI-Generated Blog Posts
Posted 2 months ago · Active 2 months ago
blog.pabloecortez.com · Tech · Story · High profile
Tone: heated, mixed · Debate: 80/100
Key topics
AI-Generated Content
Writing
Blogging
The author argues that reading AI-generated blog posts is insulting, sparking a heated debate on the value and ethics of using AI in writing.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 58m after posting · Peak period: 150 comments in 0-6h · Avg per period: 26.7
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01Story posted
Oct 27, 2025 at 11:27 AM EDT
2 months ago
Step 01 - 02First comment
Oct 27, 2025 at 12:25 PM EDT
58m after posting
Step 02 - 03Peak activity
150 comments in 0-6h
Hottest window of the conversation
Step 03 - 04Latest activity
Oct 31, 2025 at 3:11 PM EDT
2 months ago
Step 04
ID: 45722069 · Type: story · Last synced: 11/27/2025, 3:36:13 PM
Now you could argue, "but you don't know it was AI; it could just be really mediocre writing." It could indeed, but I hit the back button there as well, so it's a wash either way.
I'd sooner have a ship painting from the little shop in the village with the little old fella who paints them in the shop than a perfect robotic simulacrum of a Rembrandt.
Intention matters. Sometimes it matters less, but I think it still matters.
Writing is communication; it's one of the things we as humans do that makes us unique. Why would I want to reduce that to a machine generating it, or to read it once it has?
The Matrix was and is fantastic on many levels.
I've been learning piano too, and I find more joy in performing a piece poorly than in listening to it played competently. My brother asked me why I play if I'm just playing music that's already been performed (a leading question, he's not ignorant). I asked him why he plays hockey if you can watch pros play it far better. It's the journey, not the destination.
I've been (re-)re-re-watching Star Trek TNG and Data touches on this issue numerous times, one of which is specifically about performing violin (but also reciting Shakespeare). And the message is what you're sharing: to recite a piece with perfect technical execution results in an imperfect performance. It's the _human_ aspects that lend a piece the deep emotion that other humans connect with, often without being able to concretely describe why. Let us feel your emotions through your work. Everything written on the page is just the medium for those emotions. Without emotion, your perfectly recited piece is a delivered blank message.
https://www.poetryfoundation.org/poems/43745/andrea-del-sart...
At this point, I don't know there's much more to be said on the topic. Lines of contention are drawn, and all that's left is to see what people decide to do.
I think the rates of ADHD are going to go through the roof soon, and I'm not sure if there is anything that can be done about it.
As a diagnosed medical condition, I don't know. As for people having seemingly shorter and shorter attention spans, we are seeing that already; TikTok and YT Shorts and the like don't help. We've weaponised inattention.
It is physiological.
I don't think any evidence exists that you can cause anyone to become neurodivergent, except by traumatic brain injury.
TikTok does not "make" people ADHD. They might struggle to let themselves be bored and may be addicted to quick fixes of dopamine, but that is not what ADHD is. ADHD is not an addiction to dopamine hits. ADHD is not an inability to be bored.
TikTok, for example, will not give you the kinds of tics and lack of proprioception that are common in neurodivergent people. Being addicted to TikTok will never give you that absurd experience where your brain "hitches" while doing a task and you rapidly oscillate between progressing towards one task vs another. Being habituated to check your phone at every down moment does not cause you to be unable to ignore sensory input because the actual sensory-processing machinery in your brain is not functioning normally. Getting addicted to TikTok does not give you a child's handwriting despite decades of practice. If you do not already have significant stimming and jitter symptoms, TikTok will not make you develop them.
You cannot learn to be ADHD.
Specifically: is there any difference between people who have always read a lot, as I do, and people who don't?
My observation (anecdata) is that the people I know who read heavily are much better at spotting AI slop, and much more against it, than people who don't read at all.
Even when I've played with the current latest LLMs and asked them questions, I simply don't like the way they answer; it feels off somehow.
I quite like using LLMs to learn new things. But I agree: I can't stand reading blog posts written by LLMs. Perhaps it is about expectations. From a blog post I am expecting to gain a view into an individual's thinking; from an AI, I am looking into an abyss of whirring matrix-shaped gears.
There's nothing wrong with the abyss of matrices, but if I'm at a party and start talking with someone, and get the whirring sound of gears instead of the expected human banter, I'm a little disturbed. And it feels the same for blog content: these are personal communications; machines have their place and their use, but if I get a machine when I'm expecting something personal, it counters expectations.
AI is good at local coherence, but loses the plot over longer thoughts (paragraphs, pages). I don't think I could identify AI sentences but I'm totally confident I could identify an AI book.
This includes both opening a long text with a way of thinking that isn't reflected several paragraphs later, and maintaining a repetitive "beat" in the rhythm of the writing that is fine locally but becomes obnoxious over longer stretches. Maybe that's just regression to the mean of "voice"?
[1] https://en.wikipedia.org/wiki/M._Night_Shaym-Aliens!
Also, reminds me of this cartoon from March 2023. [0]
[0] https://marketoonist.com/2023/03/ai-written-ai-read.html
Because I've never seen anyone actually use a summarizing AI willingly. And especially not for blogs and other discretionary activities.
That's like getting the remote from the hit blockbuster "Click" starring Adam Sandler (2006) and then using it to skip sex. Just doesn't make any sense.
Pre-AI, scientists would publish papers and then journalists would write summaries, which were usually misleading and often wrong.
An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist might well do a better job.
Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll ask the AI to understand it and to make the necessary changes. Now, let's repeat this process over and over. I wonder what would be the state of such code over time. We are clearly walking this path.
Programming languages were originally invented for humans to write and read. Computers don't need them. They are fine with machine code. If we eliminate humans from the coding process, the code could become something that is not targeted for humans. And machines will be fine with that too.
But you are saying that is wrong, you should judge the messenger, not the message.
Anyone can access ChatGPT, why do we need an intermediary?
Someone a while back shared, here on HN, almost an entire blog generated by (barely touched up) AI text. It even had Claude-isms like "excellent question!", em-dashes, the works. Why would anyone want to read that?
Or do you remember when Facebook groups or image communities were flooded with funny/meme AI-generated images, "The Godfather, only with Star Wars", etc? Thank you, but I can generate those zero-effort memes myself, I also have access to GenAI.
We truly don't need intermediaries.
> Everything else is just recycled slop.
No, not everything is slop. AI-slop is slop. The term was coined for a reason.
Everyone can ask the AI directly, unlike accessing journals. Journals are intermediaries because you don't have direct access to the source (or cannot conduct the experiment yourself).
Everyone has access to AI at the slop "let's generate blog posts and articles" level we're discussing here.
A better analogy than teachers is this: I ask a teacher a random question, and then I repeat the answer to you with almost no changes, in the same voice as the teacher (and you also have access to the same teacher). Why? What value do I add? You can ask the teacher directly. And doubly so because what I'm asking is not some flash of insight; it's random crap instead.
I agree with you that AI slop blog posts are a bad thing, but there are about zero people who use LLMs to spit out blog posts who will change their minds after reading your arguments. You're not speaking their language; they don't care about anything you do. They are selfish. The point is themselves, not the reader.
> Everyone wants to help each other.
No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.
True!
But when I encounter a web site/article/video that has obviously been touched by genAI, I add that source to a blacklist and will never see anything from it again. If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.
Please do tell more. Do you make it like a rule in your adblocker or something else?
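Not the original commenter, but one common way to implement such a blacklist is a uBlock Origin static filter that blocks an entire site; a minimal sketch, with a hypothetical domain:

    ||ai-slop-blog.example^$document

With that line added under "My filters", uBlock refuses to load any page from that domain, which matches the "never see anything from it again" behaviour described above.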
> If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.
I’m not convinced. The effort on their part is so low that even the lost audience (which will be far from everyone) is still probably worth it.
Otherwise, I just remember that particular source as being untrustworthy.
I think that's the best use case, and it's not really AI-specific: spell-checkers and translation integrations have existed forever; now they are just better.
Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?
Maybe someone will build an AI model that's succinct and to the point someday. Then I might appreciate the use a little more.
The only thing that changed in all of my experimentation with various saved instructions was that sometimes it prepended its bloated examples with "here's a short, concise example:".
I will also take a janky script for a game hand-translated by an ESL indie dev over the ChatGPT House Style 99 times out of 100 if the result is even mostly comprehensible.
This will invalidate even ispell in vim. The entire point of proofreading is to catch things you didn’t notice. Nobody would say “you don’t need the red squiggles underlining strenght because you already know it is spelled strength.”
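For reference, the vim equivalent most people reach for these days is the built-in spell checker rather than external ispell; a minimal sketch of the config:

    set spell spelllang=en_us   " underline misspellings like 'strenght'
    " ]s jumps to the next misspelling, z= lists suggested corrections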
Grammatical deviations constitute a large part of an author's voice. Removing those deviations is altering that voice.
This ship sailed a long time ago. We have been exposed to AI-generated text content for a very long time without even realizing it. If you read a little more specialized web news, assume that at least 60% of the content is AI-translated from the original language. Not to mention, it could have been AI-generated in the source language as well. If you read the web in several languages, this becomes shockingly obvious.
My wife is ESL. She's asked me to review documents such as her resume, emails, etc. It's immediately obvious to me when something has been run through ChatGPT, and I'm sure it's immediately obvious to whomever she's sending the email to. While it's a great tool to suggest alternatives and fix grammar mistakes that Word etc. don't catch, using it wholesale to generate text is so obvious that you may as well write "yo unc gimme a job rn fr no cap" and your odds of impressing a recruiter would be about the same. (The latter might actually be better, since it helps you stand out.)
Humans are really good at pattern matching, even unconsciously. When ChatGPT first came out people here were freaking out about how human it sounded. Yet by now most people have a strong intuition for what sounds ChatGPT-generated, and if you paste a GPT-generated comment here you'll (rightfully) get downvoted and flagged to oblivion.
So why wouldn't you use it? Because it masks the authenticity in your writing, at a time when authenticity is at a premium.
These types of complaints about LLMs feel like the same ones people probably made about using a typewriter for a letter vs. a handwritten one: that it loses intimacy and personality.
You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?
Waiting for the rest of the comment to load in order to figure out if it's sincere or parody.
This is like reviewing your own PRs, it completely defeats the purpose.
And no, using different models doesn’t fix the issue. That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.
That is literally how civilization works.
As an example, knowing that a service is offered by a registered company with a presence in my area gives me the knowledge "that they know that I know" that if something goes wrong, I can sue them for negligence, possibly up to piercing the corporate veil and having the directors serve prison time. From that I can somewhat rationally infer that if the company has been in business offering similar services for years, it likely has processes in place to maintain a level of professionalism that lowers the risk of such lawsuits. And on an organisational level, even if I still have good reason to think that most of the employees are incompetent, the fact that the company makes it work gives me significantly higher confidence in the "result" than I would have in any individual "stupid" component.
And for a closer-to-home example, the internet is well known to be a highly reliable system built from unreliable components.
It's a joke.
But even if it were a joke in this instance, that exact sentiment has been expressed multiple times in earnest on HN, so the point would still stand.
As insulting as it is to submit an AI-generated PR without any effort at review while expecting a human to look it over, it is nearly as insulting not to at least open the view the reviewer will have and take a look. I do this all the time and very often discover little things that I didn't see while tunneled into the code itself.
In the sense that you double-check your work, sure. But you wouldn't be commenting and asking for changes, you wouldn't be using the reviewing feature of GitHub or whatever code forge you use; you'd simply make the fixes and push again without any review/discussion necessary. That's what I mean.
> open the view the reviewer will have and take a look. I do this all the time
So do I, we’re in perfect agreement there.
It is, but for all the reasons AI is supposed to fix. If I look at code I wrote myself, I might come to a different conclusion about how things should be done, because humans are fallible and often have different things on their mind. If it's in any way worth using, an AI should produce one single correct answer each time, rendering self-PR review useless.
Yes. You just have to be in a different mindset. I look for cases that I haven't handled (and corner cases in general). I can try to summarize what the code does and see if it actually meets the goal, if there's any downsides. If the solution in the end turns out too complicated to describe, it may be time to step back and think again. If the code can run in many different configurations (or platforms), review time is when I start to see if I accidentally break anything.
> This is like reviewing your own PRs, it completely defeats the purpose.
I've been the first reviewer for all PRs I've raised, before notifying any other reviewers, for so many years that I couldn't even tell you when I started doing it. Going through the change set in the GitHub/GitLab/Bitbucket interface, for me, seems to activate a different part of my brain than the one I was using when locked in vim. I'm quick to spot typos, bugs, flawed assumptions, edge cases, missing tests, to add comments to pre-empt questions ... you name it. The "reading code" and "writing code" parts of my brain often feel disconnected!
Obviously I don't approve my own PRs. But I always, always review them. Hell, I've also long recommended the practice to those around me too for the same reasons.
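For anyone who wants that same reviewer's-eye pass from the terminal, GitHub's gh CLI can present a pull request roughly as a reviewer will see it; a minimal sketch, with a hypothetical PR number:

    gh pr diff 1234        # the change set as reviewers will read it
    gh pr view 1234 --web  # open the PR in the browser review UI

Either way, the habit is the same: read your own diff in the reviewer's medium before asking anyone else to.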
You don’t, we’re on the same page. This is just a case of using different meanings of “review”. I expanded on another sibling comment:
https://news.ycombinator.com/item?id=45723593
> Obviously I don't approve my own PRs.
Exactly. That’s the type of review I meant.
So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?
Coding agents are basically interns. They make stupid mistakes, but even if they're doing things 95% correctly, then they're still adding a ton of value to the dev process.
Human reviewers can use AI tools to quickly sniff out common mistakes and recommend corrections. This is fine. Good even.
You are transparently engaging in bad faith by purposefully strawmanning the argument. No one is arguing for "far better programmer than any human that has ever lived". That is an exaggeration used to force the other person to reframe their argument within its already obvious context and make it look like they are admitting they were wrong. It's a dirty tactic, and against the HN guidelines (for good reason).
> Coding agents are basically interns.
No, they are not. Interns have the capacity to learn and grow and not make the same mistakes over and over.
> but even if they're doing things 95% correctly
They’re not. 95% is a gross exaggeration.
> This makes no sense, and it’s absurd anyone thinks it does. If the AI PR were any good, it wouldn’t need review. And if it does need review, why would the AI be trustworthy if it did a poor job the first time?
This is an entirely unfair expectation. Even the best human SWEs create PRs with significant issues; it's absurd for the parent to say that if a PR is "any good, it wouldn't need review". It's just an unreasonable bar, and I think that @latexr was entirely justified in pushing back against that expectation.
As for the "95% correctly", this appears to be a strawman argument on your end, as they said "even if ...", rather than claiming that this is the situation at the moment. But having said that, I would actually like to ask both of you - what does it even mean for a PR to be 95% correct - does it mean that that 95% of the LoC are bug-free, or do you have something else in mind?
The point of most jobs is not to get anything productive done. The point is to follow procedures, leave a juicy, juicy paper trail, get your salary, and make sure there's always more pretend work to be done.
That's certainly not my experience. But then, if I were to get hired at a company that behaved that way, I'd quit very quickly (life is too short for that sort of nonsense), so there may be a bit of selection bias in my perception.
Ultimately I'm happy to fight fire with fire. There was a time I used to debate homophobes on social media; I ended up writing a very comprehensive list of rebuttals so I could just copy and paste in response to their cookie-cutter gotchas.
This reminds me of an awesome bit by Žižek where he describes an ultra-modern approach to dating. She brings the vibrator, he brings the synthetic sleeve, and after all the buzzing begins and the simulacra are getting on well, the humans sigh in relief. Now that this is out of the way they can just have a tea and a chat.
It's clearly ridiculous, yet at the point where papers or PRs are written by robots, reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask: what is it all for, and should we maybe just skip some of these steps, or revisit some assumptions about what we're trying to accomplish?
I've been thinking this for a while, despairing, and amazed that not everyone is worried/surprised about this like me.
Who are we building all this stuff for, exactly?
Some technophiles are arguing this will free us to... do what exactly? Art, work, leisure, sex, analysis, argument, etc will be done for us. So we can do what exactly? Go extinct?
"With AI I can finally write the book I always wanted, but lacked the time and talent to write!". Ok, and who will read it? Everybody will be busy AI-writing other books in their favorite fantasy world, tailored specifically to them, and it's not like a human wrote it anyway so nobody's feelings should be hurt if nobody reads your stuff.
In the dating scenario, what's really absurd and disgusting isn't actually the artificiality of the toys... it's the ritualistic aspect of the unnecessary preamble, because you could skip straight to tea and talk if that is the point. We write messages from bullet points, ask AI to pad them out uselessly with "professional"-sounding fluff, and then on the other side someone is summarizing them back to bullet points? That's insane even if it were lossless; just normalize and promote simple communication. Similarly, if an AI review were any value-add for AI PRs, it could be bolted on to the code-gen phase. If editors/reviewers have value in book publishing, they should read the books and opine and do the gate-keeping we supposedly need them for, instead of telling authors to bring their own audience, etc. I think maybe the focus on rituals, optics, and posturing is a big part of what really makes individual people or whole professions obsolete.
I first read that as "coworkers (who are) fully AI generated" and I didn't bat an eye.
All the AI hype has made me immune to AI related surprises. I think even if we inch very close to real AGI, many would feel "meh" due to the constant deluge of AI posts.
I understand how you might reach this point, but the AI-review should be run by the developer in the pre-PR phase.
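A minimal sketch of what that pre-PR phase could look like mechanically: a git pre-push hook that runs a local review pass before the branch ever leaves the developer's machine (the ai-review command is a hypothetical stand-in for whatever checker you actually use, and origin/main is assumed to be the default branch):

    #!/bin/sh
    # .git/hooks/pre-push: runs before anything is pushed
    range="$(git merge-base origin/main HEAD)..HEAD"
    git diff "$range" | ai-review --stdin || {
      echo "pre-push review flagged issues; aborting push" >&2
      exit 1
    }

This keeps the AI feedback loop on the author's side, so reviewers only ever see changes the author has already vetted.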
Do you review your comments too with AI?
That's why it isn't necessary to add the "to be fair" comment I see crop up every time someone complains about the low quality of AI.
Dealing with low effort people is bad enough without encouraging more people to be the same. We don't need tools to make life worse.
It's as if someone created a device that made cancer airborne and contagious and you came in to say "to be fair, cancer existed before this device; the device just made it way worse". Yes? And? Do you have a solution to the cancer? No? Then pointing it out really isn't doing anything. Focus on getting people to stop using the contagious aerosol first.
If a company builds an industrial poop delivery system that lets anyone with dog poop deliver it directly into my yard with the push of a button, I have a much different and much bigger problem.
Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I duno. I just had [AI] do it."
If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.
Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.
I don’t think they are (telling you that). The person who sends you an AI slop PR would be just as happy (probably even happier) if you turned off your brain and just merged it without any critical thinking.
I would have written "lexical fruit machine", for its left to right sequential ejaculation of tokens, and its amusingly antiquated homophobic criminological implication.
https://en.wiktionary.org/wiki/fruit_machine