Are AI Generated Stuffs Intentionally Bad?
Mood: skeptical
Sentiment: negative
Category: other

Key topics
The submitter asks whether AI-generated content is intentionally bad, citing frequent poor-quality outputs, and offers a hypothesis: "it's designed that way to generate more profits."
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 32m after posting
Peak period: 2 comments in Hour 2
Avg / period: 1.5
Key moments
- Story posted: Aug 22, 2025 at 6:19 AM EDT (3 months ago)
- First comment: Aug 22, 2025 at 6:51 AM EDT (32m after posting)
- Peak activity: 2 comments in Hour 2, the hottest window of the conversation
- Latest activity: Aug 22, 2025 at 8:10 AM EDT (3 months ago)
This assumes that the training code available is any good. Consider, for example, how many GitHub repos are just students trying to learn to code.
First, Hanlon's Razor: Never attribute to malice that which is adequately explained by stupidity.
Second, imagine how you train your model. You want a huge selection of quality writing. If you were to feed Twitter posts into your model, it would be dumb as rocks. So you put in literature. Science articles. Technical documentation. If you're Anthropic, you pirate all those books.
The model you get out is then very high quality. But when you ask it to output text, it produces an averaged tone, style, and quality drawn from centuries of writing. The text you're getting is 2% Shakespearean, 4% Old English, 5% Japanese-to-English translated manga.
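As a rough illustration of that point (not how any real model is trained), here is a minimal sketch with made-up corpus weights: if training examples are drawn from a weighted mix of sources, then un-prompted output tends toward the blend of those sources rather than any single style.

```python
import random

# Made-up corpus weights, echoing the comment's "2% Shakespearean, 4% Old
# English, 5% translated manga" framing; the rest is split across the
# higher-quality sources the comment mentions. Purely illustrative.
corpus_weights = {
    "shakespeare": 0.02,
    "old_english": 0.04,
    "translated_manga": 0.05,
    "literature": 0.30,
    "science_articles": 0.30,
    "technical_docs": 0.29,
}

def sample_sources(n: int) -> list[str]:
    """Draw n training-example source labels according to the mix."""
    sources = list(corpus_weights)
    weights = list(corpus_weights.values())
    return random.choices(sources, weights=weights, k=n)

if __name__ == "__main__":
    draws = sample_sources(10_000)
    # The empirical proportions approximate the mix, which is the comment's
    # point: un-prompted output drifts toward the average of its sources.
    for src in corpus_weights:
        print(f"{src}: {draws.count(src) / len(draws):.3f}")
```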
You need to ask it for exactly what you want. If you don't, the result will be a confusing blend.
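To make the "ask for exactly what you want" advice concrete, here is a hypothetical contrast between a vague prompt and a specific one; the function name and requirements are invented for illustration.

```python
# Hypothetical prompts, invented for illustration only.
vague_prompt = "Write some code to process the log file."

specific_prompt = (
    "Write a Python 3 function parse_log(path: str) -> list[dict] that reads "
    "a newline-delimited JSON log file, skips malformed lines, and returns "
    "the parsed records sorted by their 'timestamp' field. Include type "
    "hints and a short docstring."
)
```

The first invites the averaged, generic answer described above; the second pins down language, signature, error handling, and output shape, leaving far less for the model to fill in from its "average".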
Because if you had a machine that could automatically generate excellent code, would you sell access to it, or would you use it to put every other software company out of business?