The Great Sameness: a Comic on How AI Makes Us More Alike
Posted 3 months ago · Active 3 months ago
itsnicethat.com · Tech story
Sentiment: calm, mixed · Debate: 70/100
Key topics: Artificial Intelligence, Creativity, Homogenization
The article discusses how AI-generated content may lead to a homogenization of ideas and creativity, sparking a debate among commenters on the impact of AI on originality and diversity.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 54m · Peak period: 39 comments in 3-6h · Avg per period: 9.9
[Comment distribution chart: 79 data points, based on 79 loaded comments]
Key moments
01. Story posted: Sep 26, 2025 at 3:59 AM EDT (3 months ago)
02. First comment: Sep 26, 2025 at 4:53 AM EDT (54m after posting)
03. Peak activity: 39 comments in 3-6h, the hottest window of the conversation
04. Latest activity: Sep 27, 2025 at 6:13 PM EDT (3 months ago)
ID: 45383960 · Type: story · Last synced: 11/20/2025, 6:24:41 PM
The Double-Edged Sword: How Technology Both Enhances and Erodes Human Connection
The Illusion of Control: How Technology Shapes Our Perception of Autonomy
From Cyberspace to Real Space: The Impact of Virtual Reality on Identity and Human Experience
Digital Detox: The Human Need for Technology-Free Spaces in an Always-Connected World
Surveillance Society: How Technology Shapes Our Notions of Privacy and Freedom
Technology and the Future of Work: Human Adaptation in the Age of Automation
The Techno-Optimism Fallacy: Is Technology Really the Solution to Our Problems?
The Digital Divide: How Access to Technology Shapes Social Inequality
Humanizing Machines: Can Artificial Intelligence Ever Understand the Complexity of Human Emotion?
The Ethics of Technological Advancements: Who Decides What Is 'Ethically Acceptable'?
They're still pretty samey and sloppy, and the pattern of Punchy Title: Explanatory Caption is evident, so there's clearly some truth to it. But I wonder if he hasn't enhanced his results a little bit.
Possibly I can outsource the work to HN comments :)
https://news.ycombinator.com/item?id=9224
¹ Which people really should read in full and consider all the context. https://news.ycombinator.com/item?id=27068148
Whether we have the discipline to limit our use of the tool to its strengths... well, I doubt it. Just look at how social media turned out.
(Idle thought: I wonder if a model fine-tuned on one specific author would give more "original" titles).
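If anyone wants to try that idle thought, here is a minimal sketch of fine-tuning a small causal LM on one author's titles. Everything in it is a placeholder assumption (titles.txt, gpt2, the hyperparameters), not something from the article or thread:

```python
# Hypothetical sketch: fine-tune a small model on one author's titles.
# Assumes titles.txt contains that author's titles, one per line.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # any small causal LM is enough for a quick experiment
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "titles.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="author-titles",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Whether the resulting titles would read as more "original" or just as a pastiche of that one author is exactly the open question.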
Now tell me, which one of us is redundant?
Do NOT write a professional cover letter. Crack a joke. Use quirky language. Be overly familiar. A dash of TMI. Do NOT think about what you are going to say, just write a bunch of crazy-pants. Once your intro is too long, cut the fat. Now add professional stuff. You are not writing a cover letter, you are writing a caricature of a cover letter.
You just made the recruiter/HR/person doing interviews smile***. They remember your cover letter. In fact they repeat your objectively-unprofessional-yet-insightful joke to somebody else. You get the call. You are hired.
This will turn off some employers. You didn't want to work for them anyway.
* admittedly I have not sought work via resume in more than 15 years. ymmv
** Once a friend found a cover letter I had written in somebody's corp blog titled "Either the best or worst cover letter of all time" (or words to that effect). In it I had claimed that I could get the first 80% of their work done on schedule, but that the second 80% and third 80% would require unknown additional time. (note: I did not get the call)
*** unless they are using AI to read cover letters, but I repeat: you didn't want to work for them anyway.
The subtlety of it, and the "obvious" limitations of it, are something we either know because we grew up watching tech over decades, or because we were just naturally cynical and mistrusting and guessed right this time. Hard-earned wisdom or a broken clock being right this time: either way, that's not the default teenager.
But in another paragraph, the article says that the teacher and the students also failed to detect an AI-generated piece.
The ending of the comic is a bit anti-climactic (aside from the fact that one can see it coming), as similarities between creations are not uncommon. Endings, guitar riffs, and styles have been invented twice independently plenty of times. For instance, the mystery genre was apparently created independently by Doyle and Poe (Poe, BTW, in Philosophy of Composition [1], also claims that good authors start from the ending).
Two pieces being similar because they come from the same AI, versus because two authors were inspired and influenced by the same things and didn't know about each other's works: the difference is thin. An extrapolation of this topic is the sci-fi trope (e.g. Beatless [2]) about whether or not the emotions that an android simulates are real. But this is still sci-fi; current AIs are good con artists at best.
[1] https://en.wikipedia.org/wiki/The_Philosophy_of_Composition
[2] https://en.wikipedia.org/wiki/Beatless
For example, 2001 and its star child weirdness, The IPCRESS File, and many others.
Seems more often scripts are written with an ending in mind nowadays, with the weird bandaids ending up in the middle instead.
Maybe a bit OT in an article that's trying to be about AI but...
I'm sure there are some screenwriters who ignore all that and just start writing, particularly if they're experienced enough to have an intuitive grasp of structure. But if you're a first-time writer and reach the night before a submission deadline and you haven't even finished the first draft, then you've got serious problems. Leaving aside the ending, any script needs multiple revisions with time in between so that you come back to it with clear sight.
AI-generated stories favour stability over change: homogeneity and cultural stereotyping in narratives generated by gpt-4o-mini https://www.arxiv.org/abs/2507.22445
Why a model specifically distilled down for logical reasoning tasks? I would expect larger models to produce a wider variety of outputs.
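One hedged way to make "wider variety" measurable is a distinct-n score: the fraction of unique n-grams across a batch of sampled generations. The samples below are toy placeholders, not outputs from the cited study:

```python
# Sketch of distinct-n, a common (if crude) output-diversity metric.
def distinct_n(texts: list[str], n: int = 2) -> float:
    """Unique n-grams divided by total n-grams over all samples."""
    total, unique = 0, set()
    for text in texts:
        tokens = text.split()
        ngrams = list(zip(*(tokens[i:] for i in range(n))))
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

samples = ["the hero leaves home",
           "the hero returns home changed",
           "a stranger comes to town"]
print(round(distinct_n(samples, 2), 2))  # closer to 1.0 = more varied
```

A homogeneous model repeats the same n-grams across samples and scores low.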
The monomyth is also writing 101 these days, and considered the default structure you can and should use if you have little experience writing stories, so naturally it'll be a high-probability result of an LLM prompted to write a story - especially prompted in a way that implies the user is inexperienced at writing and needs a result suitable for an inexperienced writer.
That's... not the Hero's Journey?
(The same study run against Claude Opus would be interesting - if we're going to test models, we might as well play to their strengths. My prediction: better writing, not better plotting).
I'm happy to be critical of the ability of LLMs but most humans would struggle with this as well.
But none of them is novel to humankind. It's novel to you, but not to our species.
AI is nailing us to the manifold that we created in the first place.
None of them would have achieved that with the help of a machine telling them "you're absolutely right!" whenever they asked it deep questions.
Scientific and technological progress is inherently incremental. It takes a lot of hard work, dedication and specialization to spot the pieces ready to be connected - but the final act of putting them together is relatively simple, and most importantly, it requires all the puzzle pieces to be there.
Which is why, historically, ~all scientific discoveries have been made by multiple researchers (or teams) independently, at roughly the same time - until all prerequisites are met, the next step is ~impossible, but the moment all are met, it becomes almost obvious to those in the know.
This is quite a big claim. All of them? I know there are many discoveries that fit the pattern you're pointing out, but I wouldn't go as far as to say all, or even the majority of them do.
"~all" stands for approximately all, but maybe this mnemonic is less known than I thought. I'll try to avoid using it in the future.
> I know there are many discoveries that fit the pattern you're pointing out, but I wouldn't go as far as to say all, or even the majority of them do.
I'm yet to encounter one that does not fit this pattern. As far as I can recall, for every discovery I've learned about, eventually[0] I've also learned it was made independently by others within a few years; in some cases, the well-known instance wasn't even the first of many, just the one that got the name attached for some reason - politics, geography, better formulation, better promotion, etc.
Related, but not the same - I'm also yet to encounter a discovery that took a great conceptual leap to arrive at. In every case I can think of, the core insight was quite obvious once enough prerequisite information was available[1], and pretty much happened on schedule. Sometimes the same person was involved in discovering the last missing pieces and then connecting them, but that's still a case of research being iterative.
It goes beyond science, too. You've probably seen Burke's Connections[2] (if not, I very strongly recommend it); it demonstrates clearly that technological advancement only happens[3] when all the necessary pieces are there: scientific knowledge, the necessary technology, and the right socioeconomic conditions.
--
[0] - Sometimes it would be immediate, like in my teenage years, when reading a book about some mathematicians that went into the drama of who stole whose proof or took credit for others' work (quite a dramatic book it was). Sometimes it would be later on - years, even a decade later - when a book or a Wikipedia article about some breakthrough gave me more historical context than I expected. I eventually noticed the pattern; since then, I sometimes search for extra context immediately upon learning something.
[1] - Hindsight is 20/20, I know.
[2] - https://en.wikipedia.org/wiki/Connections_(British_TV_series...
[3] - I.e. actually sticks around - think steam engine in ancient Rome vs. Industrial Revolution.
I'd also argue that we tend to have a larger context. What did you have for dinner? Did you see anything new yesterday? Are you tired of getting asked the same question over and over again?
Yes, that was my point. We don't have 8 billion AI models. Furthermore, existing models are also trained on heavily overlapping data. The collective creativity and inventiveness of humans far exceeds what AI can currently do for us.
In my local library recently, they had two boards in the lobby as you entered: one with all the drawings created by one class of ~7-year-olds based on some book they'd read, and a second with the same idea from the next class up, on some other book. Both classes had apparently been asked to do a drawing that illustrated something they liked or thought about the book.
It was absolutely hilarious, and wild, and there were some genuinely exquisite ones. Some had writing, some didn't. Some had crazy, absolutely nonsensical twists and turns in the writing, others had more crazy art stuff going on. There were a few tropes that repeated in some of the lazier ones, but even those weren't all the same thing, the way LLM output consistently is, with few exceptions, if any.
And then there were a good number by the kids that were shockingly inventive; you'd be scratching your head going, geez, how did they come up with that. My partner and I stayed for 10 minutes, kept noticing some new detail in another of them, and kept being amazed.
So the reality is the upside-down version of what you're saying.
I recognise that this is just an anecdote on the internet, but surely you know this to be true; variants of the experiment are done in classrooms around the world every day. So may I insist that the work produced by children, at least, does not fit your odd view of human beings.
https://jumpshare.com/share/BXUFsIxvjPPCTyEjgly3
Many people don't understand the nature of LLMs nor how rabbit-hole-y a long context will necessarily become. And so as they talk to it, they move slowly further away from its corpus and towards a private shared meme-space, where they can have in-jokes and private moments never reconciled with a base reality. It's like the most private echo-chamber that can possibly exist (besides in our own heads).
So the full-fledged dystopia might not be one where we are all alike, but one where we all lack sufficient bridges of commonality between our tiny chambers. Our samenesses are becoming more local, the distances between them greater and greater. Many small, tight clusters with high divergence, minimal cross-cluster edges, and vanishing mutual information with global signals. :/
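The graph picture that comment sketches can be made concrete. A toy model, assuming (purely for illustration) that people are nodes and shared context is an edge:

```python
# Toy model of "tight clusters, minimal cross-cluster edges".
import networkx as nx

# two tight cliques joined by a single bridge edge
G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
G.add_edge(0, 5)  # the lone cross-cluster edge

clusters = [set(range(5)), set(range(5, 10))]
cross = sum(1 for u, v in G.edges()
            if not any(u in c and v in c for c in clusters))
print(f"cross-cluster edges: {cross}/{G.number_of_edges()}")  # 1/21
```

Almost all of the "commonality" lives inside the cliques; the single bridge is what's vanishing in the comment's scenario.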
Meta removed the AI accounts from Instagram because most of the people who even gave the feature a second thought were just mad that they couldn't block them. I'll bet they were NOT cheap to implement, and they were not some nascent Bing Chat era blunder; it was 2025. I think that's a harbinger of future 'socialize with LLMs' feature adoption.
Is it possible for AI to learn so much about myself that it will be more me than me myself?
An AI could potentially accumulate detailed information about your behaviors, preferences, communication patterns, and decision-making tendencies - perhaps even more comprehensive data than you consciously remember about yourself. It might predict your responses or model your thinking patterns with impressive accuracy. An AI might become very good at simulating aspects of "you" - perhaps even better than you are at articulating your own patterns.
It could create high probability "coherent action paths" of what I might do in future given current context. Then matching my initial choices to see which action path I am on, it could in theory "predict" my choices further down the line. Similar to how we play chess.
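The chess analogy maps onto something as simple as a first-order Markov model over observed actions. Everything below (the action log, the helper) is hypothetical, just to make the "matching initial choices to an action path" idea concrete:

```python
# Hypothetical sketch: predict the next action from the current one
# using first-order transition counts over an observed action log.
from collections import Counter, defaultdict

history = ["wake", "coffee", "email", "coffee", "email",
           "meeting", "coffee", "email"]  # toy action log

transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict_next(action: str) -> str:
    """Most probable next action given the current one."""
    counts = transitions[action]
    return counts.most_common(1)[0][0] if counts else "unknown"

print(predict_next("coffee"))  # -> "email"
```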
Dial up the temperature, launch however many parallel threads to research and avoid precedent, et cetera, ad infinitum.
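For what it's worth, both knobs are one-liners in most inference APIs; a hedged sketch with the OpenAI client, where the model name and prompt are placeholders:

```python
# Sketch: raise temperature and draw several samples in parallel.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user",
               "content": "Pitch an original film ending."}],
    temperature=1.5,      # flatter distribution, more variety
    n=8,                  # eight independent samples per request
)
for choice in response.choices:
    print(choice.message.content[:80])
```

Whether more samples at higher temperature yield genuine novelty, rather than more varied remixes of the same manifold, is the thread's whole question.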
I am sorry, but all of human creativity, including originality, is ultimately also just a mechanical phenomenon, and so it cannot resist mechanization.
Resistance is futile.
It had some ideas that would have been interesting, or at least "clever", in isolation, but they were strung together in a weirdly arbitrary and soulless way. Even a convoluted money-grab sequel usually has some idea of where it wants to go with the plot. This movie didn't.
It was also strangely obsessed with "twists", or rather different things that could be described using that word: Twist, the dance, twisting roads and plot twists all featured in the movie.
Might have been a coincidence, but it felt as if an AI got an ambiguous prompt "the movie should have twists" and then executed several different interpretations of that sentence at the same time.
Treated properly, I think AI proofreading wouldn't necessarily lead to this. Your initial work is like the 'hypothesis'. Then AI does the cleanup and a high-level lit review. Just don't let it change your direction like the writer did in the comic.
But seriously, what are these scenarios? Waiting until the last minute for an ending to a script? Apparently a twist ending that somehow works with the rest of the movie, and is also used in another movie - with identical dialogue. You can't just copy and paste endings like that. Also, who cares? This is a world where the director, instead of just stating the problem, sends a vague text, lets the writer go see the movie, and then deals with the fallout. In this world, the writer goes on to win the lottery and live happily ever after.
We've seen this across culture, for instance there are "Russian Endings" to stories, which leave things...