I'm Kenyan. I Don't Write Like ChatGPT, ChatGPT Writes Like Me
Key topics
The debate rages on: are people from certain regions unfairly accused of using AI tools like ChatGPT because their writing style closely resembles the AI's output? Commenters pointed out that ChatGPT was originally trained in part on text written by speakers of African business English, which might explain the similarity. Others disputed this claim, citing differences between regional English dialects and nuances of writing style. As one commenter noted, humans and AI can make the same "mistakes," like using hyphens instead of em dashes, or, as another pointed out, not using them at all.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 27m
Peak period: 108 comments in 0-6h
Avg / period: 17.8
Based on 160 loaded comments
Key moments
- 01 Story posted: Dec 15, 2025 at 7:12 AM EST (19 days ago)
- 02 First comment: Dec 15, 2025 at 7:39 AM EST (27m after posting)
- 03 Peak activity: 108 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Dec 18, 2025 at 5:37 AM EST (16 days ago)
I just saw someone today whom multiple people accused of using ChatGPT, even though their post was one solid block of text with multiple grammar errors. But they used phrasing similar to the way ChatGPT speaks, so they got accused, and the accusers got massive upvotes.
https://www.theguardian.com/technology/2024/apr/16/techscape...
They said Nigerian, but there may be a common way English is taught across the entire region. Maybe the article author will chip in.
> ChatGPT is designed to write well
If you define well as overly verbose, avoiding anything that could be considered controversial, and generally sycophantic but bland soulless corporate speak, yes.
Nigeria and Kenya are two very different regions with different spheres of business. I don't know, but I wouldn't expect the English to overlap that much.
Cyprus, Somalia, Sierra Leone, Kuwait, Tanzania, Jamaica, Trinidad and Tobago, Uganda, Kenya, Malawi, Zambia, Malta, Gambia, Guyana, Botswana, Lesotho, Barbados, Yemen, Mauritius, Eswatini (Swaziland).
If what you're saying is right, then you'd have to admit Jamaican and Barbadian English are just the same as Kenyan or Nigerian English... but they're not. They're radically different because they're radically different regions. Uganda and Kenya being similar is what I would expect, but not necessarily Nigeria...
They're radically different predominantly at the street level and in everyday usage, but the kind of professional English used by the journalists, academics, and writers the author of the article was surrounded by is very recognizable.
You can tell an American from an Australian on the beach but in a journal or article in a paper of record that's much more difficult. Higher ed English with its roots in a classical British education you can find all over the globe.
Go read some Kenyan news. It's very obvious.
All we can hope is for a local to show up and explain.
> there is - in my observational opinion - a rather dark and insidious slant to it
That feels too authentic and personal to be any of the current generation of LLMs.
But yes the current commercial ones are somewhat controllable, much of the time.
You can't stop it from doing the "if you like I can <three different dumb followup ideas>" thing in every reply either.
I don't think you're able to set either the developer or system prompt in ChatGPT; you'll have to use the OpenAI API (or something else) to set those. Once you have access to setting text in those fields, you can better steer the responses.
How much they follow it depends. Sometimes they know you wrote it and sometimes they don't. Claude in particular likes to complain to me its system prompt is poorly written, which it is.
That's not true. Which field do you believe this to be? All of the fields I currently see in ChatGPT do have an effect on your conversations, but they're not just raw injections into the system/developer prompts; it's something else.
Try using the API with proper system/developer prompts, then copy-paste that exact same text into ChatGPT's "personalization settings" and try to have the same conversation; you'll get direct evidence that these aren't actually the system prompts but are injected somewhere else into the conversation.
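For what it's worth, the API route the comment mentions looks roughly like this. A minimal Python sketch; the model name and prompt text are placeholder choices, not anything from the thread:

```python
# Sketch: an explicit system prompt via the chat-completions API,
# as opposed to ChatGPT's "personalization" text boxes.
def build_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat-completions payload with a real system role."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request(
    "Answer tersely. Do not offer follow-up suggestions.",
    "Rewrite this paragraph in plain English.",
)
# With the official SDK, this payload would be sent via
# client.chat.completions.create(**payload); the model then sees the
# system text verbatim rather than a paraphrased injection.
```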
In Menlo font (Chrome on Mac's default monospace font, used for HN comments) em-dash(—) and en-dash (–) use the same glyph, though.
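Glyphs aside, the three characters remain distinct codepoints, which is easy to verify:

```python
import unicodedata

# Hyphen-minus, en dash, and em dash are different Unicode codepoints
# even when a font draws two of them with the same glyph.
dashes = {"-": "HYPHEN-MINUS", "\u2013": "EN DASH", "\u2014": "EM DASH"}
for ch, name in dashes.items():
    assert unicodedata.name(ch) == name
    print(f"U+{ord(ch):04X}  {name}")
```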
It gives a vibe like a car salesman and I really dislike it and personally I consider it a very bad writing style for this very reason.
I do very much prefer LLMs that don't appear to be trained on such data or try to word questions a lot more to have more sane writing styles.
That being said, it also reminds me of journalistic articles that feel like the person was just trying to reach some quota, using a lot of grand words to say nothing. In my country of residence, the biggest medium (a public one) has certain sections written exactly like that. Luckily these are labeled: it's the section that is a bit more general and a bit more "artsy", not just news, and I know its content is largely meaningless and untrue. Usually it's enough to click on the source link, or find the source yourself, to see that it says something completely different. Or it's a topic one knows about. So there are even multiple layers to being "like LLMs".
The fact that people are taught to write that way outside of marketing or something surprises me.
That being said, this is just my general genuine dislike of this writing style. How an LLM writes is up to a lot of things, also how you engage with it. To some degree they copy your own style, because of how they work. But for generic things there is always that "marketing talk" which I always assumed is simply because the internet/social media is littered with ads.
Are Kenyans really taught to write that way?
I’m highly skeptical. At one point the author tries to argue this local pedagogy is downstream of “The Queen’s English” & British imperial tradition, but modern LLM-speak is a couple orders of magnitude closer to LinkedIn clout-chasing in the vector space than anything else.
Here are some random examples from one of the (at least) half-dozen LLM-co-written posts that rose high on the front page over the weekend:
https://blog.canoozie.net/disks-lie-building-a-wal-that-actu...
> You write a record to disk before applying it to your in-memory state. If you crash, you replay the log and recover. Done. Except your disk is lying to you.
> This is why people who've lost data in production are paranoid about durability. And rightfully so.
> Why this matters: Hardware bit flips happen. Disk firmware corrupts data. Memory busses misbehave. And here's the kicker: None of these trigger an error flag.
> Together, they mean: "I know this is slower. I also know I actually care about durability."
> This creates an ordering guarantee without context switches. Both writes complete before we return control to the application. No race conditions. No reordering.
... I only got about halfway through. This is just phrasing, forget about the clickbaity noun-phrase subheads or random boldface.
None of these are representative (I hope!) of the kind of "sophisticated" writing meant to reinforce class distinctions or whatever. It's just blech LinkedIn-speak.
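Style complaints aside, the write-ahead-log pattern the excerpts describe is small enough to sketch. The file name and record format below are illustrative, not taken from the post:

```python
import json
import os
import tempfile

class TinyWAL:
    """Log each mutation durably before applying it to in-memory state."""

    def __init__(self, path: str):
        self.path = path
        self.state: dict = {}

    def apply(self, key: str, value: str) -> None:
        # 1. Append the record to the log and fsync before touching state;
        #    flush alone leaves the data in OS caches, where a crash loses it.
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2. Only then mutate in-memory state.
        self.state[key] = value

    def recover(self) -> None:
        # Replay the log from the start to rebuild state after a crash.
        self.state = {}
        if os.path.exists(self.path):
            with open(self.path) as f:
                for line in f:
                    rec = json.loads(line)
                    self.state[rec["key"]] = rec["value"]

path = os.path.join(tempfile.mkdtemp(), "wal.log")
wal = TinyWAL(path)
wal.apply("title", "Disks lie")
# Simulate a crash: a fresh instance rebuilds state from the log alone.
recovered = TinyWAL(path)
recovered.recover()
```

(The blog's point about lying disks goes further: fsync guarantees the OS asked the hardware to persist, but firmware and caches can still misbehave, which is why serious implementations also checksum each record.)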
For whatever combination of prompt and context, ChatGPT 5.2 did some writing for me the other day that didn't have any of the surface style I find so abrasive. But it could still only express its purported insights in the same "A & ~B" structure and other GPT-isms beneath the surface. Truly effective writers are adept with a much broader set of rhetorical and structural tools.
Writing well is about communicating ideas effectively to other humans. To be fair, throughout linguistic history it was easier to appeal to an audience’s innate sense of authority by “sounding smart”. Actually being smart in using the written word to hone the sharpness of a penetrating idea is not particularly evident in LLM’s to date.
If you're using it to write in a programming language, you often actually get something that runs (provided your specifications are good, or your instructions for writing the specifications are specific enough).
If you're asking for natural language output... yeah, you need to watch it like a hawk, by hand, sure. It'd be nice if there were some way to test-suite natural language writing.
The tests were even worse. They exercised the code, tossed the result, then essentially asserted that true was equal to true.
When I told it what was wrong and how to fix it, it instead introduced some superfluous public properties and a few new defects without correcting the original mistake.
The only code I would trust today's agents with is so simple I don't want or need an agent to write it.
I think it depends on what models you are using and what you're asking them to do, and whether that's actually inside their technical abilities. There are not always good manuals for this.
My last experience: I asked Claude to code-read for me, and it dug out some really obscure bugs in old Siemens Structured Text source code.
A friend's last experience: they had an agent write an entire Christmas-themed adventure game from scratch (that ran perfectly).
Outrage mills mill outrage. If it wasn't this, it would be something else. The fact that the charge resonated is notable. But the fact that it exists is not.
Is it? Like genuinely, were there experts on literature involved?
Perhaps the US-centric "optimization" of English is to blame here, since it is so obvious in regular US media we all consume across the planet, and is likely the contrasting style.
> You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.
This is:
> humanity is now defined by the presence of casual errors, American-centric colloquialisms, and a certain informal, conversational rhythm
And once you start noticing the 'threes', it's also fun.
Humanity has always been about errors.
Because while people OBVIOUSLY use dashes in writing, humans usually fell back on the (technically incorrect) hyphen, aka the "minus symbol", because that's what's available on keyboards and basically no one will care.
Seems like, in the biggest game of telephone called the internet, this has devolved into "using any form of dash = AI".
Great.
Wow, you really do under/over estimate some of us :)
- Barely literate native English speakers not comprehending even minimally sophisticated grammatical constructs.
- Windows-centric people not understanding that you can trivially type em-dash (well, en-dash, but people don’t understand the difference either) on Mac by typing - twice.
Interesting, because it failed for me too, just because I use Firefox. Were you told about the article, or did it actually work with your screen reader software?
That would probably mess up any screen reader, but it also didn't work on a regular Firefox :)
No, don't think so. To compensate, I probably missed the article about the obfuscation of kindle ebooks...
Earlier today I stumbled upon a blog post that started with a sentence obviously written by someone with a Slavic background (writers from most other language families produce certain grammatical patterns when writing in English; German speakers, for example, are also quite recognizable). My first thought was "great, this was most likely not written by an LLM".
I would not want to be an artist in the current environment, it’s total chaos.
Social media artists, gallery artists and artists in the industry (I mean people who work for big game/film studios, not industrial designers) are very different groups. Social media artists are having it the hardest.
Omitting articles? To me, that has always signaled "this will be an interesting and enlightening read, although terse and in need of careful thought." I've found sites from that part of the Internet to be very useful for highly technical and obscure topics.
But yeah, I definitely find the mild grammatical quirks expected from English-as-a-foreign-language speakers a positive these days, because the writing appears to reflect their actual thoughts and actual fluency.
Perplexity gauges how predictable a text is. If I start a sentence, "The cat sat on the...", your brain, and the AI, will predict the word "floor."
No. No no no. The next word is "mat"!
How do you like that, Mr. Rat
Thought the Cat.
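Mat or floor, the perplexity idea described above reduces to a few lines. A toy sketch with made-up probabilities for the next word after "The cat sat on the...":

```python
import math

# Hypothetical next-word probabilities from a toy model; the numbers
# are invented purely for illustration.
next_word_probs = {"mat": 0.6, "floor": 0.3, "chair": 0.1}

def perplexity(probs: list) -> float:
    """Perplexity is exp of the average negative log-probability the
    model assigned to each token. Lower = more predictable text."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# The model's favored continuation ("mat") scores a low perplexity...
predictable = perplexity([next_word_probs["mat"]])
# ...while a less expected one ("floor") scores higher.
surprising = perplexity([next_word_probs["floor"]])
```

For a single token, this collapses to 1/p, which is why AI detectors flag uniformly low-perplexity text: every word is the one the model itself would have picked.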
> Your kernel is actually being very polite here. It sees the USB reader, shakes its hand, reads its name tag… and then nothing further happens. That tells us something important. Let’s walk this like a methodical gremlin.
It's so sickly sweet. I hate it.
Some other quotes:
> Let’s sketch a plan that treats your precious network bandwidth like a fragile desert flower and leans on ZFS to become your staging area.
> But before that, a quick philosophical aside: ZFS is a magnificent beast, but it is also picky.
> Ending thought: the database itself is probably tiny compared to your ebooks, and yet the logging machinery went full dragon-hoard. Once you tame binlogs, Booklore should stop trying to cosplay as a backup solution.
> Nice, progress! Login working is half the battle; now we just have to convince the CSS goblins to show up.
> Hyprland on Manjaro is a bit like running a spaceship engine in a treehouse: entirely possible, but the defaults are not tailored for you, so you have to wire a few things yourself.
> The universe has gifted you one of those delightfully cryptic systemd messages: “Failed to enable… already exists.” Despite the ominous tone, this is usually systemd’s way of saying: “Friend, the thing you’re trying to enable is already enabled.”
You can check both in ChatGPT settings.
I just checked settings, apparently I had it set to "nerdy," that might be why. I've just changed it to "efficient," hopefully that'll help.
https://www.theverge.com/features/23764584/ai-artificial-int...
ChatGPT speaking African English was mostly just 3.5. 4o speaks like a TikTok user from LA. 5 seems kind of generic.
ChatGPT :|
ChatGPT (japan) XD
His responses in Zoom calls were the same: mechanical, sounding AI-generated. I even checked one of his WhatsApp responses by asking Meta AI whether it was AI-written, and Meta AI agreed that it was, giving reasons why it believed so.
When I showed the response to the colleague, he swore that he was not using any AI to write his responses. I believed him after he said it was not AI-written. And now, reading this, I can imagine that it's not an isolated experience.
I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.
People are unplugging their brains and aren't even aware that their questions cannot be answered by LLMs. I've witnessed this with smart, educated people; I can't imagine how bad it's going to be during formative years.
But of course he just had to get that great marketing sound bite didn’t he?
I cannot believe someone will wonder how people managed to decode "my baby dropped pizza and then giggled" before LLMs. I mean, if someone is honestly terrified about the answer to this life-or-death question and cannot figure out life without an LLM, they probably shouldn't be a parent.
Then again, Altman is faking it. Not sure if what he's faking is this affectation of being a clueless parent, or of being a human being.
They will ask “how much water should my newborn drink?” That’s a dangerous thing to get wrong (outside of certain circumstances, the answer is “none.” Milk/formula provides necessary hydration).
They will ask about healthy food alternatives - what if it tells them to feed their baby fresh honey on some homemade concoction (botulism risk)?
That said, of course Altman is being cynical about this. He's just marketing his product, ChatGPT. I don't believe for a minute he really outsources his baby's well-being to an LLM.
He said they have no idea how to make money, that they’ll achieve AGI then ask it how to profit; he’s baffled that chatbots are making social media feel fake; the thing you mentioned with raising a child…
https://www.startupbell.net/post/sam-altman-told-investors-b...
https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...
https://futurism.com/artificial-intelligence/sam-altman-cari...
Seems reasonable to me. If it can't answer that it doesn't work well enough.
"I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."
Basically, he didn’t know much about newborns and relied on ChatGPT for answers. That was a self-deprecating attempt on a late-night show, like every other guest would make, no matter how cliché. With a marketing slant, of course. He clearly said other people don’t need ChatGPT.
Given all of the replies in this thread, HN is apparently willing to ignore the truth if Sam Altman can be put in any negative light.
https://www.benzinga.com/markets/tech/25/12/49323477/openais...
At the same time, their interpretation doesn’t seem that far off. As per your comment, Sam said he “cannot imagine figuring out how” which is pretty close to admitting he’s clueless how anyone does it, which is what your parent comment said.
It’s the difference between “I don’t know how to paint” and “I cannot imagine figuring out how to paint”. Or “I don’t know how to plant a garden” and “I cannot imagine figuring out how to plant a garden”. Or “I don’t know how to program” and “I cannot imagine figuring out how to program”.
In the former cases, one may not know specifically how to do them but can imagine figuring those out. They could read a book, try things out, ask someone who has achieved the results they seek… If you can imagine how other people might’ve done it, you can imagine figuring it out. In the latter cases, it means you can’t even imagine how other people do it, hence you don’t know how anyone does it.
The interpretation in your parent comment may be a bit loose (again, I disagree with the use of “literally”, though that’s a lost battle), but it is hardly unfair.
>Clearly, people did it for a long time, no problem.
This in fact means Altman thinks the exact opposite of "he didn't know how anyone could raise a baby without using a chatbot": what he means is that while it's not imaginable to him, people make do anyway, so clearly it very much is possible to raise kids without ChatGPT.
What the gp did is the equivalent of someone saying "I don't believe this, but XYZ" and quoting them as simply saying they believe XYZ. People are eating it up though because it's a dig at someone they don't like.
Saying “no no, he didn’t mean everyone, he was only talking about himself” is not meaningfully better, he’s still encouraging everyone to do what he does and use ChatGPT to obsess about their newborn. It is enough of a representation of his own cluelessness (or greed, take your pick) to warrant criticism.
> The OpenAI CEO said he "got a great answer back" and was told that it was normal for his son not to be crawling yet.
To be fair, that is a relatable anxiety. But I can't imagine Altman having the same level of difficulty as the average parent. He can easily pay for round the clock childcare including during night-times, weekends, mealtimes, and sickness. Not that he does, necessarily, but it's there when he needs it. He'll never know the crushing feeling of spending all day and all night soothing a coughing, congested one-year-old whilst feeling like absolute hell himself.
Raising a kid is really very natural and instinctive; it's just a matter of how to get it to sleep, what to feed it and when, and how to wash it. I felt no terror myself; I just read my book or asked my parents when I had some stupid doubt.
But yeah, I can imagine a multi-modal model actually might have more information and common sense than a human in a (for them) novel situation.
If only to say "don't be an idiot" or "pick higher ground". Or even just as a rubber duck!
I understand there are things a typical LLM can do and things that it cannot, this is mostly just because I figured it couldn’t do it and I just wanted to see what would happen. But the average person is not really given much information on the constraints and all of these companies are promising the moon with these tools.
Short version: it definitely did not have more common sense or information than a human, and we all know it would have given this person a very confident answer about conditions in the area that was likely not correct. Definitely incorrect if it's based off a photo.
If you ask an AI to grade an essay, it will grade the essay highest that it wrote itself.
What I have seen is ChatGPT and Claude battling it out, always correcting and finding fault with each other's output (trying to solve the same problem). It's hilarious.
English article:
https://www.heise.de/en/news/38C3-AI-tools-must-be-evaluated...
If you speak German, here is their talk from 38c3: https://media.ccc.de/v/38c3-chatbots-im-schulunterricht
https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...
https://www.aiweirdness.com/dont-use-ai-detectors-for-anythi...
I can't blame others though: I was looking at notes I wrote in 2019, and even those had a flavor of looking like ChatGPT wrote them. I use the word "delve" and the "not just X but also Y" construction often, according to my Obsidian notes. I've taken to inserting the occasional spelling mistake or Unorthodox Patterns of Writing(tm), even when I would not otherwise.
It's a lot easier to get LLMs to adhere to good writing guides than it is to get them to create something informative and useful. I like to think my notes and writing are informative and useful.
... How does that work, exactly?
This would have been my first question to the parent: I guess he never had similar correspondence with this friend prior to 2023? Otherwise it would be hard to convince me without an explanation for the switch (transition during formative high school / college years, etc.).
Just recently I was amazed with how good text produced by Gemini 3 Pro in Thinking mode is. It feels like a big improvement, again.
But we also have to be honest and accept that nowadays, using a certain kind of vocabulary or paragraph structure will make people think the text was written by AI.
Besides, of course what people write will sound like LLMs, since LLMs are trained on what we've been writing on the internet. For those of us who've been lucky enough to write a lot and be more represented in the dataset, LLM writing will be closer to how we already wrote; but then we get the blame for sounding like LLMs, because apparently people don't understand that LLMs were trained on texts written by humans.
343 more comments available on Hacker News