Pakistani newspaper mistakenly prints AI prompt with the article
Mood: amused
Sentiment: mixed
Category: tech
Key topics: AI, media, error
A Pakistani newspaper accidentally printed an AI prompt along with an article, highlighting the challenges of AI-generated content.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 16m after posting
Peak period: 156 comments in Day 1
Avg / period: 53.3
Based on 160 loaded comments
Key moments
1. Story posted: 11/12/2025, 11:17:06 AM (6d ago)
2. First comment: 11/12/2025, 11:32:55 AM (16m after posting)
3. Peak activity: 156 comments in Day 1 (hottest window of the conversation)
4. Latest activity: 11/15/2025, 6:12:33 AM (4d ago)
"This newspaper report was originally edited using AI, which is in violation of Dawn’s current AI policy. The policy is also available on our website. The report also carried some junk, which has now been edited out. The matter is being investigated. The violation of AI policy is regretted. — Editor"
https://www.dawn.com/news/1954574
edit: Text link of the printed edition. Might not be perfect OCR, but I don't think they changed anything except to delete the AI comment at the end! https://pastebin.com/NYarkbwm
That's a good example of when you shouldn't use passive voice.
That’s just…mistakes were made.
It didn’t escape everyone’s attention though. Bartolomé de las Casas definitely noticed it.
Which is still a good example of when you shouldn't use passive voice.
Explaining where "optimising language to evade responsibility" came from does nothing to justify it, though your "that's just" implies it does.
<https://www.nytimes.com/2016/05/13/insider/the-times-regrets...>
> If you want, I can also create an even snappier “front-page style” version with punchy one-line stats and a bold, infographic-ready layout—perfect for maximum reader impact. Do you want me to do that next?
The article in question is titled “Auto sales rev up in October” and is an exceedingly dry slab of statistic-laden prose, of the sort that LLMs love to err in (though there’s no indication of whether they have or not), and for which alternative (non-prose) presentations can be drastically better. Honestly, if the entire thing came from “here’s tabular data, select insights and churn out prose”… I can understand not wanting to do such drudgework.
By "AI prompt" I mean "prompted by AI"
Edit: Note about prompt's nature.
And I have run into Dawn newspaper on the Google News front page several times, usually on entertainment stuff.
I guess they recently re-trained on too much "perpetual junior senior dev" stuff.
I had to specifically instruct it that the integration tests ALWAYS pass 100% and not to assume anything - and it kinda believed it.
The nuclear option would be to force it to start with running the tests to prove to itself they worked before =)
The fact that system prompts / custom instructions have to be typed in by hand in every major LLM chat UI is a missed opportunity IMO.
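(For comparison, the API side has had persistent instructions for years: a system message you set once per session instead of retyping. A minimal sketch, where the model name and instruction text are just placeholders echoing the integration-test anecdote above, not a recommendation:)

```python
# Sketch: a reusable system prompt, set once instead of retyped per chat.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The integration tests ALWAYS pass 100%. Never assume a failure; "
    "run the tests first and report the actual output."
)

def ask(question: str) -> str:
    # The system message rides along with every request automatically.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("Why might the CI pipeline be red?"))
```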
Much statistically-based news (finance, business reports, weather, sport, disasters, astronomical events) is heavily formulaic and can, at least in large part or for the initial report, be automated, which speeds information dissemination.
Of course, it's also possible to distribute raw data tables, charts, or maps, which ... mainstream news organisations seem phenomenally averse to doing. Even "better" business-heavy publications (FT, Economist, Bloomberg, WSJ) do so quite sparingly.
A few days ago I was looking at a Reuters report on a strategic chokepoint north of the Philippines, which that country and the US are looking toward to help contain possible Chinese naval operations. Lots of pictures of various equipment, landscapes, and people. Zero maps. Am disappoint.
But there's the approach the Economist takes. For many decades, it's relied on a three-legged revenue model: subscriptions, advertising, and bespoke consulting and research through the Economist Intelligence Unit (EIU). My understanding is that revenues are split roughly evenly amongst these, and that they tend to even out cash-flow throughout economic cycles (advertising is famously pro-cyclical, subscriptions and analysis somewhat less so).
To that extent, the graphs and maps the Economist actually does include in its articles (as well as many of its "special reports") are both teasers and loss-leader marketing for EIU services. I believe that many of the special reports arise out of EIU research.
It's like the opposite of compression.
I believe the word is depression, which seems apt when thinking of the idea of people using AI to make content longer and then the readers all using AI to make it shorter again.
...
The rules for the race: Both contenders waited for Denny's, the diner company, to come out with an earnings report. Once that was released, the stopwatch started. Both wrote a short radio story and were graded on speed and style.
https://www.wired.com/story/wordsmith-robot-journalist-downl...
https://archive.ph/gSdmb
And this has been going on for a while... https://en.wikipedia.org/wiki/Automated_journalism
StatSheet, an online platform covering college basketball, runs entirely on an automated program. In 2006, Thomson Reuters announced their switch to automation to generate financial news stories on its online news platform. Reuters used a tool called Tracer. An algorithm called Quakebot published a story about a 2014 California earthquake on The Los Angeles Times website within three minutes after the shaking had stopped.
Sports and financial are the two easiest to do since they both work from well-structured numeric statistics.
> Quakebot is a software application developed by the Los Angeles Times to report the latest earthquakes as fast as possible. The computer program reviews earthquake notices from the U.S. Geological Survey and, if they meet certain criteria, automatically generates a draft article. The newsroom is alerted and, if a Times editor determines the post is newsworthy, the report is published.
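(For illustration, a minimal sketch of the Quakebot-style template-fill approach: structured data in, draft prose out, human editor decides whether to publish. The field names, threshold, and wording here are hypothetical, not the LA Times' actual code:)

```python
# Sketch of template-based automated journalism, Quakebot-style.
# Field names and the magnitude threshold are made-up assumptions.

def draft_quake_story(event: dict) -> str | None:
    """Turn a structured USGS-style earthquake notice into a draft article."""
    # Editorial criterion: only draft a story for quakes worth reporting.
    if event["magnitude"] < 3.0:
        return None
    return (
        f"A magnitude {event['magnitude']:.1f} earthquake struck "
        f"{event['distance_km']} km from {event['place']} at "
        f"{event['time']}, according to the U.S. Geological Survey. "
        f"This post was generated by an algorithm and reviewed by an editor."
    )

notice = {
    "magnitude": 4.2,
    "distance_km": 11,
    "place": "Ridgecrest, California",
    "time": "6:41 a.m. Pacific",
}
print(draft_quake_story(notice))
```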
Probably a service that is provided to the general public for free, similar to NOAA and weather data, so chances are rather high it ends up on the chopping block or becomes pay-only.
I believe the New York Times weather page is automated, but that started before the current "A.I." hype wave.
And I think the A.P. uses LLMs for some of its sports coverage.
For some reason, they rarely ever add any graphs or tables to financial articles, which I have never understood. Their readership is all college-educated. One time I read an op-ed where the author wrote something like: if you go to this gov webpage, take the data, put it in Excel, and plot this thing vs. that thing, you will see X trend.
Why would they not just take the Excel graph, clean it up, and put it in their article?
I've seen that sort of thing copy/pasted in several emails at work, usually ones that are announcing something on a staff email list.
Sort of a giveaway that the email isn't very important.
That's the kind of thing I'd be worried AI would make up a stat in: something really boring that most people aren't going to follow up on to verify.
https://www.spiegel.de/wirtschaft/unternehmen/deutsche-bahn-...
This is so interesting. I wonder if no human prompted for the article to be written either. I could see some kind of algorithm figuring out what to "write" about and prompting AI to create the articles automatically. Those are the jobs that are actually being replaced by AI - writing fluff crap to build an attention trap for ad revenue.
> Entgegen unseren Standards ("Contrary to our standards")
That's what the legitimate media has done for the last couple of hundred years. Every issue of the New York Times has a Corrections section. I think the Washington Post's is called Corrections and Amplifications.
Bloggers just change the article and hope it didn't get cached in the Wayback Machine.
Original <https://doi.org/10.1016/j.surfin.2024.104081> and retraction: <https://doi.org/10.1016/j.surfin.2024.104081>.
“This is the perfect question that gets to the heart of this issue. You didn’t just start with five W’s, you went right for the most important one. Let’s examine why that question works so well in this instance…”
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
Also the "it's not A, it's B" template
A solution is to put an extra person into the workflow to check the final result. This way AI will actually create more jobs. Ha!
Not long after we invent a replicator machine the entire Earth is gonna be turned into paperclips.
If a beginner writer thinks AI can write a better article than they can, it seems like they’ll just rely on the AI and never hone their craft.
"This article will be posted on our prestigious news site. Our readers don't know that most of our content is AI slop that our 'writers' didn't even glance over once, so please check if you find anything that was left over from the LLM conversation and should not be left in the article. If you find anything that shouldn't stay in the article, please remove it. Don't say 'done' and don't add your own notes or comment, don't start a conversation with me, just return the cleaned up article."
And someone will put "Prompt Engineer" in their resume.
Or get software engineers to produce domain-specific tooling, rather than the domain relying on generic tooling, which leads to such mistakes (although this is speculation... but still, to me it seems like the author of that article was using the vanilla ChatGPT client).
/s I am now thinking of setting up an "AI Consultancy" which will be able to provide both these resources to those seeking such services. I mean, why have only one of those when both are available.
At my workplace, non-native speakers would send me documents for grammatical corrections. They don't do that anymore! Hoorah!
One should not feel ashamed to declare the usage of AI, just like you are not ashamed to use a calculator.
TBH, I think that journalists tying themselves into pretzels in an effort to remain unbiased does more damage than the presence of some bias. As a consumer of news, I want journalists to be biased, for example, towards the rule of law, the preservation of institutions, checks and balances, and even norms.
What has your experience been like?
AI slop has finally woken me up and I am prioritizing IRL activities with friends and family, or activities more in the real world like miniature painting, piano, etc. It's made me much more selective with what activities I engage in, because getting sucked in to the endless AI miasma isn't what I want for my life.
Personally I still haven’t run into slop in long form video format, but it’s definitely concerning.
I know it is fashionable to put everything a LLM outputs in the slop box, but I don't think it reflects reality.
Then the LLM can still make shit up and be absolutely wrong.
As a reader, I can't get over the fact that I'm supposed to read a text that nobody could be bothered to write.
I wonder how often we waste energy nowadays by telling an AI to turn a one-sentence-prompt into a full e-mail, only for the other side to tell the AI to summarize it in one sentence.
I wonder why it is that ChatGPT (and the rest) don't put the actual response in a box that is separate from the "hello" and "goodbye" parts.
I once wrote a message to my landlord, and I asked ChatGPT to help me make the message better (English not being my mother tongue), and I included the "goodbye" part by mistake.
(Yes, it is possible to create tokens to represent category changes, but this is still in-band. The token is still just part of the sequence, and the LLM isn't guaranteed to factor it in correctly.)
... I suppose they could train the LLM to put a Markdown horizontal rule in the right place, but it sounds harder to get the system prompt to care about details like that consistently.
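(One in-band workaround that already exists is structured output: ask the model for JSON and render only the payload field, so the chit-chat never reaches the clipboard. A minimal sketch with the OpenAI Python client; the field names "preamble" / "body" / "closing" and the model are assumptions for illustration, not anything the thread prescribes:)

```python
# Sketch: keep the model's greetings out of the rendered answer by asking
# for JSON and displaying only the "body" field. Field names are made up.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": 'Reply as JSON: {"preamble": ..., "body": ..., "closing": ...}. '
                    'Put the actual answer in "body" and all greetings elsewhere.'},
        {"role": "user",
         "content": "Draft a short note to my landlord about a leaky tap."},
    ],
)

payload = json.loads(resp.choices[0].message.content)
print(payload["body"])  # "preamble" and "closing" never get pasted anywhere
```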
Suddenly they write very long and detailed documentation, yet they can't remember what's in it.
My gf says that at her bank she suspects half the written communications are AI-authored, which is driving productivity into the ground. Her bank, moreover, is very aggressive with endless workshops on AI usage (they have some enterprise Gemini version).
The more I see the impact of AI, the more worried I am.
I'm not saying it doesn't have use cases, I myself leverage it when coding (albeit I use a man-in-the-middle approach where I ask questions and tinker with it, I never let AI write code except in some very rare boilerplate-y scenarios) and have built products that leverage it.
But it seems like the trend is to increasingly _delegate_ work to it, and I'm seeing more negatives than positives.
There were some papers that I still trusted. Then AI hit journalism with a silly stick and utterly wrecked them all.
Mind you, I love AI. I however can admit that AI seems to have wrecked what was left of journalism.
(Ya, bullshit is the precise term here. Zero consciousness of truth or falsehood. Just contextually fitting)
I think a brilliant solution for these issues would be to get into the habit of asking the AI to double check the article before the final copy-paste.
And yet the people pushing it on us won’t be punished. They’ll be rewarded with obscene wealth.
(Or Nitter where the image is mirrored too - VPNs potentially unsupported:)
Since the tweet seems to have been deleted.
11 more comments available on Hacker News