The Trust Collapse: Infinite AI Content Is Awful
Posted 2 months ago · Active about 2 months ago
arnon.dk · Tech story · High profile
Sentiment: heated, negative
Debate score: 80/100
Key topics: AI-Generated Content, Trust Collapse, Information Overload
The article discusses how AI-generated content is flooding the internet, leading to a collapse of trust in online information, and the discussion highlights the consequences and potential solutions to this issue.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 6m after posting
Peak period: 104 comments in 0-6h
Avg per period: 20
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Nov 6, 2025 at 5:12 AM EST (2 months ago)
2. First comment: Nov 6, 2025 at 5:18 AM EST (6m after posting)
3. Peak activity: 104 comments in 0-6h, the hottest window of the conversation
4. Latest activity: Nov 8, 2025 at 2:54 PM EST (about 2 months ago)
ID: 45833496 · Type: story · Last synced: 11/20/2025, 7:50:26 PM
One small one I do not agree with is "Are you burning VC cash on unsustainable unit economics?". I think it's safe to conclude by now that unsustainable businesses can be kept alive for years as long as the investors want it.
This of course means that Freenow is now on my personal blacklist. People should not engage with companies who advertise with "AI" slop.
https://www.theatlantic.com/technology/archive/2025/08/youtu...
https://www.nbcnews.com/tech/tech-news/youtube-dismisses-cre...
It's annoying because the whole point of a lot of this stuff is that it's real, and one can be informed, entertained or have an emotional response to it. When you distrust everything because it's maybe fake, then the fun of the internet as a window into human nature and the rest of the world just disappears.
nevermind if the things are people or their lives!!
If they didn't, we wouldn't be having these problems.
The problem isn't AI, it's how marketing has eaten everything.
So everyone is always pitching, looking for a competitive advantage, "telling their story", and "building their brand."
You can't "build trust" if your primary motivation is to sell stuff to your contacts.
The SNR was already terrible long before AI arrived. All AI has done is automated an already terrible process, which has - ironically - broken it so badly that it no longer works.
That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
If you believe that advertising, especially data-driven personalised and targeted advertising, is essentially a way of hacking someone's mind into doing things it doesn't actually want to do, then it becomes fairly obvious that it's not entirely the individual's fault.
If adverts are 'Buy Acme widgets!' they're relatively easy to resist. When the advert is 'onion2k, as a man in his 40s who writes code and enjoys video games, maybe you spend too much time on HN, and you're a bit overweight, so you should buy Acme widgets!' it calls for people to be constantly vigilant, and that's too much to expect. When people get trapped by an advert that's been designed to push all their buttons, the reasonable position is that the advertiser should take some of the responsibility for that.
It's fundamentally exploitation on a population scale, and I believe it's immoral. But because it's also massively lucrative, capitalism allows us to ignore all moral questions and place the blame on the victims, who again, are on the wrong side of a massive power imbalance.
What authority are you going to complain to to "correct the massive power imbalance"? Other than God or Martians I can't see anything working, and those do not exist.
Within the last year I opened an Instagram account just so I could get updates from a few small businesses I like. I have almost no experience with social media. This drove home for me just how much the "this is where their attention goes, so that's revealed preference" thing is bullshit.
You know what I want? The ability to get these updates from the handful of accounts I care about without ever seeing Instagram's algo "feed". Actually, even better would be if I could just have an RSS feed. None of that is an option. Do I sometimes pause and read one of the items in the algo feed that I have to see before I can switch over to the "following" tab? I do, of course, they're tuned to make that happen. Does that mean I want them? NO. I would turn them off if I could. My actual fucking preference is to turn them off and never see them again, no matter that they do sometimes succeed in distracting me.
Like, if you fill my house with junk food I'll get fatter from eating more junk food, but that doesn't mean I want junk food. If I did, I'd fill my house with it myself. But that's often the claim with social media, "oh, it's just showing people more of what they actually want, and it turns out that's outrage-bait crap". But that's a fucking lie bolstered by a system that removes people's ability to avoid even being presented with shit while still getting what they want.
Most ads are just manipulating me, but there are times I need the thing advertised if only I knew it was an option.
Evil contains within itself the seed of its own destruction ;)
Sure, sometimes you should fight the decline. But sometimes... just shrug and let it happen. Let's just take the safety labels off some things and let problems solve themselves. Let everybody run around and do AI and SEO. Good ideas will prevail eventually; focus on those. We have no influence on the "when", but it's a matter of having stamina and hanging in there, I guess.
Wired: "Build things society needs"
That is false. You build a different type of trust: people need to trust that when they buy something from you it is a good product that will do what they want. Maybe someone else is better, but not enough better to be worth the time they would need to spend to evaluate that. Maybe someone else is cheaper, but you are still reasonably priced for the features you offer. They won't get fired for buying you, because you have so often been worthy of the trust they give you that in the rare case you do something wrong it reads as "nobody is perfect," not as "you are no longer trustworthy" (you can only pull this trick off a few times before you become untrustworthy).
The above is very hard to achieve, and even when you have it, very easy to lose. If you are not yet there for someone, you still need to act like you are, and you don't want to lose it even though they may never buy from you often enough to realize you are worth it.
The people yearn for the casino. Gambling economy NOW! Vote kitku for president :)
PS. Please don't look at the stock market.
> nevermind if the things are people or their lives!!
Breaking things is ok. If people are things then it's ok to break them, right? Got it. Gotta get back to my startup to apply that insight.
Perhaps I am too optimistic...
The exact quote is: "I foresee the day where AI become so good at making a deep fake that the people who believed fake news as true will no longer think their fake news is true because they'll think their fake news was faked by AI."
The truth is, for those of us with lower IQ, it doesn't matter how critically we think, we lack the knowledge and mental dexterity to reliably arrive at a nuanced and deep understanding of the world.
You have to stop dreaming of a world where everyone can sort everything out for themselves, and instead build a world where people can reliably trust expert opinion. It's about having a high-trust society. That requires people in privileged positions to not abuse their advantage in a short term way, at the cost of alienating, and losing the trust of the unwashed masses.
Because that's what has happened, the experts have been exploited as a social cudgel, by the psychopathic and malignant managerial class, in such an obvious and blunt way, that even those of us who are self-aware of our limitations, figure we're as likely to get it right ourselves, as to get honest and correct information from our social institutions.
There is zero chance of making everyone smart enough to navigate the world adroitly. But there is a slightly better than zero chance we could organize our society to earn their trust.
If you are not building the next paperclip optimizer the competition already does!
Wow. A new profile text for my Tinder account!
Now this formula has been complicated by technological engineering taking over aspects of marketing. This may seem to be simplifying and solving problems, but in ways it actually makes everything more difficult. Traditional marketing that focused on convincing people of solutions to problems is being reduced in importance. What is becoming most critical now is convincing people they can trust providers with potential solutions, and this trust is a more slippery fish than belief in the solutions themselves. That is partly because the breakdown of trust in communication channels means discussion of solutions is likely to never be heard.
It is completely to be expected, exactly because it is not new.
It's been scarcely a generation since the peak in net change of the global human population, and will likely be at least another two generations before that population reaches its maximum value. It rose faster than exponentially for a few centuries before that (https://en.wikipedia.org/wiki/World_population#/media/File:P...). And across that time, for all our modern complaints, quality of life has improved immensely.
Of all the different experiences of various cultures worldwide and across recent history, "growth" has been quite probably the most stable.
Culture matters. People's actions are informed by how they are socialized, not just by what they can observe in the moment.
Net-growth society: new wealth is being created, if you can be part of the creation you get wealth
No-growth society: only way to acquire wealth is to take it from someone else
Oh, plus essentially every society that experienced it legislated its way into a no-growth situation. The problem was not that growth was not possible; it's that people used state power, under a lot of different excuses, to prevent growth (and, really, to secure the position of the richest and most powerful in society).
The excuses range from religion, to morality separate from religion, to wars, to avoiding losing wars (putting the entire economy into a usually futile attempt to win or avoid losing a war), and of course the whole thing feeding on itself: laws protecting the rich at the direct expense of the poor (which can happen even if there is economic growth, though the more growth, the less likely).
Btw, about that "futile attempt to win or avoid losing a war": these attempts were futile not because they led to a win or a loss, but because the imposed cost of a no-growth society far exceeded any gains or even avoided losses...
Wealth is created by work. In any society, be it growth or no-growth, you can create and acquire wealth by working. (Not necessarily for a wage. Working for yourself also creates wealth. Every time you make yourself dinner, or patch a torn pant leg, or change your car's oil, you are creating wealth.)
The problem is that non-working parasites (investors, rent-seekers, warlords) can't acquire wealth in a no-growth society without taking it from someone else.[1] (Because in a no-growth society, investing on the net is ~zero-returns, ~zero-value.)
------
[1] They take it from someone else in a growth society, too, but a person who works and loses half their productive surplus to a rent seeker is still getting the benefits of growth. In a no-growth society, the rent-seeker's gain is 100% someone else's net loss.
A society like that may be quite different in innumerable ways, of course, and the idea of “wealth” in the way we understand it may not make sense.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Larry Fink and The Money Owners.
Interviewer: How will humans deal with the avalanche of fake information that AI could bring?
YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.
In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).
In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc
Moreover, the more political a topic, the more likely the author is trying to influence your thoughts (but not me, I promise!). I forget who, but a historian was asked why they wouldn’t cover civil war history, and responded with something to the effect of “there’s no way to do serious work there because it’s too political right now”.
It’s also why things like calling your opponents dumb, etc is so harmful. Nobody can fully evaluate the truthfulness of your claims (due to time, intellect, etc) but if you signal “I don’t like you” they’re rightfully going to ignore you because you’re signaling you’re unlikely to be trustworthy.
Trust is hard earned and easily lost.
This, too, goes into the probability of something being right or wrong. But the problem I'm pointing out is an inconsistent epistemology. The same kind of test should be applied to any claim, and then they have to be compared. When people trust a random TikToker over the NYT, they're not applying the same test to both sides.
> It’s also why things like calling your opponents dumb, etc is so harmful.
People who don't try to have any remotely consistent mechanism for weighing the likelihood of one claim against a contradicting one are, by my definition, stupid. Whether it's helpful or harmful to call them stupid is a whole other question.
And a lot of the time, that trust is specific to a topic, one which matters to them personally. If they cannot directly verify claims, they can at least observe ways in which their source resonates with personal experience.
Call me naive, but I think education can help.
From my experience, there absolutely is. It just isn't legible to you.
The only thing I can come up with is that they do believe rigorous scholarship can arrive at answers, but sometimes those who do have the "real answers", lie to us for nefarious reasons. The problem with that is that this just moves the question elsewhere: how do you decide, in a non-arbitrary way, whether what you're being told is an intentional lie? (Never mind how you explain the mechanism of lying on a massive scale.) For example, an epistemology could say that if you can think of some motivation for a lie then it's probably a lie, except that this, too, is not applied consistently. Why would doctors lie to us more than mechanics or pilots?
Another option could be, "I believe things I'm told by people who care about me." I can understand why someone who cares about me may not want to lie to me, but what is the mechanism by which caring about someone makes you know the truth? I'm sure that everyone has had the personal experience of caring about someone else, and still advising them incorrectly, so this, too, quickly runs into contradictions.
First, show me a person who believes all of them.
Then, try asking that person.
You are trying to ask me to justify entire worldviews. That is far beyond the scope of a single HN post, and also blatantly off topic.
And I did ask such people such questions (for example, people who fly a lot yet believe "chemtrails" are poisoning us), but their answers always ended up with some arbitrary choice that isn't applied consistently. Pretty much, when forced to choose between claims A and B, they go by which of them they wish to be true, even if they would, in other situations, judge the process of arriving at one of the conclusions to be much stronger than the other. They're more than happy to explain that they trust vitamins because of modern scientific research, which they describe as fraudulent when it comes to vaccines.
Their epistemology is so flagrantly inconsistent that my only conclusion was that they're stupid. I'm not saying that's an innate character trait, and I think this could well be the result of a poor education.
Could you quickly summarize how and why you felt let down by the media in regards to COVID?
The key is that distrusting one side or source does not logically entail trusting another source more. If you think that the media or medical establishment is wrong, say, 45% of the time, you still have to find a source of information that is only wrong 40% of the time to prefer it.
I once went to a school that had complimentary subscriptions. The first time I sat down to read one, there was an article excoriating President Bush about Hurricane Katrina. The entire article was a glib expansion of the opinion of an “expert” who was just some history teacher who said it was “worse than the Battle of Antietam” for America. No expertise in climate. No expertise in disaster response. No discussion of facts. “Area man says Bush sucks!” would have been just as intellectually rigorous. I put the paper back on the shelf and have never looked at one since.
Don’t get emotionally attached to content farms.
Regardless, clearly labeled opinions are standard practice in journalism. They're just not on the front page. If you saw that on the front page, then I'd need more context, because that is not common practice at NYT.
It’s simply reality, or else propaganda wouldn’t work so well.
The problem is that often we have to choose because decisions are binary: either we get a vaccine or not. For example, to decide not to get a vaccine, the belief that the medical establishment are lying liars is just not enough. We must also believe that the anti-vaxxers are more knowledgeable and trustworthy than the medical establishment. Doctors could be lying 60% of the time and still be more likely to be right than, say, RFK. It's not enough to only look at one side; we have to compare two claims against each other. For the best outcome, you have to believe someone who's wrong 80% of the time over someone who's wrong 90% of the time. Even if you believe in a systemic, non-random bias, that doesn't help you unless you have a more reliable source of information.
And this is exactly the inconsistent epistemology that we see all around us: People reject one source of information by some metric they devise for themselves and then accept another source that fails on that metric even more.
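The error-rate comparison above can be made concrete with a toy sketch. The rates here are the hypothetical numbers from the comment, purely illustrative, not real measurements of any source:

```python
# Toy model: for a binary decision, the better source is simply the one
# with the lower probability of being wrong -- even if both are often wrong.
def better_source(sources):
    """sources: dict mapping source name -> probability of being wrong (0..1)."""
    return min(sources, key=sources.get)

# Distrusting a source that is wrong 45% of the time only helps
# if the alternative is wrong less often.
print(better_source({"establishment": 0.45, "alternative": 0.40}))  # alternative

# Doctors could be wrong 60% of the time and still beat a 90%-wrong contrarian.
print(better_source({"establishment": 0.60, "contrarian": 0.90}))   # establishment
```

The point the sketch captures: rejecting one source is only rational relative to a comparison, never in isolation.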
Except those institutions have long lost all credibility themselves.
But billionaires are making and keeping ever more money than before, so it isn't a problem.
Wall Street, financier centric and biased in general. Very pro oligarchy.
The worst was their cheerleading for the Iraq war, and swallowing obvious misinformation from Colin Powell at face value.
Funny that he doesn’t say that the institutions have to provide accurate information, but just that we have to trust them to provide accurate information.
I'll trust my doctor to give me sound medical advice and my lawyer for better insights into law. I won't trust my doctor's inputs on the matters of law or at least be skeptical and verify thoroughly if they are interested in giving that advice.
Newspapers are a special case. They like to act as the authoritative source on all matters under the sun but they aren't. Their advice is only as good as their sources they choose and those sources tend to vary wildly for many reasons ranging from incompetence all the way to malice on both the sides.
I trust BBC to be accurate on reporting news related to UK, and NYT on news about US. I wouldn't place much trust on BBC's opinion about matters related to the US or happenings in Africa or any other international subjects.
Transferring or extending trust earned in one area to another unrelated area is a dangerous but common mistake.
There are many equilibrium points possible as a result. Some have more trust than others. The "west" has benefited hugely from being a high trust society. The sort of place where, in the Prisoner's Dilemma matrix, both parties can get the "cooperate" payoff. It's just that right now that is changing as people exploit that trust to win by playing "defect", over and over again without consequence.
https://en.wikipedia.org/wiki/High-trust_and_low-trust_socie...
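The Prisoner's Dilemma dynamic the comment describes can be sketched with the conventional textbook payoff matrix (the numeric values are the standard illustrative ones, not from the comment):

```python
# Standard Prisoner's Dilemma payoffs as (row player, column player):
# mutual cooperation beats mutual defection, but unilateral defection pays best.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play(a, b):
    return PAYOFF[(a, b)]

# High-trust society: both parties get the "cooperate" payoff.
print(play("cooperate", "cooperate"))  # (3, 3)
# A defector exploiting that trust does better individually...
print(play("defect", "cooperate"))     # (5, 0)
# ...until trust collapses and everyone is stuck at mutual defection.
print(play("defect", "defect"))        # (1, 1)
```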
Also, this is entirely hand-written ;)
Hopefully soon we move on to judging content by its quality, not by whether AI was used. Banning digital advertisement would also help align incentives against mass-producing slop (which has been happening since long before ChatGPT was released).
I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.
This is just how I write in the last few years
"We shape our tools, and thereafter our tools shape us", may be the apposite bon mot.
That aside, I did enjoy your article. Thank you.
It's just funny, even by hand, to be writing in the infinite AI content style while lamenting the awfulness of infinite AI content while co-founding a monetization and billing system for AI agents.
I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.
(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)
Using the internet to buy products is not a problem for me: I know roughly the quality of what I expect to get and can return anything not up to standard. Using the internet to buy services, though? Not a chance. How can you refund a service?
How do you know that? Or is it just that your bias is that cowboys are bad, so you assume someone who dresses and acts better is better?
Now step back; I'm not asking you personally, but the general person. It is possible that you have the knowledge and skills to do the job, and so you know how to inspect it to ensure it was done right. However, the average person doesn't have those skills, and so won't be able to tell the well-dressed person who does a bad job that looks good from the poorly dressed person who does a good job that doesn't look as good.
Our issue was water intrusion along a side wall that was flowing under our hardwoods, warping them and causing them to smell. The first contractor replaced the floor and added in an outside drain.
The drain didn't work, and the water kept intruding and the floor started to warp again.
When we got multiple highly rated contractors out, all of them explained that the drain wasn't installed correctly, that a passive drain couldn't prevent the problem at that location, and that the solution was to either add an actively pumped drain or replace the lower part of the wall with something waterproof. We ended up replacing that part of the wall, and that has fixed the issue along that wall. (We now have water intrusion somewhere else, sigh).
If anything, I was originally biased for the cowboy, as they came recommended, he and his workers were nice, and the other options seemed too expensive & drastic. Now I've learned my lesson, at least about these types of trickier housing issues.
Also, no one mentioned evaluating someone by how they're dressed; the issue was family/friend recommendations vs online reviews, and while I do take recommendations from friends and family into account, I've actually had better luck trusting online (local) reviews.
LOL
For every standard to be met, you compromise either on cash or time.
because you know the brands and trust them, to a degree
you have prior experience with them
Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?
I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.
you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.
I'm using this
The modern software market actually seems like a total inversion of normal human bartering and trade relationships…
In Ye Olden Days, you go to the blacksmith, and buy some horseshoes. You expect the things to work, they are simple enough that you can do a cursory check and at least see if they are plausibly shaped, and then you put them on your horse and they either work or they don’t. Later you sell him some carrots, buy a pot: you have an ongoing relationship checkpointed by ongoing completed tasks. There were shitty blacksmiths and scummy farmers, but at some point you get a model of how shitty the blacksmith is and adjust your expectations appropriately (and maybe try to find somebody better when you need nails).
Ongoing contracts were the domain of specialists and somewhat fraught with risk. Big trust (and associated mechanics, reputation and prestige). Now we’re negotiating an ongoing contracts for our everyday tools, it is totally bizarre.
Nit: that is not how it worked. You took your horse to the blacksmith and he (almost always he; blacksmiths benefit from testosterone even if we ignore the rampant sexism) made shoes to fit. You knew it was good because the horse could still walk (if the blacksmith messes up, that puts a nail in the flesh instead of the hoof and the horse won't walk for a few days while it heals). In 1600 he made the shoes right there for the horse; in 1800 he bought factory-made horseshoes and adjusted them. Either way you never saw the horseshoes until they were on the horse, and your only check was that the horse could still walk.
Well, no worries. If you subscribe to the post+ service I’ll fix it in a couple years, promise.
That’s why we’re seeing so much semantic drift, too. The forms of credibility survive, but the intent behind them doesn’t. The system works, but the texture that signals real humans evaporates. That’s the trust collapse: over-optimized sameness drowning out the few cues we used to rely on.
I follow even AI slop via reddit RSS.
I control however what comes in.
I disagree with people suggesting removing noisy RSS feeds, as some are noisy yet sometimes useful. I think RSS clients need advanced filtering and search.
I use my own project for RSS https://github.com/rumca-js/Django-link-archive, but it should be considered 'alpha' as I move fast and break things. It provides functionality that I miss in most clients, and it is a web client, so I can access it from mobile and PC with no need for sync. I am not a front-end dev, so it does not look that appealing.
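As a rough illustration of the kind of filtering such a client could offer, here is a minimal sketch using only the Python standard library. The feed contents and keywords are invented for the example:

```python
# Keyword filtering over an RSS feed, stdlib only.
import xml.etree.ElementTree as ET

# Invented feed for illustration.
FEED = """<rss version="2.0"><channel>
  <item><title>Release notes 1.2</title><link>https://example.com/a</link></item>
  <item><title>AI slop roundup</title><link>https://example.com/b</link></item>
  <item><title>Useful RSS tricks</title><link>https://example.com/c</link></item>
</channel></rss>"""

def filter_items(feed_xml, block=(), require=()):
    """Keep item titles that avoid all `block` words and, if `require`
    is non-empty, contain at least one of them (case-insensitive)."""
    kept = []
    for item in ET.fromstring(feed_xml).iter("item"):
        title = item.findtext("title") or ""
        lower = title.lower()
        if any(word in lower for word in block):
            continue
        if require and not any(word in lower for word in require):
            continue
        kept.append(title)
    return kept

print(filter_items(FEED, block=("slop",)))  # ['Release notes 1.2', 'Useful RSS tricks']
```

A real client would filter whole entries (with links and dates) and persist rules per feed, but the keep/block split above is the core of it.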
The last five times I've looked at something in case it was a legitimate user email it was AI promotion of someone just like in the article.
Their only way to escalate, apart from pure volume, is to take pains to intentionally emulate the signals of someone who's a legitimate user needing help or having a complaint. Logically, if you want to pursue the adversarial nature of this farther, the AIs will have to be trained to study up and mimic the dialogue trees of legitimate users needing support, only to introduce their promotion after I've done several exchanges of seemingly legitimate support work, in the guise of a friend and happy customer. All pretend, to get to the pitch. AI's already capable of this if directed adeptly enough. You could write a script for it by asking AI for a script to do exactly this social exploit.
By then I'll be locked in a room that's also a Faraday cage, poking products through a slot in the door—and mocking my captors with the em-dashes I used back when I was one of the people THEY learned em-dashes from.
One thing about it, it's a very modern sort of dystopia!
But you can’t really even make the case to them anymore because like you said they can’t/won’t even read your email.
What mostly happens is they constantly provide free publicity to existing big players whose products they will cover for free and/or will do sponsored videos with.
The only real chance you have to be covered as a small player is to hope your users aggregate to the scale where they make a request often enough that it gets noticed and you get the magical blessing from above.
Not sure what my point is other than it kinda sucks. But it is what it is.
Make friends and work with people where possible. I get that some of this only works for us open source types, but the microphone guy isn't, he just did good work. I initially heard of his company through a pro sound engineer website, and ran with it when the advice turned out to be good.
In any case, I can’t complain anyway because I have received my share of favorable coverage. It is just less frequent when you don’t have the personal connections.
This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.
AI isn't the new reason for this. It's been getting worse and worse in the last few years, as people have been selling companies, not products; but AI will accelerate the race to the bottom. One of the things AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more so).
So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.
51 more comments available on Hacker News