Be Worried
Posted 3 months ago · Active 3 months ago
Source: dlo.me · Tech story · High profile · Tone: calm/mixed · Debate: 80/100
Key topics: AI, Social Media, Information Manipulation
The article 'Be Worried' discusses the potential risks of AI-generated content on the internet, and the HN discussion explores the implications, with some commenters expressing concern and others skepticism or dismissal.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 36m · Peak period: 91 comments (0-12h) · Avg per period: 20.4
Comment distribution: 102 data points, based on 102 loaded comments
Key moments
1. Story posted: Oct 3, 2025 at 1:02 PM EDT (3 months ago)
2. First comment: Oct 3, 2025 at 1:37 PM EDT (36m after posting)
3. Peak activity: 91 comments in the 0-12h window, the hottest stretch of the conversation
4. Latest activity: Oct 8, 2025 at 3:19 PM EDT (3 months ago)
For full context, read the primary article or the live Hacker News thread.

Highlights from the discussion
The people involved in making these decisions deserve to be locked up for life, and I'm sure they will be eventually.
The majority of people only have access to proprietary models, whose weights and training are closed source. The prospect of a populace that outsources all of its thinking to Google's LLM is horrifying.
Anyway, in terms of cultural change, I think the emerging image and video models will be a lot more disruptive. Text has been easy to fake for a while now, and barely gets people's attention anymore.
If we plot all of these on a scale of how much they impacted the day-to-day experience of an average user, there is something highly unusual about AI. The slop is everywhere; every single person who interacts with digital media is affected. I don't really know what this means, but it is pretty unusual compared with other fads.
0: https://deviantabstraction.com/2025/09/29/against-the-tech-i...
This reminds me of when everyone was saying that "everything on the internet is written in ink" - especially during the height of social media in the 2010s. So imagine my surprise in the first half of the 2020s when tons of content started getting effectively deleted from the internet - either through actual deletion or things like link rot. Heck, I literally just said "the height of social media" - even that has pulled back.
So yeah, remember that tech ultimately serves people. And it only persists so long as people are willing to enable it.
I suspect almost all of that data still exists - it just isn’t readily available.
In the desperate end-game of this most recent round of “it’s shit, but what if we collected enough of it?” every last bit of human generated content will be resurrected.
In this case, while it’s totally possible for this sort of data to still exist somewhere, I think the chances of it surfacing again in any accessible format are slim - purely because of the overall stupidity of the system. Keeping data “alive” for decades is a skill in itself, one that seems to happen only in heavily subsidized, “perfect” economic times (at least to the outside observer). Once the going gets tough, there isn’t really any business value in saving the data, and it likely gets deleted.
I want a platform that real humans, including some sizeable chunk of my social circle, look at, and is filled with real content.
I've been looking at using Photo.glass, but the subscription cost puts me off a bit emotionally, after years of the tech oligarchs telling us that 'social media is free.' Logically, though, I know it theoretically attracts a higher bar of photographers, ones willing to pay for entry and to support a new form of ad-free internet through that subscription - similar to the idea of paid search engines.
> I find my fear to be kind of an ironic twist on what the Matrix foresaw—the AI apocalypse we really should be worried about is one in which humans live in the real world, but with thoughts and feelings generated solely by machines. The images we see, the words we read, all generated with the intent to control us. With improved VR, the next step (out of the real world) doesn’t seem very far away, either.
Current humans can't even deal with the very simple and obvious issue of global warming. Thus it seems very unreasonable to expect any effective handling of significantly more complex issues. And so if not evolution, then at least very accelerated adaptation is in order.
The return to that world will be very painful and chaotic however.
I think a large portion of the population actively distrust experts.
It’s always been this way. That you thought otherwise is just evidence of how good a central power was at controlling “the truth”.
Trust doesn’t scale. There are methods that work better than others, but it’s a very hard problem.
I think this will create a push for going back to smaller “gated” communities: think phpbb forums from the early 2000s, maybe with invitation-only sign up (similar to lobste.rs, where somebody already in must invite you, and admins can track who-invited-who).
It would probably be a better experience overall.
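To make that concrete: a toy sketch of who-invited-who tracking in Python. The data model and field names here are invented for illustration, not lobste.rs's actual schema.

```python
# Toy sketch of lobste.rs-style invitation tracking: every account records
# who invited it, so moderators can trace (and prune) whole invite subtrees.
# All names and fields are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Member:
    username: str
    invited_by: str | None = None  # None for founding members
    invitees: list[str] = field(default_factory=list)

members: dict[str, Member] = {"admin": Member("admin")}

def invite(inviter: str, new_user: str) -> None:
    """Only an existing member can bring someone in; the edge is recorded."""
    members[new_user] = Member(new_user, invited_by=inviter)
    members[inviter].invitees.append(new_user)

def invite_chain(username: str) -> list[str]:
    """Walk back up the tree: who invited whom, all the way to a founder."""
    chain = [username]
    while (parent := members[chain[-1]].invited_by) is not None:
        chain.append(parent)
    return chain

invite("admin", "alice")
invite("alice", "bob")
print(invite_chain("bob"))  # ['bob', 'alice', 'admin']
```

The point of the structure is accountability: if a bad actor gets in, the whole subtree they invited can be reviewed at once.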
Generic "content" is that which fills out the space between the advertisements. That's never been good for you, whether written by humans or matrix multiplication.
Until the models are diluted to serve the true purpose of the thought control already in full effect in non-AI media, they're simply better for humanity.
[1]: https://www.psychologytoday.com/us/blog/urban-survival/20250...
As we all know, the longer the context, the worse the reply. I strongly recommend you delete your context frequently and never stay in one chat.
What I'm talking about is using fresh chat for questions about the world, often political questions. Grab statistics on something and walk through major arguments for and against an idea.
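As an illustration of that "fresh context per question" habit, here is a minimal sketch against the official openai Python client; the model id and prompt are placeholders, not a recommendation.

```python
# Minimal sketch: one stateless request per question, no accumulated history.
# Assumes the official `openai` package; the model id is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_fresh(question: str) -> str:
    """Each call sends only the new question, never any prior turns."""
    response = client.chat.completions.create(
        model="gpt-5-thinking",  # placeholder; substitute your model
        messages=[{"role": "user", "content": question}],  # fresh context
    )
    return response.choices[0].message.content

# A long-running chat would instead keep appending turns to `messages`,
# which is exactly the context growth the advice above warns against.
print(ask_fresh("Summarize the strongest arguments for and against X."))
```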
If you think ChatGPT is providing worse answers than X.com and reddit.com for political questions, quite frankly, you've never used it before.
Try it out. Go to reddit.com/r/politics and find a +5,000 comment about something, or go to x.com and find the latest elon conspiracy, and run it by ChatGPT 5-thinking-high.
I guarantee you ChatGPT will provide something far more intellectual, grounded, sourced and fair than what you're seeing elsewhere.
In my years of use and thousands upon thousands of chats, I have literally never seen ChatGPT provide a radical answer to a political question without me forcing it, heavy-handedly, to do so.
What we can do as technologists is establish clear norms around information junk food for our children and close acquaintances, and influence others to do the same.
It's not going to happen overnight -- as with many such things, I expect it'll take decades of mistakes followed by decades of repairing them. What we've learned from other such mistakes is that saying "feel bad about the dumb thing" ("be worried") is less effective than "here's a smart thing you can do instead".
I blame all the background stress and I think it’s a more important factor.
...generated by AI?
The issue is the blind trust that the internet is built on top of.
It was that initial good faith that made the internet a special thing - you could come online and discover all the weird, interesting things people were up to, and have conversations with real people that weren't possible in the real world.
At this point most platforms have figured out how to exploit and profit off the blind trust - but AI is threatening to annihilate it completely.
I'm not worried about the generic content, the worthwhile stuff is what's at risk.
I've always thought of it as kind of the opposite - the one information network that is pretty much unpoliced where anyone can publish any nonsense or lies they feel like. It works because people do have some ability to distinguish truth from lies and the open nature means people can publish truth too.
In a pre-AI era, this is a reasonable heuristic.
But I think everyone has their own internal barometer - how much trash are they willing to sift through for gold?
I'm concerned that AI will poison the well to such an extent that people stop trusting, and subsequently stop using, the web altogether. This isn't only a sad state of affairs for the average user, but ties into an existential risk for the business model of the entire tech industry.
I've actually found the LLMs quite good for debunking false facts. It's quite funny seeing Elon Musk interact with Grok when it points out his untruths.
Those arguments looked incredibly weak and stupid when they were making them, and they look even stupider now.
And this isn't even their biggest error, which, in my opinion, was classifying AI as a bigger existential risk than climate change.
An entire generation of putatively intelligent people lost in their own nightmares, who, through their work, have given birth to chaos.
Human extinction won't happen until a couple of years later, with stronger AI (if it does happen, which I unfortunately think it will, if we remain on our current trajectory).
Neat, go write science fiction.
Hundreds of billions of dollars are currently being lit on fire to deploy AI datacenters while there's an ecosystem destabilizing heat wave in the ocean. Climate change is a real, measurable, present threat to human civilization. "Strong AI" is something made up by a fan fiction author. Grow up.
Everything about every part of AI in 2025 sounds exactly like science fiction in every way. We are essentially living in the exact world described in science fiction books this very moment, even though I wish we didn't.
Have you ever used an AI chatbot? How is that not exactly like something you'd find in science fiction?
The thing to ask yourself: does what I'm reading provide any value to me? If it does, then what difference does it make where it comes from?
You're absolutely right!
But seriously, if you don't know that it's incorrect information, it does make a difference. Knowing it was produced by AI at least gives you foreknowledge that it may include hallucinations.
Bless the author's heart.
All the major social media apps have been doing machine learning-driven getNext() for years now. Well before LLMs were even a thing. The Youtube algorithm was doing this a decade ago. This isn't on the horizon, we've already drowned in it.
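In caricature, that getNext() is just "rank the candidates by predicted engagement and serve the winner." A hedged sketch, with made-up features and weights:

```python
# Illustrative sketch of an engagement-driven getNext(): rank candidate items
# by a predicted-engagement score. The features and weights are invented;
# real systems use large trained models and tune the objective constantly.
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    predicted_watch_time: float  # seconds, from some trained model
    predicted_like_rate: float   # 0..1

def engagement_score(item: Item) -> float:
    # Arbitrary linear blend for illustration only.
    return item.predicted_watch_time + 30.0 * item.predicted_like_rate

def get_next(candidates: list[Item]) -> Item:
    """Serve whichever item the model expects to hold attention longest."""
    return max(candidates, key=engagement_score)

feed = [
    Item("cat-video", predicted_watch_time=22.0, predicted_like_rate=0.10),
    Item("outrage-clip", predicted_watch_time=45.0, predicted_like_rate=0.04),
]
print(get_next(feed).id)  # "outrage-clip": engagement wins, not quality
```

Note that nothing in the objective cares whether the item is true, human-made, or good for you; that is the commenter's point.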
As I understand it:
1. Because machine-generated content is not as good. Recent releases are (IMHO) showing obvious and significant improvements over last year's SOTA tech, indicating that the field is still very green. As long as machine-generated content is distinguishable, as long as there are quirks in there that we easily notice, of course it'll be less preferable.
2. Our innate "our vs foreign" biases. I suspect that until something happens to our brains, we'll always tend to prefer "human" to "non-human", just like we prefer "our" products (for an arbitrary definition of "our" that varies drastically across societies, cultures and individuals) to other products, because we love mental binary partitioning.
Most of the content is basically Idiocracy's "Ow my balls".
A woman in front of me had her phone cradled in both hands, with index and thumb from both hands on the screen - one hand was scrolling and swiping and the other one was tapping the like and other interaction buttons. It was at such a speed that she would seemingly look at two consecutive posts in 1 second and then be able to like or comment within an additional second.
It left me really shaken as to what the actual interaction experience is like if you’re trying to consume short form content but you’re only seeing the first second before you move on.
It explains a lot about how thumbnails and screenshots and the beginnings of videos have evolved over time in order to basically punch you right in the face with what they want you to know.
It’s really quite shocking the extent to which we’re at the lowest possible common denominator for attention and interaction.
People have been manipulated since forever, and coerced before that. You used to be burned or hanged if your opinions differed even a little from orthodoxy (and orthodoxy could change in a span of a couple of years!)
AI slop is mostly noise. It doesn't manipulate, it makes thinking a little more difficult. But so did TV.
There was/is a relatively small number of channels you have access to, and effectively all your neighbours and friends see the same content.
Short form video took this to the extreme by figuring out what specific content you like and feeding you just that - as a result, people spend significantly more time watching TikTok and YouTube than they (or the previous generation) did with TV. TV was also often on in the background, not really actively watched, which is not the case on the internet.
Now, once you put AI generated content there combined with AI recommendation systems, this problem becomes even worse - more content, faster feedback loop, infinite amount of "creators" tailored to what your sweet spot is.
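A toy model of that feedback loop, with invented topics and an invented update rule: watch time reinforces a per-user weight, the weight biases the next pick, and the loop narrows.

```python
# Minimal caricature of the recommendation feedback loop described above.
# Topics, weights, and the update rule are all made up for illustration.
import random

user_prefs = {"cats": 1.0, "politics": 1.0, "diy": 1.0}

def recommend() -> str:
    # Sample a topic proportionally to the learned preference weights.
    topics, weights = zip(*user_prefs.items())
    return random.choices(topics, weights=weights, k=1)[0]

def record_watch(topic: str, seconds_watched: float) -> None:
    # Longer watches amplify that topic; the loop narrows over time.
    user_prefs[topic] += 0.1 * seconds_watched

for _ in range(100):
    topic = recommend()
    record_watch(topic, seconds_watched=random.uniform(0, 30))

print(user_prefs)  # one topic typically dominates after enough iterations
```

AI-generated content just removes the supply constraint on this loop: there is always another item tailored to the dominant weight.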
Not until you start mass-producing fake photos, fake videos, fake audio, put all of it into social media, shake shake shake.
Sorry, but when you make claims like this, it just tells me that you are not very familiar with popular culture. Most people hate AI content and at best find it a meme-esque joke. And young people increasingly get their news from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI) as possible in order to get followers. Platforms like YouTube do not benefit from their library being entirely composed of AI slop, and so will be implementing ways to filter AI content from "real people" content.
Ultimately AI tools are mostly going to be useful in situations where the author doesn't matter: sports scores, stock headlines, etc. Everything else will likely be out-competed by actual humans being human.
I think you're overgeneralising here. People don't hate AI content. Just content so low quality that they recognise it as AI. This is not universal and the recognition will drop further: https://journals.sagepub.com/doi/10.1177/09567976231207095
> from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI)
AI content can be just as unique. It's not all-or-nothing. People can inject a specific style and direction into otherwise generated content to keep it on brand.
At best you’re going to get some generically anonymous bot pretending to be human, that has limited reach because they don’t actually exist in the real world. Much of the media influence game involves podcasts, events, interviews, and a host of other things that can’t be faked.
I just don’t really see what scenario the doomsayers are imagining here. An entire media sphere of AIs that somehow shift public opinion without existing or interacting with the real media world? The practicalities don’t make sense.
Have you not been following how fast video gen is improving? We're not far off convincing fake video interviews.
The backlash only happens when people can tell it's AI.
So someone makes a fake video of X famous person saying an absurd thing on Joe Rogan’s podcast.
It’s not on the official Rogan account, but just on some low quality AI slop channel. Maybe it fools a handful of people…but who cares? People are already pretty trained to be skeptical of video.
I think we’ll mostly just see a focus on identity verification. If the content isn’t verified as being by the real person, it’ll just be treated as mindless entertainment.
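For what identity verification could look like at the content level, here is a minimal sketch using ed25519 signatures from the `cryptography` package; real systems would also need key distribution and platform support, which this hand-waves.

```python
# Hedged sketch of content identity verification: a creator signs their
# upload with a private key, and anyone can check it against the creator's
# published public key. Uses the real `cryptography` package; key handling
# and distribution are deliberately simplified.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()
public_key = creator_key.public_key()  # published on the creator's profile

video_bytes = b"...raw video file contents..."
signature = creator_key.sign(video_bytes)

def is_verified(content: bytes, sig: bytes) -> bool:
    """True only if the content was signed by the holder of the private key."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_verified(video_bytes, signature))          # True
print(is_verified(b"tampered content", signature))  # False
```

Unsigned or unverifiable content would then default to the "mindless entertainment" bucket the comment describes.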
It's fairly trivial to write code that autogenerates hundreds or even thousands of AI videos using Veo 3, each with individual characters pushing any narrative you'd like, and publishes them to Instagram or TikTok.
That's way scarier to me than a newspaper having a bias, or someone with an audience publishing a controversial blog post.
Really? Because I still see blatantly obvious AI-generated results in web searches all the time.
(Also, a lot of AI operators come across like they wouldn't be capable of fixing those issues even if they cared.)
and rejecting manipulation from a deontological stance reduces agency and output for doing good in the real world
manipulation = campaigns = advertisements = psyops (all the same, different connotations)
The article never actually backs this up.
The "and their handlers" part is the part I find frightening. I would actually be less concerned if the AIs were autonomous.
Reminds me of a random podcast I heard once where someone was asked: "if you woke up in the middle of the night and saw either a random guy or a grey alien in your bedroom, which would scare you more?" The person being interviewed said the dude, and I 100% agree. AI as proxy for oligarchs is much scarier than autonomous alien AI.
There's an old fable about this, "The Boy Who Cried Wolf," about people adapting to false claims. They just discount the source, which is what is going to happen with social media once it is dominated by AI slop. Nobody will find it worth anything anymore, and the empires will melt down. I'm not on any of the big social sites, but I'm already watching a lot less on YouTube, basically only watching channels that I know to be real people. Outside of those, my recommendations are mostly AI garbage now.
Great quote, which should be obvious when we look at our 'leaders', especially in recent history.
A more immediate notion, perhaps, but definitely not scarier than human extinction.
I don't really know what to do about it. Even with ground rules of engagement, we all still have to participate in a larger culture where it seems like a runaway guarantee that LLMs erode more critical skills, leaving us with less and the handful of companies who develop this tech with more.
I'm slowly changing my life around what LLMs tell me, but not necessarily in the ways you'd expect:
1. I have a very simple set of rules of engagement for LLMs. For work, I don't let LLMs write code, and I won't let myself touch an LLM before suffering on a project for an hour at least.
2. I am an experienced meditator with a lot of experience in the Buddhist tradition. I've dusted off my Christian roots and started exploring these ideas with new eyes, partially through a James Hillman-esque / Rob Burbea Soulmaking Dharma lens. I've found a lot of meaning in personal fabrication and myth, and my primary practice now is Centering Prayer.
3. I've been working for a little while on a personal edu-tech idea with the goal of using LLM tech as an auxiliary tech to help people re-develop lost metacognitive skills and not use LLMs as a crutch. I don't know if this will ever see the light of day, it is currently more of a research project than anything, and it has a certain kind of iconoclastic frame like Piotr Wozniak's around what education is and what it should look like.