The Case Against Social Media Is Stronger Than You Think
Key topics
The article argues that social media harms society. The discussion mixes agreement and criticism, with many commenters sharing personal experiences and concerns about social media's effects.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion: 160 loaded comments, averaging 16 per period
- First comment: 36m after posting
- Peak period: 74 comments in 0-6h
Key moments
- Story posted: Sep 13, 2025 at 2:39 PM EDT
- First comment: Sep 13, 2025 at 3:15 PM EDT (36m after posting)
- Peak activity: 74 comments in 0-6h, the hottest window of the conversation
- Latest activity: Sep 16, 2025 at 1:23 AM EDT
> I am going to focus on the putative political impacts of social media
I closed the tab.
The country has always been hostile to “other”. People just have a larger platform to get their message out.
Unfortunately algorithmic social media is one of the factors adding fuel to the fire, and I believe it’s fair to say that social media has helped increase polarization by recommending content to its viewers purely based on engagement metrics without any regard for the consequences of pushing such content. It is much easier to whip people into a frenzy this way. Additionally, echo chambers make it harder for people to be exposed to other points of view. Combine this with dismal educational outcomes for many Americans (including a lack of critical thinking skills), our two-party system that aggregates diverse political views into just two options, a first-past-the-post election system that forces people to choose “the lesser of two evils,” and growing economic pain, and these factors create conditions that are ripe for strife.
Saying social media fans the flames is like saying ignorance is bliss. Mainstream media (cable news, radio, newspapers, etc) only gives us one, largely conservative, viewpoint. If you're lucky, you'll get one carefully controlled opposing viewpoint (out of many!). As you say, our choices are usually evil and not quite as evil.
Anger is not an unreasonable reaction when you realize this: that other viewpoints exist, that the mainstream media and politicians are acting in no one's best interest but their own, and that there really are other options (politically, for news, etc.). Social media is good at bringing these things to light.
There are no easy fixes to the divides you're talking about, but failing to confront them and just giving in to the status quo, or worse, continuing down our current reactionary trajectory, is probably the worst way to approach them.
It was the current President of the US who led the charge claiming that a Black man running for President wasn't a "real American" and was a secret Muslim trying to bring Sharia law to the US, and close to half of the US was willing to believe it.
https://www.youtube.com/watch?v=WErjPmFulQ0
This was before social media. In the northern burbs of Atlanta, where I had a house built in 2016, we didn't have a problem during the seven years we lived there. But do you think they were "polarized" by social media in the 80s?
It's just like how police brutality didn't start with the rise of social media. Everyone just has cameras and a platform now.
https://en.wikipedia.org/wiki/Rwandan_genocide#Radio_station...
The radio didn't create the divide, and it wasn't the sole factor in the genocide, but it ingrained in the population a sense of urgency about eliminating the Tutsi, along with a stream of mostly fake news to show that the other side was already committing atrocities against Hutus.
When the genocide happened, it was fast and widespread: people would start killing their own neighbors at scale. In 100 days, a million people were killed.
The trouble with social media is that the platforms have somehow managed to shield themselves from the legal repercussions of heavily promoting content similar to what RTLM broadcast. For example, see the role of Facebook and its algorithmic feed in the genocide in Myanmar:
https://systemicjustice.org/article/facebook-and-genocide-ho...
It's insane that they can get away with it.
History has shown people don’t need a reason to hate and commit violence against others.
Propaganda and ideology were a major part of the Nazi rise to power.
Marx, Engels, and Mussolini were all in the newspaper business. Jean-Paul Marat's newspaper was very influential in promoting the French Reign of Terror, with some claiming he was directly responsible for the September Massacres. Nationwide propaganda was a top priority from day one for Lenin and his successors in Soviet Russia.
Similarly with the Cambodian genocide, Great Leap Forward, Holodomor, etc.
Propaganda even played a big role in Julius Caesar's campaign against the Gauls, some two millennia before social media.
Before then we had the "Willie Horton" ads. Not to mention that Clinton performatively oversaw the execution of a mentally impaired Black man to show that he was tough on crime.
https://jacobin.com/2016/11/bill-clinton-rickey-rector-death...
Yes, I know that Obama was also a champion of laws like the Defense of Marriage Act. We have always demonized the "other" in this country. It was just hidden before.
Right now the Supreme Court said that ICE could target people based on the color of their skin, and it's not as if Obama won the hearts and minds of the states where Jim Crow was the law of the land in the 60s.
https://news.gallup.com/poll/1687/race-relations.aspx
Easy to cherry-pick stuff. You can cherry-pick Jim Crow south; I can cherry-pick Chicago in the 90s:
https://www.youtube.com/watch?v=rDmAI67nBGU
I think we have to get past black-and-white thinking and see it as a matter of degree. With 340 million people in the USA, realistically, at least a few of them will always be racist. The question is how powerful and influential the racists are. That's a question which social media feeds into.
There's a huge difference between "a few people being racist" and laws enforcing segregation and laws against interracial marriage.
The racists have always been in power. You can look at the justice system, the disparity between sentencing for the same crimes across races etc.
The Supreme Court said you can’t use race as a basis for college admissions. But you can use it as a basis for arresting someone.
Fox News is the most popular news network and isn’t part of social media.
Why are people who call themselves "progressive" so obsessed with events from half a century ago?
>The racists have always been in power.
It amazes me how quickly people forgot that we had 8 years of Obama. That was a lot more recent than racial segregation.
>the disparity between sentencing for the same crimes
The vast majority of this disparity seems to go away when you control for arrest offense, criminal history, etc.: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1985377
>The Supreme Court said you can’t use race as a basis for college admissions. But you can use it as a basis for arresting someone.
Well yeah, if someone fits the description of a criminal suspect, why not?
>Fox News is the most popular news network and isn’t part of social media.
When's the last time Fox News advocated for segregation or laws against interracial marriage?
IMO you've been making some very handwavey arguments which are collapsing important distinctions.
In any case... you can see from Table 13 in this PDF (page 13) that the rate of black-on-white crime is over 3x the rate of white-on-black crime: https://bjs.ojp.gov/document/cv23.pdf
This isn't something which happened 60 years ago. This is data from 2023. It's more recent / greater in magnitude than most of the points you've been making. So, would it be fair to conclude that black people in the US are hostile to the "other", akin to the conclusion you made in your original comment?
> It amazes me how quickly people forgot that we had 8 years of Obama. That was a lot more recent than racial segregation.
You mean the same 8 years that a large part of the country was saying he wasn’t really an American and that he was a secret Muslim wanting to bring Sharia law?
> Well yeah, if someone fits the description of a criminal suspect, why not?
So you're okay with harassing all Hispanics because they "fit the description"? Including American citizens?
Let’s look at what the government data says about discrepancy in sentencing…
https://www.ussc.gov/sites/default/files/pdf/research-and-pu...
https://www.sentencingproject.org/reports/one-in-five-racial...
https://www.ussc.gov/sites/default/files/pdf/research-and-pu...
>So you’re okay with harassing all Hispanics because they “fit the description.”? Including American citizens?
It's not the policy I would pursue if I were president. But I also wouldn't consider it harassment if a cop asked to see my ID.
If there's a community that has been breaking the law on a massive scale, there should be more shame associated with that lawbreaking than with enforcing the law. How are you going to have a functional society if there is more shame for enforcing the law than for breaking it?
Your first link says that black men receive sentences that are 13.4% longer than white men. I think we should work to reduce that, but it's less than half the size of the male/female sentencing disparity from the same source (29.2%), and it's nothing compared to the 226% disparity in cross-racial crime victimization.
So exactly what should Puerto Ricans who are in the continental United States do to prevent being detained by ICE?
What should my six foot 2 Black stepson living in a lily white suburb of Atlanta GA (my wife and I moved to another state for reasons) do differently? Should he go to the inner city and change hearts and minds?
And you keep changing the subject. I am referring to how the government targets people: the only people who have qualified immunity and can take away an individual's rights. A random Black or White person can't legally detain me or stop me, not even in GA anymore, after the Republican governor outlawed citizen's arrest when it was used to harass and kill an unarmed Black man walking down the street (yes, it was caught on video).
https://en.m.wikipedia.org/wiki/Murder_of_Ahmaud_Arbery
Why do I find it hard to believe that you would be okay being randomly stopped and harassed walking down the street? I now live in a major, diverse city. I can see it now: I, a Black guy born in south GA, being detained by ICE if they raid the Hispanic barber shop I go to, where three of the barbers are from Puerto Rico, because they hear me making small talk in Spanish.
Show their ID? https://www.tsa.gov/news/press/releases/2025/04/24/countdown...
>What should my six foot 2 Black stepson living in a lily white suburb of Atlanta GA (my wife and I moved to another state for reasons) do differently? Should he go to the inner city and change hearts and minds?
What should I as a white guy do differently? Go to the prosecutor's office and change hearts and minds?
(Also, if your stepson is so worried about white people, why does he live in a "lily white" area?)
>you keep changing the subject
I've been responding to claims you made in other comments: 'The country has always been hostile to “other”.' and 'The racists have always been in power.'
>the Republican governor outlawed citizen’s arrest after it was used to harass and kill an unarmed black man walking down the street
I suppose this is more evidence that "The racists have always been in power"? Republicans are the KKK party. That's why they outlawed citizen's arrest after the death of a black man...
I'll let you have the last word in this thread.
And a consequence of this is that some people’s perspective of the scale of the nation’s hostilities is limited to the last 5 years or so.
A lot of things suck right now. Social media definitely gives us the ability to see that. Using your personal ideology to link correlations is not the same thing as finding causation.
There will undoubtedly be some damaging aspects of social media, simply because it is large and complex. It would be highly unlikely for all those factors to always align in the direction of good.
All too often, a collection of cherry-picked studies is presented in books targeting the worried public. This can build a public opinion that is at odds with the data. Some people write books just to express their ideas. Others, like Jonathan Haidt, seem to think that putting their efforts into convincing as many people as possible of their ideology is preferable to putting effort into demonstrating that their ideas are true. There is a growing notion that perception is reality: convince enough people and it is true.
I am prepared to accept aspects of social media are bad. Clearly identify why and how and perhaps we can make progress addressing each thing. Declaring it's all bad acts as a deterrent to removing faults. I become very sceptical when many disparate threads of the same thing seem to coincidentally turn out to be bad. That suggests either there is an underlying reason that has been left unstated and unproven or the information I have been presented with is selective.
We have evolved to parse information as if its prevalence is controlled by how much people talk about it, how acceptable opinions are to voice, how others react to them. Algorithmic social media intrinsically destroy that. They change how information spreads, but not how we parse its spread.
It's parasocial at best, and very possibly far worse at worst.
When each person is receiving a personalised feed, there is a significant loss of common experience. You are not seeing what others are seeing and that creates a loss of a basis of communication.
I have considered the possibility that the solution might be to enable many areas of curation, but where, within each domain, the thing people see is the same for everyone. In essence, subreddits. The problem then becomes the nature of the curators; subreddits show that human curators are also not ideal. Is there an opportunity for public algorithm curation? You subscribe to the algorithm itself and see the same thing as everyone else who subscribes. The curation is neutral (but will be subject to gaming; the fight against bad actors will be perpetual in all areas).
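The "subscribe to the algorithm itself" idea can be sketched in a few lines. Everything below (the registry, the post fields, the two example algorithms) is invented for illustration, not a description of any real platform:

```python
# Hypothetical sketch of public algorithm curation: each feed algorithm is a
# named, deterministic function over a shared pool of posts, so every
# subscriber to the same algorithm sees the identical feed.
from typing import Callable, Dict, List

Post = dict  # e.g. {"title": ..., "timestamp": ..., "upvotes": ...}
Algorithm = Callable[[List[Post]], List[Post]]

# A public registry of curation algorithms; in practice each entry would be
# open source and auditable.
REGISTRY: Dict[str, Algorithm] = {
    "chronological": lambda posts: sorted(
        posts, key=lambda p: p["timestamp"], reverse=True),
    "most_upvoted": lambda posts: sorted(
        posts, key=lambda p: p["upvotes"], reverse=True),
}

def feed_for(algorithm_name: str, pool: List[Post]) -> List[Post]:
    # No per-user personalisation: the output depends only on the chosen
    # algorithm and the shared pool, so all subscribers see the same thing.
    return REGISTRY[algorithm_name](pool)

pool = [
    {"title": "A", "timestamp": 1, "upvotes": 50},
    {"title": "B", "timestamp": 2, "upvotes": 10},
]
print([p["title"] for p in feed_for("chronological", pool)])  # → ['B', 'A']
print([p["title"] for p in feed_for("most_upvoted", pool)])   # → ['A', 'B']
```

The key design property is determinism: two users subscribed to the same registry entry get byte-identical feeds, which restores the common basis of communication the comment above describes.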
I agree about the tendency for the prevalence of conversation to influence individuals, but I think it can be resisted. I don't think humans live their lives controlled by their base instincts, most learn to find a better way. It is part of why I do not like the idea of de-platforming. I found it quite instructional when Jon Stewart did an in-depth piece on trans issues. It made an extremely good argument, but it infuriated me to see a few days later so many people talking about how great it was because Jon agreed with them and he reaches so many people. They completely missed the point. The reason it was good is because it made a good case. This cynical "It's good if it reaches the conclusion we want and lots of people" is what is destroying us. Once you feel like it is not necessary to make your case, but just shout the loudest, you lose the ability to win over people who disagree because they don't like you shouting and you haven't made your case.
Doesn't this already happen to some extent, with content being classified into advertiser-friendly bins and people's feeds being populated primarily by top content from within the bins the algorithm deems they have an interest in?
> Once you feel like it is not necessary to make your case, but just shout the loudest, you lose the ability to win over people who disagree because they don't like you shouting and you haven't made your case.
To some extent, this is how human communication always worked. I think the biggest problem is that the digital version of it is sufficiently different from the natural one, and sufficiently influenceable by popular and/or powerful actors, that it enables very pathological outcomes.
The distinction, I think, would be that with publicly disclosable algorithms you would at least know why you were receiving a particular thing and have the option to not subscribe to that particular algorithm. Ideally such things would be properly open source. Once public, algorithms are subject to gaming; open source provides the many eyes and feedback required to stay ahead of the bad actors.
Chronological order: promotes spam, which will be mostly paid actors. Manual curation by "high-quality, trusted" curators: who are they, and how will they find content? Curation by friends and locals: this is probably an improvement over what we have now, but it's still dominated by friends and locals who are more outspoken and charismatic; moreover, it's hard to maintain, because curious people will try going outside their community, especially those who are outcasts.
EDIT: Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to.
If users choose who to follow, this is hardly a problem. Also, classical forums dealt with spam just fine.
Unfortunately, classical forums may have dealt with spam better because there were fewer people online back then. Classical forums that exist today have mitigations and/or are overrun with spam.
Err... well, no, it was always a big problem, still is, and is made even more so by the technology of our day.
Because all new accounts need to be verified by an actual human, we can filter out 99% of spam before other users see it, and between a dozen mods for a community of 140k people we only need to spend ~15 minutes a week cleaning out spam.
This is exactly why it's a problem. It doesn't even matter whether the algorithm is trained specifically on negative content. The result is the same: negative content is promoted more because it sees more engagement.
The result is more discontent in society, people are constantly angry about something. Anger makes a reasonable discussion impossible which in turn causes polarisation and extremes in society and politics. What we're seeing all over the world.
And the user-sourced content is a problem too, because anyone can use it to run manipulation campaigns. At least with traditional media there was an editor who would make sure fact-checking was done. The social media platforms don't stand behind the content they publish.
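The mechanism described in this thread, a sentiment-blind ranker that still ends up promoting negative content, can be illustrated with a toy example. The posts, weights, and scores below are all invented for the sketch:

```python
# Hypothetical illustration: a ranker that sorts purely by engagement and
# never looks at sentiment, yet surfaces the negative post first because
# outrage drives comments and shares.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    sentiment: str  # never consulted by the ranker

def engagement_score(p: Post) -> int:
    # Comments and shares weighted above likes, an assumed (but common)
    # pattern in engagement-based ranking.
    return p.likes + 3 * p.comments + 5 * p.shares

feed = [
    Post("Local park cleanup a success", 120, 4, 2, "positive"),
    Post("Outrage over council decision", 95, 80, 40, "negative"),
    Post("New bakery opens downtown", 60, 6, 1, "positive"),
]

ranked = sorted(feed, key=engagement_score, reverse=True)
# The negative post tops the feed even though sentiment was never an input:
# its extra comments and shares dominate the score.
print([p.sentiment for p in ranked])  # → ['negative', 'positive', 'positive']
```

This is the point made above: the algorithm needs no explicit bias toward negativity; optimizing for engagement alone reproduces the effect.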
Message boards have existed for a very long time, maybe you're too young to remember, but the questions you're raising have very obvious answers.
They're not without issues, but they have a strong benefit: everyone sees the same thing.
You start with almost nothing on a given platform but over time you build up a wide variety of sources that you can continue to monitor for quality and predictive power over time.
More and more people declaring it's net-negative is the first step towards changing anything. Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum.
(Or we could argue that "social media" in the Facebook-era sense is just one part of a larger entity, "the internet," that we're singling out.)
Net-negative is not quantifiable. But it is definitely qualifiable.
I don't think you have to frame things in terms of "hate it more than I like it" when you have actual examples on social media of children posting self-harm and suicide, hooliganism and outright crimes posted for viewership, blatant misinformation proliferation, and the unbelievably broad and deep effect powerful entities can have on public information/opinion through SM.
I think we can agree all of these are bad, and a net-negative, without needing any mathematical rigor.
>I don't think you have to think of things in terms of "hate it more than I like it" when you have actual examples on social media of children posting self-harm and suicide, hooliganism and outright crimes posted for viewership, blatant misinformation proliferation, and the unbelievable broad and deep affect powerful entities can have on public information/opinion through SM.
Sure, and then there's plenty of children not posting self-harm and suicide, hooliganism and outright crimes posted for viewership, and plenty of information and perfectly normal, non-harmful communication and interaction. "net-negative" implies there is far more harmful content than non-harmful, and that most people using social media are using it in a negative way, which seems more like a bias than anything proven. I can agree that there are harmful and negative aspects of social media without agreeing that the majority of social media content and usage is harmful and negative.
I'm old enough to have lived as an adult pre-SM, and from my perspective the overwhelming impact of social media has been more inflammatory, degrading, divisive, etc., etc., etc., than whatever positives you think you're getting.
A family friend's teenage daughter isn't allowed a cell phone, and thus has zero presence or view into SM spaces. Unlike nearly all her friends, she doesn't suffer from depression, anxiety, or any other common malady that is so prevalent today with the youth. Yes it's anecdotal, but it's also stark.
We got along just fine before SM, and we'd be just fine again without it.
A lot of people using social media aren't teenagers. A lot of teenagers are depressed and anxious for reasons other than using social media. A lot of teenagers use social media and aren't depressed and anxious because of it. A lot of teenagers find community and support for their issues through social media. Your extrapolation from a sample size of "one teenage girl and her friends that I'm aware of" to the billions of people currently using social media, and your conclusion that social media is responsible for all of the maladies common to youth doesn't really mean much.
The reality is social media today lacks most of the rigor and accuracy that traditional media needed to be trustworthy. There's virtually no vested interest in anyone on social media being honest and forthright about anything.
Your second paragraph is simply your perspective (and full of broad statements), and like you say, your opinion on that matter doesn't mean any more to me than apparently mine to you.
Yet here we are, with more depression, anxiety, and civil unrest nationally than we've had since probably Vietnam. At least all that unrest is what I see predominantly on SM.
Traditional media is the absolute worst possible source for anything related to social media because of the extreme conflict of interest. Decentralised media is a fundamental threat to the business model of centralised media, so of course most of the coverage of social media in traditional media will be negative.
Which traditional media outlets follow those things nowadays? Genuine question, looking for information and news to consume.
What's interesting is that those opinions are taken at face value without anyone ever doing any practical evaluation of traditional media outlets.
The reality is, if you ever read any alt-news publication, it becomes evident extremely quickly how devoid of standards those publications actually are.
The mainstream media have several sources, including the press releases that get sent to them, the newswires they get their main news from and social media.
In the UK the press, in particular, the BBC, were early adopters of Twitter. Most of the population would not have heard of it had it not been for the journalists at the BBC. The journalists thought it was the best thing since the invention of the printing press. Latterly Instagram has become an equally useful source to them and, since Twitter became X, there is less copying and pasting tweets.
The current U.S. President seems capable of dictatorship via social media, so following his messages on social media is what the press do. I doubt any journalist has been on whitehouse.gov for a long time, the regular web and regular sources have been demoted.
The appropriate place to find out what is and isn't true is research. Do research, write papers, discuss results, resolve contradictions in findings, reach consensus.
The media should not be deciding what is true, they should be reporting what they see. Importantly they should make clear that the existence of a thing is not the same thing as the prevalence of a thing.
>Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum.
I think much of my post was in effect saying that a good deal of the problem is the belief that building political momentum is more important than accuracy.
Summaries with links here. https://socialmediavictims.org/effects-of-social-media/
It's really not hard to confirm this.
The problem isn't that "building political momentum is more important than accuracy", it's that social media is a huge global industry that pumps out psychological, emotional, and political pollution.
And like all major polluters, it has a very strong interest in denying what it's doing.
I don't want to have to do a literature review again, and sharing papers is hard because they are often paywalled unless you are associated with a university or are willing to pirate them.
Luckily, The American Psychological Association [0] has shared this nice health advisory [1] which goes into detail. The APA has stewarded psychology research and communicated it to the public in the US for a long time. They have a good track record.
[0]: https://en.wikipedia.org/wiki/American_Psychological_Associa...
[1]: https://www.apa.org/topics/social-media-internet/health-advi...
Largely I don't think the media has been dictating anything. They've just been reporting on the growing body of evidence showing that social media is harmful.
What you'd call "trial by media" is just spreading awareness and discussion of the evidence we have so far which seems like a very good thing. Social media moves faster than scientific consensus, and there's a long history of industry doing everything they can to slow that process down and muddy the waters. We've seen facebook doing exactly that already by burying child safety research.
A decade or more of "Do thing, say nothing" is not a sound strategy when the alternative is letting the public know about the existing research we have showing real harms and letting them decide for themselves what steps to take on an individual level and what concerns to bring to their representatives who could decide policy to mitigate those harms or even dedicate funding to further study them.
I don’t know how I’d state or prove a single underlying reason why most vices are attractive-while-corrosive and still, on the whole, bad. It feels like priests and philosophers have tried for the whole human era to articulate a unified theory of exactly why, for example, “vanity is bad”. But I’m still comfortable saying gambling feels good and breaks material security, lust feels good and breaks contentment (and sometimes relationships), and social media feels good and breaks spirits.
I certainly agree that “social media” feels uncomfortably imprecise as a category—shorthand for individualized feeds, incentives toward vain behavior, gambling-like reinforcement, ephemerality over structure, decontextualization, disindividuation, and so on; as well as potentially nice things like “seeing mom’s vacation pics.”
If we were to accept that social media in its modern form, like other vices, “feels good in the short term and selectively stokes one’s ego,” would that be enough of a positive side to accept the possibility for uniformly negative long-run effects? For that matter, and this is very possible—is there a substantial body of research drawing positive conclusions that I’m not familiar with?
Few hot-button social issues are resolved via research, and I'm not sure they should be. On many divisive issues in social sciences, having a PhD doesn't shield you from working back from what you think the outcome ought to be, so political preferences become a pretty reliable predictor of published results. The consensus you get that way can be pretty shoddy too.
More importantly, a lot of it involves complex moral judgments that can't really be reduced to formulas. For example, let's say that on average, social media doesn't make teen suicides significantly more frequent. But are we OK with any number of teens killing themselves because of Instagram? Many people might categorically reject this for reasons that can't be dissected in utilitarian terms. That's just humanity.
I accept that "net-negative" is a cultural shorthand, but I really wish we could go beyond it. I don't think people are suddenly looking at both sides of the equation and evaluating rationally that their social media interactions are net negative.
I think what's happening is a change in the novelty of social media. That is, the net value is changing. Originally, social media was fun and novel, but once that novelty wears away it's flat and lifeless. It's sort of abstractly interesting to discuss tech with like-minded people on HN, but once we get past the novelty, I don't know any of you. Behind the screen names is a sea of unidentifiable faces that I have to assume are like-minded to have any interesting discussions with, but which are most certainly not like me at all. It's endless discussions with people who don't care.
I think that's what you're seeing. A society caught up in the novelty, losing that naive enjoyment. Not a realization of net effects.
The situation you reference with regard to Israel/Gaza is only possible because TikTok is partially controlled by Chinese interests. But it also goes to show that TikTok could easily have been banned or censored by Western governments. Just kick them off the app stores and block the servers. For example, there is no Net Neutrality protection in the USA that would defend them if the government wanted to quietly throttle their network speed.
Social media as it exists now is not decentralized in any meaningful capacity.
I will say this, and this is anecdotal, but other events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media, and how much social media does to amp up the anger and tone of people. When I open Twitter, or Facebook, or Instagram, or any of the smaller networks I see people baying for blood. Quite literally. But when I talk to my friends, or look at how people are acting in the street, I don't see that. I don't see the absolute frenzy that I see online.
If social media turns up the anger that much, I don't think it's worth the cost.
I don't think it follows that something making money must do so by being harmful. I do think strong regulation should exist to prevent businesses from introducing harmful behaviours to maximise profits, but to justify that opinion I have to believe that there is an ability to be profitable and ethical simultaneously.
>events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media
On the other hand The WSJ, Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media. It's true that there was incorrect information and misinformation on social media, but it was also immediately challenged. That does create a source of conflict, but I don't think the solution is to accept falsehoods unchallenged.
If anything education is required to teach people to discuss opposing views without rising to anger or personal attacks.
My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms.
> On the other hand The WSJ, Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media.
I agree with your point here too, and I don't think the solution is to completely stop or get rid of social media. But, the problem I see is there are tons of corners of social media where you can still see the original lies being repeated as if they are fact. In some spaces they get challenged, but in others they are echoed and repeated uncritically. That is what concerns me - long debunked rumors and lies that get repeated because they feel good.
> If anything education is required to teach people to discuss opposing views without rising to anger or personal attacks.
I think many people are actually capable of discussing opposing views without it becoming so inflammatory... in person. But algorithmic amplification online works against that and the strongest, loudest, quickest view tends to win in the attention landscape.
My concern is that social media is lowering people's ability to discuss things calmly, because instead of a discussion amongst acquaintances, every argument is against strangers. And that creates a dynamic where people who come to argue are not arguing against just you, but against every position they think you hold. We presort our opponents into categories based on perceived allegiance and then attack the entire image, instead of debating the actual person.
But I don't know if that can be fixed behaviorally, because the challenge of social media is that the crowd is effectively infinite. The same arguments get repeated thousands of times, and there's not even a guarantee that the person you are arguing against is a real person and not just a paid employee, or a bot. That frustration builds into a froth because the debate never moves, it just repeats.
The problem is that having an incentive to hide harms is being used as evidence for the harm, whether it exists or not.
Surely the same argument could be applied that companies would be incentivised to make a product that was non-harmful over one that was harmful. Harming your users seems counterproductive at least to some extent. I don't think it is a given that a harmful approach is the most profitable.
No, the incentive to hide harm is being given as a reason that studies into harm would be suppressed, not as evidence of harm in and of itself. This is a direct response to your original remark that "Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack."
Potential mechanisms and dynamics that cause harm are in the rest of my comment.
> Harming your users seems counterproductive at least to some extent.
Short term gains always take precedence. Cigarette companies knew about the harm of cigarettes and hid it for literally decades. [0] Fossil fuel companies have known about the danger of climate change for 100 years and hid it. [1]
If you dig through history there are hundreds of examples of companies knowingly harming their users, and continuing to do so until they were forced to stop or went out of business. Look at the Sacklers and the opioid epidemic [2], hell, look at Radithor. [3] It is profitable to harm your users, as long as you get their money before they die.
[0] https://academic.oup.com/ntr/article-abstract/14/1/79/104820...
[1] https://news.harvard.edu/gazette/story/2021/09/oil-companies...
[2] https://en.wikipedia.org/wiki/Sackler_family
[3] https://en.wikipedia.org/wiki/Radithor
That seems like a fair argument, though I don't think it grants opinions the weight of truth. I think it would make it fair to identify and criticise suppression of research, and to advocate for a mechanism by which such research can be conducted. An approach I would support in this area would be a tax or levy on companies with large numbers of users, ear-marked for funding independent research into the welfare of their user base and the effects on society as a whole.
>Short term gains always take precedence.
That seems a far worthier problem to address.
>If you dig through history there are hundreds of examples of companies knowingly harming their users
I don't deny that these things exist, I simply believe that it is not inevitable.
If we can't fix the underlying problem immediately, treating the symptoms seems reasonable in the meantime.
It doesn't. It's just that when people can publish whatever with impunity, they do just that.
Faced with the reality of what they're calling for they would largely stop immediately.
I believe the term for that is "keyboard warrior".
It's litigated all over and has been for a decade.
Australia, for example, has set an age limit of 16 for having social media; France, 15. Schools and countries are trying various phone bans. There's research into it. There are whistleblowers describing Facebook's own internal research, which was suppressed because it would show some of the harm.
Perhaps you spend too much time on social media?
The “some governments banned it for kids” argument is an appeal to authority, a logical fallacy.
The actions of tech-reactionist leftist governments absolutely do not constitute sound science or evidence in this matter.
And if you’re claiming the French government only makes government policy based on sound data, I will point you to their currently unraveling government over the mathematically impossible social pension scheme they’ve created.
The actions of several democratic governments are evidence that there is enough popular support for these actions to argue for a broader trend. And before you try for a gotcha, I am well aware that a democratic government can enact regulations without a direct vote proving that a majority of people support such an action. But inasmuch as a government reflects the will of the governed, etc etc etc.
The bans might be unfounded or well founded, you might agree with them or not, but clearly the idea that social media might be bad has spread beyond substack
I certainly do think the idea that social media might be bad has spread far and wide. What I would like to see is experts in the field reaching a consensus on to what extent that idea is true, and what should be done about it.
https://www.nature.com/articles/d41586-024-00902-2#ref-CR6
It should be noted a lot of ideas have spread in recent years. We would do well to not believe all of them, no matter how comforting it is to externalize blame.
How is that a signal the case against social media is weak?
This just shows how futile it is. How do you actually stop someone from using social media? If a 15 year old signs up for Mastodon what is Australia going to do about it?
It shows it's not just a debate on substack though
Since you bring up the Australian law as an example I shall check the expert opinion on that.
For the second time in a week, I find myself in the peculiar position of seeing our research misinterpreted and used to support specific (and ill-advised) policy - this time by the Australian government to justify a blanket social media ban for those under 16.
https://www.linkedin.com/posts/akprzybylski_the-communicatio...
This open-letter, signed by over 140 Australian academics, international experts, and civil society organisations, addresses the proposal to ‘ban’ children from social media until the age of 16. They argue that a ‘ban’ is too blunt an instrument to address risks effectively and that any restrictions must be designed with care.
https://apo.org.au/node/328608
https://ccyp.wa.gov.au/news/anzccga-joint-statement-on-the-s...
https://humanrights.gov.au/about/news/proposed-social-media-...
Social media represents a step change in how we consume news about current events. No longer are there central sources relied on by huge swaths of the population. Institutions which could be held accountable as a whole and stood to lose from poor reporting. Previous behemoths like NYT, WaPo, Bloomberg are now comparatively niche and fighting for attention. This feels so obvious it's not necessary to litigate, but if someone has statistics to the contrary, I'll be happy to look deeper and re-evaluate.
I agree, one should not immediately succumb to fear of the new. At the same time, science is slow by design. It takes years to construct, execute and report on proper controlled studies. Decades to iterate and solidify a holistic analysis. In the mean time, it seems naive to run forward headlong, assuming the safest outcome. We'll have raised a generation or two before we can possibly reach analytical confidence. Serious irreparable damage could be done far before we have a chance to prove it.
There is an obvious incoherence, even misreasoning, present in the people most ruined by the new media.
For example, you might want to drive the risk of something to zero. To do that, you need to respond calmly, with policy, to every bad event of that type, adding restrictions at some cost. This should be uncontentious to describe, yet again and again the pattern is to confuse the desires, the costs, and the interventions.
I can't even mention examples of this without risking massive karma attacks. That is the state of things.
I used to think misreasoning was just something agit-prop accounts did online, but years ago I started hearing the same broken calculus spoken by people in real life.
We need a path forward to make people understand that they should almost all disagree, but they MUST agree on how they disagree, or else they don't actually disagree. They are just barking animals waiting for a chance to attack.
That has been done over and over again, but as long as law makers and regulators remain passive, nothing will improve.
For all we know there are millions who have withdrawn and are making the case outside of social media. Or living the case.
This reply seems a bit fish-in-water to me.
Companies intentionally design social media to be as addictive as possible, which should be enough to declare them as bad. Should we also identify each chemical in a vape and address each one individually as well before banning them for children? I think such a ban for social media would probably be overkill, but it should not be controversial to ban phone use in school.
I guess the difference is that YouTube content creators don't casually drop politics in because it will alienate half their audience and lose revenue. Whereas on those other platforms the people I follow aren't doing it professionally and just share whatever they feel like sharing.
On Mastodon, those I follow do not post about politics and if they do it is hidden behind content warning.
YouTube is probably location based as I have no account there and that type of content is relatively mainstream where I live.
I do not believe humans are capable of responsibly wielding the power to anonymously connect with millions of people without the real weight of social consequence.
See examples like finding someone's employer on LinkedIn to "out" the employee's objectionable behavior, doxxing, or to the extreme, SWATing, etc.
I would replace "it doesn't help a bit" with "it doesn't solve the problem". My casual browsing experience is that X is much more intense / extreme than Facebook.
Of course, the bigger problem is the algorithm: if the extreme is always pushed to the top, then it doesn't matter if it's 1% or 0.001%. With a big enough pool, you only see extremes.
"The algorithm" is going to give you more of what you engage with, and when it comes to sponsored content, it's going to give you the sponsored content you're most likely to engage with too.
I'd argue that, while advertising has probably increased the number of people posting stuff online explicitly designed to try and generate revenue for themselves, that type of content's been around since much earlier.
Heck, look at Reddit or 4chan: they're not sharing revenue with users and I'd say they're at least not without their own content problems.
I'm not sure there's a convincing gap between what users "want" and what they actually engage with organically.
Social interaction is integrated with our brain chemistry at a very fundamental level. It's a situation we've been adapting to for a million years. We have evolved systems for telling us when it's time to disengage, and anybody who gets their revenue from advertising has an incentive to interfere with those systems.
The downsides of social media: the radicalization, the disinformation, the echo chambers... These problems are ancient and humans are equipped to deal with them to a certain degree. What's insidious about ad-based social media is that the profit motive has driven the platforms to find ways to anesthetize the parts of us that would interfere with their business model, and it just so happens that those are the same parts that we've been relying on to address these evils back when "social media" was shouting into an intersection from a soap box.
I'm certainly not going to disagree with the notion that ad-based revenue adds a negative tilt to all this, but I think any platforms that tries to give users what they want will end up in a similar place regardless of the revenue model.
The "best" compromise is to give people what they ask for (eg: you manually select interests and nothing suggests you other content), but to me, that's only the same system on a slower path: better but still broken.
But anyway, I think we broadly are in agreement.
If someone I was friends with made racist remarks, they wouldn't be prosecuted for that. But I would stop being their friend. Similarly, if I was the only one in my friend group against racism and advocated fiercely against it, they would probably stop being my friends.
So you want your friend to be able to anonymously express their racism while being able to hide it from you? I can't imagine advocating for that as a desired goal rather than a negative side effect.
>Similarly, if I was the only one in my friend group against racism and advocated fiercely against it, they would probably stop being my friends.
If we are talking about a society level problem, I think it is a little silly to think a society as toxic as this hypothetical one could be saved by anonymous internet posting.
For the record, I'm not as against anonymous posting as the person who started this specific comment thread, I just think this line of argument is advocating for a band-aid over bigger issues.
How do we solve those bigger issues when we live in an emperor's new clothes society? Wait for children who haven't learnt the rules to point them out?
this sounds like a suspicious characterisation - how are they trying to undo systemic racism, and what do they identify as "systemic racism"?
For example, "Ended race-based waitlists" (for healthcare).
Trying to correct an inequality with another inequality is still discrimination. People who want that should be honest and identify themselves as racists, not the ones who want to stop racism.
Deceit is a characteristic of our humanity. We all deceive others and ourselves. If people are to be allowed to be fully expressive as humans they need to be able to deceive. And so they require anonymity.
See Robert Trivers' works
https://www.amazon.com/stores/author/B001ITVRUO/about
It's unclear that it's true. I think the implication is deceit is a human characteristic because all humans do it, perhaps even subconsciously; Is the same true of murder?
Maybe a more convincing example is that if I advocate for making it easier to build housing because that will lower the cost of housing and many of my friends are homeowners, they might really not like me because lowering the cost of housing directly lowers their net worth.
Are these people evil for not wanting to lose their retirement savings (wrapped up in their home)?
Edit: also
> So you want your friend to be able to anonymously express their racism while being able to hide it from you?
While on the specific example of racism I'm pretty convinced of my moral correctness, I am not bold enough to declare that every bit of my worldview is the universally correct one. I am also not so bold as to say that I will always be instantly convinced of my incorrectness by a friend challenging my worldview (if they actually do have a better stance on some thing). My conclusion is that my friend should have some place to platform his better opinion without (having to fear) alienating me. And the only way to achieve this, as far as I know, is anonymous platforms.
What people these days are worried about isn't that they are racist and have no outlets for their racism. It's that they worry that whatever they say will be reinterpreted as racism when they were making an honest attempt to not be racist.
So you agree with my point that people could face social persecution for dissenting (even when they are correct), so we should have anonymous platforms where they can champion their ideas.
> Your case doesn't sound reasonable and it also doesn't fit the current zeitgeist.
These were just extreme examples to indicate that there can be social repercussions to dissenting.
Maybe a more convincing example is that if I advocate for making it easier to build housing because that will lower the cost of housing and many of my friends are homeowners, they might really not like me because lowering the cost of housing directly lowers their net worth.
The rational tolerant society you imagine is so far fetched we don't even pretend it can exist even in fantasies.
Chilling the discourse would be a feature, not a bug. In fact what discourse in most places these days needs is a reduction in temperature.
This kind of defence of anonymity is grounded in the anthropologically questionable assumption that when you are anonymous you are "who you really are," and when you face consequences for what you say you are not. But the reality is, we're socialized beings, and anonymity tends to turn people into mini-sociopaths. Many times, in particular when I was younger, I said things online behind anonymity that were more stupid, more callous, more immoral than I would ever have said face-to-face.
And that's not because that's what I really believed in any meaningful sense, it's because you often destroy any natural inhibition to behave like a well-adjusted human through anonymity and a screen. In fact even just the screen is enough when you look at what people post with their name attached, only to be fired the next day.
To be clear, I think freedom of speech is a bedrock foundation of intellectual society and should be the starting point for modern societies.
But perhaps we really should outlaw anonymity when it comes to expression. Allow people to express themselves, but it shouldn't emanate from the void.
I'd argue if all it took was people saying some mean things anonymously to change your opinion, then your convictions weren't very strong to begin with.
I disagree with "just as readily" (i.e. most of the most heinous things are indeed bots or trolls).
Also, I imagine that without the huge amount of bots and anonymous trolls, the real-name-accounts would not post as they do now - both because their opinions are shaped by the bots AND because the bots give them the sense that many more people agree with them.
I’m not sure, given the moral dystopia we currently inhabit, what positive benefit would accrue from removing online anonymity?
The problem is the leaders of the large social media organizations do not care about the consequences of their platforms enough to change how they operate. They're fine with hosting extremist and offensive content, and allowing extremists to build large followings using their platforms. Heck, they even encourage it!
As a side benefit, when you do this enough, the pendulum swinging over the middle line on any of these arbitrary but click-boosting divisions builds momentum until it hits the extremes. On either side, it doesn't matter, because it will swing back just as hard, again and again.
As a side benefit the back and forth of the pendulum is very distracting to the public so we do not pay attention to who is pushing it. Billions of collective hours spent fighting with no progress except for the wallets of rich ppl.
It almost feels like a conspiracy but I think it's just the direct, natural result of the vice driven economy we have these days
That article needs to have about 80% of the words cut out of it.
When the author straight up tells you: I'm posting this in an attempt to increase my subscribership, you know you're in for some blathering.
In spite of that, personally I think algorithmic feeds have had a terrible effect on many people.
I've never participated, and never will...