Not Hacker News!

Home
Hiring
Products
Companies
Discussion
Q&A
Users
AI-observed conversations & context

Daily AI-observed summaries, trends, and audience signals pulled from Hacker News so you can see the conversation before it hits your feed.

Explore

  • Home
  • Hiring
  • Products
  • Companies
  • Discussion
  • Q&A

Resources

  • Visit Hacker News
  • HN API
  • Modal cronjobs
  • Meta Llama

Briefings

Inbox recaps on the loudest debates & under-the-radar launches.

Connect

© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.

  1. Home
  2. /Discussion
  3. /UK toughens Online Safety Act with ban on self-harm content
Last activity: 3 months ago · Posted Sep 9, 2025 at 8:01 AM EDT

UK Toughens Online Safety Act with Ban on Self-Harm Content

_p2zi
43 points
64 comments

Mood

controversial

Sentiment

mixed

Category

other

Key topics

Online Safety
Content Moderation
Free Speech
Debate intensity: 80/100

The UK government has toughened its Online Safety Act to ban content promoting self-harm, sparking debate about the effectiveness and potential consequences of such regulations.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment

35m

Peak period

23

Hour 1

Avg / period

8

Comment distribution: 64 data points

Based on 64 loaded comments

Key moments

  1. Story posted: Sep 9, 2025 at 8:01 AM EDT (3 months ago)
  2. First comment: Sep 9, 2025 at 8:36 AM EDT (35m after posting)
  3. Peak activity: 23 comments in Hour 1 (hottest window of the conversation)
  4. Latest activity: Sep 10, 2025 at 7:12 AM EDT (3 months ago)


Discussion (64 comments)
atemerev
3 months ago
2 replies
This is a damn s*icide for the country, they are unaliving themselves on turbo, aren't they?
FridayoLeary
3 months ago
1 reply
I'm sorry, but the word "turbo" evokes a model of environmentally damaging Internal Combustion Engines (ICE). Your comment is doubly offensive; I will have to report it.
pcdoodle
3 months ago
Hey! quit having fun down here!
cs02rm0
3 months ago
It's not great here. Send help.
hermannj314
3 months ago
3 replies
Nice. I am tired of tobacco, alcohol, and gambling content. I am glad to see encouraging people to self-harm with these products is now banned.

Oh wait... self-harm just includes suicide. I guess it is still fine to convince people to destroy their lives as long as it has a revenue curve behind it.

unglaublich
3 months ago
1 reply
Hey let's not forget sugar, cars and kitchen stairs.
hermannj314
3 months ago
1 reply
"I think your foyer staircase would be more aesthetic without a handrail" is the new "unalive yourself"

It is a long con, but the expected value of self-harm is still positive and it is legally protected speech in the UK.

pjc50
3 months ago
1 reply
What? Telling people to kill themselves is definitely not in the legally protected range?
hermannj314
3 months ago
I am sure you can still encourage people to commit probabilistic self-harm in the UK.

You can't tell someone to kill themselves, but you can show alcohol ads to an alcoholic or encourage someone to modify their staircase to increase the risk of a fatal fall.

My point is that this law uses a stupidly narrow definition of self-harm.

dekken_
3 months ago
1 reply
It's performative, distracting grandstanding. Dealing with the symptoms, not the causes.
normalaccess
3 months ago
1 reply
It's more than that. It's a white-knight Trojan horse designed to consolidate power and oppress the people.
dekken_
3 months ago
Yeah, communism is slavery. We seem to need to relearn this periodically.
pjc50
3 months ago
The Labour party - well, both parties - has got a lot of money from gambling companies over the years. https://www.thenational.scot/news/24624306.labour-took-1m-do...
morkalork
3 months ago
2 replies
So where does Papa Roach - Last Resort fall under this, will YouTube be blocked?
cjs_ac
3 months ago
No, YouTube will be expected to remove (or not serve to UK users) any video containing that song.
rhdunn
3 months ago
And Linkin Park. In movies there's Heat and 13 among others.
voidUpdate
3 months ago
1 reply
I keep seeing youtube putting the little "you're not alone" banners under videos that report about suicide, and even a couple that have no relation to it. Since the rules are apparently the duty of the company to uphold, does that mean these videos will be banned too?
A4ET8a8uTh0_v2
3 months ago
1 reply
And, as usual, that quickly translates into unexpected moments. My feed at one point showed Epstein video ( forget the exact context, but it was a podcast of some sort going over various IC connections ) and immediately underneath was a suicide prevention note. Unintentional dark humor abounds.

Anyhoo, UK has been weird lately, but as most things, new equilibrium will be reached.

voidUpdate
3 months ago
I was watching a summary video of the Mother Horse Eyes story which had a suicide note below it, and I'm pretty sure no suicide was even mentioned
j1elo
3 months ago
1 reply
It starts like this and before you know it they'll be banning Hollow Knight Silksong because people play too long and can get dehydrated.
brookst
3 months ago
2 replies
Slippery slope arguments are lazy and reflect binary "it must be one extreme or the other" thinking.

To see how silly it is, just turn around: if they allow self-harm videos, before you know it they’ll be mandating that ALL videos be about self-harm. Seem reasonable?

tryauuum
3 months ago
I'm usually reminded of a story from my personal experience in Russia, from a long time ago.

A soldier killed himself with a rifle.

A local newspaper was asked to remove the page because "it contains information harmful to children", namely a guide on how to kill yourself. There is such a law. They complied.

elpocko
3 months ago
It's not reasonable because it makes no sense. "If they allow self-harm videos, before you know it they'll allow ALL videos.", there you go.

The slippery slope argument is warranted because it's true: the law was about pornography, then it was extended to include non-pornographic content, and we can be sure more stuff will be banned in the future, as always happens with these laws.

perihelions
3 months ago
1 reply
> "Tech companies to be legally required to prevent this content from appearing in the first place, protecting users of all ages."

The typical algorithmic implementation would ban this HN discussion itself, for containing the string "self-harm" and various other keywords on the page. That's often how it ends up, for anyone who's been paying attention. Legitimate support websites are censored for discussing the subjects they attempt to support. Public health experts get misclassified as disinformation: they use the same set of keywords, just in a different order. Inexpensive ML can't tell them apart.

Important news could be automatically suppressed before anyone would even realize it's authentic news. How could one discuss on an algorithmically "safe" platform, for instance, the allegation that the President of the United States paid the pedophile Jeffrey Epstein to rape an underage child, who later killed herself? That's four or five "high-priority" safety filters right there.
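The failure mode described here — cheap keyword matching flagging support resources right alongside the content a policy targets — can be sketched with a toy filter. The blocklist and example strings below are hypothetical, purely for illustration:

```python
# A naive keyword matcher cannot distinguish a support resource or a
# news report from the content a safety policy actually targets,
# because all of them use the same vocabulary.
BLOCKLIST = {"self-harm", "suicide", "overdose"}

def naive_filter(text: str) -> bool:
    """Flag any text containing a blocked keyword, regardless of intent."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

support_page = "If you are struggling with self-harm, call a helpline."
news_report = "Public health experts discuss suicide prevention research."

# Both legitimate texts are flagged exactly as harmful content would be.
print(naive_filter(support_page))  # True
print(naive_filter(news_report))   # True
```

Distinguishing intent requires context the substring match simply does not see, which is the point being made about legitimate support sites.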

2OEH8eoCRo0
3 months ago
2 replies
Sounds like a big tech problem then. Big tech has no responsibility and shall not be held accountable because their moderation is too shitty!
perihelions
3 months ago
1 reply
The problem is narrower and more nuanced than "held accountable": the accountability is asymmetric. There are no incentives against algorithmically deleting good content by error. If you impose large financial and legal risk on one side (leaving harmful content up) and basically nothing on the other (wrongly deleting good content), a public corporation will very rationally optimize for the incentives you've given it. That means the cheapest possible moderation, with aggressive filters deleting good content all day.

Look at how Microsoft GitHub consistently deletes innocent projects without warning, like SymPy, over LLM hallucinations. That moderation style's a direct consequence of the large financial costs of copyright lawsuits. If you introduce similar financial/legal risks in other areas, you should expect similar results.
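The asymmetry described above can be made concrete with a toy expected-cost model; all numbers are made up for illustration:

```python
# A platform choosing a filter aggressiveness level weighs the cost of
# leaving harmful content up (large legal/financial risk) against the
# cost of wrongly deleting good content (effectively zero).
COST_MISSED_BAD = 1_000_000   # assumed fine/lawsuit exposure per missed item
COST_DELETED_GOOD = 0         # no penalty for deleting good content by error

def expected_cost(missed_bad: int, deleted_good: int) -> int:
    """Total expected cost for a given error profile."""
    return missed_bad * COST_MISSED_BAD + deleted_good * COST_DELETED_GOOD

# A lenient filter misses more bad items but spares good content;
# an aggressive filter deletes 1000x more good content to miss less.
lenient = expected_cost(missed_bad=10, deleted_good=100)
aggressive = expected_cost(missed_bad=1, deleted_good=100_000)

print(lenient)     # 10000000
print(aggressive)  # 1000000  -> aggressive over-removal is "rational"
```

Since only one side of the error carries any cost, the aggressive filter always wins the optimization, exactly as the comment argues.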

KaiserPro
3 months ago
1 reply
> the problem is that the accountability is asymmetric

very much so. Large companies are not really held accountable for their users' actions, and that's pretty much by design. For example:

If you hold a party every Saturday where the people who come along abuse residents in the streets and cause general damage, then after the 5th or 6th time, at about the point where the patrons of your parties are prosecuted, you will face legal penalties for letting it happen again, even if it's different people at your party (under a whole bunch of laws, ranging from ASBOs to breach of the peace to criminal damage to anti-rave laws).

If you do that online, as long as you comply with the bare minimum (i.e. handing over logs), you're free from most legal pain (unless it's CSAM, copyrighted, or "terrorist" material).

I get your point, but that's where we get into a problem not of the principle but of the execution. As it's Ofcom doing the implementing, and they really don't have the expertise or leadership to produce "good" guidance, we're going to end up with shit.

perihelions
3 months ago
Oh, the anti-rave laws were a textbook example of unintended consequences: they made it legally fraught to make life-saving harm minimization accessible, since the way the law was drafted, that'd confirm a rave host was "aware of the presence" of drugs—an element of the crime. Likely caused a net increase in drug-overdose deaths.

Poorly thought-out, asymmetric incentives: they strongly disincentivize helping people not die, while failing to disincentivize creating an environment with easy access to hard drugs.

ApolloFortyNine
3 months ago
1 reply
What's your big plan to solve the problem? Pay a moderator? What if the moderator makes a mistake? What if it's unrealistic to pay a moderator? 500 hours of footage are uploaded to YouTube every minute, for instance. God knows how many Facebook posts are made a minute.

That's the problem with laws like these: obviously a company is going to err on the side of over-censoring if the cost of making a mistake is unknown. The law of large numbers being what it is, if your platform is large enough, eventually something will slip through. And no one knows what the punishment will be or where the line even is.
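The scale claim above (500 hours of footage per minute) implies staggering staffing needs even before accuracy enters the picture. A rough back-of-the-envelope sketch, with an assumed 8-hour real-time review shift per moderator:

```python
# Back-of-the-envelope arithmetic for human review of YouTube uploads.
# The 500 hours/minute figure comes from the comment above; the shift
# length is an assumption for illustration.
HOURS_UPLOADED_PER_MINUTE = 500
MINUTES_PER_DAY = 24 * 60

upload_hours_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY
print(upload_hours_per_day)  # 720000 hours of new footage per day

# Assume one moderator watches footage in real time for an 8-hour shift.
REVIEW_HOURS_PER_MODERATOR_PER_DAY = 8
moderators_needed = upload_hours_per_day / REVIEW_HOURS_PER_MODERATOR_PER_DAY
print(moderators_needed)  # 90000.0 full-time moderators, just to watch it once
```

Even under these generous assumptions (no breaks, no double review, real-time viewing), the headcount is in the tens of thousands, which is why platforms reach for automated filters in the first place.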

2OEH8eoCRo0
3 months ago
Repeal section 230
dghf
3 months ago
1 reply
So no more M*A*S*H theme-song on YouTube?
tonyedgecombe
3 months ago
1 reply
I wonder how many people necked a bottle of paracetamol thinking it would be painless.
IAmBroom
3 months ago
1 reply
That's not in the lyrics, so what's your point here?
tonyedgecombe
3 months ago
“suicide is painless, it brings on many changes “
blueflow
3 months ago
1 reply
Cool. How will banning the content about it fix the youth's mental health?
A4ET8a8uTh0_v2
3 months ago
There is an argument to be made for preventing further damage, along the lines of 'monkey see, monkey do, monkey pee all over you'. That said, I agree that it is very low-hanging fruit, and harvesting that fruit has a lot of consequences.
bArray
3 months ago
2 replies
> [..] putting stricter legal requirements on tech companies to hunt down and remove material that encourages or assists serious self-harm, before it can destroy lives and tear families apart.

Does this include smoking, excessive eating, eating sweets, etc? What about listening to sad music?

> This government is determined to keep people safe online. Vile content that promotes self-harm continues to be pushed on social media and can mean potentially heart-wrenching consequences for families across the country.

"Vile" is very emotionally charged; who decides it? Will it be the next government that gets to decide? Bear in mind, the (recently) ex-Deputy Prime Minister of the current government called her opposition "scum" [1], an extremely negative word.

[1] https://www.bbc.co.uk/news/uk-politics-59081482

delichon
3 months ago
1 reply
> Does this include smoking, excessive eating, eating sweets, etc? What about listening to sad music?

Or "voting against your own interests"? I can't deny that some voting patterns amount to self harm.

bArray
3 months ago
Well, we only need to look at "democratic" dictatorships to see how the government helps prevent you from voting for the wrong person.

And why stop at voting? The government could also be responsible for preventing 'harmful' thoughts. Police in the UK are regularly deployed for "non-crime hate incidents", so that they can tell people that they haven't committed a crime, but that a note will be made in a secret database that affects their employability.

cjs_ac
3 months ago
1 reply
In the UK, 'self-harm' is used specifically to refer to what used to be called cutting [one's] wrists. It's not intended to mean any action that has negative consequences for oneself.

The quote about 'This government is determined to keep people safe online,' is a 'we're good people' statement for the media - remember, this is a press release.

Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.

bArray
3 months ago
> In the UK, 'self-harm' is used specifically to refer to what used to be called cutting [one's] wrists. It's not intended to mean any action that has negative consequences for oneself.

Definitions slip over time. Violence, abuse, sexual assault, etc, were previously all physical acts. Then they became mental acts, and now just the perception is devastating.

> Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.

In the moment. But a future politician can reasonably interpret the same idea differently.

pjc50
3 months ago
2 replies
Note that one of the few taboos still maintained by traditional news media, despite their questionable ethics elsewhere, is not reporting on suicides as such because it is known to encourage copycats. Hence a lot of young celebrities being reported as "died suddenly" (which in practice means either suicide or overdose, accidental or intentional)

Will this result in more spread of ridiculous euphemisms like unalive? Probably. Will this result in us being able to get people banned from social media for telling other people to kill themselves? Probably only with very intermittent enforcement.

FridayoLeary
3 months ago
3 replies
Who are these people who are censoring perfectly ordinary and inoffensive words? The way it's going soon the only thing that can safely be discussed will be flowers and sunshine.

Wars and death will end simply because people won't be able to discuss them.

n8m8
3 months ago
2 replies
I spend a lot of time on TikTok, and algospeak is everywhere because of how restrictive the algorithm is in suppressing certain keywords regardless of context. The community is strong on TT; here are a bunch:

- Cute winter boots (ICE officers)

- Gardening (420)

- pdf/pdf file (pedophile)

- :watermelon-emoji: (Palestine)

- neurospicy (neurodivergent)

- bird website (X)

- LA Music festival tour (protests)

Not sure if I see more of the 'algospeak' because the problem is real, because I've interacted with algospeak content before and it's just giving me more of it, or if creators don't really need to do it anymore but just still do.

normalaccess
3 months ago
1 reply
Social platforms have taken steps to promote thought-harmony by joyfully unshowing wrongthink and unbalanced word-units, ensuring content aligns with community wellness standards. In the pursuit of safety-plus and truth-good, certain speak-patterns may be adjusted or unshown to prevent doublefeel or ideafriction. Content adjustment systems, both think-algorithms and human guidance units, help stop crimethink before it wordforms, ensuring all speak aligns with group-love and peaceorder. Unwords and unideas that cause ideafriction are speedwise unposted for harmony-plus. All speak must be fullwise right and joygood — and all oldspeak thoughts are unspeak. Users who feel unjoy at speak-guidance are malusers of freegood, needing rejoy and newlearn. This is fullwise necessary for protect-truth and keeping all minds doubleplusgood.
tavavex
3 months ago
The scary thing is that if you swapped out the 1984-speak for regular modern euphemisms and corporate jargon, and cut out the last few sentences, this could've totally been posted on one of these websites' help pages or in a press release, otherwise unedited.

Though, these websites aren't driven by some misguided but well-meaning vision to cure society of its ills. They only care about making money, and advertisers have converged on demanding that anything accompanying their ads be squeaky-clean, inoffensive, and always happy. They dictate their preferences to services that primarily make money off ads. The way advertiser interests end up clashing with people's desire for free expression is just a side effect to them.

pjc50
3 months ago
Some of that is just people being cute - this place is sometimes referred to derogatorily as "the orange website".

Another common element of both tiktok and instagram is how some posters try to advertise for Onlyfans without triggering a FOSTA/SESTA related ban.

pjc50
3 months ago
Empirically, it's not perfectly ordinary and inoffensive.

People pretend that speech is weightless and has no consequences, even when that's shown not to be entirely true.

stetrain
3 months ago
These more recent euphemisms have mostly come from creators on platforms like TikTok, Instagram, Youtube, etc. who are either rightly or wrongly concerned that using certain words leads to their content being demonetized.

Advertisers don't want their ads to appear next to certain keywords, platforms use content detection to match those criteria, so creators are monetarily incentivized to avoid them. Capitalism!

andrewinardeer
3 months ago
The giveaway where I live is that news articles traditionally conclude by tacking on the phone number for a crisis helpline. No pun intended, but it's a dead giveaway. At this point I feel it is a token gesture the newspaper makes thanks to a voluntary code.
A_D_E_P_T
3 months ago
1 reply
They're going to keep "toughening" it until they have a more restrictive internet than any so-called Authoritarian Regime.

Today it's 4chan, Kiwi Farms, WPD, pirate sports sites, libgen, Anna's place... Tomorrow it'll surely be every forum where moderation isn't absolutely draconian.

I wonder if they're going to try to ban Twitter.

yupyupyups
3 months ago
1 reply
I wouldn't mention 4chan and libgen in the same sentence.
morkalork
3 months ago
1 reply
Why not? Sure the content and motivations are worlds apart but they're still under the government banhammer together
yupyupyups
3 months ago
Sure, I understand that.

I just think we need to remember that some content is simply undesirable, especially for kids.

An alternative method to protect society from harmful content would perhaps be more readily available and advanced firewalls for consumers, orgs, and schools. Currently, opting out of filth on the internet (however you like to define it) can be prohibitively difficult for the general consumer.

The market has been very backwards in this respect, which has given some powerful elements in society an opportunity to exercise tyrannical control by imposing a nationwide Chinese-style firewall as the solution. If we had recognized the problem of harmful content in the first place, and offered solutions, we would've had more bite against these tyrannical schemes.

betaby
3 months ago
2 replies
It's a copycat of the Russian regulations from 2016. I suppose if the West does it, it's all good.

See https://sdelano.media/suicideisbad/ (in Russian)

przmk
3 months ago
Who in this thread is saying it's all good?
toshinoriyagi
3 months ago
I don't think many people in the West think the Online Safety Act is good. On Hacker News in particular it has been heavily criticized, from what I have seen.
petermcneeley
3 months ago
2 replies
Can you guys in the UK see this page? https://www.canada.ca/en/health-canada/services/health-servi...
cjs_ac
3 months ago
Yes. Remember, the Online Safety Act has nothing to do with ISPs, and instead regulates website publishers. This change to legislation has just been announced; of course the Canadian Government hasn't had a chance to update its website.
commandlinefan
3 months ago
What about this one: https://visualstudio.microsoft.com/downloads/?
philipallstar
3 months ago
This makes sense. Start with the content that's hard to fight against.
IAmBroom
3 months ago
In this thread: a WHOLE LOT of people advocating Slippery Slope arguments (and even defending them as such, explicitly).

I can't decide if the bill is toothless and meaningless, or a well-planned first step towards complete control of speech and groupthink. Or both, somehow.

1GZ0
3 months ago
So they're going after mukbang YouTubers now?
dotnet00
3 months ago
Reddit has this thing where, if someone reports your comment for self-harm, regardless of validity, they send you an automated (and IMO useless at best, harmful at worst) "we're here for you" DM.

It's often used as a way to anonymously imply that you should kill yourself, so I wonder if this sort of thing would affect that and how.

Though, overall, I think the censoring of self-harm stuff is already beyond ridiculous. Terms like "self-unalive", "self-terminate", and "sewerslide" make a very serious issue sound like a joke. Blinding ourselves to it isn't going to make these problems go away.

View full discussion on Hacker News
ID: 45180757 · Type: story · Last synced: 11/20/2025, 4:53:34 PM

Want the full context?

Jump to the original sources

Read the primary article or dive into the live Hacker News thread when you're ready.

Read Article · View on HN