UK Toughens Online Safety Act with Ban on Self-Harm Content
Mood
controversial
Sentiment
mixed
Category
other
Key topics
The UK government has toughened its Online Safety Act to ban content promoting self-harm, sparking debate about the effectiveness and potential consequences of such regulations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 35m
Peak period: 23 comments in Hour 1
Avg / period: 8
Based on 64 loaded comments
Key moments
- 01 Story posted: Sep 9, 2025 at 8:01 AM EDT (3 months ago)
- 02 First comment: Sep 9, 2025 at 8:36 AM EDT (35m after posting)
- 03 Peak activity: 23 comments in Hour 1, the hottest window of the conversation
- 04 Latest activity: Sep 10, 2025 at 7:12 AM EDT (3 months ago)
Oh wait... self-harm just includes suicide. I guess it is still fine to convince people to destroy their lives as long as there is a revenue curve behind it.
It is a long con, but the expected value of self-harm is still positive and it is legally protected speech in the UK.
You can't tell someone to kill themselves, but you can show alcohol ads to an alcoholic or encourage someone to modify their staircase to increase the risk of a fatal fall.
My point is that this law rests on a stupidly narrow definition of self-harm.
Anyhoo, the UK has been weird lately, but as with most things, a new equilibrium will be reached.
To see how silly it is, just turn it around: if they allow self-harm videos, before you know it they'll be mandating that ALL videos be about self-harm. Seem reasonable?
A soldier killed himself with a rifle. A local newspaper was asked to remove the page because "it contains information harmful to children", namely guides on how to kill yourself; there is such a law. They complied.
The slippery slope argument is warranted because it's true: the law was about pornography, then it was extended to cover non-pornographic content, and we can be sure more will be banned in the future, as always happens with these laws.
The typical algorithmic implementation would ban this HN discussion itself for containing the string "self-harm" and various other keywords on the page. That's often how it ends up, for anyone who's been paying attention. Legitimate support websites are censored for discussing the very subjects they attempt to support. Public health experts get misclassified as disinformation: they use the same set of keywords, just in a different order. Inexpensive ML can't tell them apart.
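To make that failure mode concrete, here is a minimal sketch of the kind of naive keyword filter described above. The blocklist and sample texts are invented for illustration, but the point holds: substring matching flags a crisis-support page and this very thread just as readily as genuinely harmful content.

```python
# Minimal sketch of naive keyword-based moderation (hypothetical
# blocklist and samples): flag any text containing a blocklisted term.
BLOCKLIST = {"self-harm", "suicide", "kill yourself"}

def is_flagged(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

samples = {
    "harmful post":   "ways to hide self-harm from your family",
    "support site":   "If you are thinking about suicide, call a crisis line.",
    "this HN thread": "The UK now bans content promoting self-harm.",
}

for label, text in samples.items():
    # All three are flagged: substring matching cannot tell
    # promotion apart from support or news.
    print(f"{label}: {'FLAGGED' if is_flagged(text) else 'ok'}")
```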
Important news could be automatically suppressed before anyone would even realize it's authentic news. How could one discuss on an algorithmically "safe" platform, for instance, the allegation that the President of the United States paid the pedophile Jeffrey Epstein to rape an underage child, who later killed herself? That's four or five "high-priority" safety filters right there.
Look at how Microsoft GitHub consistently deletes innocent projects, like SymPy, without warning over LLM hallucinations. That moderation style is a direct consequence of the large financial costs of copyright lawsuits. If you introduce similar financial and legal risks in other areas, you should expect similar results.
Very much so. Large companies are not really held accountable for their users' actions, and that's pretty much by design. For example:
If you hold a party every Saturday where the people who come along abuse residents in the streets and cause general damage, then after the 5th or 6th time, at about the point where the patrons of your parties are prosecuted, you will face legal penalties for letting it happen again, even if it's different people at your party (under a whole bunch of laws, ranging from ASBOs to breach of the peace to criminal damage to anti-rave laws, all sorts).
If you do that online, as long as you comply with the bare minimum (i.e. handing over logs), you're free from most legal pain (unless it's CSAM, copyrighted, or "terrorist" material).
I get your point, but that's where we get into a problem not of principle but of execution. Since it's Ofcom doing the implementing, and they really don't have the expertise or leadership to produce good guidance, we're going to end up with shit.
Poorly thought-out, asymmetric incentives: the law strongly disincentivizes helping people not die, while failing to disincentivize creating an environment with easy access to hard drugs.
That's the problem with laws like these: if you're moderating, obviously a company is going to err on the side of over-censoring when the cost of a mistake is unknown. The law of large numbers being what it is, if your platform is large enough, eventually something will slip through. And no one knows what the punishment will be, or even where the line is.
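As a back-of-the-envelope illustration of that point (the volume and accuracy figures below are assumptions, not data about any real platform): even a very accurate moderation system produces a steady stream of mistakes in absolute terms at scale.

```python
# Hypothetical numbers: a large platform with highly accurate
# moderation still makes many mistakes per day in absolute terms.
posts_per_day = 10_000_000   # assumed daily volume
accuracy = 0.9999            # assumed per-post moderation accuracy

expected_mistakes = posts_per_day * (1 - accuracy)
print(f"Expected moderation mistakes per day: {expected_mistakes:,.0f}")
# -> Expected moderation mistakes per day: 1,000
```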
Does this include smoking, excessive eating, eating sweets, etc? What about listening to sad music?
> This government is determined to keep people safe online. Vile content that promotes self-harm continues to be pushed on social media and can mean potentially heart-wrenching consequences for families across the country.
"Vile" is very emotionally charged, who decides it? Will it be the next government that gets to decide it? Bare in mind, the (recently) ex-Deputy Prime Minister of the current government called her opposition "scum" [1], an extremely negative word.
Or "voting against your own interests"? I can't deny that some voting patterns amount to self harm.
And why stop at voting? The government could also take responsibility for preventing 'harmful' thoughts. Police in the UK are regularly deployed for "non-crime hate incidents", so that they can tell people they haven't committed a crime, but that a note will be made in a secret database that affects their employability.
The quote "This government is determined to keep people safe online" is a "we're good people" statement for the media; remember, this is a press release.
Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.
Definitions slip over time. Violence, abuse, sexual assault, etc., were previously all physical acts. Then they came to include mental acts, and now the mere perception is treated as devastating.
> Remember, these are politicians: they have no understanding of abstraction and generalisation, and think that generalisation is the act of creating stereotypes.
In the moment. But a future politician can reasonably interpret the same idea differently.
Will this result in the further spread of ridiculous euphemisms like "unalive"? Probably. Will it let us get people banned from social media for telling others to kill themselves? Probably only with very intermittent enforcement.
Wars and death will end simply because people won't be able to discuss them.
- Cute winter boots (ICE officers)
- Gardening (420)
- pdf/pdf file (pedophile)
- :watermelon-emoji: (Palestine)
- neurospicy (neurodivergent)
- bird website (X)
- LA Music festival tour (protests)
Not sure if I see more of the 'algospeak' because the problem is real, because I've interacted with algospeak content before and the algorithm is just giving me more of it, or because creators don't really need to do it anymore but still do.
Though these websites aren't driven by some misguided but well-meaning vision to cure society of its ills. They only care about making money, and advertisers have converged on demanding that anything accompanying their ads be squeaky-clean, inoffensive, always happy. They dictate their preferences to services that primarily make money off ads. That advertiser interests end up clashing with people's desire for free expression is just a side effect to them.
Another element common to both TikTok and Instagram is how some posters try to advertise for OnlyFans without triggering a FOSTA/SESTA-related ban.
People pretend that speech is weightless and has no consequences, even when that's shown not to be entirely true.
Advertisers don't want their ads to appear next to certain keywords, platforms use content detection to enforce those criteria, and so creators are monetarily incentivized to avoid them. Capitalism!
Today it's 4chan, Kiwi Farms, WPD, pirate sports sites, libgen, Anna's place... Tomorrow it'll surely be every forum where moderation isn't absolutely draconian.
I wonder if they're going to try to ban Twitter.
I just think we need to remember that some content is simply undesirable, especially for kids.
An alternative way to protect society from harmful content would perhaps be more readily available and more advanced firewalls for consumers, organizations, and schools. Currently, opting out of filth on the internet (however you like to define it) can be prohibitively difficult for the general consumer.
The market has been very backwards in this respect, which has given some powerful elements in society an opportunity to exercise tyrannical control by imposing a nationwide Chinese-style firewall as the solution. If we had recognized the problem of harmful content in the first place, and offered solutions, we would have had more bite against these tyrannical schemes.
See https://sdelano.media/suicideisbad/ (in Russian)
I can't decide if the bill is toothless and meaningless, or a well-planned first step towards complete control of speech and groupthink. Or both, somehow.
It's often used as a way to anonymously imply that you should kill yourself, so I wonder if this sort of thing would affect that and how.
Though, overall, I think the censoring of self-harm content is already beyond ridiculous. Terms like "self-unalive", "self-terminate", and "sewerslide" make a very serious issue sound like a joke. Blinding ourselves to these problems isn't going to make them go away.
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.