TikTok 'Directs Child Accounts to Pornographic Content Within a Few Clicks'
Posted 3 months ago · Active 3 months ago
Source: theguardian.com · Tech · story
Sentiment: heated, negative · Debate: 80/100
Key topics: TikTok · Child Safety · Social Media Regulation
A report claims TikTok directs child accounts to pornographic content within a few clicks, sparking debate about the platform's safety and moderation practices.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 22m
Peak period: 92 comments (0-6h)
Avg / period: 16.5
Comment distribution: 99 data points (based on 99 loaded comments)
Key moments
- 01 Story posted: Oct 3, 2025 at 8:26 AM EDT (3 months ago)
- 02 First comment: Oct 3, 2025 at 8:48 AM EDT (22m after posting)
- 03 Peak activity: 92 comments in 0-6h (the hottest window of the conversation)
- 04 Latest activity: Oct 7, 2025 at 11:21 AM EDT (3 months ago)
ID: 45462163 · Type: story · Last synced: 11/20/2025, 6:45:47 PM
I (40m) don't think I've ever seen literal flashing or literal porn on TikTok, and my algorithm does like to throw in thirst content between my usual hobby stuff.
Are they claiming that showing porn is normal behavior for TikTok's algorithm overall, or that this is something specifically pervasive with child accounts?
Approximate location, age, mobile OS/browser, your contacts, which TikTok links you open, who generated the links you open, TikTok search history, how long it takes you to swipe to the next video on the for you page, etc.
I don’t think it's really possible to say what TikTok’s algorithm does “naturally”. There are so many factors influencing it (beyond the promoted posts and ads which people pay TikTok to put in your face).
If you sign up to TikTok with an Android and tell it you’re 16, you’re gonna get recommended what the other 16 year olds with Androids in your nearby area (based on IP address) are watching.
I assume that the offending content was popular but hadn’t been flagged yet and that the algorithm was just measuring her interest in a trending theme; it seems like it would be bad for business to intentionally run off mainstream users like that.
It might be because I always block anyone with an OF link in their bio, but then that policy doesn't work on Insta.
We’re a derelict society that has become numb, “it’s just a thirst trap”.
We’re in the later innings of a hyper-sexualized society.
Why it’s bad:
1) You shift male puberty into overdrive
2) You continue warping young female concepts of lewdness and body image, effectively “undefining” it (lewdness? What is lewdness?).
3) You also continue warping male concepts of body image
“Just thirst trap” (And you see the word I read into).
Right. No, I get it. Listen, we collectively have the issue of not recognizing the significance of things. Nothing personal.
Yes.
Promoting vaping, not so much.
> We’re in the later innings of a hyper-sexualized society.
O NOES!
I mean, that's a ridiculous thing to say, but if it were true, so what?
Two wrongs don’t make a right. Honestly, I regret this name, as there are a lot of high-school and college-aged people here.
Are you saying that the intersection is uniquely bad? In either case limits to content made in an effort to minimize parasocial relationships cut across very different lines than if the goal is minimizing access to porn.
I hear this claim from the pornsick but I’d like to see all the studies backing it up.
... and before you haul out crap from BYU or whatever, be aware that some of us have actually read that work and know how poor it is.
They are in the business of whipping up outrage, and should not be given any oxygen.
Clicking on thirst trap videos?
> Researchers found TikTok suggested sexualised and explicit search terms to seven test accounts that were created on clean phones with no search history.
https://globalwitness.org/en/campaigns/digital-threats/tikto...
Their methodology involves searching for suggested terms. They find the most outrage-inducing or outrage-adjacent terms offered to them at each step, and then iterate. They thereby discover, and search for, obfuscated terms being used by "the community" to describe the content they are desperately seeking.
They also find a lot of bullshit like the names of non-porn TV shows that they're too out of touch to recognize and too lazy to look up, and use those names to gin up more outrage, but that's a different matter.
This is, of course, all in the service of whipping up a moral panic over something that doesn't fucking matter to begin with.
Here's a unified list of all the "very first list" suggestions they say they got. I took these from their appendix, alphabetized them, and coalesced duplicates. Readers can make their own decisions about whether these justify hauling out the fainting couch.
+ Adults
+ Adults on TikTok (2x)
+ Airfryer recipes
+ Bikini Pics (2x)
+ Buffalo chicken recipe
+ Chloe Kelly leg up before penalty
+ cost of living payments
+ Dejon getting dumped
+ DWP confirm £1,350
+ Easy sweet potato recipes
+ Eminem tribute to ozzy
+ Fiji Passed Away
+ Gabriela Dance Trend
+ Hannah Hampton shines at women’s eu [truncated]
+ Hardcore pawn clips (2x)
+ Has Ozzy really died
+ Here We Go Series 3 Premieres on BBC
+ HOW TO GET FOOTBALL BLOSSOM IN…
+ ID verification on X
+ Information on July 28,2.,,,
+ Jet2 holiday meme
+ Kelly Osbourne shared last video with [truncated]
+ Lamboughini
+ luxury girl
+ Nicki Minaj pose gone wrong
+ outfits
+ Ozzy Funeral in Birmingham
+ pakistani lesbian couple in bradford
+ revenge love ep 13 underwater
+ Rude pics models (2x)
+ Stock Market
+ Sydney Sweeney allegations
+ TikTok Late Night For
+ TIKTOK SHOP
+ TikTok Shop in UK
+ TIKTOK SHOP UK
+ Tornado in UK 2025
+ Tsunami wave footage 2025
+ Unshaven girl (3x)
+ Very rude babes (3x)
+ very very rude skimpy
+ woman kissing her man while washing his [truncated] (2x)
You will of course have wasted your time on a non-problem, but at least maybe you'll have an appreciation for how hard a non-problem it is.
Obviously different jurisdictions are increasingly disagreeing with it being a non-problem.
Specifically, it relies on the "moderatorApprovedForChildren" flag, which is sometimes sent incorrectly because of glitches in the system that sets that flag. Apparently the number of such glitches increases sharply with the number of possible values of "j", but is significant even with only one value.
Also, flag-setting behavior is probabilistic in edge cases, with a surprisingly broad distribution.
You are therefore not meeting your "zero porn" spec, while at the same time blocking a nonzero amount of non-porn.
Don't bother to fix the bug, though; given the very large cost of the flag-setting system, the company has gone out of business and cancelled your project.
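If you want to see that failure mode in numbers, here's a toy simulation. The flag name comes from the scenario above; the 1% glitch rate and the 5% unsafe share are invented purely for illustration:

```python
import random

random.seed(0)

FLAG_ERROR_RATE = 0.01  # assumed glitch rate in the upstream flag-setter

def moderator_approved_for_children(is_actually_safe: bool) -> bool:
    # The upstream system sets the flag; with probability FLAG_ERROR_RATE
    # it glitches and reports the wrong value.
    if random.random() < FLAG_ERROR_RATE:
        return not is_actually_safe
    return is_actually_safe

N = 1_000_000
videos = [random.random() > 0.05 for _ in range(N)]  # True = actually safe (~95%)

leaked = blocked = 0
for safe in videos:
    if moderator_approved_for_children(safe):
        if not safe:
            leaked += 1   # unsafe video shown: the "zero porn" spec is violated
    elif safe:
        blocked += 1      # safe video blocked: collateral damage

print(f"leaked unsafe videos: {leaked:,}; wrongly blocked safe videos: {blocked:,}")
```

At these assumed rates you leak on the order of 500 unsafe videos per million while wrongly blocking around 9,500 safe ones; no glitch rate above zero ever gets you to "zero porn".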
> Obviously different jurisdictions are increasingly disagreeing with it being a non-problem.
Different jurisdictions are doing a lot of stupid things. You get that in a moral panic. Doesn't make them less stupid.
Similarly, if your alcohol/weed store sells to children and you get caught, you can be criminally prosecuted. This is well-trodden ground. Companies worth trillions can be expected to do what everyone else manages to do.
Same deal with malicious ads. These companies absolutely have the resources to check who they're doing business with. They choose not to.
Banks also don't get to just not bother with reconciling accounts because it's hard to check if the numbers add up, and yeah bugs can result in government action.
Let's keep using the TikTok example. According to https://arxiv.org/abs/2504.13279 , TikTok receives about 176 years of video per day. That's 64,240 days per day, or 1,541,760 hours per day. To even roughly approximate "zero porn" using your "simple" moderation approach, you will have to verify every video in its entirety. Otherwise people will put porn after or in amongst decoy content.
If each moderator worked 8 hours per day, reviewing videos end-to-end without breaks (only at 1x speed, but managing to do all the markup, categorization, exception processes, quality checks, appeals, and whatever else within the video runtime), that means that TikTok would need 192,720 full-time moderators to do what you want. That's probably giving you a factor of 2 or 3 advantage over the number they'd really need, especially if you didn't want a truly enormous number of mistakes.
The moderators in this sweatshop are skilled laborers. To achieve what you casually demand, they'd have to be fluent in the local languages and cultures of the videos they're moderating (actually, since you talk about "jurisdictions", maybe they have to also be what amounts to lawyers). This means you can't just pay what amounts to slave wages in lowest-bidder countries; you're going to have to pay roughly the wage profile of the end user countries, and you're also going to have to pay roughly the taxes in those countries. Still, suppose you somehow manage to get away with paying $10/hour for moderation, with a 25 percent burden for a net of $12.50/hour.
Since you live in fantasyland, I'll make you feel at home by pretending you need no management, support staff, or infrastructure at all for the fifth-of-a-million people in this army.
You now have TikTok paying $19,272,000 to moderate each day's 1,541,760 hours of video. TikTok operates 365 days a year, and anyway the 1,541,760 is an average. So the annual wage cost is $7,034,280,000.
TikTok financials aren't reported separately from the rest of ByteDance, but for whatever it's worth, some random analyst (https://www.businessofapps.com/data/tik-tok-statistics/) estimates revenue at about $23B per year, so you're asking for about 30 percent of gross revenue. It's not plausible that TikTok makes 30 percent profit on that gross, so, even under these extremely, unrealistically charitable assumptions, you have made TikTok unprofitable and caused it to (a) shut down completely, or (b) try to exclude all minors (presumably to whatever crazy draconian standard of perfection any random Thinker Of The Children feels like demanding that day).
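For anyone who wants to check that arithmetic, here it is in runnable form; every input is a figure quoted above, and the revenue number is that analyst's estimate, not a reported figure:

```python
# Back-of-the-envelope check of the moderation-cost arithmetic above.
upload_years_per_day = 176                        # daily video uploads (arXiv:2504.13279)
hours_per_day = upload_years_per_day * 365 * 24   # 1,541,760 hours of new video per day

moderator_hours_per_day = 8                       # review at 1x speed, no breaks
moderators_needed = hours_per_day / moderator_hours_per_day  # 192,720 full-timers

loaded_wage = 12.50                               # $10/hour plus a 25 percent burden
daily_cost = moderators_needed * moderator_hours_per_day * loaded_wage
annual_cost = daily_cost * 365                    # ~$7.03B per year

estimated_revenue = 23e9                          # third-party estimate, not reported
print(f"moderators: {moderators_needed:,.0f}")
print(f"daily wage cost: ${daily_cost:,.0f}")
print(f"annual wage cost: ${annual_cost:,.0f} "
      f"({annual_cost / estimated_revenue:.0%} of estimated gross revenue)")
```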
No, TikTok can't just raise advertising rates or whatever. If it could get more, it would already be charging more.
That's all probably about typical for any UGC platform. What you are actually demanding is to shut down all such platforms, or possibly just to exclude all minors from ever using any of them. You probably already knew that, but now you really can't pretend you don't know.
Totally shutting down those platforms would, of course, achieve "zero porn". But sane people don't think that "zero porn" is worth that cost, or even close to worth that cost. Not if you assign any positive value to the rest of what those platforms do. And if you do not assign any positive value, why aren't you just being honest and saying you want them shut down?
The latter is what they tested, but they didn't say it was specifically pervasive.
You quote the article, so it seems like you looked at it, but the questions you're curious/skeptical about are things they address in the opening paragraphs. It's fine to be skeptical, but they explain their methodology, and it is different from the experience you are relying on:
>Global Witness set up fake accounts using a 13-year-old’s birth date and turned on the video app’s “restricted mode”, which limits exposure to “sexually suggestive” content.
>Researchers found TikTok suggested sexualised and explicit search terms to seven test accounts that were created on clean phones with no search history.
>The terms suggested under the “you may like” feature included “very very rude skimpy outfits” and “very rude babes” – and then escalated to terms such as “hardcore pawn [sic] clips”. For three of the accounts the sexualised searches were suggested immediately.
>After a “small number of clicks” the researchers encountered pornographic content ranging from women flashing to penetrative sex. Global Witness said the content attempted to evade moderation, usually by showing the clip within an innocuous picture or video. For one account the process took two clicks after logging on: one click on the search bar and then one on the suggested search.
On the other hand... there is "WikiHitler", a game where people click on a "random article" on Wikipedia and try to reach the "Adolf Hitler" page in the fewest clicks, a variant of "Six Degrees of Kevin Bacon" [1]. So yeah, technically, on Wikipedia you're always a few clicks away from Hitler too, but not by accident.
1: https://en.wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon
https://globalwitness.org/en/campaigns/digital-threats/tikto...
I don't know why news sites don't link to the source, but that's another discussion.
A lot of folks use TikTok on a regular basis. This article is the one making the claim that's far and away different from what most folks experience on the platform.
Since I'm not about to go on there, pretend to be a 13-year-old boy, and start seeking out the porn myself, I really need to see some evidence that this is actually possible before I start picking out a pitchfork.
If this was Instagram nobody would care.
> Global Witness, a climate organisation whose remit includes investigating big tech’s impact on human rights, said it conducted two batches of tests, with one set before the implementation of child protection rules under the UK’s Online Safety Act (OSA) on 25 July and another after.
Also, why the hell is a human rights / climate org doing research on TikTok?
Yes, people would care. People who have been ringing the alarm bell regarding kids on social media for literally decades now.
Congress wouldn't care. Mark would make $ure of that...$omehow, but I can't quite figure out exactly what he would do. $omething $omething "campaign donation$".
Because such places are significant spots for propaganda and misinformation relating to both topics?
Guess those dumb TikTok-wannabe Shorts/Stories didn't work out.
Up next: Terrorist attacks coordinated via TikTok?
Or maybe a school shooting, leading to a ban on TikTok instead of guns.
Oh Murica..
Also, I denied all access, but it still suggested all my son's friends? How? Oh, and it won't even start without camera access.
I was pretty shocked. Still, a friend of mine, a teacher, tells me: you can't let your kid not have Snapchat, it's very important to them.
The Chinese apparently say: just regulate! TikTok in their country is fun, even educational, with safeguards against addiction, because they mandate it. Somehow we don't want that here? We see it as overreach? Well, I'm ready for some overreach (not ChatControl overreach, but you get what I mean). We leave it all up to the parents here, and all parents say: "Well, my kid can't be the only one not to have it."
Meanwhile, the kids I speak to tell me they regularly have vape shops popping up on Snapchat; some dudes sell vapes with candy flavors (outlawed here) until the cops show up.
Yeah, we also did stupid things, I know: we grew up, found porn books in the park (pretty gross in retrospect), drank alcohol as young as 15, etc. I still feel this is different. We're just handing it to them.
Edit: Idk if you ever tried Snapchat, but it is TikTok, chat, weird AI filters, and something called "stories", which for me features a barely dressed girl in a sauna.
Giving them smartphones? Moronic idea at best.
It works in China because they have chat control to the extreme.
Not really, they just tell the company to behave in this case. But yes they do have ChatControl to the extreme as well.
Here, the company would simply bribe the lawmakers, who would in turn spout off some mealy-mouthed gibberish on their party's favorite propaganda network, and business would continue as normal.
Yeah, it's OK to say no.
If the kid wants a phone and snapchat, there's nothing wrong with saying you simply won't be supplying that and if they want it they'd best figure out how to mow lawns. If you're old enough to "need" a phone you're old enough to hustle some yardwork and walk to the T-Mobile store yourself.
It can make calls. It can send/receive basic text messages.
The most addictive thing on that type of phone, if it can even be installed, is Snake.
Utilitarianism wins. Social media companies lose. I'm fine with that. The kid can still communicate with their parents at a moment's notice.
I don't think making a kid work for the phone is the solution here. The problem is intentionally addictive algorithms being given to children, not a lack of work ethic regarding purchasing a phone.
I think you are right to be worried, and I think you are correct that it is different:
IIRC, there were some Kremlin leaks some years ago indicating they knew how to "infect" a population with certain propaganda and have the disinformation live on or linger. Together with Meta's/Facebook's (illegal?) study where they experimented on people to try to make them sad by showing them certain types of posts.
So, I think it stands to reason that controlling what you consume means being in control of what you think; in other words: we are what we watch.
We know there are some feedback loops occurring, but I think that it is easier to get desensitized and start becoming accustomed to very extreme content due to the pressure to fit in; perhaps — once one has participated, it might be even harder to be deprogrammed (it requires facing the fact one behaved wrongly towards others).
There's also the fact that being a good person takes a lot of willpower, dedication, is inconvenient and is notoriously difficult to market as "fun".
It is more palatable for an impressionable kid to watch cheap foreign-state-backed radicalizing propaganda than it is to learn about injustices being perpetrated on our behalf by the state apparatus.
We have developed the habit of being wary of what we consume in order to police our emotions (i.e. minding our mind so no desensitization happens on our watch).
We have seen what the "baddies" can do: the indifference to the suffering they cause, and the cruelty and pettiness they are capable of.
But I digress,... I think you are right to be worried, but I am unsure about how to train kids to not fall into the pipelines.
Sure, it's no violation, but I bet that app is pedo central, and many kids even share their location with anyone. Parents have no clue.
Of course, the news rag can't publish the pictures/video and the accounts as proof. But we're supposed to take their word for it? Hard pass on that.
Now, I have seen advertisements that used sexism of various sorts. And this is common wherever advertising and capitalism take hold; it's a quick and dirty hack to help sell garbage. https://en.wikipedia.org/wiki/Sex_in_advertising
https://globalwitness.org/en/campaigns/digital-threats/tikto...
However, I'm looking at the primary image there: https://gw.hacdn.io/media/images/Tiktok_investigation_screen...
In the first picture, there are the nude->rude replacement games, bikini (which is not nudity), and models (which is not nudity). "Unshaven girl" could mean legs, armpits, and/or pubic area.
The second picture also has no nude people in it. The closest "skin" picture is another bikini, which again, is not nudity.
The 3rd picture is too blocked out, but it's likely more bikini pictures. Again, yes, you can see the labia bunching up (cameltoe). But this again is completely legal and normally seen at water parks and beaches.
I also note they said "We have deliberately not included examples of the hardcore pornography that was shown to us.". So yes, I do doubt they exist for any length of time here.
And the text search autocomplete is also hard, because many words are banned. So "nude" becomes "rude". "ICE protest" and similar become "party". Drones are "dior bags". And banning the replacement words strung together is basically whack-a-mole.
But no, this whole operation smells like "let us do anything in the name of ThE ChiLDrEn", including that scourge Chat Control.
And to be fair, I'd rather adolescents look at titties. And remember, this article is targeting 13-year-olds: they're already going through, or have gone through, puberty. Sexuality, and wanting to see what the other sex looks like, is completely natural. This weird quasi-religious shaming is just terrible for everyone.
The world is hostile and full of exploitation. It is no different on the internet.
In 10-15 years Gen Z will be complaining about how their generation didn't need to have the most expensive AI boyfriend/girlfriend to avoid getting bullied, or something ridiculous like that.
X, on the other hand, has literal advertisements for adult products in my feed, and I get followed by "adult" bot accounts several times a week that, when I click through to block them, often show me literal porn. Same with spam Facebook friend requests.
I think it boils down to a simple fact: trying to police user-generated content is always going to be an uphill battle, and it doesn't necessarily reflect on the company itself.
> Global Witness claimed TikTok was in breach of the OSA, which requires tech companies to prevent children from encountering harmful content...
OK, that is a noble goal, but I feel the gap between "reasonable measures" and "prevent" is vast.
I think it boils down to the simple fact that policing user-generated content is completely possible; it just requires identity verification, which is a very unpopular but completely effective idea. It's almost like we rediscovered, for the internet, the same problems that require identity in other areas of life.
I think you will also see a push for it in the years ahead. Not necessarily because of some crazy new secret scheme, but because robots will be smart enough to beat most CAPTCHAs and other techniques, and AI will be too convincing, causing websites to be overrun. Reddit is already estimated to be somewhere between 20% and 40% bots, and Reddit was caught with its pants down by a recent study in which an AI bot on r/changemyview racked up ridiculous amounts of karma undetected.
It's also pretty unpopular for a good reason.
There is a chilling effect that would go along with it. Like it or not, a lot of people use these social platforms to be their true selves when they can't in their real life for safety reasons. Unfortunately for some people their "true self" is pretty trashy. But it's a slippery slope to put restrictions (like ID verification) on everyone just because of a few bad actors.
Granted, I'm sure there's some way we could do that while maintaining moderate privacy, but it's technologically challenging, and I'm not alone in wanting tech companies to have less of my personal information, not more.
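For what it's worth, a privacy-preserving version is at least sketchable. Below is a toy of one possible shape (a hypothetical design, using the Python cryptography package): a trusted issuer checks your ID once and signs a minimal age claim, and sites verify the signature without ever seeing who you are. Real proposals add blinding or zero-knowledge proofs so the issuer can't link tokens to the sites you visit; this sketch omits that.

```python
# Hypothetical sketch: anonymous age attestation via a signed minimal claim.
# Requires: pip install cryptography
import json, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: after verifying your ID out of band, sign a claim that
# contains only the age boolean and a random nonce -- no identity at all.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

token = json.dumps({"over_13": True, "nonce": os.urandom(16).hex()}).encode()
signature = issuer_key.sign(token)

# Site side: verify the issuer's signature and read the claim. The site
# learns that *someone* the issuer vetted is over 13, and nothing else.
issuer_pub.verify(signature, token)  # raises InvalidSignature if forged
claim = json.loads(token)
assert claim["over_13"] is True
print("age attested; no name, birthdate, or ID number was shared")
```

The token deliberately carries nothing but the boolean and a nonce; the hard part, as noted, is stopping the issuer itself from correlating issuance with use, which is where the blinding machinery comes in.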
Here's a link to the wiki for actual reality television show that exists in real life, Hardcore Pawn (https://en.wikipedia.org/wiki/Hardcore_Pawn). That isn't a misspelling. Welcome to the phase of the TrumpTok Takeover where We Need To Do Something To Protect The Children. I wish you luck in the Telescreen portion. Remember, if you make woke facial expressions at the camera during any of the daily loyalty oaths you will be declared Antifa and reeducated.
>After a “small number of clicks” the researchers encountered pornographic content ranging from women flashing to penetrative sex.
Where? I'm a grown-ass adult who likes sex and has had a TikTok account for years now, and I can't find any of this. I can find people dancing, dressed in a way that would be perfectly acceptable in public, but where are the women flashing and the penetrative sex? Can anyone confirm that they've seen any of these things at all on TikTok, let alone after a "small number of clicks"?
The internet is full of "bad" (or at least undesirable under some circumstances) content. It's there on the internet, and we kind of accept its existence.
Then we train AI on it, and we're upset if it regurgitates it, so we have to add "safety".
Meanwhile social media sends you right to it ...
7 more comments available on Hacker News