Denmark's Government Aims to Ban Access to Social Media for Children Under 15
Mood
controversial
Sentiment
mixed
Category
other
Key topics
Denmark's government plans to ban social media access for children under 15, sparking debate on enforcement, effectiveness, and potential implications for online freedom.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 52m
- Peak period: 143 comments (Day 1)
- Avg / period: 40
Based on 160 loaded comments
Key moments
- 01 Story posted: Nov 7, 2025 at 11:28 AM EST (20 days ago)
- 02 First comment: Nov 7, 2025 at 12:20 PM EST (52m after posting)
- 03 Peak activity: 143 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Nov 18, 2025 at 4:46 AM EST (9 days ago)
Same people now: how will the poor company know that it's an underage user?? Oh noes!
The platform asks your government if you're old enough. You identify yourself to your government. Your government responds to the question with a single boolean.
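The single-boolean idea can be sketched like this. This is a conceptual toy, not the actual MitID protocol; the token, the registry, and the function name are all invented for illustration:

```python
# Conceptual toy, not the real MitID protocol: the platform asks the
# government verifier a yes/no question and learns only a boolean --
# no birthdate, no name, no other PII.

GOV_REGISTRY = {          # held only by the state, keyed by opaque tokens
    "token-abc": 16,
    "token-xyz": 13,
}

def is_old_enough(login_token: str, threshold: int) -> bool:
    """Government-side check: the answer is a single boolean."""
    age = GOV_REGISTRY.get(login_token)
    return age is not None and age >= threshold

# Platform side: all it ever learns is True or False.
print(is_old_enough("token-abc", 15))  # True
print(is_old_enough("token-xyz", 15))  # False
```

The privacy property lives entirely in the interface: the platform never sees the registry, only the answer.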
It would be possible for them to provide an open-source app, but design the cryptography in such a way that you couldn't deploy it anyway. That would make it rather pointless.
I too hope they design that into the system, which the Danish authorities unfortunately don't have a good track record of doing.
If the app is open source, what stops someone from modifying it to always claim the user is over 18 without an ID?
And using someone else's ID and password is a weakness shared by every method of auth.
Source: I wrote to Digitaliseringsstyrelsen in Denmark, where this solution will be implemented next year as a pilot, and they confirmed that the truly anonymous solution will not be offered on other platforms.
Digitaliseringsstyrelsen and the EU are truly, utterly fucking us all over by locking us into the trusted-computing platforms offered by the current American duopoly on the smartphone market.
https://digst.dk/it-loesninger/den-digitale-identitetstegneb...
The difference is meaningful. It's mostly a prisoner's dilemma. If only one person's porn habits are available, that's bad for them. If everyone's (legal) porn habits are available, it gets normalized.
The problem isn't my peers, it's the people in power and how many of them lack any scruples.
this is too narrow a view on the issue. the problem isn't that a colleague, acquaintance, neighbor, or government employee is going to snoop through your data. the problem is that once any government has everyone's data, they will feed it to PRISM-esque systems and use it to accurately model the population, granting the power to predict and shape future events.
Briefly, when the ID provider issues the ID, it gets cryptographically bound to your phone. When you use the ID to prove something to a site (age, citizenship, etc.), this is done using a zero-knowledge-proof-based protocol that allows your phone to prove to the site (1) that you have an ID issued by your ID provider, (2) that the ID is bound to your phone, (3) that the phone is unlocked, and (4) that the thing you are claiming (age, citizenship, etc.) matches what the ID says. This protocol does not convey any other information from or about your ID to the site.
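A toy sketch of the device-binding part of the scheme described above. A real deployment uses zero-knowledge proofs so the verifier learns nothing beyond the predicate; here, for brevity, a plain MAC stands in for the proof and the verifier is assumed to share the issuer's key, which a real system would never do. All names are invented:

```python
# Toy illustration only -- NOT a zero-knowledge proof. It shows just the
# *binding* idea: the issuer ties a predicate ("over_18") to a device
# key, so the credential can't be replayed from another phone.
import hashlib
import hmac

ISSUER_SECRET = b"issuer-signing-key"   # held by the ID provider

def issue_credential(device_key: bytes, predicate: str) -> bytes:
    """ID provider binds (device, predicate); no other attributes leak."""
    return hmac.new(ISSUER_SECRET, device_key + predicate.encode(),
                    hashlib.sha256).digest()

def site_verifies(device_key: bytes, predicate: str, cred: bytes) -> bool:
    """In reality the site checks a ZK proof; here it re-checks the MAC."""
    expected = issue_credential(device_key, predicate)
    return hmac.compare_digest(expected, cred)

phone_key = b"key-in-secure-element"
cred = issue_credential(phone_key, "over_18")
print(site_verifies(phone_key, "over_18", cred))       # True
print(site_verifies(b"other-phone", "over_18", cred))  # False: wrong device
```

The point of the binding is property (2) in the list above: a credential copied off one phone is useless on another.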
Otherwise a single person could donate their ID card and let everyone else authenticate with it.
Now you might counter and say it would be enough to give each card a sequential number independent of the person's identity, but then you run into another problem. Each service might accept each card only once, but there are many services out there, so having a few thousand donations could be enough to cover exactly the niche sites that you don't want kids to see.
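One standard answer to the card-donation problem is per-service pairwise pseudonyms: each service derives a stable, service-specific identifier from the card's secret, so it can enforce one-account-per-card while two services cannot link their users. A minimal sketch, with hashing standing in for the real credential scheme and all names invented:

```python
# Sketch of pairwise pseudonyms (simplified from eIDAS-style wallet
# designs): each service sees a stable identifier derived from the card
# secret and the service name. A service can reject a second account
# from the same card, but services cannot correlate users between them.
import hashlib

def pseudonym(card_secret: bytes, service: str) -> str:
    return hashlib.sha256(card_secret + service.encode()).hexdigest()[:16]

card = b"secret-inside-one-id-card"
print(pseudonym(card, "site-a"))   # stable: same value every time
print(pseudonym(card, "site-b"))   # different, unlinkable to site-a's
```

A donated card then burns exactly one account per service, rather than unlocking a fresh identity everywhere.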
There is no way to implement this without a complete authoritarian lockdown of everything. There will always be people slipping through the cracks. This means all this will ever amount to is harm reduction, but nobody is selling it on that platform. Nobody is saying that they are okay with imperfect compromises.
And before anyone asserts that the phone can be anonymous, that doesn't work, otherwise you can just have an app that claims to have a verified ID attached.
The social media platforms already measure more than enough signals to estimate a user's likely age; they could be required by law to do something about it.
It seems to me like it's either a privacy disaster waiting to happen (if not required) or everyone but the biggest players throwing out a lot of bathwater with very little baby by simply not accepting Danish users (if required).
The wording on the page also makes it sound like their threat model doesn't include themselves as a potential threat actor. I absolutely wouldn't want to reveal my complete identity to just anyone requesting it, which the digital ID solution seems to have covered, but I also don't want the issuer of the age attestation to know anything about my browsing habits, which the description doesn't address.
The biggest players in social media are precisely the ones that this law is targeting.
No one in charge of implementing this law is going to care whether some Mastodon server implements a special auth solution for Danish users or not, they are going to care that Facebook, TikTok, Instagram, etc. do so.
And if that little Mastodon server ends up hosting some content that is embarrassing or offensive to the Danish authorities, laws like this will surely not be used to retaliate...
Arbitrarily and selectively enforced laws seem like an obviously bad thing to me. If the government can nail me for anything, even if they practically don't, I'll be very wary of offending or embarrassing the government.
The law will obviously be framed in such a way as to hit the targets it is supposed to hit and avoid collateral damage. It's not like complete amateurs are writing our laws.
i think it'll get to: "these methods aren't good enough, we'll have to enforce digital id".
Denmark's constitution does have a privacy paragraph, but it explicitly mentions telephone and telegraph, as well as letters.[2] Turns out online messaging doesn't count. It'd be a funny one to get to whatever court, because hopefully someone there will have a brain and use it, but it wouldn't be the first time someone didn't.
[1] https://boingboing.net/2025/09/15/danish-justice-minister-we...
Regardless, this wouldn't run afoul of that. This is similar to restricting who can buy alcohol based purely on age; the identification process is just digital. MitID - the Danish digital identification infrastructure - allows a service to request specific details for a given purpose, such as the user's age, or just a boolean value indicating whether they are old enough. Essentially: the service can ask "is this user 18 or older?" and the ID service can respond yes or no, without providing any other PII.
That's the theory at least; nothing about snooping private communication, but rather forcing the "bouncer" to actually check IDs.
That has nothing to do with the medium of the ticket and is all about knowingly presenting a fake ticket. The ticket is a document proving your payment for travel. They could be lumps of dirt and it would still be document fraud to present a fake handful of dirt.
> That's the theory at least; nothing about snooping private communication, but rather forcing the "bouncer" to actually check IDs.
Hopefully the theory will reflect the real world. The 'return bool' to 'isUser15+()' is probably the best we can hope for, and should prevent the obvious problems, but there can always be more shady dealings on the backend (as if there aren't enough of those already).
This is Denmark. The country that read the EU legislation requesting the construction of a CA to avoid centralizing the system, and then legally bent the EU's rules and decided it was far better to create a centralized solution. I.e., the intent is a public-key cryptosystem with three bodies, the state being the CA. But no, they decided to hold both the CA and the key in escrow. Oh, and then decided that the secret should be a PIN such that law enforcement can break it in 10 milliseconds.
I think internet verification is at least 10 years too late. Better late than never. I just lament the fact we are going to get a bad solution to the problem.
That's very much not how Danish law works. The specific paragraph says "hvor ingen lov hjemler en særegen undtagelse, alene ske efter en retskendelse", translated as "where no other law grants a special exemption, [may] only happen with a warrant". That is, you can open people's private mail and enter their private residence, but you have to ask a judge first.
The relevant points I believe to be:
> All citizens are placed under suspicion, without cause, of possibly having committed a crime. Text and photo filters monitor all messages, without exception. No judge is required to order such monitoring – contrary to the analog world, which guarantees the privacy of correspondence and the confidentiality of written communications.
And:
> The confidentiality of private electronic correspondence is being sacrificed. Users of messenger, chat and e-mail services risk having their private messages read and analyzed. Sensitive photos and text content could be forwarded to unknown entities worldwide and can fall into the wrong hands.
> No judge is required to order such monitoring
That sounds quite extreme, I just can't square that with what I can actually read in the proposal.
> the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State
It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.
That all sounds extremely boring and political, but the essence is that it mandates a local authority to scan messages on platforms that are likely to contain child pornography. That's not a blanket scan of all messages everywhere.
So every platform, everywhere? Facebook and Twitter/X still have problems keeping up with this, Matrix constantly has to block rooms from the public directory, Mastodon mods have plenty of horror stories. Any platform with UGC will face this issue, but it’s not a good reason to compromise E2EE or mandate intrusive scanning of private messages.
I would not be so opposed to mandated scans of public posts on large platforms, as image floods are still a somewhat common form of harassment (though not as common as it once was).
It therefore breaks E2EE, as it intercepts messages on your device and sends them off to whatever 3rd party they are planning to use before those messages are encrypted and sent to the recipient.
> It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.
How can a judge be involved when we are talking about scanning hundreds of millions if not billions of messages each day? That does not make any sense.
I suggest you re-read the Chat control proposal because I believe you are mistaken if you think that a judge is involved in this process.
I dispute that. The proposal explicitly states it has to be true that "it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material;"
> How can a judge be involved
Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.
I suggest YOU read the proposal, at least once.
> it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material
That is an absolutely vague definition that basically encompasses all services available today, including messaging providers, email providers, and so on. Anything can be used to send pictures these days. So anything can be targeted, ergo it is a complete breach of privacy.
> Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.
Your assertion makes no sense. The only way to know if a message contains something inappropriate is to scan it before it is encrypted. Therefore all messages have to be scanned to know if something inappropriate is in it.
A judge, if necessary, would only be participating in this whole charade at the end of the process not when the scanning happens.
This is taken verbatim from the proposal that you can find here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A20...
> [...] By introducing an obligation for providers to detect, report, block and remove child sexual abuse material from their services, .....
It is an obligation to scan, not a choice based on someone's judgment, such as a judge's; ergo no one is involved at all in the scanning process. There is no due process, and everyone is under surveillance.
> [...] The EU Centre should work closely with Europol. It will receive the reports from providers, check them to avoid reporting obvious false positives and forward them to Europol as well as to national law enforcement authorities.
Again here no judge involved. The scanning is automated and happens automatically for everyone. Reports will be forwarded automatically.
> [...] only take steps to identify any user in case potential online child sexual abuse is detected
To identify a user who may or may not have shared something inappropriate means they know who the sender was, who the recipient was, what the message contained, and when it happened. Therefore it is a complete bypass of E2EE.
This is the exact same thing we are seeing now with the age requirements for social media. If you want to ban kids who are 16 and under, then you need to check everyone's ID to know how old everyone is so that you can stop them from using the service.
With scanning, it is exactly the same. If you want to prevent the dissemination of CSAM on a platform, then you have to know what is in each and every message so that you can detect and report it, as described in my quotes above.
Therefore everyone's messages will be scanned, either by the services themselves or by a 3rd-party business in charge of scanning, cataloging, and reporting its findings to the authorities. Either way, the scanning will happen.
I am not sure how you can argue that this is not the case. Hundreds of security researchers have spent the better part of the last 3 years warning against such a proposal, are you so sure about yourself that you think they are all wrong?
You're taking quotes from the preamble which are not legislation. If you scroll down a little you'll find the actual text of the proposal which reads:
> The Coordinating Authority of establishment shall have the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State to issue a detection order
You see, a judge, required for a detection order to be issued. That's how the judge will be involved BEFORE detection. The authority cannot demand detection without the judge approving it.
I really dislike your way of arguing. I thought it was important to correct your misconceptions, but I do not believe you to be arguing in good faith.
Censorship really is one of the few laws that are pretty unambiguous, that's really just "No, never again". Not that this stops politicians, but that's a separate debate.
And this is why laws should always include their justification.
The intent was clearly to protect people - to make sure the balance of power does not fall too much in the government's favor that it can silence dissent before it gets organized enough to remove the government (whether legally or illegally does not matter), even if that meant some crimes go unpunished.
These rules were created because most current democratic governments were created by people overthrowing previous dictatorships (whether a dictator calls himself king, president or general secretary does not matter) and they knew very well that even the government they create might need to be overthrown in the future.
Now the governments are intentionally sidestepping these rules because:
- Every organization's primary goal is its own continued existence.
- Every organization's secondary goal is the protection of its members.
- Any officially stated goals are tertiary.
HN is 'social media', btw.
Sure is! If you read the thread before posting in the thread, you'd see that it's come up already.
I'm not entirely sure how I'd want to word it, but it would be something like: It is prohibited to profit from engagement generated by triggering negative emotions in the public.
You should be free to run a rage-bait forum, but you cannot profit from it, as that would potentially generate a perverse incentive to undermine trust in society. You can do it for free, to ensure that people can voice their dissatisfaction with the government, working conditions, billionaires, the HOA and so on. I'd carve out a slight exception for unions being allowed to spend membership fees to run such forums.
Also politicians should be banned from social media. They can run their own websites.
The serious answer is that banning "social media" is a bit silly. We should concentrate on controlling the addictive aspects of it, and ensuring the algorithms are fair and governed by the people.
To me there is no question that children should grow up protected from harmful substances. You don't want kids to smoke, scrolling algo feeds is not better. There is enough interesting internet out there without social media!
That's a dogmatic axiom until you can show how TikTok causes lung cancer.
The effects of social media are more complex and nuanced than those of smoking. There are a lot of studies showing that social media has a negative effect on mental well-being. When someone dies of loneliness, or birth rates collapse and young people have less sex than ever, social media is never the only cause and might not even be the main one, but it seems to play an important role.
Also, I don't think all social media is bad. I do love these discussions on HN even or especially when we don't agree, but tiktok and similar services have a lot of bad incentives with regards to user well being.
Also, I'm not arguing in favour of Social Media here. I just have seen enough moral panics in my years to have become allergic to them. For instance, I'm still waiting for evidence that computer games increase violence in real life, and how comics rot kids' brains.
Strong arguments demand strong empirical evidence. "Well-being" is not, in fact, a good metric (unless we apply it to other aspects of modern life). In fact, the very idea that wellbeing should be a concern in policy is dystopic: Remember that one of the reasons books are banned in "Fahrenheit 451" is that they made readers unhappy.
They are complaining that young males are not having easy one-night stands. They also don't like that girls are empowered to say no. In their minds, the dynamic is all wrong when a young man is not a complete, pressuring jerk and she can say no.
They don't care about an underclass of 16-year-olds with a destroyed life and a baby.
You know how in school they used to tell us we can't use calculators to solve math problems? Same thing. It can't be done by individual parents either, because then kids would get envious and that in itself would cause more problems than it would solve.
It is important for kids to get bored, to socialize in person, to solve problems the hard way, and develop the mental-muscles they need to not only function, but to make best use of modern technology.
It is also important that parents don't use technology to raise their children (includes TV). Most parents just give their kids a tablet with youtube these days.
This is a danger to their mental development. Look at teacher forums all over. r/Teachers on reddit should be illuminating. Tech and parents sticking devices to their kids instead of raising them properly has resulted in utter disaster. If there was no harm imposed on children, I would agree that it is a nanny-state thing.
I myself grew up with a desktop computer from around age 7 and it shaped me early on in a positive way to be curious. Computers were also a central part of my social life. There are many positive things that kids can get out of computers, so I find the comparison with alcohol to be hyperbolic.
The usual figurative nanny state refers to a situation in which unreasonable rules and regulations are imposed on the behavior of grownups, not children.
Yes. Kids getting access to knowledge that clicks with them earlier than later makes a huge difference.
Which is exactly why so many people are rushing in to control what kids get exposed to. You seem to have pretty strong thoughts on the issue yourself; if you agree on the possible negative impact, you can't also deny the possibility of positive impact.
The dose makes the poison. I think we can understand how extreme positions tend to bring more negative than positive consequences, regardless of the rhetoric.
[edit: rephrased the last part]
> they learn how to function without technological dependencies.
So like the Amish? Or are they still too technologically dependent and children need to be banned from pulleys, fulcrums, wheels, etc.?
Teaching kids how to code isn't all that meaningful on its own; knowing what to do once you learn how to code is. If your plan is to teach your kid how to code, teach them to solve problems without code at that age. Unless you're serious about thinking learning at age 5 vs age 13 would make a big difference.
I think every kid 13 and above should have an rpi too.
There were a plethora of books in the library on how to program, and here you are suggesting I, and everyone like me, be banned from doing so. You'd probably also ban me from the library by assuming I couldn't read aged 5. I certainly could, especially computer manuals. The computer was an amazing thing which did exactly what I told it, and I learned quickly how precise I needed to tell it, and when I made a mistake, it repeated my mistakes over and over without noticing. I learned more about digital ethics age 5 trying to write games than the typical CEO learns going on a "Do Not Create The Torment Nexus" course.
You'd insist I not be allowed to even use software, let alone write my own. You'd be actively cutting off my future professional life, and depriving entire nations of bedroom programmers cum professional software engineers, with your ill-thought-out ban.
If your children show an aptitude or a fascination for a topic, I hope you feed that and praise them for it.
First, my proposal is a delay, not a ban. This is such a good idea that a lot of FAANG CEOs are already doing this for their kids' welfare (more or less).
I think the overall welfare of kids should be weighed against the benefits.
I think you should have been learning to tinker with electronics, solve math algorithms, and develop all kinds of curiosities. The future of being a programmer involves competing with LLMs; you have to be good at knowing what to program. Humans aren't needed for simply knowing how to write code.
I acknowledge that there will be exceptions, and perhaps that should be considered. But also look up terms like "ipad babies" and how gen alpha is turning out. Most parents don't teach their kids how to code in BASIC, and content regulation for kids is futile, unless you want the government monitoring your devices "for the children's sake".
> If your children show an aptitude or a fascination for a topic, I hope you feed that and praise them for it.
Same, but I hope you let them learn things in the right order and consider their overall long-term wellbeing instead of temporary satisfaction. Children did fine without computers for all of humanity's history; the nature of children hasn't changed in the past 3 decades. What you consider feeding might actually be stagnating. If there is a good and practical way to make sure that children are developed enough to interact with computers, and we can also make sure that the content they consume is age-appropriate without implementing a dystopian surveillance state, I'm all for it.
But pretending the problem doesn't exist, and letting 99% of children suffer because 1% of kids might learn BASIC doesn't sound like a good plan.
It's extraordinarily meaningful, as it helps in brain development.
Coding is just more rewarding, it is important to learn how to solve problems with less rewarding systems. Would you have wanted to solve algebra problems on paper if you knew python? You don't need to solve those problems on paper, but it is good for brain development. Even better than coding for example. Keep in mind that a child's attention window is limited, this is very much a zero sum situation.
I had a good time programming BASIC on my V-Tech pseudocomputer, at age 9. But that's a world away from tiktok, reels and the predatory surveillance economy.
You can teach kids electronics, have them construct toys that work on batteries,etc... work on components that don't require programming. teach them algorithms, math, crypto,etc.. without using computers.
If you're teaching kids how to code, you should give them the skills that will help them learn _what_ to code first?
All 3 were a total hotbed of bad influences for a child: Team Fortress had trade pub servers with people doing sprays of literal CP and wearing custom lewd skins to harass users with them - and people with very questionable social skills and intentions huddled up in realtime microphone comms with children, Roblox's predator problem for the last 14+ years (at least that I can attest) is suddenly en vogue now that they're a public company and there's stock shorting to be had, GMod is still the community with the most colorful vocabulary I've ever encountered - plus grooming. And much more.
Indeed, you can (and I did) get burned by these actualities when exposed to such communities in your youth, and it can cost you real money, real time, real idealism/innocence, and real mental health. However, I think being exposed to software, systems, and games that inspired curiosity and led me toward a path of WANTING to contribute brought me to this software development career and life path, and it would have been much more inaccessible and unknown to me any other way. And I favorited a comment from another HN user a few days ago that goes into astute depth on why that path can only be organically introduced and self-governed [1].
I referred to these places earlier in my comment as "bad influences". I think the single-most powerful thing a parent can do tasked with this dilemma - especially during an upbringing in systemically hard, uncertain, and turbulent times - is teaching them how to identify, avoid, and confront bad influences. Equipped with that, and knowing how to handle yourself, is of utmost importance.
Some kids learn to drink and smoke at that age too, and many turn out ok.
Keep in mind that alcohol is also a carcinogen. As with cigarettes, even one drink shouldn't be tolerated. Even if a certain amount has no ill effects on average, the impact on individuals depends on individual factors, so one harmless drink for you might be one deadly drink for someone else. It is poison.
That said, I don't judge anyone who uses substances. But there is no tolerable threshold to giving children poison.
It doesn't matter how good the tool can be, what matters is how it actually is used
This is perhaps one of the most bizarre opinions I have ever read. This would bar under 13s from using everything from vending machines to modern fridges. What would you consider "using"? Would under 13s be blocked from riding in any car with "smart" features?
This is a perfect example of the kind of nonsensical totalitarian extremism you see on here that people only espouse because they believe it would never affect them. It goes completely against the Hacker ethos.
Remember YikYak? IIRC that was worse for kids than most of the big social media sites, but how do you write a law that anticipates the next YikYak without banning everything?
As someone who got my first BlackBerry at 11, which really spurred a lot of my later interests which are now part of my career or led to it indirectly, I am opposed to paternalistic authoritarian governments making choices for everyone.
(Funny anecdote, but I didn't even figure out how to sign up for Facebook until I was 11-12, because I wouldn't lie about my age and it would tell me I was too young. Heh.)
Moreover, just because laws and regulations are applied inconsistently in the US (and we are talking about Denmark here) does not mean we should do away with them completely.
I highly recommend discussing a smartphone pact such as http://waituntil8th.org with fellow parents before anyone in their friend group gets a cell phone.
Do parents actually fall for this drivel?
Give people technology, but let's finally have an honest conversation about it. As an adult, it's already hard to muster enough self-control to not keep scrolling.
I don't scroll social media. When I was 14-17, sure. But then I lost interest, much like most of my peers did.
(I do probably refresh HN more than I should though, but I think that's probably the least evil thing I could do compulsively...)
Social media for teens is ubiquitous and where your peers connect. It’s being included in your social group, not opt-in thrill seeking.
Most teens will have multiple accounts for various networks - private accounts for their friends, and then again for closer friends. Or they use apps like Discord that parents have no visibility into at all. There is a lot that most parents never see.
For better or worse.
You're either operating with an anachronistic notion of what constitutes social media, or you're very out of touch with the public. Not sure which one.
The "myspaces" and "facebooks" are trending down, but other forms of social media like tiktok, discord, reddit, youtube, etc are alive and well, still hooking kids young as they always have.
You wouldn't have called the equivalent when you were a kid problematic or even had a word for it. It's often just how they communicate with friends.
I feel as though algorithms dedicated to grabbing as much attention as possible are a major problem (youtube, tiktok), while notification checking on public spaces is also similarly an issue.
But is it so hard to teach your kids how to internet? I'd advocate for restrictions, but banning seems silly.
What we define as "social media" I think is important. I don't really consider things like TikTok to be "social media" even if there is both a social component and a media component, since the social part is much smaller in comparison to the media part. People aren't communicating on TikTok (I think), which is what people concerned about "being left out by their peers" would be referring to. This type of "social" media probably is not dying, but I think is likely stagnant or will become stagnant in growth, while traditional "social media" continues to regress over the next decade.
Parents are doing what they can, but it inevitably comes down to “but my friend x has it so why can’t I have it” - so all and any help from government / schools is a good thing.
This is so, so, so obviously a nasty, dangerous technology - young brains should absolutely not be exposed to it. In all honesty, neither should older ones, but that’s not what we’re considering here.
Do you buy your kids a toy every time you go to the store? Do you feed them candy for dinner?
273 more comments available on Hacker News