Character.ai to Bar Children Under 18 From Using Its Chatbots
Posted 2 months ago · Active 2 months ago
nytimes.com · Tech · story
Tone: heated, mixed · Debate: 80/100
Key topics
AI Safety
Child Protection
Online Regulation
Character.ai is banning users under 18 due to concerns over its chatbot's impact on children, sparking debate about the effectiveness of this measure and the broader implications for AI development and regulation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 45 comments in 3-6h
- Avg per period: 7.9
- Comment distribution: 95 data points (based on 95 loaded comments)
Key moments
- 01 Story posted: Oct 29, 2025 at 9:52 AM EDT (2 months ago)
- 02 First comment: Oct 29, 2025 at 11:35 AM EDT (2h after posting)
- 03 Peak activity: 45 comments in 3-6h (hottest window of the conversation)
- 04 Latest activity: Oct 31, 2025 at 2:12 AM EDT (2 months ago)
ID: 45746844 · Type: story · Last synced: 11/20/2025, 6:56:52 PM
Teen in love with chatbot killed himself – can the chatbot be held responsible?
https://news.ycombinator.com/item?id=45726556
From what I understand, some inputs from the user will trigger a tool call that searches the memory database and other times it will search for a memory randomly.
With that said, I think people started falling in love with LLMs before memory systems and they probably even fell in love with chatbots before LLMs.
I believe that the simple, unfortunate truth is that love is not at all a rational process, and your body doesn't need much input to produce those feelings.
0. https://www.theguardian.com/technology/2023/jul/25/joseph-we...
These kids hammer H100s for 30+ hours a week but will revolt at ads or the idea of paying money.
C.ai probably only exists at its current size because Noam had cheap access to TPUs, and to people who could scale inference on them, at the earliest stages of their growth (and obviously because he could raise funding on his pedigree, compared with what others have to deal with).
Eventually, if the unit economics start to work, they can always roll this back, but I think people are underestimating how much of a positive this is for them.
Something tells me that this ain't gonna work. Kids and teens are more inventive than they probably realize.
So that the browser could ask: Hey government, is this user above 18? And the government just responds with yes or no.
Sites would then only need to whitelist browsers with this check and would merely see an “allowed” claim for whatever type of site they have.
Parents or users could use parental controls to “allow” or “block” certain categories, meaning website creators wouldn’t even get any info on the user.
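The flow described above can be sketched roughly as follows. This is a hypothetical illustration, not a real protocol: a government attester signs a bare over-18 claim tied to a one-time nonce, and the site verifies the claim without learning who the user is. A real deployment would use asymmetric or blind signatures (so the site never holds the attester's key); the shared HMAC key here only keeps the sketch self-contained.

```python
import hmac, hashlib, json, secrets

# Hypothetical sketch: the attester signs only {"over_18": bool, "nonce": ...},
# so the relying site learns a single boolean and nothing about identity.
ATTESTER_KEY = secrets.token_bytes(32)  # held by the government attester

def issue_claim(over_18: bool) -> dict:
    """Attester side: sign a bare yes/no claim with a fresh nonce."""
    claim = {"over_18": over_18, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(claim: dict) -> bool:
    """Site side: recompute the signature over the claim fields and compare."""
    payload = json.dumps(
        {"over_18": claim["over_18"], "nonce": claim["nonce"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claim.get("sig", ""), expected)

token = issue_claim(True)
assert verify_claim(token)   # untampered claim verifies
token["over_18"] = False     # tampering invalidates the signature
assert not verify_claim(token)
```

The point of the design is that the site only ever handles the signed boolean; parental controls could then gate which claim categories the browser is willing to request.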
The additional context here is that Google acquired them to get back Noam Shazeer and took some other members of their technical staff.
So the current company is pretty much shambling along after having served its purpose to all stakeholders, and Google probably doesn't really care more than avoiding any sort of liability.
A good, privacy-preserving age check could be implemented instead, but they won't do it, because it's not about keeping the kids away (and never was).
And your example with Instagram is great because they've been caught running experiments actively harming children.
In any case, there is a general problem on the internet regarding how to allow people to remain reasonably anonymous while keeping children out of certain services.
Although it'd still be lame if it's so easy to break that someone can share the key, and then the kids don't need to learn anything.
If a kid thinks like an adult, behaves like an adult and can't be distinguished from an adult from their online presence, let them use the chatbot. On the other end, I wish they'd flag immature adults as kids as well.
18 is an arbitrary number, and if we have more appropriate ways to judge if someone is ready or not (assuming their check is worth its salt), it should be fine to defer to that.
It's not like they're going to a bar to do tequila shots or scam retirees for insurance money.
In both cases they went nuclear in a way that implies they actually don't care whether the current product survives, as long as C.ai (read: Google) isn't exposed to the ongoing risk.
https://news.ycombinator.com/item?id=45733618
On a similar note, I was completing my application for YC Startup School / the co-founder matching program. And when listing possible ideas for startups, I explicitly said I'm not interested in pursuing AI ideas at the moment: AI features are fine, but not as the main pitch.
It feels like, at least for me, the bubble has popped. I've also argued recently that the way the bubble might pop would be a legal-liability collapse in the courts. https://news.ycombinator.com/item?id=45727060
Add to this the fact that AI was always a vague folk category of software: it's used for robotics, NLP, and fake images. I just don't think it's a real taxon.
Similar to the crypto buzz from the last season, the reputable parties will exit and stop associating, while the grifters and free-associating mercenaries will remain.
Even if you are completely selfish, it's not even hugely beneficial to be in the "AI" space, at least in my experience: customers come in with huge expectations and non-huge budgets. Even if you sell your soul to implement a chatbot that will replace 911 operators, at this point the major actors have already done so (or not), and you are left with small companies that want to be able to fire 5 employees and will pay you 3 months of employee salary if you can get it done by vibe-code-completing their vibe-coded prototype within a 2-3 deadline.
My personal favorite story is when I told my youngest aunt about a video game my cousin wanted. She said no, absolutely not. Then she proceeded to buy the game for her 10-year-old. A game she was carded for. A game whose box says it's M-rated, has adult themes, and so on. She called me later in horror about how inappropriate this game I had told her about 2 weeks earlier was. How could they make games like that for children, she says, about the game she was carded for because it's only for adults.
I use her as an example, but that situation describes a lot of parents. I personally think it's not the government's place to say how much exposure I want to give my child to the internet, but I have rules and boundaries around that with my kid. Many of her friends have free access and have always had it since they were toddlers. People say it's parents not being savvy, but honestly it's parents not caring. Parental controls have been around for over 30 years and they have always been dead simple. But they do increase the whining in your life from your kids, and that means if parents can allow it, many will. I have no faith that a law will stop significantly more kids than no law. I know too many parents who allow their kids to do things they know are harmful because "I don't want them to feel left out" or because they don't want to deal with whining.
I don't see how that follows at all. Considering children can't consent, their parents' consent is the only one that matters. The argument would simplify to
> Most people would consent to X, therefore we shouldn't require consent for X.
Which sounds ridiculous, imagine if we were all forced to do what the majority likes. Even without a tyranny of the majority (where the majority would be a homogenous group) that would be a dystopia.
> I personally think that it's not the government's place to say how much exposure I want to give my child to the internet, but I have rules and boundaries around that with my kid.
First, I am not necessarily talking about "the government" (what exactly is that anyway: judicial, legislative? state, federal?). For example, it could be a site's policy, which could in turn be determined in the courts based on liability. If courts awarded more damages to kids harmed by chatbots than to adults, that would push companies to limit minors' access without laws or executive orders (whether that counts as "the government" to you depends, but I find it more helpful to think in terms of specific groups instead of talking and thinking of THE GOVERNMENT as a homogeneous hive mind).
Your kid is on the fucking computer all day building an unhealthy relationship with essentially a computer game character. Step the fuck in. These companies absolutely have to make liability clear here. It's an 18+ product, watch your kids.
You're more optimistic than I am. Their announcement is virtue signaling at best. Nothing will come from this. Kids will figure out a way around their so-called detection mechanisms, because if there were any false positives for adults they would lose adult customers _and_ kid customers.
The universe of "I didn't know the kids would make deep fakes of their classmates" is yet to come. Some parents are going straight to fucking jail. Talk to your kids; things are moving at a dangerous pace.
https://news.ycombinator.com/item?id=44723418
It is also highly compatible with the internet both in terms of technical/performance scalability and utility scalability (you can use it for just about any information verification need in any kind of application).
Every time I hear about some dumb approach to age verification (conversation analysis... really?) or a romance scam story because of a fraudster somewhere in Malaysia... I have the need to scream: THERE IS A CORRECT SOLUTION.
1. Backups and account recovery: we’re working with humans here. They will lose their keys in great numbers, sometimes into the hands of malicious actors. How do users then recover their credentials in a quick and reliable manner?
2. Fragmentation: let’s be optimistic and say digital credentials for driver’s licenses are issued by _only_ 50 entities (one per state). Assuming we don’t have a single federal format for them (read: a politically infeasible national ID), how does Facebook, let alone some rando startup, handle parsing and authenticating all these different credential formats? Oh, and they can change at any time, due to some rando political issue in the given state.
OP, you clearly know all this, so I’m just reminding you as someone down in the identity trenches.
2. The data format issue is (or was) indeed a concern, though it was never insurmountable. A data dictionary would have been the most straightforward approach to address it: https://cipheredtrust.com/doc/#data-processing
I say data format discernment was a concern because, as fate would have it, we now have the perfect tech to address it: LLMs. You can shove any data format into an LLM and it will spit out a transformation into what you are looking for, without the need to know the source format. Browsers are integrating LLM features as APIs, so this type of use would be feasible for both front-end and back-end tasks.
My proposal is here: https://news.ycombinator.com/item?id=45141744
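The LLM-as-format-normalizer idea above could be sketched like this. Everything here is hypothetical: `call_llm` is a placeholder for whatever completion API you actually use, and the canonical field names are invented for illustration. The one non-negotiable part is validating the model's output against the schema locally, since the model may return anything.

```python
import json

# Hypothetical canonical schema every state credential gets normalized into.
CANONICAL_FIELDS = ["issuer", "subject_over_18", "expires"]

def build_prompt(raw_credential: str) -> str:
    """Ask the model to map an arbitrary credential blob onto one schema."""
    return (
        "Extract the following fields from this credential and return JSON "
        f"with exactly these keys {CANONICAL_FIELDS}:\n{raw_credential}"
    )

def normalize(raw_credential: str, call_llm) -> dict:
    """Get the canonical form from the model, then validate the keys locally."""
    parsed = json.loads(call_llm(build_prompt(raw_credential)))
    if set(parsed) != set(CANONICAL_FIELDS):
        raise ValueError("model returned unexpected schema")
    return parsed

# A stub stands in for a real model API so the sketch is runnable:
fake = lambda prompt: json.dumps(
    {"issuer": "DMV-CA", "subject_over_18": True, "expires": "2027-01-01"}
)
print(normalize("<arbitrary state DL payload>", fake))
```

Note the trade-off: this sidesteps hand-writing 50 parsers, but the local schema check (and ideally a signature check upstream) is what keeps a hallucinated or malformed response from being trusted.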
What if a child is at school where there are Chromebooks and teachers aren’t as tech savvy as the majority of hacker news?
What if a child is at a library that has Chromebooks you can take out and use for homework?
What if a child is at an older cousin’s place?
What if a child is at a park with their friends and uses a device?
Should parents be next to their child, helicopter parenting, 24/7?
Is that how you remember your childhood? Is that how you currently parent, giving zero autonomy to children?
Blaming parents is ridiculous. Lots of parents aren’t tech savvy and aren’t capable of becoming tech savvy and staying on top of the latest tech thing.
Plus you're just setting kids up for failure by keeping them from understanding the adult world. You can't keep them ignorant and then drop them in the deep end when they turn 18, and expect good outcomes.
You can explore your identity independently of parental supervision just fine without the internet. I'd much rather have my kids have 5 good friends in real life and spend time together offline, than 500 online (10 of which are predators) while they are spending all their time in their bedrooms or in front of screens.
How would you not have survived to adulthood without the internet? You know how that sounds, right? Billions of kids have grown up just fine without the internet (and hello, poverty?).
And these days internet integration in school is far stronger, my 6 year old's daily homework is entirely online.
Remember a company is just a building filled with people. People are exploiting & profiting off your kids, not companies.
This is just TikTok tier crap.
I'm no authoritarian - simply the parents need to take responsibility for their kids & their actions, how they spend their time, what they allow into their minds.
Most countries’ child-welfare laws state that the parents (especially the mother) are ultimately responsible for their children’s welfare. Remember that kids cannot consent; the parents carry that responsibility.
Oh and I'm not averse to children at all.
Maybe american culture is just rotten to the core and how you raise your kids is just bewildering and gross to outsiders. At this point I feel that american parents are fully consenting to whatever is happening to your kids (being brain raped by tiktok/instagram/snap/fb/youtube/porn and others ("news")).
Letting my child simply use an online service isn't even vaguely similar to any of these things.
> Maybe american culture is just rotten to the core and how you raise your kids is just bewildering and gross to outsiders. At this point I feel that american parents are fully consenting to whatever is happening to your kids (being brain raped by tiktok/instagram/snap/fb/youtube/porn and others ("news")).
I'm not American, I live in GB and know all too well about overreaching internet regulations.
May I also point out the irony of complaining about pornography while simultaneously using "rape" as an adjective when a more appropriate one would have sufficed?
I literally circumvented website blocking using a VPN as a kid; no one can stop anyone from going "online" in 2025.
I'd like to believe that most actual people want to protect kids.
It's easy to write off corporations and forget that they are founded by real people and employ real people... some with kids of their own or with nieces or nephews etc, and some of them probably do really care.
Not saying character.ai is driven by that but I imagine the times they've been in the news were genuinely hard times to be working there...
> An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”
> [Dr. Nina Vasan] said the company should work with child psychologists and psychiatrists to understand how suddenly losing access to A.I. companions would affect young users.
I hope there’s a more gradual transition here for those users. AI companions are often far more available than other people, so it’s easy to talk more and get more attached to them. This restriction may end up being a net negative to affected users.
https://www.bbc.co.uk/news/technology-67012224