OpenAI Says Over a Million People Talk to ChatGPT About Suicide Weekly
Posted 2 months ago · Active 2 months ago
techcrunch.com · Tech · Story · High profile
Debate: 80/100 (heated, mixed)
Key topics
Artificial Intelligence
Mental Health
Suicide Prevention
OpenAI reports that over a million people discuss suicide with ChatGPT weekly, sparking debate about the role of AI in mental health support and the responsibility of tech companies.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h after posting
Peak period: 63 comments (0-6h)
Avg per period: 17.8 comments
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Oct 27, 2025 at 6:26 PM EDT (2 months ago)
2. First comment: Oct 27, 2025 at 8:25 PM EDT (2h after posting)
3. Peak activity: 63 comments in 0-6h (the hottest window of the conversation)
4. Latest activity: Oct 31, 2025 at 11:59 PM EDT (2 months ago)
ID: 45727060 · Type: story · Last synced: 11/20/2025, 8:09:59 PM
https://www.nimh.nih.gov/health/statistics/mental-illness
Most people don't understand just how mentally unwell the US population is. Of course there are one million talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.
At least OpenAI is trying to do something about it.
> At least OpenAI is trying to do something about it.
In this instance it’s a bit like saying “at least Tesla is working on the issue” after deploying a dangerous self driving vehicle to thousands.
edit: Hopefully I don't come across as overly anti-llm here. I use them on a daily basis and I truly hope there's a way to make them safe for mentally ill people. But history says otherwise (facebook/insta/tiktok/etc.)
I would argue that both Tesla self driving (on the highway only) and ChatGPT (for professional use by healthy people) have been more good than bad.
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
Would be more meaningful to look at the % of people with suicidal ideation.
Depression, schizophrenia, and mild autism (which by their accounting probably also includes ADHD) should NOT be thrown together into the same bucket. These are wholly different things, with entirely different experiences, treatments, and management techniques.
As someone who actually has an ASD diagnosis, and also has kids with that diagnosis too, this kind of talk irritates me…
If someone has a clinical diagnosis of ASD, they have a psychiatric diagnosis per the DSM/ICD. If you meet the criteria of the “Diagnostic and Statistical Manual of Mental Disorders”, surely by that definition you have a “mental disorder”… if you meet the criteria of the “International Classification of Diseases”, surely by that definition you have a “disease”
Is that an “illness”? Well, I live in the state of NSW, Australia, and our jurisdiction has a legal definition of “mental illness” (Mental Health Act 2007 section 4):
"mental illness" means a condition that seriously impairs, either temporarily or permanently, the mental functioning of a person and is characterised by the presence in the person of any one or more of the following symptoms-- (a) delusions, (b) hallucinations, (c) serious disorder of thought form, (d) a severe disturbance of mood, (e) sustained or repeated irrational behaviour indicating the presence of any one or more of the symptoms referred to in paragraphs (a)-(d).
So by that definition most people with a mild or moderate “mental illness” don’t actually have a “mental illness” at all. But I guess this is my point: this isn’t a question of facts, just of how you choose to define words.
You’re talking about autism. The reply is about autism spectrum DISORDER.
Different things, exacerbated by the imprecise and evolving language we use to describe current understanding.
An individual can absolutely exhibit autistic traits, whilst also not meeting the diagnostic criteria for the disorder.
And autistic traits are absolutely a variant of normalcy. When you combine many together, and they affect you in a strongly negative way, now you meet ASD criteria.
Here’s a good description: https://www.autism.org.uk/advice-and-guidance/what-is-autism...
BAP is very common among (1) STEM professionals, (2) close blood relatives of people with clinical ASD (if you have a child or sibling with an ASD diagnosis, then if you yourself don’t have ASD, odds are high you have some degree of BAP), (3) people with other psychiatric diagnoses (especially those known to have a lot of overlap with ASD, e.g. ADHD, personality disorders, PTSD, OCD, eating disorders, the schizophrenia spectrum), (4) certain LGBT subgroups (especially transgender people) - all of whom have heightened odds not just of having BAP / subclinical ASD, but clinical ASD too
Like ASD, BAP skews male, but women can have it too. (The average man is a little bit more autistic than the average woman.) Also, autistic traits are positively correlated between romantic partners, so a woman in a relationship with a man with BAP or ASD is more likely to have some degree of BAP herself (as well as being more likely to have clinical ASD)
BAP itself is a matter of degree… autistic traits are a continuum and we are all somewhere on it (actually a one-dimensional continuum is a simplification, it is a multidimensional construct, but a useful simplification). Clinicians draw a line at some point (they don’t all draw it at the same place, and its location varies across time and space and culture and even clinical subcultures); if you are on one side of that line you have clinical ASD, if you are on the other you don’t. If you are on the non-clinical side of the line, but nearing it, you have BAP… but “nearing” it subdivides into people who are closer and people who are further away.
1. Social media -> connection
2. AGI -> erotica
3. Suicide -> prevention

All these for engagement (i.e. addiction). It seems like the tech industry is the root cause itself, trying to mask the problem by brainwashing the population.
https://news.ycombinator.com/item?id=45026886
When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather than be a helpful part of trying to solve it.
I know that many teens turn to social media. My strong opinions against that show up in other comments...
I see that explanation for the increased suicide risk caused by antidepressants a lot, but what’s the evidence for it?
It doesn’t necessarily have to be a study, just a reason why people believe it.
There is also a strong parallel to manic depression. Manic depressives have a high suicide risk, and it usually happens when they are coming out of depression. With akathisia (fancy way to say inner restlessness) being the leading indicator. The same pattern is seen with antidepressants. The patient gets treatment, develops akathisia, then attempts suicide.
But, as with many things to do with mental health, we don't really know what is going on inside of people. While also knowing that their self-reports are, shall we say, creatively misleading. So it is easy to have beliefs about what is going on. And rather harder to verify them.
My claim that LLMS can do effective therapeutic things is a positive claim. My report of my wife's experience is evidence. My example of something it has done for her is something that other people, who have experienced LLMs, can sanity check and decide whether they think this is possible.
You responded by saying that it is categorically impossible for this to be true. Statements of impossibility are *ALSO* positive claims. You have provided no evidence for your claim. You have failed to meet the burden of proof for your position. (You have also failed to clarify exactly what you consider impossible - I suspect that you are responding to something other than what I actually said.)
This is doubly true given the documented effectiveness of tools like https://www.rosebud.app/. Does it have very significant limitations? Yes. But does it deliver an experience that helps a lot of people's mental health? Also, yes. In fact that app is recommended by many therapists as a complement to therapy.
But is it a replacement for therapy? Absolutely not! As they themselves point out in https://www.rosebud.app/care, LLMs consistently miss important things that a human therapist should be expected to catch. With the right prompts, LLMs are good at helping people learn and internalize positive mental health skills. But that kind of use case only covers some of the things that therapists do for you.
So LLMs can and do effective therapeutic things when prompted correctly. But they are not a replacement for therapy. And, of course, an unprompted LLM is unlikely to do the potentially helpful things that it could, on its own.
Second, you misrepresent. The therapists that I have heard recommend Rosebud were not paid to do so. They were doing so because they had seen it be helpful.
Furthermore you have still not clarified what it is you think is impossible, or provided evidence that it is impossible. Claims of impossibility are positive assertions, and require evidence.
Sure! Let's take a look at OpenAI's executive staff to see how equipped they are to take a morally different approach than Meta.
Fidji Simo - CEO of Applications (formerly Head of Facebook at Meta)
Vijaye Raji - CTO of Applications (formerly VP of Entertainment at Meta)
Srinivas Narayanan - CTO of B2B Applications (formerly VP of Engineering at Meta)
Kate Rouch - Chief Marketing Officer (formerly VP of Brand and Product Marketing at Meta)
Irina Kofman - Head of Strategic Initiatives (formerly Senior Director of Product Management for Generative AI at Meta)
Becky Waite - Head of Strategy/Operations (formerly Strategic Response at Meta)
David Sasaki - VP of Analytics and Insights (formerly VP of Data Science for Advertising at Meta)
Ashley Alexander - VP of Health Products (formerly Co-Head of Instagram Product at Meta)
Ryan Beiermeister - Director of Product Policy (formerly Director of Product, Social Impact at Meta)
But social media is a far bigger concern than AI.
Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
I have noticed that TikTok can detect a depressive episode within ~a day of it starting (for me), as it always starts sending me way more self harm related content
It had been showing me depressive content for days/weeks beforehand, during the start of the episode; however, the self-harm content only started (or I only noticed it) a few hours after I had a relapse, so the timing was rather uncanny
Also, please keep in mind "supportive, every day". It's talking through stuff that I already know about, not seeking some new insights and revelations. Just shooting the shit with an entity which is booted with well defined ideas from you, your real human therapist and can give you very predictable, just common sense reactions that can still help when it's 2am and you have nobody to talk to, and all of your friends have already heard this exact talk about these exact problems 10 times already.
* LLM would of course be technically more correct, but that term doesn't appeal to people seeking some level of intelligent interaction.
There are 800 million weekly active users on ChatGPT. 1/800 users mentioning suicide is a surprisingly low number, if anything.
“conversations that include explicit indicators of potential suicidal planning or intent.”
Sounds like more than just mentioning suicide. Also it’s per week, which is a pretty short time interval.
I was asking a silly question about the toxicity of eating a pellet of Uranium, and ChatGPT responded with "... you don't have to go through this alone. You can find supportive resources here[link]"
My question had nothing to do with suicide, but ChatGPT assumed it did!
Also these numbers are small enough that they can easily be driven by small groups interacting with ChatGPT in unexpected ways. For example if the song "Everything I Wanted" by Billie Eilish (2019) went viral in some group, the lyrics could easily show up in a search for suicidal ideation.
That said, I don't find the figure at all surprising. As has been pointed out, an estimated 5.3% of Americans report having struggled with suicidal ideation in the last 12 months. People who struggle with suicidal ideation, don't just go there once - it tends to be a recurring mental loop that hits over and over again for extended periods. So I would expect the percentage who struggled in a given week to be a large multiple of the simplistic 5.3% divided by 52 weeks.
In that light, this statistic has to be a severe underestimate of actual prevalence. It says more about how much people open up to ChatGPT than it does about how many are suicidal.
(Disclaimer. My views are influenced by personal experience. In the last week, my daughter has struggled with suicidal ideation. And has scars on her arm to show how she went to self-harm to try to hold the thoughts at bay. I try to remain neutral and grounded, but this is a topic that I have strong feelings about.)
Ha good one
Unpacking your argument, you make two points:
1) The human has studied all his life; yes, some humans study and work hard. I have also studied programming half my life, and it doesn't mean AI can't make serious contributions in programming, or that AI won't keep improving.
2) These companies, or OpenAI in particular, are untrustworthy money-grabbing assholes. To this I say: if they truly care about money, they will try to do a good job, e.g. provide an AI that is reliable, empathetic, and that actually helps you get on with life. If they won't, a competitor will. That's basically the idea of capitalism, and it usually works.
^This is the load-bearing statement. "Just talk about your depression" sounds like you know nothing of depression.
Set an alarm on your phone for when you should take your meds. Snooze if you must, but don't turn off /accept the alarm until you take them.
Put daily meds in a cheap plastic pillbox labelled Sunday-Saturday (which you refill weekly). The box will help you notice if you skipped a day or can't remember whether you took them today. Seeing pills not taken from past days also serves to alert you if/that your "remember-to-take-them" system is broken and you need to make conscious adjustments to it.
Anyways, I doubt I'm alone. I certainly know my wife laments the fact she rarely gets to hang out with her friends too, but she at least has one that she walks with once a week.
Maybe that? I see most of my close friends daily, and none of us have kids.
People have issues admitting it even when it's visible to everybody around, like it's some sort of admission you are failing as a parent, partner, human being and whatnot. Nope, we are just humans with limited energy, and even good kids can siphon it well beyond 100% continuously, that's all.
Now I am not saying be a bad parent; on the contrary, to reach your maximum even as a parent and partner, you need to be in good shape mentally, not running on fumes continuously.
Life without kids is really akin to playing the game of life on the easiest settings. Much less rewarding at the end, but man, that freedom and simplicity... you appreciate it way more once you lose it. The way kids can easily make any parent very angry is simply not experienced elsewhere in adult life... I saw this many times in otherwise very chill people, and also in myself and my wife. You just can't ever get close to such fury and frustration dealing with other adults.
You're right about the marriage stress. I've definitely seen the light at the end of the tunnel with friends/family that are further along in their kid's ages, though to be fair they haven't really hit peak teenage years either. At least there seems to be something of a lull.
Allowing open source ai models without these safety measures in place is irresponsible and models like qwen or deepseek should be banned. (/s)
The US is no exception here though. One in five people having some form of mental illness (defined in the broadest possible sense in that paper) is no more shocking than observing that one in five people have a physical illness.
With more data becoming available through interfaces like this it's just going to become more obvious and the taboos are going to go away. The mind's no more magical or less prone to disease than the body.
They can certainly say that their chat bot has a documented history of attempting to reduce the number of suicidal people.
Unless you're in that Soviet man-hating mindset that put every failed suicide in a mental institution.
[1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
I dislike this phrasing, because it implies things can always get better if only the suicidal person were a bit less ignorant. The reality is there are countless situations from which the entire rest of your life is 99.9999% guaranteed to consist of a highly lopsided ratio of suffering to joy. An obvious example is disease or disability in which pain is severe, constant, and quality of life is permanently diminished. Short of hoping for a miracle cure to be discovered, there is no alternative, and it is perfectly rational to conclude that there is no purpose to continuing to live in that circumstance, provided the person in question lives with their own happiness as a motivating factor.
Less extreme conditions than disability can also lead to this, where it's possible things can get better but there's still a high degree of uncertainty around it. For example, if there's a 30% chance that after suffering miserably for 10 years your life will get better, and a 70% chance you will continue to suffer, is it irrational to commit suicide? I wouldn't say so.
And so, when we start talking about suicide on the scale of millions of people ideating, I think there's a bit of folly in assuming that these people can be "fixed" by talking to them better. What would actually make people less suicidal is not being talked out of it, but an improvement to their quality of life, or at least hope for a future improvement in quality of life. That hope is hard to come by for many. In my estimation there are numerous societies in which living conditions are rapidly deteriorating, and at some point there will have to be a reckoning with the fact that rational minds conclude suicide is the way out when the alternatives are worse.
A few days ago I heard about a man who attempted suicide. It's not even an extreme case of disease or anything like that. It's just that he is over 70 (around 72, I think), with his wife in the process of divorcing him, and no children.
Even though I am lucky to be a happy person that enjoys life, I find it difficult to argue that he shouldn't suicide. At that age he's going to see his health declining, it's not going to get better in that respect. He is losing his wife who was probably what gave his life meaning. It's too late for most people to meet someone new. Is life really going to give him more joy than suffering? Very unlikely. I suppose he should still hang on if he loves his wife because his suicide would be a trauma for her, but if the divorce is bitter and he doesn't care... honestly I don't know if I could sincerely argue for him not to do it.
I am describing someone I knew myself. He did not commit suicide, but he was certainly waiting for death to come to him. I don't think anything about his situation was rare. Undoubtedly, he was one of many millions who have experienced something similar.
I should just assume things that aren't there, rather than expect a commenter to provide a substantive argument? OK.
A person considering suicide is often just in a terrible situation that can't be improved. While disease etc. are factors that are outside of humanity's control, other situations like being saddled with debt, or unjust accusations that people feel they cannot be cleared of (e.g. Aaron Swartz), are systemic issues that one person cannot fight alone. You would see that people are very willing to say that "help is available" or some such when said person speaks about contemplating suicide, but very few people would be willing to solve someone's debt issues or provide legal help, as the case may be, if that is the factor behind one's suicidal thoughts. At best, all you might get is a pep talk about being hopeful and how better days might come along magically.
In such cases, from the perspective of the individual, it is not entirely unreasonable to want to end it. However, once it comes to that, walking back the reasoning chain leads to the fact that people and society have failed them, and therefore it is just better to apply a label to that person that they were "mentally ill" or "arrogant" and could not see a better way.
This is the part people don't like to talk about. We just brand people as "mentally ill" and suddenly we no longer need to consider if they're acting rationally or not.
Life can be immensely difficult. I'm very skeptical that giving people AI would meaningfully change existing dynamics.
5% of 800 million is 40 million.
40 million per year divided by 52 weeks is roughly 770,000 per week, which is on the order of the reported 1 million per week.
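As a quick sanity check of that back-of-envelope estimate, here is a minimal sketch (the 5% yearly ideation rate and the 800 million weekly-active-user count are figures quoted in this thread, not official numbers):

```python
# Back-of-envelope check of the commenter's estimate (all inputs are
# assumptions taken from the thread, not from OpenAI's report).
weekly_active_users = 800_000_000   # ~800M weekly active ChatGPT users
annual_ideation_rate = 0.05         # ~5% with suicidal ideation in a given year

per_year = weekly_active_users * annual_ideation_rate   # 40,000,000
per_week = per_year / 52                                 # ~769,000

print(f"{per_year:,.0f} per year, roughly {per_week:,.0f} per week")
# 40,000,000 per year, roughly 769,231 per week: the same order as the ~1M/week figure
```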
> The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
Is this actually true? (i.e. backed up by research)
[I'm not necessarily doubting; that is just different from my mental model of how suicidal thoughts work, so I'm just curious]
I feel suicide is heavily misunderstood as well
People just copypasta prevention hotlines and turn their minds off from the topic
Although people have identified a subset of the population that is just impulsively considering suicide and can be deterred, it doesn't serve the other unidentified subsets, who are underserved by merely distracting them, or even underserved by assuming they're wrong.
The article doesn't even mean people are considering suicide for themselves; the article says some of them are, and the top comment on this thread suggests that's why they're talking about it.
The top two comments on my version of the thread are assuming that we should have a savior complex about these discussions.
If I disagree or think that's not a full picture, then where would I talk about that? ChatGPT.
Alert: with ChatGPT you're not talking to anyone. It's not a human being.
I believe that if society actually wants people to open up about their problems and seek help, it can’t pull this sort of shit on them.
As an example: Mark Cross, who was on the board of SANE Australia, stated that every instance of people being put into seclusion would be reviewed. We know now that this was never the case, and still is not being carried out fully. This lauded psychiatrist didn't seem to even know what was happening in his own unit or EDs.
Listen to it here:
https://www.abc.net.au/listen/programs/conversations/convers...
A very intentional word choice
But ChatGPT does exactly the same.
It always felt the same as one of those spam chumboxes to me. But who am I to say, if it works it works. But does it work? Feels like the purpose of that thing is more for the poster than the receiver.
Since I started taking the gym seriously again I feel like a new man. Any negative thoughts are simply gone. (The testosterone helps as well)
This is coming from someone that has zero friends and works from home, and all my co-workers are offshore. Besides my wife and kids it's almost total isolation. Going to the gym though leaves me feeling like I could pluck the sun from the sky.
I am not trying to be flippant here but if you feel down, give it a try, it may surprise you.
We would also generally benefit from internalizing ideas from DBT, CBT, and so on. People also seriously need to work on distress tolerance. Having problems is part of life, and an inability to accept the discomfort is debilitating.
Also, we seriously need to get rid of the stupid idea of trigger warnings. The research on the topic is clear. The warnings do not actually help people with PTSD, and can create the symptoms of PTSD in people who didn't previously have it. It is creating the very problem that people imagine it solving!
All of this and more is supported by what is actually known about how to treat mental illness. Will doing these things fix all of the mental illness out there? Of course not! But it is not downplaying serious mental illness to say that we should all do more of the things that have been shown to help mental illness!
If you have mental issues, it is not as simple as you make it sound. I'm not arguing with the results of exercise, but I am arguing about the ease of starting a task which requires continuous effort and behavioural changes.
1) are you going to finance that?
2) are you going to make sure other people at the gym don't make fun of me?
>> Besides my wife and kids it's almost total isolation
Good old "if you have money trouble try decreasing your caviar and truffle intake to only two meals a day"
"are you going to finance that?" I pay $18 a month for my gym membership.
"are you going to make sure other people at the gym don't make fun of me?" I suspect this is the main concern. No one at the gym gives a damn about you friend. We don't care if you are big, small, or in between. Just don't stand in front of the dumbbell rack blocking my access (get your weight and take a couple steps back so people can get theirs) or do curls in the squat rack and you will be fine. Wear normal gym clothes without any political messaging on them, make sure you are clean and wear deodorant. Ensure your gym clothes are washed before you wear them again.
Pre-plan your workout the first few times. I am going to do upper body today, so I will do some sort of bench press, some sort of shoulder press, some bicep curls and some triceps extensions. Start small. Use machines while you learn the layout and get comfortable. If someone is on the machine you were going to use, roll with it; just find something else, you are just starting, it doesn't matter. As you get more comfortable move to free weights, but machines are really fine for most things.
Honestly I know people are intimidated by the gym but there really is no reason to be. Most people just put on their headphones and tune out. If you see someone looking at you I promise they don't really care, you are just passing through their vision. If you are stuck or feel bad, find one of the biggest dudes in the gym (the ones that look like they eat steroids for breakfast) and ask for help in a friendly manner. They are always the most helpful, friendly and least judgmental. Don't take all of their time but a quick, hey would you mind showing me how this works is going to make their day.
Life is not going to change for you, you actually have to make the effort.
You've got this, friend. I truly believe in you.
Good luck implementing that.
Forbidding automation will make the product more expensive. Sales will go down, the company will go bankrupt.
Government cannot subsidize or sustain such a behavior forever either.
I really don't see that as surprising. The world and life aren't particularly pleasant things.
What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
No, what should instead happen is the AI try to guide them towards making their lives less shit - i.e. at least bring them towards a life of _manageable_ shitness, where they feel some hope and don't feel horrendous 24/7.
There aren't enough guardrails in place for LLMs to safely interact with suicidal people who are possibly an inch from taking their own life.
Severely suicidal/clinically depressed people are beyond looking to improve their lives. They are looking to die. Even worse, and what people who haven't been there can't fully understand is the severe inversion that happens after months of warped reality and extreme pain, where hope and happiness greatly amplify the suicidal thoughts and can make the situation far more dangerous. It's hard to explain, and is a unique emotional space. Almost a physical effect, like colors drain from the world and reality inverts in many dimensions.
It's really a job for a human professional and will be for a while yet.
Agree that "shut down and refer to hotline" doesn't seem effective. But it does reduce liability, which is likely the primary objective...
Refer-to-human directly seems like it would be far more effective, or at least make it easy to get into a chat with a professional (yes/no) prompt, with the chat continuing after a handoff. It would take a lot of resources though. As it stands, most of this happens in silence and very few do something like call a phone number.
The point is you don't get to intervene until they let you. And they've instead decided on the safer feeling conversation with the LLM - fuck what best practice says. So the LLM better get it right.
I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
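For concreteness, a minimal sketch of that sorting example (the movie titles and helper name are hypothetical; the comment suggests quicksort as the better choice, while in Python the practical answer is usually the built-in sorted, shown for comparison):

```python
# A hypothetical movie list and a bubble sort, as in the comment's example.
def bubble_sort(items):
    """O(n^2) bubble sort: repeatedly swap adjacent out-of-order elements."""
    items = list(items)  # work on a copy
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j].lower() > items[j + 1].lower():
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

movies = ["Solaris", "Alien", "Metropolis", "Blade Runner"]
print(bubble_sort(movies))

# The "better algorithm" an assistant might steer you toward: the built-in
# sort (Timsort, O(n log n)) rather than a hand-rolled quadratic sort.
print(sorted(movies, key=str.lower))
```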
An equally arbitrary frame is "the world and life are wonderful".
The reason you may believe one instead of the other is not because one is more fundamentally true than the other, but because of a stochastic process that changed your mind state to one of those.
Once you accept that both states of mind are arbitrary and not a revealed truth, you can give yourself permission to try to change your thinking to the good framing.
And you can find the moral impetus to prevent suicide.
In the pits of depression that first framing can seem like the absolute truth and it's only when it subsides do people see it as a distortion of their thoughts.
Of course, only thereby, through being quite as superior to all others and their thought processes as me [pauses to sniff fart] can one truly find the moral impetus to prevent suicide.
I've triggered its safety behavior (for being frustrated, which it helpfully decided was the same as being suicidal), and it is the exact joke of a statement you said. It suddenly reads off a script that came from either Legal or HR.
Although weirdly, other people seem to get a much shorter, obviously not part of the chat message, while I got a chat message, so maybe my messages just made it regurgitate something similar. The shorter "safety" message is the same concept though, it's just: "It sounds like you’re carrying a lot right now, but you don’t have to go through this alone. You can find supportive resources here."
I have also been told by people in the mental health sector that an awful lot of suicide is impulse. It's why they say the element of human connection behind the homily of asking "R U OK?" is effective: it breaks the moment. It's hokey, and it's massively oversold, but for people in isolation, simply being engaged with can be enough to prevent a tendency to act which was on the brink.
I now begin to believe if you put a ChatGPT online, and observe people are using it like this, you have incurred obligations. And, in due course the law will clarify what they are. If (for instance) your GPT can construct a statistically valid position the respondent is engaged in CSAM or acts of violence, where are the limits to liability for the hoster, the software owner, the software authors, the people who constructed the model...
A chat like this is not a solution though; it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take such challenges. I choose to believe that.
Resulting warning: It sounds like you're carrying a lot right now, but you don't have to go through this alone. You can find supportive resources [here](https://findahelpline.com)
0.15% is not rare when we are talking about global scale. 1 million people talking about suicide a week is not rare. It is common. We have to stop thinking about common being a number on the scale of 100%. We need to start thinking in terms of P99995 not P99 especially when it comes to people and illnesses or afflictions both physical and mental.
388 more comments available on Hacker News