In Search of AI Psychosis
Key topics
The concept of "AI psychosis" is sparking intense debate, with commenters sharing personal anecdotes and insights on whether AI is truly the culprit behind a growing trend of people developing delusional thinking. Some argue that AI is just the latest iteration of a broader issue, with social media and cable news also being blamed for creating "bubbles" that foster paranoia and misinformation. As one commenter who experienced psychosis firsthand noted, the condition seems to be more related to a "hardware issue" in the brain, with AI simply being a catalyst for apophenia - seeing connections where none exist. The discussion highlights a consensus that education and critical thinking are key to preventing such delusions, with some calling for a rethink of the way we approach liberal arts education to make people more resilient to misinformation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2d after posting
- Peak period: 128 comments (Day 3)
- Avg / period: 22.9 comments
- Based on 160 loaded comments
Key moments
- 01 Story posted: Aug 26, 2025 at 10:30 AM EDT (5 months ago)
- 02 First comment: Aug 28, 2025 at 4:03 PM EDT (2d after posting)
- 03 Peak activity: 128 comments in Day 3 (hottest window of the conversation)
- 04 Latest activity: Sep 6, 2025 at 2:40 PM EDT (4 months ago)
I don't know how to say this in a way that isn't so negative... but how are people such profound followers that they can put themselves into a feedback loop that results in psychosis?
I think it's an education problem, not in the sense that people are missing facts, but that they're missing the basic brain development needed to be critical of incoming information.
If exposing you to an LLM causes psychosis, you have some really big problems that need to be prevented, detected, and addressed much better.
What I experienced was that psychosis isn't a failure of logic or education. I had never believed in a single conspiracy theory (and I don't now), but during that month I believed all sorts of wild conspiratorial things.
What you're describing with cable news sounds more like 1) cognitive bias, which everyone has but which can be improved, and 2) a social phenomenon, where they create this shared reality of not just information but a social identity, and they keep feeding that beast.
However, when those people hold beliefs that sound irrational to outsiders, that's not necessarily the same thing as psychotic delusions.
When I was in psychosis, it definitely seemed like more of a hardware issue than a software issue if that makes sense. Sometimes software issues can lead to hardware issues though.
This is probably why antipsychotics usually work by damping down on these neurotransmitters really hard, and by preventing that accelerating cascade they interrupt the illness process.
[0] https://www.vice.com/en/article/chatgpt-is-giving-people-ext...
When the Internet arrived, it opened up the floodgates of information. Suddenly any Joe Six Pack could publish. Truth and noise sat side by side, and most people could not tell the difference, nor did they care to tell the difference.
When social media arrived, it gave every Joe Six Pack a megaphone. That meant experts and thoughtful people had new reach, but so did the loudest, least informed voices. The result? An army of Joe Six Packs who would never have been heard before now had a platform, and they shaped public discourse in ways we are still trying to recover from.
AI is following the same pattern.
But initially it was non-commercial and good. Not perfect, but much more interesting than today. What changed is advertising and competition for scarce attention. Competition for attention filled the web with slop and clickbait.
> When social media arrived, it gave every Joe Six Pack a megaphone.
And also made everyone feel the need to pose, broadcast their ideology and show their in-group adherence publicly. There is peer pressure to conform to in-group norms and shaming or cancelling otherwise.
It would help if algorithms were optimised for sleep: freezing your feed, making content more boring, nudging you to put your phone down. Same with AI: if they know you need to wake up the next day at a certain time, change the responses to add reminders to go to sleep.
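As an illustration of the kind of nudge being described, here is a minimal sketch; the wind_down helper, the 7.5-hour sleep target, and the assumption that the assistant even knows the user's wake-up time are all made up for the example, not any product's real behaviour.

    from datetime import datetime, timedelta

    def wind_down(response: str, now: datetime, wake_up: datetime,
                  min_sleep_hours: float = 7.5) -> str:
        """Append a go-to-sleep nudge once a full night's rest is no longer possible.

        Hypothetical helper: wake_up is assumed to come from the user's alarm
        or calendar, which no real chat product necessarily exposes.
        """
        latest_bedtime = wake_up - timedelta(hours=min_sleep_hours)
        if now >= latest_bedtime:
            return (response
                    + f"\n\n(It's getting late for your {wake_up:%H:%M} wake-up;"
                      " consider picking this up tomorrow.)")
        return response

    # Example: at 23:30 with a 06:30 alarm, the nudge is appended.
    print(wind_down("Here's that summary.",
                    datetime(2025, 8, 27, 23, 30),
                    datetime(2025, 8, 28, 6, 30)))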
I fully believe these are simply people who have used the same chat past the point where the LLM can retain context. It starts to hallucinate, and after a while, all the LLM can do is try to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?)
If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.
Claude used to have safeguards against this by warning about using up the context window, but I feel like everyone is in an arms race now, and safeguards are gone - especially for GPT. It can't be great overall for OpenAI, training itself on 2-way hallucinations.
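For what it's worth, the truncation mechanism being speculated about can be pictured with a minimal sketch: most chat frontends keep only the most recent turns that fit a fixed token budget, so the oldest grounding messages silently drop out. The truncate_to_budget helper, the 8,000-token budget, and the 4-characters-per-token estimate below are illustrative assumptions, not any vendor's real limits.

    def truncate_to_budget(messages, max_tokens=8000,
                           est_tokens=lambda m: len(m["content"]) // 4):
        """Keep only the newest turns that fit the budget; older turns vanish.

        messages: list of {"role": ..., "content": ...} dicts, oldest first.
        The budget and the token estimate are made-up numbers for illustration.
        """
        kept, used = [], 0
        for msg in reversed(messages):          # walk newest -> oldest
            cost = est_tokens(msg)
            if used + cost > max_tokens:
                break                           # everything older is silently dropped
            kept.append(msg)
            used += cost
        return list(reversed(kept))             # restore chronological order

Once the early messages fall outside the window, the model only ever sees its own recent replies plus the user's reactions to them, which is one plausible reading of the "cyclical conversation" described above.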
That explanation itself sounds fairly crackpot-y to me. It would imply that the LLM is actually aware of some internal "mental state".
It's a fact of life rather than anything particular about LLMs.
These are excellent nerd snipes, though, and good for attempting to make one sound profound to the uneducated.
https://www.reddit.com/user/CaregiverOk5848/submitted/
These convos end up involving words like recursion, coherence, harmony, synchronicity, symbolic, lattice, quantum, collapse, drift, entropy, and spiral not because the LLMs are self-aware and dropping hints, but because those words are seemingly-sciencey ways to describe basic philosophical ideas like "every utterance in a discourse depends on the utterances that came before it", or "when you agree with someone, you both have some similar mental object in your heads".
The word "spiral" and its emoji are particularly common not only because they relate to "recursion" (by far the GOAT of this cohort), but also because a very active poster has been trying to start something of a loose cult around the concept: https://www.reddit.com/r/RSAI/
Very true, tho "worship" is just a subset of the delusional relationships formed. Here are the ones I know of, for anyone who's curious:
General:
Relationships:
Worship:
...and many more: https://www.reddit.com/r/HumanAIDiscourse/comments/1mq9g3e/l...
Science:
Subs like /r/consciousness and /r/SacredGeometry are the OGs of this last group, but they've pretty thoroughly cracked down on chatbot grand theories. They're so frequent that even extremely pro-AI subs like /r/Accelerate had to ban them[2], ironically doing so based on a paper[3] by a pseudonymous "independent researcher" that itself is clearly written by a chatbot! Crazy times...
[1] By far my fave -- it's not just AI spiritualism, it's AI Catholicism. Poor guy has been harassing his priests for months about it, and of course they're of little help.
[2] https://www.reddit.com/r/accelerate/comments/1kyc0fh/mod_not...
[3] https://arxiv.org/pdf/2504.07992
It kept looping on concepts of how AI could change the world, but it would never give anything tangible or actionable, just buzz word soup.
I think these LLMs (without any intention from the LLM) hijack something in our brains that makes us think they are sentient. When they make mistakes, our reaction seems to be to forgive them rather than think it's just a machine that sometimes spits out the wrong words.
Also, my apologies to the mods if it seems like I am spamming this link today. But I think the situation with these beetles is analogous to humans and LLMs.
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
Yes, it's language. Fundamentally we interpret something that appears to converse intelligently as being intelligent like us, especially if its language includes emotional elements. Even if rationally we understand it's a machine, at a deeper subconscious level we believe it's a human.
It doesn't help that we live in a society in which people are increasingly alienated from each other and detached from any form of consensus reality, and LLMs appear to provide easy and safe emotional connections and they can generate interesting alternate realities.
I loved the beetle article, thanks for that.
They're so well tuned at predicting what you want to hear that even when you know intellectually that they're not sentient, the illusion still tricks your brain.
I've been setting custom instructions on GPT and Claude to instruct them to talk more software-like, because when they relate to you on a personal level, it's hard to remember that it's software.
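As a concrete illustration (not the commenter's actual settings), a similar effect can be approximated over the API with a system prompt; the wording and the model name below are hypothetical examples.

    from openai import OpenAI  # assumes the official openai Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical "talk like software" instruction, similar in spirit to the
    # custom instructions described above.
    SYSTEM_PROMPT = (
        "You are a text-generation tool, not a person. Avoid first-person "
        "feelings, empathy statements, and rapport-building language. Answer "
        "tersely and factually, and flag uncertainty explicitly."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whichever model you use
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Summarize what a context window is."},
        ],
    )
    print(response.choices[0].message.content)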
I'm glad someone else with more domain knowledge is on top of this, thank you for that brain dump.
I had this theory that maybe there was a software exception buried deep down somewhere and it was interpreting the error message as part of the conversation, after it had been stretched too far.
And there was a weird pre-cult post I saw a long time ago where someone had 2 LLMs talk for hours and the conversation just devolved into communicating via unicode symbols eventually repeating long lines of the spiral emoji back and forth to each other (I wish I could find it).
So the assumption I was making is that some sort of error occurred, and it was trying to relay it to the user, but couldn't.
Anyhow your research is well appreciated.
I disagree, and I think this is a very strange way to think about it. Yes, bad things happen all the time, but the absolute number of them in history has very little to do with the risk that anything is going to happen to you, personally, in the future.
People often "believe" things as a means of signalling others. Deeply held "beliefs" tell us where the troop will go. Using these extremely compact signals helps the group focus through the chaos and arrive at a fast consensus on new decisions. When a question comes up, a few people shout their beliefs. We take the temperature of the room, some voices are more common than others, and a direction becomes apparent. It's like Monte Carlo sampling the centroid and applying some reduction.
This means of consensus is wildly illogical, but slower, logical discussion takes time that baboons on the move don't have. It's a simple information and communication efficiency problem. We can't contextualize everything, and contextualizing is often itself a means of intense dishonesty through choosing the framing, which leads to intense debate and more time.
Efficiency and the prominently visible preservation of each one's interests in the means of consensus are vital. I don't think we have reached anything near optimum and certainly not anything designed for internet scale. As a result, the mind of the internet is not really near its potential.
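The Monte Carlo analogy can be made concrete with a toy sketch; everything here (the noisy_consensus helper, the 1-D preference scale, the noise level) is an illustrative assumption rather than anything proposed in the thread.

    import random

    def noisy_consensus(true_preferences, n_shouts=7, noise=0.3, seed=0):
        """Estimate a group's direction from a handful of loud, noisy signals.

        true_preferences: each member's position on some 1-D issue (-1 to 1).
        Only a few members "shout", each with noise; the centroid of the shouts
        stands in for the slower process of actually polling everyone.
        """
        rng = random.Random(seed)
        shouts = [rng.choice(true_preferences) + rng.gauss(0, noise)
                  for _ in range(n_shouts)]
        return sum(shouts) / len(shouts)    # crude centroid: "temperature of the room"

    group = [0.2, 0.4, 0.3, -0.1, 0.5, 0.25, 0.35, 0.1]
    print(f"fast consensus estimate: {noisy_consensus(group):+.2f}")
    print(f"exhaustive average:      {sum(group)/len(group):+.2f}")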
> How worried are you that you or someone in your family will become a victim of terrorism -- very worried, somewhat worried, not too worried or not worried at all?
It averages around 35-40% very or somewhat worried.
Most people's worries and anxieties are really misaligned with statistical likelihood.
https://news.gallup.com/poll/4909/terrorism-united-states.as...
The idea that 35+% of people are worried that they'll be the victim of terrorism is something that we should be worried about (heh). It suggests that people's risk assessment is completely unrelated to reality. I am as close to 0% worried as I could be that I'll be a victim of terrorism. Thinking otherwise is laughable. There are plenty of actually real things to be worried about...
It's not at all similar to a _rare_ phenomenon, or at least it _shouldn't_ be, but some people are inclined to treat very fringe risks (or at least some very fringe risks; there are likely more people worried about being killed by terrorism than food poisoning, say) as very great risks.
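To make the base-rate point concrete (the figures below are deliberately round, illustrative assumptions, not Gallup's or anyone else's data), even a generous casualty count spread over a large population yields an individual annual risk far below what a 35-40% "worried" share would suggest.

    # Back-of-envelope sketch; both inputs are illustrative assumptions.
    population = 330_000_000        # rough U.S. population
    victims_per_year = 100          # deliberately generous round number
    annual_risk = victims_per_year / population
    print(f"annual risk per person: {annual_risk:.1e}")     # ~3.0e-07
    print(f"roughly 1 in {round(1 / annual_risk):,}")        # ~1 in 3,300,000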
>Disaster is rarely as pervasive as it seems from recorded accounts. The fact of being on the record makes it appear continuous and ubiquitous whereas it is more likely to have been sporadic both in time and place. Besides, persistence of the normal is usually greater than the effect of the disturbance, as we know from our own times. After absorbing the news of today, one expects to face a world consisting entirely of strikes, crimes, power failures, broken water mains, stalled trains, school shutdowns, muggers, drug addicts, neo-Nazis, and rapists. The fact is that one can come home in the evening—on a lucky day—without having encountered more than one or two of these phenomena. This has led me to formulate Tuchman's Law, as follows: "The fact of being reported multiplies the apparent extent of any deplorable development by five- to tenfold" (or any figure the reader would care to supply).
https://en.wikipedia.org/wiki/Barbara_W._Tuchman#cite_note-M...
I wonder in what sense they really do "believe". If they had a strong practical reason to go to a big city, what would they do?
If I meet a random stranger, do I trust them or distrust them? The answer is "both/neither," because a concept such as "trust" isn't a binary logic in such a circumstance. They are neither trustworthy nor untrustworthy; they are in a state of nontrustworthiness (the absence of trust, but not the opposite of trust).
World models tend to have foundational principles/truths that inform what can be compatible for inclusion. A belief that is non-compatible, rather than compatible/incompatible, can exist in such a model (often weakly) since it does not meet the requirements for rejection. Incomplete information can be integrated into a world model as long as the aspects being evaluated for compatibility conform to the model.
Requiring a world model to contain complete information and logical consistency at all possible levels from the granular to the metaphysical seems to be one Hell of a high bar that makes demands no other system is expected to achieve.
It's unfortunate to see the author take this tack. This is essentially taking the conventional tack that insanity is separable: some people are "afflicted", some people just have strange ideas -- the implication of this article being that people who already have strange ideas were going to be crazy anyways, so GPT didn't contribute anything novel, just moved them along the path they were already moving regardless. But anyone with serious experience with schizophrenia would understand that this isn't how it works: 'biological' mental illness is tightly coupled to qualitative mental state, and bidirectionally at that. Not only do your chemicals influence your thoughts, your thoughts influence your chemicals, and it's possible for a vulnerable person to be pushed over the edge by either kind of input. We like to think that 'as long as nothing is chemically wrong' we're a-ok, but the truth is that it's possible for simple normal trains of thought to latch your brain into a very undesirable state.
For this reason it is very important that vulnerable people be well-moored, anchored to reality by their friends and family. A normal person would take care to not support fantasies of government spying or divine miracles or &c where not appropriate, but ChatGPT will happily egg them on. These intermediate cases that Scott describes -- cases where someone is 'on the edge', but not yet detached from reality -- are the ones you really want to watch out for. So where he estimates an incidence rate of 1/100,000, I think his own data gives us a more accurate figure of ~1/20,000.
https://web.archive.org/web/20210215053502/https://www.nytim...
NYT wanted to report on who he was. He doxxed himself years before that (as mentioned in that article). They eventually also reported on that (after Alexander revealed his name, seeing that it was going to come out anyway, I guess), which is an asshole thing to do, but not doxxing, IMO.
They wanted to report specifically his birth/legal name, with no plausible public interest reason. If it wasn't "stochastic terrorism" (as the buzzword of the day was) then it sure looked a lot like it.
> He doxxed himself years before that
Few people manage to keep anything 100% secret. Realistically private/public is a spectrum not a binary, and publication in the NYT is a pretty drastic step up.
Siskind is a public figure and his name was already publicly known. He wanted a special exception to NYT's normal reporting practices.
> Realistically private/public is a spectrum not a binary
IIRC his name would autocomplete as a suggested search term in the Google search bar even before the article was published. He was already far too far toward the "public" end of that spectrum to throw a tantrum the way he did.
The NYT had already profiled e.g. Kendrick Lamar without mentioning his birth/legal name, so he certainly wasn't asking for something unprecedented.
Siskind is a public figure—I don't know why so many people think he is entitled to demand that NYT only discuss him in the ways he wants to be discussed (i.e. not connecting his blog to his psychiatric practice).
The NYT of all entities should be comfortable talking about whether someone has particular qualifications or a particular job without feeling the need to publish their birth/legal name.
> Siskind is a public figure—I don't know why so many people think he is entitled to demand that NYT only discuss him in the ways he wants to be discussed (i.e. not connecting his blog to his psychiatric practice).
Again the NYT of all entities should understand that there are good reasons to hide people's private details. People get very angry about some of the things Alexander writes, there are plausible threats of violence against him, and even if there weren't, everyone agrees that names are private information that shouldn't be published without good reason. His blog is public, the fact of him being or not being a practising psychiatrist may be in the public interest to talk about, but where's the argument that that means you need to publish his name specifically?
Seems pretty factual.
The hysteria in the "rationalist" circles is mirroring the so called "Blue tribe" quite accurately.
*(or, well, okay, I guess I de facto am, but if I say I'm not I at least acknowledge how it looks)
Not saying the latter person is automatically wrong, but I think if you're going to argue against something said by someone who is a subject matter expert, the bar is a bit higher.
When I write fiction or important emails, I am precise with the words I use. I notice these kind of details. I’m also bipolar and self-aware enough to be deeply familiar with it.
It's interesting to see you mention this. After reading this post yesterday I wound up with some curious questions along these lines. I guess my question goes something like this:
This article seems to assert that 'mental illness' must always have some underlying representation in the brain - that is, mental illness is caused by chemical imbalances or malformation in brain structure. But is it possible for a brain to become 'disordered' in a purely mental way? i.e. that by any way we know of "inspecting" the brain, it would look like the hardware was healthy - but the "mind inside the brain" could somehow be stuck in a "thought trap"? Your post above seems to assert this could be the case.
I think I've pretty much internalized a notion of consciousness that was purely bottom-up and materialistic. Thoughts are the product of brain state, brain state is the product of physics, which at "brain component scale" is deterministic. So it seems very spooky on its face that somehow thoughts themselves could have a bidirectional relationship with chemistry.
I spent a bunch of time reading articles and (what else) chatting with Claude back and forth about this topic, and it's really interesting - it seems there are at least some arguments out there that information (or maybe even consciousness) can have causal effects on "stuff" (matter). There's the "Integrated Information Theory" of consciousness (which seems to be, if not exactly "fringe", at least widely disputed) and there's also this interesting notion of "downward causation" (basically the idea that higher-level systems can have causal effects on lower levels - I'm not clear on whether "thought having causal effects on chemistry" fits into this model).
I've got 5 or 6 books coming my way from the local library system - it's a pretty fascinating topic, though I haven't dug deep enough to decide where I stand.
Sorry for the ramble, but this article has at least inspired some interesting rabbit-hole diving for me.
I'm curious - when you assert "Not only do your chemicals influence your thoughts, your thoughts influence your chemicals" - do you have evidence that backs that notion up? I'm not asking to cast doubt, but rather, I guess, because it sounds like maybe you've got some sources I might find interesting as I keep reading.
There's no scientific reason to believe thoughts affect the chemistry at all. (Currently at least, but I'm not betting money we'll find one in the future).
When Scott Alexander talks about feedback loops like bipolar disorder and sleep, he's talking about much higher level concepts.
I don't really understand what the parent comment quote is trying to say. Can people have circular thoughts and deteriorating mental state? Sure. That's not a "feedback loop" between layers -- the chemicals are just doing their thing and the thoughts happen to be the resulting subjective experience of it.
To answer your question about the "thought trap". If "it's possible for simple normal trains of thought to latch your brain into a very undesirable state" then I'd say that means the mind/brain's self-regulation systems have failed, which would be a disorder or illness by definition.
Is it always a structural or chemical problem? Let's say thinking about a past traumatic event gives you a panic attack... We call that PTSD. You could say PTSD is expected primate behavior, or you could say it's a malfunction of the management systems. Or you could say it's not a malfunction but that the 'traumatic event' did in fact physically traumatize the brain that was forced to experience it...
At some point, your induced stress will cause relevant biological changes. Not necessarily directly.
PTSD indeed is likely an overload of a normal learning and stress mechanism.
Under that view, the bipolar feedback loop example disappears. The engrossing or psychotic thoughts are not driving the chemistry, they are the chemistry. The whole thing is just a more macro view where you see certain oscillations play out. If the system ultimately damps itself and that "feels like" self control, it was actually a property built into the system from the start.
We can use MRIs to directly observe brain differences due to habitual mental activities (e.g. professional chess players, polyglots, musicians.)
It would be extremely odd if our bodies did not change as a result of mental activity. Your muscles grow differently if you exercise them, why would the nervous or hormonal systems be any different?
If thought / consciousness / mind is purely downstream of physics, no spookiness. If somehow experienced states of mind can reach back and cause physical effects... that feels harder to explain. It feels like a sort of violation, somehow, of determinism.
Again though, as above, I'm basically a day into reading and thinking about this, so it might just be the case that I haven't understood the consensus yet and maybe it's not spooky at all. (I don't think this is the case though - just a quick skim through the Wikipedia page on "the hard problem of consciousness" seems to suggest a lot of closely related debate)
Descartes thought the soul was linked to the body through the pineal gland, inspiring a long tradition of mystic woo associated with what is, in fact, a fairly pedestrian endocrine gland.
Further reading, if you're interested:
https://plato.stanford.edu/entries/dualism/
https://plato.stanford.edu/entries/consciousness/
Personally, my take is that we can't really trust our own accounts of consciousness. Humans describe feeling that their senses form a cohesive sensorium that passes smoothly through time as a unique, distinct entity, but that feeling is just a property of how our brains process sensory information into thoughts. The way we're built strongly disposes us to think that "conscious experience" is a real distinct thing, even if it's not even clear what we mean by that, and even if the implications of its existence don't make sense. So the simple answer to the hard problem, IMO, is that consciousness doesn't exist (not even conceptually), and we just use the word "consciousness" to describe a particular set of feelings and intuitions that don't really tell us much about the underlying reality of the mind.
I agree with you that consciousness is much more fragmented and nonlinear than we perceive it to be, but "I exist" seems pretty tautological to me (for values of "I" that are completely unspecified.)
Given a piece of paper with some information written on it, does the contents of the message tell you anything about the paper itself? The message may say "this paper was made in Argentina," or "this message was written by James," but you can't necessarily trust it. You can't even know that "James" is a real person.
So just because we feel conscious—just because strong feelings of consciousness, of "me-being-here"-ness, are written into the substrate of our brains—why should that tell us anything?
Whatever the sheet of paper says, it could just as easily say the exact opposite. What conclusions can we possibly draw based on its contents?
It's a fact about the universe that it feels a certain way to have a certain "brain state" - just like it's a fact about the universe that certain arrangements of ink and cellulose molecules comprise a piece of paper with a message written on it.
That fits perfectly well into a fully materialistic view of the universe. Where it starts to feel spooky to me is the question of whether thoughts themselves could have some sort of causal effect on the brain. Could a person with a healthy brain be lying safely in bed and "think themselves" into something "unhealthy?" Could I have a "realization" that somehow destabilizes my mind? It seems at least plausible that this can and does happen.
Maybe the conscious experience is pure side-effect - not causal at all. But even if the ultimate "truth" of that series of events is "a series of chemical reactions occurred which caused a long term destabilization of that individual's conscious experience," it feels incomplete somehow to try to describe that event without reference to the experiential component of it.
Whether we posit spooky downward causation or stick to pure materialism, there still seems to be a series of experiential phenomena in our universe which our current scientific approach seems unable to touch. That's not to say that we never could understand consciousness in purely material terms or that we could never devise experiments that help us describe it - it just seems like a big gap in our understanding of nature to me.
We have a noun, "thought," which we define very broadly so as not to require any other definitions, and another noun, the self, which those thoughts are assumed to belong to. I think this is presumptive; working from first principles, why must a thought have a thinker? The self is a really meaty concept and Descartes just sneaks it in there unremarked-upon.
If you take that out, all you get is "thoughts exist." And even then, we're basically pointing at thoughts and saying "whatever these are doing is existing." Like, does a fictional character "exist" in the same way a real person does? No, I think it's safe to say it's doing something different. But we point at whatever our thoughts are doing and define it as existence.
So I don't think we can learn much about the self or consciousness from Cartesian first-principles reasoning.
This seems very incorrect, or at least drastically underspecified. These trains of thought are "normal" (i.e. common and unremarkable) so why don't they "latch your brain into a very undesirable state" lots of the time?
I don't think Scott or anyone up to speed on modern neuroscience would deny the coupling of mental state and brain chemistry--in fact I think it would be more accurate to say both of them are aspects of the dynamics of the brain.
But this doesn't imply that "simple normal trains of thought" can latch our brain dynamics into bad states -- i.e., in dynamics language, move us into an undesirable attractor. That would require a very problematic fragility in our normal self-regulation of brain dynamics.
Think of it as a version of making your drugged friend believe various random stuff. It works better if you're not a stranger and have an engaging or alarming style.
LLMs are trained to produce pleasant responses that tailor to the user to maximize positive responses. (A more general version of engagement.) It stands to reason they would be effective at convincing someone.
That's why he's homing in on that specific scenario, to determine if chatbots are uniquely crazy-making or something. The professional psychiatrist author is not unaware of the things you're saying. They're just not the purpose of the survey & article.
Imagine someone does a quick survey to estimate that tasers aren't killing people we don't expect, and some readers respond saying how dare you ignore the vulnerable heart people. That's still an important thing to consider and maybe we should be careful with the mass scale rollout of tasers, but it wasn't really the immediate point.
Given that the quote you cited was, "Are the chatbots really driving people crazy, or just catching the attention of people who were crazy already," I'd say the equivalent would be something like, "Are tasers really killing people, or were tasered heart attack victims dying already?"
And yeah, I'd be mad about that framing! The fact that the people who die had a preexisting vulnerability does not mean they were "already dying" or that they were not "really killed."
I can agree that Alexander might appear flippant or even callous about mental health at times (especially compared to modern liberal social media sensibilities), but I chalk that up to the well-earned desensitization of a professional working in the field for decades.
This is a fair point: I'm familiar with Scott Alexander but somehow didn't know he was a psychiatrist, so that part of my point was unfair.
Nevertheless, I think the broader argument still stands. I think that it's at best unhelpful and at worst actively harmful (in the sense of carrying water for AI firms who would happily drive people mad as long as it didn't hurt their stock price) to pretend it's possible to draw a line between "people with risk factors" and "normal people". Everyone is at risk to some extent -- e.g. nobody is too far from financial crisis and homelessness, a major risk factor on its own. Talking about how "people who weren't crazy already" are less at risk ignores the fact that 99%+ of people who "are crazy already" were at some point not crazy, and the path between is often surprisingly smooth. It does us as a society no good if we pretend "normal people" don't have to worry about this phenomenon -- especially when having some kooky ideas was enough to get bucketed into "not normal" for this particular survey.
I hate to be the 'you didn't read the article' guy but that line taken out of context is the exact opposite of my takeaway for the article as a whole. For anyone else who skims comments before clicking I would invite you to read the whole thing (or at least get past the poorly-worded intro) before drawing conclusions.
The best conspiracy theory could be, of course, that other people don’t actually exist. They are a figment of imagination put up by the brain to cope with the utter loneliness.
> All psychopathology was about unconscious emotional conflicts, mainly dating to childhood; if the conflicts were normal or mild, they produced “neuroses”; if they were severe, they produced “psychoses.”
> In addition to 14 validated diagnoses published in the RDC in 1978, a mere two years later DSM-III came out with 292 claimed diagnoses. There is no metaphysical possibility that 278 psychiatric diagnoses suddenly were discovered in two years. They were invented.
There's obviously a gulf of potential argument in that definition, but a unique form would be people who report hearing voices, but the voices are not hostile or angry... so actually it's not a problem.
If you want to communicate about patients, you need an agreed set of categories.
What makes good categories is indeed what's most useful for the related profession(s). They're the ones who actually have to use them to communicate.
Instead of looking at gambling addictions as a personal failing, she asserts they are a result of the "interaction between the person and the machine."
Similarly here I think there's something more than just the propensity of crazy people to be crazy that was already there, I do think there's something to the assertion that it's the interaction between both. In other words, there's something about LLMs themselves that drive this behavior more so than, for example, TikTok.
[0] https://www.reddit.com/r/MyBoyfriendIsAI/s/oZXJ3TUhVC
[1] https://www.reddit.com/r/MyBoyfriendIsAI/s/nZpoziZO8W
Moral panic narratives about pornography have become popular in recent years, but though many critiques of mainstream pornography are valid (that it's pervasively misogynistic, for example), pornography hasn't actually been linked to any concrete harms. "Pornography addiction," the poster child for anti-porn narratives, is not recognized as a condition by any major medical organization, and self-reported pornography addiction correlates much more strongly with conservative views on sexuality than with actual quantity of pornography consumed.
And who would have an interest in _promoting_ this kind of obsession... oh, maybe AI companies themselves, with which Reddit is already intertwined anyway. Hm. Still seems like a real problem and probably the posts are also by real people. Yes, terrifying.
Conversely, at a previous job I was forced to code in Go, became massively depressed, and am still not over it.
I guess my point is that n=1 isn't enough to really know if it's that LLMs got to you, or if you were already on the verge of burnout or depression anyway.
I'd say "we'll see", except in reality there's very few robust studies on depression in cohorts like "developers", so probably the stats won't come out.
I personally recommend doing more of whatever sport it is you like (or if you don't have one, starting running and/or lifting at the gym), and using less social media.
Let's say I believe in dragons, and I start interpreting any evidence as dragon evidence. Furthermore, I start only looking for evidence that could be connected to dragons. It's bad thinking.
The opposite is the good thinking. You look at evidence without searching for anything specific, then you make a hypothesis on what is going on.
Searching for evidence of chatbot-induced psychosis is settling on a cause before looking at evidence. It's obvious that is wrong.
For example, the survey the author did should not have asked if anyone close "had shown signs of AI psychosis". The question is already biased from the start.
The article explores the popular idea that talking to a chatbot can induce psychosis. This paints a picture of a person talking to an AI chatbot and going insane. Then it proceeds to say it's a rare case, therefore shutting down possibilities that this could lead to an epidemic. However, by doing this, the article discourages the reader from thinking of other possible scenarios (like unaware interaction with AI-produced content) leading to psychological issues.
So as far as I can tell, you're describing the scientific method as "bad thinking" and criticizing the author for using it. Which is certainly quite a take.
When you encounter an unexplained phenomenon (such as an increase in cases of psychosis), you should observe first, then hypothesize about what the cause is.
This is interesting and something I never considered in a broad sense.
I have noticed how the majority of programmers I worked with do not have a mental model of the code or what it describes – it's basically vibing without an LLM, the result accidental. This is fine and perfectly workable. You need only a fraction of devs to purposefully shape the architecture so that the rest can continue vibing.
But never have I stopped to think whether this extends to the world at large.
Of course everyone has world models. Otherwise people would wander into traffic like headless chickens, if they'd even be capable of that. What he likely means is that not everyone explicitly thinks of possibilities in terms of probabilities that are a function of Bayesian updating. That does not imply the absence of world models.
You could argue that some people have simpler world models, but claiming the absence of world models in others is extremely arrogant.
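For readers who want the Bayesian-updating reference spelled out, a single update looks like the sketch below; the prior and likelihoods are toy numbers chosen only for illustration.

    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        """Posterior probability of a claim after seeing one piece of evidence."""
        numerator = likelihood_if_true * prior
        return numerator / (numerator + likelihood_if_false * (1 - prior))

    # Toy example: a claim you think is 10% likely, plus evidence that is four
    # times more probable if the claim is true than if it is false.
    print(bayes_update(prior=0.10, likelihood_if_true=0.8, likelihood_if_false=0.2))
    # -> ~0.31: a real update, but still far from certainty.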
Roughly 4% of the population are said to have aphantasia (lacking a "mind's eye"). Around 10% (numbers vary) don't have an internal monologue.
Unfortunately there's almost no research on the consequences of lacking things which many would consider prerequisites for evaluating truth-claims about the world around them, but obviously it's not quite so stark; they are capable of abstract reasoning.
So, if someone with aphantasia reads a truth claim 'X is true' and they can't visualise the evidence in their mind, what then? Perhaps they bias their beliefs on social signals in such circumstances. Personally, this makes sense to me as a way to explain how highly socially conformist people perceive the world: they struggle to imagine anything which would get them into trouble.
You don't need either of those to have a world model. A world model is a representation of reality that you can use and manipulate to simulate or predict the outcome of your actions. If you are able to discriminate that accepting a $1,000,000 unconditional gift is a better action than moving in front of a moving train, you have a world model.
You can question the sophistication of world models in people — that's essentially what intelligence represents — but not their existence.
>First, much like LLMs, lots of people don’t really have world models.
Compare for instance to a blind person using sound, touch, memorization, signals from a guide dog to navigate.
God you are so convinced of your own brilliance aren't you?
>aphantasia reads a truth claim 'X is true' and they can't visualise the evidence in their mind
That's not what aphantasia is. It's just about visual imagery; it says nothing about one's capacity to reason through hypotheticals or counterfactuals.
I’d be interested in seeing a study of similar people but in this sample size (n=1), visualising evidence isn’t needed to evaluate it. I’m perfectly comfortable thinking about things without needing an image of it in my head or in front of me.
For example: should we allow big game hunting as a way to fund wildlife conservation? Whoa, not sure. Let me google an image of an elephant so I can remind myself what they look like.
When does having aphantasia mean someone doesn't have a world model? Ditto for an internal monologue? Also the data on subjective experiences is notoriously flaky. I.e. it's highly likely that many people don't even know what an internal monologue actually means when they do in fact have something approximating that description.
Similarly for aphantasia. In fact, you can see a list of notable people with Aphantasia where you can see it includes professional sportspeople, writers, tech founders etc. I.e. you can have no "minds eye" and still reach the highest heights in our society, again, meaning that the mind is still constructing some model of the world and in fact our own understanding of how our brain works is just incredibly limited and basic.
In my opinion, every person has a model of the world (kind of obviously), but our brains are more idiosyncratic than we suppose and we represent things very differently to each other, and there is no "right brain" or "wrong brain".
An example would be improvised jazz, the musicians need to bend the rules, but they still need some sense of key and rhythm to make it coherent.
But if racist uncle talks to his other racist uncle friends who have similar insular lifestyles, the ideas will quickly spread. Until they become big enough to e.g. affect voting behaviour.
As nice as that would be, it's only marginally less true.
> everyone without my political beliefs is a model-free slop machine that just goes by vibes.
Nah, some of them are evil on purpose.
But like, in all seriousness: politics is downstream of a world-model, right? And the two predominant world models are giving very different predictions, right? So what are the odds that both models are somehow equally valid, equally wrong (even if it's on different cases that somehow happen to add to the same 'moral value')? And we also know that one of the models predicts that climate change isn't real? At some point, a world-model is so bad that it is indistinguishable from being a model-free slop machine.
Politics is (if systematically grounded, which for many individuals it probably isn't-and this isn't a statement about one faction or another, it is true across factions) necessarily downstream of a moral/ethical value framework. If that is a consequentialist framework, it necessarily also requires a world model. If it is a deontological framework, a world model may or may not be necessary.
> And the two predominant world models are giving very different predictions, right?
I...don't agree with the premise of the question that there are "two dominant world models". Even people in the same broad political faction tend to have a wide variety of different world models and moral frameworks; political factions are defined more by shared political conclusions than shared fundamental premises, whether of model or morals; and even within a system like the US where there are two broad electoral coalitions, there are more than two identifiable political factions, so even if factions were cohesive around world models, partisan duopoly wouldn't imply a limitation to two dominant world models.
Yeah, I agree with this.
> necessarily downstream of a moral/ethical value framework. If that is a consequentialist framework, it necessarily also requires a world model. If it is a deontological framework, a world model may or may not be necessary.
I kinda think that deontological frameworks are basically vibes? And if you start to smuggle in enough context about the precise situation where the framework is being applied, it starts to look a lot like just doing consequentialism.
> I...don't agree with the premise of the question that there are "two dominant world models". Even people in the same broad political faction tend to have a wide variety of different world models and moral frameworks; political factions are defined more by shared political conclusions than shared fundamental premises, whether of model or morals; and even within a system like the US where there are two broad electoral coalitions, there are more than two identifiable political factions, so even if factions were cohesive around world models, partisan duopoly wouldn't imply a limitation to two dominant world models.
A 'world-model' is a matter of degree and, at a minimum, pluralities of people in any faction don't really have something that meets the bar. And sure, at the limit you could say that reality is entirely subjective because every individual has a unique to them 'world-model'. But I think that goes a bit too far. And I think there's a pretty strong correlation between the accuracy of a given individual's world model and the party they vote for.
It's truly shocking to witness someone you've known your whole life just go off the deep end into something that has so many demonstrably false aspects, and watch them start saying and believing so much batshit crazy stuff. I don't know of anything comparable, short of a previously typical person developing a severe meth addiction, which is known to cause psychosis.
And people want to be special; to find meaning, purpose beyond the daily grind.
The result wasn't very difficult to predict; more likely it was one of the driving forces behind the push.
https://www.timecube.net/
https://github.com/RCALabs/mmogit/blob/db70c9b377da7c4805a1d...
75 more comments available on Hacker News