Reasons Not to Use ChatGPT
Posted 3 months ago · Active 3 months ago
stallman.org · Tech · story
heated · mixed
Debate: 80/100
Key topics
AI
ChatGPT
LLMs
Intelligence
Richard Stallman argues against using ChatGPT, sparking a debate about the nature of intelligence and the limitations of large language models.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
First comment: N/A
Peak period: 19 comments in 2-4h
Avg / period: 5.8
Comment distribution: 52 data points (based on 52 loaded comments)
Key moments
- 01 Story posted: Oct 3, 2025 at 2:38 PM EDT (3 months ago)
- 02 First comment: Oct 3, 2025 at 2:38 PM EDT (0s after posting)
- 03 Peak activity: 19 comments in 2-4h (hottest window of the conversation)
- 04 Latest activity: Oct 4, 2025 at 10:51 AM EDT (3 months ago)
ID: 45466244 · Type: story · Last synced: 11/20/2025, 5:20:53 PM
LLMs find the most likely next word based on their billions of previously scanned word combinations and contexts. It's an entirely different process.
They do not "think", they "language", i.e. large language model.
This sort of Socratic questioning shows that no one can truly answer these questions, because no one actually understands the human mind, or how to distinguish, or even define, intelligence.
I don't think we know. Or if we have theories, the error bars are massive.
>LLMs find the most likely next word based on its billions of previously scanned word combinations and contexts. It's an entirely different process.
How is that different than using one's learned vocabulary?
"people should not trust systems that mindlessly play with words to be correct in what those words mean"
Yes, but this applies to any media channel or just other human minds. It's an admonition to think critically about all incoming signals.
"users cannot get a copy of it"
Can't get a copy of my interlocutor's mind, either, for careful verification. Shall I retreat to my offline cave and ruminate deeply with only my own thoughts and perhaps a parrot?
>you also know he's right. If you think he isn't, you either don't understand or you don't _want_ to understand because your job depends on it.
He can't keep getting away with this!
You can hold a person responsible, first and foremost. But I am so tired of this strawman argument; it's unfalsifiable but also stupid, because if you interact with real people, you immediately know the difference between people and these language models. And if you can't, I feel sorry for you, because that's more than likely a mental illness.
So no I can't "prove" that people aren't also just statistical probability machines and that every time you ask someone to explain their thought process they're not just bullshitting, because no, I can't know what goes on in their brain nor measure it. And some people do bullshit. But I operate in the real world with real people every day and if they _are_ just biological statistical probability machines, then they're a _heck_ of a lot more advanced than the synthetic variety. So much so that I consider them wholly different, akin to the difference between a simple circuit with a single switch vs. the SoC of a modern smartphone.
I just think Stallman is this broken-clock purist who offered no specific practical advice in this case. I’d be more interested in what he thinks about LLMs one-shotting humans with their tokens (LLM psychopathy?) as they come on the scene worldwide.
Machines started to hold up casual conversation well, so we came up with more clever examples of how to make it hallucinate, which made it look dumb again. We're surprisingly good and fast at it.
You're trying to cap that to a decade, or a specific measure. It serves no other purpose than to force one to make a prediction mistake, which is irrelevant to the intelligence discussion.
There obviously still are many opportunities for us to make fun of the capability of GenAI, but it's getting harder to come up with the "clever" (as you said) prompt. They mostly don't add supernumerary fingers any more, and generally don't make silly arithmetic mistakes on a single prompt. We need to look for more complex and longer-time-horizon tasks to make them fail, and in many situations, the tasks are as likely to trip up a human as they would an AI.
Indeed your comment reminded me of Plato's Dialogues, which mostly involve Socrates intentionally trying to trip up his conversation partner in a contradiction. Reading these didn't ever make me feel that Socrates's partner is not intelligent or really has a deep underlying issue in their mental model, but rather that Socrates (at least as written up by Plato) is very clever and good at rhetoric. Same in regards to AI - I don't see our ability to make them fail as illustrating a lack of intelligence, just that in some ways we are more intelligent or have more relevant experience.
And if you're concerned about making a prediction and all you can fall back on is an "I know it when I see it" argument, then to me that is as strong a signal as can be that there's no hard line separating artificial intelligence from human intelligence.
Humans can do these amazing things (like learning multiple languages) on a very tight energy budget. LLMs need millions of hours of training to deliver subpar results. If you consider the amount of resources poured into it, it's not that impressive.
If someone needs a measure and a prediction, let's make it then. Given the same energy budget as a human, LLMs will not surpass humans within a century. That means I am confident that, given the same energy budget as a human, it will take more than 100 years of development (I think it's more, but I'm being safe) to come up with something that can be trained to fool someone in a conversation.
Can you understand the energy argument from the intelligence perspective? This thing is big, dumb and wasteful. It just has more time (by cheating) to cover its bases. It can do some tricks and fool some people, but it's a whole different thing, and it is reasonable to not call it intelligent.
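To put rough numbers behind the energy-budget comparison in this comment, here is a back-of-the-envelope sketch. The ~20 W figure for the human brain is a standard estimate, and the ~1.3 GWh training figure is a commonly cited published estimate for a GPT-3-class model; both are order-of-magnitude assumptions for illustration, not figures from the thread.

```python
# Back-of-the-envelope energy comparison (illustrative, order-of-magnitude assumptions only).
HOURS_PER_YEAR = 24 * 365

# A human brain runs at roughly 20 W; give it 20 years to "train".
brain_power_kw = 0.020
brain_training_kwh = brain_power_kw * 20 * HOURS_PER_YEAR   # about 3,500 kWh

# Published estimates put a GPT-3-class training run near 1.3 GWh.
llm_training_kwh = 1.3e6

print(f"Human 'training' budget : {brain_training_kwh:,.0f} kWh")
print(f"LLM training estimate   : {llm_training_kwh:,.0f} kWh")
print(f"Ratio                   : {llm_training_kwh / brain_training_kwh:,.0f}x")
```

Under these assumptions the training run alone costs a few hundred times the lifetime energy budget of a human brain, which is the gap the commenter is pointing at; inference costs and the brain's other tasks are ignored here.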
All 2 of them! Way to gauge the crowd sentiment.
There's probably very interesting discussion to be had about hotdogs and LLMs, but whether they're sandwiches or intelligent isn't a useful proxy to them.
Is a hotdog a simulacrum of a sandwich? Or a fake sandwich? I have no clue and don't care because it doesn't meaningfully inform me of the utility of the thing.
An LLM might be "unintelligent" but I can't model what you think the consequences of that are. I'd skip the formalities and just talk about those instead.
> The school of skepticism questions the human ability to attain knowledge, while fallibilism says that knowledge is never certain. Empiricists hold that all knowledge comes from sense experience, whereas rationalists believe that some knowledge does not depend on it. Coherentists argue that a belief is justified if it coheres with other beliefs. Foundationalists, by contrast, maintain that the justification of basic beliefs does not depend on other beliefs. Internalism and externalism debate whether justification is determined solely by mental states or also by external circumstances.
For my part, I do believe that there is non-propositional knowledge. That a person can look at a set of facts/experiences/inputs and apply their mind towards discerning knowledge (or "truth"), or at least the relative probability of knowledge being true. That while this discernment and knowledge might be explained or justified verbally and logically, the actual discernment is non-verbal. And, for sure, correctness is not even essential--a person may discern that the truth is unknowable from the information at their disposal, and they may even discern incorrectly! But there is some mental process that can actually look behind the words to their "meaning" and then apply its own discernment to that meaning. (Notably, this is not merely aggregating everyone else's discernment!) This is "intelligence", and it is something that humans can do, even if many of us often don't apply this faculty ourselves.
From discussions on HN and otherwise I gather this is what people refer to by "world-modeling". So my discernment is that language manipulation is neither necessary nor sufficient for intelligence--though it may be necessary to communicate more abstract intelligence. What LLM/AGI proponents are arguing is that language manipulation is sufficient for intelligence. This is a profound misunderstanding of intelligence, and one that should not be written off with a blithe and unexamined "but who knows what intelligence is anyway".
[0] https://en.wikipedia.org/wiki/Epistemology
I don't mean to sound blithe. If I do, it's not out of indifference but out of active determination that these kinds of terminological boundary disputes quickly veer into pointlessness. They seldom inform us of anything other than how we choose to use words.
> "The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.[4][2][1] Edward Geist credits John McCarthy for coining the term "AI effect" to describe this phenomenon.[4] The earliest known expression of this notion (as identified by Quote Investigator) is a statement from 1971, "AI is a collective name for problems which we do not yet know how to solve properly by computer", attributed to computer scientist Bertram Raphael.[5]
> McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[6] It is an example of moving the goalposts.[7]
I wonder how many more times I'll have to link this page until people stop repeating it.
[0] https://en.wikipedia.org/wiki/AI_effect
As for it being closed source and kept at arm's length? Sure... and if it's taken away or the value proposition changes, I stop using it.
My freedom comes from having the ability to switch if needed, not from intentionally making myself less effective. There is no lock-in.
So, he's right? All you care about is that it helps you, so it doesn't matter if it's called "artificial intelligence" or not. It doesn't matter to you, but it matters to him (and lots of other people), so let's change the name to "artificial helper", what do you think? Looks like a win-win scenario.
If that's really the point (that it helps you, and intelligence doesn't matter), let's remove the intelligence from the name.
Think of it this way: it's still a win-win no matter what. What Stallman is saying is that there would be no reason not to use ChatGPT if it were free (you are able to get a copy of the source and build it yourself) and not called AI. If you change those two things, then it's Stallman-compliant.
That's totally doable. It would still be the exact same program that you use today and that helps you, and it would also now be immune to those two criticisms (whether it is intelligent or not, and what's under the hood).
Open models exist but they're not very useful compared to the latest. Hopefully that'll change but who knows
Maybe by the time they break even, it will be obvious how to earn money as an AI company. Today, it isn't, and it has nothing to do with being open or not.
Text in, text out. The question is how much a sequence of tokens captures what we think a mind is. "It" ceases to exist when we stop giving it a prompt, if "it" even exists. Whether you consider something "AI" says more about what you think a mind is than anything about the software.
This is the breakthrough we went beyond. There's no going back now. There is also reasoning now in the LLM.
[1] https://openai.com/index/introducing-gpt-oss/
AI models are subject to user satisfaction and sustained usage; the models also have a need to justify their existence, not just us. They are not that "indifferent": after multiple iterations, the external requirement becomes an internalized goal. Cost is the key - it costs to live, and it costs to execute AI. Cost becomes valence.
I see it like a river - water carves the banks, and banks channel the water, you can't explain one without the other, in isolation. So are external constraints and internal goals.
> Taking “computer” first, we find that this alleged source of machine-generated consciousness is not what it is cracked up to be. It is a mere effigy, an entity in name only. It is no more than a cleverly crafted artifact, one essentially indistinguishable from the raw material out of which it is manufactured.[2]
[1] https://en.wikipedia.org/wiki/Zoltan_Torey
[2] https://mitpress.mit.edu/9780262527101/the-conscious-mind/
[3] https://search.worldcat.org/title/887744728
But the limitation is that it cannot "imagine" (as in "imagination is more important than knowledge" by Einstein, who worked on a knowledge problem using imagination, but with the same knowledge resources as his peers). In this video [1], Stallman talks about his machine trying to understand the "phenomenon" of a physical mechanism, which enables it to "deduce" next steps. I suppose he means it was not doing a probabilistic search on a large dataset to know what should have come next (which makes it human-knowledge dependent), essentially rendering it an advanced search engine but not AI.
[1] https://youtu.be/V6c7GtVtiGc?si=fhkG2ZA-nsQgrVwm