ChatGPT Is Not an LLM – GPT Is
Key topics
The article argues that ChatGPT is not just a Large Language Model (LLM), but a more sophisticated cognitive architecture that includes memory, tools, and orchestration, sparking a discussion on the nuances of AI terminology and its implications.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
- First comment: 54m after posting
- Peak period: 15 comments in 0-2h
- Avg per period: 5.8
- Based on 23 loaded comments
Key moments
- 01 Story posted: Sep 7, 2025 at 8:52 AM EDT (4 months ago)
- 02 First comment: Sep 7, 2025 at 9:46 AM EDT (54m after posting)
- 03 Peak activity: 15 comments in 0-2h, the hottest window of the conversation
- 04 Latest activity: Sep 8, 2025 at 1:21 PM EDT (4 months ago)
But the offerings of ChatGPT or Google's AI Studio surpass the feature set of LibreChat by a lot. It used to be just a "better" system prompt, but now it's a lot more.
> This isn’t just about keeping a chat history - it’s about building and maintaining a model
> Understanding this distinction isn’t just about getting the terminology right. It’s about understanding the future of human-computer interaction
> When we say “ChatGPT” when we mean “LLM,” we’re not just being sloppy - we’re obscuring fundamental architectural and strategic decisions
> when you interact with modern ChatGPT, you’re not just talking to a language model - you’re collaborating with an intelligent agent
No comment.
It’s not acting as an agent unless it does something in the world on your behalf.
If you're thinking about how to integrate AI into your system, it's worth asking the question of why your system isn't just ChatGPT.
- Do you have unique data you can pass as context?
- Do you have APIs or actions that are awkward to teach to other systems via MCP?
- Do you have a unique viewpoint that you are writing into your system prompt?
- Do you have a way to structure stored information that's more valuable than freeform text memories?
- etc.
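The checklist above can be sketched as a thin wrapper that layers your unique data and viewpoint on top of a generic model call. This is only an illustrative sketch: `call_llm` is a hypothetical stand-in for whatever chat-completion API you use, and the helper names are invented for the example.

```python
# Minimal sketch of the "why isn't your system just ChatGPT?" checklist:
# the value lives in what you layer on top of a generic model call.
# `call_llm` is a hypothetical stand-in for any chat-completion API.

def call_llm(system_prompt: str, messages: list[dict]) -> str:
    # Placeholder: a real implementation would call a model provider here.
    return f"[model reply given {len(messages)} message(s)]"

def answer(question: str, unique_data: str, viewpoint: str) -> str:
    """Wrap a generic model with the assets only your system owns."""
    system_prompt = (
        f"{viewpoint}\n\n"           # your unique viewpoint, as a system prompt
        f"Context:\n{unique_data}"   # your unique data, passed as context
    )
    messages = [{"role": "user", "content": question}]
    return call_llm(system_prompt, messages)

print(answer("Which tables block the migration?",
             unique_data="orders -> users (FK), 3M rows",
             viewpoint="You are a cautious database-migration planner."))
```

The stub makes the point structurally: the model call itself is commoditized, while the system prompt and context construction are where a product differentiates.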
For instance, we [0] are writing an agent that helps you plan migrations. You can do this with ChatGPT, but it hugely benefits from (in descending order of uniqueness) access to
1) a structured memory that's a cross between Asana and the output of `grep` in a spreadsheet,
2) a bunch of best-practice instructions on how to prep your codebase for a migration, and
3) the AI code assistant-style tools like ls, find, bash, etc.
So yeah, we're writing an agent, not building a model. And I'm not worried about ChatGPT doing this, despite the fact that GPT-5 is pretty good at it.
[0] https://tern.sh
Anyway, agents are control systems that use planning, tools, and a collection of underlying models. ChatGPT is an agent. What kind? The kind optimized for the general user looking to do work with public knowledge. That’s the best definition I can come up with.
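That definition of an agent as a control system can be sketched as a plan-act-observe loop. Everything here is illustrative: the `plan` function is a stub where a real agent would query a model, and the tool names are invented for the example.

```python
# Illustrative sketch of "agent = control system over planning, tools, and
# underlying models". The planner is a stub; in practice an LLM fills that role.

def plan(goal: str, observations: list[str]) -> dict:
    # Stub planner: a real agent would ask a model to choose the next action.
    if not observations:
        return {"tool": "search", "arg": goal}
    return {"tool": "finish", "arg": observations[-1]}

TOOLS = {
    "search": lambda q: f"notes about {q}",  # stand-in for a real tool call
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):          # control loop: plan -> act -> observe
        action = plan(goal, observations)
        if action["tool"] == "finish":
            return action["arg"]
        result = TOOLS[action["tool"]](action["arg"])
        observations.append(result)
    return "gave up"

print(run_agent("migrate the billing service"))
```

The loop, not the model, is what makes it an agent: the model only proposes actions, while the surrounding control system executes tools and feeds results back.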
Anyway, let’s make sure people understand the difference between AI systems and AI models. The former is where a lot of startup activity will be for a decade. The latter will be in the hands of a few well funded behemoths.
> Predictable: Same input always produces consistent output
There is no always.
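The objection holds because decoding at nonzero temperature samples from a distribution over next tokens; only greedy (temperature-zero) decoding is deterministic in principle. A toy sampler, with made-up logits, shows the difference:

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    """Toy decoder: greedy when temperature is 0, stochastic otherwise."""
    if temperature == 0:                # greedy: argmax is deterministic
        return max(logits, key=logits.get)
    weights = [math.exp(v / temperature) for v in logits.values()]
    return rng.choices(list(logits), weights=weights)[0]

logits = {"yes": 1.0, "no": 0.9}        # two nearly tied next tokens

greedy = {sample_token(logits, 0, random.Random(i)) for i in range(100)}
sampled = {sample_token(logits, 1.0, random.Random(i)) for i in range(100)}

print(greedy)   # always {'yes'}: same input, same output
print(sampled)  # typically both tokens appear: same input, varying output
```

Even this understates the caveat: real serving stacks can be nondeterministic at temperature 0 too, because of batching and floating-point reduction order.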
Was this written by GPT? ;)
Fwiw, right at the end