GPT-5-Codex and Upgrades to Codex
Mood: calm
Sentiment: mixed
Category: other
Key topics: The article discusses the release of GPT-5-Codex and its upgrades, sparking discussion around its capabilities, potential fine-tuning, and the credibility of related content creators.
Snapshot generated from the HN discussion
Discussion Activity
- Status: active discussion
- First comment: 3h after posting
- Peak period: 12 comments in Day 1
- Avg / period: 7.5
- Based on 15 loaded comments
Key moments
- Story posted: Sep 15, 2025 at 3:17 PM EDT (2 months ago)
- First comment: Sep 15, 2025 at 6:24 PM EDT (3h after posting)
- Peak activity: 12 comments in Day 1, the hottest window of the conversation
- Latest activity: Sep 17, 2025 at 4:13 AM EDT (2 months ago)
I know neither of them is a journalist -- I'm probably expecting too much -- but Simon should know better.
He was one of the original authors of Django, back when it was a “web framework for journalists with deadlines”.
I decided to risk it. Crucially, OpenAI at no point asked for any influence over my content at all, aside from sticking to their embargo (which I've done with other companies before).
They weren't deceptive about that - the new model IDs were clearly communicated - but with hindsight it did mean that those early impressions weren't an exact match for what was finally released.
My biggest miss was that I didn't pay attention to the ChatGPT router while I was previewing the models. I think a lot of the early disappointment in GPT-5 was caused by the router sending people to the weaker model.
For what it's worth, the GPT-5 I'm using today feels as impressive to me as the one I had during the preview. It's great at code and great at search, the two things I care most about.
I suspect that this is smaller than GPT-5, or at least a quantized version of it -- similar to what I suspect Opus 4.1 is. That would also explain why it's faster.
"Today, we’re releasing GPT‑5-Codex—a version of GPT‑5 further optimized for agentic coding in Codex."
So yeah, simplifying that to a "fine-tune" is likely incorrect. I just added a correction note about that to my article.
This seems to be a misunderstanding. In the original OpenAI article, "comment" refers to a code-review comment, not a comment in the code.