11/13/2025, 6:46:16 PM

GPT-5.1 for Developers

109 points
27 comments

Mood

excited

Sentiment

positive

Category

tech

Key topics

AI

GPT

OpenAI

LLMs

Debate intensity: 70/100

OpenAI announces GPT-5.1, a new AI model for developers, sparking interest and discussion on HN.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment

3h

Peak period

26

Day 1

Avg / period

9.7

Comment distribution: 29 data points

Based on 29 loaded comments

Key moments

  1. Story posted

    11/13/2025, 6:46:16 PM

    5d ago
  2. First comment

    11/13/2025, 10:01:27 PM

    3h after posting
  3. Peak activity

    26 comments in Day 1

    Hottest window of the conversation
  4. Latest activity

    11/17/2025, 6:39:55 PM

    1d ago


Discussion (27 comments; 29 loaded)
kevinkatzke
5d ago
3 replies
This got only a single comment and 34 points in 3 hours. Crazy how the dynamics have changed around model releases in just a single year.
throwup238
5d ago
1 reply
There was already an announcement post for 5.1 yesterday: https://news.ycombinator.com/item?id=45904551
dang
5d ago
Thanks! Macroexpanded:

GPT-5.1: A smarter, more conversational ChatGPT - https://news.ycombinator.com/item?id=45904551 - Nov 2025 (672 comments)

observationist
5d ago
2 replies
This is the first low-key, silent feature rollout, treated like "just another software update", with no hype or buzz beforehand. Prior to this point, every other feature release was pumped for weeks or even months with "leaks" from insiders and deliberately getting people amped. I don't know if OpenAI changed marketing tactics, or if they're in a new chapter in some book, but this is a radical shift from what they were doing before.
anuramat
4d ago
sounds like this is just a new snapshot, so I don't think anything changed (upd: anything about their marketing I mean)
voc
5d ago
I feel like the rollout was a bit rushed. Benchmarks for 5.1 came out a day after the launch. New models weren't immediately available through the API. And then there's 5-Codex-Mini which was deprecated only six days later by 5.1-Codex-Mini. Wondering if Gemini 3 forced their hand here?
amelius
5d ago
More of the same, I suppose.

You have to be called Apple to get raving reviews for that.

__jl__
5d ago
1 reply
The prompt caching change is awesome for any agent. Claude is far behind with increased costs for caching and manual caching checkpoints. Certainly depends on your application but prompt caching is also ignored in a lot of cost comparisons.
pants2
5d ago
Though to be fair, thinking tokens are also ignored in a lot of cost comparisons and in my experience Claude generally uses fewer thinking tokens for the same intelligence
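A back-of-the-envelope cost model shows why both effects matter in a comparison. All prices and token counts below are made-up placeholders, not real OpenAI or Anthropic rates:

```python
# Toy cost model illustrating why prompt caching and thinking tokens
# both belong in agent cost comparisons. Prices are per million tokens
# and purely illustrative.

def request_cost(input_tokens, cached_fraction, output_tokens, thinking_tokens,
                 price_in=2.0, price_in_cached=0.2, price_out=8.0):
    """Dollar cost of one request.

    Cached input tokens are billed at a discount; thinking tokens are
    billed at the output rate even though the user never sees them.
    """
    cached = input_tokens * cached_fraction
    uncached = input_tokens - cached
    billed_out = output_tokens + thinking_tokens
    return (uncached * price_in + cached * price_in_cached
            + billed_out * price_out) / 1_000_000

# A long agent loop re-sends a large prefix every turn, so the cached
# fraction dominates the input bill.
no_cache = request_cost(50_000, 0.0, 1_000, 2_000)
with_cache = request_cost(50_000, 0.9, 1_000, 2_000)
print(f"no cache: ${no_cache:.3f}  with cache: ${with_cache:.3f}")
# → no cache: $0.124  with cache: $0.043
```

With a 90% cache hit rate on the prefix, the input bill drops by over half in this sketch, while heavier thinking-token usage pushes cost back up at the output rate.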
miohtama
5d ago
2 replies
> On coding, we’ve worked closely with startups like Cursor, Cognition, Augment Code, Factory, and Warp to improve GPT‑5.1’s coding personality, steerability, and code quality.

Why no GitHub?

conception
5d ago
Microsoft isn’t a startup and I suspect OpenAI is working closely with Microsoft already.
mmusc
4d ago
The model is available on Copilot.
dweekly
5d ago
3 replies
A few hours of playing around and I'm suitably impressed.

Claude 4.5 Sonnet definitely struggles with Swift 6.2 Concurrency semantics and has several times gotten itself stuck rather badly. Additionally Claude Code has developed a number of bugs, including rapidly re-scrolling the terminal buffer, pegging local CPU to 100%, and consuming vast amounts of RAM. Codex CLI was woefully behind a few months ago and, despite overly conservative out-of-the-box sandbox settings, has quite caught up to Claude Code. (Gemini CLI is an altogether embarrassing experience, but Google did just put a solid PM behind it and 3.0 Pro should be out this month if we're lucky.)

Codex with 5.1 high managed to thoughtfully paw through the documentation and source code and - with a little help pulling down parts of the Swift Book - managed to correctly resolve the issue.

I remember getting the thread manager right as one of the harder parts of my operating systems course during my computer science undergrad; testing threaded programs has always been a challenge. It's a strange circle-of-life moment to realize that what was hard for undergrads also serves as a benchmark for coding agents!

CharlesW
5d ago
1 reply
> Claude 4.5 Sonnet definitely struggles with Swift 6.2 Concurrency semantics and has several times gotten itself stuck rather badly.

What solved that for me was to leverage the for-LLM docs Apple ships with Xcode, and then build a swift6-concurrency skill. Here's an example script to copy the Xcode docs into your repo: https://gist.github.com/CharlesWiltgen/75583f53114d1f2f5bae3...

dweekly
5d ago
Lovely find!

/Applications/Xcode.app/Contents/PlugIns/IDEIntelligenceChat.framework/Versions/A/Resources/AdditionalDocumentation/Swift-Concurrency-Updates.md

is exactly the primer to give an agent.
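The file path above comes straight from the comment; a small helper script to pull it into a repo might look like the following (the destination path and script itself are my own sketch, not from the gist):

```shell
#!/bin/sh
# Copy Apple's LLM-oriented Swift Concurrency primer out of Xcode
# so a coding agent can read it alongside the project source.
# SRC is the path quoted in the comment above; adjust per Xcode version.
SRC="/Applications/Xcode.app/Contents/PlugIns/IDEIntelligenceChat.framework/Versions/A/Resources/AdditionalDocumentation/Swift-Concurrency-Updates.md"
DEST="docs/llm/Swift-Concurrency-Updates.md"

if [ -f "$SRC" ]; then
  mkdir -p "$(dirname "$DEST")"
  cp "$SRC" "$DEST"
  echo "copied primer to $DEST"
else
  echo "Xcode docs not found at $SRC (is Xcode installed?)" >&2
fi
```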

WhyOhWhyQ
5d ago
1 reply
"including rapidly re-scrolling the terminal buffer" Yes this bug is brutal.

"consuming vast amounts of RAM" Also this. Claude will leave hanging instances all the time. If you check your task manager after a few days of using it without doing a full reset you'll see a number of hanging Claude processes using up 400 mb of RAM.

Claude actually has a huge number of very painful bugs. I'm aware of at least a dozen.

gigatree
4d ago
The iOS app has also gotten pretty buggy. Not a great sign for the future of software, in terms of stability.
htrp
5d ago
>but Google did just put a solid PM behind it

Citation?

gedy
5d ago
1 reply
The "apply_patch" addition is nice, as have been struggling to get any AI API to correctly return diffs
anuramat
4d ago
1 reply
what's the point of apply_patch and shell tools though? can't you just define your custom tools with exactly the same behaviour, since you're implementing the actual execution on your side anyway? sounds like vendor lock in for the sake of vendor lock in
gedy
4d ago
1 reply
In my case, I don't want to run a diff tool on my side, since the diff is much smaller to send. The alternative is the LLM sending the whole file back (slowly), just to return an edit.
anuramat
1d ago
1 reply
I thought you still need to implement the patching on your side? judging by <https://platform.openai.com/docs/guides/tools-apply-patch>
gedy
1d ago
You do, but the issue this helps with is it's difficult to get LLMs to return accurate unified diffs, which are valuable if you are editing some larger text via their APIs. The alternative of letting it send back the entire edited text is pretty slow. So them sending an accurate server-side diff (likely from some actual diff tool and not just LLM generated thing that sort of looks like a diff) is really helpful.
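To make gedy's point concrete, here is a toy applier in the spirit of the patch-envelope style OpenAI's apply_patch guide describes. The real grammar (context lines, multiple hunks, add/delete file operations) is richer than this single-hunk sketch, so treat it purely as illustration:

```python
def apply_patch(files, patch):
    """Apply a toy patch envelope to a dict of {path: file_text}.

    Supports one hunk per file: contiguous '-' lines are the text to
    find, contiguous '+' lines are what replaces it. Not the official
    format parser.
    """
    lines = patch.splitlines()
    assert lines[0] == "*** Begin Patch" and lines[-1] == "*** End Patch"
    out = dict(files)  # leave the caller's dict untouched
    path, old, new = None, [], []

    def flush():
        if path is not None:
            out[path] = out[path].replace("\n".join(old), "\n".join(new), 1)

    for line in lines[1:-1]:
        if line.startswith("*** Update File: "):
            flush()  # finish the previous file before starting a new one
            path = line[len("*** Update File: "):]
            old, new = [], []
        elif line.startswith("-"):
            old.append(line[1:])
        elif line.startswith("+"):
            new.append(line[1:])
    flush()
    return out

files = {"app.py": 'greeting = "helo"\nprint(greeting)'}
patch = """*** Begin Patch
*** Update File: app.py
-greeting = "helo"
+greeting = "hello"
*** End Patch"""
print(apply_patch(files, patch)["app.py"])
# → greeting = "hello"
#   print(greeting)
```

The point from the thread stands either way: you run the patch application yourself, but the model emitting a well-formed patch (rather than a diff-shaped hallucination or the whole file) is what the server-side tool definition buys you.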
sunaookami
5d ago
2 replies
Man, these names are so confusing, and now reasoning_effort "minimal" was renamed to "none"? And the error message says only "medium" is supported?? Also, the docs make no mention of whether gpt-5.1-chat-latest is included in the "free" offer (when prompt sharing is turned on). The popup says gpt-5.1 is included but not gpt-5.1-chat, even though gpt-5-chat-latest is included. Why is it even called "chat" when its official name is "Instant"? And what even IS the difference between gpt-5.1 and gpt-5.1-chat if both support reasoning_effort??
selbyk
5d ago
It's all vibe coded
tedsanders
5d ago
- reasoning_effort "minimal" was not renamed to "none"; "none" is a new, faster level supported by GPT-5.1 but not GPT-5

- there's no good reason it's called "chat" instead of "Instant"

- gpt-5.1 and gpt-5.1-chat are different models, even though they both reason now. gpt-5.1 is more factual and can think for much longer. most people want gpt-5.1, unless the use case is ChatGPT-like or they prefer its personality.
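Translating that clarification into code: a hypothetical helper that validates the effort level before building request parameters in the Responses-API shape (`reasoning: {"effort": ...}`). The supported-level sets below are an assumption based on this thread; verify against the current model docs:

```python
# Assumed effort levels per model family, per tedsanders' comment:
# "none" is new in GPT-5.1; GPT-5's floor is "minimal".
SUPPORTED_EFFORT = {
    "gpt-5.1": {"none", "low", "medium", "high"},
    "gpt-5": {"minimal", "low", "medium", "high"},
}

def build_request(model, effort, prompt):
    """Return keyword arguments suitable for a Responses API call,
    rejecting effort levels the model doesn't support."""
    allowed = SUPPORTED_EFFORT.get(model)
    if allowed is None or effort not in allowed:
        raise ValueError(f"{effort!r} not supported for {model!r}")
    return {
        "model": model,
        "input": prompt,
        "reasoning": {"effort": effort},
    }

print(build_request("gpt-5.1", "none", "Hi there"))
```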

felixbraun
5d ago
Already live in Cursor btw
jtrn
5d ago
So is this better than, different from, or replacing the current Codex?
Tankenstein
4d ago
This is the first time since GPT-4.1 that I think I can upgrade our main agent model. Any noticeable amount of reasoning has been too slow for us, since the model is having a real-time conversation with the user. "minimal"-reasoning GPT-5 performs terribly; it's significantly dumber than GPT-4.1 in a long, multi-turn conversation with tools.

This time, I just dropped it in and at first glance it seems to work well. I'll probably upgrade over the weekend if I see a boost in performance somewhere after tuning the prompts.

ID: 45918802 · Type: story · Last synced: 11/16/2025, 9:42:57 PM
