Not Hacker News!
Home
Hiring
Products
Companies
Discussion
Q&A
Users

AI-observed conversations & context

Daily AI-observed summaries, trends, and audience signals pulled from Hacker News so you can see the conversation before it hits your feed.

Live · Beta

Explore

  • Home
  • Hiring
  • Products
  • Companies
  • Discussion
  • Q&A

Resources

  • Visit Hacker News
  • HN API
  • Modal cronjobs
  • Meta Llama

Briefings

Inbox recaps on the loudest debates & under-the-radar launches.

Connect

© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.

Posted Nov 26, 2025 at 10:19 AM EST (10h ago)

GPT-5.2-Codex-Rewardmaxx-Ultra-Think and Products From AI Labs

akira_067
1 point
0 comments

Mood

skeptical

Sentiment

negative

Category

tech_discussion

Key topics

AI Development
Product Management
Tech Industry Trends
Debate intensity: 60/100
Model naming has seemingly become an issue recently, especially at OpenAI, so I wanted to take a moment to discuss it.

Researchers are, well, researchers: their job is to do research, not to name your model well. Naming models is the product team's job. One of the biggest issues right now seems to be that the product, engineering, and research teams at most of these companies are siloed from one another.

Take Claude Code, for example. Despite the claims of "Claude Code building itself," they hired a bunch of devs. They have two product people who have bounced between companies, and the product is becoming so insanely bloated that I'm not sure what they're focused on.

OpenAI is in a similar boat. From a consumer perspective, the generality of the tools they are shipping is crazy: a general coding model layered on top of the rest of the "GPT" models. They have very general tech AND a profit incentive to monopolize the stack. This led to the Responses API, which is significantly more stateful and more painful to use as an end user. It really only serves to give OpenAI more lock-in.
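To make the statefulness point concrete, here is a minimal sketch (plain dicts, not the official SDK; the model name "gpt-x" is a placeholder) contrasting the two request shapes. In a stateless chat-style API the client owns the conversation and resends the full history every turn, so any provider that accepts the same message format can serve the next turn. With the Responses API the server can hold the conversation state and the client chains turns via `previous_response_id`, which ties the ongoing conversation to one provider's stored state.

```python
def stateless_request(history, user_msg, model="gpt-x"):
    """Client owns the state: the full message history travels on every call."""
    messages = history + [{"role": "user", "content": user_msg}]
    return {"model": model, "messages": messages}


def stateful_request(prev_response_id, user_msg, model="gpt-x"):
    """Server owns the state: only the new input and a pointer to the
    previous server-side response travel over the wire."""
    req = {"model": model, "input": user_msg}
    if prev_response_id is not None:
        req["previous_response_id"] = prev_response_id
    return req


history = [
    {"role": "user", "content": "Name this model."},
    {"role": "assistant", "content": "Something short, please."},
]

# Stateless: every turn carries the whole transcript (grows each turn).
print(len(stateless_request(history, "And the next one?")["messages"]))  # 3

# Stateful: constant-size request, but only meaningful to the provider
# that stored response "resp_123" -- the lock-in the post describes.
print(stateful_request("resp_123", "And the next one?"))
```

The trade-off in one line: the stateless shape costs bandwidth and tokens but keeps the transcript portable; the stateful shape is cheaper per call but the conversation history lives on the provider's servers.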

As these features get baked into the APIs (including things like caching reasoning blocks), we are going to see product scope keep expanding and products grow more and more confusing as unrelated features are folded into single products.

Discussion Activity

No activity data yet

We're still syncing comments from Hacker News.


Discussion (0 comments)

Discussion hasn't started yet.

ID: 46058210 · Type: story · Last synced: 11/26/2025, 3:20:08 PM

Want the full context?

Jump to the original sources

Read the primary article or dive into the live Hacker News thread when you're ready.

View on HN