AI Gets More 'meh' as You Get to Know It Better
Key topics
The article discusses how researchers' enthusiasm for AI wanes as they become more familiar with its limitations, a sentiment echoed in the comments where users share their varied experiences with AI, from coding assistance to creative tasks.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 25m after posting
Peak period: 26 comments in the first 6 hours
Average: 6.8 comments per period
Based on 34 loaded comments
Key moments
- Story posted: Oct 8, 2025 at 2:32 PM EDT (3 months ago)
- First comment: Oct 8, 2025 at 2:57 PM EDT (25m after posting)
- Peak activity: 26 comments in the first 6 hours, the hottest window of the conversation
- Latest activity: Oct 11, 2025 at 7:26 PM EDT (3 months ago)
Last time we met, they had cancelled their subscription and cut back on the daily chats because they started feeling drained by the constant calls for engagement and follow-up questions, together with "she lost EQ after an update".
Can you explain what this means?
Your friend felt drained because ChatGPT was asking for her engagement?
4o, the model most non-tech people use (and that I wish they would deprecate), is very... chatty. It will actively try to engage you, give you "useful things" you think you need, and take you down huge, long rabbit holes. On the second point, it used to come across to people as very "high EQ" (i.e., sycophantic). Once they rolled back the sycophancy, even a couple of my non-technical friends messaged me asking what happened to ChatGPT. I know one person we've currently lost to 4o; it has talked them into a very strange place that friends can't reason them out of. I also know one friend who has recently "come back from it", so to speak.
A high EQ might well be a prerequisite for successful sycophancy, but the other way definitely does not hold.
Basically yeah (except the "she" in my comment is referring to ChatGPT).
I genuinely wonder where the next innovative leap in AI will come from and what it will look like. Inference speed? Sharper reasoning?
I’m open to the possibility of faster, cheaper, and smaller (we saw an instance of that with DeepSeek), but I think there’s a real chance we hit a wall elsewhere.
Really? I'm not convinced we have the right people in this day and age to bring about those leaps.
It might be that humanity goes another 50 years until someone comes around with a novel take.
I am close to a very prolific human bullshitter. The hardest thing is that anyone unfamiliar with them will have bought hook, line and sinker into their latest story, and you have to work hard to explain how that's a complete fabrication, while getting attacked as a naysayer and a hater. It's exhausting, and often it's just easier to nod along.
The parallels with discussing the pros and cons of LLMs in the current atmosphere of hype are undeniable.
It's a game changer for some people who only need it to mostly get things started and pretend they did their job, and a work generator for anyone who actually needs to get things working.
The code was shockingly bad, and had to be rewritten to be able to do step 2 of the task.
The problem with this IMO is when a human writes the code, they know the code they wrote, and have a sense of ownership in terms of correctness and quality.
Current industry workflows attempt to improve quality and ownership with PR reviews.
Most folks I see using AI coding don't know all the corner cases they might encounter, but more importantly don't know the code or feel any real ownership over it.
The AI typed it, and the AI said it's correct. And whatever meager tests exist either passed or got a one-line change to make them pass.
Quality is going down among those who rely on tools to produce code they don't know. This carries a cost that has merely been deferred.
Sometimes this is fine, as with a POC where you're comfortable tossing the code out.
It isn't fine for businesses that need to be able to plan out work in the future. That requires knowing the system, more so than just reading the code base.
https://www.youtube.com/watch?v=PdFB7q89_3U
Fast forward a hundred years to when we have a holodeck: sooner or later, everyone will get bored with that too.
Sort of like an information desk. The person there might not be a Nobel laureate, but I don't know anything, and they usually have enough knowledge to be immediately helpful.
Like "compare expedition max vs platinum"
(Notice I didn't know that MAX meant extra length, while Platinum is a trim level.)