AI Hype Is Crashing Into Reality. Stay Calm
Posted 4 months ago · Active 4 months ago
Source: businessinsider.com (Tech story)
Tone: calm/mixed · Debate score: 60/100
Key topics
- Artificial Intelligence
- Hype Cycle
- Technology Adoption
The article discusses how AI hype is meeting reality, with some commenters questioning the extent of AI's potential impact and others highlighting the need for accountability among CEOs and managers who invested heavily in AI.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 53m after posting
- Peak period: 7 comments in 0-3h
- Avg per period: 2.9
[Comment distribution chart: 20 data points, based on 20 loaded comments]
Key moments
- 01 Story posted: Sep 6, 2025 at 3:06 PM EDT (4 months ago)
- 02 First comment: Sep 6, 2025 at 3:58 PM EDT (53m after posting)
- 03 Peak activity: 7 comments in 0-3h (hottest window of the conversation)
- 04 Latest activity: Sep 8, 2025 at 9:16 AM EDT (4 months ago)
ID: 45152001 · Type: story · Last synced: 11/20/2025, 3:32:02 PM
IOW, who will be fired for getting this wrong? Answer: nobody.
Some small AI-based companies will tank, but all the leaders in F500 companies know how to survive. If the emperor has no clothes, they'll all have collective amnesia and say they knew it the whole time. A few quotes to the press here, a few emails there, and they will move on with their BSing and talk about "what we learned."
I'll give Cook a lot of respect, since he knows supply chains and manufacturing particularly deeply, and that's really been his job post-Steve Jobs.
But what value does Sundar or Satya add, really? I've not heard a single super insightful comment come out of their mouths. You could replace them and not notice a difference in financial performance.
There's never any accountability for the bad—sometimes destructively, catastrophically bad—decisions of managers and executives. And frankly, this is part of what is causing serious problems for our society.
It breeds a class of people who genuinely believe that either a) they are truly always right, always the smartest ones around, and any mistakes or failures are because of all the people around them, or b) it doesn't matter how often they're wrong; they're entitled to always be taken seriously and get their full bonuses, no matter how badly things are falling apart because of their decisions.
Now, part of this is because our corporate world is really, really bad at assessing the outcomes of decisions like these (and this is not wholly unrelated to the fact that proper assessment would reveal the levels of incompetence in many C-suites). But part of it is simply because we have built a culture that says these people are never to be questioned.
And that's toxic to any attempts to actually build something better.
In my daily use cases I only see regressions.
That does not seem like a valid conclusion to draw from the observations in the article.
So Sam Altman admitted that he isn't so smart after all? :-)
With this, hopefully we can stop having people use "AI" as a term exclusively for LLMs.
We're about to hit, or have already hit, the "data wall": it gets increasingly hard to find more data to train on, a problem exacerbated by more and more content being AI-generated, which risks autophagy.
It's less and less about gathering more data now. Either we try to engineer more data into existence, or it's back to the deep learning boom, when it was more common to optimize architectures and training methodologies to squeeze more performance out of the data we have.
Implying development has levelled off from a functional point of view. That was kind of true with iPhones: my 13 does basically the same as the 4 did, just with better images and speed. And in the future the iPhone 29 will probably still do apps, photos, and calls, just a bit better.
But I don't think that'll be true with AI. There are huge categories of stuff that don't work at the moment but may well in a decade or two: being able to do human jobs, being self-improving, having robots that can build houses and factories, and so on. I think it may be more of a Sopwith Camel moment.