The AI Vibe Shift Is Upon Us
Posted 4 months ago · Active 4 months ago
cnn.com · Tech · story
heated · mixed · Debate: 85/100
Key topics: AI Adoption, Generative AI, Tech Hype
The article discusses the 'AI vibe shift' and a report claiming 95% of generative AI programs fail to achieve their intended purpose, sparking debate among HN commenters about the validity of the report and the future of AI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 16m after posting
Peak period: 78 comments in 0-6h
Avg / period: 20
Comment distribution: 100 data points (based on 100 loaded comments)
Key moments
1. Story posted: Aug 24, 2025 at 6:31 AM EDT (4 months ago)
2. First comment: Aug 24, 2025 at 6:47 AM EDT (16m after posting)
3. Peak activity: 78 comments in the 0-6h window, the hottest stretch of the conversation
4. Latest activity: Aug 26, 2025 at 5:53 PM EDT (4 months ago)
ID: 45003052 · Type: story · Last synced: 11/20/2025, 5:30:06 PM
I think everyone had a gut feel for something along those lines, but those numbers are even starker than I would've imagined. Granted, many (most?) people trying to vibe code full apps don't know much about building software, so they're bound to struggle to get it to do what they want. But this quote is about companies and code they've actually put into production. Don't get me wrong, I've vibe coded a bunch of utilities that I now use daily, but 95% is way higher than I would've expected.
I’m expecting a similar future for AI: it will not deliver the “deprecating devs” part, but it will still be a useful and ubiquitous tool.
I must write a "me too" here because I have seen this a lot recently on various sites. Whether it comes from managers or non-coders (I guess astroturfing managers), it's always about those awful developers gate-keeping software development with their complicated compiled languages. I know it's all fake, but it's exhausting, and it's nice to see it acknowledged here on HN.
It was all the hype at the time, like LLMs are now. Most of them died because it was a bad idea.
And the reason we still use some, like SQL, is not because of the syntax.
We're a few years in. It takes time to figure things out and see returns.
The web and dot com boom and bust still led to several trillion dollar companies, eventually.
AI will transform my industry, but not overnight. My employer is within that 95%... but won't be forever.
Not even all geeks had it.
So by the time smartphones hit, almost everyone had a computer at home. If you are talking about the 90s, that isn't relevant; the relevant part is how smartphones changed things, and by then the internet was already available to a large majority at home. Smartphones just made it portable.
50% of households in the US had a computer at home. 36% had internet access.
I love when people reply with shit they have no fucking idea about.
The internet-connected computer in the home was a productivity tool. Even just gaming required gamers to become pretty PC/OS/tech savvy. Cheap postpaid internet phones are bread and circuses. The two have different effects on society.
But that improvement didn't come, the technology plateaued so most of these efforts failed.
[1] https://en.m.wiktionary.org/wiki/Lizardman%27s_Constant
For the time being, and the foreseeable future, LLM’s sweet spot seems to be low-grade translation, and ultra-low-grade bottom barrel ‘content generation’. Which is… not nothing, but also not what you’d call world-changing. As a number of people said, there probably is an industry here; it’s just that it’s worth on the order of tens of billions, not trillions as the markets currently appear to believe.
(Some people will claim it’s a great programming tool. Personally sceptical, but even if it’s the greatest, most amazingest programming tool ever, well, “we might be even more important than Borland and Jetbrains were” is not going to thrill the markets too much. Current valuations are built on mass-market applicability, and if that doesn’t show up soon there will be trouble.)
Like, that’s still just silly.
On the developer tools thing in particular, I’d note that it is historically extremely difficult to make a sustainable business, nevermind a wildly profitable business, in that space. Borland and Jetbrains probably are the closest that anyone has come.
Except in this case, where AI can enable people with absolutely no experience in some area to produce something that at least superficially can seem plausibly viable, it's no surprise that the percentage of crap is even higher.
Junior developers require guidance but are still producing value. And with good guidance, they will do amazing work.
With AI we need fewer programmers, and the juniors will possibly be the first to go, but they might be retrained for other careers (which might eventually get cancelled too because of AI), or end up out of work.
The software they produced did something - it might have been a CRM or a game, but out-of-work people might have to cut back on their gaming spend. As for the CRM app business, the customers and potential customers are also cutting back on staff, and the CRM apps will be able to conduct direct B2B negotiations with client CRMs, so there are no job opportunities there, and so more people are out of work. Perhaps the businesses that used the AI-based B2B and B2C CRM and ERP systems won't be needed any more, or won't have a viable customer base, either.
Other industries are replacing folks with 'AI', so the unemployment pool is getting larger. This means the luxury and non-vital goods manufacturers will have less revenue, so they lay off staff too; there's some compensation there, but eventually not enough for survival - which is 'fine' because AI is replacing all this stuff.
This snowballs into other industries, leaving just those jobs that can be done more easily by a human, but those jobs will also shrink as AI and the surrounding robotics improve. So what do all these unemployed people do all day? Some will embrace leisure activities that don't break the bank. Some may volunteer for community work or projects to improve the world, but they still need to eat and pay bills - who's going to help with that?
One solution might be a 'Star Trek' economy not based on work for reward, but that's a big cultural shift that people and governments will struggle massively to get their heads around conceptually.
There will also be powerful resistance to such a radical rebasing of the planet-wide financial model, especially by those people and organisations that have amassed wealth and don't want to give it up. They'll even fight back with lobbying and arguments against change while they're getting replaced with AI.
Or...?
"The mythical man month" and all that.
E.g., I waste a lot of time converting business requirements into a proprietary rule language. These should be simple tasks, but the requirements are freaky, the language is limited, and I often need to look up the internals of the systems that produce the data the rules act on.
My boss's boss currently wants me to replace my work with AI. It cannot work. It's set up for failure.
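For readers who haven't seen this kind of setup, here is a minimal sketch of the translation step being described. The rule shape, field names, and `evaluate` helper are all invented for illustration, since the actual language is proprietary:

```python
# Hypothetical example: a business requirement ("flag orders over
# 10,000 EUR from customers less than 30 days old") hand-encoded
# into a deliberately limited, flat rule form.
RULE = {
    "when": [
        ("order.total_eur", ">", 10_000),
        ("customer.age_days", "<", 30),
    ],
    "then": "flag_for_review",
}

def evaluate(rule, facts):
    """Fire the rule's action if every condition holds against the facts."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    if all(ops[op](facts[path], val) for path, op, val in rule["when"]):
        return rule["then"]
    return None

# The hard part the comment points at: knowing that `order.total_eur`
# even exists, and which upstream system produces it.
print(evaluate(RULE, {"order.total_eur": 12_500, "customer.age_days": 7}))
```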
A huge fraction of people at my work use LLMs, but only a small fraction use the LLM the company provided. Almost everyone is using a personal license.
"We wanted to make money with it, but we didn't immediately make a lot of money" feels very different from "the project failed to deliver what it set out to".
[1] https://fortune.com/2025/08/18/mit-report-95-percent-generat...
Is it mostly rarer and more expensive materials like gold/lithium, or is it mainly bulk plastic and aluminium?
[0] https://www.ebay.com/str/evolutionecycling
Everyone victory lapping this as a grand failure should pay attention to the above snippet.
So yeah, targeted, well-thought-out use cases that are handled well by LLMs will deliver value, but it won't replace developers or anything like that, which is what these people with barely an understanding of the tech's limitations have been claiming.
OpenAI hasn't "internally achieved" AGI. That's what people are calling bullshit on.
Fixes one pain point well. Can't really be applied to everything.
So just another tool, not a magic bullet like it is being marketed.
On the other hand, its ability to eliminate toilsome work in a variety of areas (it can generate a basic legal contract as well as a basic rails app) is pretty astounding. There are many other industries besides software dev where having tools that can understand and communicate in human language and context could be totally transformative, and they have barely begun to look into it. I think this is where startups should be focused.
LLMs are receiving a level of investment that appears to be based on them being world-changing, and that just doesn’t seem to be working out.
We just received a call at work using the voice of the head of accounting.
I really hope the good of all the other uses offset the harm done.
Like, they used to ask botters in games questions to see if they could answer, and bots used to be unable to respond to most questions in a reasonable manner. But today you can't do that: an LLM can easily respond to most kinds of trick questions. Well, aside from stuff like "how many r's are there in strawberry"; you need such things to be able to recognize that you are talking to a bot.
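The "strawberry" test works as a bot detector precisely because counting characters is trivial for ordinary code but awkward for token-based models; a minimal sketch:

```python
# Counting letters is a one-liner for a program, but LLMs see tokens
# rather than characters, which is why this classic question trips them up.
word = "strawberry"
print(word.count("r"))  # 3
```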
Personally, I'm a bit fearful about the stability of modern democratic systems under these conditions. A healthy news media industry has been a cornerstone of democracy since its inception... and I would not call the current news industry very functional, even before AI entered the scene.
https://fortune.com/2025/08/18/mit-report-95-percent-generat...
Software developers commenting on HN and elsewhere routinely focus on majorities, e.g., "80/20" memes, references to Zipf's Law, etc., and conclude without hesitation that if a small minority, say 5%, of software users do not follow a pattern that a large majority, say 95%, follow, the minority can be safely disregarded.
Is it really surprising that people reading the MIT report might focus on the 95% instead of the 5%?
IMO, the report is mostly about the 5%, but as it happens people care about majorities like the 95%.
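To make that head-vs-tail intuition concrete, here is a quick sketch assuming a classic Zipf distribution with exponent 1 over 1,000 items (the distribution and sizes are illustrative, not from the report):

```python
# Under Zipf's Law, the head of the distribution carries most of the mass,
# which is the habit of mind the comment says developers bring to "95%" stats.
N = 1000
weights = [1 / rank for rank in range(1, N + 1)]
head = sum(weights[: N // 20])  # top 5% of ranks
print(f"top 5% share: {head / sum(weights):.0%}")  # ~60% for N=1000
```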
With LLMs the changes are transformative. I'm trying to learn 3D modeling, and ChatGPT just gave me a passable sketch of what I had in my mind. So much better than Googling for two hours. Is the cooling-off because industry leadership promised AGI last year and it's not here yet?
(I am, FWIW, _super_ unconvinced that our magic robot friends will be even as helpful there as any decent tutorial on the subject, but even if they are, that’s not really touching on education writ large.)
Education has a shelf life. AI needs the pre-AI world in order for AI to train and be useful, but AI also wants to replace the pre-AI world with a new AI world. So the world will need to freeze in place between the two.
AI is an entropy machine.
You can say it does a bit more than education-material aggregators, but it doesn't do that much more; it doesn't replace paid education in any way so far.
The company at some point crossed the billion-dollar valuation, yet only handed out single-digit millions as pay for the maintainers.
I'm not discounting the value of having ChatGPT just hand you the answer straight up. If you just want to get the task done as fast as possible that's a pretty cool option that didn't used to exist. But the old way wasn't really worse.
What the LLM gives you is essentially an example project, and you can ask for the specific examples you need. You can compare and contrast alternative ways of doing it. You can give it an example in terms of what you already know, and ask it to show you how that translates into whatever you're trying to learn. You don't have to just blindly take what it produces and use it unread.
LLMs are making up for the lack of this.
It’s the Backus-Naur approach vs the human approach.
Humans learn by example. IMHO this is why math education (and most software documentation) fails so hard: starting with axioms instead of examples.
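A tiny illustration of the contrast, using a made-up `clamp` function documented both ways:

```python
def clamp(x, lo, hi):
    """Clamp x to the closed interval [lo, hi].

    Axiom-style spec: returns lo if x < lo, hi if x > hi, else x.
    Example-style docs, which most readers actually learn from:
    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0
    >>> clamp(42, 0, 10)
    10
    """
    return max(lo, min(x, hi))
```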
It's endlessly mind-boggling to me how there are so many people who can't grasp the idea of just using LLMs as a tool in your engineering toolkit, and that you should still be responsible, thoughtful, and do code review, as you would if you delegated to a junior dev (or anyone!).
They see complete fools just accepting the output wholesale, and somehow extrapolate that to thinking everyone works that way.
I remember in college during the late 90s the hype was that CASE tools (Computer-Aided Software Engineering) were going to make software engineers irrelevant: you just tell the system your requirements and it spits out working code. It never turned out that way.
Today, the only way the amount of investment returns a profit is if it replaces a whole bunch of workers. If it replaces a whole bunch of workers, there will be a whole lot fewer people to buy stuff. So either the bubble bursts or a lot of people lose their jobs. Either way, we are in for a rough ride.
Exactly. The business world isn't remotely close to being rational. The technology is incredible, but that doesn't mean it's going to translate to massive business value in the next quarter.
The market reaction to this is driven by hype and by people who don't understand the technology well enough to see through the hype.
I wonder if I should have listened to the hype generators (and you sound like one) and just created 'passable' models with the help of an LLM, instead of exercising my brain, learning something new, and getting out of my comfort zone.
At the risk of sounding controversial, I’ll add that I also have a diametrically opposed view of crypto’s utility vs LLM than you, especially in the long-term: one will allow us to free ourselves from the shackles of government policy as censorship expands, the other is a very fancy and very expensive nonsense regurgitator that will pretty much go on to destroy the Internet and any sort of credibility and sense of truth, making people dumber at large scale, while lining the pockets of a lucky few.
Here's the iPhone 13: it takes better pictures, lasts longer on battery, and plays games faster than the iPhone 12. Buy it for $699.
Now it has become:
Here's the iPhone 13, the greatest breakthrough in the history of civilization. But enough about that, let's talk about the iPhone 14. We've released a whitepaper showing the iPhone 14 will almost certainly take your job, and the iPhone 15 will kill us all, provided no further steps are taken. It's so powerful, that we decided to instill powerful moral safeguards into it, so it will steer you towards goodness, and prevent it being used for evil (such as looking at saucy pictures). We also find it necessary to keep a permanent and comprehensive log of every interaction you have with it.
You also can't have it, but can hold it in your hand, provided you pay us $20/month and we deem you morally worthy of accessing this powerful technology. (Do not doubt any of this, we are intellectually superior to you, and have humanity's best interests at heart, you don't want to look like a fool, do you?)
Effectively, yes: the promises are so huge, that even the impressive usefulness and value it brings today is dwarfed in comparison.
Building a small script is easy for ChatGPT, but actually leveraging it across the workforce consistently turns out to be a lot harder than the hype promised.
Reading stuff like this makes me question the entirety of the article.
AI startups were meant to solve problems in novel ways, not to amass revenue.
Let me show you what I mean: let's say someone runs a grocery, and they want to make it more profitable. After looking at the value chain, they conclude the person growing the lettuces makes 10% of the profit, logistics makes 40%, and retail 50%.
So they conclude that the best way to improve the business, is to optimize the retail side.
Then you walk into the store and see the tiny withered lettuce on the gleaming fancy shelves.
If they decided to focus on where the value is created and helped the farmer grow better produce, everybody would've been happy.
I don't agree with this; merging completely unrelated activities into one company isn't good or efficient. It only looks like a good idea because the perverse incentives disappear: incentives that exist because the big, powerful fish (the retailers who monopolize access to markets) can dictate terms to the small, divided fish (the producers who make the goods).
This is endemic in the system, and very hard to fight against.
It's also incredibly prevalent in the field of software engineering as well: if I create a best-in-class open-source tool, I won't see a dime of return on it (or at most very little), even if a huge cloud provider builds a product on top of it that makes billions (this has happened too many times to count).
If I do the same thing within the confines of a big company, the end result wouldn't look like capitalism at all. Say I do the same good work, but someone has an even better idea or executes better on it. In a free-market system, if somebody came up with a better tool, they would just announce it and people would be free to move to it; in a corporate setting, running two separate parallel teams would be seen as redundant, a waste of money, and an organizational red flag.
It'd be great if there existed a system that rewarded individuals and organizations according to the value they bring to the table. However, at the very mention of 'intrinsic value', capitalists break out in hives and call you a Marxist.
Revenue is probably the wrong measure, it should be profit. And a startup that doesn't somehow turn into profit for its _customers_ usually doesn't see much traction.
They can either increase revenue (there's a lot of AI sales tools that promise just that), or, more commonly, reduce costs, which also increases profits. If it saves time or money, it reduces costs. If it doesn't do either of these things, you'd have to really enjoy the product to still pay for it.
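A back-of-the-envelope version of that cost test, with all numbers invented for illustration:

```python
# A tool "reduces costs" if the labor it saves outweighs what it charges.
hours_saved_per_month = 4   # invented figure
loaded_hourly_rate = 90     # USD/hour, invented
subscription = 30           # USD/month, invented
net = hours_saved_per_month * loaded_hourly_rate - subscription
print(f"net monthly benefit: ${net}")  # $330 -> worth paying for
```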
That is, even if you can time it correctly.
Better to wait for a crash, watch people panic-sell thinking Nvidia has any skin in the game, and buy the dip.
That doesn't matter; the question is whether Nvidia investors saw it coming or whether they're still overpaying for the "sell shovels in a gold rush" meme. When people think you can't go wrong investing in a company, you know the company is almost surely overvalued, because many have invested in it without thinking about the price.
Meta just spent billions to get a B team of AI researchers. The cream of the crop couldn’t be persuaded with 8-10 figure comp packages.
This article is absolute garbage.
The thing about "vibe shifts" is that a big part of the shift occurs among people who have no idea what's going on. They've played with ChatGPT twice, talked about it at parties, and then invested $50,000 in NVIDIA stock. Or they're a corporate VP who doesn't understand this stuff but knows it's trendy and that it impresses the C-suite. When those people bail, the market retrenches hard, trading irrational enthusiasm for equally irrational panic and gloom.
My guess is that the highly-visible switch from the sycophantic GPT 4o to the underwhelming GPT 5 is what made this concrete in the minds of the least informed investors and customers.
presents no evidence
Meanwhile some of the most profitable companies to have ever existed post record profits and gangbusters projections based on AI capabilities.
Which companies? You mean NVIDIA? They post record profits due to AI hype, not due to AI capabilities.
The value of it wasn’t ColdFusion or Flash, it was the novel ways that people used the foundational tech.
So yeah, the AI bubble may burst and one model or another (or a company like OpenAI) may fail, but I don’t think we have even scratched the surface on the novel things this tech can do.
Edit: I mean the one discussed here, and in countless other recently submitted articles:
95% of Companies See 'Zero Return' on $30B Generative AI Spend - https://news.ycombinator.com/item?id=44974104 - Aug 2025 (413 comments)
95% of generative AI pilots at companies are failing – MIT report - https://news.ycombinator.com/item?id=44941118 - Aug 2025 (167 comments)
95 per cent of organisations are getting zero return from AI according to MIT - https://news.ycombinator.com/item?id=44956648 - Aug 2025 (14 comments)
Some earlier discussions:
Say farewell to the AI bubble, and get ready for the crash
https://news.ycombinator.com/item?id=44964548
Tech, chip stock sell-off continues as AI bubble fears mount
https://news.ycombinator.com/item?id=44965187
Is the A.I. Sell-Off the Start of Something Bigger?
https://news.ycombinator.com/item?id=44963715
HN is full of articles about coding agents in a way it wasn't a few months ago.
What is overhyped is OpenAI. They don't have any moat. Why use an OpenAI model when you could use Claude or Qwen?
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
A few HN members did submit the MIT report [PDF]^1, but HN discussion has instead centered around articles written about the report and the market's apparent reaction to it.
1. For example,
https://news.ycombinator.com/item?id=44941374
https://news.ycombinator.com/item?id=44972204
https://news.ycombinator.com/item?id=44978557