95% of Companies See 'Zero Return' on $30B Generative AI Spend
Key topics
Regulars are buzzing about a report claiming 95% of companies see "zero return" on their $30 billion generative AI investments, sparking a lively debate about the technology's value. Commenters riff on the hype cycle, with some pinpointing the current state as the "Trough of Disillusionment," while others predict that the rapid evolution of AI will soon make it impossible to keep up with the hype. As some consultants see dollar signs in the reported waste, others poke fun at the idea of profiting from companies' AI missteps, with one commenter defending their "real software" consulting services against generative AI "spambots." The discussion feels relevant now as it highlights the growing pains and growing skepticism surrounding AI adoption.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 3m after posting
- Peak period: 153 comments in 0-6h
- Avg / period: 26.7 comments
Based on 160 loaded comments
Key moments
1. Story posted: Aug 21, 2025 at 11:36 AM EDT (5 months ago)
2. First comment: Aug 21, 2025 at 11:39 AM EDT (3m after posting)
3. Peak activity: 153 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Aug 25, 2025 at 5:59 AM EDT (5 months ago)
(I'm not really offended honestly. Startups will come crying to un-vibe the codebases soon enough.)
So far business is booming and clients are happy with both human interactions with senior engineers as well as a final deliverable on best practices for using AI to write code.
Curious to compare notes
This is confusing... it's directly saying AI is improving employee productivity, but that this isn't leading to more business profit. How does that happen?
One trivial way is that the productivity increase is smaller than the added cost of the tools. Which suggests that (whether by deliberate pricing or just misjudgement) the AI companies are mispricing their tools. If the tool adds $5000 in productivity, it should eventually be priced at $4999 -- the AI companies have every motivation to capture nearly all of the value, but they need to leave something, even if just a penny, for the purchasing company to motivate adoption. If they price at $5001, there's no motivation to use the tool at all; but of course at $4998 they're leaving money on the table. There's no stable equilibrium here where the purchasing companies end up with a /significant/ increase in (productivity - cost of that productivity), of course.
Sounds like the AI companies are not so much mispricing, as the companies using the tools are simply paying wayyy too much for the privilege.
As long as the companies keep paying, the AI companies are gonna keep the usage charges as high as possible. (Or at least, at a level as profitable to themselves as possible.) It's unreasonable to expect AI companies to unilaterally lower their prices.
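A minimal sketch of the value-capture arithmetic in this sub-thread (the $5000 productivity gain and the nearby price points are the commenter's illustrative numbers; the code and the helper name are purely hypothetical):

```python
# Illustrative value-capture pricing sketch; all numbers are made up,
# not taken from the study.

def buyer_surplus(productivity_gain: float, price: float) -> float:
    """Net value the purchasing company keeps after paying for the tool."""
    return productivity_gain - price

gain = 5000.0  # hypothetical annual productivity the tool adds, in dollars

for price in (4998.0, 4999.0, 5001.0):
    surplus = buyer_surplus(gain, price)
    adopts = surplus > 0  # a rational buyer adopts only if it keeps some value
    print(f"price=${price:,.0f}  surplus=${surplus:+,.0f}  adopts={adopts}")

# price=$4,998  surplus=$+2  adopts=True   (vendor leaves money on the table)
# price=$4,999  surplus=$+1  adopts=True   (vendor captures nearly all the value)
# price=$5,001  surplus=$-1  adopts=False  (no reason to buy at all)
```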
There's a reason it has a catchy headline, there's a reason you needed to fill in an email form to get access to the study, and there's a reason why the 2nd author has an agentic AI startup.
I thought it was a low-quality article with no data or detailed methods. MIT needs to do better. This is the second article of this type in the last few months that caught headlines.
For some reason, I'm thinking most of the money went to either inference costs or Nvidia.
Executives mistook that novelty for a business revolution. After years of degraded search, SEO spam, and “zero-click” answers, suddenly ChatGPT spat out a coherent paragraph and everyone thought: my god, the future is here. No - you just got a glimpse of 2009 Google with autocomplete.
So billions were lit on fire chasing “the sliced bread moment” of finally finding information again - except this time it’s wrapped in stochastic parroting, hallucinations, and a SaaS subscription. The real irony is that most of these AI pilots aren’t “failing to deliver ROI” - they’re faithfully mirroring the mediocrity of the organisations deploying them. Brittle workflows meet brittle models, and everyone acts surprised.
The pitch was always upside-down. These things don’t think, don’t learn, don’t adapt. They remix. At best they’re productivity duct tape for bored middle managers. At worst they’re a trillion-dollar hallucination engine being sold as “strategy.”
The MIT study basically confirms what was obvious: if you expect parrots to run your company, you get birdshite for returns.
> No - you just got a glimpse of 2009 Google with autocomplete.
Go look at the kinds of "answers" you got in 2009 with Google's autocomplete.
Time to drop this convenient nicely-packaged narrative.
> These things don’t think, don’t learn, don’t adapt.
Perhaps some don't -- for some intentionally constructed definitions of these terms. Adding a feedback loop (chain of thought, etc) challenges even such definitions though.
> if you expect parrots to run your company, you get birdshite for returns.
If this is a reference to "stochastic parrots" metaphor, then I would encourage you to read these articles and HN comments below, which push-back against misunderstandings or overuse of that metaphor:
[1]: https://www.lesswrong.com/posts/HxRjHq3QG8vcYy4yy/the-stocha...
[2]: https://www.lesswrong.com/posts/7aHCZbofofA5JeKgb/memetic-ju...
[3]: https://news.ycombinator.com/item?id=44299996
[4]: https://news.ycombinator.com/item?id=44926540
[5]: https://news.ycombinator.com/item?id=44967655
[6]: https://news.ycombinator.com/item?id=43676755
[7]: https://news.ycombinator.com/item?id=43585572
[8]: https://news.ycombinator.com/item?id=42351348
> they’re faithfully mirroring the mediocrity of the organisation
This happens with all technologies / tools / etc...
In this case, that's NVDA
Crypto's over, gaming isn't a large enough market to fill the hole, the only customers that could fill the demand would be military projects. Considering the arms race with China, and the many military applications of AI, that seems the most likely to me. That's not a pleasant thought, of course.
The alternative is a massive crash of the stock price, and considering the fact that NVIDIA makes up 8% of everyone's favorite index, that's not a very pleasant alternative either.
It seems to me that an ultra-financialized economy has trouble with controlled deceleration; once the hype train is moving, it's full throttle until you hit a wall.
Data centers might, but then they'll need something else to compute, and if AI fails to deliver on the big disruptive promises it seems unlikely that other technologies will fill those shoes.
I'm just saying that something big will have to change, either Nvidias story or share price. And the story is most likely to pivot to military applications.
It’s all fun and games until the bean counters start asking for evidence of return on investment. GenAI folks better buckle up. Bumps ahead. The smart folks are already quietly preparing for a shift to ride the next hype wave up while others ride this train to the trough’s bottom.
Cue a bunch of increasingly desperate puff PR trying to show this stuff returns value.
"Hey, guys, listen, I know that this just completely torched decades of best practices in your field, but if you can't show me progress in a fiscal year, I have to turn it down." - some MBA somewhere, probably, trying and failing yet again to rub his two brain cells together for the first time since high school.
Just agentic coding is a huge change. Like a years-to-grasp change, and the very nature of the changes that need to be made keeps changing.
Agents may be good (I haven't seen it yet, maybe it's a skill issue, but I'm not spending hundreds of dollars to find out and my company seems reluctant to spend thousands to find out), but they are definitely, definitely not general superintelligence like SamA has been promising. At all.
It really is sinking in: these might be useful tools, yes, but the market was sold science fiction. We have a useful supercharged autocomplete sold as goddamn positronic brains. The commentariat here perhaps understood that (definitely not everyone), but it's no surprise that there's a correction now that GPT-5 isn't literally smarter than 95% of the population when that's how it was being marketed.
You really set yourself up in a nice glass house trying to make fun of the money guys when you are essentially just moving your own goalposts. It was annoying two (or three?) years ago when we were all talking about replacing doctors and lawyers; now it can't help but feel like a parody of itself in some small way.
I've been programming professionally for > 20 years and I intend to do it for another > 20 years. The tools available have evolved continually, and will continue to do so. Keeping abreast of that evolution is an important part of the job. But the essential nature of the role has not changed and I don't expect it to do so. Gen AI is a tool, one that so far to me feels very much like IDE tooling (autocomplete, live diagnostics, source navigation): something that's nice to have, that's probably worth the time, and maybe worth the money, to set up, but which I can easily get by without and experience very little disadvantage.
I can't see the future any more than anyone else, but I don't expect the capabilities and limitations of LLMs to change materially and I don't expect to be left in the dust by people who've learned to wrangle wonders from them by dark magics. I certainly don't think they've "torched decades of best practice in my field". I expect them to improve as tools and, as they do, I may find myself using them more as I go about my job, continuing to apply all of the other skills I've learned over the years.
And yes, I do have an eye-wateringly expensive Claude subscription and have beheld the wonders of Opus 4. I've used Claude Code and worked around its shitty error handling [1]. I've seen it one-shot useful programs from brief prompts, programs I've subsequently used for real. It has saved me non-zero amounts of time - actual, measurable time, which I've spent doodling, making tea and thinking. It's extremely impressive, it's genuinely useful, it's something I would have thought impossible a few years ago and it changes none of the above.
[1] https://github.com/anthropics/claude-code/issues/473
I mean, this is basically how all R&D works, everywhere, minus the strawman bit about "single fiscal year", which isn't functionally true.
And this is a serious career tip: you need to get good at this. Being able to break down extremely ambitious, many-year projects into discrete chunks that prove progress and value is a fundamental skill to being able to do big things.
If a group of very smart people said "give us ${BILLIONS} and don't bother us for 15 years while we cook up the next world-shaking thing", the correct response to that is "no thanks". Not because we hate innovation, but because there's no way to tell the geniuses apart from the cranks, and there's not even a way to tell the geniuses-pursuing-dead-ends from the geniuses-pursuing-real-progress.
If you do want to have billions and 15 years to invent the next big thing, you need to be able to break the project up to milestones where each one represents convincing evidence that you're on the right track. It doesn't have to be on an annual basis, but it needs to be on some cadence.
Now, I don't believe this is an actual conspiracy, but rather a culture of hating the poor. The rich will jump on any endeavor -- no matter how ridiculous -- as long as the poor stay poor, even if they lose money in the process.
"Donald Trump and Silicon Valley's Billionaire Elegy" - https://www.wired.com/story/donald-trump-and-silicon-valleys...
"Secret White House spreadsheet ranks US companies based on loyalty to Trump" - https://www.telegraph.co.uk/business/2025/08/15/secret-white...
"Spending on AI data centers is so massive that it’s taken a bigger chunk of GDP growth than shopping" - https://fortune.com/2025/08/06/data-center-artificial-intell...
We'll either see a new class of "AWS of AI" companies that'll survive and be used by everyone (that's part of the play Anthropic & OpenAI are making, despite API generating a fraction of their current revenue), or Amazon + Google + Microsoft will remain as the undisputed leaders.
idk what a person would do with a 6509 or a Sun Fire hah but they were all over craigslist iirc.
...I'll try not to sound desperate tho.
That said, technologies like this can also go through a rollercoaster pattern themselves: lots of innovation and improvement, followed by very little improvement but lots of research, which then explodes into more improvements.
I think LLMs have a better chance at following that pattern than computer vision did when that hype cycle was all the rage
When GPT-5 came out, it wasn't going from GPT-4 to GPT-5. Since GPT-4 there has been: 4o, o1, o3, o3-mini, o4-mini, o4-mini-high, GPT-4.1, and GPT-4.5. And many other models (Llama, DeepSeek, Gemini, etc) from competitors have been released too.
We'll probably never experience a GPT-3.5 to GPT-4 jump again. If GPT-5 was the first reasoning model, it would have seemed like that kind of jump, but it wasn't the first of anything. It is trying to unify all of the kinds of models OpenAI has offered, into one model family.
Trying to claim victory against AI/US Companies this early is a dangerous move.
Too young to remember GSM?
I just showed this is not true. There are plenty of other examples.
From the down-votes: either we don't all agree on a common definition of the King's English, or it challenges some members' self-identity.
[0] https://www.researchgate.net/figure/Napoleon-march-graphic-C...
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."
https://news.ycombinator.com/newsguidelines.html
If my comment can be characterized as flamebait, it has to be to a lesser degree than the parent, right?
And I'm not even claiming that the situation applies. If you take the strongest plausible interpretation of my comment, it says that if indeed this whole AI bubble is hubris, if indeed there's a huge fallout, then the leaders of this merry adventure, right now, must feel like Napoleon entering Moscow.
But well, anyways, cheers dang, it's a tough job.
[1]: the strongest possible interpretation of "This is how America ends up being ahead of the rest of world with every new technology breakthrough" is arrogance, right?
But yeah, color me surprised. I was certain I was replying to arrogance, rather than (possibly misplaced) admiration. I guess that puts me on the wrong side of the moderation fence. Sorry.
I totally get how the GP comment landed the way you describe, but that's why we have guidelines like these:
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
and (repeating this one) "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
Applying those to the GP comment (https://news.ycombinator.com/item?id=44974675), while it's true that the first sentence could sound like chest-beating, the rest of the comment was making an interesting point about risk tolerance.
The 'strongest plausible interpretation' might go something like this: "Even if the article is correct that 95% of companies are seeing zero return on AI spend so far, that by no means proves that they're on the wrong track. With a major technical wave like AI, it's to be expected that early efforts will involve a lot of losses. Long-term success may require taking early risk, and those with lesser risk tolerance, who aren't willing to sustain the losses associated with these pathfinding efforts, may find themselves losing out in the long run."
I have no idea whether that's right or not but it would make for a more interesting and less hostile conversation! which is basically what we're shooting for here.
You could make that claim for the software industry, but I’m pretty sure a big part of the US moat is due to oligopolies, lock-in effects, or corruption in favour of billionaires and their ventures.
Sorry, these are only the categories, but I have actual products in mind.
1. Generate content to create online influence. This is at this point probably way oversaturated and I think more sophisticated models will not make it better.
2. Replace junior developers with Claude Code or similar. Only sort of works. After all, you can only babysit one of these at a time no matter how senior you are so realistically it will make you, what, 50% more productive?
3. Replace your customer service staff. This may work in the long run but it saves money instead of making money so its impact has a hard ceiling (of spending just the cost of electricity).
4. Assistive tools. Someone to do basic analysis, double check your writing to make it better, generate secondary graphic assets. Can save a bit of money but can’t really make you a ton because you are still the limiting factor.
Aside: I have tried it for editing writing and it works pretty well but only if I have it do minimal actual writing. The more words it adds, the worse the essay. Having it point out awkward phrasing and finding missing parts of a theme is genuinely helpful.
5. AI for characters in video games, robot dogs, etc. Could be a brave new frontier for video games that don’t have such a rigid cause/effect quest based system.
6. AI girlfriends and boyfriends and other NSFW content. Probably a good money maker for a decade or so before authentic human connections swing back as a priority over anxiety over speaking to humans.
What use cases am I missing?
As for relying on the code base, that is good for code, although not for onboarding/deployment/operations/monitoring/troubleshooting that have manual steps.
Disclaimer: We are building this at https://dosu.dev
We connect with slack/notion/code/etc so that you can do the following:
1. Ask questions about how your code/product works
2. Generate release notes instantly
3. Auto-update your documentation when your code changes
We primarily rely on the codebase since it is never out of date
How much does that cost these days? Do you still have to fly to remote islands?
I toyed with it and found it to be less frustrating to set up the latest layout for a VueJS project, but having it actually write code was… well I had to manually rewrite large chunks of it after it was done. I am sure it will improve but how long until you can tell it the specs, have it work for a few minutes or hours or days, and come back to an actual finished project? My bet is decades to never.
If those prompts pop up constantly asking for elevated privileges, this is actually worse because it trains people to just reflexively allow elevation.
How many hundreds of hours is your team spending to get there? What is the ROI on this vs investing that money elsewhere?
Sorry, this is some bull. Either it works or it doesn't.
It is uniquely susceptible because the gaming market is well acclimated to mediocre writing and one dimensional character development that’s tacked on to a software product, so the improvements of making “thinking” improvisational characters can be immense.
Another revenue potential you've missed is visual effects, where AI tools allow what were previously labor-intensive and expensive projects to be completed in much less time and with less, but not no, human input per frame.
I mostly disagree. Every gaming AI character demo I've seen so far just adds more irrelevant filler dialogue between the player and the game they want to play. It's the same problem that some of the older RPG games had, thinking that 4 paragraphs of text is always better than 1.
The thing is, you aren't contacting customer services because everything is going well, you are contacting them because you have a problem.
The last thing you need is to be gaslit by an AI.
The worst ones are the ones where you don't realise right away you aren't talking to a person, you get that initial hope that you've actually gotten through to someone who can help you (and really quickly too) only to have it dawn on you that you are talking to a ChatGPT wrapper who can't help you at all.
But if you're actually trying to provide good customer service, because people are paying you for it and paying per case, then you wouldn't dare put a phone menu or AI chatbot between them and the human. The person handles all the interaction with the client and then uses AI where it's useful to speed up the actual work.
I don't know why everyone goes to "replacing". Were a bunch of computer programmers replaced when compilers came out that made writing machine code a lot easier? Of course not, they were more productive and accomplished a lot more, which made them more valuable, not less.
That means you expand from millions to billions of potential customers.
Billions get spent annually on administrative overhead focused on squeezing as much money out of these notes as possible. A tremendous expense can be justified to increase note quality (aka revenue, though 'accuracy/efficiency' is the trojan horse used to slip by regulators).
GenAI has a ton of potential there. Likewise on the insurance side, which has to wade through these notes and produce a very labor intensive paper trail of their own.
Eventually the AIs will just sling em-dashes at each other while we sit by the pool.
What's menial about knowledge work, anyway?
Here's the truth: NO ONE KNOWS.
What part of No One Actually Knows do people not understand? This applies to both the "AI WILL RULE THE WORLD MUAHAHA" and "AI is BIG BIG HOAX" crowd.
I think we should actually ban all digital art platforms, no Photoshop, no special effects, all hand drawn. And I'll use some weird weaponized empathy calling out for the human soul and human creativity.
What a toxic bunch.
You're not standing up for art and culture. You're not asking for a "little reflection". You are however just being a cynic. And cynicism is toxic. It's bad for health. It's a weird affliction. Worse it's actively harmful to society.
Optimism is better. Tools that create abundance are better. Managing scarcity is dystopian, and ultimately harmful. It's a mindset that needs to be purged. Creating abundance is a far superior mindset.
While people are doing their work, they don't think, "Oh man, I am really excited to talk with AI today, and I can't wait to talk with a chatbot."
People want to do their jobs without being too bored and overwhelmed, and that's where AI comes in. But of course, we cannot hype features; we sell products after all, so that's the state we are in.
If you go to Notion, Slack, or Airtable, the headline emphasizes AI first instead of "Text Editor, Corporate Chat etc".
The problem is that AI is not "the thing", it is the "tool that gets you to the thing".
Too many companies are just trying to spoon AI into their product somehow, as if AI itself is a desired feature, and are forgetting to find an actual user problem for it to actually solve.
But that is exactly what we got. AI washing machine! AI espresso machine! And many more AI tools.
In reality, AI sparkles and logos and autocompletes are everywhere. It's distracting. It makes itself the star of the show instead of being a backup dancer to my work. It could very well have some useful applications, but that's for users to decide and adapt to their particular needs. The ham-fisted approach of shoving it into every UI front-and-center signals a gross sense of desperation, neediness, and entitlement. These companies need to learn how to STFU sometimes.
I could be wrong, but all in all: buy a .com for your "AI" product, so that you survive the dot-ai bubble [1].
I love LLMs though!! Amazing math and tech.
[1] - https://en.wikipedia.org/wiki/Dot-com_bubble
Here's the source report, not linked to by this content farm's AI-written article: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
- everyone and their mother is doing a "generative AI program" right now, a lot of times just using the label to try to get their project funded, with AI being an afterthought
- if 1 out of 20 projects is game-changing, then you could argue people should actually be willing to spend even more on the opportunity right now; maybe the number should be 1 in 100. (The VC model is about having a big success 1 in 10 times.)
- studies of ongoing business activities are inherently limited methodologically by the data available; I don't have a ton of confidence that these researchers' numbers are authoritative. It's inherently impossible to truly report on internal R&D spend, especially at private companies, without inside information, and if you have the inside information you likely don't have the full picture.
255 more comments available on Hacker News