AI Bubble 2027
Key topics
The AI bubble is expected to burst, but that doesn't mean the technology isn't already making waves - and money - in certain areas. Commenters are drawing parallels with past tech bubbles, like the dot-com era, noting that while some companies are raking in revenue, profitability is a different story. As one commenter put it, "bubbles are most likely to occur when something is plausibly valuable," and AI's potential is undeniable, even if its current hype is unsustainable. The discussion highlights the complex relationship between innovation, hype, and financial reality.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 27 comments in 0-2h
Avg / period: 8.1 comments
Based on 65 loaded comments
Key moments
- 01 Story posted: Aug 27, 2025 at 10:37 AM EDT (5 months ago)
- 02 First comment: Aug 27, 2025 at 11:43 AM EDT (1h after posting)
- 03 Peak activity: 27 comments in 0-2h (the hottest window of the conversation)
- 04 Latest activity: Aug 28, 2025 at 11:41 AM EDT (4 months ago)
As we learn more about the capabilities and limits of LLMs, I see no serious arguments that scaling up LLMs with increasingly massive data centers and training runs will actually reach anything like a breakthrough to AGI, or even anything beyond the magnitude of usefulness already available. Quite the opposite: most experts argue that fundamental breakthroughs will be needed in different areas to yield orders-of-magnitude greater utility, never mind AGI (not that further refinement won't yield useful results, only that it won't break out).
So one question is timing: when will the crash come?
The next is: how can we collect, in an open and preferably independent/distributed/locally-usable way, the best available models, to retain access to the tech when the VC-funded data centers shut down?
[0] https://en.wikipedia.org/wiki/Gartner_hype_cycle
On further thought, railroads and radio are also good examples!
If GenAI really was just a "glorified autocorrect", a "stochastic parrot", etc, it would be much easier to deflate AI Booster claims and contextualise what it is and isn't good at.
Instead, LLMs exist in a blurry space where they are sometimes genuinely decent, occasionally completely broken, and often subtly wrong in ways not obvious to their users. That uncertainty is what breeds FOMO and hype in the investor class.
And the conversational style makes it all look like good reasoning.
But as soon as the conversation wanders off the highways into little-used areas of knowledge (such as wiring for a CNC machine controller board, instead of a software package with millions of users' forum posts), even pre-stuffing the context with heaps of specifically relevant documents rapidly reveals that there is zero reasoning happening.
Similarly, the occasional excursions into completely the wrong field even with a detailed prompt show that the LLM really does not have a clue what it is 'reasoning' about. Even with thinking, multiple steps, etc., the 'stochastic parrot' moniker remains applicable — a very damn smart parrot, but still.
In the world of T&E law, there are a lot of mediocre (to be kind) attorneys who claim expertise but are very bad at it (causing a lot of work for the more serious firms and a lot of costs & losses for the intended heirs). They often write papers for marketing themselves as experts, so the internet is flooded with many papers giving advice that is exactly wrong and much more that is wrong in more subtle ways that will blow up decades later.
If an LLM could reason, it would be able to sort out the wrong nonsense from the real expertise by applying reason, e.g., comparing the advice to the actual legal code and precedent-setting rulings, and by comparing it to results, and be able to identify the real experts, and generate output based on the writings of the real experts only.
However, LLMs show zero sign of any similar reasoning. They simply output something resembling the average of all the dreck of the mediocre-minus attorneys posting blogs.
I'm not saying this could not be fixed by Altman et al. applying a large amount of compute to exactly the loops I described above (check legal advice against the actual code and judges' rulings, check against actual results, select only the credible sources, and retrain), but it is obviously nowhere near that yet.
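As a minimal sketch of that loop (everything here is a hypothetical stand-in, not any real pipeline): grade each advice document against primary sources and keep only what can be anchored to them before retraining.

```python
# Hypothetical filter-and-retrain loop: keep only advice documents whose
# claims can be anchored to primary sources (statutes, rulings).

def cites_valid_authority(doc: str, statutes: set[str], rulings: set[str]) -> bool:
    """Crude proxy check: does the document reference any real authority?"""
    return any(s in doc for s in statutes) or any(r in doc for r in rulings)

def filter_credible(corpus: list[str], statutes: set[str], rulings: set[str]) -> list[str]:
    """Drop documents that cite no primary source at all."""
    return [doc for doc in corpus if cites_valid_authority(doc, statutes, rulings)]

corpus = [
    "Under IRC 2056 the marital deduction applies when...",  # anchored to a statute
    "Just put everything in a trust and never pay tax!",     # unanchored dreck
]
credible = filter_credible(corpus, statutes={"IRC 2056"}, rulings=set())
print(credible)  # only the anchored document survives; retrain on this subset
```

A real version would need citation resolution and outcome checking, which is exactly the expensive part.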
The big problem is that this is only obvious to a top expert in the field, who knows from training and experience the difference between the top experts and the dreck.
To the rest of us who actually need the advice, the LLMs sound great.
Very smart parrot, but still dumbly averaging and stochastic.
(Although I think the utility of server farms will not be high after the bubble bursts: even if cheap, they will quickly become outdated. In that respect, things are different from railway tracks.)
That is... certainly something to think about (and clever).
cogito ergo sum™ / attention is all you need™
•The largest publicly-traded company in the world was ~$2T (Saudi Aramco, no longer even top ten).
•Nvidia (the current largest, at $4.3T) was "only" ~$0.6T.
•The top 7 public techs are where the predominant gains have grown / been held.
•On March 16, 2020, all publicly-traded companies were worth ~$78T; at present, ~$129T.
•Gold has doubled over the same span.
>what kind of effects are we going to see
•Starvation and theft like you've probably barely witnessed in your 1st- or 3rd-world lifetime. Not from former stockholders, but from former underling employees, out of simple desperation. Everywhere, indiscriminately, from the majority.
•UBI & conscription, if only to lessen the previous bullet point.
¢¢, hoping I'm wrong. But if I'm not, maybe we can focus on domestic matters instead of endless struggles abroad (reimplement the Civilian Conservation Corps?).
I'm going to quote my favorite client's eighty-eight-year-old wife, a miserly multi-millionaire:
>"Nobody wants to be the last one at the party, because then you have to help clean up all the mess!"
She is a die-hard Reaganomicist, unable to comprehend why none of her grandchildren (and only one of her daughters) is reproducing. My response to her husband, my friend, was that not every fish needs to see the shark for them all to respond appropriately.
Only one of my own brothers has a child, only one. These wealthiest brothers (and the above friend) are just now beginning to realize that something is massively wrong with how we're allowing society to continue operating. It's heartbreaking to witness their own awakenings, years behind my own apathetic view(s).
>"It's incredible that I have all this inside of me — and to you it's just words..." —DFWallace (Pale King)
¢¢
FCFF = EBIT(1-t) - Reinvestment.
If OAI stops the Reinvestment, they lose to competition. Got it? Simple.
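A quick worked instance of that identity, with made-up numbers (not OpenAI's actual financials), shows how profitable operations can still mean deeply negative free cash flow:

```python
# FCFF = EBIT(1 - t) - Reinvestment, with hypothetical inputs.
ebit = 10.0          # operating income, $B (invented)
tax_rate = 0.21
reinvestment = 40.0  # capex / buildout spend, $B (invented)

fcff = ebit * (1 - tax_rate) - reinvestment
print(f"FCFF = {fcff:+.1f} $B")  # FCFF = -32.1 $B
```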
The article mentions:
> This is a bubble driven by vibes not returns ...
I think this indicates some investors are seeing a return. I know AI is expensive to train and somewhat expensive to run, though, so I am not really sure what the reality is.
Each cycle filters out the people who are not actually interested in AI: the grifters and shysters trying to make money.
I have a private list of these starting from 2006 to today.
LLMs =/= AI, and if you don't know this then you should be worried, because you are going to get left behind: you don't actually understand the world of AI.
Those of us that are “forever AI” people are the cockroaches of the tech world and eventually we’ll be all that is left.
Every former “expert systems scientist”, “Bayesian probability engineer”, “computer vision expert”, “big data analyst”, and “LSTM guru” is having no trouble implementing LLMs.
We’ll be fine
As a casual observer for decades, I think this one is different in that we are at approximate hardware equivalence with the human brain, and still advancing, which will have interesting economic implications.
Here are two good papers to start with:
https://arxiv.org/abs/2410.14606
https://storage.googleapis.com/deepmind-media/Era-of-Experie...
LLMs are inappropriately hyped, and surrounded by shady practices that made them a reality. I understand why so many people are anti-LLM.
But empty hype? I just can't disagree more.
They are generalized approximation functions that can approximate all manner of modalities, surprisingly quickly.
That's incredibly powerful.
They can be horribly abused, their failure modes are unintuitive, using them can open entirely new classes of security vulnerabilities, and we don't have proper observability tooling to deeply understand what's going on under the hood.
But empty hype?
Maybe we'll move away from them and adopt something closer to world models, or use RL / something more like Sutton's OaK architecture, or replace backprop with something like forward-forward, but it's hard to believe HAL-style AI is going anywhere.
They are just too useful.
We have a rough draft of AI we've only seen in sci-fi. Pandora's box is open and I don't see us closing it.
I work for a major research lab. So much headroom, so much left on the table with every project, so many obvious directions for tackling major problems. These last 3 years have been chaotic sprints: Transfusion, better compressed latent representations, better curation signals, better synthetic data, more flywheel data. Insane progress in these last 3 years that somehow just gets continually denigrated by this community.
There is hype and bullshit and stupid money and annoying influencers and hyperbolic executives, but “it’s a bubble” is absurd to me.
It would be colossally stupid for these companies not to pour the money they are pouring into infrastructure buildouts and R&D. They know a ton of it will be waste; nobody writing these articles is surprising anyone. These articles are just not very insightful. The only silver lining to reading the comments and these articles is the hope that all of you are investing optimally for your beliefs.
The thing to remember about the HN crowd is it can be a bit cynical. At the same time, realize that everyone's judging AI progress not on headroom and synthetic data usage, but on how well it feels like it's doing, external benchmarks, hallucinations, and how much value it's really delivering. The concern is that for all the enthusiasm, generative AI's hard problems still seem unsolved, the output quality is seeing diminishing returns, and actually applying it outside language settings has been challenging.
- offline and even online benchmarks are terrible unless actually run as a standard product experiment (A/B test, etc.). Evaluation science is extremely flawed.
- skepticism is healthy!
- measure on delivered value vs promised value!
- there are hard problems! Possibly ones that require paradigm shifts that need time to develop!
But
- delivered value and developments alone are extraordinary. Problems originally thought unsolvable are now completely tractable or solved even if you rightfully don’t trust eval numbers like LLMArena, market copy, and offline evals.
- output quality is seeing diminishing returns? I cannot understand this argument at all. We have scaled the first good idea with great success. People really believe this is the end of the line? We’re out of great ideas? We’ve just scratched the surface.
- even with a “feels” approach, people are unimpressed?? It’s subjective, you are welcome to be unimpressed. But I just cannot understand or fathom how
Claude Code and ChatGPT brought me back to the early-2010s golden age when indies could be a one-man army. Not only for code, but also for localization and marketing. I'm even finally building some infrastructure for QA automation! And tests, lots of tests. Unimaginable for me before, because I never had that bandwidth.
Not to mention that they unblock me and have basically fixed a large part of my ADHD issues because I can easily kickstart whatever task or delegate the most numbing routine work to an agent.
I just released a huge update of my language-learning app that I would never have dreamed of without LLM assistance (lots of meticulous grammar-related work over many months) and have been getting a stream of great reviews. And all of that for only $100+20 a month; I was paying almost twice as much for a Unity3d subscription a decade ago.
In short, you and others like you will enjoy your time, but will care very little of the systemic risk you are introducing.
But hey, whatever, gotta nut, right?
---
I don't mean you specifically. Companies like Windsurf and Cursor, among many others, are all currently building the package for Wall Street, with literally no care that it will pull in retail investment en masse. This is going to be a fucked-up rug pull for regular investors in a few years.
We've been in a much wilder financial environment since 2008. It's very normal for crypto to be seen as a viable investment. AI is going to appear even more viable. Things are primed.
There's a general negativity bias on the internet (and probably in humans at large) which skews the discourse on this topic as any other - but there are plenty of active, creative LLM enthusiasts here.
It would be interesting to see some analysis of HN data to understand just how accurate my perception is; of course, that doesn't clear up the bias issue.
The tech is undoubtedly impressive, and I'm sure it has a ton of headroom to grow (I have no direct knowledge of this, but I'll take you at your word).
But at least my perception of the idea that this is presently a "bubble" is rooted in the businesses being created with the technology. Tons of money is spent powering AI agents to conduct tasks that would be 99% less expensive via a simple API call, or the actual unstructured work sits 2 or 3 levels higher in the value chain, and given enough time there will be new vertically integrated companies that use AI to solve the problem at the root and eliminate the need for entire categories of companies at the level below.
In other words: the root of the bubble (to me) is not that the value will never be realized, but that many (if not most) of this crop of companies, given how little time the workflows and technology have had to take hold in organizations, will almost certainly not survive long enough to be the ones to realize it.
This also seems to be why folks draw comparisons to the dot-com bubble, because it was quite similar. The tech was undoubtedly world-changing. But the world needed time to adapt, and most of those companies no longer exist, even though many of the same problems were solved a decade later by new startups that achieved incredible scale.
I work as an ML researcher at a small startup, researching, developing, and training large models on a daily basis. I see the improvements made in my field every day, in academia and in industry, and newer models come out constantly that continue to improve the product's performance. It feels as if the people who talk about AI being a bubble are not familiar with the AI that is not LLMs, and the amazing advances it has already made in drug discovery, ASR, media generation, etc.
If foundation-model development stopped right now and ChatGPT never got any better, there would be at least five if not ten years of new technological development just building off the models we have trained so far.
But The Business is the bubble part. Like all the companies during the first internet boom/bubble who did stuff like lay tons of fiber and raise tons of money for rickety business plans. Those companies went out of business but the fiber was still there and still useful. So I think you're right in that the Tech part is being shafted a little in the conversation because the Business part is so bubbly.
Fundamentally, serving a model via API is profitable (re: Dario, OpenAI), and inference costs come down drastically over time.
The main expense is twofold: 1. Training a new model is extremely expensive: GPUs / yolo runs / data.
2. Newer models tend to churn through more tokens and be more expensive to serve in the beginning, before optimizations are made.
(Not including payroll.)
OpenAI and Anthropic can become money printers once they downgrade the free tiers, add ads or other attention-monetizing methods, and rely on usage-based pricing as people and businesses become more and more integrated with LLMs, which are undoubtedly useful.
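To make the serving-side claim concrete, here is a minimal sketch with entirely hypothetical prices and volumes (not Anthropic's or OpenAI's real numbers): per-token margin times volume can cover even a very large one-time training bill.

```python
# Hypothetical API serving economics; every number below is invented.
price_per_mtok = 15.00   # charged per million output tokens, $
cost_per_mtok = 3.00     # serving cost per million tokens (GPUs, power), $
tokens_served = 500e12   # tokens served over the model's lifetime
training_cost = 1e9      # one-time training cost, $

gross = (price_per_mtok - cost_per_mtok) * tokens_served / 1e6
net = gross - training_cost
print(f"gross: ${gross/1e9:.1f}B, net of training: ${net/1e9:.1f}B")
# gross: $6.0B, net of training: $5.0B
```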
He defines it as "everything that happens from when you put a prompt in to generate an output", but he seems to conflate inference with a query. Putting in input to generate the next single token is inference; a query or response just means the LLM repeats this until the stop token is emitted. (Happy to be corrected here.)
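In code terms, the distinction looks roughly like this toy decode loop (the model here is a canned stub, not a real LLM): each loop iteration is one inference; the query is the whole loop, ending at the stop token.

```python
# Toy autoregressive decode loop: one inference per token, looping
# until the stop token ends the query. next_token is a stub model.
STOP = "<eos>"

def next_token(prompt: str, generated: list[str]) -> str:
    """Stand-in for one forward pass of a model."""
    canned = ["Hello", ",", " world", STOP]
    return canned[min(len(generated), len(canned) - 1)]

def answer_query(prompt: str, max_steps: int = 32) -> str:
    generated: list[str] = []
    for _ in range(max_steps):          # each iteration = one inference
        tok = next_token(prompt, generated)
        if tok == STOP:                 # the query/response ends here
            break
        generated.append(tok)
    return "".join(generated)

print(answer_query("<prompt>"))  # Hello, world
```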
The cost of inference per token is going down - the cost per query goes up because models consume more tokens, which was my point.
Either way, charging consumers per token pretty much guarantees that serving models is profitable (each of Anthropic's prior models turns a profit). The consumer-friendly flat $20 subscription is not sustainable in the long run.
https://epoch.ai/data-insights/llm-inference-price-trends
https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...
https://x.com/eladgil/status/1827521805755806107
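Back-of-envelope arithmetic for the per-token vs. per-query point above, with invented numbers: per-token cost can fall several-fold while per-query cost still rises, if newer models burn far more tokens per query (long reasoning traces, tool calls).

```python
# Invented numbers: cost per token drops 3x, tokens per query grow 20x.
old_cost_per_tok, old_toks_per_query = 10e-6, 1_000
new_cost_per_tok, new_toks_per_query = 3e-6, 20_000

print(f"old query cost: ${old_cost_per_tok * old_toks_per_query:.3f}")  # $0.010
print(f"new query cost: ${new_cost_per_tok * new_toks_per_query:.3f}")  # $0.060
```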
- AI-first app companies that actually go public on the stock exchange
- Massive influx of investment from retail as the basket of “AI” is just too much to pass up
- This basket is no longer a collection of top-tier hardware and software titans, but one led by resellers and wrappers: Palantir, something like Cursor, Windsurf, and finally rounded out with CRUD apps turned publicly traded companies. Figma going public is a very bad indicator of what's to come. Perplexity going public would be one of my biggest Red Flag moments.
- The basket I’m describing is the package that includes all these “toxic” assets.
- Some really dumb big players will lose here too, because they will acquire some of these resellers and wrappers at prices they'll never recoup (News Corp buying MySpace).
- And finally, those who know, know, and they will bail first unscathed. Say it ain’t so, the story of our lives.
That will be the vehicle retail piles into. We're a little ways away from that, as companies are still building out their AI offerings. We'll need a flurry of companies like that to go public soon after OpenAI does, sparking the beginning of one of the worst bubbles ever. You won't be able to make sense of it, because the bull market will make it impossible not to FOMO in.
That's the systemic risk to this entire industry, and to the broader economy, in a few years.
Remember, humans can't have nice things. If the secondary companies didn't rush to the stock market as their prime imperative, we wouldn't have to worry about it, because all sensible investment would be in the large caps. The pursuit of gaudy returns will fail humans again, as always.
Stay safe and right-sized, all. The actual tech is not over-hyped.
Retail may never really get to participate at all, beyond trading Nvidia and similar.
In my uninformed opinion, though, companies that spent excessively on bad AI initiatives will begin to introspect as the fiscal year comes to an end. By summer 2026, I think a lot of execs will be getting antsy if they can't defend their investments.
The collapse of FTX sent bitcoin from ~$20k to ~$17k. It's now $110k. I imagine the AI boom will 'collapse' in the same sort of way.
A lot of the economics depends on whether you think human-level intelligence is coming or not. Zitron kind of assumes not, in which case his economic doomerism makes sense. But if it does come, you could effectively double GDP, which is a lot of financial upside.