Google boss says AI investment boom has 'elements of irrationality'
Mood: heated
Sentiment: mixed
Category: tech
Key topics: AI investment, tech bubble
Google's boss warns of 'irrationality' in AI investment boom, sparking debate among HN users about the potential for a bubble burst and its consequences.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 14m after posting
- Peak period: 29 comments (Hour 14)
- Avg / period: 7.6
- Based on 160 loaded comments
Key moments
- 01 Story posted: 11/18/2025, 6:06:52 AM (1d ago)
- 02 First comment: 11/18/2025, 6:21:17 AM (14m after posting)
- 03 Peak activity: 29 comments in Hour 14 (hottest window of the conversation)
- 04 Latest activity: 11/19/2025, 3:02:08 PM (4h ago)
To me, we're clearly not peak AI exuberance. AI agents are just getting started and getting so freaking good. Just the other day, I used Vercel's v0 to build a small business website for a relative in 10 minutes. It looked fantastic and very mobile friendly. I fed the website to ChatGPT5.1 and asked it to improve the marketing text. I fed those improvements back to v0. Finished in 15 minutes. Would have taken me at least one week in the past to do a design, code it, test it for desktop/mobile, write the copy.
The way AI has disrupted software building in 3 short years is astonishing. Yes, code is uniquely great for LLM training due to open source code and documentation but as other industries catch up on LLM training, they will change profoundly too.
Yes, it is even one of the necessary components. Everybody is afraid of the pop, but immediate returns are too tempting, so they keep their money in. The bubble pops when something happens and they all start panicking at the same time. They all need to be sufficiently stressed for that mass run to happen.
In other words, do you think we're in 1995 of the dotcom or 2000?
So what if it's subsidized and companies are in market share grab? Is it going to cost $40 instead of $20 that I paid? Big deal. It still beats the hell out of $2k - $3k that it would have taken before and weeks in waiting time.
100x cheaper, 1000x faster delivery. Furthermore, v0 and ChatGPT together surely did much better than the average web designer and copywriter.
Lastly, OpenAI has already stated a few times that they are "very profitable" in inference. There was an analysis posted on HN showing that inference for open source models like Deepseek are also profitable on a per token basis.
Think about the pricing. OpenAI fixed everyone's prices to free and/or roughly the cost of a Netflix subscription, which in turn was pinned to the cost of a cable TV subscription (originally). These prices were made up to sound good to his friends, they weren't chosen based on sane business modelling.
Then everyone had to follow. So Anthropic launched Claude Code at the same price point, before realizing that was deadly and overnight the price went up by an order of magnitude. From $20 to $200/month, and even that doesn't seem to be enough.
If the numbers leaked to Ed Zitron are true then they aren't profitable on inference. But even if that were true, so what? It's a meaningless statement, just another way of saying they're still under-pricing their models. Inferencing and model licensing are their only revenue streams! That has to cover everything including training, staff costs, data licensing, lawsuits, support, office costs etc.
Maybe OpenAI can launch an ad network soon. That's their only hope of salvation but it's risky because if they botch it users might just migrate to Grok or Gemini or Claude.
> Then everyone had to follow. So Anthropic launched Claude Code at the same price point, before realizing that was deadly and overnight the price went up by an order of magnitude. From $20 to $200/month, and even that doesn't seem to be enough.
Maybe it was because demand was so high that they didn't have enough GPUs to serve? Hence, the insane GPU demand?
The gap between fundamental financial data and valuations is very large. The risk is a brutal reassessment of these prices. That's what people call a bubble bursting and it doesn't mean the underlying technology has no value. The internet bubble burst yet the internet is probably the most significant innovation of the past twenty years.
The question is: is the value generated by AI aligned with the market projected value as currently priced in AI companies valuation? That's what's more difficult to assess.
I agree it is difficult to assess. Right now, competitive pressure is causing big players to go all in or get left behind. That said, I don't think the bubble is done growing, nor do I think it is about to burst.
I personally think we are in 1995 of the dotcom bubble equivalent. When it bursts, it will still be much bigger than in November 2025.
The problem is no one attained that position, price expectations are set and it turns out that wishful thinking of reducing costs of running the models by orders of magnitude wasn't fruitful.
Is AI useful? Of course.
Are the real costs of it justified? In most cases, no.
It's how much money is being poured into it, how much of that money is just changing hands between the big players, the revenue, and the valuations.
If hyperscalers keep buying GPUs and Chinese companies keep saying they don't have enough GPUs, especially advanced ones, why should we believe someone that it's a bubble based on "feel"?
Because leaders in the space also keep saying it? And then making financial moves that us plebs can't even dream of, which back that up?
This whole "the media keeps reporting it" as a point against the credibility of something is utterly silly and illogical.
The vast majority of AI doomers in the mass media have never used tools like v0 or Cursor. How would they know that AI is overvalued?
Startups and other unprofitable companies however...
But unlike 08 crisis, we're getting a heads up to bring out the lube.
Oracle will likely fail. It funded its AI pivot with debt. The Debt-to-Revenue ratio is 1.77, the Debt-to-Equity ratio D/E is 520, and it has a free cash flow problem.
OpenAI, Anthropic, and others will be bought for cents on the dollar.
Microsoft is one of the few companies actually making money with AI, as they have intelligently leveraged the position of Office 365 in companies to sell Copilot. Their AI investment plans are, well, plans, which could be scaled down easily. Worst case scenario for them is their investment in OpenAI becoming worthless.
It would hurt but is hardly life-threatening. Their revenue driver is clearly their position at the heart of enterprise IT and they are pretty much untouchable here.
And even then, if that happens when the bubble pops, they'll likely just acquire OpenAI on the cheap. Thanks to the current agreement, it already runs on Azure, they already have access to OpenAI's IP, and Microsoft has already developed all their Copilots on top of it. It would be near-zero cost for Microsoft at that point to just absorb them and continue on as they are today.
Microsoft isn't going anywhere, for better or for worse.
Despite them pissing off users with Windows, what HN forgets, is they aren't Microsoft's customer. The individual user/consumer never was. We may not want what MS is selling, but their enterprise customers definitely do.
Azure is a product all right, but there’s nothing particularly better there than anywhere else.
Google is the only place that serves the enterprise (Workspace for productivity, Cloud for IT, Devices for end users) AND conducts meaningful AI research.
AWS doesn't (they can sell cloud effectively, but don't have any meaningful in-house AI R&D). Meta doesn't (they don't cover enterprise and, frankly, nobody trusts Zuck... and they're flaky).
Oracle doesn't. They have grown their cloud business rapidly by 1) easy button for Oracle on-prem to move to OCI, and 2) acting like a big colo for bare metal "cloud" infra. No AI.
OpenAI has fundamental research and is starting to have products, but it's still niche. Same as Anthropic. They're not in the same ball game as the others, and they're going to continue to pay billions to the hyperscalers annually for infra, too.
This is Google's game to lose, imho, but the biggest loser will be AWS (not Azure/Microsoft).
Tesla (P/E: 273, PEG: 16.3) the car maker without robots, robotaxis is less than 15% of the Tesla valuation at best. When the AI hype dies, selloff starts and negative sentiment hits, we have below $200B market cap company.
It will hurt Elon mentally. He will need a hug.
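The back-of-the-envelope math in the Tesla comment can be made explicit. Since PEG = (P/E) ÷ expected annual earnings growth (in %), the growth rate implied by the two quoted figures can be backed out; a minimal sketch, using the commenter's P/E (273) and PEG (16.3), which are not verified against current market data:

```python
# PEG relates a stock's P/E to its expected earnings growth:
#   PEG = (P/E) / (annual earnings growth in %)
# Backing the growth rate out of the two quoted figures shows what
# the market price implicitly assumes.

pe = 273.0    # quoted P/E (commenter's figure, unverified)
peg = 16.3    # quoted PEG (commenter's figure, unverified)

implied_growth_pct = pe / peg
print(f"Implied annual earnings growth: {implied_growth_pct:.1f}%")
```

A PEG near 1 is conventionally read as fairly priced; a PEG of 16.3 means the price embeds far more optimism than the backed-out ~16.7%/yr earnings growth alone would justify.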
> OpenAI, Anthropic, and others will be bought for cents on the dollar.
OpenAI is an existential threat to all of big tech, including Meta, Google, Microsoft, and Apple. Hence, they're all spending lavishly right now to not get left behind.
Meta --> GenAI content creation can disrupt Instagram. ChatGPT likely has more data on a person than Instagram does by now for ads. 800 million daily active users for ChatGPT already.
Google --> Cash cow search is under threat from ChatGPT.
Microsoft --> Productivity/work is fundamentally changed with GenAI.
Apple --> OpenAI can make a device that runs ChatGPT as the OS instead of relying on iOS.
I'm betting that OpenAI will emerge bigger than current big tech in ~5 years or less.
OpenAI does not expect to be cash-flow positive until 2029. When no new capital comes in, it can't continue.
OpenAI can't survive any kind of price competition.
They have infrastructure that serves 800 million monthly active users.
Investors are lining up to give them money. When they IPO, they'll easily be worth over $1 trillion.
There's price competition right now. They're still surviving. If there is price competition, they're the most likely to survive.
Your premise is that there is no bubble. We are talking about what happens when the bubble bursts. Without investor money drying up there is no bubble.
Even worse, they train their model(s) on the interactions of those non-paying customers, which makes the model(s) less useful for paying customers. It's kind of a "you cannot charge for a Porsche if you only satisfy the needs of a typical Dacia owner" situation.
They have <a really expensive> infrastructure that serves 800 million monthly active <but non-paying> users.
I don't pay Meta any money either. Yet, Meta is one of the most profitable companies in the world.
I give more of my data to OpenAI than to Meta. ChatGPT knows so much about me. Don't you think they can easily monetize their 800 million (close to 1 billion by now) users?
Yeah... No they can't. I don't agree with any of your "disruptions," but this one is just comically incorrect. There was a post on HN somewhat recently that was a simulated computer using LLMs, and it was unusable.
AI isn’t bullshit, but selling access to a proprietary model has certainly not been proven as a business model yet.
I seriously doubt it. If this bubble pops, the best OpenAI can hope for is they just get absorbed into Microsoft.
Or, instead of spending billions training models that are nearly all the same, they could take advantage of all the datacenters full of GPUs, and of AI companies frantically trying to make a profit (many most likely crashing and burning in the process), to pay relative pennies to use the top, nearly commoditized, model of the month?
Then, maybe someday, starting late and taking advantage of the latest research/training methods that shave years off training time, they could save billions on a foundation model of their own?
I don't think it makes sense for Apple to be an AI company. It makes sense for them to use AI, but I don't see why everyone needs to have their own model, right now, during all the churn. It's nearly already commodity. In house doesn't make sense to me.
Ah yes, PromptOS will go down in the history books for sure.
And just like 23andme, so will all that data be sold for dimes.
Survive, yes. I don't think anybody ever questioned this.
I wonder if they will be able to remain "growth stocks", however. These companies are allergic to being seen as mature companies, with more modest growth profiles, profit sharing, etc.
It's actually 520% or 5.2 - still high but 520 would be crazy.
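The correction comes down to units: a debt-to-equity figure of 520 is almost certainly 520%, i.e. a ratio of 5.2. A minimal sketch of the conversion (figures are the thread's, not taken from Oracle's filings):

```python
# Debt-to-equity is often reported in percent; dividing by 100
# gives the plain ratio. 520% of equity in debt = 5.2x equity.

debt_to_equity_pct = 520            # as quoted upthread, in percent
debt_to_equity_ratio = debt_to_equity_pct / 100

print(debt_to_equity_ratio)  # 5.2 dollars of debt per dollar of equity
```

A ratio of 520 (i.e. 52,000%) would mean $520 of debt per $1 of equity, which is why the parent flags it as implausible.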
Not sure how the situation is in Europe and Asia, but I would guess about the same.
(I know of a few companies, but they’re tiny tiny minnows compared to the big AI companies listed in the US).
Makes one think that this was the plan all along. I think they saw how SVB went down and realized that if they're reckless and irresponsible at a big enough scale, they can get the government to just transfer money to them. It's almost like this is their new business model: "we're selling exposure to the $XX trillion dollar bailout industry."
Not really. Sundar is still pretty bullish on GenAI, just not the investor excitement around it (bubble).
Pichai described AI as "the most profound technology" humankind has worked on. "We will have to work through societal disruptions," he said, adding that the technology would "create new opportunities" and "evolve and transition certain jobs." He said people who adapt to AI tools "will do better" in their professions, whatever field they work in.
The current admin really, really wants the number going up, and is also incapable of considering, or ignorant of, any notion of consequence for any actions of any kind.
To pile on, there's hardly a product being developed that doesn't integrate "AI" in some way. I was trying to figure out why my brand new laptop was running slowly, and (among other things) noticed 3 different services running: Microsoft Copilot, Microsoft 365 Copilot (not the same as the first, naturally), and the laptop manufacturer's "chat" service. That same day, I had no fewer than 5 other programs all begging me to try their AI integrations.
Job boards for startups are all filled with "using AI" fluff because that's the only thing investors seem to want to put money into.
We really are all dirty here.
I guess but is it better for an investor to own 2 shares of Google or 1 share of OpenAI and 1 share of TSMC?
Like I have no doubt that being vertically integrated as a single company has lot of benefits but one can also create a trust that invests vertically as well.
Equities are forward looking. TSMC's valuation doesn't make sense if it doesn't have an order backlog to grow into.
https://en.wikipedia.org/wiki/Double_marginalization?wprov=s...
Maybe this is because these industries are better understood and there is less risk involved, but I wonder if the current big software companies will take similar paths in the future.
Nvidia earnings tomorrow will be the litmus test if things are going to topple over.
That's a reduction of complexity, of course, but the core of the lesson is there. We have actually kept on with all the practices that led to the housing crash (MBS, predatory lending, Mixing investment and traditional banking).
I know financially it will be bad because number not go up and number need go up.
But do we actually depend on generative/agentic AI at all in meaningful ways? I’m pretty sure all LLMs could be Thanos snapped away and there would be near zero material impact. If the studies are at all reliable all the programmers will be more efficient. Maybe we’d be better off because there wouldn’t be so much AI slop.
It is very far from clear that there is any real value being extracted from this technology.
The government should let it burn.
Edit: I forgot about “country girls make do”. Maybe gen AI is a critical pillar of the economy after all.
Not so much for the work I do for my company, but having these agents has been a fairly huge boon in some specific ways personally:
- search replacement (beats google almost all of the time)
- having code-capable agents means my pet projects are getting along a lot more than they used to. I check in with them in moments of free time and give them large projects to tackle that will take a while (I've found that having them do these in Rust works best, because it has the most guardrails)
- it's been infinitely useful to be able to ask questions when I don't know enough to know what terms to search for. I have a number of meatspace projects that I didn't know enough about to ask the right questions, and having LLMs has unblocked those 100% of the time.
Economic value? I won't make an assessment. Value to me (and I'm sure others)? Definitely would miss them if they disappeared tomorrow. I should note that given the state of things (large AI companies with the same shareholder problems as MAANG) I do worry that those use cases will disappear as advertising and other monetizing influences make their way in.
Slop is indeed a huge problem. Perhaps you're right that it's a net negative overall, but I don't think it's accurate to say there's not any value to be had.
Personally, I had the exact opposite experience: Wrong, deceitful responses, hallucinations, arbitrary pointless changes to code... It's like that one junior I requested to be removed from the team after they peed in the codebase one too many times.
On the slop I have two sentiments: lots of slop = higher demand for my skills to clean it up. But also, lots of slop = worse software on probably most things, impacting not just me, but also friends, family, and the rest of humanity. At least it's not only a downside :/
I mostly agree, but I don't think it's the model developers that would get bailed out. OpenAI & Anthropic can fail, and should be let to fail if it comes to that.
Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
I also think they should be let to fail, but there's no way the US GOV ever allows them to.
> I also think they should be let to fail, but there's no way the US GOV ever allows them to.
There's different ways to fail, though: liquidation, and a reorganization that wipes out the shareholders.
OpenAI could be liquidated and all its technology thrown in to the trash, and I wouldn't shed a tear, but Microsoft makes (some) stuff (cough, Windows) that has too much stuff dependent on it to go away. The shareholders can eat it (though I think broad-based index funds should get priority over all other shareholders in a bankruptcy).
It all depends on whether MAGA survives as a single community.
One of the few things MAGA understands correctly is that AI is a job-killer.
Trump going all out to rescue OpenAI doesn't feel likely. Who actually needs it, as a dependency? Who can't live without it?
Anthropic are waaaay down the list.
Similarly, can you actually see him agreeing to bail out Microsoft without taking an absurd stake in the business? MAGA won't like it. MS could be broken up and sold.
NVidia, now that I can see. Because Trump is surrounded by crypto grifters and is dependent on crypto for his wealth. GPUs are at least real solid products.
Trump (and by extension MAGA) has the worst job growth of any President in the past 50 years. I don't think that's their brand at all. They put a bunch of concessions to AI companies in the Big Beautiful Bill, and Trump is not running again. He would completely bail them out, and MAGA will believe whatever he says, and congress will follow whatever wind is blowing.
You’d be pretty stuck. I guess SMS might work, but it wouldn’t for most businesses (they use the WhatsApp business functionality, there is no SMS thing backing it).
Most people don't even use text anymore. China has its own apps, but everyone else uses WhatsApp exclusively at this point.
The only reason WhatsApp is so popular is that so many people are on it, but you have all you need (their phone number) to contact them elsewhere anyway.
I'll grant you Meta, but losing Google in that way would be highly disruptive because so many people have their primary email account on it.
Hard disagree
I am shocked at the part they know it is a bubble and they are doing nothing to amortize it. Which means they expect the government to step in and save their butts.
... Well, not that shocked.
Finally, some rational thought amid the AI insanity. The entire "fake it til you make it" aspect of this is ridiculous. Sadly, in the world we live in, you can't build a product and hold its release until it works. You have to be first to release even if it's not working as advertised. You can keep brushing off critiques with "it's on the roadmap". Those that are not as tuned in will just think it is working and nothing nefarious is going on. With as long as we've had paid-for LLM apps, I'm still amazed at the number of people that do not know that the output is still not 100% accurate. There are also people that use phrases like "thinking" when referring to getting a response. There's also the misleading term "searching the web..." when on this forum we all know it's not a live search.
You absolutely can and it's an extremely reliable path to success. The only thing that's changed is the amount of marketing hype thrown out by the fake-it vendors. Staying quiet and debuting a solid product is still a big win.
> I'm still amazed at the number of people that do not know that the output is still not 100% accurate.
This is the part that "scares" me. People who do not understand the tool thinking they're ACTUALLY INTELLIGENT. Not only are they not intelligent, they're not even ACTUALLY language models because few LLMs are actually trained on only language data and none work on language units (letters, words, sentences), tokens are abstractions from that. They're OUTPUT modelers. And they're absolutely not even close to being let loose unattended on important things. There are already people losing careers over AI crap like lawyers using AI to appeal sanctions because they had AI write a motion. Etc.
And I think that was ultimately the biggest unforced error of these AI companies and the ultimate reason for the coming bubble crash. They didn't temper expectations at all, the massive gap between expectation and reality is already costing companies huge amounts of money, and it's only going to get worse. Had they started saying, "these work well, but use them carefully as we increase reliability" they'd be in a much better spot.
In the past 2 years I've been involved in several projects trying to leverage AI, and all but one has failed. The most spectacular failure was Microsoft's Dragon Copilot. We piloted it with 100 doctors, after a few months we had a 20% retention rate, and by the end of a year, ONE doctor still liked it. We replaced it with another tool that WORKS, docs love it, and it was 12.6% the cost, literally a sixth the price. MS was EXTREMELY unhappy we canceled after a year, tried to throw discounts at us, but ultimately we had to say "the product does not work nearly as well as the competition."
[1] https://en.wikipedia.org/wiki/AI_winter#The_setbacks_of_1974
[2] https://en.wikipedia.org/wiki/AI_winter#AI_winter_of_the_199...
Oh, you think we won't see AGI in 2026?
This time they'll be gifted 70 trillion to make up for the shortfall, and life shall continue on for the rich.
It's win-win for them, there's no risk at all
That's what I'm personally hoping for anyway, would rather the economy avoid a big recession.
Private actors are the ones who are investing into AI, and there's no real way for them to invest into public infrastructure, or to eventually profit from it, the way investors reasonably expect to do when they put up their money for something.
It's the government who can choose to invest into infrastructure, and it's us voters who can choose to vote for politicians who will make that choice. But we haven't done that. So many people want to complain endlessly about government and corporations -- not entirely without merit, of course -- but then are quick to let voters off the hook.
Better to rip the bandaid off and begin anew.
But as I try to sort of narrate the ideas behind bubbles and bursts, one thing I realize is that in order for a bubble to burst, people essentially have to want it to burst (or, the opposite: have to not want to keep it going).
But, like, Bernie Madoff got caught because he couldn't keep paying dividends in his ponzi scheme, and people started withdrawing money. But in theory, even if everyone knew, if no one withdrew their money (and told the SEC) and he was able to use the current deposits to pay dividends for a few more years, the ponzi scheme didn't _have_ to end; the bubble didn't have to pop.
So I've been wondering: if everyone knows AI is a bubble, what has to happen to have it collapse? If a price is what people are willing to pay, then in order for Tesla to collapse, people have to decide they no longer want to pay $400 for Tesla shares. If they keep paying $400 for Tesla shares, then it will continue to be worth $400.
So I've been trying to think, in the most simple terms, what would have to happen to have the AI bubble pop, and basically, as long as people perceive AI companies to have the biggest returns, and they don't want to move their money to another place with higher returns (similar to TSLA bulls) then the bubble won't pop.
And I guess that can keep happening as long as the economy keeps growing. And if circular deals are causing the stock market to keep rising, can they just go on like this forever?
The downside of course being, the starvation of investments in other parts of the economy, and giving up what may be better gains. It's game theory, as long as no one decides to stop playing the game, and say pull out all their money and put it into I dunno, bonds or GME, the music keeps playing?
Economically, AI is a bubble, and lots of startups whose current business model is "UI in front of the OpenAI API" are likely doomed. That's just economic reality - you can't run on investor money forever. Eventually you need actual revenue, and many of these companies aren't generating very much of it.
That being said, most of these companies aren't publicly traded right now, and their demise would currently be unlikely to significantly affect the stock market. Conversely, the publicly traded companies who are currently investing a lot in AI (Google, Apple, Microsoft, etc) aren't dependent on AI, and certainly wouldn't go out of business over it.
The problem with the dotcom bubble was that there were a lot of publicly traded companies that went bankrupt. This wiped out trillions of dollars in value from regular investors. Doesn't matter how much you may irrationally want a bubble to continue - you simply can't stay invested in a company that doesn't exist anymore.
On the other hand, the AI bubble bursting is probably going to cost private equity a lot of money, but not so much regular investors unless/until AI startups (startups dependent on AI for their core business model) start to go public in large numbers.
Plus the information they can provide to the State on the sentiment of users is also going to be greatly valued
This one? When China commits to subsidising and releasing cutting-edge open-source models. What BYD did to Tesla's FSD fee dreams, Beijing could do to American AI's export ambitions.
A bubble doesn’t need a grand catalyst to collapse. It only needs prices to slip below the level where investors collectively decide the downside risk outweighs the upside hope. Once that threshold is crossed, selling accelerates, confidence unravels, and the fall feeds on itself.
Imagine if interest rates go up and you can get 5% from a savings account. One big player pulls out cash triggering a minor drop in AI stocks. Panic sells happen trying to not be the last one out of the door, margin calls etc.
You're assuming cash will never stop flowing in, driving up prices. It will. The only way it goes on forever is if the companies end up being wildly profitable.
See y'all in the spring!
They can't, not forever. Bubbles pop.
The comparison made to the dotcom bubble is apt. It was a bubble, but that didn't mean that all the internet and e-commerce ideas were wrong, it was more a matter of investing too much too early. When the AI bubble pops or deflates, progress on AI models will continue on.
Not immune, maybe, but pretty well off if they didn't buy in.
507 more comments available on Hacker News