AI's $344B 'Language Model' Bet Looks Fragile
Posted 4 months ago · Active 4 months ago
bloomberg.com · Tech · story
Sentiment: skeptical/mixed
Debate: 80/100
Key topics
AI
LLMs
Investment
The article questions the $344B investment in AI language models, sparking a discussion on the technology's value, potential returns, and the risks of over-hype.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 36m
Peak period: 70 comments (0-2h)
Avg / period: 9
Comment distribution: 117 data points (based on 117 loaded comments)
Key moments
01. Story posted: Sep 11, 2025 at 7:49 AM EDT (4 months ago)
02. First comment: Sep 11, 2025 at 8:25 AM EDT (36m after posting)
03. Peak activity: 70 comments in 0-2h, the hottest window of the conversation
04. Latest activity: Sep 12, 2025 at 11:55 AM EDT (4 months ago)
ID: 45210451 · Type: story · Last synced: 11/20/2025, 4:50:34 PM
The Economist recently featured a piece pointing out that it's no longer risk that drives the market but a balance of fear of loss and fear of missing out (https://www.economist.com/finance-and-economics/2025/08/06/w...). FOMO is out of control right now
Exactly. The whole stock market is currently behaving like the crypto bubbles.
This leads to an over-rotation in perceived value. The value is significant, just as it was with the mobile phone, but it's not going to live up to the hype in the near term.
It's definitely interesting how in anonymous forums there are a lot more people pointing out that they think this is hype, whereas when we wear our professional hats many of us join in. It's like we all want the party to keep going even though we all know what's going to happen.
When your boss is hyping it up and demanding all hands on deck full steam ahead on the Good Ship AI, lots of people join in out of fear, particularly in the currently awful job market which is partly being ruined by AI itself.
Some of us just stay quiet, keep our heads down, and plug away using tools actually fit for purpose, like LSPs and refactoring tools.
Very few have the courage to stand up in a professional setting and say the emperor has no clothes.
https://content.techgig.com/technology/developer-fires-entir...
My spouse works at a large (50k-100k employee) org in a program management role where she is getting a lot of pressure to organize various AI evangelism efforts aimed at developers: workshops, bake-offs, demo days, etc.
I mean sounds neat, but is this being done because it's useful or because someone up high needs to justify their AI budget spend with AI usage metrics?
Do we believe that ICs are actually so stupid/stubborn they need to be mandated, coaxed, coached, bullied and bribed to use something that makes their jobs easier?
Doesn't most of the best tech end up being bottom-up?
Most of us who were around 15+ years ago recall that a lot of BigCorps had to be dragged unwillingly into mobile by internal users/devs who got their first iPhone and saw the light. A lot of stuff starts as small-team internal skunkworks / unofficial projects working around productivity drains. I am highly skeptical that the C-suite knows what people 10 levels down actually need for productivity enhancement.
Yeah, thanks guys. Now I have Outlook/Teams on my phone and am expected to be reachable 24/7. If not, I'm expected to respond to texts and share my phone number with my colleagues. Those I don't directly share it with will get it from someone who knows me.
Your comparison to smart phones is interesting. Smart phones are definitely transformative. There was a lot of hype, but still transformative.
Do you believe that LLMs and AI are also going to be transformative?
[0] https://en.wikipedia.org/wiki/Gartner_hype_cycle
What that means is the ad model of the internet will come apart.
And what that means is that the LLMs will need to charge for answer optimization to plug the ads hole.
And so where this is going is basically a whole cottage industry around that. Around controlling and shaping knowledge in other words.
Yes, frightening politically more so than economically, at least from my view.
And if it dumbs us down and erodes critical thinking then maybe it will have negative effects economically and politically long term.
Different speeches for different audiences. On HN, for all its faults, people don't need to be told that yes, a SOTA LLM can somewhat help you with code, parsing documentation, etc. A lot of people in the "real world" are still grossly underestimating this technology.
Did you mean "overestimating"? "somewhat help" is putting it strongly, IMO.
So far the self-driving car hype cycle has served as a useful reference for understanding the hype around LLMs. The main difference with LLMs is that there is 10 times as much money flying around.
I think there’s too much hype and money, but LLMs are useful right now.
LLMs are in a similar place, and still a long way from doing the whole job reliably by themselves. The current state of AI is one or several unknown unknowns away from real AGI. We're missing something fundamental.
Likewise with autonomous vehicles, the mythical SAE level 5 vehicle that goes anywhere and everywhere there is a road is still very much science fiction.
I ride in Waymos all the time; they are SAE Level 4.
I know many people who take a Waymo every day.
What percentage of driving represents the delta between Level 4 and Level 5?
An SAE Level 4 LLM/AI (if we can really even make that comparison) would have far less difficulty in deployment and would be far more disruptive, in a far shorter period of time, than SAE Level 4 self-driving cars.
Waymo's robotaxis, likewise, are able to do the driving task in geo-fenced areas. Waymo does a lot of hand-built code and testing to deal with particular problems, such as the five-way intersection in the Castro district of SF, a completely unique intersection with no other quite like it. A ton of effort went into dealing with just that one intersection, and the bespoke effort doesn't generalize to any other intersection.
So if you want to be able to, say, have a platform for producing working video games out of prompts, well, I believe that can be done with our current AI, but it will depend on a lot of hard work making tools and hand-built code that do not generalize to other domain-specific jobs.
Now if you want to make a movie worth watching out of prompts, that could be done too, but it depends on solving a whole different set of bespoke problems that once again need to be solved the hard way using more conventional software.
People are still figuring out very basic integrations, and even now, at this early stage, the things I can do with LLMs are pretty incredible. For example, I was able to set Cursor to work finally dragging an old codebase out of the dark ages. It then built new features that I've long wanted. It took a few hours on my end, but would have taken at least a week or two without it.
I'm not exposed to much of the hype, though, so maybe my calibration of what the hype is, is wrong.
Such as?
Just look at what Cursor (and similar) have done in terms of the tooling for LLMs. There's still tons of progress to be made there, but similar tooling can happen across a variety of industries and categories.
For example, I run a database of information that needs constant updating. I set up automated fact checking (with a human looped in) that enables nearly live updates, which would be incredibly expensive without an LLM. There are so many projects, big and small, just like that one, being created right now. The low-hanging fruit is extremely abundant, for those who are able and willing to find it.
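For the curious, a minimal sketch of what such a human-in-the-loop pipeline might look like; the `ProposedUpdate` shape, the confidence threshold, and the routing logic are all my own illustration, not the commenter's actual system:

```python
# A toy sketch of "automated fact checking with a human looped in":
# an LLM proposes updates with a confidence score; high-confidence ones
# go live immediately, the rest queue for human review.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProposedUpdate:
    record_id: str
    new_value: str
    confidence: float  # hypothetical model-reported confidence in [0, 1]

def route_updates(updates: List[ProposedUpdate],
                  threshold: float = 0.9) -> Tuple[list, list]:
    """Split proposed updates into auto-applied and human-review buckets."""
    auto, review = [], []
    for u in updates:
        (auto if u.confidence >= threshold else review).append(u)
    return auto, review

proposals = [
    ProposedUpdate("acme-hq", "Moved to Austin, TX", 0.97),
    ProposedUpdate("acme-ceo", "Jane Doe (since 2025)", 0.62),
]
live, queued = route_updates(proposals)
print(f"{len(live)} applied live, {len(queued)} queued for a human")
```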
Ok fella. It's so abundant, right? So why not go ahead, start your own firm, and profit from this opportunity that, according to you, exists? lol.
> For example, I run a database of information that needs constant updating. I set up automated fact checking (with a human looped in) that enables nearly live updates, which would be incredibly expensive without an LLM. There are so many projects, big and small, just like that one, being created right now.
Show me something, preferably with the financials to support it!
Then a new model or tool comes out, all is wonderful for a bit, then repeat (except for GPT-5, oddly; that one seems to have inspired hatred from the start).
It rarely seems to occur to people that familiarity breeds contempt; once the novelty wears off people start noticing the problems. The model isn't getting magically worse, you're just _noticing_ more.
https://www.reddit.com/r/ClaudeAI/comments/1nc4mem/update_on...
On the other hand, I get 5 hours of work done in 5 minutes every other day.
Worst I can see happening is a dot-com crash. Pets.com will go out of business, but Amazon won't.
I have seen that in anonymous forums there are a lot more people pointing out that they think this is transformative, like, say, early smartphones were.
When it comes to wearing professional hats people fall into two categories.
People who are using it outside their expertise area, for things they previously had to rely on others for, like back-end operations. They gush about it, praising the technology as if they have discovered smartphones. Only when they use it in their expertise area do they realize that AI might not be all it is hyped to be and that they need to be cautiously optimistic.
Then there are people who are using it within their expertise area, like vibe coding. There are people who gush about it, but there are more who are cautiously optimistic. They will tell you it works, but only in a contained environment.
Well, when people's financial and employment stability are dependent on placating the overlords who are entranced...
You just summed up machine learning, not just AI/LLMs. My domain is very far from LLMs, but even in my domain you can build a really cool demo that is entirely misleading.
Oil isn't "free" anyway, it takes energy to make energy, and EROEI has been going down for some time as the easy oil is extracted.
Even if every major company in the US spends $100,000 a year on subscriptions and every household spends $20/month, it still doesn't seem like enough return on investment when you factor in inference costs and all the other overhead.
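To put rough numbers on that intuition, here is a back-of-envelope check; the company and household counts are my own assumptions, not figures from the thread:

```python
# Rough upper bound on subscription revenue (counts are my assumptions):
companies = 20_000 * 100_000        # ~20k "major" US firms at $100k/yr -> $2B
households = 130_000_000 * 20 * 12  # ~130M households at $20/mo -> $31.2B
revenue = companies + households    # ≈ $33B/yr

capex = 344e9                       # the $344B one-year spend from the article
print(f"revenue ≈ ${revenue/1e9:.0f}B/yr vs spend ${capex/1e9:.0f}B/yr")
# ≈ $33B vs $344B: a ~10x gap before inference costs and overhead
```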
New medical discoveries, maybe? I saw OpenAI's announcement about gpt-bio and iPSCs which was pretty amazing, but there's a very long gap between that and commercialization.
I'm just wondering what the plan is.
Oh and John Carmack, of Doom fame, went off to do AGI research and raised a modest 20(?) million last I heard.
“Somebody” like… Sam Altman? Because he said that’s what he actually believes.
https://www.startupbell.net/post/sam-altman-told-investors-b...
Think of it as maybe $10k/employee, figuring a conservative 10% boost in productivity against a lowball $100k/year fully burdened salary+benefits. For a company with 10,000 employees that’s $100m/year.
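Spelled out, using only the parent's stated assumptions:

```python
# The parent's arithmetic, step by step:
salary = 100_000                  # lowball fully burdened salary + benefits, $/yr
boost = 0.10                      # "conservative" 10% productivity gain
per_employee = salary * boost     # $10k of recovered value per employee per year
company = per_employee * 10_000   # for a 10,000-employee company
print(f"${per_employee:,.0f}/employee/yr -> ${company:,.0f}/yr")  # $10,000 -> $100,000,000
```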
New cos built by individuals who get AI are best positioned to unlock the dramatic effects of the technology, and it's going to take time for them to eclipse incumbent players and then seed the labor market with AI-fluent talent.
But rather than speculating, I'm genuinely curious what the companies are saying to their investors about the matter.
(Heh, I see "proliferate" itself is a back-formation.)
We're not even at AGI, and AI-driven automation is already rampaging through the pool of "the cheapest and the most replaceable" human labor. Things that were previously outsourced to Indian call centers are now increasingly outsourced to the datacenters instead.
Most major AI companies also believe that they can indeed hit AGI if they sustain the compute and the R&D spending.
Apparently the total market capitalisation of the US stock market is $62.8 trillion. Shiller's CAPE ratio for the S&P index is currently about 38; CAPE is defined as current price / (earnings averaged over the trailing 10 years).
That suggests that over the last 10 years, the average earnings of the US stock market have been about $1.7 trillion annually.
So $344B of spending is about 1/5 of the average earnings of the total US stock market.
Still hard to interpret that, but 1/5 is an easier number to think about.
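The arithmetic behind those figures:

```python
# Reproducing the commenter's back-of-envelope numbers.
market_cap = 62.8e12               # total US market capitalisation, $
cape = 38                          # Shiller CAPE: price / trailing-10yr avg earnings
avg_earnings = market_cap / cape   # ≈ $1.65T/yr, rounded to ~$1.7T above
ratio = 344e9 / avg_earnings       # ≈ 0.21, i.e. roughly 1/5
print(f"avg earnings ≈ ${avg_earnings/1e12:.2f}T/yr, spend/earnings ≈ {ratio:.2f}")
```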
> This year the world’s four largest tech firms will spend $344 billion on AI
> Altogether, the four companies are expected to spend more than $344 billion for the year, with much of it going to the data centers necessary to run AI models.
So both articles frame that $344B as an estimate of one year of capex.
If one assumes it's nearly all a bubble, how would you correct earnings for the US? I am interested in applying this to any investment that tracks AI-heavy companies in the US.
One approach I've seen a few folks do is to fit a regression model of annualized real stock market returns over the next 10 years as some function of CAPE or 1/CAPE or log(CAPE).
It doesn't give a very good fit on training data, R^2 in the range of 0.2-0.3, i.e. it cannot "explain" most of the variation in 10 year returns.
CAPE-based regression models like that have said the US stock market has been overpriced for the last decade! But investors in the US stock market have done pretty well over that period, with really good returns. Maybe these models are accurate and we've just gotten lucky? Maybe these models aren't very good. Hard to tell.
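A sketch of the kind of model described above, with made-up (CAPE, forward return) data purely for illustration; real versions fit Shiller's long historical series:

```python
import numpy as np

# Hypothetical (CAPE, subsequent 10-yr annualized real return) pairs:
cape = np.array([12, 18, 25, 32, 38, 15, 22, 28])
fwd = np.array([0.11, 0.04, 0.08, 0.01, 0.03, 0.06, 0.09, 0.02])

# Regress forward returns on log(CAPE), as in the models mentioned above.
slope, intercept = np.polyfit(np.log(cape), fwd, 1)
pred = intercept + slope * np.log(cape)
r2 = 1 - ((fwd - pred) ** 2).sum() / ((fwd - fwd.mean()) ** 2).sum()

print(f"expected 10-yr return at CAPE 38: {intercept + slope * np.log(38):.1%}")
print(f"R^2 on training data: {r2:.2f}")  # real fits land around 0.2-0.3
```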
Elm Wealth publishes estimates of expected returns for a few asset classes quarterly: https://elmwealth.com/capital-market-assumptions/
LLMs are a million times better than Crypto currencies.
LLMs will be a million times more valuable than crypto shit in my opinion.
It's just that none of that had any bearing on the value of the coins.
If they do "figure it out" (both AGI and a viable business model), a lot of people here will likely be out of a job. If they don't, the whole thing will come crashing down, taking our invested savings with it.
The comparison to crypto keeps coming up. Not everyone's savings went into crypto, but a lot of people's savings and retirement funds are being invested in funds tied to the stock market. And right now its growth depends on pumping cash into the AI bubble.
Do I find value in paying 20 dollars a month for ChatGPT? Yes. Do others? As far as I can see, yes. Most people are happy with the value.
Are AI companies profitable if they stop R&D? Yes.
Where’s the skepticism coming from?
> Are AI companies profitable if they stop R&D? Yes.
By what metrics? Where's the data?
- Sam Altman
... I mean, of course they haven't. They are a natural consequence of how the things work!
What it isn't is the actual final "thing" itself. It's just the thin veneer right now.
I'm not convinced that that revolution was worth whatever trillions we'll end up spending, but fortunately that's not on my shoulders to be worried about.
I don't even care about multimodality etc. I think pure text models are a very appealing idea.
This does not mean the ideas are not working. I personally think pretraining has done its job. We did not know what that job was before, but now we do, given the way RL works. Pretraining and test-time compute enable models to develop a generalized prior they can use to solve any given problem (much like how humans solve such problems). Sometimes priors are lacking, so you need to train more using RLVR. It's still early days, but directionally I think we have another scaling curve here.
LLMs put most of those companies' businesses at risk; they can't afford not to be ahead. If they aren't ahead, there's a risk that the entire US economy would end up in terrible shape.
American Big Tech companies that make plenty of INTERNATIONAL revenue from ads (Meta, Google) can quickly become shells of their former selves.
How? Countries and economic blocs could quickly swap out American products for their own counterparts if the American ones have nothing more to offer.
The US economy has become very dependent on FAANG cash flow; it's what gets other parts of the economy moving.
No wonder they had a dinner with Trump. If this fades away, the US will look very weak, with a terrible economic outlook.