Reader Response to "AI Overinvestment"
Posted 3 months ago · Active 3 months ago
mbi-deepdives.com · Tech · story
skeptical · negative
Debate
80/100
Key topics
AI Investment
AI Capabilities
AGI
The article discusses potential overinvestment in AI; the discussion revolves around the sustainability of current AI investments, whether local AI can satisfy demand, and the feasibility of achieving AGI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 49m after posting
Peak period: 40 comments (2-4h window)
Avg / period: 9.5
Comment distribution: 76 data points (based on 76 loaded comments)
Key moments
- 01 Story posted: Sep 28, 2025 at 10:53 PM EDT (3 months ago)
- 02 First comment: Sep 28, 2025 at 11:42 PM EDT (49m after posting)
- 03 Peak activity: 40 comments in the 2-4h window (hottest window of the conversation)
- 04 Latest activity: Sep 30, 2025 at 8:35 AM EDT (3 months ago)
ID: 45409956 · Type: story · Last synced: 11/20/2025, 8:56:45 PM
Want the full context? Read the primary article or dive into the live Hacker News thread.
Qwen + Your Laptop + 3 years is more interesting to me than offloading AI to some hyperscale datacenter. Yes, efficiency gains can work for both, but there's a certain level below which you may as well just run the app on your own silicon. AI might never meet the threshold for "apps on tap" if every user with an i7 and 32 GB of RAM is ably served locally.
There are loads of use cases like this that will most definitely be solved by local LLMs.
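To make the "run it on your own silicon" point concrete, here is a minimal sketch of querying a locally hosted Qwen model through an Ollama server on the same machine. Ollama, the model tag, and the prompt are illustrative choices for this sketch, not details from the thread:

```python
# Minimal sketch: query a locally hosted Qwen model via Ollama's HTTP API.
# Assumes Ollama is running on localhost:11434 and a Qwen model has already
# been pulled (e.g. `ollama pull qwen2.5:7b`); model tag and prompt are
# illustrative only.
import json
import urllib.request


def ask_local_llm(prompt: str, model: str = "qwen2.5:7b") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Everything stays on the user's own silicon; no hyperscale datacenter involved.
    print(ask_local_llm("Summarize this week's meeting notes in three bullet points."))
```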
The powers that be are not competent/powerful enough to engineer a society-wide conspiracy like that.
A lot of waste (especially in big organizations, and the bigger the organization the more capacity for waste) happens, and the more the income is detached from production (e.g. governments get tax revenue regardless of how well they spend it) the less people care about efficiency.
But for the most part, people get hired because somebody thought it made business sense to do so.
Disagree. The receptionist recording something in Excel manually vs. building an automation for it is a finance issue. It's opex vs. capex.
When people make huge assumptions about the future of technology, they tend to miss that lots of people don't want to fork out the capital to buy robots or build new tools when it works "just fine" having Karen enter it manually.
If training slowed down by two-thirds, would consumers be that much worse off?
I agree, and I believe personal LLM agents with access to our personal data would be much more effective. Though perhaps we should give LLMs a few more years to mature, and give safeguards time to be created, before letting them, say, gamble your house on the newest meme coin. ;)
Buying hardware that covers 90% of my use pattern isn't going to pay itself back for 5 or 10 years. Renting, meanwhile, has the added benefit that I can change my setup every month.
I strongly believe we're in a bubble, but even just buying stocks with the money seems a better investment in my situation.
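As a rough illustration of that payback math, here's a back-of-the-envelope sketch comparing an up-front hardware purchase against ongoing hosted-API spend; every dollar figure is an assumption made up for the example, not a number from the discussion:

```python
# Back-of-the-envelope payback estimate: buying local hardware vs. paying a
# hosted API per month. Every figure below is an assumption for illustration,
# not a number from the discussion.
hardware_cost = 3000.0        # one-off purchase of a local inference box (assumed)
power_cost_per_month = 15.0   # extra electricity for running it locally (assumed)
api_spend_per_month = 60.0    # hosted-API cost for the same workload (assumed)

monthly_saving = api_spend_per_month - power_cost_per_month
payback_months = hardware_cost / monthly_saving

print(f"Monthly saving:  ${monthly_saving:.2f}")
print(f"Payback period:  {payback_months:.1f} months (~{payback_months / 12:.1f} years)")
# With these made-up numbers the box pays for itself in roughly 5.6 years,
# which lands in the 5-10 year range the comment describes.
```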
We'll be inference-token constrained indefinitely: i.e., the supply of inference tokens will never exceed demand; it's just that the $/token may not be able to pay back the capital investment.
The loss is private, so that's OK.
A similar thing happened to internet bandwidth capacity when the dot-com bust hit: overinvestment in fibre everywhere (which came to be called dark fibre, IIRC) became superbly useful once the recovery started, despite those who built that capacity not making much money. They ate the losses so that the benefit could flow out.
The only time this is not OK is when the overinvestment comes from gov't sources and is ultimately a taxpayer-funded grift.
However, the cost-benefit analysis on governmental projects typically includes non-monetary or indirect benefits.
Fiber will remain a valuable asset until/unless some moron snaps it with a backhoe. And it costs almost nothing to operate.
Your data center full of H100s will wear out in 5 years. Any that don't are still going to require substantial costs to run, and may not be cost-competitive with whatever new higher-performance card Nvidia releases next year.
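To put the fiber-versus-GPU contrast in numbers, here's a small sketch of straight-line depreciation plus operating cost per year for each asset; all figures are illustrative assumptions, not real pricing:

```python
# Illustrative comparison of yearly cost: a GPU fleet that is obsolete in ~5 years
# vs. fiber that keeps working for decades at near-zero operating cost.
# Every number is an assumption chosen for the comparison, not real pricing.
gpu_capex = 500_000_000.0          # building a GPU cluster (assumed)
gpu_lifetime_years = 5             # useful life before obsolescence (per the comment)
gpu_opex_per_year = 50_000_000.0   # power, cooling, staff (assumed)

fiber_capex = 500_000_000.0        # fiber build-out, same capex for comparison (assumed)
fiber_lifetime_years = 30          # lasts until "a backhoe snaps it" (assumed)
fiber_opex_per_year = 2_000_000.0  # "costs almost nothing to operate" (assumed)


def yearly_cost(capex: float, lifetime_years: int, opex_per_year: float) -> float:
    """Straight-line depreciation plus operating cost per year."""
    return capex / lifetime_years + opex_per_year


print(f"GPU cluster: ${yearly_cost(gpu_capex, gpu_lifetime_years, gpu_opex_per_year):,.0f} per year")
print(f"Fiber:       ${yearly_cost(fiber_capex, fiber_lifetime_years, fiber_opex_per_year):,.0f} per year")
```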
There is a surprising amount of real long-term infrastructure being built beyond the quickly obsolete chips.
The capital overhang of having more fiber than needed is so small compared to other costs I doubt the telecoms have really regretted any of the overprovisioning they've done, even when their models for future demand didn't pan out.
If the LLM boom ends, then Bitcoin miners can eat up all the slack grid power! You can put those ASICs anywhere.
[0] I own no crypto and have no horse in that race; this is just what I understood from online discourse.
So Bitcoin is locked into its current wasteful state with no end in sight.
Or maybe we shouldn't use that energy at all. We're still heading towards climate collapse, so frivolously wasting energy on something with zero value, like cryptocurrencies, doesn't seem like the smartest idea.
> a highly autonomous system that outperforms humans at most economically valuable work
The goalpost is always moving.
Definitions change to fit the narrative each one is trying to push at the time.
At this point, from what I can see, it's all money going in and nothing coming out yet. Hence, not a Ponzi.
It may be a bubble. Or it may be smart investing. At this point it could go either way.
That's not at all obvious to me; costs as a consumer are going down, rather than up. Can someone steel-man this guy's argument for me?
There are a lot of use cases on the information-retrieval side, especially for people whose educational level and mindset mean that well-crafted search queries were never a thing for them.
But there's also the other side of the trade. How much are suppliers willing to pay for upranking? It's a dirty business model, Enshittification 3.0+, but when did that stop anyone?
It would be a good sign if paid demand exceeds supply but I don't know if we can measure that.
But the moment that the AI can exceed a human programmer, at something as narrow as coding, then the company that has that AI shouldn't sell it to replace humans at other companies - it should instead use it to write programs to replace the other companies.
And the moment an AI can exceed a human generally, then the company that has that AI shouldn't sell it to replace humans at other companies - it should instead ask it how to dominate the world and replace all other companies (with side quest to ensure no competitor achieves AGI)?
For sure, there are some fantastically successful, vertically integrated companies. But mostly it’s less risky to sell the shovels rather than mine the gold.
Software is something that can be completely automated. It's just compute budget. They can create an ERM from scratch _and_ provide all the migration scripts to move customers off of the competitors, etc. It's just compute.
Any transition to essentially all AGI is going to involve people for a while. But the bulk of decision making and activity would move to the AGI rapidly.
The key is, suddenly intellectual labor has a nearly free marginal cost. Including the intellectual labor of managing different units, each with a different focus.
Where is the bottleneck?
Right now, human beings, our costs, limitations, individual idiosyncrasies, inability to scale, unreliability, down time, etc., are the most profound bottlenecks companies have.
If AGI ever arrives and is in any way cost-efficient, our entire society is obsolete and the only jobs that matter are pretty much mining and taking care of hardware until robots take over. That and techno-libertarian aristocrat, of course.
> But the moment that the AI can exceed a human programmer, at something as narrow as coding, then the company that has that AI shouldn't sell it to replace humans at other companies - it should instead use it to write programs to replace the other companies.
That paragraph makes a very narrow claim, unlike your broad one about AGI.
I'm not sure this is a good take. There's a reason why Microsoft employs 200,000+ people, and you can be sure most of them are not cranking out code 9-to-5.
Companies are living organisms more than they are pieces of software. In fact, the source code for Windows has been leaked a number of times over the years, and it resulted in approximately $0 revenue lost.
But an AI winter is unlikely to come either, as AI currently adds a lot of value in many places. The profits coming out of it, however, are unlikely to line up with the investments currently being poured in.
You'd basically need a dedicated model per user (or even per chat), plus continuous training infrastructure, as a prerequisite for AGI. I don't see that happening in the next decade, or without a completely different paradigm, let alone being economically viable.