Spending on AI Is at Epic Levels. Will It Ever Pay Off?
Posted 3 months ago · Active 3 months ago
wsj.com · Tech · story
skeptical / mixed
Debate: 80/100
Key topics
AI Investment
AI Applications
Tech Bubble
The article questions whether the massive spending on AI will ever pay off, sparking a debate among HN commenters about the value and potential risks of AI investments.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 1m after posting. Peak period: 46 comments in 0-3h. Average per period: 9.6 comments.
[Comment distribution chart omitted: 77 data points, based on 77 loaded comments]
Key moments
- Story posted: Sep 27, 2025 at 7:33 PM EDT (3 months ago)
- First comment: Sep 27, 2025 at 7:34 PM EDT (1m after posting)
- Peak activity: 46 comments in 0-3h (hottest window of the conversation)
- Latest activity: Sep 29, 2025 at 1:44 PM EDT (3 months ago)
ID: 45400269 · Type: story · Last synced: 11/20/2025, 9:01:20 PM
It is fantastic at reasonable-scale ports / refactors, even with complicated subject matter like insurance. We have a project at work where Pro has saved us hours just trying to understand the overcomplicated code that is currently in place.
For context, it’s a salvage project with a wonderful mix of Razor pages and a partial migration to Vue 2 / Vuetify.
It’s best with logic, but it doesn’t do great with understanding the particulars of UI.
The sketchy part is that LLMs are super good at faking confidence and expertise, all while randomly injecting subtle but critical hallucinations. This ruins basically all significant output. Double-checking and babysitting the results is a huge time and energy sink. Human post-processing negates nearly all benefits.
It's not like there is zero benefit to it, but I am genuinely curious how you get consistently correct output for a "complicated subject matter like insurance".
I work in custom software where the gap in non-LLM users and those who at least roughly know how to use it is huge.
It largely depends on the prompt, though. Our ChatGPT account is shared, so I get to take a gander at the other usages, and it's pretty easy to see: "okay, this person is asking the wrong thing". The prompt and the context have a major impact on the quality of the response.
In my particular line of work, it's much more useful than not. But I've been focusing on helping build the right prompts with the right context, which makes many tasks actually feasible where before they would be way out of scope for our clients' budgets.
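A minimal sketch of what "the right prompt with the right context" can look like for the Razor-to-Vue migration described upthread. Everything here (the template wording, the domain details, the build_prompt helper) is hypothetical and illustrative, not the commenter's actual setup:

```python
# Hypothetical task prompt: the model gets explicit context (domain, stack,
# constraints, scope) instead of a bare "convert this page" request.

MIGRATION_PROMPT = """\
You are helping migrate a legacy ASP.NET Razor Pages app to Vue 2 + Vuetify.

Context:
- Domain: insurance salvage workflows (claims, payouts, auctions).
- The Razor page below renders a claims table; business logic must not change.
- Target: a Vue 2 single-file component using Vuetify's v-data-table.

Constraints:
- Preserve all field names and validation rules exactly.
- Flag anything ambiguous as a TODO comment instead of guessing.

Razor source:
{razor_source}
"""

def build_prompt(razor_source: str) -> str:
    """Fill the template with the page under migration."""
    return MIGRATION_PROMPT.format(razor_source=razor_source)
```

The point being that the domain, the target stack, and the failure-mode instruction ("flag, don't guess") travel with every request instead of living in one person's head.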
Most likely by trying to get a promotion or bonus now and getting the hell out of Dodge before anyone notices those subtle landmines left behind :-)
Not everyone is the biggest cat in town with infinite money and expertise. I have no intention of leaving anytime soon, so I have confidence that the code that was generated by the AI (after confirming with our guy who is the insurance OG) is a solid improvement over what was there before.
To your example, frankly, I would have started with that very important caveat: an initial situation defined by very poor quality. It's a very valid angle, as a lot of code that's available today is of very low quality, and if AI can take a 1/10 or 2/10 and make it a 5/10 or 6/10, then yes, everyone benefits.
Just like tech debt, there's a time for rushing. And if you're really getting good results from LLMs, that's fabulous.
I don't have a final position on LLMs, but it has only been two days since I worked with a colleague who clearly had no idea how to proceed once they were off the "happy path" of LLM use, so I'm sure there are plenty of people getting left behind.
- a group that sees them as invaluable tools capable of being an immense productivity multiplier
- a group that tried things here and there and gave up
We collectively decided that we want to be in the first group and were willing to put in the time to be in that group.
I have not seen any tangible difference in the output of either group.
I'm interested in business outcomes, is more code or perceived velocity translating into benefits to the business? This is really hard to measure though because in pretty much any startup or growing company you'll see better business outcomes, but it's hard to find evidence for the counterfactual.
Not snarking, but if they are automated away, then isn't this like 0 story points for effort/complexity?
So the product velocity didn't exactly go up, but you are now producing less technical debt (hopefully) with a similar velocity, sounds reasonable.
You will live the rest of your life like that. Because nobody likes you. Enjoy.
- anything that goes deep into issues (I seldom read "i love llms" type posts, but this one is great: https://blog.nilenso.com/blog/2025/09/15/ai-unit-of-work/)
- lots of experimentation - specifically, I have spent hours and hours building the exact same feature over and over (my record is 23 times)
- if something "doesn't work" I create a task immediately to investigate and understand it. Even the smallest thing that bothers me, I will spend hours figuring out why it might have happened (this is sometimes frustrating) and how to prevent it from happening again (this is fun)
My colleague describes the process as a JavaScript developer trying to learn Rust while tripping on mushrooms :)
I've found that they're a moderate productivity increase, i.e. on a par with, say, using a different language, using a faster CI system, or breaking down some bureaucracy. Noticeable, worth it, but not entirely transformational.
I only really get useful output from them when I'm holding _most_ of the context that I'd be holding if writing the code, and that's a limiting factor on how useful they can be. I can delegate things that are easy, but I'm hand-holding enough that I can't realistically parallelise my work that much more than I already do (I'm fairly good at context switching already).
Programmers tend to overestimate their knowledge of non-programming domains, so the OP is probably just not understanding that there are serious issues with the LLM's output for complicated subject matters like insurance.
[1] https://internethistory.org/wp-content/uploads/2020/01/OSA_B...
Even the smallest and poorest countries in the world invested in their fiber networks.
Only China and the US have money to create models.
Few companies and very few countries have the bleeding edge frontier capabilities. A few more have "good enough to be useful in some niches" capabilities. The rest of the world has to get and use what they make - or do without, which isn't a real option.
With that said, the fiber will be good for many years. None of the LLM models or hardware will be useful for more than a few years, with everything being replaced by newer and better versions on a continual basis. They're stepping stones, not infrastructure.
We did not need it? Did you ever use DSL?
What is AI replacing? People?
Where I live (Germany), lots of people have vDSL at advertised speeds of 100 Mbit/s over plain copper pairs. Not saying that fiber is not better, it obviously is, and hence the government is subsidizing large-scale fiber buildouts. But as it stands right now, I'm confident that for 99% of consumers, vDSL is indeed enough.
In the 90s and 2000s, I remember our (as in: tech nerds') argument to policy-makers being "just give people more bandwidth and they will find a way to use it", and in that period, that was absolutely true. In the 2000s, lots of people got access to broadband internet, and approximately five milliseconds later, YouTube launched.
But the same argument now falls apart because we have the hindsight of seeing lots of people with hundreds of megabits or even gigabit connections... and yet the most bandwidth-demanding thing most of them do is video streaming. I looked at the specs for GeForce Now, and it says that to stream the highest quality (a 5K video feed at 120Hz), you should have 65 Mbit/s downstream. You can literally do that with a vDSL line. [1] Sure, there are always people with special use cases, but I don't recall any tech trend in the last 10 years that was stunted because not enough consumers had the bandwidth required to adopt it.
[1] Arguably, a 100 Mbit/s line might end up delivering less than that, but I believe Nvidia have already factored this into their advertised requirements. They say that you need 25 Mbit/s to sustain a 1080p 60fps stream, but my own stream recordings in the same format are only about 5 Mbit/s. They might encode with higher quality than I do, but I doubt it's five times the bitrate.
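To make the arithmetic in this comment explicit, a quick back-of-the-envelope check. The 20% real-world derating is an assumption for illustration; the other figures come from the comment above:

```python
# Does a 100 Mbit/s vDSL line cover GeForce Now's advertised requirements,
# even after shaving off some real-world overhead?

line_capacity = 100.0             # advertised vDSL downstream, Mbit/s
effective = line_capacity * 0.8   # assume ~20% real-world loss (illustrative)

geforce_now_5k_120 = 65.0         # advertised requirement, Mbit/s
geforce_now_1080p_60 = 25.0       # advertised requirement, Mbit/s
own_recording_1080p = 5.0         # commenter's own 1080p60 recordings, Mbit/s

print(f"effective line rate:   {effective:.0f} Mbit/s")
print(f"5K/120Hz headroom:     {effective - geforce_now_5k_120:.0f} Mbit/s")
print(f"Nvidia vs own 1080p60: {geforce_now_1080p_60 / own_recording_1080p:.0f}x bitrate")
```

Even with the derating, the line clears the top-tier requirement with ~15 Mbit/s to spare, which is the commenter's point.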
It's not only the bandwidth but the latency: I get 5 ms ping to servers in the EU.
The quality of life I get from that can't be bought with money.
Also, with DSL, because of how it works, they can give you only 10% of the advertised bandwidth and that's fine.
I believe the relevant term here is “survivorship bias”.
They have a visceral hatred of workers. The sooner people accept that as reality, the better.
If the AI companies just charged users enough to cover their costs, then demand would drop 10x and we wouldn't need most of the data centers.
They aren't the only company doing this.
Oh and the really fucky part is half of them just want to get a bit richer, but the other half seem to be in a cult that thinks AI's gross disruption of human economies and our environment is actually the most logically ethical thing to do.
Based on AGI-by-2026 predictions, they convinced the US Government to block high-end GPU sales to China. They said we only needed 1-2 more years to hold them off; then AGI arrives and OpenAI/the US rules the world. Is this still the plan? /s
If AGI does not materialize in 2026, I think there might be trouble, as China develops alternative GPUs and NVIDIA loses that market.
I think Altman has been getting mentored by Musk. I think we'll get full self-driving Teslas before quantum mechanics is "solved", though, and I am not expecting that in the foreseeable future.
He did say that if ChatGPT 8 creates a theory of quantum gravity... I can't... that will mean we have reached AGI.
insert Rick & Morty "Show me what you got" gif here
For decision support, coding, and structured outputs, I love it. I know it's not human, and I write instructions that are specific to the way it reasons.
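A minimal sketch of the structured-outputs pattern this comment alludes to: pin the model to a fixed JSON shape, then validate before using the answer. The schema, field names, and helper below are invented for illustration, not the commenter's actual setup:

```python
import json

# Instruct the model to answer in a fixed JSON shape, no prose.
INSTRUCTIONS = """\
Answer ONLY with JSON matching this shape, no prose:
{"decision": "approve" | "reject", "confidence": 0.0-1.0, "reasons": [string]}
"""

REQUIRED_KEYS = {"decision", "confidence", "reasons"}

def parse_structured_reply(raw: str) -> dict:
    """Parse the model's reply and reject anything off-schema."""
    reply = json.loads(raw)  # raises ValueError on non-JSON output
    missing = REQUIRED_KEYS - reply.keys()
    if missing:
        raise ValueError(f"model omitted required keys: {missing}")
    if reply["decision"] not in ("approve", "reject"):
        raise ValueError(f"unexpected decision: {reply['decision']}")
    return reply
```

The validation step is what makes the output usable downstream: a reply that doesn't parse or doesn't match the schema gets rejected instead of silently propagating a hallucination.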
I don't think the companies betting on AI are burning mountains of cash because they think it will be a moderately useful tool for decision support, coding and such. They are betting this will be "The Future™" in their search of perpetual growth.
[0] https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...
Cost of AGI Delusion
https://news.ycombinator.com/item?id=45395661
AI Investment Is Starting to Look Like a Slush Fund
https://news.ycombinator.com/item?id=45393649
1 more comment available on Hacker News