America's Top Companies Keep Talking About AI – but Can't Explain the Upsides
Posted 3 months ago · Active 3 months ago
ft.com · Tech · story · High profile
skeptical / mixed
Debate
80/100
Key topics
AI Adoption
Corporate Investment
Technology Hype
The article discusses how top US companies are struggling to explain the benefits of their AI investments, sparking a debate among commenters about the true value and potential of AI technology.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 7m
Peak period: 100 comments (Day 1)
Avg / period: 21.6
Comment distribution: 108 data points
Based on 108 loaded comments
Key moments
1. Story posted: Sep 23, 2025 at 10:59 PM EDT (3 months ago)
2. First comment: Sep 23, 2025 at 11:06 PM EDT (7m after posting)
3. Peak activity: 100 comments in Day 1 (hottest window of the conversation)
4. Latest activity: Oct 10, 2025 at 3:04 AM EDT (3 months ago)
ID: 45355806 · Type: story · Last synced: 11/20/2025, 3:56:10 PM
Building clusters six servers at a time... that last on the order of weeks, appeasing "stakeholders" who are closer to steaks.
Whole lot of empty movement and empty minds behind these 'investments'. FTEs that amount to contracted, disposable labor to support The Hype.
We could get this to the point of taking people off the street/putting them to task... but instead, we've collectively found it more valuable to push the spreadsheet along a few cells at a time, together.
Perhaps it's mistaken to localize the BS; it's shared. Soft and tends to spread.
https://en.wikipedia.org/wiki/Constructive_dismissal
>In employment law, constructive dismissal occurs when an employee resigns due to the employer creating a hostile work environment.
No employee is resigning when an employer tells the employee they are terminated due to AI replacing them.
I absolutely will resign if my job becomes 100% generating and reviewing AI-generated slop; having to review my coworker's AI slop has already made my job way less fun.
I am my own agent in the world. I don't get any satisfaction from using AI to do my work and offloading my responsibility and my skills to a computer. Computers are tools for me to use, not partners or subordinates
Strongly thinking about going back to school to retrain into something else aside from software. The only thing stopping me at the moment is that I think AI is making every industry and job similarly stupid and fruitless right now, so changing lanes would still land me in the "AI pilot" career path.
What a shitty time to be alive. I used to love technology. Now I'm coming to loathe it. It is diminishing what it means to be human
Layoffs and attrition happen for reasons that are not positive; AI provides a positive spin.
No, but some are resigning when they're told their bonus is being cut because they didn't use enough AI.
Enterprise is way too cozy with the big cloud providers, who bought into it and sold it on so heavily.
0: https://fortune.com/2025/08/18/mit-report-95-percent-generat...
The real question is: do those unicorns exist, or is it all worthless?
> The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.
The 95% figure isn't a knock on the AI tools but on how bad enterprises are at integration. Large enterprises being bad at integration is a story as old as time. IMO, reading beyond the headline, the report highlights the value of today's AI tools because they are leading enterprises to try to integrate faster than they normally would.
"AI tools found to be useful, but integration is hard like always" is a headline that would have gotten zero press.
You could read this quote this way. But the report knocked the most common tools.
The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap: tools that don't learn, integrate poorly, or match workflows. Users prefer ChatGPT for simple tasks, but abandon it for mission-critical work due to its lack of memory. What's missing is systems that adapt, remember, and evolve: capabilities that define the difference between the two sides of the divide. [1]
[1] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
This is likely why there is a lot of push from the top: they have already committed the money and now have to justify it.
As someone who has been in senior engineering management, it's helpful to understand the real reason, and this is definitely not it.
First, these AI subscriptions are usually month-to-month, and these days with the AI landscape changing so quickly, most companies would be reluctant to lock in a longer term even if there were a discount. So it's probably not hard to quickly cancel AI spend for SaaS products.
Second, the vast majority of companies understand sunk cost fallacy. If they truly believed AI wouldn't be a net benefit, they wouldn't force people to use it just because they already paid for it. Salaries for engineers are a hell of a lot more than their AI costs.
The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
I'm not at all saying you have to buy into this "FOMO rationale", but just saying "they already paid the money so that's why they want us to use it" feels like a bad excuse and just broadcasts a lack of understanding of how the vast majority of businesses work.
I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy.
There was no need to post this.
> is probably because
I don't mean to be contrary, but these statements stand in opposition, so I'm not sure why you are so confidently weighing in on this.
Also, while I'm sure you've "been in senior engineering management", it doesn't seem like you've been in an organization that doesn't do engineering as its product offering. I think this article is addressing the 99% of companies that have some number of engineers but do not do engineering. That is to say: "My company does shoes. My senior leadership knows how to do shoes. I don't care about my engineering prowess, we do shoes. If someone says I can spend less on the thing that isn't my business (engineering) then yes, I want to do that."
>> is probably because
> I don't mean to be contrary, but these statements stand in opposition
No, they don't. It's perfectly consistent to say one reason is certainly wrong without saying another much more likely reason is definitely right.
> ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage
But more importantly, this is completely inconsistent with how banks approach any other programming tool or how they approach lifelong learning. They are 100% comfortable with people not learning on the job in just about any other situation.
Both when the money has been actually committed and when it’s usage based.
I have found that companies are rarely rational and will not “leave money on the table”
This makes no sense for coding subscriptions. Just how far behind can you be in skills by taking a wait and see position?
After all, it's not like this specific product needs more than a single day for the user to get up to speed.
And I say this as someone who didn't make the transition after 25 years as a software engineer. While I get a lot of value out of AI, I felt it largely changed my job from "mostly author" to "mostly editor", and I just didn't enjoy it nearly as much, so I got out of software altogether and went to violin making school.
Yes, this is the correct answer.
This doesn't make a huge amount of sense, because the stuff is changing so quickly anyway. It's far from clear that, in the hypothetical future where this stuff is net-useful in five years, experience with _today's_ tools will be of any real use at all.
Alas, many members of the C-suite do not exactly fit that description. They've just typed in a prompt or three, marveled that a computer can reply, and now fantasize that it's basically a human replacement.
There are going to be a lot of (figurative, incorporated) dead bodies on the floor. But there will also be a few winners who actually understood what they were doing, and the wins will be massive. Same as it was post dot-com.
They have judgement. They can improve what was generated. They can fix a result when it falls short of the objective.
And they know when to give up on trying to get AI to understand. When rephrasing won't improve next word prediction. Which happens when the situation is complex.
I am such a one, and AI isn't useful to me. The answers it gives me are routinely so bad, I can just answer my own questions with a search engine or product documentation faster than I can get the AI to give me something. Often enough I can never get the AI to give me something useful. The current products are shockingly bad relative to the level of hype being thrown about.
Yeah, I agree. This has been a big source of imposter syndrome for me lately, since all of this AI coding stuff has skyrocketed
People making wild claims about building these incredible things overnight, but meanwhile I can't get anything useful out of them at all
Something isn't adding up. Either I'm not a very good programmer, or others are lying about how much the AI is doing for them
And I'm pretty sure I'm a pretty good programmer
For example I have some product ideas in my head for things to 3D print, but I don't know enough about design to come up with the exact mechanisms and hinges for it. I've tried the chatbots but none of them can really tell me anything useful. But once I already know the answer, they can list all kinds of details and know all about the specific mechanisms. But are completely unable to suggest them to me when I don't mention them by name in the prompt.
The thing that changed it was smartphones (~7 years later). Suddenly, the internet was available everywhere and not just a thing for nerds.
Not sure that AI is quite there yet, currently trying to identify what will be the catalyst that makes it seamless.
It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured.
The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
Mentioning the mid-1990s' internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is provided for LLM efforts.
Tell me, why should I not use a hyphen for hyphenated words?
I was schooled in British English, where the spaced en dash - is preferred.
Shall I go on?
And anyone can go back to the pre-LLM era and see your comments on HN.
You need to understand that ChatGPT has a unique style of writing and overuses certain words and sentence constructions that are statistically different from normal human writing.
Rewriting things with an LLM is not a crime, so you don’t need to act like it is.
But if you want to get a sense of how I noticed (before I confirmed my suspicion with machine assistance), here are some tells: "Large firms are cautious in regulatory filings because they must disclose risks, not hype." - "[x], not [y]"
"The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place." - "concrete examples" as a phrase is (unfortunately) heavily over-represented in LLM-generated content.
"Stock prices reflect broader market conditions, not just adoption of a single technology." - "[x], not [y]" - again!
"Failures of workplace pilots usually result from integration challenges, not because the technology lacks value." - a third time.
"The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest." - not just the infamous emdash, but the phrasing is extremely typical of LLMs.
And I'm responding to a comment that was generated by an LLM that was instructed to complain about LLM-generated content with a single sentence. At the end of the day, we're all stochastic parrots. How about you respond to the substance of the comment and not to whether or not there was an em dash? Unless you have no substance.
So maybe it makes you feel smart to be a stochastic parrot that squawks "LLM generated!111" every time you see an em dash, like you're a model with a million parameters, but it's a lazy dismissal and it tramples curiosity.
But most AI push is for LLMs, and all the companies you talk about seem to be using other types of AI.
> Failures of workplace pilots usually result from integration challenges, not because the technology lacks value.
Bold claim. Toxic positivity seems to be all too common among AI evangelists.
> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
If the financial crisis taught me anything, it's that if one company jumps off a bridge, the rest will follow. Assuming there must be some real value "because capitalism" misses the main proposition of capitalism: companies will make stupid decisions and pay the price for them.
There was a weird moment in the late noughties where seemingly every big consumer company was creating a presence in Second Life. There was clearly a lot of strategic interest...
Second Life usage peaked in 2009 and never recovered, though it remains somewhat popular amongst furries.
Bizarrely, this kind of happened _again_ with the very similar "metaverse" stuff a decade or so later, though it burned out somewhat quicker and never hit the same levels of farcical nonsense; I don't think any actual _countries_ opened embassies in "the metaverse", say (https://www.reuters.com/article/technology/sweden-first-to-o...).
There aren't enough programmers to justify the valuations and capex
tl;dr I was merely answering the question the article poses.
I thought we'd use it to reduce our graphics department but instead we've begun outsourcing designers to Colombia.
What I actually use it for is to save time and legal costs. For example a client in bankruptcy owes us $20k. Not worth hiring an attorney to walk us through bankruptcy filings. But can easily ask ChatGPT to summarize legal notices and advise us what to do next as a creditor.
Today it’s only the same SEO-formatted crap without an answer.
I am working on a solution.
The AI doesn't carry professional liability insurance, so this is about as good as asking one of the legal subreddits. It's probably fine in this case since the worst case is not getting the money that you were at risk of not getting anyway.
I see people using agents to develop features, but the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves. I see people vibe coding their way to working features, but when the LLM gets stuck it takes long enough for even a good developer to realize it and re-engage their critical thinking that it can wipe out the time savings. Having an LLM do code and documentation review seems to usually be a net positive to quality, but that’s hard to sell as a benefit and most people seem to feel like just using the LLM to review things means they aren’t using it enough.
Even for engineers there are a lot of non-engineering benefits in companies that use LLMs heavily for things like searching email, ticketing systems, documentation sources, corporate policies, etc. A lot of that could have been done with traditional search methods if different systems had provided better standardized methods of indexing and searching data, but they never did and now LLMs are the best way to plug an interoperability gap that had been a huge problem for a long time.
My guess is that, like a lot of other technology driven transformations in how work gets done, AI is going to be a big win in the long term, but the win is going to come on gradually, take ongoing investment, and ultimately be the cumulative result of a lot of small improvements in efficiency across a huge number of processes rather than a single big win.
Exactly my experience. I feel like LLMs have potential as Expert Systems/Smart websearch, but not as a generative tool, neither for code nor for text.
You spend more time understanding stuff than writing code, and you need to understand what you commit with or without an LLM. But writing code is easier than reviewing it, and understanding by doing is easier than understanding by reviewing (because you get one particular thing at a time and don't have to understand the whole picture at once). So I have a feeling that agents can even have a negative impact.
It seems that the smaller the task and the more tightly defined the input and output, the better the LLMs are at one-shotting.
"They're on top of it! They always email me the new file when they make changes and approve my access requests quickly."
There are limits to my stubbornness, and my first use of LLMs for coding assistance was to ask for help figuring out how to Excel, after a mere three decades of avoidance.
After engaging and learning more about their challenges, it turned out one of their "data feeds" was actually them manually copy/pasting into a web form with a broken batch import that they'd given up on submitting project requests for, which I quietly fixed so they got to retain their turnaround while they planned some other changes.
Ultimately nothing grand, but I would never have bothered if I'd had to wade through the usual sort of learning resources available or ask another person. Being able to transfer and translate higher level literacy, though, is right up my alley.
I’ve also had experiences where I started out well but the AI got confused, hallucinated, or otherwise got stuck. At least for me those cases have turned pathological because it always _feels_ like just one or two more tweaks to the prompt, a little cleanup, and you’ll be done, but you can end up far down that path before you realize that you need to step back and either write the thing yourself or, at the very least, be methodical enough with the AI that you can get it to help you debug the issue.
The latter case happens maybe 20% of the time for me, but the cost is high enough that it erases most of the time savings I’ve seen in the happy path scenario.
It’s theoretically easy to avoid by just being more thoughtful and active as a reviewer, but that reduces the efficiency gain in the happy path. More importantly, I think it’s hard to do for the same reason partially self driving cars are dangerous: humans are bad at paying attention well in “mostly safe and boring, occasionally disastrous” type settings.
My guess is that in the end we’ll see less of the problematic cases. In part because AI improves, and in part because we’ll develop better intuition for when we’ve stepped onto the unproductive path. I think a lot of it too will also be that we adopt ways of working that minimize the pathological “lost all day to weird LLM issues” problems by trying to keep humans in the loop more deeply engaged. That will necessarily also reduce the maximum size of the wins we get, but we’ll come away with a net positive gain in productivity.
It works best when the target is small and easily testable (without the LLM being able to fudge the tests, which it will do.)
For many other tasks it's like training an intern, which is worth it if the intern is going to grow and take on more responsibility and learn to do things correctly. But since the LLM doesn't learn from its mistakes, it's not a clear and worthwhile investment.
I basically use it as Google on steroids for obscure topics; for simple stuff I still use normal search engines.
You can’t convince me otherwise, we just haven’t found a ‘killer app’ yet.
Note: AWS has a hosted blockchain that you can use. [1]
PS: If anyone has read that essay, please do share the link. I can't really locate it but that's a wonderful read.
[1]. https://aws.amazon.com/managed-blockchain/
And Google and Microsoft are hellbent on pushing AI into everything. Even if users don't want it. Nope, we're gonna throw the kitchen sink at you and see if it sticks.
In the non-tech world, nobody gives a shit about AI. People and businesses go about their daily lives without thinking about things like "Hmmm...maybe I could have prompted that LLM a different way..."
In non-startup, bureaucratic companies, these reports are there to provide cover, basically to cover everyone's ass so that no one is doing anything wrong because the report said so.
Agentic AI, which is a huge buzz in enterprise, feels more like workflow and RPA (again), and people misunderstand that getting the happy flow working is only 20% of the job.
The hype does serve a purpose, though: it motivates people to try to find more possible uses for LLMs. However, as with all experiments, we should expect most of these attempts to fail.
I said a couple years ago that the big companies would have trouble monetizing it, but they'd still be forced to spend for fear of becoming obsolete.