Investors Expect AI Use to Soar, but That's Not Happening
Original: Investors expect AI use to soar. That's not happening
Mood: skeptical
Sentiment: neutral
Category: news
Discussion activity: very active
First comment: 47s after posting
Peak period: 37 comments in Hour 1
Avg / period: 13.2 comments
Key moments
- Story posted: Nov 26, 2025 at 12:57 PM EST (8h ago)
- First comment: Nov 26, 2025 at 12:58 PM EST (47s after posting)
- Peak activity: 37 comments in Hour 1, the hottest window of the conversation
- Latest activity: Nov 26, 2025 at 5:37 PM EST (3h ago)
x="Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr"
y=https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening
busybox wget -U $x -O 1.htm $y
firefox ./1.htmBeen doing this for many years now. It's a short list, small enough to be contained in the local fwd proxy config
# economist.com
http-request set-header user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr" if { hdr(host) -m end economist.com }
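To sanity-check that a rule like the one above is actually applied, you can look at the headers the proxy forwards. A hypothetical example, not from the comment (it assumes the proxy listens on 127.0.0.1:8080 and uses httpbin.org as a header echo):
# send a request through the local proxy and inspect the forwarded headers
http_proxy=http://127.0.0.1:8080 busybox wget -q -O - http://httpbin.org/headers
# the echoed User-Agent should be the Chrome/127 "Lamarr" string, not busybox's default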
I don't use curl, wget, browser extensions/add-ons, etc. except in HN examples. I don't use command line arguments like "-A" or "-U". The proxy controls the HTTP headers.
There was a storm of hype the last couple of weeks for Gemini 3 and everyone, correctly, rolled their eyes. Investors are demanding a return and it's not happening. They're just going to have to face reality at some point.
The next hype wants to be quantum computing, but it's just not there yet - never mind the lack of real-world applications.
I thought nVidia would start promoting GPUs (whole data centers) to run classical simulations of QC to develop the applications while real hardware gets figured out.
Probably more likely though to be something novel that few took seriously before it demonstrates utility. And this is the issue for QC, we already know what it’s useful for: a handful of niche search algorithms. It’s a bit like fusion in that even if you work out the (very significant) engineering issues you’re left with something that while useful is far from transformative.
VR -> Cloud -> Crypto -> VR -> AI -> ?
Just because they happened to be in the right place, at the right time, and idling, they get paid 10M USD+ due to stock option vesting.
Sounds like crypto^2, money is spread completely unfairly and completely disconnected from actual efforts.
Good that we won't need money anymore, thanks to AGI, right?
did he do something other than that podcast?
Google Glass comes to mind, which died 11 years ago and XR is only just now starting to resurface.
Tablets also come to mind, pre-iPad, they more or less failed to achieve any meaningful adoption, and again sort of disappeared for a while until Apple released the iPad.
Then you have Segway as an example of innovation failure, one which never really returned in the same form the way the others did; instead we now have e-scooters and e-bikes, which fit better into existing infrastructure and cultural attitudes.
It's quite possible LLMs are just like those other examples, and the current form is not going to be the successful form the technology takes.
The internet famously doubled in connectivity every 100 days during its expansion era. Its usefulness was blindingly obvious - there was no need for management to send out emails warning that they were monitoring internet usage and that you'd better make sure you were using it enough. Can you imagine!
We are at a remarkable point in tech. The least-informed people in an organization (execs) are pushing a technology onto their organizations. A jaw-droppingly enormous amount of capital is being deployed in essentially a "pushing on a rope" scenario.
> at least 10% of the employees use GenAI daily.
Remember that this includes people who are forced to use it (otherwise they wouldn't meet KPIs and would expect conversations with HR).
It has been my experience that technology has to perform significantly better than people do before it gets massively adopted. Self-driving cars come to mind. Tesla has self-driving that almost works everywhere, but Waymo has self-driving that really works in certain areas. Adoption rates for consumers have been much higher with Waymo (I was surrounded by 4 yesterday) and they are expanding rather rapidly. I have yet to see a self-driving Tesla.
- Accidentally tapping AI mode in Google Search counts as an AI search. DDG doesn't even wait for you to tap and triggers an AI response. It still counts as AI use even if you didn't mean to use it
- OpenAI, Google and Microsoft have been advertising heavily (usage will naturally go up)
- Scammers using GenAI to scam increases AI usage and GenAI is GREAT for scammers
- Using AI after a meeting to get a summary is nice but not enough to make a visible impact on a company's output. Most AI usage falls in this bucket
This tech was sold as civilisation-defining. Not GPT-X but the GPT that is out now. Tech that was "ready to join the workforce", while the reality is that these tools are not reliable in the sense he implied. They are not "workers" and won't change the output of your average company in any significant way.
Sweet-talking investors is easy, but walking the talk is another thing altogether. Your average business has no interest in, or time for, supervising a worker that behaves unpredictably at random times and doesn't learn not to make mistakes when told off.
What I wonder, beyond "using" AI, is what value companies are actually seeing. Revenue at both OpenAI and Anthropic is growing rapidly at the moment, but it's not clear whether individual companies are really growing their usage, or whether it's just everyone starting to try it out.
Personally, I have used it sparingly at work, as the lack of memory seems to make it quite difficult to use for most of my coding tasks. I see other people spending hours or even days trying to craft sub-agents and prompts, but not delivering much, if any, output above average. Any output that looks correct but really isn't causes a number of headaches.
For the VCs, one issue is the constant increase in compute. Currently it looks to me like every new release is only slightly better, but the compute and training costs increase at the same rate. The AI companies need end users to need their product so much that they can significantly increase the price. I think this is what they want to see in "adoption": demand so high that they can see a future of increasing prices.
40% of companies and 10% of employees can be using AI daily, but just for a small number of tasks, and that usage can be leveling off.
At the same time, AI can be so inefficient that servicing this small amount of usage is running providers out of capacity.
This is a bad combination because it points to the economic instability of the current system. There isn't enough value to drive higher usage and/or higher prices and even if there was, the current costs are exponentially higher.
Companies are shoving AI into everything and making it intrusive into everyone's workflow. Thus they can show how "adoption" is increasing!
But adoption and engagement don't equal productive, useful results. In my experience they simply don't, and the bottom is going to fall out of all these adoption metrics when people see the productivity gains aren't real.
The only place I've seen real utility is coding. All other tasks, such as Gemini for document writing, produce something that's about 80% OK and 20% errors and garbage. The work of going back through with a fine-toothed comb to root out the garbage is actually more work, and less productive, than simply writing the darn thing from scratch.
I fear that the future of AI driven productivity is going to push a mountain of shoddy work into the mainstream. Imagine if the loan documents for your new car had all the qualities of a spam email. It's going to be a nightmare for the administrative world to untangle what is real from the AI slop.
But yes, I do use a lot more AI than I used to 6 months ago - some of it internally built, much of it sourced externally. I bet I will be using even more AI going forward.
I think it is inevitable!
Unfortunately it's the coders who are most excited to put themselves out of business with incredible code-generation facilities. The techies that remain employed will be the feature vibers with 6-figure salaries supplied by the efforts of the now-unemployed programmers. The cycle will thus continue.
The article quietly ignored two better explanations: the day to day work of executives can be automated more easily (Manna vibes) and/or the execs have a vested interest in AI succeeding so they can cut headcount so they are evangelists for AI.
Medical doctors as well: officially 0%, but in reality?
Also, many programmers hide the truth, because it is quite difficult to justify their salary (which was priced in pre-AI times, when programming was much more difficult).
A popular belief these days is that investors from 2000 ultimately got it right. Truth - they simply got it wrong. They dumped tons of money into things that had no hope of justifying an ROI. They thought adoption of the technology would happen at a pace that was unprecedented - and, as it turned out, not even possible. They assumed things would happen in 3 years that actually took 20. Yes, shocker!
It's cut the time I need to produce projects from a usual span of 4-20 days down to 1-2 days, with another 2-3 for testing. Of course I still bill the time it would have taken me, but for a professional it can be a great improvement.
While my country will be slow to adopt - we haven't even fully adopted smartphones yet, hooray Germany - it will have to adopt eventually (in 10 years or so).
This may be a flippant comment, but it actually represents one of the reasons it is difficult to track GenAI usage!
Multiple sources have hypothesized (often based on discrepancies in data) that the gains from workers using GenAI are not necessarily propagated to their employers. E.g. any time savings may be dedicated to other professional or leisure pursuits.
Let's compare this to the adoption of the internet. Mosaic was released in 1993. Businesses adopted the internet progressively during the 90s, starting slowly but accelerating toward the decade's end, with broad adoption of the internet as a business necessity by 2000.
Three years is a ridiculously small amount of time for businesses to make dramatic changes.
(soar - overestimated)
> In recent earnings calls, nearly two-thirds of executives at S&P 500 companies mentioned AI. At the same time, the people actually responsible for implementing AI may not be as forward-thinking, perhaps because they are worried about the tech putting them out of a job.
Ah, those brave, forward-looking executives with their finger on the pulse of the future while their employees are just needlessly stalling adoption. Completely absent from the article is the possibility that the technology is not as revolutionary as claimed.
This is the point.
This is what matters.
A revolutionary technology birthed in a bonfire of cash
I use coding agents often, but I don't burn all the tokens out of my Claude Max plan and ChatGPT Business plan with two seats.
https://www.genaiadoptiontracker.com/
TFA presents the most pessimistic stat it could find: daily GenAI usage at work growing from 12.1% to 12.6% in a year. (Interestingly there was a dip to 9% in Nov 2024; maybe end-of-year holidays?)
It does not mention that the same tracker also shows that overall usage (at and outside work, at least once in the past week) has steadily climbed from 44% to 54%. That is 10 percentage points of growth in a year. (This may also be why OpenAI reveals WAU rather than DAU; most people use it regularly on a weekly, not daily, basis.)
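As a back-of-the-envelope comparison (my arithmetic, using only the figures quoted above), the two stats imply very different relative growth rates:
awk 'BEGIN {
  printf "daily use at work:  +%.1f pp (about %.0f%% relative growth)\n", 12.6 - 12.1, (12.6 / 12.1 - 1) * 100
  printf "any use last week:  +%.0f pp (about %.0f%% relative growth)\n", 54 - 44, (54 / 44 - 1) * 100
}'
That is roughly 4% relative growth for the daily-at-work figure versus about 23% for overall weekly use.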
Here is something even more interesting from the same authors at the St Louis Fed using the same data:
https://www.stlouisfed.org/on-the-economy/2025/nov/state-gen...
Really, read that article, it is short and a bit astounding. Money quote:
> When we feed these estimates into a standard aggregate production model, this suggests that generative AI may have increased labor productivity by up to 1.3% since the introduction of ChatGPT. This is consistent with recent estimates of aggregate labor productivity in the U.S. nonfarm business sector. For example, productivity increased at an average rate of 1.43% per year from 2015-2019, before the COVID-19 pandemic. By contrast, from the fourth quarter of 2022 through the second quarter of 2025, aggregate labor productivity increased by 2.16% on an annualized basis. Relative to its prepandemic trend, this corresponds to excess cumulative productivity growth of 1.89 percentage points since ChatGPT was publicly released.
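A rough check of that arithmetic (my own sketch, assuming roughly 2.6 years between ChatGPT's public release in late 2022 and Q2 2025):
awk 'BEGIN {
  pre = 1.0143; post = 1.0216; years = 2.6   # pre-pandemic vs post-ChatGPT annualized productivity growth
  excess = (post / pre) ^ years               # cumulative growth relative to the pre-pandemic trend
  printf "excess cumulative productivity growth: about %.1f pp\n", (excess - 1) * 100
}'
This prints roughly 1.9 pp, in line with the 1.89 percentage points quoted above.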