After the AI Boom: What Might We Be Left With?
Posted 3 months ago · Active 3 months ago
blog.robbowley.net · Tech · Story · High profile
calm · mixed
Debate
80/100
Key topics
AI
Technology Bubble
Future of Tech
The article discusses the potential aftermath of the AI boom, with the community debating whether AI represents a sustainable technological advancement or a speculative bubble.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 16m after posting
Peak period: 143 comments in 0-12h
Avg / period: 32 comments
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Oct 12, 2025 at 3:40 PM EDT (3 months ago)
- First comment: Oct 12, 2025 at 3:56 PM EDT (16m after posting)
- Peak activity: 143 comments in 0-12h, the hottest window of the conversation
- Latest activity: Oct 20, 2025 at 4:26 PM EDT (3 months ago)
ID: 45561164 · Type: story · Last synced: 11/20/2025, 8:14:16 PM
It occurred to me as a teenager, and I wrote an essay on it for my uni admissions exam back in 1981, but it's not rocket science, and the idea goes back at least to John von Neumann, who came up with the 'singularity' term in the 50s.
We are well into this process already. Core chat capabilities have pretty much stalled out. But most of the attempts at application are still very thin layers over chat bots.
There is no winning scenario.
Slightly different cohorts.
I'm not really trying to be snarky; I'm trying to point out to you that you're being really vague. And that when you actually get really, really concrete about what we have, it starts to seem a little less magical than saying "computers that talk and think". A computer that is really quite good at sampling from a distribution of high-likelihood next language tokens, based upon a long and complex context window, is still a pretty incredible thing, but it seems a little less likely to put us all out of a job in the next 10 years.
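(For concreteness, here is a minimal sketch of what "sampling from a distribution of high-likelihood next language tokens" looks like mechanically, including the "temperature" knob mentioned elsewhere in the thread. The vocabulary and scores are invented for illustration; a real model produces logits over tens of thousands of tokens, conditioned on the full context window.)

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Sample one token from a softmax over raw model scores.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last candidate

# Toy scores a model might assign to continuations of "The capital of France is".
logits = {"Paris": 9.1, "Lyon": 4.2, "a": 2.0, "the": 1.5}
print(sample_next_token(logits, temperature=0.8))
```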
And it became an industry that has completely and totally changed the world. The world was just so analog back then.
>starts to seem a little less magical than saying "computers that talk and think"
Computer thinking will never become magical. As soon as we figure something out it becomes "oh that is just X". It is human thinking that will become less magical over time.
LLMs may be a stepping stone to AGI. It's impressive tech. But nobody's proven anything like that yet, and you're running on pure faith not facts here.
I'm enjoying the new LLM based tooling a lot, but nothing about it suggests that we're in any way near to AGI because it's very much a one trick pony so far.
When we see generative AI that updates its weights in real time (currently an intractable problem) as part of the feedback loop, then things might get very interesting. Until then it's just another tool in the box. CS interns learn.
I would be interested to hear the way that you see. I don't have any problem seeing a huge number of roadblocks to post-scarcity that AI won't solve, but I am open to a different perspective.
My own experience, using ChatGPT and Claude for both dev and other business productivity tasks, lends credence to the METR model of exponential improvement in task time-horizon [0]. There are obviously still significant open technical issues, particularly around memory/context management and around online learning, but extensive work is being done on these fronts, propelled amongst other things by the ARC-AGI challenge [1], and I don't see anything that is an actual roadblock to progress. If anything, from my perspective, it appears that there are significant low-hanging-fruit opportunities around plain-old software engineering and ergonomics for AI agents, more so than a need for fundamental breakthroughs in neural network architecture (although I believe that these too will come).
So then, with an increasing time horizon and improved task accuracy (much of it assured by improvements in QA mechanisms), we will see ourselves handing off more and more complex tasks to AI agents, until eventually we could have "the factory of the future ... [with] only two employees: a man and a dog", and at that stage I believe that there would be no imperative for humans to work (unless they choose to, or have a deeply ingrained Calvinist work ethic). And then, as you said, we're down to the non-technological roadblocks.
Obviously capitalists would fight to stay in control, and unlike some who expect a fully peaceful and organic transition, I do expect somewhat of a war here (whether kinetic or cold), but I do envision that when push comes to shove, those of us who believe in the free software movement and the foundational principles of democracy will be able to assert shared national/international (rather than corporate) control over the AIs and restructure society into a form where AI (and later robots) perform the work for the benefit of humans, who would all share in the bounty. I am not an economist and don't have a clear prediction on the exact form this new society would take, but from my reading of the various pilot implementations of UBI [2], I think we will see acceptance of a society where people are essentially in retirement throughout their lives. Just as currently some retired people choose to only stay home and watch TV, while others study, do art, travel the world, help raise and teach future generations or contribute to social causes close to their hearts, so we'll all be able to do what is in our hearts, without worrying about subsistence.
You may say that I'm a dreamer...
[0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
[1] https://arcprize.org/leaderboard
[2] https://en.wikipedia.org/wiki/Universal_basic_income_pilots
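(As a back-of-the-envelope illustration of what an exponentially growing task time-horizon implies, here is a small sketch. The starting horizon and doubling period are placeholders, not figures taken from the METR post [0]; substitute the published numbers to reproduce their extrapolation.)

```python
def projected_horizon(current_horizon_hours, doubling_period_months, months_ahead):
    """Extrapolate a task time-horizon that doubles every fixed period:
    horizon(t) = horizon(0) * 2 ** (t / doubling_period)."""
    return current_horizon_hours * 2 ** (months_ahead / doubling_period_months)

# Placeholder assumptions: a 1-hour horizon today, doubling every 7 months.
for months in (0, 12, 24, 36):
    hours = projected_horizon(1.0, 7, months)
    print(f"+{months:2d} months: ~{hours:6.1f} hours per autonomously completed task")
```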
Not that I think you're wrong, but come on - make the case!
I have the very unoriginal view that - yes, it's a (huge) bubble but also, just like the dot com bubble, the technology is a big deal - but it's not obvious what will stand and what will fall in the aftermath.
Remember that Sun Microsystems, a very established pre-dot com business, rose to huge heights on the bubble and was then smashed by the fall when it popped. Who's the AI bubble's Sun and who's its Amazon? Place your bets...
Extraordinary claims demand extraordinary evidence. We have machines that talk, which is a corollary to nothing.
Most ideas, even in the reasoning fields, are generated in non-linguistic processes.
Of course, some problems are solved by step-by-step linguistic (or math) A, then B, then C steps, etc., but even for those types of problems, when they get complex, the solution looks more like follow a bunch of paths to dead ends, think some more, go away, and then "Aha!" the idea of a solution pops into our head, then we back it up and make it explicit with the linguistic/logical 'chain of reasoning' to explain it to others. That solution did not come from manipulating language, but from some other cognitive processes we do not understand, but the explanation of it used language.
LLMs aren't even close to that type of processing.
That's an extremely speculative view that has been fashionable at several points in the last 50 years.
Prediction is obviously involved in certain forms of cognition, but it obviously isn't all there is to the kinds of beings we are.
Napkin scribbles
Nah, we aren't. There's a reason the output of generative AI is called slop.
It's always different this time.
More seriously: there are decent arguments that say that LLMs have an upper bound of usefulness and that we're not necessarily closer to transcending that with a different AI technology than we were 10 or 30 years ago.
The LLMs we have, even if they are approaching an upper bound, are a big deal. They're very interesting and have lots of applications. These applications might be net-negative or net-positive, it will probably vary by circumstance. But they might not become what you're extrapolating them into.
That I think is the entire mistake of this bubble. We confused what we do have with some kind of science fiction fantasy and then have worked backwards from the science fiction fantasy as if it is inevitable.
If anything, the lack of use cases is what is most interesting with LLMs. Then again, "AI" can do anything. Probabilistic language models? Kind of limited.
Even in 2002, my CS profs were talking about how GAI was a long time off, because we had been trying for decades to innovate on neural nets and LLMs and nothing better had been created, despite some of the smartest people on the planet trying.
The compute and data are both limitations of NNs.
We've already gotten really close to the data limit (we aren't generating enough useful content as a species and the existing stuff has all been slurped up).
Standard laws of physics restrict the compute side, just like how we know we will hit it with CPUs. Eventually you just cannot put heat-generating components any closer together, because they interfere with each other; we hit the physical laws of miniaturization.
No, GAI will require new architectures no one has thought of in nearly a century.
2. The category of computerized machines (of which self checkouts are one example) has absolutely revolutionized the world. Computerization is the defining technology of the last twenty years.
They revolutionized supermarkets.
And for small baskets, sure, but it was scan-as-you-shop that really changed supermarkets, and those things thankfully do not talk.
I would really like to hear you explain how they revolutionized supermarkets.
I use them every day, and my shopping experience is served far better by going to a place that is smaller than one that has automated checkout machines. (Smaller means so much faster.)
Hell, if you go to Costco, the automated checkout line moves slower than the ones manned by experienced workers.
Outside of the software world it's mostly a (much!) better Google.
Between now and a Star Trek world, there's so much to build that we can use any help we can get.
Indeed. I was using speech to text three decades ago. Dragon Naturally Speaking was released in the 90s.
It's blatantly obvious to see if you work with something you personally have a lot of expertise in. They're effectively advanced search engines. Useful sure.. but they're not anywhere close to "making decisions"
An RNG can do what you're describing
From where I look at it, LLMs are flawed in many ways, and people who see progress as inevitable do not have a mental model of the foundations of those systems that would let them extrapolate. Also, people do not know any other forms of AI or have thought hard about this stuff on their own.
The most problematic things are:
1) LLMs are probabilistic and a continuous function, forced by gradient descent. (Just having a "temperature" seems so crazy to me.) We need to merge symbolic and discrete forms of AI. Hallucinations are the elephant in the room. They should not be put under the rug. They should just not be there in the first place! If we try to cover them with a layer of varnish, the cost will be very large in the long run (it already is: step-by-step reasoning, mixture of experts, RAG, etc. are all varnish, in my opinion)
2) Even if generalization seems ok, I think it is still really far from where it should be, since humans need exponentially less data and generalize to concepts way more abstract than AI systems. This is related to HASA and ISA relations. Current AI systems do not have any of that. Hierarchy is supposed to be the depth of the network, but it is a guess at best.
3) We are just putting layer upon layer of complexity instead of simplifying. It is the victory of the complexifiers and it is motivated by the rush to win the race. However, I am not so sure that, even if the goal seems so close now, we are going to reach it. What are we gonna do? Keep adding another order of magnitude of compute on top of the last one to move forward? That's the bubble that I see. I think that that is not solving AI at all. And I'm almost sure that a much better way of doing AI is possible, but we have fallen into a bad attractor just because Ilya was very determined.
We need new models, way simpler, symbolic and continuous at the same time (i.e. symbolic that simulate continuous), non-gradient descent learning (just store stuff like a database), HAS-A hierarchies to attend to different levels of structure, IS-A taxonomies as a way to generalize deeply, etc, etc, etc.
Even if we make progress by brute forcing it with resources, there is so much work to simplify and find new ideas that I still don't understand why people are so optimistic.
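(As a toy illustration of the "non-gradient-descent learning (just store stuff like a database)" idea in the comment above, here is a minimal nearest-neighbour memory: learning is an insert, prediction is a lookup. It is a sketch of the general direction, not something the commenter proposed.)

```python
import math

class MemoryLearner:
    """Learning by storage: no gradients, no weights, just remembered examples."""

    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def learn(self, features, label):
        # "Training" is a single append; the new fact is usable immediately.
        self.examples.append((tuple(features), label))

    def predict(self, features):
        # Recall the label of the closest stored example (1-nearest-neighbour).
        if not self.examples:
            raise ValueError("no examples stored yet")
        closest = min(self.examples, key=lambda ex: math.dist(ex[0], features))
        return closest[1]

# Usage: the model "knows" a new fact the moment it is stored.
m = MemoryLearner()
m.learn((0.0, 0.0), "origin")
m.learn((5.0, 5.0), "far corner")
print(m.predict((0.5, -0.2)))  # -> "origin"
```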
Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.
There is a lot of "transformer LLMs are flawed" going around, and a lot of alternative architectures being proposed, or even trained and demonstrated. But so far? There's nothing that would actually outperform transformer LLMs at their strengths. Most alternatives are sidegrades at best.
For how "naive" transformer LLMs seem, they sure set a high bar.
Saying "I know better" is quite easy. Backing that up is really hard.
Why is there no fundamental limitation that would prevent LLMs from matching human hallucination rates? I'd like to hear more about how you arrived at that conclusion.
This is not something that's impossible for an LLM to do. There is no fundamental issue there. It is, however, very easy for an LLM to fail at it.
Humans get their (imperfect, mind) meta-knowledge "for free" - they learn it as they learn the knowledge itself. LLM pre-training doesn't give them much of that, although it does give them some. Better training can give LLMs a better understanding of what the limits of their knowledge are.
The second part is acting on that meta-knowledge. You can encourage a human to act outside his knowledge - dismiss his 'out of my depth' objection and have him provide his best answer anyway. The resulting answers would be plausible-sounding but often wrong - "hallucinations".
For an LLM, that's an unfortunate behavioral default. Many LLMs can recognize their own uncertainty sometimes, flawed as their meta-knowledge is - but not act on it. You can run "anti-hallucination training" to make them more eager to act on it. Conversely, careless training for performance can encourage hallucinations instead (see: o3).
Here's a primer on the hallucination problem, by OpenAI. It doesn't say anything groundbreaking, but it does sum up what's well known in the industry: https://openai.com/index/why-language-models-hallucinate/
OpenAI claims that hallucination isn't an inevitability because you can train a model to "abstain" rather than "guess" when giving an "answer". But what does that look like in practice?
My understanding is that an LLM's purpose is to predict the next token in a list of tokens. To prevent hallucination, does that mean it is assigning a certainty rating to the very next token it's predicting? How can a model know if its final answer will be correct if it doesn't know what the tokens that come after the current one are going to be?
Or is the idea to have the LLM generate its entire output, assign a certainty score to that, and then generate a new output saying "I don't know" if the certainty score isn't high enough?
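(One common way to implement the second option - score the whole output, then abstain - is to average the per-token log-probabilities the model assigned to its own output and compare that against a calibrated threshold. The sketch below is illustrative, not OpenAI's method; the log-prob values and the threshold are placeholders for what a real model API would report.)

```python
def answer_or_abstain(generated_text, token_logprobs, threshold=-1.0):
    """Return the answer only if the model's own confidence clears a threshold.

    token_logprobs: log-probability the model assigned to each emitted token.
    The mean log-prob is a crude sequence-level confidence score; the cutoff
    is an arbitrary placeholder that would need calibration in practice.
    """
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return generated_text if mean_logprob >= threshold else "I don't know."

# Toy values standing in for per-token log-probs from a model API.
confident = [-0.1, -0.3, -0.2]   # mean -0.2 -> answer
shaky = [-2.5, -3.1, -1.9]       # mean -2.5 -> abstain
print(answer_or_abstain("The answer is X.", confident))
print(answer_or_abstain("The answer is Y.", shaky))
```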
"Next token prediction" is often overstated - "pick the next token" is the exposed tip of a very large computational process.
And LLMs are very sharp at squeezing the context for every single bit of information available in it. Much less so at using it in the ways you want them to.
There's enough information at "no token emitted yet" for an LLM to start steering the output towards "here's the answer" or "I don't know the answer" or "I need to look up more information to give the answer" immediately. And if it fails to steer it right away? An LLM optimized for hallucination avoidance could still go "fuck consistency drive" and take a sharp pivot towards "no, I'm wrong" mid-sentence if it had to. For example, if you took control and forced a wrong answer by tampering with the tokens directly, then handed the control back to the LLM.
Can you help correct where I'm going wrong?
If what you're claiming is that external, vaguely-symbolic tooling allows a non-symbolic AI to perform better on certain tasks, then I agree with that.
If you replace "a non-symbolic AI" with "a human", I agree with that too.
It really irks me that the direction every player seems to be going in is to layer LLMs on top of each other, with the goal of saving money on inference while still making users believe that they are returning high-quality results.
Instead of discovering radical new ways of improving the algorithms, they are only marginally improving existing architectures, and even that is debatable.
In the end leaving the world changed, but not as meaningfully or positively as promised.
Maybe say something concrete? What's a positive real world impact of LLMs where they aren't hideously expensive and error prone to the point of near uselessness? Something that isn't just the equivalent of a crypto-bro saying that their system for semi-regulated speculation (totally not a rugpull!) will end the tyranny of the banks.
---
Less flippantly, they are excellent for self-studying university-level topics. It's like being able to ask questions to a personal tutor/professor.
Is it not the same with AI? With the right processes you can get consistent results from inconsistent tools.
This is a pretty massive difference between the two, and your narrative is part of why AI is proving to be so harmful for education in general. Delusional dreamers and greedy CEOs talking about AI being able to do "PhD level work" have potentially ruined a significant chunk of the next generation by leading them to think they are genuinely learning from asking AI "a few questions" and taking the answers at face value, instead of struggling through the material to build true understanding.
I’ll take a potential solution I can validate over no idea whatsoever of my own any day.
If any answer is acceptable, just get your local toddler to babble some nonsense for you.
If you have to validate what the LLM says, I assume you'd do that by researching primary sources and works by other experts. At that point, the LLM did nothing except charge you for a few tokens before you went down the usual research path. I could see LLMs being good for providing an outline of what you'd need to research, which is definitely helpful but not in a singularity way.
The woo is laughable. A cryptobro could have pulled the same nonsense out of their ass about web 3.0
Oh, I guess you mean when they grow up.
It would have taken me a whole day, easily, to do on my own.
Useless it is emphatically not
She doesn't like using Claude, but she accepts the necessity of doing so, and it reduces 3-month projects to 2-week projects. Claude is an excellent debating partner.
Crypto? Blockchain? No-one sceptical could ever see the point of either, unless and until their transaction costs were less than that of cash. That... has not happened, to put it mildly.
These things are NOT the same.
Basically the hype cycle is as American as Apple Pie.
The hype is real, but there’s actual practical affordable understandable day-to-day use for the tech - unlike crypto, unlike blockchain, unlike web3.
So why's it different this time?
302 more comments available on Hacker News