What If the Singularity Lies Beyond a Plateau We Cannot Cross?
Posted 3 months ago · Active 3 months ago
jasonwillems.com · Research · story
Key topics: Artificial Intelligence, Singularity, Technological Progress
The article proposes that the technological singularity may be unachievable due to a potential 'plateau' in AI development, sparking discussion on the limitations and future of AI research.
Snapshot generated from the HN discussion
Discussion Activity
- First comment: N/A
- Peak period: 17 comments in 0-12h
- Avg / period: 5.6
- Comment distribution: 28 data points (based on 28 loaded comments)
Key moments
- 01 Story posted: Oct 9, 2025 at 5:13 PM EDT (3 months ago)
- 02 First comment: Oct 9, 2025 at 5:13 PM EDT (0s after posting)
- 03 Peak activity: 17 comments in 0-12h (hottest window of the conversation)
- 04 Latest activity: Oct 16, 2025 at 8:29 PM EDT (3 months ago)
ID: 45533152 · Type: story · Last synced: 11/20/2025, 1:23:53 PM
No new unicorns, no new kernel designs, no need for newly engineered software that often. With the industry in stasis, it can finally be regulated to the same degree as plumbing, haircutting, or other licensed fields: an industry no longer any more exceptional than any other. The gold rush is over; the boring process of subjecting it to the will of the people and politicians begins.
I think we're also getting to the limits, across the board, soon. Consider AWS S3, infrastructure for society: 100 trillion objects in 2021; 350 trillion objects in 2025. Objects that need new hard drives every 3-5 years to store, replenished on a constant cycle. How soon until we reach the point where even a minor prolonged disruption to hard drives, or GPUs, or DRAM, forces hard choices?
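A back-of-the-envelope check on the growth those S3 figures imply (using only the comment's numbers, 100 trillion objects in 2021 and 350 trillion in 2025):

```python
# Compound annual growth rate implied by 100T objects (2021) -> 350T (2025).
start, end, years = 100e12, 350e12, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 37% per year
```

At that rate the object count roughly doubles every couple of years, which is why any prolonged supply disruption compounds quickly.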
But I don't think the market is saturated just yet.
Even when unicorns stop appearing, SWE salaries may go down, but that'll also open new opportunities.
But we may one day have to contend with fewer "new" paradigms and less of the ultra-rapid industry growth that accompanies them (dotcom, SaaS, ML, etc.). Will "software eating the world" be enough to counteract this long term? Hard to say.
Do you even need an MMU if you have memory-safe languages?
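The intuition behind that question: a memory-safe runtime enforces bounds checks in software, which covers part of what hardware page protection otherwise guards against. A toy Python illustration:

```python
# Bounds are enforced by the language runtime, not by an MMU page fault.
buf = [0] * 8

def read(i):
    """A bounds-checked read: the runtime refuses out-of-range access."""
    try:
        return buf[i]
    except IndexError:
        return None  # no adjacent memory is ever read

assert read(3) == 0
assert read(100) is None  # in C, this could silently read neighboring memory
```

(An MMU still does other work, such as virtual address translation, so this only covers the memory-protection half of the question.)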
Honestly, maybe semiconductor scaling is a problem for AI, but for most other software today, the problems are bugs, bloat, and latency.
The replenishment of these hard drives is baked into the cost of S3. If there is a major disruption of hard drive supply, then S3 prices will definitely rise, and enterprises that currently store lots of garbage they don't need will be priced out of keeping that data on hard drives, pushed into Glacier or, at worst, into full deletion of old junk data. That's not necessarily a bad thing, in my opinion.
There is lots of junk data in S3 that should probably be in cold storage rather than spinning metal, if merely for environmental reasons.
But our software could be orders of magnitude more efficient. Am I wrong?
Slowing transistor scaling just takes away one domain we can depend on for improvements; the others are all still valid, and we'll probably come to invest more effort in them.
Whereas tailored hardware and software improvements are unlikely to keep yielding such payoffs again and again.
So the argument that cost of improvements will go up is not wrong. And maybe improvements will be more linear than exponential.
We also don't know that the current semiconductor tech stack is the best. But it's fair to argue that the cost of moving off a local optimum to a completely different technology stack would be wild.
Asimov's story The Last Question ends with the Multivac machine having collected all the data in the universe and still not answering the question "how can entropy be reversed?", so it spends an immeasurable amount of time processing the data in all possible ways. The article argues that we might not get to "the singularity" because progress will stop, but even if we can't make better transistors, we can make more of them, and we can spend longer processing data with them. If what we're missing in an AGI is architectural it might only need insight and distributed computing, not future computers.
> "We built our optimism during a rare century when progress got cheaper as it got faster. That era may be over."
This effect of progress building on progress goes back a hundred years before that, and a hundred years before that. The first practical steam engine of the early 1700s was weak, inefficient, and coal-hungry, and what made it 'practical' is that it pumped water out of coal mines. Coalmine owners could get more coal by buying a steam engine; the engine made its own fuel cheaper and easier to extract, and gave them more coal to sell. Probably this pattern goes back much further, because everything builds on everything, but this was a key industrial-revolution turning point long before the article's claim. The era may be another two hundred years away from being over.
> "There are still areas of promise for step-function improvements: fusion, quantum computing, high-temperature superconductors. But scientific progress is not guaranteed to continue."
Opening with the recursively improving AGI and then having a section on "areas of promise for step-function improvements" without mentioning any chance of an AGI breakthrough? Neuralink-style cyborg interfaces, biology, genetics, health, anti-ageing, new materials or metamaterials, nanotechnology, distributed computing, vibe coding: no possible areas for step changes in any of those?
> "But the burden of proof lies with those claims. Based on what we know today, a plateau is inevitable. Within that plateau, we can only speculate:"
Based on what we know today there isn't "a" plateau, there are many, and they give way to newer things. Steam power plateaued, propellor aircraft plateaued, sailboat speed and size plateaued, cog and gear computer speed plateaued, then electro-mechanical computer speed, then valve computer speed, then discrete logic speed, then integrated circuit speed, then single core, then what, CPUs, then GPUs, then TPUs...
> "Are therapies for broad set of complex autoimmune diseases ahead of the plateau? Probably."
How many autoimmune diseases have been cured, ever? Where does this "Probably" come from - the burden of proof very much lies with that probably.
> "Will we have Earth-based space elevators before the plateau? Probably not."
We don't have a rope strong enough to hang 36,000 km, or a way to make one, or a way to lift that much mass into geostationary orbit in one go. But if we could make a cable thicker in space and thinner at the ground, and launch it in pieces and join it together, we might not be that far away from a plausible space elevator. Like if Musk got a bee in his bonnet and opened his wallet wide, I wouldn't bet against SpaceX having a basic one by 2040. Or 2035. I probably would bet against 2028.
Regulatory and economic barriers are probably the easiest to overcome. But they are an obstacle. All it takes is for public sentiment to turn a bit more hostile towards technology, and progress can stall indefinitely.
> Opening with the recursively improving AGI and then having a section of "areas of promise for step-function improvements" and not mentioning any chance of an AGI breakthrough?
The premise of the article is that the hardware that AGI (or really ASI) would depend on may itself reach diminishing returns. What if progress is severely hampered by the need for one or two more process improvements that we simply can’t eke out?
Even if the algorithms exist, the underlying compute and energy requirements might hit hard ceilings before we reach "recursive improvement."
> How many autoimmune diseases have been cured, ever? Where does this “Probably” come from — the burden of proof very much lies with that probably.
The point isn't that we're there now, or even close. It’s that we likely don’t need a step-function technological breakthrough to get there.
With incremental improvements in CAR-T therapies — particularly those targeting B cells — Lupus is probably a prime candidate for an autoimmune disease that could feasibly be functionally "cured" within the next decade or so (using extensions of existing technology, not new physics).
In fact, one of the strongest counterpoints to the article's thesis is molecular biology, which has a remarkable amount of momentum and a lot of room left to run.
> We might not be that far away from a plausible space elevator.
I haven't seen convincing arguments that current materials can get us there, at least not on Earth. But the moon seems a lot more plausible due to lower gravity and virtually no atmosphere.
But I'd be very happy to be wrong about this.
> Based on what we know today, there isn’t “a” plateau — there are many, and they give way to newer things.
True. But the point is that when a plateau is governed by physical limits (for example, transistor size), further progress depends on a step-function improvement — and there's no guarantee that such an improvement exists.
Steam and coal weren't limited by physics. That's also why I didn't mention lithium batteries in the article: surely we can move beyond lithium to other chemistries, so the ceiling on what lithium can deliver isn't relevant. But for fields bounded by fundamental constants or quantum effects, there may not necessarily be a successor.
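One concrete example of a bound set by fundamental constants, offered here as illustration rather than as part of the original article, is Landauer's limit: the minimum energy needed to erase one bit of information at a given temperature.

```python
import math

# Landauer's bound: erasing one bit at temperature T costs at least
# k_B * T * ln(2) joules -- a floor set by fundamental constants.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(temp_kelvin=300.0):
    return K_B * temp_kelvin * math.log(2)

print(f"{landauer_joules_per_bit():.2e} J per bit erased at 300 K")
```

Today's hardware sits many orders of magnitude above this floor, so it's a distant limit, but it is exactly the kind of ceiling that no successor technology can engineer around.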
Smartphones sell 1.2 billion units every year. Add in server, laptop, and embedded chips, GPUs, TPUs: even if transistor density and process improvements stall soon, the amount of compute power on Earth is vast and increasing rapidly until 'soon' happens, and can still increase rapidly afterward by churning out more compute with the best available process indefinitely. You haven't made a case that process improvements are necessary, or that they are not going to happen; you've only said that they might be necessary and might not happen. "All it takes is for public sentiment to turn a bit more hostile towards technology, and progress can stall indefinitely" - again, true in terms of process improvements, but not true in general, because we could potentially make progress towards AGI with existing compute power by changing how we organise it and what we do with it; at least, you haven't given a good reason why that couldn't happen.
> "In fact, one of the strongest counterpoints to the article's thesis is molecular biology, which has a remarkable amount of momentum and a lot of room left to run."
One of the strongest counterpoints is that human brains exist: there's definitely some way to get to human-equivalent intelligence, on Earth, within the laws of physics and the energy and temperature constraints that exist here. Handwaving "what if AGI is impossible because of the laws of physics" requires you to make a case for why the laws of physics are a blocker, not just state that there are some physical limits to some things sometimes.
Yes, transistors are hard to make smaller, but that's not the only option. We've developed stacked 3D transistors to pack more into a small volume, hardware acceleration designed in for more and more algorithms (compression, video encoding, encryption) to better use existing transistor budgets, better cache-friendly and SIMD-friendly algorithms, and chips where parts can power down to give other parts more power headroom without hitting thermal limits. There's more than one S-curve involved, not just one plateau.
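The software side of that point can be made concrete: on the exact same hardware, a better algorithm is its own S-curve, independent of transistor scaling. A minimal sketch comparing a quadratic two-sum search against a linear hash-based one:

```python
# Same hardware, better algorithm: O(n^2) nested loops vs O(n) with a set.
def two_sum_naive(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (nums[i], nums[j])
    return None

def two_sum_hash(nums, target):
    seen = set()
    for x in nums:
        if target - x in seen:   # constant-time membership check
            return (target - x, x)
        seen.add(x)
    return None

nums = list(range(2000))
assert two_sum_naive(nums, 3997) == two_sum_hash(nums, 3997) == (1998, 1999)
```

Both return the same answer, but the hash version does ~n work instead of ~n²/2: the kind of gain no process node is needed for.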
Try to see the binary as an impassable ceiling that turns us into craven, greedy, status junkie apes. Mediocrity gone wild. That's the binary, and it's seeped already into language, which is symbolic arbitrariness. We don't know how to confront this because we've never confronted it collectively. There was never a front page image on Time Magazine that stated: Are we arbitrary?
Yet we are; we're the poster child for the sentient that's extinct and doesn't know it.
Each stage of our steady drive to emasculate signaling in favor of adding value displays this openly. Each stage of taking action and expression and rendering them as symbol, then binary, then as counted token, into pretend intelligence showcases a lunatic drive to double down on folk science using logic as a beard in defiance of scientific reason.
> "our mission should remain the same: accelerate to the maximum extent possible."
I think you need to justify why hurrying to be AI servants should be our mission :-|
The acceleration we've experienced has allowed us to "outrun" our problems. In earlier generations, that meant famine or disease. Today, it might be climate change. Tomorrow, it'll be something else entirely.
Technological progress has generally been the reason humanity should be optimistic against challenges: it gives us ever improving tools to solve our hardest problems faster than we succumb to them. Without it, that optimism becomes much harder to justify.
Even if there is a plateau we can't cross, if we believe we derive more benefit from technology than the problems it creates, it makes sense to extract as much progress as we can from the physics we have.
I use "singularity" more abstractly here: a hypothetical point where technological progress begins to accelerate recursively, with heavily reduced human intervention.
My point isn't theological or utopian, just that the physical limits of computation, energy, and scale make that kind of runaway acceleration far less likely IMO than many assume.
https://wiki.evageeks.org/S%C2%B2_Engine
In time, AI and VR/AR will converge to allow us to evolve new ways to educate/entertain new generations, to distill knowledge in a faster and much more reliable way. We will experience societal upheavals before the "plateau". Our current world order will probably experience major changes.
AGI is probably a long, long way ahead of us -- in their current state, LLMs are not going to spontaneously develop sentience. There is a massive scale (power, resources, space, etc.) issue to contend with.
Most people don't even get close to their genetic limits anyway. We're capable of much more than we realize but we fall into mental traps and fool ourselves into thinking we're incapable. Some guy just set a world record by holding his breath for 29 minutes: a few years ago most people would have said that's impossible.
https://www.uow.edu.au/media/2025/the-science-behind-a-freed...
Hmm, the cultural zeitgeist is about LLMs.
Are LLMs improving anything (in the sense of optimization)? I think LLMs are enabling us to automate tasks that are tedious and don't really add value (e.g., compliance tasks). And they are helping us create art, content, and ads. I'm not aware of LLMs optimizing systems, let alone themselves. But I'm not very tuned in to all the applications.