AI Pullback Has Officially Started
Posted 2 months ago · Active 2 months ago
planetearthandbeyond.co · Tech · story
Sentiment: skeptical / mixed · Debate: 80/100
Key topics
AI Adoption
AI Limitations
Tech Hype
The article claims that the AI pullback has started, but HN commenters are divided on whether AI is living up to its hype and whether its adoption is slowing down.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
First comment: 1h after posting
Peak period: 13 comments in the 10-12h window
Avg per period: 6.2 comments
Comment distribution: 62 data points (based on 62 loaded comments)
Key moments
1. Story posted: Oct 26, 2025 at 2:01 AM EDT (2 months ago)
2. First comment: Oct 26, 2025 at 3:26 AM EDT (1h after posting)
3. Peak activity: 13 comments in the 10-12h window (hottest window of the conversation)
4. Latest activity: Oct 27, 2025 at 6:03 AM EDT (2 months ago)
Seriously though, "AI fucks up" is a known thing (as is "humans fuck up"!), and the people who are using the tech successfully account for that and build guardrails into their systems. Use version control, build automated tests (e2e/stress, not just unit), update your process so you're not incentivizing dumb shit like employees dumping unchecked AI PRs, etc.
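A minimal sketch of one such guardrail, assuming a Python project with pytest suites split into unit and e2e directories (the paths and commands are illustrative assumptions, not anything from the comment): a small pre-merge gate that rejects any change, human- or AI-authored, unless the automated tests pass.

```python
import subprocess
import sys

# Hypothetical pre-merge gate: run the automated suites and refuse the change
# if anything fails. Directory layout and commands are illustrative assumptions.
CHECKS = [
    ["pytest", "tests/unit"],   # fast unit tests
    ["pytest", "tests/e2e"],    # end-to-end tests against a staging setup
]

def gate() -> int:
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"Check failed: {' '.join(cmd)} -- rejecting this change.")
            return 1
    print("All checks passed; change can be merged.")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```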
there are roughly 2.09% of SWEs that actually know what they are doing, so the other 97.91% generally produce garbage (after 30 years doing this shit I have exactly once been brought in to a project (I have been working as a consultant for a long time now) and gone "wow, now this is a beautiful codebase!")
aka you don't maybe think that, as an outside consultant, the nature of the job means you'd rarely be brought in to fix "beautiful codebases"...?
I have probably worked with 300-400 SWEs directly, and of them there is only one I'd trust to write code if my life depended on it. And I think that is likely in line with how many SWEs are actually great at their jobs.
LLMs don't learn, nor do they operate with any sort of intent towards precision.
We can develop around, plan for, and predict most common human errors. Also, humans typically get smarter and learn from their mistakes.
LLMs will go on making the same ridiculous mistakes, confidently making up bullshit frameworks, methods, and code, and no matter how much correction you try to offer, they will never get any better until the next multi-billion-dollar model update. And even then, it's more of a crossed-fingers situation than an inevitability of improvement and growth.
I hate hate hate hate hate that AI seems to be amplifying the Dunning-Kruger effect in all our lives...
But the metrics and facts in that article also don't mean much without context or deeper explanation.
> 95% of AI pilots didn’t increase a company’s profit or productivity
If 5% do, that could very well be enough to justify it, depending on why and after how much time the pilots are failing. It's widely touted that only 5% of startups succeed, yet startups overall have brought immense technological and productivity gains to the world. You could live in a hut and be happy, and argue none of it is needed, but nonetheless the gains by some metrics are here, despite 95% failing.
The article throws out numbers to make a point that it wanted to make, but fails to account for any nuance.
If there's a promising new tech, it makes sense that there will be many failed attempts to make use of it, and it makes sense that a lot of money will be thrown in. If 5% succeed, each attempt costs $1 million, and a successful one pays off $1 billion, that's already a 50x expected return.
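A quick back-of-the-envelope version of that expected-value argument, using only the commenter's illustrative numbers ($1M per attempt, $1B payoff, 5% success rate):

```python
# Expected-value sketch with the commenter's illustrative numbers.
cost_per_attempt = 1_000_000          # $1M spent per attempt
payoff_if_success = 1_000_000_000     # $1B if an attempt succeeds
success_rate = 0.05                   # "only 5% succeed"

expected_payoff = success_rate * payoff_if_success      # $50M per attempt
expected_multiple = expected_payoff / cost_per_attempt  # 50x

print(f"Expected payoff per attempt: ${expected_payoff:,.0f}")
print(f"Expected return multiple: {expected_multiple:.0f}x")
```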
In my personal experience, if used correctly it increases my own productivity a lot and I've been using AI daily ever since GPT 3.5 release. I would say I use it during most of what I do.
> AI Pullback Has Officially Started
So personally I'm not seeing this at all, based on how much I pay for AI, how much I use it, and how I see it iteratively improving, while it's already so useful to me.
We are building and seeing things that weren't realistic or feasible before now.
And that’s ignoring the rampant copyright infringement, the exploding power use and accompanying increase in climate change, the harm it already does to people who are incapable of dealing with a sycophantic lying machine, the huge amounts of extremely low quality text and code and social media clips it produces. Oh and the further damage it is going to do to civil society because while we already struggled with fake news, this is turning the dial not to 11 but to 100.
This means the free cash flow to the firm that OAI generates will have to be huge, given the negative cash flows to date.
To prove that AI is an overhyped bubble and won't bring the expected ROI, you'd need to show that these companies won't deliver that ROI even within a longer timeframe.
Use it where it works. Ignore the agent hype and other bullshit peddled by 19yo dropouts.
Unlike the 19yo dropouts of the 2010s, these guys have brain rot, and I don't trust them after having talked to such people at startup events and heard their black-pill takes. They have products that don't work and they lie about their numbers.
I’ll trust people like Karpathy and others who are genuinely smart af, not Kumon products.
All it takes is significant emotional investment for someone to become a bit blind to reality.
That people keep repeating a lie does not make it true.
If that makes me a sour grape, so be it.
At least those are not directly harmful to me.
I'm slightly less fine when my time is wasted by some generated bullshit.
And not at all fine when someone vibe-coded some product and ignored basic good practices on security and so on.
Still, in this case I think a market analogy fits better. There are people who want it and people who don't want it. If the people with a lot of money (to manage for companies) want it, this will move the balance. Whether it eventually moves it enough remains to be seen. Decisions can be made with too much excitement and based on overpromises, but eventually someone will draw a bottom line under (generative) AI, the thing into which the huge amounts of money are currently being pumped. Either it will generate value that people pay for and the investors make a profit, or it won't. Bubbles and misconceptions can extend the time until the line is drawn, but eventually it will be.
Whether LLMs and generative AI in general create value or not, I cannot say. I am sure that the more specialised AI solutions, the ones better described as machine learning, do create this value in their specific use cases and will stay.
Kind of makes me doubt the pullback. Maybe the hype is dying, but it's settling in as an everyday tool?
My wife's team spent 20+ man-hours analyzing and trying to fulfil the requests of one of their biggest customers. In the end it turned out to be a fully LLM-generated feature-request email from someone who didn't quite understand the product in the first place...
When you save one hour on a coding task, somewhere someone spends two hours trying to parse some bullshit email or report. I'm convinced it's a net negative overall.
It is actually kind of shocking how I can go home and learn about quantum computing from LLMs but find them useless for simple business processes. This, though, I think exemplifies the entire mistake of the bubble. Most business processes don't benefit at all from understanding the Hamiltonian of a system. Most business processes are simple tasks that are the end result of previous automation.

In practice, most business processes in 2025 are simple processes done by a human who can deal with the random uncertainty and distribution shift that inevitably comes up. That is exactly what a language model isn't good at, but it goes even beyond that. So much of what a customer service agent, for example, is doing is dealing with uncertainty and externality. There is the process itself, which LLMs aren't good at anyway, but then there is the human judgement about when to disregard the process because of various externalities. The trivial business processes LLMs would be good at automating were already automated years ago, or the business went out of business years ago.

AGI would in theory be amazing at all of this too, but we don't have AGI. We have language models whose use cases are very limited beyond being an interactive version of Wikipedia. I love the interactive version of Wikipedia, but it is not worth trillions of dollars.
To me, all the hype comes from promises of C-3POs and R2-D2s instead of pitching it as building computational tools that make you more efficient or give you new ways to model your ideas inside a computer.
I mean, per the article, only if you don’t care about correctness. There aren’t actually that many use cases where correctness doesn’t matter at all.
I use Claude Code regularly at work and can tell it is absolutely fantastic and getting better. You obviously need to guide it well (use plan mode first) and point it to hand-coded stuff to follow, and it will save you an enormous amount of time and effort. Please don't put off trying AI coding after reading misinformed articles like this.
However, I agree that a lot of this stuff is FUD and that AI dev is a new skill; it takes time to master. It took me a few months, but I'm comfortably more productive and having more fun at work with Claude Code.
To people who aren't programmers, there isn't really the same kind of easily verified use case. Most people can't tell at a glance that a business proposal or email is full of errors they need to correct, so the stuff causes even more damage.
Unfortunately, programmers, as a rule, aren't terribly good at listening to the experiences and perspectives of non-coders, so I don't see this dynamic changing anytime soon.
I once heard this put in the context of engineer-code vs software-developer-code:
A professional software developer's skill isn't writing working code (anyone with enough time and intelligence can do that), but rather writing maintainable, efficient working code.
- The METR paper surveyed just 16 developers to arrive at their conclusion. Not sure how that got past review. [0]
- The finding from the MIT report can also be viewed from a glass 5% full perspective:
> Just 5% of integrated AI pilots are extracting millions in value.
> Winning startups build systems that learn from feedback (66% of executives want this), retain context (63% demand this), and customize deeply to specific workflows. They start at workflow edges with significant customization, then scale into core processes. [1]
[0] https://arxiv.org/abs/2507.09089
[1] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
… Excuse me, they were doing _what_? The world has gone mad. Unless you think a chatbot that doesn't know how many 'r's are in strawberry is your peer, you shouldn't be using it for peer review, bloody hell.
At least in tech there’s still usually human code review which catches the worst of the magic robot-generated nonsense.
That said, the non-tech-executive/product-management take on AI has often been an utter failure to recognize key differences between problems and systems. I spend an inordinate amount of time framing questions in terms of promises to customers, completeness, reproducibility, and contextual complexity.
However, for someone in my role, building and ideating in innovation programs, the power of LLM-assisted coding is hard to pass up. It may only get things 50% of the way there before collapsing into a spiral of sloppy, overwrought code, but we often only need 30-40% fidelity to exercise an idea. Ideation is a great space for vibe coding. That said, one enormous risk in these approaches is overpromising the undeliverable. If folks don't keep a sharp eye on the nature of the promises they're making, they may be in for a pretty wild ride, with the last “20%” of the program taking more than 90% of the calendar time due to compression of the first “80%” and complication of the remainder.
We’re going to need to adjust. These tools are here to stay, but they’re far from taking over the whole show.
I'd bet that some sort of "exponentially increase the learning rate until shit goes haywire, then roll back the weights" scheme is actually a fairly decent algorithm (something like backtracking line search).
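A minimal NumPy sketch of what that heuristic could look like, assuming a generic loss and gradient function; the growth/shrink factors and the "haywire" test (the loss getting worse) are illustrative choices, not the commenter's recipe or any standard optimizer.

```python
import numpy as np

def train_with_rollback(loss_fn, grad_fn, w, lr=1e-3, grow=2.0, shrink=0.5, steps=100):
    """Grow the learning rate every step that helps; if the loss blows up,
    roll the weights back and shrink the rate (a crude cousin of backtracking
    line search)."""
    best_loss = loss_fn(w)
    for _ in range(steps):
        checkpoint = w.copy()        # snapshot so we can roll back
        w = w - lr * grad_fn(w)      # gradient step at the current rate
        loss = loss_fn(w)
        if loss < best_loss:         # step helped: keep it, push the rate up
            best_loss = loss
            lr *= grow
        else:                        # went haywire: roll back and back off
            w = checkpoint
            lr *= shrink
    return w

# Toy usage on a quadratic bowl, purely illustrative.
loss_fn = lambda w: float(np.sum(w ** 2))
grad_fn = lambda w: 2.0 * w
print(train_with_rollback(loss_fn, grad_fn, np.array([3.0, -4.0])))  # approaches [0, 0]
```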