AI Adoption Rates Starting to Flatten Out
apolloacademy.com · Hacker News story ID 46079987 · posted Nov 28, 2025 · latest activity Dec 2, 2025 · 154 comments loaded
It’s way too early to decide whether it’s flattening out.
At larger companies, adoption will probably stop at the level where managers start to feel threatened.
Giving AI away for free to people who don't give a rat's ass about the quality of its output isn't very difficult. But that's not exactly going to pay your datacenter bill...
> Data from the Census Bureau and Ramp shows that AI adoption rates are starting to flatten out across all firm sizes, see charts below.
It’s flat-out nonsense, and anyone with any experience in this kind of statistics can see it.
> Note: Data is six-survey moving average. The survey is conducted bi-weekly. Sources: US Census Bureau, Macrobond, Apollo Chief Economist
> Note: Ramp AI Index measures the adoption rate of artificial intelligence products and services among American businesses. The sample includes more than 40,000 American businesses and billions of dollars in corporate spend using data from Ramp’s corporate card and bill pay platform. Sources: Ramp, Bloomberg, Macrobond, Apollo Chief Economist
It seems that the really interesting thing here is that the companies using Ramp are extremely atypical.
Adoption rate = first derivative
Flattening adoption rate = the second derivative is negative
Starting to flatten = the third derivative is negative
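Spelled out, a sketch of that chain (taking A(t) to be cumulative adoption at time t, which is the reading this comment assumes):

```latex
% Sketch of the chain above, with A(t) = cumulative adoption at time t
\text{adoption rate} = A'(t), \qquad
\text{flattening adoption rate} \;\Rightarrow\; A''(t) < 0, \qquad
\text{starting to flatten} \;\Rightarrow\; A'''(t) < 0
```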
I don't think anyone cares what the third derivative of something is when the first derivative could easily change by a macroscopic amount overnight.
It really is a beautiful title.
Yes. The title specifically is beautiful. The charts aren't nearly as interesting, though probably a bit more than a meta discussion on whether certain time intervals align with one interpretation of the author's intent or another.
Which paints a grimmer picture—I was surprised that they report a marked decline in adoption amongst firms of 250+ employees. That rate-as-first-derivative apparently turned negative months ago!
Then again, it’s awfully scant on context: does the absolute number of firms tell us much about how (or how productively) they’re using this tech? Maybe that’s for their deluxe investors.
However lim x->inf log(x) is still inf.
If you need everything to be math, at least have the courtesy to use the logistic function (https://en.wikipedia.org/wiki/Logistic_function) and not unbounded logarithmic curves when referring to our very finite world.
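For reference, a logistic curve saturates at some carrying capacity L instead of growing without bound (L, k, and x_0 here are just the usual generic parameters):

```latex
% Logistic function: bounded above by L, unlike log(x)
f(x) = \frac{L}{1 + e^{-k(x - x_0)}}, \qquad
\lim_{x \to \infty} f(x) = L, \qquad
\lim_{x \to \infty} \log(x) = \infty
```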
In which case it's at least funny, but maybe subtract one from all my derivatives... which kills my point also. Dang.
Corporate AI adoption looks to be hitting a plateau, and adoption in large companies is even shrinking. The only market still showing growth is companies with fewer than 5 employees - and even there it's only linear growth.
Considering our economy is pumping billions into the AI industry, that's pretty bad news. If the industry isn't rapidly growing, why are they building all those data centers? Are they just setting money on fire in a desperate attempt to keep their share price from plummeting?
> Adoption rate = first derivative
If you mean with respect to time, wrong. The denominator in adoption rate that makes it a “rate” is the number of existing businesses, not time. It is adoption scaled to the universe of businesses, not the rate of change of adoption over time.
When it talks about the adoption rate flattening, it is talking about the first derivative of the adoption rate (as defined in the previous paragraph, not as you wish it was defined) with respect to time tending toward 0 (and, consequently, the second derivative being negative), not the third derivative with respect to time being negative.
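A sketch of that reading, with r(t) the survey's adoption rate at time t (notation mine, not the article's):

```latex
% "Adoption rate" as a share of firms, not a time derivative
r(t) = \frac{\text{firms using AI at time } t}{\text{all firms at time } t}, \qquad
\text{flattening} \;\iff\; \frac{dr}{dt} \to 0
```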
What tickled me into making the comment above had nothing to do with whether adoption rate was used by the author (or is used generally) to mean market penetration or the rate of adoption. It was because a visual aid that is labeled ambiguously enough to support the exact opposite perspective was used as a basis for clearing up any ambiguity.
The purpose of a time series chart is necessarily time-derivative, as the slope or shape of the line is generally the focus (is a value trending upward, downward, varying seasonally, etc). It's fair to include or omit a label on the dependent axis. If omitted, it's also fair to label the chart as the dependent variable and also to let the "... over time" be implicit.
However, when the dependent axis is not explicitly labeled and "over time" is left implicit, it's absolutely hilarious to me to point to it and say it clearly shows that the chart's title is or is not time-derivative.
I know comment sections are generally for heated debates trying to prove right and wrong, but sometimes it's nice to be able to muse for a moment on funny things like this.
Derivatives IRL do not follow the rules of calculus that you learn in class, because they don't have to be continuous. (You could quibble that if you zoom in enough it can be regarded as continuous, but you don't gain anything from doing that; it really does behave discontinuously.)
(I suppose a rudimentary version of this is taught in intro calc. It's been a long time so I don't really remember.)
Awesome stuff.
The derivative at 0 exists and is 0 (for the classic example f(x) = x^2 sin(1/x) with f(0) = 0), because lim h->0 (h^2 sin(1/h))/h = lim h->0 h sin(1/h), which equals 0 because the sin function is bounded.
When x !=0, the derivative is given by the product and chain rules as 2x sin(1/x) - cos(1/x), which obviously approaches no limit as x-> 0, and so the derivative exists but is discontinuous.
Perfectly excusable post that says absolutely nothing about anything.
That's a massive deal because the AI companies today are valued on the assumption that they'll 10x their revenue over the next couple of years. If their revenue growth starts to slow down, their valuations will change to reflect that
Companies like Anthropic will not survive as an independent. They won't come close to having enough revenue & profit to sustain their operating costs (they're Lyft to Google or OpenAI's Uber, Anthropic will never reach the scale needed to roll over to significant profit generation). Its fair value is 1/10th or less what it's being valued at currently (yes because I say so). Anthropic's valuation will implode to reconcile that, as the market for AI does. Some larger company will scoop them up during the pain phase, once they get desperate enough to sell. When the implosion of the speculative hype is done, the real value creation will begin thereafter. Over the following two or three decades a radical amount of value will be generated by AI collectively, far beyond anything seen during this hype phase. A lot of lesser AI companies will follow the same path as Anthropic.
Compare to databases. You could probably have plotted a chart of database adoption rates in the '90s as small companies started running e.g. Lotus Notes, FoxPro and SQL Server everywhere to build in-house CRMs and back-office apps. Those companies still operate those functions, but now most small businesses do not run databases themselves. Why manage SQL Server when you can just pay for Salesforce and Notion with predictable monthly spend?
(All of this is more complex, but analogous at larger companies.)
My take is the big rise in AI adoption, if it arrives, will similarly be embedded inside application functions.
The Ramp chart seems to use actual payment information from companies using their accounting platform. That should be more objective, though they don't disclose much about their methodology (and their customers aren't necessarily representative, the purpose and intensity of use aren't captured at all, etc.).
https://ramp.com/data/ai-index
That's odd. I use AI tools at work occasionally, but since our business involves selling physical goods, I guess we would not count as an AI adopter in this survey.
https://www.census.gov/hfp/btos/downloads/Employment%20Size%...
I'm unable to find a questionnaire with that language, though. I found a different questionnaire with many AI-related questions, some of which I believe would usefully capture both your situation and my hypothetical above. None closely match that language, though. The closest might be question 23, but that asks about use "in any of its business functions".
https://www2.census.gov/data/experimental-data-products/busi...
> Between MMM DD – MMM DD, did this business use Artificial Intelligence (AI) in producing goods or services? (Examples of AI: machine learning, natural language processing, virtual agents, voice recognition, etc.)
https://www2.census.gov/data/experimental-data-products/busi...
This closely tracks Apollo's language and the language in the results spreadsheet.
My hunch is that long term value might be quite low: a few years into vibe coding huge projects, developers might hit a wall with a mountain of slop code they can no longer manage or understand. There was an article here recently titled “vibe code is legacy code” which made a similar argument. Again, results surely vary wildly
I know JavaScript on a pretty surface level, but I can use Claude to wire up react and tailwind, and then my experience with all the other programming I’ve done gives me enough intuition to clean it up. That helps me turn rough things into usable tools that can be reused or deployed in small scale.
That’s a productivity increase for sure.
It has not helped me with the problems that I need to spend 2-5 days just thinking about and wrapping my head around solutions to. Even if it does come up with solutions that pass tests, they still need to be scrutinized and rewritten.
But the small tasks it’s good at add up to being worth the price tag for a subscription.
Do you feel that you will become so well-versed in it that you will be able to debug weird edge cases in the future?
Will you be able to reason about performance? Develop deep intuition for why pattern X doesn't work for React but pattern Y does, etc.?
I personally learned for myself that this learning is not happening. My knowledge of tools that I used LLMs for stayed pretty superficial. I became dependent on the machine.
I’ve been learning zig and using LLMs clearly did hamper my ability to actually write code myself, which was the goal of learning zig, so I’ve seen this too.
It is important to make the right choice of when/how to use these tools.
A company that has implemented most current AI technologies in their applicable areas, at known-functional capability? That is a vastly larger definition of full adoption.
It's the difference between access and full utilization. The gulf is massive. And I'm not aware of any major company, or really any company at all, that has said, "yep, we're done, we're doing everything we think we can with AI and we're not going to try to improve upon it."
Implementation of acquired capabilities... very early days. And it appears this study's definition is more like user access, not completed implementations. Somewhat annoyingly, I receive 3 or 4 calls a day, sometimes on weekends, from contracting firms looking for leads, TPMs, and ML/data scientists with genAI/workflow experience. 3 months ago, without having done anything to put my name out any more than however it had been found before that, I was only getting 1 every day or two.
I don't think this study is using a useful definition for what they intend to measure. It is certainly not capturing more than a fraction of activity.
1. No y-axis label.
2. It supposedly plots a “rate”, but the time interval is unspecified. Per second? Per month? Per year? Intuitively my best guess is that the rate is per-year. However, that would imply the second plot believes we are very near to 100% adoption. So what is this? Some esoteric time interval like bi-yearly?
3. More likely, it is not a rate at all, but instead a plot of total adoption. In this case, the title is chosen _very_ poorly. The author of the plot probably doesn’t know what they’re looking at.
4. Without grid lines, it’s very hard to read the data in the middle of the plot.
By cutting out AI for most of my stuff, I really improved my well-being. I found the joy back in manual programming, because I will soon be one of the few who actually understand stuff :-). I found the joy in writing with a fountain pen in a notebook, and since then I retain so much more information. It's also a great opportunity for the future, when the majority will be dumbed down even more. And for philosophical interaction: I joined an online university and just read the actual books of the great thinkers and discuss them with people and knowledgeable teachers.
What I still use AI for is to correct my sentences (sometimes) :-)
I'm curious, can you expand on this? Why did you start using coding agents, and why did you stop?
And that's fine for some things. Horrible if you want to do non-conventional things.
I am a pretty idealistic coder, who always thought of it as an art in itself. And using LLMs robbed me of the artistic aspect of actually creating something. The process of creating is what I love and what gives me inspiration and energy to actually do it. When a machine robs me of that, why would I continue to do it? Money then being the only answer... A dreadful existence.
I am not a Marxist, probably because I don't really understand him, but I think LLMs are "detachment of work" applied to coders, IMHO. Someone should really do a phenomenological study on the "Dasein" of a coder with an LLM.
Funnily, I don't see any difference in productivity at all. I have my own company and I still manage to get everything done on deadline.
You can tell the AI to change the "ugly code" to be how you like. Works for me most of the time.
Even better, tell the AI to not write it that way in the first place. It writes a plan, you skim the plan, and tell it to change it.
These tools are not going away, so we need to learn how to use them effectively.
Did you try changing your prompts?
Thank you, and sorry my thoughts are all over...
If you speak fluent Japanese and you don't practice, you will remember being fluent but no longer actually be able to speak fluently.
It's true for many things; writing code is not like riding a bike.
You can't not write code for a year and then come back at the same skill level.
Using an agent is not writing code; but using an agent effectively requires that you have the skill of writing code.
So, after using a tool that automatically writes code for you, code that you probably give only a superficial review, you will find over time that you are worse at coding.
You can sigh and shake your head and stamp your feet and disagree, but it's flat-out a fact of life:
If you don't practice, you lose skill.
I, personally, found this happening, so I now do 50/50 time: 1 week with AI, 1 week with strictly no AI.
If the no AI week “feels hard” then I extend it for another week, to make sure I retain the skills I feel I should have.
Anecdotally, here at $corp, I see people struggling because they are offloading the “make an initial plan to do x that I can review” step too much, and losing the ability to plan software effectively.
Don't be that guy.
If you offload all your responsibilities to an agent and sit playing with your phone, you are making yourself entirely replaceable.
The models (Claude Opus 4.5 included) still seem to not get things right: they miss edge cases and work the code in a way that's not very structured.
I use them daily, but I often have to rewrite a lot to reshape the codebase to a point where it makes sense to use the model again.
I’m sure they’ll continue to get better, but out of a job better in 5 years? I’m not betting on it.
It's fine to be skeptical, and I definitely hope I'm wrong, but it really is looking bad for SWEs who don't start adopting at this point. It's a bad bet in my opinion; at least have your F-u money built up within 5 years if you aren't going all in on it.
- we are far removed from “early adopter” stages at this point
- “eventually all that will smooth out…” is assuming that this is eventually going to be some magic that just works - if this actually happens both early and late adopters will be unemployed.
It is not magic, and it is unlikely to ever be magic. But from my personal perspective and many others I read: if you spend time on it (I am now just over 1,200 hours in; I bill it, so I track it :) ), it will pay dividends (and will also occasionally feel like magic).
It doesn't appear that anything of this sort is happening, and the idea that a good employer with a solid technical team would start firing people for not "knowing AI" instead of giving them a 2-week intro course seems unrealistic to me.
The real nuts and bolts are still software engineering. Or is that going to change too?
The best SWEs will automate anything they have to do manually more than once. I have seen this over and over and over again. LLMs have taken automation to another level, and learning everything they can be helpful with, so as to automate as much of my work as possible, will be worth 12,000+ hours in the long run.
We are also in the early days still, I guess everyone has their own way of doing this ATM.
Speaking as someone with a ton of experience here.
None of the things they do can go without immense efforts in validation and verification by a human who knows what they're doing.
All of the extra engineering effort could have been spent just making your own infrastructure and procedures far more resilient and valuable to far more people in your team and yourself going forward.
You will burn more and more and more hours over time because of relying on LLMs for ANYTHING non-trivial. It becomes a technical debt factory.
That's the reality.
Please stop listening to these grifters. Listen to someone who actually knows what they're talking about, like Carl Brown.
Not this one, presumably: https://en.wikipedia.org/wiki/Carl_Robert_Brown
For it to work best you should be an expert in the subject matter, or something equivalent.
You need to know enough about what you're making not just to specify it, but to see where the LLM is deviating (perhaps because you needed to ask more specifically).
Garbage in garbage out is as important as ever.
Something similar will happen with agentic workflows: those who aren't already productive with the status quo will eventually have to adopt productivity-enhancing tooling.
That's why I started with AI coding. I wanted to hedge against the possibility that this takes off and I am useless. But it made me sad as hell, so I just said: screw it. If this is the future, I will NOT participate.
If they do indeed provide a boost, it is clearly not very massive so far. Otherwise we'd see a huge increase in the software output of the industry: big tech would be churning out new products at a record rate, tons of startups would be reaching maturity at an insane clip in every imaginable industry, new FOSS projects would be appearing faster than ever, ditto with forks of existing projects.
Instead we're getting an overall erosion of software quality, and the vast majority of new startups appear to just be uninspired wrappers around LLMs.
Uptake is orthogonal to productivity gain. Especially when LLM uptake is literally being forced upon employees in many companies.
> I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.
That may be true! But my point is they also create new overhead in the process, and the net outcome to overall productivity isn't clear.
Unpacking some of your examples a bit --
Better code and documentation search: this is indeed beneficial to productivity, but how is it an agentic workflow that requires individual developers to adopt and become productive with, relative to the previous status quo?
Documentation generation: between the awful writing style and the lack of trustworthiness, personally I think these easily reduce overall productivity, when accounting for humans consuming the docs. Or in the case of AI consuming docs written by other AI, you end up with an ever-worsening cycle of slop.
Automating low sev ticket triage: Potentially beneficial, but we're not talking about a revolutionary leap in overall team/org/company productivity here.
Low sev customer support: Sounds like a good way to infuriate customers and harm the business.
I'm sure other industries would have similar examples. And the best folks in my direct team (infra), which is much smaller, are the command-line, Linux/Docker/etc. guys who mostly use VS Code.
This is the equivalent of that.
> This is the equivalent of that.
In the hands of a crappy engineer from above, you are correct.
An SWE does not necessarily need to "learn" Claude Code any more than someone who does not know programming at all. What actually matters is that they understand what those tools are doing, and then give directions, correct mistakes, and review code.
In fact, I'd argue tools should be simple and intuitive for any engineer to quickly pick up. If an engineer has solid background in programming but with no prior experience with the tools cannot be productive with such a tool after an hour, it is the tool that failed us.
You don't see people talk about "prompt engineering" as much these days, because that simply isn't so important any more. Any good tool should understand your request like another human does.
* Sorting. I have never been able to get my head around sorting arrays, especially in the Swift syntax. Generating them is awesome.
* Extensions/Categories in Swift/Objective C. "Write me an extension to the String class that will accept an array of Int8s as an argument, and include safety checks." Beautiful.
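A rough sketch of the kind of code those two bullets describe (hypothetical Swift, not the commenter's actual generated output; the initializer label int8Bytes is made up):

```swift
import Foundation

extension String {
    /// Builds a String from an array of Int8 values, interpreting them as UTF-8 bytes.
    /// Returns nil if the bytes are not valid UTF-8 (the "safety check").
    init?(int8Bytes bytes: [Int8]) {
        // Reinterpret the signed bytes as unsigned before decoding.
        let unsigned = bytes.map { UInt8(bitPattern: $0) }
        guard let decoded = String(bytes: unsigned, encoding: .utf8) else {
            return nil
        }
        self = decoded
    }
}

// The sorting case from the first bullet: sorted(by:) with explicit closures.
let names = ["delta", "alpha", "charlie", "bravo"]
let ascending = names.sorted { $0 < $1 }              // ["alpha", "bravo", "charlie", "delta"]
let byLength  = names.sorted { $0.count < $1.count }  // shortest first

// Using the extension:
let bytes: [Int8] = [72, 105, 33]                     // "Hi!"
print(String(int8Bytes: bytes) ?? "<invalid UTF-8>", ascending, byLength)
```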
That said I don't know why you'd use it for anything more. Sometimes I'll have it generate like, the skeleton of something I'm working on, a view controller with X number of outlets of Y type, with so and so functions stubbed in, but even that's going down because as I build I realize my initial idea can be improved.
But now my grain of salt has increased. It's still helpful, but much like a real calculator, there is a limit (in precision) to what it can do.
For one it still can't make good jokes :) (my litmus test)
LLM = co-pilot, Gemini, Claude, Mistral chat
No one who uses an LLM would trust an agentic LLM
The LLM is the mush of everyone's stuff, like the juice at the bottom of the bin is a mix of all the restaurants' food.
The writing that comes out the other end of the LLM is bland.
What it IS useful for is seeing a wrong thing and then going and making my own.
I still use it for various little scripts and menial tasks.
The push for this stuff to replace creativity is disgusting.
Sticking LLMs in every place is just crap, I've had enough.
From Isaac Asimov. Something I have been contemplating a lot lately.
We should all find little joys in our life and avoid things that deaden us. If AI is that for you, I'd say you made a good decision.
I think what will happen is in parallel more products will be built that address the engineering challenges and the models will keep getting better. I don't know though if that will lead to another hockey stick or just slow and steady.
It's the switch between: know which service to use, consider capabilities, try to get AI to do a thing, if you even have a thing that needs done that it can do; versus: AI just does a thing for you, requiring little to no thought. Very active vs very passive. Use will go up in direct relation to that changeover. The super users are already at peak, they're fully engaged. A software developer wants a very active relationship with their AI; Joe Average does not.
The complexity has to vanish entirely. It's the difference between hiding the extraordinary engineering that is Google search behind a simple input box, and making users select a hundred settings before firing off a search. Imagine if the average search user needed to know something meaningful about the capabilities of Google search or search in general, before using it. Prime Google search (~1998-2016) obliterated the competition (including the portals) with that one simple search box, by shifting all the complexity to the back-end; they made it so simple the user really couldn't screw anything up. That's also why ChatGPT got so far so fast: input box, type something, complexity mostly hidden.
I had fun with that one getting GPT-5 and ChatGPT Code Interpreter to recreate it from a screenshot of the chart and some uploaded census data: https://simonwillison.net/2025/Sep/9/apollo-ai-adoption/
Then I repeated the same experiment with Claude Sonnet 4.5 after Anthropic released their own code interpreter style tool later on that same day: https://simonwillison.net/2025/Sep/9/claude-code-interpreter...
None of the tools make the difference. The thinking is what matters.
What happens to all the debt? Was all this just for chatbots that are finally barely good enough for satnav, and image gen that does slightly better Photoshop that the layperson can use?
I plan on doing this every time now because ChatGPT gets things wrong constantly, apologizes and changes its facts, while Gemini is cheerful and positive like a salesperson.
These things have given me tremendous doubt after one year of usage.
/s