AI Agents Are Starting to Eat SaaS
Key topics
The debate rages on: are AI agents poised to disrupt the SaaS industry, making specialized tools like Retool obsolete? Some commenters argue that while AI has made tremendous progress, it still can't replace the ease and reliability of dedicated SaaS apps, particularly when it comes to maintenance and risk mitigation. Others share personal anecdotes, like using AI to compare edited text documents, and discovering that AI-powered workflows can be more efficient than traditional diff tools. As the discussion unfolds, a consensus emerges that AI will likely augment, rather than replace, existing SaaS tools, with some commenters pointing to the importance of understanding the lifecycle of tools and making informed decisions about when to build versus buy.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 26m
Peak period: 104 comments (0-6h)
Avg / period: 22.9
Based on 160 loaded comments
Key moments
- 01 Story posted: Dec 14, 2025 at 6:48 PM EST (19 days ago)
- 02 First comment: Dec 14, 2025 at 7:14 PM EST (26m after posting)
- 03 Peak activity: 104 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Dec 17, 2025 at 12:27 PM EST (16 days ago)
Oh, child.... building is easy. Coordinating maintenance of the tool across a non-technical team is hell.
Corporations think in terms of risk.
Second only to providing a useful function, a successful SaaS app will have been built to mitigate risk well.
It's not going to be easy to meet these requirements without prior knowledge and experience.
1. I had two text documents containing plain text to compare. One with minor edits (done by AI).
2. I wanted to see what AI changed in my text.
3. I tried the usual diff tools. They diffed line by line and the result was terrible. I searched Google for "text comparison tool but not line-based".
4. The second search result was https://www.diffchecker.com/ (it's a SaaS, right?).
5. Initially it did an equally bad job, but I noticed it had a switch, "Real-time diff", which did exactly what I wanted.
6. I got curious about this algorithm, so I asked Gemini with "Deep Research" mode: "The website https://www.diffchecker.com/ uses a diff algorithm they call real-time diff. It works really good for reformatted and corrected text documents. I'd like to know what is this algorithm and if there's any other software, preferably open-source that uses it."
7. As a first suggestion it listed diff-match-patch from Google, which has a Python package.
8. I started Antigravity in a new folder, ran uv init. Then I prompted the following:
"Write a commandline tool that uses https://github.com/google/diff-match-patch/wiki/Language:-Py... to generate diff of two files and presents it as side by side comparison in generated html file."
[...]
"I installed the missing dependency for you. Please continue." - I noticed it wasn't using uv to install dependencies, so I interrupted and did it myself.
[...]
"This project uses uv. To run python code use
uv run python test_diff.py" - I noticed it still wasn't using uv to run the code, so its testing was failing.
[...]
"Semantic cleanup is important, please use it." - Things started to show up, but it looked like a linear diff. I noticed it had the call to the semantic cleanup method commented out, so I thought it might help to push it in that direction.
[...]
"also display the complete, raw diff object below the table" - The display of the diff still didn't seem good, so I got curious whether the problem was in the diffing code or the display code.
[...]
"I don't see the contents of the object, just text {diffs}" - It made a silly mistake by outputting the template variable instead of the actual object.
[...]
"While comparing larger files 1.txt and 2.txt I notice that the diff is not very granular. Text changed just slightly but the diff looks like deleting nearly all the lines of the document, and inserting completely fresh ones. Can you force diff library to be more granular?
You seem to be doing the right thing https://github.com/google/diff-match-patch/wiki/Line-or-Word... but the outcome is not good.
Maybe there's some better matching algorithm in the library?" - It seemed that while it worked decently on the small tests Antigravity made for itself, on the texts I actually wanted to compare it was still terrible, although I'd seen glimpses of hope because some spots were diffed more granularly. I inspected the code, and it seemed to be doing character-level diffing as per the diff-match-patch example. While it processed this prompt, I was searching for a solution myself by clicking around the diff-match-patch repo and demos. I found a potential solution by adjusting cleanup, but it actually solved the problem by itself by ditching the character-level diffing (which I'm not sure I would have come up with at this point). The diffed object looked great, but when I compared the result to the https://www.diffchecker.com/ output, it seemed they did one minor formatting thing better.
[...]
"Could you use rowspan so that rows on one side that are equivalent to multiple rows on the other side would have same height as the rows on the other side they are equivalent to?" - I felt very clumsy trying to phrase it, and I wasn't sure Antigravity would understand. But it did, and executed perfectly.
I didn't have to revert a single prompt and interrupted just two times at the beginning.
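For the curious, the core of a tool like this is small. A minimal sketch, assuming the diff-match-patch PyPI package (`pip install diff-match-patch`; the `semantic_diff` name and example strings are my own illustration): a character-level diff followed by the semantic cleanup pass that merges noisy micro-edits into human-readable chunks.

```python
# Character-level diff plus semantic cleanup, the combination the
# comment above converged on instead of line-based diffing.
from diff_match_patch import diff_match_patch

def semantic_diff(old: str, new: str):
    dmp = diff_match_patch()
    diffs = dmp.diff_main(old, new)   # list of (op, text); -1/0/1 = del/eq/ins
    dmp.diff_cleanupSemantic(diffs)   # merge trivial edits into readable chunks
    return diffs

if __name__ == "__main__":
    for op, text in semantic_diff("The quick brown fox.", "The quick red fox."):
        print({-1: "DEL", 0: "EQ ", 1: "INS"}[op], repr(text))
```

Because the diff carries no line structure, rendering it as side-by-side HTML (with rowspans, as above) is purely a presentation layer on top of these tuples.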
After a while I added watch functionality with a single prompt:
"I'd like to add a -w (--watch) flag that will cause the program to keep running and monitor source files to diff and update the output diff file whenever they change."
[...]
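A watch flag like that can boil down to polling file mtimes. A hypothetical sketch (the actual generated tool may well use a file-watching library instead; `changed` and `watch` are my own names):

```python
# Poll input files' modification times and fire a callback when any change.
import os
import time

def changed(paths, last_seen):
    """Return True (updating last_seen) if any file's mtime moved."""
    dirty = False
    for p in paths:
        mtime = os.path.getmtime(p)
        if last_seen.get(p) != mtime:
            last_seen[p] = mtime
            dirty = True
    return dirty

def watch(paths, on_change, interval=1.0):
    last_seen = {}
    changed(paths, last_seen)          # prime without firing on startup
    while True:
        time.sleep(interval)
        if changed(paths, last_seen):
            on_change()                # e.g. regenerate the HTML diff
```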
So I basically went from having two very similar text files and knowing very little about diffing to knowing a bit more and having my own local tool that lets me compare texts in a satisfying manner, with beautiful highlighting and formatting, that I can extend or modify however I like, and that mirrors the interesting part of the functionality of the best tool I found online. And all of that in a time span shorter than it took me to write this comment (at least the coding part was; I followed a few wrong paths during my search for a bit).
My experience tells me that even if I could replicate what I did today (staying motivated is an issue for me), it would most likely have been a multi-day project full of frustration, hunting small errors, and venturing down wrong paths. Python isn't even my strongest language. Instead it was a pleasant and fun evening with occasional jaw drops, and feeling so blessed that I live in the SciFi times I read about as a kid (and adult).
Um. I don't want to be That Guy (shouting at clouds, or at kids to get off my lawn or whatever) but ... what "usual diff" tools did you use? Because comparing two text files with minor edits is exactly what diff-related tools have excelled at for decades.
There is word-level diff, for example. Was that not good enough? Or delta [0] perhaps?
[0] https://github.com/dandavison/delta
None were remotely close to what I wanted.
At first glance, delta also does the completely standard thing. I can't rule out that there are some flags to coerce it into doing what I wanted, but searching for tools and flag combinations is not that fun, and success is not guaranteed. Also, I found a (SaaS) tool, and a switch there, that did exactly what I wanted. I just decided to make my own local tool afterwards for better control. With an agent.
> Because comparing two text files with minor edits is exactly what diff-related tools have excelled at for decades.
True. And I ended up using Google's excellent diff-match-patch library for the diffing. AI didn't write me a diffing algorithm, just everything around it to make a convenient tool. Most tools are for source code and treat line endings as a very strong structural feature of the compared texts. I needed something that doesn't care much about line endings and can show me which characters changed between one line of one file and the five lines it was split into in the second file.
Also, maintaining software is a pain.
Also, for perpetually small companies, it's now easy to build simple scripts to achieve some productivity gains.
> The signals I'm seeing
Here are the signals:
> If I want an internal dashboard...
> If I need to re-encode videos...
> This is even more pronounced for less pure software development tasks. For example, I've had Gemini 3 produce really high quality UI/UX mockups and wireframes
> people really questioning renewal quotes from larger "enterprise" SaaS companies
Who are "people"?
Is the author a competent UX designer who can actually judge the quality of the UX and mockups?
> I write about web development, AI tooling, performance optimization, and building better software. I also teach workshops on AI development for engineering teams. I've worked on dozens of enterprise software projects and enjoy the intersection between commercial success and pragmatic technical excellence.
Nope.
Then it dawned on me how many companies are deeply integrating Copilot into their everyday workflows. It's the perfect Trojan Horse.
None of the mainstream paid services ingest operating data into their training sets. You will find a lot of conspiracy theories claiming that companies are saying one thing but secretly stealing your data, of course.
What? That’s literally my point: Enterprise agreements aren’t training on the data of their enterprise customers like the parent commenter claimed.
Nothing is really preventing this though. AI companies have already proven they will ignore copyright and any other legal nuisance so they can train models.
The enterprise user agreement is preventing this.
Suggesting that AI companies will uniquely ignore the law or contracts is conspiracy theory thinking.
"Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal"
https://www.wired.com/story/new-documents-unredacted-meta-co...
They even admitted to using copyrighted material.
"‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says"
https://www.theguardian.com/technology/2024/jan/08/ai-tools-...
https://www.vice.com/en/article/meta-says-the-2400-adult-mov...
It's not really a conspiracy when we have multiple examples of high-profile companies doing exactly this. And it keeps happening. Granted, I'm unaware of cases of this occurring currently with professional AI services, but it's basic security 101 that you should never let anything have even the remote opportunity to ingest data unless you don't care about the data.
This is objectively untrue? Giant swaths of enterprise software are based on establishing trust with approved vendors and systems.
Do you have any citations or sources for this at all?
I hope you find some self awareness when you slip a disc bending over this much for these corpo fascists, especially when they are failing to hold their own language to your level of prevarication and puffery:
> When you use our services for individuals such as ChatGPT, Codex, and Sora, we may use your content to train our models.
https://help.openai.com/en/articles/5722486-how-your-data-is...
Stealing implies the thing is gone, no longer accessible to the owner.
People aren't protected from copying in the same way. There are lots of valid exclusions, and building new non competing tools is a very common exclusion.
The big issue with the OpenAI case, is that they didn't pay for the books. Scanning them and using them for training is very much likely to be protected. Similar case with the old Nintendo bootloader.
The "Corpo Fascists" are buoyed by your support for the IP laws that have thus far supported them. If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.
You know a position is indefensible when you equivocation fallacy this hard.
> The "Corpo Fascists" are buoyed by your support for the IP laws
You know a position is indefensible when you strawman this hard.
> If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.
Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.
> Scanning them and using them for training is very much likely to be protected.
Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?
No, it's quite defensible. And if that was equivocation, you can simply point out that you didn't mean to invoke the specific definition of stealing, but were just using it for its emotive value.
>You know a position is indefensible when you strawman this hard.
It's accurate. No one wants these LLM guys stopped more than other big fascistic corporations; there's plenty of oppositional noise out there for you to educate yourself with.
>Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.
Cool, so if you agree all data should be usable to create derivative works, then I don't see what your complaint is.
>Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?
You invoked "strawman" and then hit me with this combo strawman/non sequitur? Cool move <1 day old account, really adds to your 0 credibility.
I literally pointed out they should have to pay the same access fee as anyone else for the data, but once obtained, should be able to use it any way. Reading the comment explains the comment.
Unless, charitably, you are suggesting that if a company is legally able to purchase content, and use it as training data, that somehow compels them to release that data for free themselves?
Weird take if true.
Isn't this a little simplistic?
If the value of something lies in its scarcity, then making it widely available has robbed the owner of a scarcity value which cannot be retrieved.
A win for consumers, perhaps, but a loss for the owner nonetheless.
Trying to group (thing I don't like) with (thing everyone doesn't like) is an old semantic trick that needs to be abolished. Taxonomy is good; if your arguments are good, you don't need emotively charged, imprecise language.
“How can I control whether my data is used for model training?
If you are logged into Copilot with a Microsoft Account or other third-party authentication, you can control whether your conversations are used for training the generative AI models used in Copilot. Opting out will exclude your past, present, and future conversations from being used for training these AI models, unless you choose to opt back in. If you opt out, that change will be reflected throughout our systems within 30 days.” https://support.microsoft.com/en-us/topic/privacy-faq-for-mi...
Moving the goalposts that far is kind of meaningless, so even if that were true it changes little.
While this isn't used specifically for LLM training, it can involve aggregating insights from customer behaviour.
Merely using an LLM for inference does not train it on the prompts and data, as many incorrectly assume. There is a surprising lack of understanding of this separation even on technical forums like HN.
However, let's say I record human interactions with my app; for example, when a user accepts or rejects an AI-synthesised answer.
This data can be used by me, to influence the behaviour of an LLM via RAG or by altering application behaviour.
It's not going to change the weighting of the model, but it would influence its behaviour.
“You can use an LLM to paraphrase the incoming requests and save that. Never save the verbatim request. If they ask for all the request data we have, we tell them the truth, we don’t have it. If they ask for paraphrased data, we’d have no way of correlating it to their requests.”
“And what would you say, is this a 3 or a 5 or…”
Everything obvious happens. Look closely at the PII management agreements. Btw OpenAI won’t even sign them because they’re not sure if paraphrasing “counts.” Google will.
"We will train new models using data from Free, Pro, and Max accounts when this setting is on (including when you use Claude Code from these accounts)."
Many of the top AI services use human feedback to continuously apply "reinforcement learning" after the initial deployment of a pre-trained model.
https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...
Inference (what happens when you use an LLM as a customer) is separate from training.
Inference and training are separate processes. Using an LLM doesn’t train it. That’s not what RLHF means.
The big companies - take Midjourney, or OpenAI, for example - take the feedback that is generated by users, and then apply it as part of the RLHF pass on the next model release, which happens every few months. That's why they have the terms in their TOS that allow them to do that.
Also, I wonder if the ToS covers "queries & interaction" vs "uploaded data" - I could imagine some tricky language in there that says we won't use your Word document, but we may at some point use the queries you put against it, not as raw corpus but as a second layer examining what tools/workflows to expand/exploit.
There’s a range of ways to lie by omission, here, and the major players have established a reputation for being willing to take an expansive view of their legal rights.
if they can get away with it (say by claiming it's "fair use"), they'll ignore corporate ones too
it's an incentive to pretend as if you're following the contract, which is not the same thing
despite all 3 branches of the government disagreeing with them over and over again
https://www.whitehouse.gov/presidential-actions/2025/12/elim...
You AI cucks are insufferable.
For example, in RL, you have a train set, and a test set, which the model never sees, but is used to validate it - why not put proprietary data in the test set?
I'm pretty sure 99% of ML engineers would say this would constitute training on your data, but this is an argument you could drag out in courts forever.
Or alternatively - it's easier to ask for forgiveness than permission.
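The test-set point above can be made concrete with a toy sketch (all names here are my own illustration): even if no candidate model is ever fitted on the held-out data, using that data to choose between candidates still lets it shape the final model.

```python
# Candidates are fitted only on `train`, yet the "unseen" held-out
# data decides which candidate ships.
def fit_mean(train):                 # candidate A: always predict the mean
    m = sum(train) / len(train)
    return lambda x: m

def fit_last(train):                 # candidate B: always predict the last value
    last = train[-1]
    return lambda x: last

def mse(model, data):
    return sum((model(x) - x) ** 2 for x in data) / len(data)

train = [1.0, 2.0, 3.0, 4.0]
held_out = [10.0, 10.0]              # stand-in for proprietary eval data
best = min((fit_mean(train), fit_last(train)),
           key=lambda m: mse(m, held_out))
# `best` is whichever candidate scored better on held_out, so information
# from held_out has leaked into the deployed model choice.
```

Whether this kind of selection "counts" as training on the data is exactly the ambiguity the comment describes.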
I've recently had an apocalyptic vision: that one day we'll wake up and find that AI companies have produced an AI copy of every piece of software in existence - AI Windows, AI Office, AI Photoshop, etc.
There may very well be clever techniques that don't require directly training on the users' data. Perhaps generating a parallel paraphrased corpus as they serve user queries - one which they CAN train on legally.
The amount of value unlocked by stealing practically ~everyone's lunch makes me not want to put that past anyone who's capable of implementing such a technology.
Many businesses simply couldn't afford to operate without such an edge.
There are claims all through this thread that “AI companies” are probably doing bad things with enterprise customer data but nobody has provided a single source for the claim.
This has been a theme on HN. There was a thread a few weeks back where someone confidently claimed up and down the thread that Gemini’s terms of service allowed them to train on your company’s customer data, even though 30 seconds of searching leads to the exact docs that say otherwise. There is a lot of hearsay being spread as fact, but nobody actually linking to ToS or citing sections they’re talking about.
It's not the Hacker News I knew even 3 years ago anymore, and I'm seriously close to just ditching the site after 15+ years of use.
I use AI heavily, but every day there are crazily optimistic, almost manic posts about how AI is going to take over various sectors that are completely ludicrous - and they are all filled with comments from bizarrely optimistic people who seemingly have no knowledge of how software is actually run or built. It's the human organisational, research, and management elements that are the hard parts, something AI can't do in any shape or form at the moment, and the backbone of at least 95% of vertical SaaS systems.
The world is indeed getting weird.
Spreadsheets! They are everywhere. In fact, they are so abundant these days that many are spawned for a quick job and immediately discarded. The cost of having these spreadsheets is practically zero, so in many cases you may find yourself with hundreds if not thousands of them sitting around with no indication of ever being deleted. Spreadsheets are also personal and annoying, especially when forced upon you (since you did not make them yourself). Spreadsheets are also programming for non-programmers.
These new vibe-coded tools are essentially the new spreadsheets. They are useful... for 5 minutes. They are also easily forgettable. They are also personal (for the person who made them) and hated (by everyone else). I have no doubt in my mind that organisations will start using more and more of these new types of software to automate repetitive tasks, improve existing processes, and so on, but ultimately, apart from perhaps just a few, none will replace existing, purpose-built systems.
Ultimately you can make your own pretty dashboard that nobody else will see or use because when the cost of production is so low your users will want to create their own version because they would think they could do better.
After all, how hard is it to prompt harder than the previous person?
Also, do you really think that SaaS companies are not deploying AI themselves? It is practically an arms race: the non-expert plus some AI vs 10 specialist developers plus their AIs doing this all day long.
Who is going to have the upper-hand?
I’d also add a number of the vibe tools tech adjacent people on my team have made are used and liked by the team. Even engineering likes them because it frees up their time to work on customer facing things.
The only named product was Retool.
It took me no more than 2 hours to put those together. We didn't renew our TeamRetro subscription.
Okay, so two hours with an LLM vs maybe 2.5 days without an LLM in the best-case scenario (i.e. LLMs gave you a 10x boost. I would expect it to be less than that though, like maybe a 2x boost) - it sounds like it was always pretty cheap to replace the SaaS, but the business didn't do it.
TBH, the arguments were never "It would take too long to do ourselves", it was always "but then we'd have to maintain it ourselves".
The place I am consulting at now just moved (i.e. a month ago) from their in-house built ticketing system ($0/m as it had not needed maintenance for over a year) to Jira (~$2k/m).
In this specific case, it was literally 0 hours to avoid paying the SaaS, and they still moved, because they wanted some modern features (billing for time on support calls, etc) and figured that rather than update their in-house system to add support hours costing (a day, at most) they may as well move to a system that already had it.
(Joke's on them though - the Jira solution for support hours costing is not near the level of granularity they need, even with multiple paid plugins).
Once again, companies aren't using SaaS because it's cheaper or quicker; they could already quickly replace their SaaS with in-house.
I'm not a frontend guy; I'm an operations guy who sometimes does some backends. So it's likely a solid 2.5 days for me to build the pair of these, probably more, as I haven't touched JavaScript in over a decade.
Right, understood and agreed, but this was not about you and your specific skills or lack thereof; your anecdote was in support of an argument that companies would stop their SaaS because LLMs enable them to build in house.
That was your argument, right?
So in the absence of LLMs, if the company wanted to stop paying for the SaaS, would they have chosen you to do the replacement, or someone who had recent experience in the tech?
Look, we are interested in comparing the time taken to replace the SaaS with an LLM, and the time taken to replace the SaaS without LLM assistance.
That's really the only two scenarios under discussion, so lets explore those exhaustively:
1. Without LLMs: In the worst-case scenario, the company had to pay for 2.5 days of employee time, with the best case being 1 day of employee time. Let's go with something in between, like 1.5 days of dev time.
2. With LLMs: The company pays for 0.5 days of employee time (includes the overhead of token cost/subscription).
The difference between the only two scenarios that we have is literally a single day of employee costs!
I am skeptical that the company failed to leave the SaaS earlier because they didn't want to eat the cost of 1.5 paid days for an employee, but a difference of a single day of cost was enough to tip the scales.
I wasn't intending to make an argument, I was specifically replying to:
>does not mention a single specific SaaS subscription he’s cancelled
I was imagining it could start a thread of examples where it's happened.
>would they have chosen you to do the replacement, or someone who had recent experience in the tech?
I get what you're saying, but those aren't the only two options; they very likely would have chosen neither of those options. The resources we had available was an ops guy who is pretty handy with the LLMs.
I get the point you're making, I really do. My counterpoint is that there are some SaaSes out there that people can build replacements for by using the LLMs at no incremental cost.
>I am skeptical that the company failed to leave the SaaS earlier because they didn't want to eat the cost of a 1.5 paid days for an employee
Sure, I'd be skeptical about it when put that way as well. That's not how it played out, however: We were having a retro, and the guy running it said that our subscription was expiring at the end of the month and wanted discussion about whether we wanted to purchase it for another year. 2 weeks later, before our next retro, I threw a prompt at Claude Code, asked a couple of people to try out the result, incorporated their feedback, and we ran the retro on it. We aren't planning to renew.
This was not something "the company" had a big discussion about; my boss made an offhand comment about it, and I did it as a side project while I was doing something else.
I’m pretty certain AI at least quadruples my output and makes fixing, improving, and upgrading poor-quality inherited software much easier than in the past. Why pay for SaaS when you can build something “good enough” in a week or two?
Sooner or later the CTO will be dictating which projects can be vibe coded and which ones make sense to buy.
SaaS benefits from network effects - your internal tools don't. So overall SaaS is cheaper.
The reality is that software license costs are a tiny fraction of total business costs. Most of it is salaries. The situation you are describing is the kind of death spiral many companies will get into, and it will be their downfall, not their salvation.
The reason software licenses are easier for the finance team to cut when things are not going well is that software does not have feelings, although we all know this is not making a dent. Ultimately, software scales much better than people, and if the software is "thinking" it will scale infinitely better.
Building it all in-house will only happen for 2 reasons: 1. The problem is so specific that this is the only viable option and the quickest (fair enough). 2. Developers and management do not have a real understanding of software costs.
Developers not understanding the real costs should be forgiven, because most of them are never in a position to make these types of decisions - i.e. they are not trained for it. However, a manager or executive not understanding this is a sign of lack of experience. You really need to try to build a few medium-sized, non-essential software systems in-house to get an idea of how bad this can get and what a waste of time and money it really is - resources you could have spent elsewhere to affect the real bottom line.
Also, the lines of code that are written do not scale linearly with team sizes. The more code you produce, the bigger the problem - even with AI.
Ultimately a company wants to write as few lines of code as possible that extract as much value as feasibly possible.
A lot of the SaaS target companies won't even have a CTO
About a decade ago we worked with a partner company who was building their own in-house software for everything. They used it as one of their selling points and as a differentiator over competitors.
They could move fast and add little features quickly. It seemed cool at first.
The problems showed up later. Everything was a little bit fragile in subtle ways. New projects always worked well on the happy path, but then they’d change one thing and it would trigger a cascade of little unintended consequences that broke something else. No problem, they’d just have their in-house team work on it and push out a new deploy. That also seemed cool at first, until they accumulated a backlog of hard to diagnose issues. Then we were spending a lot of time trying to write up bug reports to describe the problem in enough detail for them to replicate, along with constant battles over tickets being closed with “works in the dev environment” or “cannot reproduce”.
> You also get exactly what you want rather than some £300k per year CRM
What’s the fully loaded (including taxes and benefits) cost of hiring enough extra developers and ops people to run and maintain the in house software, complete with someone to manage the project and enough people to handle ops coverage with room for rotations and allowing holidays off? It turns out the cost of running in-house software at scale is always a lot higher than 300K, unless the company can tolerate low ops coverage and gaps when people go on vacation.
We often ended up discarding large chunks of these poorly tested features, instead of trying to get them to work, and wrote our own. This got to a point where only the core platform was used, and replacing that seemed to be totally feasible.
SaaS often doesn't solve issues but replaces them - you substitute general engineering knowledge and open-source knowhow with proprietary one, and end up with experts in configuring commercial software - a skill that has very little value on the market where said software is not used, and chains you to a given vendor.
But what you're describing is the narrow but deep vs wide but shallow problem. Most SaaS software is narrow but deep. Their solution is always going to be better than yours. But some SaaS software is wide but shallow, it's meant to fit a wide range of business processes. Its USP is that it does 95% of what you want.
It sounds like you were using a "wide-shallow" SaaS in a "narrow-deep" way, only using a specific part of the functionality. And that's where you hit the problems you saw.
It's full of features, half of which either do not work, or do not work as expected, or need some arcane domain knowledge to get them working. These features provide 'user-friendly' abstractions over raw stuff, like authing with various repos, downloading and publishing packages of different formats.
Underlying these tools are probably the same shell scripts and logic that we as devs are already familiar with. So often the exercise when forced to use these things is to get the underlying code to do what we want through this opaque intermediate layer.
Some people have resorted to fragile hacks, while others completely bypassed these proprietary mechanisms, and our build scripts are 'Run build.sh', with the logic being a shell or python script, which does all the requisite stuff.
And just like I mentioned in my previous post, SaaS software in this case might get more testing in general, but given the sheer complexity of client-side configurations it has to support, testing every configuration at every client is not feasible.
At least the bugs we make, we can fix.
And while I'm sure some of this narrow-deep kind of SaaS works well (I've had the pleasure of using Datadog and Tailscale, and some big cloud provider stuff tends to be great as well), that's not all that's out there, and it doesn't cover everything we need.
You have bought a shallow but wide SaaS product, one with tons of features that don't get much development or testing individually.
You're then trying to use it like a deep but narrow product and complaining that your complex use case doesn't fit their OK-ish feature.
MS do this in a lot of their products, which is why Slack is much better than Teams, but lots of companies feel Teams is "good enough" and then won't buy Slack.
I'm sure you have encountered the pattern where you write A, which calls B, which uses C as the underlying platform. You need something in A and know C can do it, but you have to figure out how to achieve it through B. For a highly skilled individual (or one armed with AI), B might have a very different value proposition than for someone who has to learn everything from scratch.
JS packages are a perfect illustration of these issues: there are tons of browser APIs wrapped by easy-to-use 'wrapper' packages that have unforeseen consequences down the road.
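The A-calls-B-uses-C problem can be shown with a minimal hypothetical sketch: the friendly wrapper (B) hides the knobs of the raw layer (C), so code that needs them (A) ends up bypassing the wrapper entirely.

```python
# Hypothetical illustration of the A -> B -> C layering problem.

# C: the underlying capability, with full control over its behaviour.
def fetch(url, timeout=30, retries=0):
    """The 'raw' layer: every knob is reachable."""
    return {"url": url, "timeout": timeout, "retries": retries}

# B: a friendly wrapper that only exposes the common case.
def easy_fetch(url):
    """Convenient, but the timeout/retry knobs are no longer reachable."""
    return fetch(url)

# A: your code. You need a 5-second timeout with retries, but easy_fetch
# can't express that, so you bypass the wrapper and call the raw layer.
result = fetch("https://example.com", timeout=5, retries=2)
```

The wrapper is genuinely easier for the common case; the trouble starts the first time you need an option it chose not to expose.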
On top of that, SaaS takes your power away. A bug could be quite small, but if a vendor doesn't bother to fix it, it can still ruin your life for a long time. I've seen small bugs get sandbagged by vendors for months. If you have the source code you can fix problems like these in a day or two, rather than waiting for some nebulous backlog to work down.
My experience with SaaS is that products start out fine, when the people building them are hungry and responsive and the products are slim and well priced. Then they get bloated trying to grow market share, they lose focus and the builders become unresponsive, while increasing prices.
At this point you wish you had just used open source, but now it's even harder to switch because you have to jump through a byzantine data exfiltration process.
Maybe write some tests, have great software development practices, and, most importantly, have people who care about getting the details right. Honestly, there's no reason for software to be like this, is there? I don't know how much off-the-shelf ERP software you have used, but I wouldn't exactly describe that as flawless and bug-free either!
To attempt to summarize the debate, there seem to be three prevailing schools of thought:
1. Status Quo + AI. SaaS companies will adopt AI and not lose share. Everyone keeps paying for the same SaaS plus a few bells and whistles. This seems unlikely given AI makes it dramatically cheaper to build and maintain SaaS. Incumbents will save on COGS, but have to cut their pricing (which is a hard sell to investors in the short term).
2. SaaS gets eaten by internal development (per OP). Unlikely in the short/medium term (as most commenters highlight). See: complete cloud adoption will take 30+ years, which shows that even obviously positive-ROI development often does not happen. This view reminds me a bit of the (in)famous Dropbox HN comment (1): the average HN commenter is 100x more inclined to hack on and maintain their own tool than the market is.
benzible (commenter) elsewhere said this well - "The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon."
This same logic explains why external boutique software beats internal builds:
3. AI helps boutique-software flourish because it changes vendor economics (not buyer economics). Whereas previously an ERP for a specific niche industry (e.g. wealth managers who only work with Canadian / US cross-border clients) would have had to make do with a non-specific ERP, there will now be a custom solution for them. Before AI, the $20MM TAM for this product would have made it a non-starter for VC backed startups. But now, a two person team can build and maintain a product that previously took ten devs. Distribution becomes the bottleneck.
This trend has been ongoing for a while -- Toast, Procore, Veeva -- AI just accelerates it.
If I had to guess, I expect some combination of all three - some incumbents will adapt well, cut pricing, and expand their offering. Some customers will move development in house (e.g. I have already seen several large private equity firms creating their own internal AI tooling teams rather than pay for expensive external vendors). And there will be a major flourishing of boutique tools.
(1) https://news.ycombinator.com/item?id=9224
Quite honestly, this is exactly what I am currently doing - identified a market with probably $50mm global TAM. Bootstrapping with first design partners currently.
One thing I didn't mention is that there are often a few sleepy legacy SaaS players (often public) in these niche markets who don't have the chops to add AI to their product and may be a good takeout / exit down the line. Won't be for billions, but if you bootstrap, that doesn't really matter.
What _has_ surprised me though is just how many companies are building (or considering building) 'internal' tooling to replace SaaS they are not happy with. These are not the classic HN types whatsoever. I think when non-technical people get to play with AI software dev they go 'wow, so why can't we do everything like this?'
I think your point 3 is really interesting too.
But yes, the point of my article (hopefully) wasn't that SaaS is dead overnight, but that some thin/lower-"quality" products are potentially in real trouble.
People will still buy and use expertly designed products that are really nice to use. But a lot of B2B SaaS is not that; it's a slow, clunky mess that makes you want to scream!
I agree - it is surprising how many are looking at doing in house.
I think what they miss (and I say this as someone who spent the early part of his career outside of tech) is an understanding of what goes into maintaining software products, and this ignorance will be short-lived. I was honestly shocked how complex it was to build and maintain my first web app. So business types (like I was) who are used to 'maintaining' an Excel spreadsheet and a PowerPoint deck they update every quarter may think of SaaS like a software license they can build once and use forever. They have no appreciation of the depth of challenges that come with maintaining anything in production.
My working model is that of no-code: many non-tech types experimented with Bubble etc., but quickly realized that tech products are far deeper than the (heavily curated) surface-level experience the user has. It is not like an Excel model, where the UI is the codebase. I expect vibe-coders will find the same thing.
I have on several occasions built my own versions of tools, only to cave and buy a $99 a year off the shelf version because the maintenance time isn't worth it. Non-tech folks have no idea of the depth of pain of maintaining any system.
They will learn. Will be interesting to see how it plays out.
Building is only one part. Maintaining and using/running is another.
Onboarding for both technical and functional teams takes longer, as the ERP is different from every other company's. Feature creep is an issue; after all, who can say no to more bespoke features? Maybe roll CRM, reporting, and analytics into one. Maintenance costs and priorities now become more important.
We have also explored AI agents in this area. Person-specific tasks are great use cases. Creating mockups and wireframes? AI does that well, and you still have a human in the loop. Enterprise-level tasks, like, say, book closing in a large company's ERP? AI makes a lot of mistakes.
This means that if I sell it to your business for less than your salary, you will get fired and the business will use my version.
Why? Because mine will always be better, as 10 people work on it vs. you alone.
Internal versions will never be better or cheaper than SaaS (unless you are doing some tiny and very specific automation).
They can be better than the current solution, but it's only a matter of time before someone makes a SaaS equal to or better than what you do internally.
Sure, almost anything will be better and cheaper than HubSpot.
But with AI smaller CRMs that are hyper focused on businesses like yours will start popping up and eating its market.
Anything bigger than a toy project will always be cheaper/better to buy.
AI-generated code still requires software engineers to build, test, debug, deploy, ensure security, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.
Yeah, it's a fundamental misunderstanding of economies of scale. If you build an in-house app that does X, you incur 100% of the maintenance costs. If you're subscribed to a SaaS product, you're paying roughly 1/N of the maintenance costs, where N is the number of customers.
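The 1/N amortization argument, as a sketch with purely hypothetical numbers:

```python
# Hypothetical numbers: a SaaS vendor amortizes maintenance over N customers.
MAINTENANCE_COST = 500_000  # vendor's annual cost to maintain the product
MARGIN = 2.0                # vendor's markup over that cost

def saas_price_per_customer(n_customers):
    """Each customer pays ~1/N of the maintenance cost, times the margin."""
    return MAINTENANCE_COST / n_customers * MARGIN

# In-house, you pay 100% of a (presumably smaller) maintenance cost yourself.
in_house_cost = 200_000

for n in (10, 100, 1000):
    print(f"N={n}: ${saas_price_per_customer(n):,.0f}/yr")
# Past some customer count, the SaaS price falls far below any in-house cost,
# even with the vendor's margin on top.
```

The same arithmetic also shows the counter-case: for niche products with very few customers, your 1/N share of the vendor's costs can exceed what a lean in-house build would cost, which is the opening the "boutique software" school of thought above is pointing at.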
I only see AI-generated code replacing things that never made sense as a SaaS anyway. It's telling the author's only concrete example of a replaced SaaS product is Retool, which is much less about SaaS and much more about a product that's been fundamentally deprecated.
Wake me up when we see swaths of companies AI-coding internal Jira ("just an issue tracker") and Github Enterprise ("just a browser-based wrapper over git") clones.
This shouldn't be the goal. The goal should be to build an AI that can tell you what is done and what needs to be done, i.e. replace Jira with natural interactions. An AI that can "see" and "understand" your project: one that can see it, understand it, build it, and modify it. I know this is not happening for the next few decades or so.
The difference is that an AI-coded internal Jira clone is something that could realistically happen today. Vague notions of AI "understanding" anything are not currently realistic and won't be for an indeterminate amount of time, which could mean next year, 30 years from now, or never. I don't consider that worth discussing.
Are you, as a dev, still going to pay for analytics and dashboards that you could have Claude spin up in 5 minutes instead?
Generating code is one part of software engineering, which in turn is a small part of SaaS.
Do you pay for OpenTelemetry? How is this related?
So, I ask again - how do you know that the service you're paying for is all of those things?
How do you know anything? How do you know the bank won't lose your money? How do you know the bank note you hold is worth what it says? How do you know?
Most SaaS products could be replaced by a form + spreadsheet + email workflow, and the reason they aren't is that people don't want to be dealing with a hacky solution. Devs can hack together a nice little webapp instead of a network of spreadsheets, but it's still a hack. Factoring in AI assistance, perhaps SaaS is now competing with "something I hacked together in a week" as opposed to "something I hacked together in a month," but it's a hack either way.
I am absolutely going to pay for analytics and dashboards, because I don't want the operational concerns of my Elasticsearch analytics cluster getting in the way of the alarm that goes off when my primary database catches fire. Ops visibility is too important to be a hack, regardless of how quickly I could implement that hack.
Not to mention the author appears to run a 1-2 person company, so ... yeah. AI thought leadership ahoy.
"With AI, that equation is now changing. I anticipate that within 5 years autonomous coding agents will be able to rapidly and cheaply clone almost any existing software, while also providing hosting, operations, and support, all for a small fraction of the cost.
This will inevitably destroy many existing businesses. In order to survive, businesses will require strong network effects (e.g. marketplaces) or extremely deep data/compute moats. There will also be many new opportunities created by the very low cost of software. What could you build if it were possible to create software 1000x faster and cheaper?"
Paul Buchheit
https://x.com/paultoo/status/1999245292294803914
At the same time, do any of us think a small sassy SaaS like Bingo Card Creator could take off now? :-)
https://training.kalzumeus.com/newsletters/archive/selling_s...
SaaS maintenance isn't about upgrading packages, it's about accountability and a point of contact when something breaks along with SLAs and contractual obligations. It isn't because building a kanban board app is hard.
The problem is, nobody knows how much AI will improve or how much it will cost if it does.
That uncertainty alone is very problematic and I think is being underestimated in terms of its impact on everything it can potentially touch.
The summary is that for agents to work well they need clear visibility into everything, and putting the data behind a GUI or a poorly maintained CLI is a hindrance. Combined with how structured CRUD apps are, and how reliably agents can write good CRUD apps, there's no reason not to have your own. Wins all around: not paying for it, having a better understanding of your processes, and letting agents handle workflows.
- anything that requires very high uptime
- very high volume systems and data lakes
- software with significant network effects
- companies that have proprietary datasets
- regulation and compliance is still very important
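The structured CRUD apps mentioned a few lines up can be sketched minimally; this in-memory store (all names hypothetical) is roughly the shape of app that agents generate reliably:

```python
# Minimal in-memory CRUD store, the shape of app agents reliably produce.
import itertools

class CrudStore:
    def __init__(self):
        self._rows = {}
        self._ids = itertools.count(1)  # auto-incrementing row IDs

    def create(self, **fields):
        """Insert a row and return its new ID."""
        row_id = next(self._ids)
        self._rows[row_id] = dict(fields)
        return row_id

    def read(self, row_id):
        """Return the row dict, or None if it doesn't exist."""
        return self._rows.get(row_id)

    def update(self, row_id, **fields):
        """Merge fields into an existing row; return True on success."""
        if row_id in self._rows:
            self._rows[row_id].update(fields)
            return True
        return False

    def delete(self, row_id):
        """Remove a row; return True if it existed."""
        return self._rows.pop(row_id, None) is not None

# Usage: the kind of internal tracker the comment has in mind.
store = CrudStore()
ticket = store.create(title="Renew SSL cert", status="open")
store.update(ticket, status="done")
```

A real internal tool would put a database and an HTTP layer around this, but the domain logic is exactly this thin, which is why the comment argues agents handle it well.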
Then there's this project that lets you generate static sites from Svelte components (matching protobuf structures), Markdown (documentation), and global template variables: https://github.com/accretional/statue
A lot of the SaaS ecosystem actually has rather simple domain logic and oftentimes doesn't even model data very well, or at least not in a way that matches their clients/users mental models or application logic. A lot of the value is in integrations, or the data/scaling, or the marketing and developer experience, or some kind of expertise in actually properly providing a simple interface to a complex solution.
So why not just create a compact universal representation of that? From there, it's not so big a leap to go beyond eating SaaS to eating integrations, migration costs/bad moats, and the marketing/documentation/wrapper layer.