Why Haven't Local-First Apps Become Popular?
Posted 3 months ago · Active 3 months ago
Source: marcobambini.substack.com · Tech story · High profile
Tone: calm / mixed · Debate: 80/100
Key topics: Local-First Apps, Decentralized Systems, Software Development
The article discusses why local-first apps haven't become popular, sparking a discussion on the technical, economic, and social reasons behind this phenomenon.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 16m after posting
Peak period: 85 comments in 0-3h
Average per period: 14.5 comments
Comment distribution: 160 data points (chart not shown)
Key moments
1. Story posted: Sep 22, 2025 at 9:17 AM EDT (3 months ago)
2. First comment: Sep 22, 2025 at 9:34 AM EDT, 16m after posting
3. Peak activity: 85 comments in the 0-3h window, the hottest stretch of the conversation
4. Latest activity: Sep 24, 2025 at 2:26 AM EDT (3 months ago)
ID: 45333021 · Type: story · Last synced: 11/20/2025, 8:09:59 PM
https://en.wikipedia.org/wiki/HCL_Notes
If that was deterministic, that was a very bad idea.
The future? I thought all apps were like this before this web2.0 thing ruined it.
'Offline-first' is trying to combine the benefits of both approaches.
While this may be true, the central issue is a different one: most users and/or developers are not very privacy-conscious, so they don't consider it to be worth the effort to solve the problems that go in hand with such distributed systems.
Someone could write a whole slew of changes locally and someone else can eliminate all their work because they have an outdated copy locally and they made a simple update overriding the previous person's changes.
That's why git has merges and conflicts - it doesn't want to lose changes and it can't automatically figure out what should stay in case of a conflict.
Compared to what?
All you need is a way to resolve conflicts and you can serialize any distributed set of actions to a log.
If all you need to solve a problem is something impossible, then you haven't solved it.
Everything is hard if you don't know how to do it.
I.e., most people don’t care.
Local-first is optimal for creative and productivity apps. (Conversely, non-local-first are terrible for these.)
But most people are neither creative nor optimally productive (or care to be).
It's not that they "don't care", but that they don't know this is an issue that needs to be cared about. Like privacy: they don't think they need it until they do, and by then it's too late.
Local-first and decentralized apps haven't become popular because SaaS has a vastly superior economic model, and more money means more to be invested in both polish (UI/UX) and marketing.
All the technical challenges of decentralized or local-first apps are solvable. They are no harder than the technical challenges of doing cloud at scale. If there was money in it, those problems would be solved at least as well.
Cloud SaaS is both unbreakable DRM (you don't even give the user the code, sometimes not even their data) and an impossible to evade subscription model. That's why it's the dominant model for software delivery, at least 90% of the time. The billing system is the tail that wags the dog.
There are some types of apps that have intrinsic benefits to being in the cloud, but they're the minority. These are apps that require huge data sets, large amounts of burstable compute, or that integrate tightly with real world services to the point that they're really just front-ends for something IRL. Even for these, it would be possible to have only certain parts of them live in the cloud.
There’s also an upcoming generation that doesn’t know what a filesystem is which also doesn’t help matters.
This is why I sometimes think it's hopeless. For a while there -- 90s into the 2000s -- we were building something called "computer literacy." Then the phones came out and that stopped completely. Now we seem to have inverted the old paradigm. In that era people made jokes about old people not being able to use tech. Today the older people (30s onward) are the ones who can use tech and the younger people can only use app centric mobile style interfaces.
The future is gonna be like: "Hey grandpa, can you help me figure out why my wifi is down?"
Local first tends to suck in practice. For example, Office 365 with documents in the cloud is so much better for collaborating than dealing with "conflicted copy" in Dropbox.
It sucks that you need an internet connection, but I think that drawback is worth it for never having to manually merge a sync conflict.
That has nothing to do with where the code lives and runs. There are unique technical challenges to doing it all at the edge, but there are already known solutions to these. If there was money in it, you'd have a lot of local first and decentralized apps. As I said, these technical challenges are not harder than, say, scaling a cloud app to millions of concurrent users. In some cases they're the same. Behind the scenes in the cloud you have all kinds of data sync and consistency enforcement systems that algorithmically resemble what you need for consistent fluid interaction peer to peer.
When multiple people work on a document at the same time, you will have conflicts that will become very hard to resolve. I have never seen a good UI for resolving non-trivial changes. There is no way to make this merging easy.
The only way to avoid the merge problem is to make sure that the state is synchronised before making changes. With cloud based solutions this is trivial, since the processing happens on the server.
The local first variant of this would be that you have to somehow lock a document before you can work on it. I worked on a tool that worked like that in the early 2000s. Of course that always meant that records remained locked, and it was a bit cumbersome. You still needed to be online to work so you could lock the records you needed.
There are multiple ways to do this, like CRDTs plus Raft-based leader signaling for conflict resolution. The latter signaling requires almost no bandwidth. Raft-based time-skew adjustment works too, if your problem domain can accept a small amount of uncertainty.
Like I said a lot of these same kinds of algorithms are used cloud side. All the big cloud stuff you use is a distributed system. Where the code runs is irrelevant. The cloud is just another computer.
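To make the kind of convergence being described concrete, here is a minimal sketch of a last-writer-wins register, one of the simplest CRDTs. It is illustrative only (not from any particular library; the names are invented); real local-first systems layer richer structures such as lists, maps, and counters on the same merge idea.

```typescript
// Minimal last-writer-wins (LWW) register. Each replica stores the value
// together with a (timestamp, replicaId) pair; merging two copies picks the
// larger pair. Deterministic tie-breaking on replicaId means every replica
// converges to the same value no matter what order updates arrive in.
interface LwwRegister<T> {
  value: T;
  timestamp: number;  // e.g. a hybrid logical clock or adjusted wall clock
  replicaId: string;  // breaks ties when timestamps collide
}

function setValue<T>(value: T, replicaId: string): LwwRegister<T> {
  return { value, timestamp: Date.now(), replicaId };
}

function merge<T>(a: LwwRegister<T>, b: LwwRegister<T>): LwwRegister<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.replicaId > b.replicaId ? a : b; // same instant: same winner everywhere
}

// Two devices edit the same field offline, then sync in either order:
const onLaptop: LwwRegister<string> = { value: "draft A", timestamp: 100, replicaId: "laptop" };
const onPhone: LwwRegister<string> = { value: "draft B", timestamp: 105, replicaId: "phone" };
console.log(merge(onLaptop, onPhone).value); // "draft B"
console.log(merge(onPhone, onLaptop).value); // "draft B" -- order doesn't matter
```

The leader signaling mentioned above would sit alongside something like this, stepping in only for the cases an automatic merge rule cannot decide sensibly.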
There is no way to tell which of the changes should win.
That's why most applications decided to require users to be online for edits. When you have to be online, the chance of simultaneous edits becomes so small that you can just show an error message instead of trying to merge.
The online requirement also ensures that you are notified of conflicts immediately, which is vastly preferable to users. Nothing worse than working on a document for hours and discovering someone else also worked on the same document and now you have two edited copies that someone needs to consolidate.
That's the reason why offline first is becoming increasingly unpopular.
> The long-term direction is for Epicenter to become a foundational data framework for building apps where users truly own and control their own data. In other words, the Epicenter framework becomes an SSO for AI applications. Users use Epicenter to plug into any service, and they'll own their data, choose their models, and replace siloed apps with interoperable alternatives. Developers will use our framework to build highly customized experiences for said users. To pursue that goal seriously, we also need a sustainable model that honors our commitment to open source.
> ...The entire Epicenter project will be available under a copyleft license, making it completely free for anyone building open-source software. On the other hand, if a company or individual wants to incorporate the framework into their closed-source product, they can purchase a license to build with Epicenter without needing to open source their changes and abide by the copyleft terms. This is the model used by Cal.com, dub.sh, MongoDB, etc.
[1]: https://hw.leftium.com/#/item/44942731
[2]: https://github.com/epicenter-md/epicenter/issues/792
On the business model, dual license works if you de‑risk the integration: stable plugin ABI, permissive SDKs, and paid “closed‑source embedding” tier with SLAs and on‑prem support. Where I’ve seen revenue actually land: (1) paid sync/relay with zero data retention, (2) enterprise key management and policy controls, and (3) priority support/migration bundles. One caution: “privacy” alone doesn’t convert; solve a concrete ops pain. I built CrabClear to handle the obscure brokers others miss, and the lesson was the same—privacy sells when it eliminates a specific, recurring headache. If Epicenter can quantify one such headache and make it vanish out‑of‑the‑box, the model becomes much easier to sustain.
Also, I don't understand why so many people on HN are concentrating on the simultaneous-editing scenario; for most ordinary people this is actually quite a rare event, especially in their private lives. Google Keep on Android seems to work pretty well in this context: my family uses it to share shopping lists and other notes very successfully, even though several of us are online only intermittently.
But also nowadays you want to have information from other computers. Everything from shared calendars to the weather, or a social media entry. There's so much more you can do with internet access, you need to be able to access remote data.
There's no easy way to keep sync, either. Look at CAP theorem. You can decide which leg you can do without, but you can't solve the distributed computing "problem". Best is just be aware of what tradeoff you're making.
Git has largely solved asynchronous decentralized collaboration, but it requires file formats that are ideally as human understandable as machine-readable, or at least diffable/mergable in a way where both humans and machines can understand the process and results.
Admittedly git's ergonomics aren't the best or most user friendly, but it at least shows a different approach to this that undeniably works.
People say git is too "complex" or "complicated", but I never saw end users succeeding with CVS or Mercurial or SVN or Visual SourceSafe the way they do with Git.
"Enterprise" tools (such as business rules engines) frequently prove themselves "not ready for the enterprise" because they don't have proper answers to version control, something essential when you have more than one person working on something. People say "do you really need (the index)" or other things git has but git seemed to get over the Ashby's law threshold and have enough internal complexity to confront the essential complexity of enterprise version control.
Yes, but then you are not using a "local first" tool but a typical server based workflow.
Fortunately, a lot of what chafes with git are UX issues more than anything else. Its abstractions are leaky, and its default settings are outright bad. It's very much a tool built by and for kernel developers with all that entails.
The principle itself has a lot of redeemable qualities, and could be applied to other similar syncing problems without most of the sharp edges that come with the particular implementation seen in git.
The merge workflow is not inherently complicated or convoluted. It's just that git is.
When DVCSes came out there were three contenders: darcs, mercurial, and git.
I evaluated all three and found darcs was the most intuitive, but it was very slow. Git was a confused mess, and hg was a great compromise between being fast and having a simple, intuitive merge model.
I became a big hg advocate but I eventually lost that battle and had to become a git expert. I spent a few years being the guy who could untangle the mess when a junior messed up a rebase merge then did a push --force to upstream.
Now I think I'm too git-brained to look at the problem with a clear head anymore, but I think it's a failure mostly attributable to git that dvcs has never found any uptake outside of software development, and the fact that we as developers see dvcs as a "solved problem" beyond building more tooling around git is a failure of imagination.
What makes merging in git complicated? And what's better about darcs and mercurial?
(PS Not disagreeing just curious, I've worked in Mercurial and git and personally I've never noticed a difference, but that doesn't mean there isn't one.)
[0] Where CRDTs spent most of a couple of decades shooting for the stars and assuming "Conflict-Free" was manifest destiny/fate rather than a dream in a cruel, pragmatic world of conflicts, Darcs was built for source control and so knew emphatically that conflicts weren't avoidable. We're finally at the point where CRDTs are starting to take seriously that conflicts are unavoidable in real-life data and trying new pragmatic approaches to "Conflict-Infrequent" rather than "Conflict-Free".
Auto-merging code is also a double-edged sword - just because you can merge something at the VCS-level does not mean that the result is sensible at the format (programming language) or conceptual (user expectation) levels.
It wasn't just "auto-merging" that is darcs' superpower; it's how many things that today in git would need to be handled as merges aren't even considered merges by darcs, because its data structure doesn't treat them as such.
Darcs is much better than git at cherry picking, for instance, where you take just one patch (commit) from the middle of another branch. Darcs could do that without "history rewriting" in that the patch (commit) would stay the same even though its "place in line" was drastically moved. That patch's ID would stay the same, any signatures it might have would stay the same, etc, just its order in "commit log" would be different. If you later pulled the rest of that branch, that also wouldn't be a "merge" as darcs would already understand the relative order of those patches and "just" reorder them (if necessary), again without changing any of the patch contents (ID, signatures, etc).
Darcs also has a few higher level patch concepts than just "line-by-line diffs", such as one that tracks variable renames. If you changed files in another branch making use of an older name of a variable and eventually merge it into a branch with the variable rename, the combination of the two patches (commits) would use the new name consistently, without a manual merge of the conflicting lines changed between the two, because darcs understands the higher level intent a little better there (sort of), and encodes it in its data structures as a different thing.
Darcs absolutely won't (and knows that it can't) save you from conflicts and manual merge resolution; there are still plenty of opportunities for those in any normal, healthy codebase, but it gives you tools to focus on the ones that matter most. Also yes, a merge tool can't always verify that the final output is correct or builds (the high-level rename tool, for instance, is still basically a find-and-replace and can over-correct on false positives and miss false negatives). But it's still quite relevant to the types of merges you need to resolve in the first place, how often they occur, and what qualifies as a merge operation at all.
Though maybe you also are trying to argue the semantics of what constitutes a "merge", "conflicts", and an "integration"? Darcs won't save you from "continuous integration" tools either, but it will work to save your continuous integration tools from certain types of history rewriting.
"At the end of the day" the state-of-the-art of VCS on-disk representation and integration models and merge algorithms isn't a solved problem and there are lots of data structures and higher level constructs that tools like git haven't applied yet and/or that have yet to be invented. Innovation is still possible. Darcs does some cool things. Pijul does some cool things. git was somewhat intentionally designed to be the "dumb" in comparison to darcs' "smart", it is even encoded in the self-deprecating name (from Britishisms such as "you stupid git"). It's nice to remind ourselves that while git is a welcome status quo (it is better than a lot of things it replaced like CVS and SVN), it is not the final form of VCS nor some some sort of ur-VCS which all future others will derive and resembles all its predecessors (Darcs predates git and was an influence in several ways, though most of those ways are convenience flags that are easy to miss like `git add -p` or tools that do similar jobs in an underwhelming fashion by comparison like `git cherry-pick`).
For local-first async collaboration on something that isn't software development, you'd likely want something that is a lot more polished, and has a much more streamlined feature set. I think ultimately very few of git's chafing points are due to its model of async decentralized collaboration.
Apparently 'jujutsu' makes the git workflow a bit more intuitive. It's something that runs atop git, although I don't know how much it messes up the history if you read it out with plain git.
All in all I'm pretty happy with git compared to the olden days of subversion. TortoiseSVN was a struggle haha.
Not saying this would be in any way easy, but I'm also not seeing any inherent obstacles.
> It requires file formats that are ideally as human understandable as machine-readable, or at least diffable/mergable in a way where both humans and machines can understand the process and results.
What you're proposing is tracking and merging operations rather than the result of those operations (which is roughly the basis of CRDTs as well).
I do think there are some problems with that approach as well, though (e.g., what do you do about computationally expensive changes like 3D renders?). But for the parts of the app that fit well into this model, we're already seeing collaborative editing implemented this way; e.g., both Lightroom and Photoshop support it.
To be clear though, I think the only sensible way to process merges in this world is via a GUI application that can represent the artifact being merged (e.g., visual/audio content). So you still wouldn't use Git to merge conflicts with this approach (a simple reason why: what's to stop the underlying binary asset that a stack of operations is being applied to from having conflicting changes, if you're just using Git?). Even some non-binary edits can't be represented as "human readable" text, e.g., adding a layer with a vector drawing of a rabbit.
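A rough sketch of what "tracking operations rather than results" can look like for a binary asset; the operation names and fields here are hypothetical, purely to show the shape of the idea. Each client records what it did, and a deterministic ordering lets every client replay the same merged sequence against the shared base asset.

```typescript
// Hypothetical edit operations on an image document. Clients log what they
// did rather than the resulting pixels.
type EditOp =
  | { kind: "crop"; x: number; y: number; width: number; height: number }
  | { kind: "exposure"; stops: number }
  | { kind: "addLayer"; layerId: string; svgPath: string };

interface LoggedOp {
  op: EditOp;
  timestamp: number;  // when the client performed the edit
  clientId: string;   // tie-breaker so the merged order is deterministic
}

// Merge two offline logs into one canonical order. Because the order is a
// pure function of the entries themselves, every client that eventually sees
// both logs replays the identical sequence.
function mergeLogs(a: LoggedOp[], b: LoggedOp[]): LoggedOp[] {
  return [...a, ...b].sort(
    (x, y) => x.timestamp - y.timestamp || x.clientId.localeCompare(y.clientId)
  );
}
```

This doesn't remove the problem raised above: a replayed log can still produce a result neither user intended (one client's crop may invalidate another's), which is why a GUI that can preview the merged artifact still matters.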
imagine asking a normie to deal with a merge conflict
It's literally entirely on a computer. If that somehow makes it harder to answer basic human questions about the complex things we're using it for, well that means we've got a problem folks.
The problem is with comprehensibility, and it's entrenched (because the only way for a piece of software to outlive its 50 incompatible analogs and reach mass recognition is to become entrenched; not to represent its domain perfectly)
The issue lies in how the tools that we've currently converged on (e.g. Git) represent the semantics of our activity: what information is retained at what granularity determines what workflows are required of the user; and thence what operations the user comes to expect to be "easy" or "hard", "complex" or "simple". (Every interactive program is a teaching aid of itself, like how when you grok a system you can whip together a poor copy of it in a couple hours out of shit and sticks)
Consider Git's second cousin the CRDT, where "merges" are just a few tokens long, so they happen automatically all the time with good results. Helped in application context by how a "shared editor" interface is considerably more interactive than the "manually versioned folder" approach of Git. There's shared backspace.
Git was designed for emailing patches over dialup, where it obviously pays to be precise; and it's also awesome at enabling endless bikeshedding on projects far less essential than the kernel, thanks to the proprietary extension that is Pull Requests.
Probably nobody has any real incentive to pull off anything better, if the value proposition of the existing solution starts with "it has come to be expected". But it's not right to say it's inherently hard, some of us have just become used to making it needlessly hard on ourselves, and that's whose breakfast the bots are now eating (shoo, bots! scram)
Three-way merges in general are easier to write than CRDTs, as the article suggests. They are also useful for far more than just the file formats you would think to put under source control in git; it's a relatively easy algorithm to apply to any data structure you might want to try.
For a hobby project I took a local-first-like approach even though the app is an MPA, partly just because I could. It uses a really simple three-way merge technique: it stores the user's active "document" (a JSON document) and the last known saved document. When it pulls an updated remote "document", it can very simply "replay" the changes between the active document and the last known saved document on top of the remote document to create a new active document. This "app" currently only has user-owned documents, so I don't generally compute the difference between the remote update and the last saved version to mark conflicted fields for the user, but that would be the easy next step.
In this case the "documents" are in the JSON sense of complex schemas (including Zod schemas) and the diff operation is a lot of very simple `===` checks. It's an easy to implement pattern and feels smarter than it should with good JSON schemas.
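A stripped-down sketch of that technique, assuming flat JSON documents and field-level `===` comparisons (nested schemas would need a recursive walk, but the shape of the algorithm is the same): `base` is the last known saved document, `local` is the active draft, and `remote` is the freshly pulled copy.

```typescript
type Doc = Record<string, unknown>;

// Three-way merge: fields the user didn't touch follow the remote value,
// fields only the user changed keep the local value, and fields both sides
// changed differently are reported as conflicts for the UI to surface.
function threeWayMerge(base: Doc, local: Doc, remote: Doc) {
  const merged: Doc = {};
  const conflicts: string[] = [];
  const keys = new Set([...Object.keys(base), ...Object.keys(local), ...Object.keys(remote)]);

  for (const key of keys) {
    const localChanged = local[key] !== base[key];
    const remoteChanged = remote[key] !== base[key];

    if (localChanged && remoteChanged && local[key] !== remote[key]) {
      conflicts.push(key);       // both sides edited the same field
      merged[key] = local[key];  // keep the draft and let the UI flag it
    } else {
      merged[key] = localChanged ? local[key] : remote[key];
    }
  }
  return { merged, conflicts };
}
```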
The complicated parts, as always, are the User Experience of it, more than anything. How do you try to make it obvious that there are unsaved changes? (In this app: big Save buttons that go from disabled states to brightly colored ones.) If you allow users to create drafts that have never been saved next to items that have at least one save, how do you visualize that? (For one document type, I had to iterate on Draft markers a few times to make it clearer something wasn't yet saved remotely.) Do you need a "revert changes" button to toss a draft?
I think sometimes using a complicated sync tool like CRDTs makes you think you can escape the equally complicated User Experience problems, but in the end the User Experience matters more than whatever your data structure is, no matter how complicated your merge algorithm is. I think it's also easy to see all the recommendations for complex merge algorithms like CRDTs (which absolutely have their place and are very cool for what they can accomplish) and miss that some of the ancient merge algorithms are simple, dumb, easy-to-write patterns.
So, sure, if you are saying "people trained to use git" there, I agree. And you wind up having all sorts of implicit rules and guidelines that you follow to make it more manageable.
This is a lot like saying roads have solved how to get people using dangerous equipment on a regular basis without killing everyone. Only true if you train the drivers on the rules of the road. And there are many rules that people wind up internalizing as they get older and more experienced.
Do I? What sort of information ...
> shared calendars
OK, yes, that would be a valid use: I can imagine some stressed executive with no signal in a tunnel wanting to change some planned event, but also having the change superseded by an edit somebody else makes a few minutes later.
> the weather
But I don't usually edit the weather forecast.
> a social media entry
So ... OK ... because it's important that my selfie taken in a wilderness gets the timestamp of when I offline-pretend-posted it, instead of when I'm actually online and can see replies? Why is that? Or is the idea that I should reply to people offline while pretending that they can see, and then much later when my comments actually arrive they're backdated as if they'd been there all along?
It's a far, far more complicated mental model than simply posting it. It'd be a huge barrier for normal users (even tech-savvy users, I'd say). People want to post it online and that's it. No one wants an app that requires its users to be constantly aware of syncing state unless they really have no choice. We pretend we can just step on the gas instead of mixing the gas with air and igniting it with a spark plug, until we need to change the damn plug.
At work: I write code, which is in version control. I write design documents (that nobody reads), and put them on a shared computer. I write presentations (you would better off sleeping through them...) and put them on a share computer. Often the above are edited by others.
Even at home, my grocery list is shared with my wife. I look up recipes online from a shared computer. My music (that I ripped from CDs) is shared with everyone else in the house. When I play a game I wish my saved games were shared with other game systems (I haven't had time since I had kids, more than 10 years ago). When I take notes about my kid's music lessons they are shared with my wife and kids...
> Local-first was the first kind of app. Way up into the 2000s, you'd use your local excel/word/etc, and the sync mechanism was calling your file annual_accounts_final_v3_amend_v5_final(3).xls
With the exception of messenger clients, Desktop apps are mostly "local-first" from day one.
At the time you're beginning to think about desktop behavior, it's also worth considering whether you should just build native.
To be precise, these apps were not local-_first_, they were local-_only_. Local-first implies that the app first and foremost works locally, but also that it, secondly, is capable of working online and non-locally (usually with some syncing mechanism).
Sure there is, you just gotta exploit the multiverse[1]. Keep all the changes in their own branch aka timeline, and when there's some perceived conflict you just say "well in the timeline I'm from, the meeting was moved to 4pm".
[1]: https://www.reddit.com/r/marvelstudios/comments/upgsuk/expla...
It was the first practical manner to downsize mainframe applications.
There's no easy way to merge changes, but if you design around merging, then syncing becomes much less difficult to solve.
It started with single computers, but they were so expensive nobody had them except labs. You wrote the program with your data, often toggling it in with switches.
From there we went to batch processing, then shared computers, then added networking, with file sharing and RPC. Then the personal computer came and it was back to toggling your own programs, but soon we were running local apps, and now our computers are again mostly "smart terminals" (as opposed to dumb terminals), and the data is on shared computers again.
Sometimes we take data off the shared computer, but there is no perfect solution to distributed computing, and since networks are mostly reliable nobody wants that anyway. What we do want is control of our data, and that we don't get (mostly).
My last job was balls deep in the Google ecosystem, and all the collaborative editing, syncing and versioning and whatnot did nothing to stop that practice.
On a related note, I used to hate Gmail (I still do, but I used to too), until I had to use Outlook and all the other MS crap at my new job. Jesus christ. WTF even is Teams? Rhetorical question; I don't care.
When you do serverside stuff you control everything. What users can do, and cannot do.
This lets you both reduce support costs, since it's easier to resolve issues even with an ad-hoc DB query, and, more importantly, it lets you retroactively lock more and more useful features behind a paywall. This is basically The DRM for your software, with an extra bonus: you don't even have to compete with previous versions of your own software!
I want my local programs back, but without regulatory change it will never happen.
Having built a sync product, it is dramatically simpler (from a technical standpoint) to require that clients are connected, send operations immediately to central location, and then succeed / fail there. Once things like offline sync are part of the picture, there's a whole set of infrequent corner cases that come in that are also very difficult to explain to non-technical people.
These are silly things like: If there's a network error after I sent the last byte to a server, what do I do? You (the client that made the request) don't know if the server actually processed the request. If you're completely reliant on the server for your state, this problem (cough) "doesn't exist", because when the user refreshes, they either see their change or they don't. But, if you have offline sync, you need to either have the server tolerate a duplicate submission, or you need some kind of way for the client to figure out that the server processed the submission.
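One common way to handle that corner case is for the client to attach a stable request ID and retry, and for the server to treat that ID as an idempotency key. A sketch under those assumptions (the header name, endpoint, and `applyChange` are invented for illustration):

```typescript
// Client: generate one ID per logical submission and reuse it on every retry,
// so the server can recognize a resend of a request it may already have applied.
async function submitWithRetry(url: string, payload: unknown, maxAttempts = 5) {
  const requestId = crypto.randomUUID(); // stays the same across retries
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json", "Idempotency-Key": requestId },
        body: JSON.stringify(payload),
      });
    } catch {
      if (attempt === maxAttempts) throw new Error("submission failed after retries");
      await new Promise((r) => setTimeout(r, 2 ** attempt * 100)); // simple backoff
    }
  }
}

// Server (shape only): remember which IDs were processed and short-circuit repeats.
const processed = new Map<string, unknown>(); // in practice: a durable store

function applyChange(payload: unknown): unknown {
  // Application-specific: persist the change and return what the client needs.
  return { ok: true, payload };
}

function handleSubmission(requestId: string, payload: unknown) {
  if (processed.has(requestId)) return processed.get(requestId); // duplicate retry
  const result = applyChange(payload);
  processed.set(requestId, result);
  return result;
}
```

The client's "did my write land?" question then reduces to retrying until it gets a definitive answer, instead of guessing.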
The bigger issue is naivety. A lot of these corner cases mean that it's unlikely someone can just put together a useful prototype in a weekend or two.
> if it was more profitable we would all be doing it.
More like, if there was more demand we would all be doing it. Most of us have reliable internet connections and don't go out of service often enough to really care about this kind of thing.
Right now, I can throw my phone in the ocean, go to the Apple Store, sign in and my new phone looks and acts like my old phone with all of my data available, my apps and my icons being in the same place.
My wife and I can share calendar events, spreadsheets, photo libraries etc.
That’s not to mention work.
My current thinking is that the only way we get substantial local-first software is if it's built by a passionate open-source community.
Look at single-player video games; you cannot get more ideal for local-first. Still, you need a launcher and an internet connection.
There are currently tens of thousands of games that are unplayable because they require pinging a network/patch server that was deprecated long ago.
Patch requirements aside, just as many games are no longer playable due to an incompatible or abandoned OS or codebase, or game-breaking bugs.
In both of these scenarios, my "lifetime license" is no longer usable through no action of my own, and breaks the lifetime license agreement. I shouldn't need to be into IT to understand how to keep a game I bought 5 years ago playable.
The solution to this "problem" for the user, as offered by the corporate investment firms in control, is to offer rolling subscriptions that "keep your license alive", for some reason, rather than properly charging for the service at the time of purchase.
TLDR: Why move the goal posts further in favor of tech/IT/Videogame Investment firms?
Two people meet in an HN thread, and they both dislike the status quo in a particular way (e.g. that copyright is awful, DRMed games suck, whatever). They both want to fight back against the thing that they dislike, but they do it in different ways.
One person finds alternatives to the mainstream and then advertises them and tells people: look, here's the other way you can do it so you can avoid this terrible mess! That messaging can sometimes come across as downplaying the severity of the problem.
The second person instead wants to raise awareness of how awful the mess is, and so has to emphasize that this is a real problem.
The end result is two people that I think agree, but who appear to disagree because one wants to emphasize the severity of the problem and the other wants to emphasize potential solutions that the individual can take to address it.
Concretely, I think that's what happened here. I think everybody in this thread is pissed that single-player games would have activation and online DRM. Some people like to get around that by buying on marketplaces like GOG or playing open source games, and others want to change the policy that makes this trend possible, which means insisting that it really is a problem.
Sorry for all the meta commentary. If I got it wrong, I'd be interested to understand better!
We drove everything online for logistical and financial reasons, not because the tech requires online connections for everything. It isn't changing because people don't see always-online as a big enough deterrent to change their habits.
Most people have a Dropbox, Apple Storage, Google Storage or similar.
A lot of people used to happily pay for desktop software.
It is sort of a combo of those 2 things economically.
Dropbox could sweep up here by being the provider of choice for offline apps: defining the open protocol and supporting it, adding notifications and some compute.
You then use Dropbox free for 1, 5, 10 offline apps (some may be free, some paid), and soon you'll need to upgrade storage like any iPhone user!
More or less no one used to "happily" pay. Absent pirating software, they did pay, often hundreds of dollars, for all sorts of software sight unseen (though shareware did provide try-before-you-buy), and it often came with minimal updates/upgrades unless they paid for those too.
But expectations have largely changed.
Adobe is keen to emphasize that their products are cloud based
It’s not immune to file conflicts across your devices though.
What point are you trying to make?
The really crazy thing is that everyone just forgot that a couple of years ago "Dubai chocolate" meant something a lot more gross.
It's called damage control and yes it's crazy that we blindly allow this kind of society-wide manipulation.
That would let you access your data forever, although you might still need to write your own scripts to port it to another app.
And the price they give me from clicking the ad is a limited-time discount so then I'm turned off from ever coming back later and paying the full price i.e. the sucker's price.
Surely this isn't the optimal business model for many of the products that have adopted it.
I see product X is $10/month. I subscribe. I'm not sure where the deception is there? The alternative is likely either that the cost of the product is exorbitantly high, like $500 for a lifetime license, or that the developer makes it more affordable but sales peter out and they end up having to go out of business and can't afford to maintain the product after a couple of years. Likely both. And Hacker News will complain either way.
The only sustainable model I've seen is lifetime licenses but updates for a single year.
It's also a programming complexity problem.
A local-first app has to run on a zillion different configurations of hardware. A cloud-first app only has to run on a single configuration and the thing running on the user hardware is just a view into that singular cloud representation.
We have a business model that I think is kind of novel (I am biased) -- we split our service into a "global identity layer"/control plane and "Relay Servers" which are open source and self-hostable. Our Obsidian Plugin is also open source.
So while we have a SaaS, we encourage our users to self-host on private networks (eg. tailscale) so that we are totally unable to see their documents and attachments. We don't require any network connection between the Relay Server and our service.
Similar to tailscale, the global identity layer provides value because people want SSO and straightforward permissions management (which are a pain to self-host), but running the Relay Server is dead simple.
So far we are getting some traction with businesses who want a best-in-class writing experience (Obsidian), google-docs-like collaboration, but local-first. This is of particular interest to companies in AI or AI safety (let's not send our docs to our competitors...), or for compliance/security reasons.
[0] https://relay.md
That sort of model collapses where software needs to be constantly updated and maintained lest it rot and die as the rest of the ecosystem evolves and something dynamically linked changes by a hair. So what's left is either doing that maintenance for free, i.e. FOSS, or charging for it on a monthly basis like SaaS. We've mostly done this bullshit to ourselves in the name of fast updates and security. Maybe it was inevitable.
If you think this is only a problem for distributed systems, I have bad news for you.
In a talk a few years ago [1], Martin Kleppmann (one of the authors of the paper that introduced the term "local-first") included this line:
> If it doesn't work if the app developer goes out of business and shuts down the servers, it's not local-first.
That is obviously not something most companies want! If the app works without the company, why are you even paying them? It's much more lucrative to make a company indispensable, where it's very painful to customers if the company goes away (i.e. they stop giving the company money).
[1] https://speakerdeck.com/ept/the-past-present-and-future-of-l...
Now that people are used to having someone in a data center do their backing up and distributing for them, they don't want to do that work themselves again, privacy be damned.
I guess I should bring my devices back to exactly 1 device. Or just take a subscription on one service.
[1] GitHub: https://github.com/hasanhaja/tasks-app/ [2] Deployed site: https://tasks.hasanhaja.com/
325 more comments available on Hacker News