NFS at 40 – Remembering the Sun Microsystems Network File System
Posted 3 months ago · Active 3 months ago
nfs40.online · Tech · Story · High profile
Key topics: NFS, File Systems, Networking, Legacy Tech
The 40th anniversary of NFS (Network File System) sparks nostalgia and discussion among HN users about its continued relevance and use in modern environments.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 126 comments (Day 1)
Avg / period: 30.4 comments
Comment distribution: 152 data points (based on 152 loaded comments)
Key moments
1. Story posted: Oct 5, 2025 at 11:49 AM EDT (3 months ago)
2. First comment: Oct 5, 2025 at 12:53 PM EDT (1h after posting)
3. Peak activity: 126 comments in Day 1, the hottest window of the conversation
4. Latest activity: Oct 18, 2025 at 4:18 AM EDT (3 months ago)
ID: 45482467 · Type: story · Last synced: 11/20/2025, 5:28:51 PM
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
SMB is a nightmare to set up if your host isn’t running Windows.
sshfs is actually pretty good but it’s not exactly ubiquitous. Plus it has its own quirks and performs slower. So it really doesn’t feel like an upgrade.
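For reference, a minimal sshfs setup is roughly this (host, user, and paths below are placeholders, not anything from the thread):

    # mount a remote directory over SSH via FUSE; reconnect if the link drops
    sshfs alice@fileserver:/srv/data /mnt/data -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

    # unmount when done (fusermount3 on newer FUSE installs)
    fusermount -u /mnt/data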
Everything else I know of is either proprietary, or hard to set up. Or both.
These days everything has gone more cloud-oriented. Eg Dropbox et al. And I don’t want to sync with a cloud server just to sync between two local machines.
I looked, found the link below, but it seems to just fizzle out without info.
https://en.wikipedia.org/wiki/DCE_Distributed_File_System
Anyway, we used it extensively in the UIUC engineering workstation labs (hundreds of computers) 20+ years ago, and it worked excellently. I also set up a server farm of Sun SPARCs around 20 years ago, but used NFS for that.
Pluses were security (Kerberos), better administrative controls, and a global file space.
Minuses were generally poor performance, middling small-file support, and awful large-file support, plus substantial administrative overhead. The wide-area performance was so bad that the global namespace thing wasn't really useful.
I guess it didn't cause as many actual multi-hour outages as NFS, but we used it primarily for home/working directories and left the servers alone, whereas the accepted practice at the time was to use NFS for roots and to cross-mount everything, so it easily got into a 'help, I've fallen and can't get up' situation.
(off topic, but great username)
[0]: https://openafs.org/
[1]: https://www.auristor.com/filesystem/
IMO IBM/Transarc died for two reasons. First, there was significant brand confusion after the release of Windows Active Directory and Windows DFS since no trademarks were obtained for DCE service names. Second, the file system couldn't be deployed without the rest of the DCE infrastructure.
There was an unofficial effort within IBM to create the Advanced Distributed File System (ADFS) which would have decoupled DFS from the DCE Cell Directory Service and Security Service as well as replaced DCE/RPC. However, the project never saw the light of day.
https://en.wikipedia.org/wiki/DCE_Distributed_File_System
Samba runs fine on my FreeBSD host? All my clients are Windows though.
If I wanted to have a non-windows desktop client, I'd probably use NFS for the same share.
It's one of those tools that, unless you already know what you're doing, you can expect to sink several hours into trying to get the damn thing working correctly.
It's not the kind of thing you can throw at a junior and expect them to get working in an afternoon.
Whereas NFS and sshfs "just work". Albeit I will concede that NFSv4 was annoying to get working back when that was new too. But that's, thankfully, a distant memory.
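To be fair, the happy path is fairly small; a sketch of a minimal standalone Samba share, with made-up share name, path, and user:

    # add a share like this to /etc/samba/smb.conf:
    #   [data]
    #       path = /srv/data
    #       read only = no
    #       valid users = alice
    smbpasswd -a alice          # give the user a Samba password
    systemctl restart smbd      # service name varies by distro (smbd, smb, samba)

It's all the optional machinery around that (domains, ID mapping, printing, vfs modules) where the hours tend to go.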
NFS support was lacking on Windows when I last tried. I used NFS (v3) a lot in the past, but unless you're in a highly static, high-trust environment, it was worse to use than SMB (for me). Especially the user-ID mapping story is something I'm not sure is solved properly. That was a PITA even at homelab scale: having to set up NIS was really something I didn't like, a road-warrior setup didn't work well for me, and I quickly abandoned it.
SMBv1 has a reputation for being an extremely chatty protocol. Novell ipx/spx easily dwarfed it in the early days. Microsoft now disables it by default, but some utilities (curl) do not support more advanced versions.
SMBv2 increases efficiency by bundling multiple messages into a single transmission. It is clear text only.
SMBv3 supports optional encryption.
Apple dropped the Samba project from macOS due to GPLv3, and developed their own SMB implementation that is not used elsewhere AFAIK. If you don't care for Apple's implementation, then perhaps installing Samba is a better option.
NFSv3 relies solely on uid/gid mapping by default, while NFSv4 requires idmapd to run to avoid squashing. I sometimes use both at the same time.
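As a rough sketch of that difference, assuming a plain sec=sys setup with made-up server name, network, and domain:

    # /etc/exports on the server (numeric uid/gid pass straight through with sec=sys):
    #   /srv/share 192.168.1.0/24(rw,sync,no_subtree_check)
    exportfs -ra                       # (re)publish the export table

    # NFSv4 name mapping: set the same Domain in /etc/idmapd.conf on server and clients
    #   [General]
    #   Domain = example.lan

    # client side
    mount -t nfs -o vers=4.2 fileserver:/srv/share /mnt/share

If the idmap domains don't match, NFSv4 clients typically see files owned by nobody/nogroup, which is the "squashing" being referred to.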
I'd use the Finder to browse files, and for that it is terribly slow. Also, without extra config in Samba it litters the whole share with .DS_Store crap. I remember it was very slow that way, but I have since set up extra config in Samba (the 'fruit' VFS extension, I think). Its extreme slowness may also be related to those workarounds not being fully correctly configured. Copying is also comically slow: getting 4 files totaling 50 KB can take 20 seconds sometimes. The same from a Windows laptop takes sub-second time.
Overall I'm underwhelmed by MacOS/iOS, this being one minor annoyance in the list. Windows and Linux both perform well and work well out of the box with my proven simple setup. (No AD)
Samba on macOS might speed things up, but I bought that machine to get stuff done more effectively than from Linux, and so far it didn't prove its value. Right now I'll not bother with that, as I feel that would have even worse OS-level integration. Thanks for the advice nevertheless, much appreciated.
Samba can be. Especially when compared with NFS.
> NFS support was lacking on windows when I last tried.
If you need to connect from Windows then your options are very limited, unfortunately.
I've always thought that NFS makes you choose between two bad alternatives with "stop the world and wait" or "fail in a way that apps are not prepared for."
I do agree that object storage is a nice option. I wonder if a FUSE-like object storage wrapper would work well here. I've seen mixed results for S3 but for local instances, it might be a different story.
This is why I say there’s mixed opinions about mounting S3 via FUSE.
This isn’t an issue with a self hosted S3 compatible storage server. But you then have potential issues using an AWS tool for non-AWS infra. There be dragons there.
And if you were to use a third-party S3 mounting tool, then you run into all the other read and write performance issues that they had (and why Amazon ended up writing their own tool for S3).
So it's really not a trivial exercise to self-host a mountable block storage server. And for something as important as data consistency, you might well be concerned enough about weird edge cases that mature technologies like SMB and NFS just feel safer.
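For context, one of those third-party tools, s3fs-fuse, mounts a bucket roughly like this (the endpoint and bucket name are placeholders); the consistency and performance caveats above still apply:

    # credentials live in ~/.passwd-s3fs as ACCESS_KEY:SECRET_KEY (chmod 600)
    s3fs mybucket /mnt/s3 \
        -o url=https://s3.example.internal \
        -o use_path_request_style \
        -o passwd_file=${HOME}/.passwd-s3fs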
That's the opposite of my experience. Fire it up and it just works, in less time than it would take you to configure NFS sensibly.
Samba can be set up easily enough if you know what you’re doing. But getting the AD controller part working would often throw up annoying edge case problems. Problems that I never had to deal with in NFS.
Though I will admit that NIS/YP could be a pain if you needed it to sync with NT.
Might just be bad timing then, most of my experience with it was in that v3/v4 transition period. It was bad enough to make me swear off the whole thing.
Everything that was supposed to replace it is so much worse, except for supposedly not being very unsafe.
It's very easy on illumos-based systems due to the integrated SMB/CIFS service.
- POSIX compliant, including dotting the i's. As opposed to, say, NFS which isn't cache coherent.
- performance and scalability. 1 TB/s+ sequential IO to a single file is what you'd expect on a large HPC system these days.
- Metadata performance has gotten a lot better over the past decade or so, beating most(all?) other parallel filesystems.
Downsides:
- Lots of pieces in a Lustre cluster (typically nodes are paired in sort-of active/active HA configs). And lots of cables, switches etc. So a fairly decent chance something breaks every now and then.
- When something breaks, Lustre is weird and different compared to many other filesystems. Tools are rudimentary and different.
To get a feel for what 'life with Lustre' could be, see e.g. various 'site reports' from workshops. E.g. for a couple somewhat recent ones: https://www.eofs.eu/wp-content/uploads/2024/09/cscs_site_rep... and https://www.eofs.eu/wp-content/uploads/2024/09/LAD-24-Luster...
Sftp is useful, but is pretty slow, and only good for small amounts of data and small numbers of files. (Or maybe I don't know how to cook it properly.)
I'm tinkering on a project where I'd like to project a filesystem from code, and I added WebDAV support. The 50 MB limit will be fine, since it's a corner case for files to be bigger, but it did put a dent in my enthusiasm, since I had envisioned using it in more places.
Google Drive. Or Dropbox, OneDrive, yada yada. I mean, sure, that's not the question you were asking. But for casual per-user storage and sharing of "file" data in the sense we've understood it since the 1980's, cloud services have killed local storage cold dead. It's been buried for years, except in weird enclaves like HN.
The other sense of "casual filesystem mounting" even within our enclave is covered well already by fuse/sshfs at the top level, or 9P for more deeply integrated things like mounting stuff into a VM.
No one wants to serve files on a network anymore.
WebDAV shares of the NFS shares for things that need that view
sshfs for when I need a quick and dirty solution where performance and reliability don't matter
9p for file system sharing via VMs (see the mount sketch after this list)
Nothing for multi-user or multi-client. Avoid as long as that is possible since there is no good solution in sight.
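For the 9p-via-virtio case above, a minimal sketch (the mount tag, paths, and the elided "..." QEMU options are placeholders, not anything from the thread):

    # host: attach a directory to the VM with a mount tag (QEMU virtio-9p)
    qemu-system-x86_64 ... -virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped-xattr

    # guest: mount it by tag
    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host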
https://github.com/xetdata/nfsserve
I don't need to "remember NFS", NFS is a big part of my day!
Also consider what your server and client machines will be running, some NFS clients suck. Linux on both ends works really well.
What I learned though was that NFS was great until it wasn't. If the server hung, all work stopped.
When I got to reddit, solving code distribution was one of the first tasks I had to take care of. Steve wanted to use NFS to distribute the app code. He wanted to have all the app servers mount an NFS mount, and then just update the code there and have them all automatically pick up the changes.
This sounded great in theory, but I told him about all the gotchas. He didn't believe me, so I pulled up a bunch of papers and blog posts, and actually set up a small cluster to show him what happens when the server goes offline, and how none of the app servers could keep running as soon as they had to get anything off disk.
To his great credit, he trusted me after that when I said something was a bad idea based on my experience. It was an important lesson for me that even with experience, trust must be earned when you work with a new team.
I set up a system where app servers would pull fresh code on boot and we could also remotely trigger a pull or just push to them, and that system was reddit's deployment tool for about a decade (and it was written in Perl!)
The caveat is that a lot of software is written to assume that calls like fread(), fopen(), etc. will either work or fail quickly. However, when the file is over a network, things can obviously go wrong, so the common default behaviour is to wait for the server to come back online. The same issue applies to any other network filesystem; different OSes (and even the same OS with different configs) handle the situation differently.
'After a while' usually requiring the users to wait with an unresponsive desktop environment, because they opened a file manager whilst NFS was huffing. So they'd manage to switch to a virtual terminal and then out of habit type 'ls', locking that up too.
After a few years of messing around with soft mounts and block sizes and all sorts of NFS config nonsense, I switched to SMB and never looked back
> Hi, could you give some pointers about this? Thanks!
* https://man.archlinux.org/man/nfs.5.en#soft
* https://kb.netapp.com/on-prem/ontap/da/NAS/NAS-KBs/What_are_...
In theory that should work, but I find that kind of non-default config option tends to be undertested and unreliable. Easier to just switch to Samba where not hanging is default/expected.
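For concreteness, a sketch of how the soft-mount options from the man page above combine (server and paths are placeholders); this trades indefinite hangs for I/O errors, with the data-integrity caveats the NetApp KB describes:

    # give up after 3 retries of ~10s each (timeo is in tenths of a second) and return EIO
    mount -t nfs -o soft,timeo=100,retrans=3,vers=4.2 fileserver:/export/data /mnt/data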
See, unlike some other more advanced, contemporary operating systems like VMS, Unix (and early versions of POSIX) did not support async I/O; only nonblocking I/O. Furthermore, it assumed that disk-based I/O was "fast" (I/O operations could always be completed, or fail, in a reasonably brief period of time, because if the disks weren't connected and working you had much bigger problems than the failure of one process) and network-based or piped I/O was "slow" (operations could take arbitrarily long or even fail altogether after a long wait); so nonblocking I/O was not supported for file system access in the general case. Well, when you mount your file system over a network, you get the characteristics of "slow" I/O with the lack of nonblocking support of "fast" I/O.
A sibling comment mentions that FreeBSD has some clever workarounds for this. And of course it's largely not a concern for modern software because Linux has io_uring and even the POSIX standard library has async I/O primitives (which few seem to use) these days.
And this is one of those things that VMS (and Windows NT) got right, right from the jump, with I/O completion ports.
But issues like this, and the unfortunate proliferation of the C programming language, underscore the price we've paid as a result of the Unix developers' decision to build an OS that was easy and fun to hack, rather than one that encouraged correctness of the solutions built on top of it.
Synchronous IO is nice and simple.
Synchronous I/O may be simple, but it falls down hard at the "complex things should be possible" bit. And people have been doing async I/O for decades before they got handholding constructs like 'async' and 'await'. Programming the Amiga, for instance, was done entirely around async I/O to and from the custom chips. The CPU needn't do much at all to blow away the PC at many tasks; just initiate DMA transfers to Paula, Denise, and Agnus.
Quote[0]:
> In Ingo's view, there are only two solutions to any operating system problem which are of interest: (1) the one which is easiest to program with, and (2) the one that performs the best. In the I/O space, he claims, the easiest approach is synchronous I/O calls and user-space processes. The fastest approach will be "a pure, minimal state machine" optimized for the specific task; his Tux web server is given as an example.
Granted, most software is not developed for the Linux kernel, but neither is asynchronous programming black magic. I think the software space has rather been negatively impacted by being slow to adopt asynchronous programming, among other old practices.
[0] https://lwn.net/Articles/219954/
Sheds a tear for AFS (Andrew File System).
We had a nice, distributed file system that even had solid security and didn't fail in these silly ways--everybody ignored it.
The AFS consistency model is fairly strong. Each client (aka cache manager) is only permitted to access the data/metadata of a vnode if it has been issued a callback promise from the AFS fileserver. File lock transitions, metadata modifications, and data modifications, as well as volume transactions, cause the fileserver to break the promise. At which point the client is required to fetch updated status information before it can decide it is safe to reuse the locally cached data.
Unlike optimistic locking models, the AFS model permits cached data to be validated after an extended period of time by requesting up to date metadata and a new callback promise.
An AFS fileserver will not permit a client to perform a state changing operation as long as there exist broken callback promises which have yet to be successfully delivered to the client.
"Most Production Applications run from AFS"
"Most UNIX hosts are dataless AFS clients"
https://web.archive.org/web/20170709042700/http://www-conf.s...
I looked up at one point whatever happened to AFS and it turns out that it has some Amdahl’s Law glass ceiling that ultimately limits the aggregate bandwidth to something around 1 GBps, which was fine when it was young but not fine when 100Mb Ethernet was ubiquitous and gigabit was obtainable with deep enough pockets. If adding more hardware can’t make the filesystem faster you’re dead.
I don’t know if or how openAFS has avoided these issues.
As I understand it, it mitigated many of those issues, but is still very "90s" in operation.
I've been flirting with the idea of writing a replacement for years, about time I had a go at it!
AuriStor's RX and UBIK protocol and implementation improvements would be worthless if the application services couldn't scale. To accomplish this required converting each subsystem so it could operate with minimal lock contention.
This 2023 presentation by Simon Wilkinson describes the improvements that were made to AuriStor's RX implementation up to that point.
https://www.auristor.com/downloads/auristor-rx-hare-and-the-...
The RX tortoise is catching up with the TCP hare.
> In practice, the global mutexes restricted the fileserver process to 1.7 cores regardless of how many cores were present in the system.
So in theory the bandwidth could scale with single-CPU and/or point-to-point bandwidth, but it cannot scale horizontally at all, except in the new implementations.
One site which recently lifted and shifted their AFS cell to a cloud made the following observations:
We observed the following performance while copying a 1g file from local disk into AFS.
All of the above tests were performed from clients located on campus to fileservers located in the cloud. There are many RX implementation differences between the three versions. It is important to note that the window size grows from 32 -> 128 -> 512.
https://www.usenix.org/legacy/publications/library/proceedin...
https://workshop.openafs.org/afsbpw08/talks/wed_1/OpenAFS_an...
Why did everybody ignore it, do you know?
1) AFS, IIRC, required more than one machine in its original configuration. That meant hardware and sysadmins which were expensive--until, suddenly they weren't.
2) Disk, memory and bandwidth were scarce--and then they weren't. AFS made a bunch of solid architectural decisions and then wasted a bunch of time backing some of them down in deference to the hardware of the day and then all that work was wasted when Moore's Law overran everything, anyhow.
3) Everybody was super happy to be running everything locally to escape the tyranny of the "Mainframe Operator" (meaning no NFS or AFS or the like)--until they weren't. Once enough non-technical people appeared who didn't want to do system administration, like, ever, that flipped.
We lost the VMS filesystem in this timeframe, too. Which was also a distributed, remote filesystem.
But those x86 processors sure are cheap ... sigh.
I often wonder how the world would be different if AFS 3.0 could have been freely distributed world wide in 1989 precluding the need for HTTP to be developed at CERN.
https://www.openafs.org/
It never ceases to amaze me how well it does what it does and how well it handles being misused.
NFS volumes (for home dirs, SCM repos, tools, and data) were a godsend for workstations with not enough disk, and when not everyone had a dedicated workstation (e.g., university), and for diskless workstations (which we used to call something rude, and now call "thin clients"), and for (an ISV) facilitating work on porting systems.
But even when you needed a volume only very infrequently, if there was a server or network problem, then even doing an `ls -l` in the directory where the volume's mount point was would hang the kernel.
Now that we often have 1TB+ of storage locally on a laptop workstation (compare to the 100MB default of an early SPARCstation), I don't currently need NFS for anything. But NFS is still a nice tool to have in your toolbox, for some surprise use case.
> To his great credit, he trusted me after that when I said something was a bad idea based on my experience. It was an important lesson for me that even with experience, trust must be earned when you work with a new team.
True, though, on a risky moving-fast architectural decision, even with two very experienced people, it might be reasonable to get a bit more evidence.
And in that particular case, it might be that one or both of you were fairly early in your career, and couldn't just tell that they could bet on the other person's assessment.
Though there are limits to needing to re-earn trust from scratch with a new team. For example, the standard FAANG-bro interview, where everyone has to start from scratch for credibility, as if they are fresh out of school with zero track record and zero better ways to assess them, is ridiculous. The only thing more ridiculous is when companies that pay vastly less try to mimic that interview style. Every time I see that, I think that this company apparently doesn't have experienced engineers on staff who can get a better idea just by talking with someone, rather than running a fratbro hazing ritual.
While diskless (or very limited disk) workstations were one use case for NFS, that was far from the primary one.
The main use case was to have a massive shared filesystem across the team, or division, or even whole company (as we did at Sun). You wouldn't want to be duplicating these files locally no matter how much local disk, the point was to have the files be shared amongst everyone for collaboration.
NFS was truly awesome, it is sad that everything these days is subpar. We use weak substitutes like having files on shared google drives, but that is so much inferior to having the files of the entire company mounted on the local filesystem through NFS.
(Using the past tense, since it's not used so much anymore, but my home fileserver exports directories over NFS which I mount on all other machines and laptops at home, so very much using it today, personally.)
For example, one of the big uses of NFS we had was for engineering documents, all of which could be accessed from FrameMaker or Interleaf running on your workstation. Nowadays, all the engineering documentation and more would be accessed through a Web browser from a non-NFS server, and no longer on a shared filesystem.
Another use of NFS we had was for collaborating on shared code by some projects, with SCM storing to NFS servers (other projects used DSEE and ClearCase). But nowadays almost everyone in industry uses distributed Git, syncing to non-NFS servers, with cached copies on their local storage.
I suppose a third thing that changed was CSCW distributed change syncing becoming popular and moving into other tools, such as live "shared whiteboard" document editing that people can access in their Web browsers. I have mixed feelings about some of the implementations and how they're deployed, but it's pretty wild to have 4 remote people during Covid editing a document in real time at once, and NFS isn't helping with the hard part of that.
Right now, the use case for NFS that first comes to mind is individual humans working with huge files (e.g., for AI training, or other big data), where you want the convenience of being able to access them with any tool from your workstation, and maybe also have big compute servers working with them, without copying things around. You could sorta do these things with big complicated MLops infrastructure, but sometimes that slows you down more than it speeds you up.
github, specifically, I'd say.
github normalized the (weird) idea that the central repo is over on someone else's website.
You don't have to use git that way though. My internal git repositories are on NFS, available to all client machines.
The advantage you find to NFS for this is that you share workspaces between the client machines? Or reduce the local storage requirements on the client machines?
Same for mercurial. Most of my internal use repositories are mercurial since it's so much more pleasant to use than git and for my hobby time I want pleasant tools that don't hate me. But I digress..
It's the model I've used since the 90s in the days of teamware at Sun.
NFS 4.1 introduced pNFS scalability and 4.2 has even more optimizations.
These days, there are plenty of NFS vendors with similar reliability. (Even as far back as NFSv3, the protocol makes it possible for the server to scale out).
Also, we were a startup, and a Netapp filer was way outside the realm of possibility.
Also, that would be a great solution if you have one datacenter, but as soon as you have more than one, you still have to solve the problem of syncing between the filers.
Also, you generally don't want all of your app servers to update to new code instantly all the same time, in case there is a bug. You want to slow roll the deploy.
Also, you couldn't get a filer in AWS, once we moved there.
And before we moved to AWS the rack was too full for a filer, I would have had to get a whole extra rack.
The only counterexample involved a buggy RHEL-backported NFS client that liked to deadlock, and that couldn’t be upgraded for… reasons.
Client bugs that force a single machine/process restart can happen with any network protocol.
Failover, latency, and so on are something you need to think about independently of what transfer protocol you use. NFS may present its own challenges with all the different extensions and flags, but that's true of any mature technology.
That said, live code updates probably aren't a very good idea anyway, for exactly the reasons you mention. Those are the reasons you were right at the time, not any inherent deficiencies in the NFS protocol.
Or distributed NFS filers like Isilon or Panasas: any particular node can be rebooted and its IPs are re-distributed among the still-live nodes. At my last job we used one for HPC and it stored >11PB with minimal hassle. OS upgrades can be done in a rolling fashion so client service is not interrupted.
Newer NFS vendors like Vast Data have all-NVME backends (Isilon can have a mix if you need both fast and archival storage: tiering can happen on (e.g.) file age).
At some point a new system came around that was able to make really good use of the hardware we had, and it didn’t use NFS at all. It was more “docker” like, where jobs ran in containers and had to pre-download all the tools they needed before running. It was surprisingly robust, and worked really well.
The designers wanted to support all of our use cases in the new system, and came to us about how to mount our NFS clusters within their containers. My answer was basically: let's not. Our way was the old way, and their way was the new way, and we shouldn't "infect" their system with our legacy NFS baggage. If engineers wanted to use their system they should reformulate their jobs to declare their dependencies up front and use a local cache, and accept all the other reasonable constraints their system had. They were surprised by my answer, but I think it worked out well in the end: it was the impetus for things to finally move off the legacy infrastructure.
[1] Of course, I didn’t test every single app — there’s a bucketload of them on Google Play and elsewhere…
if you only have two or three devices that need a fast connection you can just do point to point, of course
If I needed more than that, I’d probably do a direct link.
I also use it for shared storage for my cluster and NAS, and I don't think NFS itself has ever been the bottleneck.
Latency-wise, the overhead is negligible over a LAN, though it can be noticeable when doing big builds or running VMs.
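If you do go point-to-point, a rough sketch of the Linux client options people usually reach for (the address, export path, and values are purely illustrative):

    # bigger transfer sizes plus multiple TCP connections (nconnect needs a reasonably recent kernel)
    mount -t nfs -o vers=4.2,rsize=1048576,wsize=1048576,nconnect=4 10.0.0.2:/export /mnt/fast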
But I really loved the lesser known RFS. Yes, it wasn't as robust, or as elegant... but there's nothing quite like mounting someone else's sound card and blaring music out of it, in order to drive a prank. Sigh...
Usually caused by a coaxial cable not being properly terminated.
Naturally it meant nothing on the network was working; however, NFS was kind of the canary in the mine for it.
What I don't like is the security model. It's either deploying kerberos infrastructure or "trust me bro I'm UID 1000" so I default to SMB on the file server.
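For reference, a rough sketch of what the Kerberos side of that trade-off looks like once a KDC and keytabs exist (which is the "deploying kerberos infrastructure" part; paths and names are placeholders):

    # server /etc/exports: require Kerberos authentication, integrity, and privacy
    #   /srv/secure *(rw,sync,sec=krb5p)
    exportfs -ra

    # client: mount with the matching security flavor (needs a host keytab and a reachable KDC)
    mount -t nfs -o vers=4.2,sec=krb5p fileserver:/srv/secure /mnt/secure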
https://news.ycombinator.com/item?id=31820504
I just find this highly ironic considering this is NFS we are talking about. Also, do they fear their ISPs changing the 40-year-old NFS specs on the fly or what? Why even mention this?
https://github.com/Barre/ZeroFS
I’ve recently started using it again after consistent issues with SMB on Apple devices, and the deprecation of AFP. My FreeBSD server, running on a Raspberry Pi, makes terabytes of content available to the web via an NFS connection to a Synology NAS.
For my use case, with a small number of users, the fact that NFS is host based rather than user based, means I can set it up one on each device, and all users of that host can access the shares. And I’ve generally found it to be more consistently performant on Apple hardware than their in-house SMB implementation.
And of course I still use it every day with Amazon EFS; Happy Birthday, indeed!
- It caused me to switch from Linux to FreeBSD in 1994 when Linux didn't have NFS caching but FreeBSD did & Linus told me "nobody cares about NFS" at the Boston USENIX. I was a sysadmin for a small stats department, and they made heavy use of NFS-mounted LaTeX fonts. Xdvi would render a page in 1s on FreeBSD and over a minute on Linux due to the difference in caching. Xdvi would seek byte-by-byte in the file. You could see each character as it rendered on Linux, and the page would just open instantly on FreeBSD.
- When working as research staff for the OS research group in the CS dept, I worked on a modern cluster filesystem ("slice") which pretended to be NFS for client compat. (https://www.usenix.org/legacy/event/osdi00/full_papers/ander...)
"NFS is like heroin: it seems like a great idea at first and then it ruins your life" (as many commenters are pointing out)
Still an amazing technology for its time though.
Weirdly that nerd snipe landed me two different jobs! People wanted to build network-based services and that was one of the quickest ways to do it.