I Took All My Projects Off the Cloud, Saving Thousands of Dollars
Posted about 2 months ago · Active about 2 months ago
rameerez.com · Tech · Story · High profile
Debate: heated, mixed (85/100)
Key topics
Cloud Computing
Cost Optimization
Infrastructure Management
The author shares their experience of moving projects off the cloud to save thousands of dollars, sparking a debate on the pros and cons of cloud computing and the trade-offs between cost, complexity, and convenience.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h
Peak period: 87 comments (0-6h)
Avg / period: 17.8
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Nov 4, 2025 at 4:22 PM EST (about 2 months ago)
2. First comment: Nov 4, 2025 at 5:28 PM EST (1h after posting)
3. Peak activity: 87 comments in the 0-6h window, the hottest period of the conversation
4. Latest activity: Nov 8, 2025 at 8:33 PM EST (about 2 months ago)
ID: 45816041 · Type: story · Last synced: 11/22/2025, 11:00:32 PM
If you need a lot of, well, anything, be it compute, memory, storage, bandwidth, etc., then of course cloud stuff is going to be more expensive... but if you don't need that, then IMO $3/mo on-demand pricing really can't be beat when I don't have to maintain any equipment myself. Oracle also offers perpetually free VM instances, if you don't mind the glow.
I can certainly see a use for that small amount of compute & RAM, but it's not clear that your level of needs is common. I've been paying for a $16/mo VPS (not on AWS) for about 15 years. It started out at $9/mo, but I've upgraded it since then as my needs have grown. It's not super beefy with 2 vCPUs, 5GiB of RAM, and 60GiB of disk space (with free data ingress/egress), but it does the job, even if I could probably find it cheaper elsewhere.
But not at Amazon. Closest match is probably a t3.medium, with 2 vCPUs and 4GiB RAM. Add a 60GiB gp2 EBS volume, and it costs around $35/mo, and that's not including data transfer.
The point that you're missing is we're not looking for the cheapest thing ever, we're looking for the cheapest thing that meets requirements. For many (most?) applications, you're going to overpay (sometimes by orders of magnitude) for AWS.
You say "if you need a lot", but "lot" is doing a bit of work there. My needs are super modest, certainly not "a lot", and AWS is by far not the cheapest option.
Don't give the big cloud companies an inch if you don't absolutely have to. The internet needs and deserves the participation of independent people putting up their own services and systems.
Amazon really doesn't care if your $10,000 bed folds up on you like a sandwich and cooks you when AWS us-east-1 goes down, or stops your smart toilet from flushing, or sets bucket defaults that allow trivial public access to information you assume to be secure, because nobody in their right mind would just leave things wide open.
Each and every instance of someone doing something independently takes money and control away from big corporations that don't deserve it, and it makes your life better. You could run pihole and a slew of other useful utilities on your self-hosted server that benefit anyone connected to your network.
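For anyone who wants to try the Pi-hole suggestion, a minimal Docker sketch (the port mapping, password, and timezone are placeholders, and flags can differ between image versions):

    docker run -d --name pihole \
      -p 53:53/tcp -p 53:53/udp -p 8080:80/tcp \
      -e TZ=UTC -e WEBPASSWORD=changeme \
      -v "$(pwd)/etc-pihole:/etc/pihole" \
      --restart unless-stopped \
      pihole/pihole
    # Then point your router's DNS at this machine's IP.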
AI can trivially walk you through building your own self-hosted setups (or even set things up for you if you entrust it with an automation MCP.)
Oracle and AWS and Alphabet and the rest shouldn't profit from eating the internet - the whole world becomes a better place every time you deny them your participation in the endless enshittification of everything.
> Idiotic piece
That's unnecessary; please don't do that here. Weird that you created an account just to post an unsubstantive comment.
https://github.com/ipfs/kubo/issues/10327
https://discuss.ipfs.tech/t/moved-ipfs-node-result-netscan-d...
>This happens with Hetzner all the time because they have no VLANs and all customers are on a single LAN and IPFS tries to discover other nodes in the same LAN by default.
> running and managing numerous DBs, k8s clusters, ci/cd pipelines and stateless container systems is simply impossible with a team of 1-2 people.
That's a medium to large homelab worth of stuff, which means it can be run by a single nerd in their spare time.
The gulf between these two insofar as what approach, technologies, and due-diligences are necessary is vast.
But I think for many (most?) businesses, one nine is just fine. That's perfectly doable by one person, even if you want, say, >=96% uptime, which allows for 350 hours of downtime per year. Even two nines allows for ~88 hours of downtime per year, and one person could manage that without much trouble.
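The downtime budget here is simple arithmetic; a quick shell sketch of the figures cited above:

    # Allowed downtime per year for a given uptime target
    for u in 96 99 99.9; do
      awk -v u="$u" 'BEGIN {
        printf "%5s%% uptime -> %6.1f hours of downtime/year\n", u, (100 - u) / 100 * 8760
      }'
    done
    # 96% -> 350.4 h, 99% -> 87.6 h, 99.9% -> 8.8 h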
Most businesses aren't global. Downtime outside regular business hours for your timezone (and perhaps one or two zones to the west and east of you) is usually not much of a problem, especially if you're running a small B2B service.
For a small business that runs on 1-3 servers (probably very common!), keeping a hot spare for each server (or perhaps a single server that runs all services in a lower-supported-traffic mode) can be a simple way to keep your uptime high without having to spend too much time or money. And people don't have to completely opt out of the cloud; there are affordable options for e.g. managed RDBMS hosting that can make maintenance and incident response significantly easier and might be a good choice, depending on your needs.
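A hot-spare setup like that can be as unglamorous as a cron'd rsync plus a health check. A sketch, assuming a single app server fronted by a floating IP or a low-TTL DNS record (the hostnames and paths are hypothetical):

    # Every few minutes: keep the spare's state nearly current
    rsync -az --delete /srv/app/data/ spare.internal:/srv/app/data/

    # Poor man's failover: if the primary stops answering, wake the spare
    if ! curl -fsS --max-time 5 http://primary.internal/health > /dev/null; then
      ssh spare.internal 'systemctl start app.service'
      # ...then move the floating IP / DNS record over to the spare
    fi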
(Source: I'm building a small one-person business that is going to work this way, and I've been doing my research and gaming it out.)
That is quite different to a business that turns off its boxes for an hour at 0100 on a Sunday morning to do updates and release new software. Downtime isn't all equivalent: what matters is when it happens and whether that hurts your use case. Your own system might be down for more hours a year than AWS, but it's not down on a weekday evening when you make most of your sales, because you refuse to touch anything during that period, do all the work outside it, and schedule your updates accordingly.
Either it's a production system in the context of "it's a business serving customers", in which case there are many homelabs out there which have received paying traffic, or it's a production system in terms of functionality, downtime, technical features, etc. Again, there are many homelabs out there that can tick all the same technical and performance boxes as a "production system".
In the context of the original comment I was responding to ("...running and managing numerous DBs, k8s clusters, ci/cd pipelines and stateless container systems is simply impossible with a team of 1-2 people.") you're more evidence for my comment that it's perfectly possible for 1 - 2 people to run a small production system, just as it's possible for 1 nerd to run a medium - large homelab with the same technical features.
My experience is that the largest regulated production systems I've run have clearly been larger and more complex than my homelab, but my homelab has been significantly more resilient, featureful and robust than many of the smaller production systems I've been responsible for outside of regulated domains.
Then don't. If your team and budget are small enough not to hire a sysadmin, then your workload is (almost certainly) small enough to fit on one server, one Postgres database, Jenkins or a bash script, and certainly no k8s.
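To make "a bash script" concrete, a minimal sketch of a one-server deploy; the checkout path, unit name, and health-check port are hypothetical:

    #!/usr/bin/env bash
    set -euo pipefail
    cd /srv/myapp
    git pull --ff-only          # deploy = fast-forward to the latest commit
    ./build.sh                  # whatever your build step is
    sudo systemctl restart myapp.service
    curl -fsS --retry 5 --retry-delay 2 http://localhost:8080/health   # smoke test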
> The whole debate of “is this still the cloud or not” is nonsense to me. You’re just getting lost in naming conventions. VPS, bare metal, on-prem, colo, who cares what you call it. You need to put your servers somewhere. Sure, have a computer running in your mom’s basement if that makes you feel like you’re exiting the cloud more, I’ll have mine in a datacenter and both will be happy.
The "is this cloud or not" debate in the piece makes perfect sense. Who cares whether Hetzner is defined as "the cloud" or not? The point is, he left AWS without going to Azure or some other obvious cloud vendor. He took a step towards more hands on management. And he saved a ton of money.
Then the article should be titled as
"Send this article to your friend who still thinks that AWS is a good idea"
or
"Save costs by taking a step towards more hands on management"
or
"How I saved money moving from AWS to Hetzner"
If you can't drive to the location where your stuff is running, then enter the building blindfolded and still put your hands on the correct machine, then it's cloud.
However, one situation where I think the cloud might be useful is for archive storage. I did a comparison between AWS Glacier Deep Archive and local many-hard-drive boxes, for storing PB-scale backups, and AWS just squeaked in as slightly cheaper, but only because you only pay for the amount you use, whereas if you buy a box then you have to pay for the unused space. And it's off-site, which is a resilience advantage. And the defrosting/downloading charge was acceptable at effectively 2.5 months' worth of storage. However, at smaller scales you would probably win with a small NAS, and at larger scales you'd be able to set up a tape library and fairly comprehensively beat AWS for price.
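A back-of-the-envelope version of that comparison, as a shell sketch. All prices are assumptions for illustration, not quotes, and the drive figure deliberately ignores chassis, power, and the unused-capacity problem mentioned above, which is exactly what tips the balance in practice:

    # Assumed prices: Deep Archive ~$0.00099/GB-month; 14 TB drive ~$250,
    # mirrored, amortized over 5 years. Check current rates before deciding.
    awk 'BEGIN {
      tb      = 1000                        # 1 PB expressed in TB
      glacier = tb * 1000 * 0.00099         # $/month for storage alone
      drives  = (tb / 14) * 250 * 2 / 60    # mirrored 14 TB drives over 60 months
      printf "Glacier Deep Archive: ~$%.0f/mo\n", glacier
      printf "Mirrored raw drives:  ~$%.0f/mo\n", drives
    }'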
That being said, the cloud does have a lot of advantages:
- You're getting a lot of services readily available. Need offsite backups? A few clicks. Managed database? A few clicks. Multiple AZs? Available in seconds.
- You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware) and everything is available right now [0]
- Peak-heavy loads can be a lot cheaper. Mostly irrelevant for your average compute load, but things are quite different if you need to train an LLM
- Many services are already certified according to all kinds of standards, which can be very useful depending on your customers
Also, engineering time, and time in general, can be expensive. If you are a solo entrepreneur or a slow-growth company, you have a lot of engineering time basically for free. But in a quick-growth or prototyping phase, not to speak of venture funding, things can be quite different. Buying engineering time at >150€/hour can quickly offset a lot of savings [1].
Does this apply to most companies? No. Obviously not. But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.
[0] Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.
[1] Just to be fair, debugging cloud errors can be time consuming, too, and experienced AWS engineers will not be cheaper. But a self-hosted equivalent of an RDS instance with solid backups will usually not amortize quickly if you need to pay someone to set it up.
Snarky ignorant comments like yours ruin Hacker News and the internet as a whole. Please reconsider your mindset for the good of us all.
https://lowendbox.com/blog/two-weeks-after-killing-the-linod...
https://mjtsai.com/blog/2023/03/03/linode-price-increases/
https://www.linode.com/community/questions/23898/new-price-i...
Getting through AWS documentation can be fairly time consuming.
There's a question of whether you want to spend time learning AWS or spend time learning your DB's hand-rolled backup options (on top of the question of whether learning AWS's thing even absolves you of understanding your DB's internals anyways!)
I do think there's value in "just" doing a thing instead of relying on the wrapper. Whether that's easier or not is super context and experience dependent, though.
apt install automysqlbackup autopostgresqlbackup
Though if you have proper filesystem snapshots then they should always see your database as consistent, right? So you can even skip database tools and just learn to make and download snapshots.
And again I'll emphasize proper snapshot, cutting off writes at an exact point in time. A normal file copy cannot safely back up an active database.
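A sketch of that "proper snapshot" approach, assuming the Postgres data directory lives on a single LVM logical volume (/dev/vg0/pgdata is a hypothetical name). A database restored from such a snapshot recovers exactly as if from a power loss, replaying its WAL:

    lvcreate --size 10G --snapshot --name pgsnap /dev/vg0/pgdata
    mkdir -p /mnt/pgsnap
    mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
    tar czf "/backup/pg-$(date +%F).tar.gz" -C /mnt/pgsnap .
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap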
Only if your database files are split across multiple file systems, which is atypical.
Backups that are stored with the same provider are good, providing the provider is reliable as a whole.
(Currently going through the disaster recovery exercise of, "What if AWS decided they didn't like us and nuked our account from orbit.")
Well 2 commands...
Then copy it down. The biggest effort would then be running the Apache Parquet to CSV tool on it. But creating an S3 bucket, an IAM role, and attaching policies isn't 30 commands.
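Indeed, with the AWS CLI it's roughly this shape (bucket, role, and policy file names are placeholders); three commands, plus writing the two JSON policy documents:

    aws s3 mb s3://my-db-backups
    aws iam create-role --role-name backup-writer \
      --assume-role-policy-document file://trust-policy.json
    aws iam put-role-policy --role-name backup-writer \
      --policy-name s3-write --policy-document file://s3-write-policy.json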
There are also turnkey solutions that let you spin up a DB and set up replication and backups inside or outside of the big cloud vendors. That is the point of database Kubernetes operators, for instance.
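For example, the CloudNativePG operator turns a replicated Postgres cluster into a short manifest. A minimal sketch (backup and replication tuning omitted; see the operator's docs):

    kubectl apply -f - <<'EOF'
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main
    spec:
      instances: 3        # one primary, two streaming replicas
      storage:
        size: 20Gi
    EOF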
Yes, but not with
> TypeScript and CDK
Unless your business includes managing infrastructure with your product, for whatever reason (like you provision EC2 instances for your customers and that's all you do), there is no reason to shoot yourself in the foot with a fully fledged programming language for something that needs to be as stable as infrastructure. The saying is Infrastructure as Code, not with code. Even assuming you need to learn Terraform from scratch but already know TypeScript, it would still save you time compared to learning CDK, figuring out what is possible with it, and debugging issues down the line.
And learning something arguably better, like Cloudformation / Terraform / SST, is still a hurdle.
You'd be amazed by how far you can get with a home linux box and cloudflare tunnels.
(LOL 'customer'. But the point is, when the day comes, I'll be happy to give them money.)
https://news.ycombinator.com/item?id=39520776
Where did you read that? The pricing page says 10 credits per GB, and extra credits can be purchased at $10 per 1500 credit. So it's more like $0.067/GB.
> The free plan is always free, with hard monthly limits that cannot be exceeded or incur any costs.
Despite some pages issuing up to 8 database queries, I haven't seen responses take more than about 4-5 ms to generate. Since I have 16 GB of RAM to spare, I just let SQLite mmap the whole database and store temp tables in RAM. I can further optimize the backend by e.g. replacing Tera with Askama and optimizing the SQL queries, but the easiest win for latency is to just run the binary in a VPS close to my users. However, the current setup works so well that I just see no point in changing what little "infrastructure" I've built. The other cool thing is the fact that the backend + litestream uses at most ~64 MB of RAM. Plenty of compute and RAM to spare.
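Those mmap/temp-table settings boil down to two pragmas, applied per connection when the app opens the database. A sketch (the ~16 GiB figure mirrors the setup described above, not a recommendation):

    sqlite3 app.db <<'SQL'
    PRAGMA mmap_size = 17179869184;   -- let SQLite memory-map up to ~16 GiB of the file
    PRAGMA temp_store = MEMORY;       -- keep temp tables and indices in RAM
    SQL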
It's also neat being able to allocate a few cores on the same machine to run self-hosted GitHub actions, so you can have the same machine doing CI checks, rebuilding the binary, and restarting the service. Turns out the base model M4 is really fast at compiling code compared to just about every single cloud computer I've ever used at previous jobs.
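Registering such a self-hosted runner is two commands from GitHub's actions-runner tarball; OWNER/REPO and the token are placeholders (the token comes from the repo's runner settings page):

    ./config.sh --url https://github.com/OWNER/REPO --token <REGISTRATION_TOKEN>
    ./run.sh    # or: sudo ./svc.sh install && sudo ./svc.sh start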
How could I not use the cloud?
I guess this is one of those use cases that justify the cloud. It's hard to host that reliably at home.
Not wanting to deal with backups or HA are decent reasons to put a database in the cloud (as long as you are aware how much you are overpaying). Not having a good place to put the server is not a good reason
Though both of those are probably less than you'd need if you needed a full rack of space, which I assume is part of the reason that pricing is almost always "contact us". I did not bother getting a quote just for the purpose of this comment. But another thing that people need to be less afraid of, when they're looking to actually spend a few digits of money and not just comment about it, is asking for quotes.
However, I think there's an implicit point in TFA; namely, that your personal and side projects are not scaling to a 12 TB database.
With that said, I do manage approximately 14 TB of storage in a RAIDZ2 at my home, for "Linux ISOs". The I/O performance is "good enough" for streaming video and BitTorrent seeding.
However, I am not sure what your latency requirements and access patterns are. If you are mostly reading from the 12 TB database and don't have specific latency requirements on writes, then I don't see why the cloud is a hard requirement? To the contrary, most cloud providers provide remarkably low IOPS in their block storage offerings. Here is an example of Oracle Cloud's block storage for 12 TB:
https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/bl...
Those are the kind of numbers I would expect of a budget SATA SSD, not "NVMe-based storage infrastructure". Additionally, the cost for 12 TB in this storage class is ~$500/mo. That's roughly the cost of two 14 TB hard drives in a mirror vdev on ZFS (not that this is a good idea, btw).
This leads me to guess most people will prefer a managed database offering rather than deploying their own database on top of a cloud provider's block storage. But 12 TB of data in the gp3 storage class of RDS costs about $1,400/mo. That is already triple the cost of the NAS in my bedroom.
Lastly, backing up 12 TB to Backblaze B2 is about $180/mo. Given that this database is for your dev environment, I am assuming that backup requirements are simple (i.e. 1 off-site backup).
The key point, however, is that most people's side projects are unlikely to scale to a 12 TB dev environment database.
Once you're at that scale, sure, consider the cloud. But even at the largest company I worked at, a 14 TB hard drive was enough storage (and IOPS) for on-prem installs of the product. The product was an NLP-based application that automated due diligence for M&As. The storage costs were mostly full-text search indices on collections of tens of thousands of legal documents, each document could span hundreds to thousands of pages. The backups were as simple as having a second 14 TB hard drive around and periodically checking the data isn't corrupt.
How many pets do you want to be tending to? I have 10^5 servers I'm responsible for...
The quantity and methods the cloud affords me allow me to operate the same infrastructure with 1/10th as much labor.
At the extreme ends of scale this isn't a benefit, but for large companies in the middle this is the only move that makes any sense.
99% of posts I read talking about how easy and cheap it is to be in the datacenter all have a single digit number of racks worth of stuff. Often far less.
We operate physical datacenters as well. We spend multiple millions in the cloud per month. We just moved another full datacenter into the cloud, and the difference in cost between the two is less than $50k/year. Running in physical DCs is really inefficient for us for a lot of annoying and insurmountable reasons. And we no longer have to deal with procurement and vendor management. My engineers can focus their energy on more valuable things.
Multiple millions in the cloud per month?
You could build a room full of giant servers and pay multiple people for a year just on your monthly server bill.
But also, that’s extremely easily handled with physical servers - there are NVMe drives that are 10x as large.
Your use case is the _worst_ use case for the cloud.
For some reason people more easily understand the limits of CPU and memory, but overlook disk constantly.
I/O is hard to benchmark so it's often ignored since you can just scale up your disks. It's a common gotcha in the cloud. It's not a show stopper, but it blows up the savings you might be expecting.
Also, a SAN is often faster than local disk if you have a local SAN.
> How could I not use the cloud?
Funnily enough, one of my side projects has its (processed) primary source of truth at that exact size. Updates itself automatically every night adding a further ~18-25 million rows. Big but not _big_ data, right?
Anyway, that's running happily with instant access times (yay solid DB background) on a dedicated OVH server that costs somewhere around £600/mo (+VAT) and is shared with a few other projects. OVH's virtual rack tech is pretty amazing too; replicating that kind of size over the internal network is trivial.
It's not all HA, NVMe, web-scale stuff, but it's not like a few hundred TBs is a huge undertaking even for individual nerds with a bit of money to spend, or with connections at corporations that regularly decommission hardware and are happy to not have to spend resources getting rid of it.
This summer I bought a used server for 200 euros from an acquaintance, I plan on shoving 140 TB in it and expect some of my future databases to exceed 10 TB in size.
I had two projects reach the front page of HN last year, everything worked like a charm.
It's unlikely I'll ever go back to professional hosting, "cloud" or not.
The vast majority of us that are actually technically capable are better served self hosting.
Especially with tools like cloudflare tunnels and Tailscale.
When does the cloud start making sense?
It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.
AWS has a bunch of startup credits you can use, if you're smart.
But if you want free hosting, nothing beats just CloudFlare. They are literally free and even let you sign up anonymously with any email. They don't even require a credit card, unlike the other ones. You can use cloudflare workers and have a blazing fast site, web services, and they'll even take care of shooing away bots for you. If you prefer to host something on your own computer, well then use their cache and set up a cloudflare tunnel. I've done this for Telegram bots for example.
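The tunnel setup mentioned here is only a handful of cloudflared commands. A sketch with hypothetical names (exact flags can vary by version):

    cloudflared tunnel login
    cloudflared tunnel create my-tunnel
    cloudflared tunnel route dns my-tunnel app.example.com
    cloudflared tunnel run --url http://localhost:3000 my-tunnel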
Anything else - just use APIs. Need inference? Get a bunch of Google credits, and load your stuff into Vertex or whatever. Want to take payments anonymously from around the world? Deploy a dapp. Pay nothing. Literally nothing!
LEVEL 2:
And if you want to get extra fancy, have people open their browser tabs and run your javascript software in there, earning your cryptocurrency. Now you've got access to tons of people willing to store chunks of files for you, run GPU inference, whatever.
Oh do you want to do distributed inference? Wasmcloud: https://wasmcloud.com/blog/2025-01-15-running-distributed-ml... ... but I'd recommend just paying Google for AI workloads
Want livestreaming that's peer to peer? We've got that too: https://github.com/Qbix/Media/blob/main/web/js/WebRTC.js
PS: For webrtc livestreaming, you can't get around having to pay for TURN servers, though.
LEVEL 3:
Want to have unstoppable decentralized apps that can even run servers? Then use pears (previously dat / hypercore). If you change your mindset, from server-based to peer to peer apps, then you can run hypercore in the browser, and optionally have people download it and run servers.
https://pears.com/news/building-apocalypse-proof-application...
You can easily scale hard drive space independently of RAM by buying block storage separately and then mounting it on your Linode.
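The typical flow after attaching such a volume in the provider's panel, sketched here for Linode's device naming (the volume label "mydata" is a placeholder; other providers use different paths):

    mkfs.ext4 /dev/disk/by-id/scsi-0Linode_Volume_mydata
    mkdir -p /mnt/mydata
    mount /dev/disk/by-id/scsi-0Linode_Volume_mydata /mnt/mydata
    echo '/dev/disk/by-id/scsi-0Linode_Volume_mydata /mnt/mydata ext4 defaults,noatime 0 2' >> /etc/fstab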
It's really shitty that we all need to pay this tax, but I've just been asked whether our company has armed guards and redundant HVAC systems in our DC, and I wouldn't know how to answer that apart from saying that 'our cloud provider has all of those'.
1. For small stuff, AWS et al aren't that much more expensive than Hetzner, mostly in the same ballpark, maybe 2x in my experience.
2. What's easy to underestimate for _developers_ is that your self hosted setup is most likely harder to get third party support for. If you run software on AWS, you can hire someone familiar with AWS and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.
I absolutely prefer self hosting on root servers; it has always been my go-to approach for my own companies, big and small stuff. But for people that can't or don't want to mess with their infrastructure themselves, I do recommend the cloud route, even with all the current anti-hype.
If you're at an early/smaller stage you're not doing anything too fancy either way. Even self hosted, it will probably be easy enough to understand that you're just deploying a rails instance for example.
It only becomes trickier if you're handling a ton of traffic or apply a ton of optimizations and end up already in a state where a team of sysadmin should be needed while you're doing it alone and ad-hoc. IMHO the important part would be to properly realize when things will get complicated and move on to a proper org or stack before you're stuck.
2x is the same ballpark???
When did Linode and DO get dropped from being part of the cloud?
What used to separate VPS from cloud was resources with per-second billing, which DO and Linode, along with a lot of 2nd-tier hosts, also offer. They are part of the cloud.
Scaling used to be an issue, because buying and installing your hardware, or sending it to the DC to be installed and made ready, took too much time, and dedicated server offerings weren't big enough at the time; the highest core count in 2010 was an 8-core Xeon CPU. Today we have 256-core EPYC Zen 6c parts with likely double the IPC. Scaling problems that used to require a rack of servers can now be handled by a single server, with everything fitting in RAM.
Managed database? PlanetScale or Neon.
A lot of the issues that the "cloud" solved for medium to large projects are simply no longer issues in 2025, unless you are in the top 5-10% of projects that require that sort of flexibility.
I had someone on this site arguing that Cloudflare isn't a cloud provider...
Another advantage is that if you aim to provide a global service consumed throughout the world then cloud providers allow you to deploy your services in a multitude of locations in separate continents. This alone greatly improves performance. And you can do that with a couple of clicks.
Agreed. These sorts of takedowns usually point to a gap in the author's experience. Which is totally fine! Missing knowledge is an opportunity. But it's not a good look when the opportunity is used for ragebait, hustlr.
I think it is a lot safer for backups to be with an entirely different provider. It protects you in case of account compromise, account closure, disputes.
If using cloud and you want to be safe, you should be multi-cloud. People have been saved from disaster by multi-cloud setups.
> You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware)
Not true for VPSes or rented dedicated servers either.
> Peak-heavy loads can be a lot cheaper.
they have to be very spiky indeed, though. LLMs might fit, but a lot of compute-heavy spiky loads do not. I saved a client money on video transcoding that only happened once per upload, and only over a month or two a year, by renting a dedi all year round rather than using the AWS transcoding service.
> Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.
You have to do work to ensure things run across multiple availability zones (and preferably regions) anyway.
> But an RDS instance with solid backups-equivalent will usually not amortize quickly, if you need to pay someone to set it up.
You have more forced upgrades.
An unmanaged database will only need a lot of work if operating at large scale. If you are, then it's probably well worth employing a DBA anyway, as an AWS or similar managed DB is not going to do all the optimising and tuning a DBA will do.
I would personally have an account at one of those places and back up to there with everything ready to spin up instances and failover if you lose your rack, and use them for any bursty loads.
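Cross-provider backup can be one nightly cron line with rclone; a sketch where "b2-offsite" is a hypothetical remote created beforehand via rclone config:

    rclone sync /srv/backups b2-offsite:my-company-backups --transfers 8 --checksum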
...
...
"P.S. follow me on Twitter"
So uh, not everything
This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner, and developers must stay disciplined to keep components portable from day one, but you can get a lot of mileage out of free credits without burning dollars on any infrastructure. The biggest challenge becomes finding the time to perform these migrations among other competing priorities, such as new feature development, especially if you're growing fast.
Our startup is mostly built on Google Cloud, but I don't think our sales rep is very happy with how little we spend or that we're unwilling to "commit" to spending. The ability to move off of the cloud, or even just to another cloud, provides a lot of leverage in the negotiating seat.
Cloud vendors can also lead to an easier risk/SLA conversation for downstream customers. Depending on your business, enterprise users like to see SLAs and data privacy laws respected around the globe, and cloud providers make it easy to say "not my problem" if things are structured correctly.
Reading author's article:
> For me, that meant:
> RDS for the PostgreSQL database (my biggest monthly cost, in fact)
> EC2 for the web server (my 2nd biggest monthly cost)
> Elasticache for Redis
https://rameerez.com/how-i-exited-the-cloud/
Right. But none of the cloud providers encourage that mode of thinking, since they all have completely different frontends, APIs, and different versions of the same services (load balancers, storage), etc. Even if you standardize on k8s, the implementation can be chalk and cheese between two cloud providers. The lock-in is way worse with cloud providers.
* at least I assume that's what this post is; I'm still waiting for it to load.
On AWS an Aurora RDS is not cheap. But I don't have to spend time or money on an admin.
Is the cost justified? Because that's what cloud is. Not even talking about the level of compliance I get from having every layer encrypted when my hosted box is just a screwdriver away from data getting out the old school way.
When I'm small enough or big enough, self-managed makes sense and is probably cheaper. But when getting the right people, with enough redundancy and knowledge, becomes the expensive part...
But actually, I've never seen this addressed in any of these arguments so far. Probably because the actual time required to manage a DB server is really unpredictable.
This, and also startups are quite heterogeneous. If you have an engineer on your team with experience in hosting their own servers (or at least a homelab-person), setting up that service with sufficient resiliency for your average startup will be done within one relaxed afternoon. If your team consists of designers and engineers who hardly ever used a command line, setting up a shaky version of the same thing will cost you days - and so will any issue that comes up.
Strawman arguments, ad hominem attacks, Spongebob mocking memes, and casual venturing into conspiracy theories and accusations of malicious intent...
> Why do all these people care if I save more money or not? ... If they’re wrong, and if I and more people like me manage to convince enough people that they’re wrong, they may be out of a job soon.
I have a feeling AWS is doing fine without him. Cloud is one of the fastest growing areas in tech because their product solves a need for certain people. There is no larger conspiracy to keep cloud in business by silencing dissent on Twitter.
> You will hear a bunch of crap from people that have literally never tried the alternative. People with no real hands-on experience managing servers for their own projects for any sustained period of time.
This is more of a rant than a thoughtful technical article. I don't know what I was expecting, because I clicked on the title knowing it was clickbait, so shame on me, I guess...
Is this what I'm missing by not having Twitter?
235 more comments available on Hacker News