AWS to Bare Metal Two Years Later: Answering Your Questions About Leaving AWS
Posted 2 months ago · Active 2 months ago
Source: oneuptime.com · Tech story · High profile
Sentiment: heated / mixed · Debate: 85/100
Key topics
Cloud Computing
Bare Metal Infrastructure
AWS Migration
The article discusses a company's decision to migrate from AWS to bare metal infrastructure, sparking a heated debate on the pros and cons of cloud computing versus managing one's own infrastructure.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 15m after posting
Peak period: 91 comments in 0-6h
Avg / period: 14.5
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Oct 29, 2025 at 7:14 AM EDT (2 months ago)
- First comment: Oct 29, 2025 at 7:29 AM EDT (15m after posting)
- Peak activity: 91 comments in the 0-6h window, the hottest stretch of the conversation
- Latest activity: Nov 2, 2025 at 1:00 AM EDT (2 months ago)
The long-term app model in the market is shifting much more towards buying services vs. renting infrastructure. It's here where the AWS case falls apart, with folks now buying PlanetScale over RDS, buying Databricks over the mess AWS puts forward for data lakes, and working with model providers directly instead of dealing with the headaches of Bedrock. The real long-term threat is that AWS keeps whiffing on all the other stuff and gets reduced to a boring rent-a-server shop that market forces will drive to very low margins.
Yes, a lot of those third-party services will run on AWS, but the future looks like folks renting servers from AWS at 7% gross margin and selling their value-add service on top at 60% gross margin.
What people forget in the OVH or Hetzner comparison is that the entry servers they are known for (think the Advance line at OVH or the AX line at Hetzner) come with some drawbacks.
The OVH Advance line, for example, comes without ECC memory, in a server that might host databases. It's a disaster waiting to happen. There is no option to add ECC memory to the Advance line, so you have to use Scale or High Grade servers, which are far from "affordable".
Hetzner by default comes with a single PSU and a single uplink. Yes, if nothing happens this is probably fine, but if you need a reliable private network or 10G it will cost extra.
But IMO, systems like these (the ones handling bank transactions, say) should have a degree of resiliency to this kind of failure, as any hardware or software problem can cause something similar.
For a startup with one rack in each of two data centers, it’s probably fine. You’ll end up testing failover a bit more, but you’ll need that if you scale anyway.
If it’s for some back office thing that will never have any load, and must not permanently fail (eg payroll), maybe just slap it on an EC2 VM and enable off-site backup / ransomware protection.
Reference: https://news.ycombinator.com/item?id=38294569
> We have 730+ days with 99.993% measured availability and we also escaped AWS region wide downtime that happened a week ago.
This is a very nice brag. Given they run their DDoS-protection ingress via Cloudflare there is that dependency, but in that case I can 100% agree that DNS and ingress can absolutely be a full-time job. Running some microservices and a database absolutely is not. If your teams are constantly monitoring and adjusting them, such as scaling, then the problem is the design, not the hosting.
Unless you're a small company serving up billions of heavy requests an hour, I would put money on the bet AWS is overcharging you.
Technically? Totally doable. But the owners prefer renting in the cloud over the people-related issues of hiring.
You don't need to hire dedicated people full time. It could even be outsourced, with a small contract for maintenance.
It's the same argument you could make for "accounting persons" or "HR persons" - "We are a software organisation!" - Personally I don't buy the argument.
Yeah, those people we outsourced to happen to work at AWS.
Pay some "devops" folks and then underfund them and give them a mandate of all ops but with less people and also you need to manage the constant churn of aws services and then deal with normal outages and dumb dev things.
These people exist, but we have far more stupid "admins" around here
When you are not in the infrastructure business (I work in retail at the moment), the public cloud is the sane way to go (which is sad, but anyway)
I've been working at a place for a long time and we have our own data centers. Recently there has been a push to move to the public cloud and we were told to go through AWS training. It seems like the first thing AWS does in its training is spend a considerable amount of time on selling their model. As an employee who works in infrastructure, hearing Amazon sell so hard that the company doesn't need me anymore is not exactly inspiring.
After that section they seem to spend a considerable amount of time on how to control costs. These are things no one really thinks about currently, as we manage our own infra. If I want to spin up a VM and write a bunch of data to it, no one really cares. The capacity already exists and is paid for, adding a VM here or there is inconsequential. In AWS I assume we’ll eventually need to have a business justification for every instance we stand up. Some servers I run today have value, but it would be impossible to financially justify in any real terms when running in AWS where everything has a very real cost assigned to it. What I do is too detached from profit generation, and the money we could save is mostly theoretical, until something happens. I don’t know how this will play out, but I’m not excited for it.
The AWS mandatory training I did in the past was 100% marketing of their own solutions, and tests are even designed to make you memorize their entire product line.
The first two levels are not designed for engineers: they're designed for "internal salespeople". Even Product Managers were taking the certification, so they would be able to recommend AWS products to their teams.
Who does, then? Even with automatic updates, one can assume some level of maintenance is required for long-term deployments.
Don’t get me wrong, I love running stuff bare metal for my side projects, but scaling is difficult without any ops.
I do not miss that crap
It wasn't as simple as that then, and it's still not as simple as that now.
It’s become polarised (as everything seems to).
I've specced bare metal and I've specced AWS; which one gets used is entirely a matter of the problem, the costs, and the relative trade-offs.
That is all it is.
Clients that use cloud consistently end up spending more on devops resources, because their setups tend to be vastly more complex and involve more people.
The biggest ops teams I worked alongside were always dedicated to running AWS setups. The slowest too were dedicated to AWS. Proportionally, I mean, of course.
People here are comparing the worst possible version of bare metal with "hosting my startup on AWS".
I wish I could come up with some kind of formalization of this issue. I think it has something to do with communication explosions across multiple people.
Don't make perfect the enemy of the good.
AWS in 2025 is way more work than Heroku/Fly/Vercel, but also way more work than renting bare metal from, say, Hetzner/OVH, and perhaps even more than renting colo.
AWS services are great quality, but they are extremely expensive.
Right, doesn't that include figuring out the right and best way of running it, regardless if it runs on client machines or deployed on servers?
At least I take "software engineering" to mean the full end-to-end process, from "Figure out the right thing to build" to "runs great wherever it's meant to run". I'm not a monkey that builds software on my machine and then hands it off to some deployment engineer who doesn't understand what they're deploying. If I'm building server software, part of my job is ensuring it's deployed in the right environment and runs perfectly there too.
With AWS I think this tradeoff is very weak in most cases: the tasks you are paying AWS for are relatively cheap in time-of-people-in-your-org, and AWS also takes up a significant amount of that time with new tasks as well. Of the organisations I'm personally aware of, the ones who hosted on-prem spent less money on their compute and had smaller teams managing it, with more effective results than those who were cloud-based (to varying degrees of egregiousness, from 'well, I can kinda see how it's worth it because they're growing quickly' to 'holy shit, they're setting money on fire and compromising their product because they can't just buy some used tower PCs and plug them in in a closet in the office').
It's also that the requirements vary a lot, discussions here on HN often seem to assume that you need HA and lots of scaling options. That isn't universally true.
Funny how our perceptions differ. I seem to mostly see people saying all you need is a cheap Hetzner instance and Postgres to solve all technical problems. We clearly all have different working environments and requirements. That's why I roll my eyes at the suggestions I see in threads of going all in on colo. My last two major cloud migrations were due to colo facilities shutting down. They were getting kicked out and had a deadline. In one of the cases, the company I was working with was the second largest client at the colo, but when the largest client decided to pull out, the owners decided the economics of running the datacenter didn't make sense to them anymore. Switching colo facilities when you have a few servers isn't a big deal. It's annoying but manageable. When you have hundreds to thousands of servers, it becomes a major operational risk and is enormously disruptive to business as usual.
This applies only if you have an extra customer who pays the difference. Basically, the argument only holds if you can't take more customers because keeping up the infrastructure takes too much time, or if you need to hire an extra person who costs more than the AWS bill difference.
("Shall we make the app very resilient to failure? Yes running on multiple regions makes the AWS bill bigger but you'll get much fewer outages, look at all this technobabble that proves it")
And of course AWS lock-in services are priced to look cheaper compared to their overpricing of standard stuff[1] - if you just spend the engineering effort and IaC coding effort to move onto them, this "savings" can be put to more AWS cloud engineering effort which again makes your cloud eng org bigger and more important.
[1] (For example, moving your app off containers to Lambda, or the db off PostgreSQL to DynamoDB, etc.)
I don't think it is easy. I see most organizations struggle with the fact that everything is throttled in the cloud. CPU, storage, network. Tenants often discover large amounts of activity they were previously unaware of, that contributes to the usage and cost. And there may be individuals or teams creating new usages that are grossly impacting their allocation. Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM generated settings.
Then you can start adding in the Cloud value, such as incomprehensible networking diagrams that are probably non-compliant in some way (guess which ones!), and security? What is it?
Sounds interesting, which setting is that?
MARS isn't strictly needed for most things. Some features that requires it are ORM (EF) proxies and lazy loading. If you need MARS, there are third party "accelerators" that workaround this madness.
"MARS Acceleration significantly improves the performance of connections that use the Multiple Active Result Sets (MARS) connection option."
https://documentation.nitrosphere.com/resources/release-note...
> Did you know there is a setting in MS SQL Server that impacts performance by an order of magnitude when sending/receiving data from the Cloud to your on-premises servers? It's the default in the ORM generated settings
I use and love EF, but generally leave MARS off when possible because it is responsible for more trouble than performance gains nearly every time.
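For readers wondering where MARS actually lives: it's just a connection-string flag. Below is a minimal sketch of toggling it, assuming Python with pyodbc and the Microsoft ODBC Driver for SQL Server; the host, database and credentials are placeholders, and the .NET/EF-style equivalent keyword would be MultipleActiveResultSets.
```python
# A minimal sketch, assuming pyodbc and the Microsoft ODBC Driver for SQL Server.
# Server, database and credentials below are placeholders, not from the thread.
import pyodbc

BASE = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=tcp:example-sql.internal,1433;"   # hypothetical host
    "DATABASE=AppDb;UID=app_user;PWD=change-me;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

# MARS off (the preference above): one active result set per connection, so the
# app must consume or buffer each result set before issuing the next query.
conn_no_mars = pyodbc.connect(BASE + "MARS_Connection=no;")

# MARS on (what generated connection strings often default to): multiple
# interleaved result sets on one connection - convenient for lazy loading,
# but a known source of latency surprises over high-RTT cloud-to-on-prem links.
conn_mars = pyodbc.connect(BASE + "MARS_Connection=yes;")
```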
As a Computer Science dude and former C64/Amiga coder in senior management at a large international bank, I saw first hand how costs balloon simply due to the fact that the bank recreates and replicates its bare metal environment in the cloud.
So costs increased while nothing changed. Imagine that: fixed resources, no test environments, because virtualisation was out of the equation in the cloud due to policies and SDLC processes. And it goes on: automated releases? Nope, a request per email with an attached scan of a paper document as sign-off.
Of course you can buy a Ferrari and use it as a farm tractor. I bet it is possible with a little modification here and there.
Another fact is that lock-in plays a huge role. Once you are in it, no matter what you subscribe to, magically everything suddenly slows down a bit. But since I am a guy who uses a time tracker to test and monitor apps, I could easily draw a line even without utilizing my math background: enforced throttling.
There is a difference between 100, 300 and 500ms for SaaS websites - people without prior knowledge of perceptual psychology feel it but cannot put their finger on the wound. But since we are in the cloud, suddenly a cloud manager will offer you a speed upgrade - catered just for your needs! Here, have a free trial period of 3 months and experience the difference for your business!
I am a bit opinionated here and really suppose that cloud metrics analysed the bank's traffic and service usage to willingly slow it down, in a way only professionals could detect. Have we promised to be lightning fast in the first place? No, that's not what the contract says. We fed you with it, but a "normal" speed was agreed upon. It is like getting a Porsche as a free rental car when you take your VW Beetle to the dealer for a checkup. Hooked, of course. A car is a car after all. How do you boil a frog? Slowly.
Of course there will be more sales, and this is the Achilles' heel for every business with indifferent customers - easy prey.
It is a vicious cycle, almost like taxation. You cannot hide from it, no escape and it is always on the rise.
IIRC, he only got into making cars because Enzo Ferrari disrespected him.
Many a company was stuck with a datacenter unit that was unresponsive to the company's needs, and people migrated to AWS to avoid dealing with them. This straight-out happened in front of my eyes multiple times. At the same time, you also end up in AWS, or even within AWS, using tools that are extremely expensive, because the cost-benefit analysis of the individuals making the decision, who often don't know much other than what they use right now, is just wrong for the company. The executive on top is often either not much of a technologist or 20 years out of date, so they have no way to discern the quality of their staff. Technical disagreements? They might only know who they like to hang out with, but that's where it ends.
So for path-dependent reasons, companies end up making a lot of decisions that in retrospect seem very poor. In startups it often just kills the company. Just don't assume the error is always in one direction.
I'd like to +1 here - it's an understated risk if you've got datacenter-scale workloads. But! You can host a lot of compute on a couple racks nowadays, so IMHO it's a problem only if you're too successful and get complacent. In the datacenter, creative destruction is a must and crucially finance must be made to understand this, or they'll give you budget targets which can only mean ossification.
Like, can't we just give the data center org more money so they can over-provision hardware? Or can't we have them use that extra money to rent servers from OVH/Hetzner during the discovery phase, to keep things going while we are waiting on things to get sized or arrive?
It's like how they always refuse to spend half my monthly salary on the computer I work on, and instead insist I use an underpowered Windows machine.
Or just use Hetzner for major performance at low cost... Their APIs and tooling make it look like it's your own datacenter.
In a large company I worked at, the Ops team that held the keys to AWS was taking literal months to push things to the cloud, causing problems with bonuses and promotions. Security measures were not in place, so there were cyberattacks. Passwords of critical services lapsed because they were not paying attention.
At some point it got so bad that the entire team was demoted, lost privileges, and contractors had to jump in. The CTO was almost fired.
It took months to recover and even to get to an acceptable state, because nothing was really documented.
On the other hand it's not hard to believe that the CEO and the board are as sleepy as the CTO here. And the whole management team.
The worst one was when a password for an integration with the judicial system expired. They asked the DevOps person to open their email and there were daily alerts going back six months. The only reason anyone found out it had happened was because a few low-level operators made a big thing out of it.
I don't like talking about "regulatory capture" but this is the only reason this company still exists. Easy market when there's almost no competition.
Looking back at the hiring decisions I've made at various levels of organizations, this is probably the single biggest mistake I've repeated: hiring people for a specific technology because that was specifically what we were using.
You'll end up with a team unwilling to change, because "you hired me for this; even if something else is best for the business, this is what I do".
Once I and the organizations shifted our mindset to hiring people who are more flexible - people who, even if they have expertise in one or two specific technologies, won't bury their heads in the sand whenever changes come up - everything became a lot easier.
I'll also tend to look closely at whether people have "gotten stuck" specialising in a single stack. It won't make me turn them down, but it will make me ask extra questions to determine how open they are to alternatives when suitable.
A modern server can be power cycled remotely, can be reinstalled remotely over networked media, can have its console streamed remotely, can have fans etc. checked remotely without access to the OS it's running etc. It's not very different from managing a cloud - any reasonable server hardware has management boards. Even if you rent space in a colo, most of the time you don't need to set foot there other than for an initial setup (and you can rent people to do that too).
But for most people, bare metal will tend to mean renting bare metal servers already configured anyway.
When the first thing you then tend to do is to deploy a container runtime and an orchestrator, you're effectively usually left with something more or less (depending on your needs) like a private cloud.
As for "buying ahead of time", most managed server providers and some colo operators also offer cloud services, so that even if you don't want to deal with a multi-provider setup, you can still generally scale into cloud instances as needed if your provider can't bring new hardware up fast enough (but many managed server providers can do that in less than a day too).
I never think about buying ahead of time. It hasn't been a thing I've had to worry about for a decade or more.
All of this was already possible 20 years ago, with iLO and DRAC cards.
The bad old days of begging an IT ops person for a server, and then throwing a binary over the fence at them so they can grumble while they try to get it running safely in production... yeah, no, that doesn't have to be a thing anymore.
The "we" you speak of is the problem: if your org hires actual real sysadmins and operations people (not people who just want to run everything on AWS), then "you" don't have to worry about it.
And, let's face it - aren't you already overprovisioning in the cloud because you can't risk your users waiting 1-2 minutes until your new nodes and pods come up? So basically the 'autoscaling' of the cloud has always been a myth.
In my experience, the ops folks were absolutely thrilled with the arrival of the cloud because with a trivial amount of training and a couple of certifications they had a pathway to get paid as much, if not more, than devs, especially if they rebranded as “devops engineers” instead of “ops guys”.
The only pushback against the cloud - other than from some of us engineers who were among the first to jump on it, still really loved it, but also recognized that it wasn't the best fit for all uses and carried significant risks - came from people worried about data safety.
The latter concern has largely turned out to not be a real one yet, but a decade and a half later people are finally realizing that actually there are many areas where the cloud may not be the best fit.
If you hire people that are not responsive to your needs, then, sure, that is a problem that will be a problem irrespective of what their pet stack is.
Also there's a mindset difference - if I gave you a server with 32 cores you wouldn't design a microservice system on it, would you? After all there's nowhere to scale to.
But with AWS, you're sold the story of infinite compute you can just expect to be there, but you'll quickly find out just how stingy they can get with giving you more hardware automatically to scale to.
I don't dislike AWS, but I feel this promise of false abundance has driven the growth in complexity and resource use of the backend.
Reality tends to be you hit a bottleneck you have a hard time optimizing away - the more complex your architecture, the harder it is, then you can stew.
This is key.
Most people never scale to a size where they hit that limit, and in most organisations where that happens, someone else has to deal with it, so most developers are totally unaware of just how fictional the "infinite scalability" actually is.
Yet it gets touted as a critical advantage.
At the same time, most developers have never tried to manage modern server hardware, and seem to think it is somewhat like managing the hardware they're using at home.
Not on the AMD machines from m7 (and the others which share the same architecture)
I kinda feel like this argument could be used against programming in essentially any language. Your company, or you yourself, likely chose to develop using (whatever language it is) because that's what you knew and what your developers knew. Maybe it would have been some percentage more efficient to use another language, but then you and everyone else has to learn it.
It's the same with cloud vs bare metal, though at least in the cloud, if you're using the right services and someone asked you tomorrow to scale 100x, you likely could within the workday.
And generally speaking, if your problem is at a scale where bare metal is trivial to implement, it's likely we're only talking about a few hundred dollars a month being 'wasted' in AWS. Which is nothing to most companies, especially when they'd have to consider developer/devops time.
"The right services" is I think doing a lot of work here. Which services specifically are you thinking of?
- S3? sure, 100x, 1000x, whatever, it doesn't care about your scale at all (your bill is another matter).
- Lambdas? On their own sure you can scale arbitrarily, but they don't really do anything unless they're connected to other stuff both upstream and downstream. Can those services manage 100x the load?
- Managed K8s? Managed DBs? EC2 instances? Really anything where you need to think about networking? Nope, you are not scaling this 100x without a LOT of planning and prep work.
You're not getting a 100x increase in instances without justifying it to your account manager anyway, long before you figure out how to get it to work.
EC2 has limits on the number of instances you can request, and it certainly won't let you 100x unless you've done it before and already gone through the hassle to get them to raise your limits.
On top of that, it is not unusual to hit availability issues with less common instance types. Been there, done that, had to provision several different instance types to get enough.
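As a concrete illustration of those ceilings, here's a minimal sketch of checking the on-demand instance quota with boto3; the region is a placeholder, and the quota code shown is the one commonly cited for "Running On-Demand Standard instances" (measured in vCPUs) - treat it as an assumption and verify it for your own account.
```python
# A minimal sketch, assuming boto3 and AWS credentials in the environment.
# The quota code is an assumption; confirm it in the Service Quotas console.
import boto3

quotas = boto3.client("service-quotas", region_name="us-east-1")

resp = quotas.get_service_quota(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",  # Running On-Demand Standard instances (vCPU limit)
)
print(resp["Quota"]["QuotaName"], "=", resp["Quota"]["Value"], "vCPUs")

# Raising it is a request/approval workflow, not something you can 100x on demand:
# quotas.request_service_quota_increase(
#     ServiceCode="ec2", QuotaCode="L-1216C47A",
#     DesiredValue=resp["Quota"]["Value"] * 100,
# )
```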
It's a lot worse than this in terms of AWS cost for apps that barely anyone uses. They're often incorrectly provisioned, and the AWS bill ends up in the hundreds of thousands or millions when it could have been a few thousand on bare metal at Hetzner with a competent sysadmin team. No, it's not harder to administer bare metal. No, it's not less reliable. No, it's not substantially harder for most companies to scale on bare metal (large Fortune 50 companies excluded).
I can go in and guarantee that my fees are capped at a few months worth of their savings, and still it's a hard sell with a lot of teams who are perfectly happy to keep burning cash.
And I'll note, as much as I love to get people off AWS, most of the times people can massively reduce their bill just by using AWS properly as well, so even if bare metal was bad for their specific circumstances they're still figuratively setting fire to piles of cash.
I've never seen a cloud setup where that was true.
For starters: most cloud providers will impose limits on you that often mean going 100x would involve pleading with account managers to have limits lifted and/or scrounging up a new, previously untested combination of instance sizes.
But secondly, you'll tend to run into unknown bottlenecks long before that.
And so, in fact, if that is a thing you actually want to be able to do, you need to actually test it.
But it's also generally not a real problem. I more often come across the opposite: Customers who've gotten hit with a crazy bill because of a problem rather than real use.
But it's also easy enough to set up a hybrid setup that will spin up cloud instances if/when you have a genuine need to scale up faster than you can provision new bare metal. You'll typically run an orchestrator and run everything in containers on a bare metal setup anyway, so it usually only requires having an auto-scaling group scaled down to 0, warming it up if load nears a critical level on your bare metal environment, and then flipping a switch in your load balancer to start directing traffic there. It's not a complicated thing to do.
Now, incidentally, your bare metal setup is even cheaper because you can get away with a higher load factor when you can scale into cloud to take spikes.
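A minimal sketch of that "cloud as overflow" switch, assuming boto3 and an existing Auto Scaling group that normally sits at zero instances; the group name, threshold and load signal are hypothetical placeholders.
```python
# A minimal sketch, assuming an overflow Auto Scaling group already exists and
# the load balancer / orchestrator knows how to use the instances once healthy.
import boto3

asg = boto3.client("autoscaling", region_name="eu-central-1")

OVERFLOW_GROUP = "overflow-workers"  # hypothetical ASG, normally DesiredCapacity = 0
CRITICAL_LOAD = 0.80                 # fraction of bare-metal capacity in use

def scale_overflow(bare_metal_load: float, warm_instances: int = 4) -> None:
    """Warm cloud capacity when the bare-metal fleet nears its limit,
    and scale back to zero once the spike has passed."""
    desired = warm_instances if bare_metal_load >= CRITICAL_LOAD else 0
    asg.set_desired_capacity(
        AutoScalingGroupName=OVERFLOW_GROUP,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

scale_overflow(bare_metal_load=0.85)  # e.g. called from a monitoring hook
```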
> And generally speaking, if your problem is at a scale where bare metal is trivial to implement, it's likely we're only talking about a few hundred dollars a month being 'wasted' in AWS. Which is nothing to most companies, especially when they'd have to consider developer/devops time.
Generally speaking, I only rarely work on systems that cost less than tens of thousands per month, and what I consistently see with my customers is that the higher the cost, the bigger the bare-metal advantage tends to be, as it allows you to readily amortise the initial setup costs of more streamlined/advanced setups. The few places where cloud wins on cost are the very smallest systems, typically <$5k/month.
This is so weird to me, because if you're running a company, you should be cost-sensitive. Sure, you might be willing to spend extra money on AWS in the very beginning if it helps you get to market faster. But after that, there's really no excuse: profit margin should be a very important consideration in how you run your infrastructure.
Of course, if you're VC backed, maybe that doesn't matter... that kind of company seems to mainly care about user growth, regardless of how much money is being sent to the incinerator to get it.
It's perfectly valid to not want to put engineering effort into it at the "wrong time" when delivering features will give you a higher return, but it came across as a lack of interest in paying attention to cost at all.
I saw a lot of that attitude from the tech side when I was looking at this. A lot of the time the CFO or CEO would be appalled, because they were actually paying attention to burn rates, but they were often getting stonewalled by the tech side, who'd just insist all the costs were necessary - even while they often didn't know what they were spending, or on what.
Let me go on a tangent about trains. In Spain, before you board a high-speed train you need to go through a full security check, like at an airport. In all other EU countries you just show up and board, but in Spain there's the security check. The problem is that even though the security check is expensive, inefficient theatre, just in case something does blow up, nobody wants to be the politician who removed the security check. There will be no reward for a politician who makes life marginally easier for lots of people, but there will be severe punishment for a politician who is involved in a potential terrorist attack, even if the chance of that happening is ridiculously small.
This is exactly why so many companies love to be balls deep into AWS ecosystem, even if it's expensive.
Just for curiosity's sake, did any other EU countries have any recent terrorist attacks involving bombs on trains in the capital, or is Spain so far alone with this experience?
Edit: Also, after looking it up, it seems like London did add temporary security scanners at some locations in the wake of those bombings, although they weren't permanent.
Russia is the only other European country besides Spain that after train bombings added permanent security scanners. Belgium, France and a bunch of other countries have had train bombings, but none of them added permanent scanners like Spain or Russia did.
Notice how these inefficient processes create large, compact lines of passengers, which would make the casualties much worse in the case of an actual bomb.
You can pay for EC2 + EBS + network costs, or you can have a fancy cloud-native solution where you pay for Lambda, ALBs, CloudWatch, metrics, Secrets Manager - things you assume they would just give you, like how if you eat at a restaurant you probably won't expect to pay for the parking, the toilet, or rent for the table and seats.
So cloud billing is its own science and art - and in most orgs devs don't even know how much the stuff they're building costs, until finance people start complaining about the monthly bills.
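The visibility gap is at least easy to close: here's a small sketch of pulling one month's spend grouped by service with boto3 and Cost Explorer; the dates are placeholders and the call assumes Cost Explorer is enabled on the account.
```python
# A small sketch, assuming boto3, credentials, and Cost Explorer enabled.
# Dates below are placeholders.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-09-01", "End": "2025-10-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 1.0:  # skip the long tail of services costing cents
        print(f"{service:45s} ${amount:>10.2f}")
```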
Because it was mostly fine at first, but later we had some close calls when there were changes that needed to be made on the servers. By the time we managed to mess up our hand managed incremental restart process, we had several layers of cache and so accidentally wiping one didn’t murder our backend, but did throw enough alerts to cause a P2. And because we were doing manual bucketing of caches instead of consistent hashing we hit the OOMKiller a couple times while dialing in.
But at this point it was difficult to move back to managed.
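For anyone unfamiliar with the alternative mentioned above, here's a minimal consistent-hash ring sketch; node names are placeholders. Adding or removing a cache node only remaps a small share of keys instead of reshuffling whole buckets onto one host (which is how the OOM killer gets invited).
```python
# A minimal consistent-hash ring sketch; node names are placeholders.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes: int = 100):
        # Place `vnodes` virtual points per node around the ring to even out load.
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        """Return the first node clockwise from the key's position on the ring."""
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._points)
        return self._ring[idx][1]

ring = HashRing(["cache-1", "cache-2", "cache-3"])
print(ring.node_for("user:42"))  # stays mostly stable if, say, cache-4 is added
```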
This feels closest to digital ocean’s business model.
Engineering managers are promised cost savings at the HR level. Corporate finance managers are promised the OpEx-for-CapEx trade-off; the books look better immediately. Cloud engineers embark on their AWS certification journey having been promised an uptick to their salaries. It's a win/win for everyone in isolation, a local optimum for everyone, but the organization now has to pay way more than it would, hypothetically, have been paying for bare metal ops. And hypothetical arguments are futile.
And it lends itself well to overengineering and the microservices cargo cult. Your company ends up with a system distributed around the globe across multiple AZs per region of business operations, striving to shave those 100ms of latency off your clients' RTT. But it's outgrown your comprehension, and it's slow anyway, and you can't scale up because it's expensive. And instead of having one problem, you now have 99, and your bill is one.
So it is not like one can dazzle decision makers with any logic or hard data. They are just announcing the decision while calling it a robust discussion over pros and cons of on-prem vs cloud placement.
It’s really disturbing how the human factor controls decision making in corporations.
For my peace of mind, I chose a sane path - if the company as an entity decides to do AWS, I will do my best to meet its goals. I've got all the Professional and Specialty certs. It's human nature. There's no point in tilting at windmills.
Amen to that.
Any kind of performance-improvement or monitoring work I did for my applications was met with indifference or derision from managers - because only if I had put the effort into cloud migration could we be doing "Horizontal Pod Scaling" for performance and have a fully managed Datadog console for monitoring the services.
But for me it's totally ruining my job, to be honest. I like technology because it enables me to make things. I don't want to become an AWS or Azure specialist and learn which tickboxes the overlords at Amazon and Microsoft allow me to click. Screw that. That has nothing to do with technical knowledge; it's just about being a corporate drone. In my particular case it's Microsoft. Another problem with this is that they know they own everything in our company now, so they're starting to treat us as their employees, giving us things to do - like promoting their features inside our company. I mean, they're a vendor, FFS. They should answer to us.
A lot of my colleagues are really motivated with this shit and doing all the certs. Many are even becoming Microsoft evangelists and get pissed if I criticise someone. I'm looking for other options in the company now where I can actually do something technical again.
I understand there might be no bare metal work left in this company but in that case I'll just want to do something else. I don't want to be some goon that links their entire career to using the products of one big tech company. And also I think Microsoft and Amazon are horrible companies to work with as a customer. So me as a techie I just don't want to work at that anymore. What will remain are a lot of yes men who know how to click boxes.
And yeah we're not even doing anything smart or taking advantage of what the cloud offers. We just lifted all the physical stuff to compute instances that run 24/7.
The consequence of running a database poorly is lost data.
At the end of the day they're all just processes on a machine somewhere, none of it is particularly difficult, but storing, protecting, and traversing state is pretty much _the_ job and I can't really see how you'd think ingress and DNS would be more work than the datastores done right.
Now with AWS, I have a SaaS that makes six figures and the AWS bill is <$1000 a month. I'm entirely capable of doing this on-prem, but the vast majority of the bill is S3 state, so what we're actually talking about is me being on-call for an object store and a database, and the potential consequences of doing so.
With all that said, there's definitely a price point and staffing point where I will consider doing that, and I'm pretty down for the whole on-prem movement generally.
That's the sweet spot for AWS customers. Not so much for AWS.
The key thing for AWS is trying to get you locked in by "helping you" depend on services that are hard to replicate elsewhere, so that if your costs grow to a point where moving elsewhere is worth it, it's hard for you to do so.
331 more comments available on Hacker News