A $1k AWS mistake
Mood: heated
Sentiment: negative
Category: tech
Key topics: AWS, cloud costs, infrastructure management
The author shares a $1,000 AWS mistake caused by a misconfigured NAT gateway, sparking a heated discussion about cloud costs, AWS pricing, and the need for better cost management features.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 41m
Peak period: 37 comments in Hour 2
Avg / period: 13.4
Based on 134 loaded comments
Key moments
- 01 Story posted: 11/19/2025, 10:00:05 AM (9h ago)
- 02 First comment: 11/19/2025, 10:41:03 AM (41m after posting)
- 03 Peak activity: 37 comments in Hour 2 (hottest window of the conversation)
- 04 Latest activity: 11/19/2025, 7:01:50 PM (26m ago)
- DevOops: And I can see how, in very big accounts, small mistakes on your data source when you're doing data crunching, or wrong routing, can put thousands and thousands of dollars on your bill in less than an hour.
--
0: https://blog.cloudflare.com/aws-egregious-egress/

By default a NGW is limited to 5 Gbps: https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway...
A GB transferred through a NGW is billed at $0.05.
So, at continuous max transfer speed, it would take almost 9 hours to reach $1,000.
Assuming a multi-AZ setup with three AZs, it's still 3 hours if you've messed up so badly that you manage to max out all three NGWs.
I get your point but the scale is a bit more nuanced than "thousands and thousands of dollars on your bill in less than an hour"
The default limitations won't allow this.
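For the curious, here is that math as a minimal Python sketch. It assumes the 5 Gbps default NGW limit and the ~$0.05/GB processing rate quoted above; actual rates vary by region.

```python
# Back-of-the-envelope: time for one maxed-out NAT gateway to bill $1,000.
# Assumes the 5 Gbps default limit and ~$0.05/GB quoted above; rates vary by region.
GBPS_LIMIT = 5            # default NAT gateway bandwidth (Gbit/s)
PRICE_PER_GB = 0.05       # USD per GB processed
TARGET_USD = 1_000

gb_per_second = GBPS_LIMIT / 8               # 0.625 GB/s
usd_per_hour = gb_per_second * 3600 * PRICE_PER_GB
hours_to_target = TARGET_USD / usd_per_hour

print(f"~${usd_per_hour:.0f}/hour per NGW, ~{hours_to_target:.1f} hours to $1,000")
# ~$112/hour per NGW, ~8.9 hours to $1,000 (about 3 hours across three maxed NGWs)
```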
Crucial for the approval was that we had cost alerts already enabled before it happened and were able to show that this didn't help at all, because they triggered way too late. We also had to explain in detail what measures we implemented to ensure that such a situation doesn't happen again.
Hard no. Had to pay, I think, $100 for premium support to find that out.
- raw
- click-ops
Because, when you build your infra from scratch on AWS, you absolutely don't want the service gateways to exist by default. You want to have full control on everything, and that's how it works now. You don't want AWS to insert routes in your route tables on your behalf. Or worse, having hidden routes that are used by default.
But I fully understand that some people don't want to be bothered by those technicalities and want something that works and is optimized following the Well-Architected Framework pillars.
IIRC they already provide some CloudFormation Stacks that can do some of this for you, but it's still too technical and obscure.
Currently they probably rely on their partner network to help onboard new customers, but for small customers it doesn't make sense.
Why? My work life is in terraform and cloudformation and I can't think of a reason you wouldn't want to have those by default. I mean I can come up with some crazy excuses, but not any realistic scenario. Have you got any? (I'm assuming here that they'd make the performance impact ~0 for the vpc setup since everyone would depend on it)
How does this actually work? So you upload your data to AWS S3 and then if you wish to get it back, you pay per GB of what you stored there?
You can see why, from a sales perspective: AWS' customers generally charge their customers for data they download - so they are extracting a % off that. And moreover, it makes migrating away from AWS quite expensive in a lot of circumstances.
Please get some training...and stop spreading disinformation. And to think on this thread only my posts are getting downvoted....
"Free data transfer out to internet when moving out of AWS" - https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-i...
Egress bandwidth costs money. Consumer cloud services bake it into a monthly price, and if you’re downloading too much, they throttle you. You can’t download unlimited terabytes from Google Drive. You’ll get a message that reads something like: “Quota exceeded, try again later.” — which also sucks if you happen to need your data from Drive.
AWS is not a consumer service so they make you think about the cost directly.
We are programmed to receive. You can check out any time you like, but you can never leave
And people wonder why Cloudflare is so popular, when a random DDoS can decide to start inflicting costs like that on you.
But "security," people might say. Well, you can be secure and still keep the behavior opt-out, but you should be able to have an interface that is upfront and informs people of the implications.
Though it's important to note that in this specific case there was a misconfiguration that is easy to make and easy to misunderstand: the data was not intended to leave AWS services (and thus should have been free), but because it went through the NAT gateway, the data did leave the AWS nest and was charged at a per-GB rate about an order of magnitude higher than if you had just pulled everything straight out of S3/EC2 (generally speaking, YMMV depending on region, requests, total size, whether it's an expedited archival retrieval, etc.).
So this is an atypical case; it doesn't usually cost $1,000 to pull 20 TB out of AWS. Still, this is an easy mistake to make.
I have never understood why the S3 endpoint isn't deployed by default, except to catch people making this exact mistake.
Cloud cult was successfully promoted by all major players, and people have completely forgotten about the possibilities of traditional hosting.
But when I see a setup form for an AWS service or the never-ending list of AWS offerings, I get stuck almost immediately.
"I'd like to spend the next sprint on S3 endpoints by default"
"What will that cost"
"A bunch of unnecessary resources when it's not used"
"Will there be extra revenue?"
"Nah, in fact it'll reduce our revenue from people who meant to use it and forgot before"
"Let's circle back on this in a few years"
I was lucky to have experienced all of the same mistakes for free (ex-Amazon employee). My manager just got an email saying the costs had gone through the roof and asked me to look into it.
Feel bad for anyone that actually needs to cough up money for these dark patterns.
They made a mistake and are sharing it for the whole world to see in order to help others avoid making it.
It's brave.
Unlike punching down.
And then writing “I regret it” posts that end up on HN.
Why are people not getting the message to not use AWS?
There are SO MANY other faster, cheaper, less complex, more reliable options, but people continue to use AWS. It makes no sense.
And it’s always the same - clouds refuse to provide anything more than alerts (that are delayed) and your only option is prayer and begging for mercy.
Followed by people claiming with absolute certainty that it’s literally technically impossible to provide hard capped accounts to tinkerers despite there being accounts like that in existence already (some azure accounts are hardcapped by amount but ofc that’s not loudly advertised).
It's easier to waive cost overages than deal with any of that.
1) You hit the cap.
2) AWS sends an alert, but your stuff still runs at no cost to you for 24h.
3) If there's no response, AWS shuts it down forcefully.
4) AWS eats the "cost" because, let's face it, it basically costs them a 1000th of what they bill you for.
5) You get this buffer 3 times a year. After that, they still do the 24h forced shutdown, but you get billed.
Everybody wins.
And why is that a problem? And how different is that from "forgetting" to pay your bill and having your production environment brought down?
Data transfer can be pulled into the same model by having an alternate internet gateway model where you pay for some amount of unmetered bandwidth instead of per byte transfer, as other providers already do.
Since there are in fact two ropes, maybe cloud providers should make it easy for customers to avoid the one they most want to avoid?
When my computer runs out of hard drive it crashes, not goes out on the internet and purchases storage with my credit card.
It is technically impossible. In that no tech can fix the greed of the people taking these decisions.
> No cloud provides wants to give their customers that much rope to hang themselves with.
They are so benevolent to us...
I've used AWS for about 10 years and am by no means an expert, but I've seen all kinds of ugly cracks and discontinuities in design and operation among the services. AWS has felt like a handful of very good ideas, designed, built, and maintained by completely separate teams, littered by a whole ton of "I need my promotion to VP" bad ideas that build on top of the good ones in increasingly hacky ways.
And in any sufficiently large tech organization, there won't be anyone at a level of power who can rattle cages about a problem like this who will want to be the one to actually do it. No "VP of Such and Such" will spend their political capital stressing how critical it is that they fix the thing that will make a whole bunch of KPIs go in the wrong direction. They're probably spending it on shipping another hacked-together service with Web2.0-- er. IOT-- er. Blockchai-- er. Crypto-- er. AI before promotion season.
I dunno, Aurora’s pricing structure feels an awful lot like that. “What if we made people pay for storage and I/O? And we made estimating I/O practically impossible?”
It's someone in a Patagonia vest trying to avoid getting PIP'd.
I have budgets set up and alerts through a separate alerting service that pings me if my estimates go above what I've set for a month. But it wouldn't fix a short term mistake; I don't need it to.
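For anyone who hasn't set this up, here is a minimal sketch of one way to do it with boto3 (an AWS Budgets budget with an email subscriber); the account ID, amount, threshold, and address are placeholders, and as the comment says, this warns you after the fact rather than stopping a short-term mistake.

```python
import boto3

# Minimal sketch: a monthly cost budget that emails when actual spend passes 80%.
# Account ID, budget amount, threshold and address are placeholders.
budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cost-guardrail",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "ops-alerts@example.com"}
            ],
        }
    ],
)
```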
It wasn't when the service was first created. What's intentionally malicious is not fixing it for years.
Somehow AI companies got this right from the get-go. Money up front: no money, no tokens.
It's easy to guess why. Unlike hosting infra bs, inference is a hard cost for them. If they don't get paid, they lose (more) money. And sending stuff to collections is expensive and bad press.
Conversely, the first time someone hits an edge case in billing limits and their site goes down, losing 10k worth of possible customer transactions, there's no way to unring that bell.
The second constituency are also, you know, the customers with real cloud budgets. I don't blame AWS for not building a feature that could (a) negatively impact real, paying customers (b) is primarily targeted at people who by definition don't want to pay a lot of money.
But an opt-in "I'd rather you delete data/disable things than send me a 100k bill" toggle with suitable disclaimers would mean people can safely learn.
That way everyone gets what they want. (Well, except the cloud providers, who presumably don't like limits on their open-ended bills.)
But hey, let's say you have different priorities than me. Then why not both? Why not let me set the hard cap? Why does Amazon insist on being able to bill me more than my business is worth if I make a mistake?
But over the last few years, people have convinced themselves that the cost of ignorance is low. Companies hand out unlimited self-paced learning portals, tick the “training provided” box, and quietly stop validating whether anyone actually learned anything.
I remember when you had to spend weeks in structured training before you were allowed to touch real systems. But starting around five or six years ago, something changed: Practitioners began deciding for themselves what they felt like learning. They dismantled standard instruction paths and, in doing so, never discovered their own unknown unknowns.
In the end, it created a generation of supposedly “trained” professionals who skipped the fundamentals and now can’t understand why their skills have giant gaps.
The expectation that it just works is mostly a good thing.
It solves the problem of unexpected requests or data transfer increasing your bill across several services.
https://aws.amazon.com/blogs/networking-and-content-delivery...
Does "data transfer" not mean CDN bandwidth here? Otherwise, that price seems two orders of magnitude less than I would expect
https://news.ycombinator.com/item?id=45975411
I agree that it’s likely very technically difficult to find the right balance between capping costs and not breaking things, but this shows that it’s definitely possible, and hopefully this signals that AWS is interested in doing this in other services too.
You can usually transfer from S3 on a single instance as fast as the instance's NIC--100 Gbps+
You'd need a synchronous system that checks quotas before each request and for a lot of systems you'd also need request cancellation (imagine transferring a 5TiB file from S3 and your cap triggers at 100GiB--the server needs to be able to receive a billing violation alert in real time and cancel the request)
I imagine that for anything capped that AWS already provides to customers, they just estimate and eat the loss.
Obviously such a system is possible since IAM/STS mostly do this but I suspect it's a tradeoff providers are reluctant to make
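To make the "synchronous quota check plus mid-transfer cancellation" idea concrete, here is a purely hypothetical Python sketch; `check_remaining_budget` and the chunk source are illustrative stand-ins, not real AWS APIs.

```python
# Hypothetical sketch of the "check quota before each chunk, cancel mid-transfer"
# model described above. check_remaining_budget() and the chunk source are
# illustrative stand-ins, not real AWS APIs.
class BillingCapExceeded(Exception):
    pass

def transfer_with_cap(chunks, cost_per_gb, check_remaining_budget):
    """Yield chunks until the projected spend would exceed the remaining budget."""
    spent = 0.0
    for chunk in chunks:                          # e.g. 8 MiB pieces of a 5 TiB object
        cost = len(chunk) / 1024**3 * cost_per_gb
        if spent + cost > check_remaining_budget():
            raise BillingCapExceeded("cap reached, cancelling the transfer mid-flight")
        spent += cost
        yield chunk                               # hand the chunk on to the caller
```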
We are primarily using Hetzner for the self-serve version of Geocodio and have been a very happy customer for decades.
Sure, it decreases the time necessary to get something up and running, but the promises of cheaper/easier to manage/more reliable have turned out to be false. Instead of paying x on sysadmin salaries, you pay 5x to mega corps and you lose ownership of all your data and infrastructure.
I think it's bad for the environment, bad for industry practices and bad for wealth accumulation & inequality.
Assuming he got it working, he could have opened the service without going further into debt, with the caveat that if he messed up the pricing model and it took off, it could have annihilated his already-dead finances.
I uploaded a small xls with uid and prodid columns and then kind of forgot about it.
A few months later I get a note from the bank saying my account is overdrawn. The account is only used for freelancing work, which I wasn't doing at the time, so I never checked it.
Looks like AWS was charging me over 1K / month while the algo continuously worked on that bit of data that was uploaded one time. They charged until there was no money left.
That was about 5K in weekend earnings gone. Several months worth of salary in my main job. That was a lot of money for me.
Few times I've felt so horrible.
And of course I give every online service a separate virtual credit card (via privacy dot com, but your bank may issue them directly) with a spend limit set pretty close to the expected usage.
This should be illegal. If you can't inform me about the bill on my request you shouldn't be legally able to charge me that bill. Although I can already imagine plenty of ways somebody could do malicious compliance with that rule.
Also, consider using fck-nat (https://fck-nat.dev/v1.3.0/) instead of NAT gateways unless you have a compelling reason to do otherwise, because you will save on per-Gb traffic charges.
(Or, just run your own Debian nano instance that does the masquerading for you, which every old-school Linuxer should be able to do in their sleep.)
Only half-joking. When something grossly underperforms, I do often legitimately just pull up calc.exe and compare the throughput to the number of employees we have × 8 kbit/sec [0], see who would win. It is uniquely depressing yet entertaining to see this outperform some applications.
[0] spherical cow type back of the envelope estimate, don't take it too seriously; assumes a very fast 200 wpm speech, 5 bytes per word, and everyone being able to independently progress
It's annoying because this is by far the less common case for a VPC, but I think it's the right way to structure permissions and access in general. S3, the actual service, went the other way on this and has desperately been trying to reel it back for years.
A parallel to this is how SES handles permission to send emails. There are checks and hoops to jump through to ensure you can't send out spam. But somehow, letting DevOps folk shoot themselves in the foot (credit card) is ok.
What has been done is the monetary equivalent of "fail unsafe" => "succeed expensively"
My point is: architecturally, has there ever, in the history of AWS, been a case where a customer wants to pay for the transit of same-region traffic when a check box exists to say "do this for free"? Authorization and transit/path are separate concepts.
There has to be a better experience.
And if there are any interoperability concerns, you offer an ability to opt-out with that (instead of opting in).
There is precedent for all of this at AWS.
Why it should not be done:
1. It mutates routing. Gateway Endpoints inject prefix-list routes into selected route tables. Many VPCs have dozens of RTs for segmentation, TGW attachments, inspection subnets, EKS-managed RTs, shared services, etc. Auto-editing them risks breaking zero-trust boundaries and traffic-inspection paths.
2. It breaks IAM / S3 policies. Enterprises commonly rely on aws:sourceVpce, aws:SourceIp, Private Access Points, SCP conditions, and restrictive bucket policies (see the example policy after this list). Auto-creating a VPCE would silently bypass or invalidate these controls.
3. It bypasses security boundaries. A Gateway Endpoint forces S3 traffic to bypass NAT, firewalls, IDS/IPS, egress proxies, VPC Lattice policies, and other mandatory inspection layers. This is a hard violation for regulated workloads.
4. Many VPCs must not access S3 at all. Air-gapped, regulated, OEM, partner-isolated, and inspection-only VPCs intentionally block S3. Auto-adding an endpoint would break designed isolation.
5. Private DNS changes behavior. With Private DNS enabled, S3 hostname resolution is overridden to use the VPCE instead of the public S3 endpoint. This can break debugging assumptions, routing analysis, and certain cross-account access patterns.
6. AWS does not assume intent. The VPC model is intentionally minimal. AWS does not auto-create IGWs, NATs, Interface Endpoints, or egress paths. Defaults must never rewrite user security boundaries.
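To illustrate point 2, here is roughly what such an endpoint-pinned bucket policy looks like, as a sketch with boto3; the bucket name and endpoint ID are placeholders. An endpoint that AWS created automatically would have a different ID and would either be blocked by, or silently undermine the intent of, a policy like this.

```python
import json
import boto3

# Sketch of a bucket policy that denies all S3 access unless it arrives through
# one specific VPC endpoint. Bucket name and vpce ID are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessFromApprovedEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-locked-bucket",
                "arn:aws:s3:::example-locked-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}
boto3.client("s3").put_bucket_policy(
    Bucket="example-locked-bucket", Policy=json.dumps(policy)
)
```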
How are you inspecting zero-trust traffic? Not at the gateway/VPC level, I hope, as naive DPI there will break zero-trust.
If it breaks closed as it should, then it is working as intended.
If it breaks open, guess it was just useless pretend-zero-trust security theatre then?
“We have no idea what your intent is, so we’ll default to routing AWS-AWS traffic expensively” is way, way worse than forcing users to be explicit about their intent.
Minimal is a laudable goal - but if a footgun is the result then you violate the principle of least surprise.
I rather suspect the problem with issues like this is that they mainly catch the less experienced, who aren’t an AWS priority because they aren’t where the Big Money is.
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only...
It's a free service after all.
It's all in the docs: https://docs.aws.amazon.com/vpc/latest/privatelink/concepts....
>There is another type of VPC endpoint, Gateway, which creates a gateway endpoint to send traffic to Amazon S3 or DynamoDB. Gateway endpoints do not use AWS PrivateLink, unlike the other types of VPC endpoints. For more information, see Gateway endpoints.
Even the first page of VPC docs: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-ama...
>Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device.
The author of the blog writes:
> When you're using VPCs with a NAT Gateway (which most production AWS setups do), S3 transfers still go through the NAT Gateway by default.
Yes, you are using a virtual private network. Where is it supposed to go? It's like being surprised that data in your home network goes through a router.
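The fix the thread keeps pointing back to is an S3 Gateway Endpoint, so same-region S3 traffic takes the endpoint route instead of the NAT gateway. A minimal boto3 sketch, with placeholder region, VPC, and route-table IDs:

```python
import boto3

# Minimal sketch: add an S3 Gateway Endpoint so private subnets reach S3 via the
# endpoint's prefix-list route instead of the NAT gateway. IDs/region are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route tables of the private subnets
)
```

Gateway endpoints for S3 and DynamoDB carry no hourly or per-GB charge, which is why the thread treats them as the default choice unless a security requirement says otherwise.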
AWS just yesterday launched flat rate pricing for their CDN (including a flat rate allowance for bandwidth and S3 storage), including a guaranteed $0 tier. It’s just the CDN for now, but hopefully it gets expanded to other services as well.
Make sure they go to a list with multiple people on it. Make sure someone pays attention to that email list.
It's free and will save your bacon.
I've also had good luck asking for forgiveness. One time I scaled up some servers for an event and left them running for an extra week. I think the damage was in the 4 figures, so not horrendous, but not nothing.
An email to AWS support led to them forgiving a chunk of that bill. Doesn't hurt to ask.
Personally I miss ephemeral storage - having the knowledge that if you start the server from a known good state, going back to that state is just a reboot away. Way back when I was in college, a lot of our big-box servers worked like this.
You can replicate this on AWS with snapshots or formatting the EBS volume into 2 partitions and just clearing the ephemeral part on reboot, but I've found it surprisingly hard to get it working with OverlayFS
I get pulled into a fair number of "why did my AWS bill explode?" situations, and this exact pattern (NAT + S3 + "I thought same-region EC2→S3 was free") comes up more often than you’d expect.
The mental model that seems to stick is: S3 transfer pricing and "how you reach S3" pricing are two different things. You can be right that EC2→S3 is free and still pay a lot because all your traffic goes through a NAT Gateway.
The small checklist I give people:
1. If a private subnet talks a lot to S3 or DynamoDB, start by assuming you want a Gateway Endpoint, not the NAT, unless you have a strong security requirement that says otherwise.
2. Put NAT on its own Cost Explorer view / dashboard (see the sketch after this list). If that line moves in a way you didn't expect, treat it as a bug and go find the job or service that changed.
3. Before you turn on a new sync or batch job that moves a lot of data, sketch (I tend to do this with Mermaid) "from where to where, through what, and who charges me for each leg?" It takes a few minutes and usually catches this kind of trap.
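As a starting point for item 2, here is a minimal boto3 sketch that pulls daily costs grouped by usage type and surfaces the NAT gateway lines; the dates and the substring match are placeholders to adapt to your own account.

```python
import boto3

# Minimal sketch for checklist item 2: group daily cost by usage type and print
# the NAT gateway lines (usage types like "USE1-NatGateway-Bytes"). Dates are placeholders.
ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-11-01", "End": "2025-11-19"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)
for day in resp["ResultsByTime"]:
    for group in day["Groups"]:
        usage_type = group["Keys"][0]
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if "NatGateway" in usage_type and cost > 0:
            print(day["TimePeriod"]["Start"], usage_type, f"${cost:.2f}")
```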
Cost Anomaly Detection doing its job here is also the underrated part of the story. A $1k lesson is painful, but finding it at $20k is much worse.
Also, as a shameless plug: Vantage covers this exact type of cost hiccup. If you aren't already using it, we have a very generous free tier: https://www.vantage.sh/
Unexpected, large AWS charges have been happening for so long, and so egregiously, to so many people, including myself, that we must assume it's by design of Amazon.
60 more comments available on Hacker News