Lost $300 Due to an API Key Leak From "Vibe Coding" – Learn From My Mistake
I recently lost $300 because of an API key leak. It started with a surprise $200 charge from Google Cloud, and when I looked into it, I found another $100 charge from the day before. Both were for Gemini API usage that I never intentionally set up.
After digging, I discovered the issue: I had hard-coded an API key in a script that was part of a feature I ended up deprecating. The file was only in the codebase for two days, but that was enough for the key to leak. Google actually sent me alerts about unusual activity, but I missed them because they went to a less-frequently-checked email account.
Here’s what I learned:
Never hardcode API keys - Use environment variables or a .env file, even for temporary code (see the sketch after this list).
Set up billing alerts - Google Cloud (and other providers) let you set up alerts for unexpected charges.
Check all linked emails - Don’t ignore notifications, even if they’re sent to secondary accounts.
Don’t rely solely on GitHub’s secret scanning - It’s useful, but renaming variables can bypass it.
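A minimal sketch of the first point above, assuming a Python script and a hypothetical GEMINI_API_KEY environment variable:

```python
import os

# Read the key from the environment instead of hard-coding it. Set it in
# your shell (export GEMINI_API_KEY="...") or keep it in a .env file that
# is listed in .gitignore and load it with a library such as python-dotenv.
api_key = os.environ.get("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY is not set; refusing to run without it")

# Pass api_key to whatever client you use; the key never appears in the
# source file, so it cannot leak through the repository.
```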
This happened while I was experimenting with "vibe coding" (letting AI generate code quickly), but I realized too late that human oversight is still crucial, especially for security.
Hope this helps someone avoid the same costly mistake!
TL;DR: Hard-coded an API key in a deprecated script, key leaked, and I got charged $300. Always use environment variables and set up billing alerts!
The author lost $300 due to an API key leak caused by "vibe coding" with AI-generated code. The post highlights the importance of human oversight and proper security practices; commenters discuss alternative authentication methods and best practices.
Snapshot generated from the HN discussion.

Discussion activity: light, based on 14 loaded comments (average 2.3 per period). Story posted Sep 14, 2025 at 12:16 PM EDT; first comment 19 hours later, on Sep 15, 2025 at 6:51 AM EDT; peak activity of 4 comments in the 18-24h window after posting; latest activity Sep 17, 2025 at 12:09 AM EDT.
Want the full context? Read the primary article or dive into the live Hacker News thread.
The alternative? JWT or suchlike. Authenticate each session with zero trust.
At big corp work everything is Okta / JWT / Yubikey etc. Very very occasionally an API key.
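As a rough illustration of the session-based approach these comments describe (not anything from the thread), a short-lived token with PyJWT might look like this; the signing secret, claim names, and 15-minute TTL are all placeholder choices:

```python
import datetime

import jwt  # PyJWT (pip install pyjwt)

SIGNING_KEY = "server-side-secret"  # in practice, load this from a vault/KMS


def issue_session_token(user_id: str) -> str:
    """Mint a short-lived token for one authenticated session."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": user_id,
        "iat": now,
        "exp": now + datetime.timedelta(minutes=15),  # expires quickly
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")


def verify_session_token(token: str) -> dict:
    """Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
```

The practical difference from a long-lived API key is that a leaked session token expires on its own within minutes.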
Wouldn’t it be logical that Google knew about zero trust? The problem wasn’t the API Key, the problem was that the poster didn’t use best practices - see my other comment.
Even if there weren’t built-in facilities like the three or four ways to authenticate with GCP or AWS programmatically, and you did have to use long-lived API keys, you could still piggyback on the cloud provider access I mentioned and read them from a secure, cloud-hosted vault using temporary credentials from your script.
In the case of AWS, read your third-party API key from Secrets Manager, and authenticate to Secrets Manager with the keys in your home directory - or better yet, with short-lived keys in your environment variables, not a local environment file that you will probably forget to add to .gitignore.
When you run your code on the cloud platform, you attach privileges to the runtime environment (VM, Lambda, Docker runtime, etc.) that are properly scoped for least privilege. The SDK also knows how to pick up those permissions automatically, so you never need to worry about your code getting the proper access keys.
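A rough boto3 sketch of the pattern described in these comments (the secret name is a placeholder, not the commenter’s actual setup): no access keys appear in the code, because the SDK resolves credentials from the default provider chain - your local AWS config during development, or the IAM role attached to the VM / Lambda / container in the cloud:

```python
import boto3


def get_third_party_api_key() -> str:
    # boto3 finds credentials automatically (env vars, ~/.aws/, or the
    # attached IAM role); nothing secret is written into the source.
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId="prod/third-party-api-key")
    return response["SecretString"]
```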
I’ve done most of my CI/CD using AWS-native services where you also attach the role to the runtime. For instance, CodeBuild is really just a Linux or Windows Docker runtime that you can run anything in, and you attach permissions to your CodeBuild project. Your AWS access itself is ideally controlled via SSO or 2FA.
I have done some work with Azure DevOps - which doesn’t have anything to do with Azure. You can also use it to deploy to AWS: you store your access keys in an Azure-controlled vault, and the pipeline grants your scripts AWS permissions. I think the same thing works with GitHub Actions.
I think this design choice backfires, though. I spend less time learning cloud services because the risks without a hard spending limit are too high.
Now there is an actual free tier that won’t let you go over $250 on AWS.
I uploaded my API key to a public repository
I learned not to do this.
Never upload your API key to a public repository.
Ok.
You should never specify API keys anywhere in your code or env files for GCP or AWS.
https://cloud.google.com/docs/authentication/application-def...
You still risk checking in your env file.
Doing it the correct way, your config lives in your home directory locally, far away from your repo, and the SDK finds the configuration automatically when running on GCP.
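A hedged sketch of what that looks like with Application Default Credentials in Python (assuming google-auth and, purely as an example client, google-cloud-storage are installed): locally the credentials come from "gcloud auth application-default login" in your home directory, and on GCP from the attached service account:

```python
import google.auth
from google.cloud import storage

# Resolves Application Default Credentials automatically: no key in the
# code, no key in an env file checked into the repo.
credentials, project_id = google.auth.default()

client = storage.Client(credentials=credentials, project=project_id)
for bucket in client.list_buckets():
    print(bucket.name)
```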
Even better when developing locally: assign temporary access keys to environment variables.
I’m being handwavy because I’m not a GCP guy. But on AWS, you do something similar by running "aws configure" locally and using the IAM role attached to the VM, Lambda, etc., so you never need to deploy with access keys.
This isn’t meant to be an “AWS does it better comment”. It looks like from my brief research, something similar is also best practice with GCP.
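For the "temporary access keys in environment variables" idea above, an illustrative boto3/STS sketch (the role ARN and session name are placeholders): mint short-lived credentials and export them for a local dev session instead of keeping long-lived keys on disk:

```python
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dev-role",  # hypothetical role
    RoleSessionName="local-dev",
    DurationSeconds=3600,  # credentials expire after an hour
)["Credentials"]

# Paste these into your shell; boto3 and the AWS CLI read them automatically.
print(f'export AWS_ACCESS_KEY_ID={creds["AccessKeyId"]}')
print(f'export AWS_SECRET_ACCESS_KEY={creds["SecretAccessKey"]}')
print(f'export AWS_SESSION_TOKEN={creds["SessionToken"]}')
```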
always always always: code review everything AI makes (CREAM)
it also helps if you understand what it’s writing. the only way to do that is to… review the code