I Let My AI Agents Run Unsupervised and They Burned $200 in 2 Hours
Posted 3 months ago · Active 3 months ago
blog.justcopy.ai · Tech · story
Key topics: AI Safety · Autonomous Agents · Cost Management
The author shares their experience of letting AI agents run unsupervised, resulting in a $200 cost in 2 hours, sparking a discussion on AI safety, cost management, and the need for better oversight.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion · 35 comments analyzed
Peak period: 19 comments in the 0-2h window
Average per period: 4.4 comments
Key moments
1. Story posted: Oct 14, 2025 at 3:30 AM EDT (3 months ago)
2. First comment: Oct 14, 2025 at 3:30 AM EDT (0s after posting)
3. Peak activity: 19 comments in the 0-2h window, the hottest period of the conversation
4. Latest activity: Oct 15, 2025 at 7:29 AM EDT (3 months ago)
ID: 45577193 · Type: story · Last synced: 11/20/2025, 12:29:33 PM
Building justcopy.ai, which lets you clone, customize, and ship any website. I built 7 AI agents to handle the dev workflow automatically.
Kicked them off to test something. Went to grab coffee.
Came back to a $100 spike on my OpenRouter bill. First thought: "holy shit we have users!"
We did not have users.
Added logging. The agent was still running. Making calls. Spending money. Just... going. Completely autonomous in the worst possible way. Final damage: $200.
The fix was embarrassingly simple (rough sketch below):
- Check for interrupts before every API call
- Add hard budget limits per session
- Set timeouts on literally everything
- Log everything so you're not flying blind
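A minimal sketch of what those guards could look like, assuming OpenRouter's OpenAI-compatible endpoint via the official `openai` Python client; the class, the flat per-call cost estimate, and the limits are illustrative placeholders, not my actual implementation:

```python
# Illustrative only: a guard wrapper enforcing the checklist above --
# interrupt checks, a hard per-session budget, timeouts, and logging.
import logging
import threading

from openai import OpenAI  # OpenRouter exposes an OpenAI-compatible API

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")


class BudgetExceeded(RuntimeError):
    pass


class GuardedSession:
    def __init__(self, budget_usd: float, est_cost_per_call_usd: float,
                 timeout_s: float = 30.0):
        self.client = OpenAI(base_url="https://openrouter.ai/api/v1",
                             api_key="YOUR_OPENROUTER_KEY")
        self.budget_usd = budget_usd
        self.est_cost_per_call_usd = est_cost_per_call_usd  # crude flat estimate
        self.spent_usd = 0.0
        self.timeout_s = timeout_s
        self.stop = threading.Event()  # set from outside to interrupt the agent

    def call(self, model: str, messages: list[dict]) -> str:
        # Check for interrupts before every API call.
        if self.stop.is_set():
            raise InterruptedError("session interrupted")
        # Hard budget limit per session.
        if self.spent_usd + self.est_cost_per_call_usd > self.budget_usd:
            raise BudgetExceeded(f"would exceed ${self.budget_usd:.2f} budget")
        # Timeout on the request itself.
        resp = self.client.chat.completions.create(
            model=model, messages=messages, timeout=self.timeout_s
        )
        self.spent_usd += self.est_cost_per_call_usd
        # Log everything so you're not flying blind.
        log.info("call ok, estimated spend so far: $%.2f", self.spent_usd)
        return resp.choices[0].message.content
```

Tracking spend with a flat per-call estimate is crude; reading token usage from each response and multiplying by your model's pricing would be more accurate, but even the crude version would have stopped the bleeding.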
Basically: autonomous ≠ unsupervised. These things will happily burn your money until you tell them to stop.
Has this happened to anyone else? What safety mechanisms are you using?
Agents accomplishing nothing, just doing things for the sake of doing things.
Seems we're there.
I am now going to make a multi-agent poker MCP as a joke. Thank you.
That said, I think a budget limit of $5-10k per agent makes sense. You're underpaying your agents and won't get principal-engineer quality at those rates.
I've heard from sources that I trust that both AWS and Google Gemini charge more than it costs them in energy to run inference.
You can get a good estimate for the truth here by considering open weight models. It's possible to determine exactly how much energy it costs to serve DeepSeek V3.2 Exp, since that model is open weight. So run that calculation, then take a look at how much providers are charging to serve it and see if they are likely operating at a loss.
Here are some prices for that particular model: https://openrouter.ai/deepseek/deepseek-v3.2-exp/providers
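As a back-of-envelope sketch of that calculation (every number below is an assumed placeholder for illustration, not a measurement of DeepSeek V3.2 Exp's actual throughput or any provider's deployment):

```python
# Illustrative energy-cost estimate; all inputs are assumptions, not data.
node_power_kw = 10.0            # assumed draw of one 8-GPU inference node
electricity_usd_per_kwh = 0.10  # assumed industrial electricity price
tokens_per_second = 5_000       # assumed aggregate throughput with batching

energy_cost_per_hour_usd = node_power_kw * electricity_usd_per_kwh
tokens_per_hour = tokens_per_second * 3600
usd_per_million_tokens = energy_cost_per_hour_usd / tokens_per_hour * 1_000_000

print(f"~${usd_per_million_tokens:.3f} per 1M tokens in energy alone")
```

With these made-up inputs it comes out to a few cents per million tokens; compare a real measurement against the provider prices linked above, keeping in mind that hardware depreciation, networking, and staff are left out, so energy alone is only a lower bound on true serving cost.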
Or: what are they bleeding money on?
That doesn't mean that when they do charge for the models - especially via their APIs - they are serving them at a unit-cost loss.
I would presume that companies selling compute for AI inference either make some money or at least break even when they serve a request. But I wouldn't be surprised if they are subsidizing this cost for the time being.
[1]: https://finance.yahoo.com/news/sam-altman-says-losing-money-...
https://twitter.com/sama/status/1876104315296968813
"insane thing: we are currently losing money on openai pro subscriptions!
people use it much more than we expected"
I don't doubt that they lose money on a $200 subscription, because the people that pay $200 are probably the same people that will max out usage over time, no matter how wasteful. Sam Altman framed it as "it's so useful people are using it more than we expected!" because he is interested in having everyone believe that LLMs are the future. It's all bullshit.
If I had to guess, they probably at least break even on API calls, and might make some money on lower-tier subscriptions (i.e., people that pay for it but use it sparingly on an as-needed basis).
But that is boring, and hints at limited usability. Investors won't want to burn hundreds of billions in cash for something that may be sort of useful. They want destructive amounts of money in return.
Shopify, Uber and Airbnb all hit profitability after 14 years. Amazon took 9.
And this isn't something that will go away anytime soon. OpenAI for instance is projecting that in 2030 R&D will still account for 45% of their costs. They think they'll be profitable by that time, or so they're telling investors.
https://epoch.ai/data-insights/openai-compute-spend
This is like saying solar power is free if you ignore the equipment and installation costs.
Even worse still, model creators are in an arms race. They can't release a model and call it a day, waiting for it to start paying for itself. They need to immediately jump on to the next version of the model or risk falling behind.
Bad idea, bad execution, I like it when a plan comes together.
_any_ website, can't imagine why there is _any_ confusion.
I don’t care if you make less money, don’t fucking lie.
This reminds me of that law that people can only legally play their own games using console emulators.
1 more comment is available on Hacker News.