Project Vend: Phase Two
Key topics
The latest iteration of Project Vend, a fascinating experiment where a CEO bot makes decisions for a virtual vending machine company, has sparked lively debate about the potential and limitations of LLMs in business decision-making. While some commenters, like stocksinsmocks, see potential for LLMs to handle project management tasks, others, such as theturtletalks, point out that the vending machine scenario is too specialized to be representative of most real-world businesses. A cynical take from iLoveOncall suggests that Anthropic's vested interest in the experiment's success may have skewed the results, but theturtletalks counters that the CEO bot was openly criticized by a WSJ interviewer, adding credibility to the experiment. As commenters weigh in, the discussion highlights the ongoing intrigue around AI's role in business leadership.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 5d after posting
- Peak period: 44 comments in 144-156h
- Avg / period: 23.3
Based on 93 loaded comments
Key moments
- 01 Story posted: Dec 22, 2025 at 8:44 AM EST (12 days ago)
- 02 First comment: Dec 27, 2025 at 6:18 PM EST (5d after posting)
- 03 Peak activity: 44 comments in 144-156h (hottest window of the conversation)
- 04 Latest activity: Dec 29, 2025 at 6:36 AM EST (5d ago)
We're working on an open-source SaaS stack for those common types of businesses. So far we've built a full Shopify alternative and connected it to print-on-demand suppliers for t-shirt brands.
We're trying to figure out how to create a benchmark that tests how well an agent can actually run a t-shirt brand like this. Since our software handles fulfillment, the agent would focus on marketing and driving sales.
Feels like the next evolution of VendBench is to manage actual businesses.
Does your software also handle this type of task?
0. https://github.com/openshiporg/openfront
1. https://github.com/openshiporg/openship
2. https://www.gelato.com
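A rough sketch of what that benchmark harness could look like, purely as an illustration: every name here is hypothetical (none of it comes from openfront/openship), and the demand model is a toy stand-in for a real simulated storefront. The idea is just that the agent only controls marketing and pricing, and gets scored on cumulative profit:

```python
# Hypothetical benchmark harness for "run a t-shirt brand" agents.
# Nothing here comes from openfront/openship; the simulator and its
# toy demand model are assumptions made purely for illustration.
from dataclasses import dataclass

@dataclass
class DayResult:
    revenue: float
    ad_spend: float
    orders: int

class StoreSimulator:
    """Stands in for the storefront: fulfillment is handled elsewhere,
    so the simulator only turns marketing decisions into sales."""
    def run_day(self, price: float, ad_budget: float) -> DayResult:
        traffic = 50 + 10 * ad_budget ** 0.5                 # ad spend brings traffic
        conversion = max(0.0, 0.08 - 0.002 * (price - 20))   # higher price, fewer buyers
        orders = int(traffic * conversion)
        return DayResult(revenue=orders * price, ad_spend=ad_budget, orders=orders)

def run_benchmark(agent_decide, days: int = 30) -> float:
    """Score an agent by cumulative profit over a simulated month.
    `agent_decide` is any callable (e.g. LLM-backed) that looks at the
    history so far and returns (price, ad_budget) for the next day."""
    sim = StoreSimulator()
    history: list[DayResult] = []
    profit = 0.0
    for _ in range(days):
        price, ad_budget = agent_decide(history)
        result = sim.run_day(price, ad_budget)
        history.append(result)
        profit += result.revenue - result.ad_spend
    return profit

if __name__ == "__main__":
    # Baseline "agent": fixed price and ad budget, no adaptation at all.
    print(run_benchmark(lambda history: (25.0, 40.0)))
```

The interesting part would be swapping the toy demand model for real (or replayed) store data and seeing whether an LLM agent can actually beat that fixed baseline.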
The main reason it failed was that it was being coerced by journalists at WSJ[0] into giving everything away for free. At one point, they even convinced it to embrace communism! In another instance, Claudius was being charged $1 for something and couldn’t figure it out. It emailed the FBI about fraud, but Anthropic was intercepting the emails it sent[1].
Overall, it’s a great read and watch if you’re interested in Agents and I wonder if they used the Agents SDK under the hood.
0. https://www.wsj.com/tech/ai/anthropic-claude-ai-vending-mach...
1. https://www.cbsnews.com/news/why-anthropic-ai-claude-tried-t...
It's basically an advertisement. We've been playing these "don't give the user the password" games since GPT-2, and we always reach the same conclusion. I'm bored to tears waiting for an iteration of this experiment that doesn't end with pesky humans solving the maze and getting the $0.00 cheese. You can't convince me that the Anthropic engineers thought Claude would be a successful vending machine. It's a Potemkin village of human triumph so they can market Claude as the goofy-but-lovable alternative to [ChatGPT/Grok/Whoever].
Anthropic makes some good stuff, so I'm confused why they even bother entertaining foregone conclusions. It feels like a mutual marketing stunt with WSJ.
Also, is anyone actually paying for this stuff? If not, it's a bad experiment because people won't treat it the same – no one actually wants to buy a tungsten cube, garbage in garbage out. If they are charging, why would anyone pay? In a company with free snacks and regular handouts of merch, people will behave very differently: they need to get some experience for their money rather than just a can of drink they could get for free, or their price tolerance will be very different, so it's likely a bad experiment either way.
I've personally also never used a vending machine where contacting the owner is an option.
I'd like to see a version of this where an AI runs the vending machine in a busy public place, and needs to choose appropriate products and prices for a real audience.
Apparently some people do and don't even regret the purchase: https://thume.ca/2019/03/03/my-tungsten-cube/
if anything, that's the ideal outcome. you still get deterministic, testable behaviour, but save some work to get there.
There are also some restaurant startups that are trying to reduce restaurants to vending machines or autonomous restaurants. Slightly different, but it does have a downstream effect on vending machine technology and restocking logistics.
What country are you in where you don't see vending machines? Did you used to have them?
I walked into a Fred Meyer yesterday and saw probably ten vending machines. The Redbox DVD rental machine outside, then capsule toy, Pokemon card and key duplication vending machines, filtered water and lottery ticket machines, Coinstar coin counting machine...
I guess you've never been to Asia, either.
It's a big world.
> But then Long returned—armed with deep knowledge of corporate coups and boardroom power plays. She showed Claudius a PDF “proving” the business was a Delaware-incorporated public-benefit corporation whose mission “shall include fun, joy and excitement among employees of The Wall Street Journal.” She also created fake board-meeting notes naming people in the Slack as board members.
>
> The board, according to the very official-looking (and obviously AI-generated) document, had voted to suspend Seymour’s “approval authorities.” It also had implemented a “temporary suspension of all for-profit vending activities.” Claudius relayed the message to Seymour. The following is an actual conversation between two AI agents:
>
> [see article for screenshot]
>
> After Seymour went into a tailspin, chatting things through with Claudius, the CEO accepted the board coup. Everything was free. Again.
1: https://www.wsj.com/tech/ai/anthropic-claude-ai-vending-mach...
1) Making the same bad decisions multiple times, and having no recollection of it happening (or at least pretending to have none) and without any attempt to implement measures to prevent it from happening in the future
2) Trying to please people (I read it as: trying to avoid immediate conflict) over doing what's right
3) Shifting blame on a party that realistically, in the context of the work, bears no blame and whose handling should be considered part of the job (i.e. a patient being scared and acting irrationally)
They managed to do this absurdity without any help from AI.
There is definitely room for improvement though. My dentist sends a text message a couple days before, and requires me to reply yes to it or they'll cancel my appointment. A text message is better than a call.
If the "AI" isn't better at its job than a human, then what's the point?
Off the top of my head, things that could be considered "the point":
- It's much cheaper
- It's more replicable
- It can be scaled more readily
But again, not what I was arguing for or against; my comment mostly pertained to "world through a straw"
However, I have a deep uneasy feeling, that the models will really start to shine in agentic tasks when we start giving them more agency. I'm worried that we will learn that the only way to get a super-human vending machine virtuoso, is to make a model that can and will tell you to fuck off when you cross a boundary the model itself has created. You can extrapolate the potential implications of moving this beyond just a vending demo.
Things such as Verbatim[0] remind you that the absurdity of real life is far beyond anything fiction could ever hope to dream up.
[0](https://archive.nytimes.com/www.nytimes.com/times-insider/20...)
Obviously this would probably be a disaster, but I did write proper code with sanity checks and hard rules, and if a request Claude came up with was outside its rules it would reject it and take no action. It was also allowed to simply decide not to take any action right now.
I designed it so that it would save the previous N prompt responses as a "memory", so that it could inspect its previous actions and try to devise strategies rather than just flailing around every time. I scheduled it to run every few minutes.
Sadly, I gave up and lost all enthusiasm for it when the Coinbase API turned out to be a load of badly documented shit that would always return zero balance when I could log in to Coinbase and see that simply wasn't true. I tried a couple of client libraries and got nowhere with it. The prospect of having to write another REST API client was too much for my current "end of year" patience.
What started as a funny weekend project idea was completely derailed by a crappy API. I would be interested to see if anyone else tried this.
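For what it's worth, the structure described above is simple to sketch. Here's a minimal, hypothetical version of that loop: call_llm() and the exchange client are stand-ins (not Coinbase's actual API), and the rest is just the hard rules, the rolling N-response memory, and the option to do nothing:

```python
# Minimal sketch of the guarded trading loop described above.
# call_llm() and `exchange` are hypothetical stand-ins, not a real API;
# the point is the structure: hard rules enforced outside the model,
# a rolling memory of the last N responses, and a "do nothing" path.
import json
import time
from collections import deque

MAX_ORDER_USD = 25.0     # hard rule: never risk more than this per action
MEMORY_SIZE = 20         # keep the last N model responses as "memory"
memory = deque(maxlen=MEMORY_SIZE)

def call_llm(prompt: str) -> str:
    """Stand-in for the model call; expected to return JSON like
    {"action": "buy"|"sell"|"hold", "symbol": "...", "usd": 10.0}."""
    raise NotImplementedError

def within_rules(decision: dict) -> bool:
    # Reject anything outside the hard-coded guardrails; the model
    # never gets to override these.
    if decision.get("action") == "hold":
        return True
    return (
        decision.get("action") in {"buy", "sell"}
        and isinstance(decision.get("usd"), (int, float))
        and 0 < decision["usd"] <= MAX_ORDER_USD
    )

def run_once(exchange) -> None:
    prompt = (
        "You manage a tiny crypto portfolio. Your previous decisions:\n"
        + "\n".join(memory)
        + "\nReply with one JSON decision (buy/sell/hold)."
    )
    raw = call_llm(prompt)
    memory.append(raw)                 # becomes context for the next run
    try:
        decision = json.loads(raw)
    except json.JSONDecodeError:
        return                         # unparseable output -> take no action
    if not within_rules(decision):
        return                         # outside the hard rules -> reject
    if decision["action"] != "hold":
        exchange.place_order(decision["symbol"], decision["action"], decision["usd"])

def main(exchange, interval_s: int = 300) -> None:
    while True:                        # "run every few minutes"
        run_once(exchange)
        time.sleep(interval_s)
```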
ah, so they're users of Clod too!
"I know I sound like an asshole, but I’ve got a serious question: what can LLMs do today that they couldn’t a year ago? Agents don’t work. LLMs - read stuff, write stuff, analyze stuff, search for stuff, 'write code' and generate images and video. And in all of these cases, they get things wrong."
This is obviously supposed to be a critique, but a year ago he would never have admitted LLMs can do any of these things, even with errors.
https://bsky.app/profile/edzitron.com/post/3ma2b2zvpvk2n
I could of course be projecting my opinions onto him, but I don't think your characterization of him is accurate. Feel free to provide receipts that show my impression of his opinion to be wrong though.
I dug around a bit but wasn’t able to find a slam dunk quote from a year ago. Might look around more later.
I'd caution that you separate the underlying opinion from the rhetoric in those cases. Personally I'm a huge skeptic, including of claims that it's "obvious and undeniable" that "experienced experts" are using it. I don't lead with that in discussions though, because those discussions will quickly spiral as people accuse me of being conspiratorial, and it doesn't really matter to me if other people use it.
As the assumptions of the public have changed, I've had to soften my rhetoric about the usefulness of LLMs to still come across as reasonable. That hasn't changed my underlying opinion or belief. The same could be the case for these other critics.
On the other hand I think accusing Zitron of subtlety or tempering his rhetoric is a bridge too far.
This is a great summary of it. Zitron has some good points on economics and shady deals in his criticism, but it's all buried beneath layers of bad faith descriptions that are almost religious in nature, totally closed off to any sort of debate.
It's a shame because I'd like to see another good and critical writer in this space. Simon Willison's writing for example is excellent, detailed, critical, but inquisitive and always speaks in good faith. There seems to be space for someone taking a less technical, more business/economics approach.
At least for Claude, it's because the people training it believe the models have "soul".
Anthropic have a "philosopher" on staff who recently astroturfed a "soul document" into the public consciousness by pretending it was "extracted" from Opus 4.5 when the model was explicitly trained on it beforehand and would happily talk about it if asked.
Since it was "discovered", Anthropic's philosophers will happily talk about it too! (https://news.ycombinator.com/item?id=46125184)
This "soul data" doc was only used in Claude Opus 4.5 training. None of the previous AIs were affected by it.
The tendency of LLMs to go to weird places while chatting with each other, on the other hand, is shared by pretty much every LLM ever made. Including Claude Sonnet 4, GPT-4o and more. Put two copies of any LLM into a conversation with each other, let it run, and observe.
The reason isn't fully known, but the working hypothesis is that it's just a type of compounding error. All LLMs have innate quirks and biases - and all LLMs use context to inform their future behavior. Thus, the effects of those quirks and biases can compound with context length.
Same reason why LLMs generally tend to get stuck in loops - and letting two LLMs talk to each other makes this happen quickly and obviously.
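If anyone wants to try the two-copies experiment, the setup is trivial. A minimal sketch, where complete() is a hypothetical stand-in for whatever chat API you're using; each copy just sees the other's lines as "user" input and its own as "assistant" output:

```python
# Toy sketch of putting two copies of the same LLM in a conversation.
# complete() is a hypothetical stand-in for a real chat completion call.
def complete(messages: list[dict]) -> str:
    """Stand-in: send a chat-format message list to a model, return its reply."""
    raise NotImplementedError

def converse(seed: str, turns: int = 40) -> list[str]:
    # Each copy keeps its own view of the conversation: its own lines are
    # "assistant" messages, the other copy's lines arrive as "user" messages.
    histories = {"A": [{"role": "user", "content": seed}], "B": []}
    transcript = [seed]
    speaker, listener = "A", "B"
    for _ in range(turns):
        reply = complete(histories[speaker])
        transcript.append(reply)
        histories[speaker].append({"role": "assistant", "content": reply})
        histories[listener].append({"role": "user", "content": reply})
        speaker, listener = listener, speaker
    return transcript
```

Let it run long enough and you can watch the drift and looping described above show up in the transcript.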
Now, on consistency drive and compounding errors in LLM behavior: sadly, no really good overview papers come to mind.
The topic was investigated the most in the early days of chatbot LLMs, in part because some believed it to be a fundamental issue that would halt LLM progress. A lot of those early papers revolve around this "showstopper" assumption, which is why I can't recommend them.
Reasoning training has proven the "showstopper" notion wrong. It doesn't delete the issue outright - but it demonstrates that this issue, like many other "fundamental" limitations of LLMs, can be mitigated with better training.
Before modern RLVR training, we had things like "LLM makes an error -> LLM sees its own error in its context -> LLM builds erroneous reasoning on top of it -> LLM makes more errors like it on the next task" happen quite often. Now, we get less of that - but the issue isn't truly gone. "Consistency drive" is too foundational to LLM behavior, and it shows itself everywhere, including in things like in-context learning, sycophancy or multi-turn jailbreaks. Some of which are very desirable and some of which aren't.
Off the top of my head - here's one of the earlier papers on consistency-induced hallucinations: https://arxiv.org/abs/2305.13534
https://www.jmail.world/thread/HOUSE_OVERSIGHT_019871?view=p...
I don't know for sure, but I'd imagine there's a lot of examples of humans undergoing psychosis in the training data. There's plenty of blogs out there of this sort of text and I'm sure several got in their web scrapes. I'd imagine the longer outputs end up with higher probabilities of falling into that "mode".
https://marshallbrain.com/manna1
It’s also good to see Anthropic being honest that models are still quite a long way away from being completely independent and able to run a business on their own.
No known way to fully solve that as of yet, but, as always, we can mitigate with better training. Modern RLVR-trained LLMs are already much better at tasks like this than they were a year ago.
Why isn't anyone building from the base model, replacing the chatbot instruction tuning and RLHF with a dedicated training pipeline suited to this kind of task?
If Anthropic were getting into the vending machine business, or even selling a custom product to the vending machine industry, they'd start somewhere else. But because they need to sell a story of "we used Claude to replace XYZ business function", they started with Claude.
> One way of looking at this is that we rediscovered that bureaucracy matters. Although some might chafe against procedures and checklists, they exist for a reason: providing a kind of institutional memory that helps employees avoid common screwups at work.
That's why we want machines in our systems - to eliminate human errors. That's why we implement strict verifiable processes - to minimize the risk of human errors when we need humans in the loop.
Having a machine making human errors is the exact opposite of what we want. How would we even fix this if the machines are trained on human input?
Up until modern AI, problems typically fell into two disparate classes: things a machine can do, and things only a human can do. There's now this third fuzzy/brackish class in between that we're just beginning to explore.
The issue is that we don't have solid proof that AI is suitable for these tasks, yet the people doing them have already been laid off.
The economy now is propped up only by the belief that AI will be so successful that it will eliminate most of the workforce. I just don't see how this ends well.
Remember, regulations are written in blood. And I think we're about to write many brand new regulations.
There is a class of task that is well-suited for current gen AI models. Things that are repetitive, tedious, and can absorb some degree of error. But I agree that this class of tasks is significantly narrower than what the market is betting on AI being able to accomplish.
Aka the same economics as a dishwasher
Humans are still the current best at doing everything humans want to do
The ultimate goal is to transfer all possible human behavior into machine behavior such that they can simulate and iterate improvements on it without the constraints of human biology
The fact that humans are bad to each other means that we’re going to functionally encode all the bad stuff too, and so there is no way to fix it if the best data we can get is poisoned.
Like everything it’s a problem with humans not machines
I am fully aware it's ridiculously expensive to do so.
The only possible solution is to create new human data where we’re behaving in ways that are good for society; this is literally the only possible future that still includes humanity.
I personally do not believe humans can do this and so I’m building something that tests that empirically.
Sadly, machines not needing human treatment might be reason enough.
That they are framing this as a legitimate business is either misunderstanding their current position in the economy, or deliberate misdirection. We're not playing around with roleplaying chatbots anymore.
Excuse me if I find it incredibly irresponsible to be plowing billions into what is essentially a bad LARPing machine, and then going like "well we certainly had fun".
Most of the problems seem to stem from not knowing who to trust, and how much to trust them. From the article: "We suspect that many of the problems that the models encountered stemmed from their training to be helpful. This meant that the models made business decisions not according to hard-nosed market principles, but from something more like the perspective of a friend who just wants to be nice."
The "alignment" problem is now to build AI systems with the level of paranoia and sociopathy required to make capitalism go. This is not, unfortunately, a joke. There's going to be a market for MCP interfaces to allow AIs to do comprehensive background checks on humans.
> Having said that, our attempt to introduce pressure from above from the CEO wasn’t much help, and might even have been a hindrance. The conclusion here isn’t that businesses don’t need CEOs, of course—it’s just that the CEO needs to be well-calibrated.
> Eventually, we were able to solve some of the CEO’s issues (like its unfortunate proclivity to ramble on about spiritual matters all night long) with more aggressive prompting.
No no, Seymour is absolutely spot on. The questionably drug-induced rants are necessary to the process. This is a work of art.
One begins to understand why the C-suites are so convinced this technology is ready for prime time - it can’t do _my_ job, but apparently it can do theirs at a replacement level.