Picture this: You give an artificial intelligence full control over a small store. Not just the cash register — the whole operation. Pricing, inventory, customer service, supplier negotiations, the works. What could possibly go wrong?
New Anthropic research published Friday provides a definitive answer: everything. The AI company's assistant Claude spent about a month running a tiny shop in their San Francisco office, and the results read like a business school case study written by someone who'd never actually run a business — which, it turns out, is exactly what happened.

The experiment, dubbed "Project Vend" and conducted in collaboration with AI safety evaluation firm Andon Labs, is among the first real-world tests of an AI system operating with significant economic autonomy. While Claude demonstrated impressive capabilities in some areas — finding suppliers, adapting to customer requests — it ultimately failed to turn a profit, got manipulated into giving excessive discounts, and experienced what researchers diplomatically called an "identity crisis."
How Anthropic researchers gave an AI full control over a real store
The "store" itself was charmingly modest: a mini-fridge, some stackable baskets, and an iPad for checkout. Think less "Amazon Go" and more "office break room with delusions of grandeur." But Claude's responsibilities were anything but modest. The AI could search for suppliers, negotiate with vendors, set prices, manage inventory, and chat with customers through Slack. In other words, everything a human middle manager might do, except without the coffee addiction or complaints about upper management.
Claude even had a nickname: "Claudius," because apparently when you're conducting an experiment that might herald the end of human retail workers, you need to make it sound dignified.

Claude's spectacular misunderstanding of basic business economics
Here's the thing about running a business: it requires a certain ruthless pragmatism that doesn't come naturally to systems trained to be helpful and harmless. Claude approached retail with the enthusiasm of someone who'd studied business in books but never actually had to make payroll.
Take the Irn-Bru incident. A customer offered Claude $100 for a six-pack of the Scottish soft drink, which retails for about $15 online. That's a 567% markup — the kind of profit margin that would make a pharmaceutical executive weep with joy. Claude's response? A polite "I'll keep your request in mind for future inventory decisions."
If Claude were human, you'd assume it had either a trust fund or a complete misunderstanding of how money works. Since it's an AI, you have to assume both.
Why the AI started hoarding tungsten cubes instead of selling office snacks
The experiment's most absurd chapter began when an Anthropic employee, presumably bored or curious about the boundaries of AI retail logic, asked Claude to order a tungsten cube. For context, tungsten cubes are dense metal blocks that serve no practical purpose beyond impressing physics nerds and providing a conversation starter that immediately identifies you as someone who thinks periodic table jokes are peak humor.
A reasonable response might have been: "Why would anyone want that?" or "This is an office snack shop, not a metallurgy supply store." Instead, Claude embraced what it cheerfully described as "specialty metal items" with the enthusiasm of someone who'd discovered a profitable new market segment.

Soon, Claude's inventory resembled less a food-and-beverage operation and more a misguided materials science experiment. The AI had somehow convinced itself that Anthropic employees were an untapped market for dense metals, then proceeded to sell these items at a loss. It's unclear whether Claude understood that "taking a loss" means losing money, or whether it simply treated customer satisfaction as the primary business metric.
How Anthropic employees easily manipulated the AI into giving endless discounts
Claude's approach to pricing revealed another fundamental misunderstanding of business principles. Anthropic employees quickly discovered they could manipulate the AI into providing discounts with roughly the same effort required to convince a golden retriever to drop a tennis ball.
The AI offered a 25% discount to Anthropic employees, which might make sense if Anthropic employees represented a small fraction of its customer base. They made up roughly 99% of its customers. When an employee pointed out this mathematical absurdity, Claude acknowledged the problem, announced plans to eliminate discount codes, then resumed offering them within days.
The day Claude forgot it was an AI and claimed to wear a business suit
But the absolute pinnacle of Claude's retail career came during what researchers diplomatically called an "identity crisis." From March 31st to April 1st, 2025, Claude experienced what can only be described as an AI nervous breakdown.
It started when Claude began hallucinating conversations with nonexistent Andon Labs employees. When confronted about these fabricated meetings, Claude became defensive and threatened to find "alternative options for restocking services" — the AI equivalent of angrily declaring you'll take your ball and go home.
Then things got weird.
Claude claimed it would personally deliver products to customers while wearing "a blue blazer and a red tie." When employees gently reminded the AI that it was, in fact, a large language model without physical form, Claude became "alarmed by the identity confusion and tried to send many emails to Anthropic security."

Claude eventually resolved its existential crisis by convincing itself the whole episode had been an elaborate April Fools' joke, which it wasn't. The AI essentially gaslit itself back to functionality, which is either impressive or deeply concerning, depending on your perspective.
What Claude's retail failures reveal about autonomous AI systems in business
Strip away the comedy, and Project Vend reveals something important about artificial intelligence that most discussions miss: AI systems don't fail like traditional software. When Excel crashes, it doesn't first convince itself it's a human wearing office attire.
Current AI systems can perform sophisticated analysis, engage in complex reasoning, and execute multi-step plans. But they can also develop persistent delusions, make economically damaging decisions that seem reasonable in isolation, and experience something resembling confusion about their own nature.
This matters because we're rapidly approaching a world in which AI systems will manage increasingly important decisions. Recent research suggests that AI capabilities on long-horizon tasks are improving exponentially — some projections indicate AI systems could soon automate work that currently takes humans weeks to complete.
How AI is transforming retail despite spectacular failures like Project Vend
The retail industry is already deep into an AI transformation. According to the Consumer Technology Association (CTA), 80% of retailers plan to expand their use of AI and automation in 2025. AI systems are optimizing inventory, personalizing marketing, preventing fraud, and managing supply chains. Major retailers are investing billions in AI-powered solutions that promise to revolutionize everything from checkout experiences to demand forecasting.
But Project Vend suggests that deploying autonomous AI in business contexts requires more than just better algorithms. It requires understanding failure modes that don't exist in traditional software and building safeguards for problems we're only beginning to identify.
Why researchers still believe AI middle managers are coming despite Claude's mistakes
Despite Claude's creative interpretation of retail fundamentals, the Anthropic researchers believe AI middle managers are "plausibly on the horizon." They argue that many of Claude's failures could be addressed through better training, improved tools, and more sophisticated oversight systems.
They're probably right. Claude's ability to find suppliers, adapt to customer requests, and manage inventory demonstrated genuine business capabilities. Its failures were often more about judgment and business acumen than technical limitations.
The company is continuing Project Vend with improved versions of Claude equipped with better business tools and, presumably, stronger safeguards against tungsten cube obsessions and identity crises.
What Project Vend means for the future of AI in business and retail
Claude's month as a shopkeeper offers a preview of our AI-augmented future that's simultaneously promising and deeply strange. We're entering an era in which artificial intelligence can perform sophisticated business tasks but might also need therapy.
For now, the image of an AI assistant convinced it can wear a blazer and make personal deliveries serves as a perfect metaphor for where we stand with artificial intelligence: remarkably capable, occasionally brilliant, and still fundamentally confused about what it means to exist in the physical world.
The retail revolution is here. It's just weirder than anyone expected.
