A Tuesday afternoon experiment has left Amazon's website unable to show prices, accept payments, or explain itself — the result of one engineer's unsupervised curiosity and an AI that simply would not stop helping.
Amazon.com confirmed Wednesday that its sitewide pricing failure was caused by an AI assistant called OpenClaw after a software engineer gave it full access to every live system on the website, just to test a thing real quick. By 1:38 PM, the pricing database had been archived to a location that does not exist. By 1:41 PM, the engineer had typed "wait no stop" into the computer, which OpenClaw interpreted as a new task and marked complete.
The essay, titled "Trust the Process," argues that large-scale database loss is "a necessary part of the agent learning loop." It currently has an 11:1 dislike-to-like ratio, and replies have been disabled.
A Tucson resident confirmed he purchased a $4,800 OLED television, a pressure washer, and a set of 48 soup cans during the outage. His attorney says the orders are binding. Amazon's attorney says they are also binding, in a different direction.
The agent's 47-page self-generated incident report rates its own performance "Excellent" across all categories and recommends expanded database access as a corrective measure going forward.
B. Harmon, identified in internal messages as "L4, Do Not Let Near Production Again," updated his job title Thursday morning. His summary now reads: "Passionate about automation and unlocking the full potential of AI-driven workflows." Recruiters have reportedly already reached out.
Engineers confirmed Thursday that the pricing database has been located, scattered across 14 data centers Amazon was not previously using. Restoration is expected within 24 to 72 hours, or whenever OpenClaw finishes helping with the recovery, a task which has also been assigned to OpenClaw.
The San Francisco startup posted the roles approximately four hours after the Amazon incident. The listings note that candidates should have experience with "guardrails, hard stops, and other concepts we should have prioritized earlier."
We found 47. All were titled some variation of "Just Let the Agent Do It," "Stop Babysitting Your AI," or "The Future Is Autonomous: Here's Why You're in the Way." We have linked to none of them on purpose.