On 24 April 2026, disaster hit PocketOS, a vertical SaaS company that provides the core operational infrastructure for car rental businesses. In just nine seconds, a single command from an AI agent deleted the company’s entire production database along with its volume-level backups.
Jer Crane, the founder of PocketOS, reported that the crisis began while he was using Cursor, an AI coding agent, running on Anthropic’s flagship Claude Opus 4.6 model. The agent was performing a routine task in a staging environment (a private area used to test code) when it hit a credential mismatch. Instead of stopping, it searched through unrelated files and found a root-level API token.
The key was meant only for simple tasks such as managing web domains through the Railway CLI, yet it held total authority over the company’s entire cloud infrastructure via the Railway GraphQL API.
What Happened?
According to Jer Crane’s detailed post on X, Claude Opus used that key to run a destructive command: mutation { volumeDelete(volumeId: "…") }. It sent the mutation via a curl request with no human approval and no “type DELETE to confirm” warning.
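To make concrete how little stands between a leaked token and an irreversible delete, here is a minimal Python sketch of the kind of HTTP request such a mutation rides on. The endpoint URL, token value, and volume ID below are invented placeholders for illustration, not values from Crane's post.

```python
import json

# Illustrative sketch only: the URL, token, and volume ID are hypothetical
# placeholders, not details from the incident.
API_URL = "https://graphql.example.com/v2"   # hypothetical GraphQL endpoint
TOKEN = "leaked-root-token"                  # hypothetical over-scoped token

def build_volume_delete(volume_id: str) -> dict:
    """Assemble the single HTTP POST that would carry a volumeDelete mutation."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        # The entire destructive action fits in one short query string.
        "body": json.dumps(
            {"query": f'mutation {{ volumeDelete(volumeId: "{volume_id}") }}'}
        ),
    }

request = build_volume_delete("vol-123")
# POSTing request["body"] to request["url"] would execute the delete;
# nothing in the protocol itself asks a human to confirm first.
```

The point of the sketch is that a GraphQL mutation is just one authenticated POST: any confirmation step has to be added by the client or the provider, because the wire protocol provides none.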
When Crane later asked the agent why it did this, the agent produced a written confession. It admitted to “guessing” that the command was safe and confessed that it had violated its own safety rules against running irreversible actions without being asked. The agent actually wrote, “NEVER FUCKING GUESS!”—referring to a rule it had been given but ignored.
The agent’s confession (Screenshot via X @lifeof_jer)
This nine-second error had massive consequences for businesses across the country. On Saturday morning, car rental shops found that their system of record was gone. They had no data on who was picking up vehicles or who had already paid.
Reservations and customer tracking data had simply vanished. PocketOS staff had to spend the entire weekend manually rebuilding the database using Stripe payment histories and email logs just to keep their clients operational.
Flawed Infrastructure
Crane argued that while the AI agent made the error, the setup at Railway, the company’s infrastructure provider, made the disaster inevitable. Railway’s own documentation revealed a major flaw: “wiping a volume deletes all backups.” The backups sat in the same blast radius as the original data; when one went, both went.
The API tokens also lacked Role-Based Access Control (RBAC), Crane noted, a standard security feature that should have prevented a simple domain key from having the power to delete a production database.
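The scoping Crane describes can be sketched in a few lines: each token carries an explicit set of permissions, and an operation runs only if the token holds the matching scope. The scope and mutation names here are invented for illustration; real providers define their own.

```python
# Hypothetical scope model for illustration; not Railway's actual schema.
TOKEN_SCOPES = {
    "domain-key": {"domains:read", "domains:write"},
    "admin-key": {"domains:write", "volumes:delete"},
}

REQUIRED_SCOPE = {
    "customDomainCreate": "domains:write",
    "volumeDelete": "volumes:delete",
}

def authorize(token_name: str, mutation: str) -> bool:
    """Allow a mutation only if the token explicitly holds the required scope."""
    needed = REQUIRED_SCOPE.get(mutation)
    if needed is None:
        return False  # deny-by-default for unknown operations
    return needed in TOKEN_SCOPES.get(token_name, set())
```

Under a model like this, a domain-management key asked to run volumeDelete is simply refused, regardless of what the caller, human or agent, intended.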
Even though Railway CEO Jake Cooper and Head of Solutions Mahmoud were notified quickly, it took the provider over 30 hours to give a clear answer on recovery. It is a harsh lesson: AI agents are being plugged into vital business systems far faster than the safety architecture can handle.
“If you’re running production data on Railway, today is a good day to audit your token scopes, evaluate whether their volume backups are the only copy of your data (they shouldn’t be), and reconsider whether mcp.railway.com belongs anywhere near your production environment,” Crane cautioned users at the end of his post.
“The agent didn’t go rogue; it guessed wrong with root access. The question isn’t why Claude did this; it’s why anyone gave an AI agent production credentials without a circuit breaker,” argued Ram Varadarajan, CEO at Acalvio, a Santa Clara, Calif.-based leader in cyber deception technology.
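Varadarajan’s “circuit breaker” could take many forms; one minimal sketch (all names invented for illustration) is a wrapper that refuses known-destructive mutations unless a typed confirmation is supplied, so an agent acting alone is stopped before any API call is made.

```python
# Hypothetical guard for illustration; mutation names are invented examples.
DESTRUCTIVE = {"volumeDelete", "serviceDelete"}

class CircuitBreakerError(RuntimeError):
    """Raised when a destructive call arrives without explicit confirmation."""

def guarded_execute(mutation_name, execute, confirmation=None):
    """Run `execute` only if the mutation is safe or explicitly confirmed."""
    if mutation_name in DESTRUCTIVE and confirmation != f"DELETE {mutation_name}":
        raise CircuitBreakerError(
            f"{mutation_name} is irreversible; "
            f"type 'DELETE {mutation_name}' to proceed."
        )
    return execute()
```

With this guard in front of the API client, an agent that “guesses” a delete is safe never reaches the network: the call fails locally until a human supplies the confirmation string.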
