Anthropic Temporarily Banned OpenClaw’s Creator, Then Reversed Course Within Hours
Anthropic suspended Peter Steinberger's Claude access over "suspicious activity," a ban that was reversed within hours after the story went viral on X.
On the morning of April 10, 2026, Peter Steinberger, the Austrian developer who built OpenClaw, GitHub’s fastest-growing AI agent framework, posted a screenshot on X showing that Anthropic had suspended his Claude account.
The ban did not last long. Within hours, and after the post went viral, Steinberger confirmed his access had been restored. An Anthropic engineer responded directly in the thread, stating that the company had never banned anyone for using OpenClaw and offering to help.
The incident exposed a deeper reality: a six-day-old policy shift by Anthropic had already fractured its relationship with the open-source ecosystem it once relied on.
The Policy Change That Set the Stage
Six days before the ban, on April 4, 2026, Anthropic implemented a significant billing policy change.
As The Verge confirmed, Claude Pro ($20 per month) and Max ($100-$200 per month) subscribers were informed via email that they could no longer use their flat-rate subscription limits to power third-party AI agent tools, with OpenClaw named specifically.
Any user wanting to continue running OpenClaw with Claude would need to pay separately through a pay-as-you-go “extra usage” system or supply an independent Claude API key billed at full token rates.
The Compute Imbalance That Forced the Break
The technical justification came from Boris Cherny, head of Claude Code, who stated that Anthropic subscriptions were not designed for the extreme usage patterns of third-party tools like OpenClaw, emphasizing that compute capacity must be carefully managed.
The numbers highlight the imbalance, echoing broader pressure at Anthropic alongside the recent Claude Code leak. Reports from German outlet c't 3003 showed that a single day of OpenClaw running on the Claude Opus model consumed $109.55 in API tokens, versus Anthropic's $6 daily benchmark for a typical Claude Code user.
At scale, the gap widened further. A continuously running OpenClaw agent could cost $1,000 to $5,000 per day in compute, while users pay just $20 per month.
This meant Anthropic was effectively subsidizing massive usage whenever subscribers routed activity through third-party frameworks.
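The scale of that subsidy follows directly from the figures above. A minimal back-of-envelope sketch, using only the numbers reported in the article (the 30-day month and variable names are illustrative assumptions, not Anthropic pricing):

```python
# Back-of-envelope cost gap, using figures reported in the article.
# A 30-day month is an assumption for illustration.
DAYS_PER_MONTH = 30

opus_daily_cost = 109.55    # reported one-day OpenClaw spend on Claude Opus (c't 3003)
typical_daily_cost = 6.00   # Anthropic's benchmark for a typical Claude Code user
subscription_price = 20.00  # Claude Pro flat monthly rate

monthly_agent_cost = opus_daily_cost * DAYS_PER_MONTH
subsidy_ratio = monthly_agent_cost / subscription_price

print(f"Monthly token cost at Opus rates: ${monthly_agent_cost:,.2f}")
print(f"Ratio vs. a $20 subscription: {subsidy_ratio:.0f}x")
```

At roughly $3,286 per month against a $20 subscription, a single heavy OpenClaw user would consume on the order of 160 times what they paid, before even reaching the $1,000-to-$5,000-per-day continuous-agent scenario.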
OpenClaw’s Scale and Claude Dependency
OpenClaw is not a peripheral project. Originally released in November 2025 under the name Clawdbot, the framework had reportedly accumulated more than 135,000 running instances by the time the subscription ban was announced.
It built its adoption almost entirely on top of Claude, which remained the preferred model among OpenClaw users, even over ChatGPT. Steinberger himself confirmed this dynamic when questioned on X.
He explained that he actively tests Claude to ensure that updates to OpenClaw do not break compatibility for users relying on that model, highlighting how deeply the tool is tied to Claude’s behavior and update cycle.
Steinberger’s Response and the Timing Question
Steinberger did not accept the compute-capacity framing without challenge.
He posted after the April 4 change: “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” The reference was pointed.
Anthropic had recently added Claude Dispatch to its Cowork agent, a feature allowing users to remotely assign tasks to agents through external services like Discord, capabilities that OpenClaw had pioneered.
Dispatch launched approximately two weeks before the OpenClaw subscription ban took effect.
The timeline became more pointed still when Steinberger’s employment history entered the conversation.
On February 14, 2026, he announced he was leaving OpenClaw, transferring the project to an open-source foundation and joining OpenAI, where Sam Altman said he would drive the next generation of personal agents.
Weeks later, Anthropic introduced a legal terms update prohibiting OAuth token use in third-party tools, alongside the subscription cutoff. When a commenter said, “you had the choice, but you went to the wrong one,” Steinberger replied: “One welcomed me, one sent legal threats.”
What the Account Suspension Confirmed
Against that background, the April 10 account suspension landed on a developer community already watching closely.
As TechCrunch reported, Steinberger confirmed he was complying with the new policy, accessing Claude through an API key rather than subscription OAuth, as required. He was banned anyway.
An Anthropic engineer later confirmed the suspension was an automated flag, not deliberate enforcement. Access was restored, with no public explanation.
The episode highlighted the fragility of the moment. Anthropic is moving away from open experimentation, where simple flat subscriptions indirectly supported many third-party builders, toward a more controlled system where access is organized in tiers.
This new approach is centered on its own products like Claude Code, Cowork, and an expanding marketplace. For the developers who built on that ecosystem, the transition is arriving quickly and with few soft landings.