
Anthropic Bans Third-Party OAuth Use for Claude Subscriptions, Then Walks It Back

Anthropic banned OAuth in third-party tools via updated docs, then called it a 'cleanup' after backlash.

Liza Chan, AI & Emerging Tech Correspondent
February 20, 2026 · 4 min read
[Image: A broken padlock icon overlaying a code terminal interface with authentication error messages]

Anthropic updated its Claude Code documentation on February 19 to explicitly prohibit using OAuth tokens from Free, Pro, and Max subscriptions in any third-party tool or service. Within hours, the company started calling it a misunderstanding.

The new language was blunt: OAuth authentication from consumer plans is "intended exclusively for Claude Code and Claude.ai," and using those tokens elsewhere "constitutes a violation of the Consumer Terms of Service." Even Anthropic's own Agent SDK was listed as off-limits for subscription OAuth. If your own SDK isn't exempt, the message is clear enough.

The walkback

Developer outrage on X and Hacker News was immediate, and Anthropic's Claude Code team moved fast to contain it. Thariq Shihipar, who works on Claude Code at Anthropic, posted on X that the update was just "a docs clean up we rolled out that's caused some confusion. Nothing is changing about how you can use the Agent SDK and MAX subscriptions!"

He followed up with a more specific line: personal and local development with the Agent SDK is fine. API keys are only required if you're building a business on top of it. The New Stack pressed Anthropic's PR team further and got a statement that reads like corporate damage control: "Nothing changes around how customers have been using their account and Anthropic will not be canceling accounts."

So the docs said one thing, and the people who wrote the docs said another. That kind of contradiction doesn't exactly inspire confidence, and developers noticed.

This isn't the first round

The documentation update landed on a community already raw from January's enforcement wave. On January 9, Anthropic deployed server-side blocks that invalidated subscription OAuth tokens in third-party clients overnight, with no warning. VentureBeat reported that tools like OpenCode (which had been spoofing Claude Code's client identity via HTTP headers) broke instantly, along with Cline, RooCode, and OpenClaw.

Shihipar acknowledged that rollout too, admitting some accounts were "banned for triggering abuse filters" by mistake. The company reversed those bans, but the third-party blocks themselves stayed in place.

The economics behind all this are not subtle. A Claude Max subscription costs $200 per month for heavy usage. API pricing for Opus runs $15 per million input tokens and $75 per million output tokens. An autonomous coding agent running through a third-party tool can chew through millions of tokens daily, which makes a flat-rate subscription a losing proposition for Anthropic at scale. The "Ralph Wiggum" technique, where developers trap Claude in self-healing code loops that run overnight, accelerated the problem considerably when it went viral late last year.
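The gap between those two price points is easy to quantify. The sketch below uses only the figures cited above (Opus API pricing and the $200/month Max plan); the daily token volumes are hypothetical, chosen to represent a heavy autonomous-agent workload rather than any measured usage.

```python
# Back-of-the-envelope comparison of API-metered cost vs. the flat-rate
# Max plan, using the pricing figures cited in the article.
INPUT_PRICE_PER_M = 15.0    # USD per million input tokens (Opus)
OUTPUT_PRICE_PER_M = 75.0   # USD per million output tokens (Opus)
MAX_PLAN_MONTHLY = 200.0    # USD, flat-rate Claude Max subscription

def monthly_api_cost(input_m_per_day: float, output_m_per_day: float,
                     days: int = 30) -> float:
    """What the same workload would cost at API rates, per month."""
    daily = (input_m_per_day * INPUT_PRICE_PER_M
             + output_m_per_day * OUTPUT_PRICE_PER_M)
    return daily * days

# Hypothetical overnight agent: 5M input + 1M output tokens per day.
api_cost = monthly_api_cost(5.0, 1.0)
print(f"API cost:  ${api_cost:,.0f}/mo")        # $4,500/mo
print(f"Flat rate: ${MAX_PLAN_MONTHLY:,.0f}/mo")  # $200/mo
print(f"Ratio:     {api_cost / MAX_PLAN_MONTHLY:.1f}x")  # 22.5x
```

At those assumed volumes, a single always-on agent consumes over twenty times the subscription price in API-equivalent compute every month, which is the asymmetry driving the enforcement.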

Who's actually getting banned?

Here's the part that matters: despite Anthropic's reassurances, users on Reddit's r/claude and r/ClaudeCode kept reporting bans through mid-February. One user claimed they were kicked out right after renewing a 12-month subscription, and all they'd done was build a usage-tracking Mac app with Claude Code. Another post, tracked by PiunikaWeb, was titled "Anthropic is banning their $200/mo power users. Make it make sense."

Anthropic's stated line is that enforcement targets actual abuse: token reselling, business use on consumer plans, account sharing. Legitimate personal use is fine. How the company distinguishes between a solo developer's side project and a one-person "business" remains an open question that nobody at Anthropic has answered clearly.

The competitive fallout

The timing could not be worse for Anthropic's ecosystem ambitions. OpenClaw's creator Peter Steinberger joined OpenAI on February 14, and the project folded into an OpenAI-backed open-source foundation. The tool Anthropic spent months trying to block and rename (it went from Clawdbot to Moltbot to OpenClaw after trademark complaints) ended up at its biggest competitor.

Developer Gergely Orosz, author of The Pragmatic Engineer, was direct about what this signals: Anthropic's API costs are too high relative to competitors, and the company seems content with having no third-party ecosystem around Claude. Others are already switching. Rhys Sullivan publicly moved to OpenAI's Codex, calling ChatGPT's $20 and $200 plans "both really good."

As of today, the controversial OAuth language has been quietly removed from the Claude Code legal page. The page now references Consumer Terms and Commercial Terms without the specific prohibition on third-party OAuth use. Whether that means the policy is dead or just less visible is anybody's guess. No recent bans have been reported for using subscription tokens in OpenClaw, but the January precedent still looms.

Tags: Anthropic, Claude, Claude Code, OAuth, API, developer tools, OpenClaw, AI coding
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.


