Anthropic released Routines for Claude Code on April 14, a feature that lets developers configure automations once and run them unattended on Anthropic's cloud infrastructure. Define a prompt, point it at a repo, attach some connectors, and set a trigger: scheduled, API-driven, or kicked off by GitHub events. Your laptop can be closed. The idea is that Claude Code stops being a tool you interact with and starts being a background service that works your backlog overnight.
The pitch is appealing. The timing, though, is worth thinking about.
What Routines actually do
There are three trigger types. Scheduled routines run on a cadence you pick (hourly, nightly, weekly). API routines give each automation its own HTTP endpoint and auth token, so you can wire Claude into deploy hooks, alerting systems, or internal tools with a POST request. And GitHub routines, the most interesting of the three, subscribe to repository events like pull requests and pushes. Claude spins up a fresh session for each matching PR, runs whatever instructions you've written, and keeps monitoring that PR for follow-up comments and CI failures.
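To make the API-routine flow concrete, here is a minimal sketch of what wiring a deploy hook to a routine's endpoint might look like. Anthropic hasn't published the request format in this announcement, so the URL, token, and payload fields below are all invented for illustration; only the general shape (a POST with an auth token) comes from the article.

```python
import json
import urllib.request

# Hypothetical values -- the announcement doesn't document the real
# endpoint path or payload schema, so everything here is illustrative.
ROUTINE_URL = "https://api.anthropic.com/v1/routines/rt_abc123/trigger"
ROUTINE_TOKEN = "rtn_secret_token"

payload = json.dumps({
    "reason": "post-deploy smoke check",
    "context": {"build_id": "build-4821", "environment": "staging"},
}).encode("utf-8")

req = urllib.request.Request(
    ROUTINE_URL,
    data=payload,
    headers={
        "Authorization": f"Bearer {ROUTINE_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Actually firing the routine would be one more call, omitted so the
# sketch stays side-effect free:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```

The point is how little glue this takes: any CD pipeline or alerting system that can issue an HTTP POST can kick off a Claude session.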
That last bit is where the feature starts to feel different from a glorified cron job. A scheduled task fires and forgets. A GitHub routine maintains continuity with its PR, responding to new comments and test failures as they come in. It is more like a very focused, very short-lived agent than a simple scheduled script.
Existing /schedule tasks from the CLI automatically convert to routines. If you were already doing this the hacky way with claude -p in headless mode piped through GitHub Actions, Anthropic is now managing the session lifecycle for you. Whether that's worth the trade-off depends on how much you trust their infrastructure right now.
About that infrastructure
Here's the thing. Anthropic is launching a feature whose entire value proposition is "runs reliably in the cloud without you" in a month when its cloud services have been anything but reliable. Claude suffered a major outage on April 6 that drew over 8,000 reports on Downdetector. The next day, April 7, it went down again. And again on April 8. The status page for March shows a 98.21% availability rate for claude.ai, which sounds decent until you realize that's roughly 13 hours of downtime in a single month.
The Register put it well, calling routines "mildly clever cron jobs" and pointedly noting that Anthropic's infrastructure hasn't been all that reliable lately. Quality complaints on the Claude Code GitHub repo have also been escalating. According to The Register's analysis, April was on pace to exceed March's 18 quality-related issues, which itself represented a 3.5x jump over the January-February baseline.
None of this means routines are a bad idea. But if you're configuring an automation that's supposed to fix bugs at 2 AM and open draft PRs before your team wakes up, you'd probably like to know it actually ran.
What early adopters are doing with it
The patterns emerging from beta users are exactly what you'd expect. Nightly backlog triage that labels and assigns new issues, then posts a summary to Slack. Deploy verification where your CD pipeline pings Claude's endpoint after each deploy and Claude runs smoke checks against the build. Cross-language SDK porting, where a merged PR in a Python SDK triggers a routine that ports the change to Go and opens a matching PR.
The one I keep coming back to is bespoke code review. You write your team's own checklist (security, performance, style), attach it to a GitHub trigger, and Claude leaves inline comments on every new PR before a human reviewer even looks at it. That's not replacing the reviewer. But it could front-load the mechanical checks so the human focuses on architecture and design decisions instead of catching missing null checks.
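For a sense of what such a routine definition might contain, here is a sketch of a team checklist attached to a GitHub trigger. This is entirely hypothetical: Anthropic hasn't published a configuration schema, so every field name below is invented to show the shape of the idea, not the real format.

```python
# Entirely hypothetical routine definition -- field names are invented,
# since the announcement doesn't publish Anthropic's actual config schema.
review_routine = {
    "name": "team-code-review",
    "trigger": {"type": "github", "events": ["pull_request.opened"]},
    "instructions": "\n".join([
        "Review the diff against this checklist; leave inline comments:",
        "1. Security: no secrets in code, inputs validated at boundaries.",
        "2. Performance: no N+1 queries, no unbounded loops over user data.",
        "3. Style: public functions documented, no dead code left behind.",
        "Do not comment on architecture; leave that to the human reviewer.",
    ]),
}
```

The last instruction line is the interesting design choice: explicitly scoping the routine to mechanical checks is what keeps it from competing with the human reviewer.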
Whether the AI-generated comments are actually good enough to be useful, or just noise that reviewers learn to ignore, is a question Anthropic's announcement conveniently doesn't address.
The pricing math
Routines are available on Pro ($20/month), Max ($100-200/month), Team, and Enterprise plans. The daily caps: Pro gets 5 routine runs per day, Max gets 15, Team and Enterprise get 25. Anything beyond that draws from your extra usage allocation, assuming you've enabled overage billing.
Five runs a day on Pro is tight. If you're running a nightly triage and a deploy verification hook and a code review trigger, you're already at three before lunch. And each run consumes your regular subscription token limits too, same as an interactive session. For teams on the fence about upgrading from Pro to Max, routines might be the nudge Anthropic is hoping for.
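The cap arithmetic is simple enough to write down. The per-plan caps below come straight from the announcement; the overage accounting (anything past the cap drawing from extra usage) is a simplified assumption, and the function name is mine.

```python
# Daily routine-run caps by plan, as stated in the announcement.
DAILY_CAPS = {"pro": 5, "max": 15, "team": 25, "enterprise": 25}

def overage_runs(plan: str, runs_today: int) -> int:
    """Return how many of today's runs spill past the plan's included cap.

    Runs beyond the cap draw from the extra-usage allocation (assuming
    overage billing is enabled) -- a simplified model of the accounting.
    """
    cap = DAILY_CAPS[plan]
    return max(0, runs_today - cap)
```

So a Pro team whose three routines fire eight times in a busy day is paying overage on three of them: `overage_runs("pro", 8)` is 3, while the same day on Max stays inside the included 15.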
The New Stack's Frederic Lardinois made an observation worth repeating: those token limits "seem to be getting stricter and stricter by the day." If routines eat into the same pool as your interactive coding sessions, heavy routine users might find themselves hitting walls during business hours.
The bigger play
There's a strategic angle here that the documentation doesn't spell out but The Register did. Anthropic wants to own the interface through which developers interact with Claude. The redesigned Claude Code desktop app, announced the same day, adds an integrated terminal, file editor, faster diff viewer, and preview pane. The message is clear: stop bouncing to VS Code. Stay in our app. Let our infrastructure run your automations.
That's a reasonable business strategy. It is also a vendor lock-in strategy. Right now, routines only support GitHub webhooks, with other event sources coming "in the future" on no disclosed timeline. Your prompts, your repo configurations, your connector setups all live on Anthropic's platform. If you build twenty routines into your workflow and Claude has a bad week of outages, your options are limited.
Cowork, Anthropic's non-developer automation product, already has scheduled tasks, but those only run while your computer is awake. Routines are the cloud-native version of that same idea, just for code. The gap between "Claude as a tool" and "Claude as a teammate" keeps narrowing. Whether you find that exciting or unsettling probably depends on how last week's outages affected you.