
Claude Code Desktop Now Previews Apps, Reviews Diffs, and Monitors PRs in Background

Anthropic ships four features closing the write-to-merge loop without leaving the desktop app.

Oliver Senti, Senior AI Editor
February 21, 2026 · 5 min read
[Image: Split-screen view of a code editor and a running application preview in the Claude Code desktop app]

Anthropic released a batch of updates to Claude Code on February 20 that push its desktop app closer to something resembling a self-contained development environment. The additions: live server previews, automated code review with inline comments, background PR monitoring with auto-fix and auto-merge, and session portability across CLI, desktop, web, and mobile. All four are available now to every user.

The timing is hard to ignore. Two days before the release, Boris Cherny, who created Claude Code at Anthropic, told Y Combinator's Lightcone podcast that the software engineering title would "start to go away" in 2026. His reasoning was that coding is "practically solved" for him, and that everyone on Anthropic's team, from designers to finance, now writes code. Bold claims. This update is the product team's attempt to back them up.

The preview loop

The headline feature is server previews. Claude Code can now spin up dev servers and render the running application inside the desktop interface itself. It reads console logs, catches errors, and iterates on its own. You can also click on visual elements in the preview and pass feedback directly to Claude, which is the kind of tight feedback loop that previously required you to bounce between a browser and a terminal, narrating what you were seeing like a radio play.

Whether this actually works well at scale is another question. Dev server previews are notoriously finicky, especially for apps with complex build pipelines or heavy client-side state. Anthropic's blog post doesn't mention which frameworks or server configurations are supported, or what happens when your app needs something like hot module replacement that doesn't play nice with embedded browsers. I'd want to see this handle a real Next.js app with API routes before getting too excited.

Code review before the push

The "Review code" button is more straightforward. Click it, and Claude examines your local diffs, leaving inline comments in the desktop diff view. Bugs, suggestions, potential issues. You can then ask Claude to fix whatever it flagged.

This is essentially a pre-push linter with opinions, which is genuinely useful. The question is how good the review quality is compared to, say, Cursor's Bugbot or GitHub Copilot's own code review features. Anthropic isn't publishing any metrics on false positive rates or the kinds of issues it catches versus misses. So for now, "a second set of eyes" is the pitch, and you'll have to calibrate your trust through experience.

Background PR monitoring (the interesting part)

This is where the update gets more ambitious. For GitHub-hosted repos, Claude Code now tracks PR status in the background using the GitHub CLI. You open a PR, move on to your next task, and Claude watches CI. Two optional modes: auto-fix, where Claude attempts to resolve CI failures on its own, and auto-merge, where the PR lands automatically once all checks pass.

The workflow Anthropic is selling here: open PR, switch context, come back later to find it either merged or with CI failures already addressed. That's a compelling pitch for anyone who has ever lost 20 minutes babysitting a flaky test suite.
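Anthropic hasn't documented which commands Claude runs under the hood, but since the feature is built on the GitHub CLI, the primitives it composes are already available from any terminal. A rough approximation of the watch-then-merge loop (commands are real `gh` subcommands; the flags shown are one reasonable configuration, not Anthropic's):

```shell
# Watch CI checks on the current branch's open PR until they finish.
# `gh pr checks --watch` polls and exits non-zero if any check fails,
# which is the signal an auto-fix step would react to.
gh pr checks --watch

# Opt the PR into GitHub's native auto-merge: it lands automatically
# once all required checks pass and branch protection rules are met.
gh pr merge --auto --squash
```

The difference with Claude Code's version is what happens on a non-zero exit from the first command: instead of stopping, the agent reads the failing check's logs and attempts a fix before re-watching.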

But auto-merge on passing CI? That's a trust exercise. CI passing doesn't mean the code is correct; it means the tests you wrote (or that Claude wrote) passed. If your test coverage is thin, auto-merge is just auto-shipping bugs faster. Anthropic doesn't address this in the announcement, and the lack of any mention of configurable merge conditions, branch protection awareness, or required reviewer counts is a gap worth noting.

Where this sits competitively

The AI coding tool market has gotten crowded fast. GitHub Copilot already lets you assign issues to coding agents that write code, create pull requests, and respond to feedback in the background. Cursor has its own agent mode with multi-file editing and deep codebase awareness. Amazon Q Developer recently topped SWE-Bench scores.

Claude Code's angle has always been the terminal-first experience, which appeals to a certain kind of developer but limits mainstream adoption. The desktop app is Anthropic's answer to that, and this update pushes it further toward being a place where you actually stay rather than something you pop into for a quick task. Session mobility (run /desktop in CLI to migrate a session, or push it to the cloud) reinforces this. Start work in the terminal, finish it on your phone. In theory.

The competitive question isn't really about features anymore. GitHub Copilot has 20 million users and native GitHub integration that no third-party tool can match. Cursor has rabid fans who swear by its codebase-wide context. Claude Code's bet is that tight integration between the AI doing the coding and the AI reviewing and monitoring the results creates a tighter loop than bolting separate tools together. Whether that loop is tight enough to pull developers away from tools they already use is something this announcement alone can't answer.

The Cherny problem

Back to that podcast quote. Cherny said coding is "practically solved" for him and predicted the software engineering title would become vestigial. He said PMs, designers, and finance people at Anthropic all write code now.

This is a specific, testable claim from someone with obvious incentive to make it. And the update, while solid, doesn't quite support the conclusion. Auto-reviewing diffs and monitoring CI are useful developer productivity features. They're not evidence that coding has been "solved." They're evidence that Anthropic is building a good IDE. Those are different things.

The features ship today. Whether the vision ships at all is a longer conversation.

Tags: Claude Code, Anthropic, AI coding tools, developer tools, code review, CI/CD automation, GitHub, IDE, pull requests
Oliver Senti
Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.

