Jaana Dogan, a principal engineer at Google who works on the Gemini API, posted something on January 2nd that got five million views and counting. She gave Claude Code a three-paragraph description of a distributed agent orchestration problem her team had been struggling with since last year. It generated a working system in an hour.
"I'm not joking and this isn't funny," she wrote. That's about as close to panic as you'll see from a senior Google engineer in public.
The uncomfortable details
So what exactly happened? Dogan's team had been trying to build distributed agent orchestrators, systems that coordinate multiple AI agents working together. According to her, Google had explored various approaches without reaching consensus. The kind of architectural disagreement that can stall projects for months.
She couldn't use actual internal details, so she built a toy version based on existing ideas. Three paragraphs. Not particularly detailed. Claude Code produced something comparable to what they'd spent a year iterating on.
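To make "agent orchestrator" concrete: at its simplest, it's a coordinator that fans a task out to several model-backed workers and merges their answers. A minimal sketch of that idea (purely illustrative, with stub callables standing in for LLM agents; none of this reflects Dogan's or Google's actual design) might look like:

```python
# Minimal sketch of an agent orchestrator: a coordinator fans a task
# out to worker "agents" (plain callables here, standing in for
# model-backed agents) and collects their results by role.
# All names are illustrative, not Dogan's or Google's design.
from concurrent.futures import ThreadPoolExecutor

class Orchestrator:
    def __init__(self, agents):
        # agents: mapping of role name -> callable(task) -> result
        self.agents = agents

    def run(self, task):
        # Fan the same task out to every agent concurrently,
        # then gather each agent's answer keyed by role.
        with ThreadPoolExecutor() as pool:
            futures = {role: pool.submit(fn, task)
                       for role, fn in self.agents.items()}
            return {role: f.result() for role, f in futures.items()}

# Stub agents standing in for real LLM calls.
orch = Orchestrator({
    "planner": lambda t: f"plan for {t}",
    "coder":   lambda t: f"code for {t}",
})
print(orch.run("build a parser"))
# → {'planner': 'plan for build a parser', 'coder': 'code for build a parser'}
```

The hard parts Dogan's team spent a year on, presumably, are everything this toy omits: distribution across machines, failure handling, and agents negotiating with each other rather than working in parallel silos.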
"It's not perfect and I'm iterating on it," she added in a follow-up. But that's not really the point, is it?
When someone asked whether Google uses Claude Code internally, Dogan said it's only allowed for open-source projects. Not internal work. Make of that what you will.
The part where she praises a competitor
Here's the thing that made this blow up: Dogan works on Gemini. She's publicly praising the competition while employed by Google. Her bio literally says "Principal Engineer at Google. Gemini API."
She addressed this directly. "This industry has never been a zero-sum game, so it's easy to give credit where it's due even when it's a competitor." And when asked when Gemini would reach this level, her response was telling: "We are working hard right now. The models and the harness."
That's corporate speak for "we're behind."
The numbers game
This fits into a broader pattern that's been building for months. Boris Cherny, the engineer who created Claude Code at Anthropic, recently shared that he landed 259 pull requests in December, 497 commits, 40,000 lines added. Every single line written by Claude Code using Opus 4.5. He didn't open an IDE once.
Anthropic CEO Dario Amodei claimed back in March 2025 that AI would be writing 90% of code within six months. When October rolled around, he doubled down at Salesforce's Dreamforce conference, saying it was "absolutely true" within Anthropic and companies they work with. There's some skepticism about whether this is really the average across all teams or just cherry-picked examples. A LessWrong analysis suggested the actual company average is closer to 50% for merged code.
Google, for its part, disclosed in July 2025 that AI writes about 50% of new code at the company, up from 25% in late 2024. Microsoft reported 30% earlier that year.
The financial entanglement
One detail that makes this more interesting: Google is a major investor in Anthropic. About $3 billion committed so far. In October 2025, the companies announced a deal worth tens of billions of dollars giving Anthropic access to up to one million of Google's TPUs. Over a gigawatt of computing capacity coming online in 2026.
So Google is simultaneously competing with Anthropic on AI coding tools and providing the chips Anthropic uses to build those tools. And investing in them. And having their own engineers publicly admit Anthropic's tool is beating them.
The AI industry is weird.
What this actually means
Dogan's post isn't about Claude Code being magic. She explicitly said the output needs refinement. But that's not the story here.
The story is that a senior Google engineer, someone with over a decade at the company who previously worked on observability for Go production services, looked at what her team had built over a year and watched an external tool produce something comparable from a three-paragraph prompt.
Her advice to skeptics: "Try it on a domain you are already an expert of. Build something complex from scratch where you can be the judge of the artifacts."
That's the challenge. Most people can't properly evaluate AI-generated code because they don't have deep domain expertise. Dogan does. And she's telling you what she found.
What happens next
Dogan said her team is "working hard right now" on both the models and the infrastructure. Google has resources. They have talent. They have their own chips. But they're playing catch-up on a tool that started as Boris Cherny's side project in September 2024 and spread through Anthropic's engineering team within days of internal release.
The next public benchmark will probably be whatever Google ships for Gemini's coding capabilities. Dogan's post has created expectations she'll now have to meet.
No pressure.