Sam Schillace, Microsoft's Deputy CTO, dropped an essay on January 5th claiming AI coding tools have quietly crossed from "better autocomplete" to something else entirely. The headline claim: advanced engineering teams are now shipping software without reading what the models wrote.
That sounds insane. But Schillace is pretty well positioned to know: he built Writely, the product that became Google Docs, and he now sits near the top of Microsoft's technical hierarchy. So.
The timeline he's laying out
At the start of 2025, according to Schillace, most developers were still treating AI as autocomplete. Useful, sure, but humans wrote, debugged, and read the code. This was still true "5 or 6 months ago."
Then something changed. His explanation: models got better at code AND better at longer thinking, and somewhere the combination "crossed over a point where the return was positive, and then strongly positive." Tools like Claude Code and Codex that had been working "only so-so" suddenly clicked.
The feedback loop is the interesting part. Once the tools got good enough at debugging and analysis, they could improve their own scaffolding, which made them better, which let them improve more. He calls out that engineers close to this front are "tired, overwhelmed, and excited at the same time."
Beyond vibe coding
Schillace makes a distinction here that's worth flagging. This isn't vibe coding, "which mostly didn't work." What he's describing is something more systematic. New best practices. More robust tooling. Skills that not everyone has yet, which he says explains why plenty of developers still claim these tools don't work.
The implication: there's now a real skills gap between teams who've figured this out and teams who haven't.
The limit isn't compute anymore
This is buried in the middle but feels important. Schillace says there doesn't currently seem to be a ceiling on how many tokens a single programmer can usefully consume. The bottleneck has shifted from compute to human attention: how much work one person can meaningfully track. People are building "meta orchestration" layers to squeeze more machine work out of each unit of human focus.
I don't have a way to verify that claim about token limits. He's not citing anything specific. But if true, it changes the economics pretty dramatically.
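The attention economics behind that "meta orchestration" idea can be sketched as a loop that fans work out to several agents in parallel and surfaces only short summaries for a human to review. This is purely a toy illustration, not anything from Schillace's essay; `run_agent` is a hypothetical stand-in for a real coding-agent call.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real coding-agent invocation (Claude Code,
# Codex, etc.). A real version would dispatch the task and return a
# result plus a short human-readable summary.
def run_agent(task: str) -> dict:
    return {"task": task, "summary": f"done: {task}", "diff": "..."}

def orchestrate(tasks: list[str], max_agents: int = 4) -> list[str]:
    """Fan tasks out to agents in parallel and return only the short
    summaries, so one human supervises N units of machine work."""
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        results = pool.map(run_agent, tasks)
    return [r["summary"] for r in results]

summaries = orchestrate(["fix login bug", "add retry logic", "write tests"])
for s in summaries:
    print(s)  # the human attends to summaries, not the underlying diffs
```

The point of the sketch is the ratio: the human's cost is reading a few summary lines, while the machine cost (the diffs) scales with however many agents run underneath.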
The pattern he thinks will repeat
He lays out a five-stage progression he expects across many domains:
First, models are okay but not great, and the payoff is small. Then models, tooling, and best practices all improve, and early adopters start seeing real returns. Because they're getting returns, they invest time in improving the tools further. And because "everything is software," acceleration from the coding world leaks back into other domains.
The fifth stage hasn't hit coding yet, he admits. That's when tools go from interesting to mandatory, and anyone who can't make the transition "will fail out."
What he's not saying
There's a lot of hand-waving here. Which teams exactly? What does "not reading code" actually mean in practice? Are they not reading ANY code, or just reading less? The essay is light on specifics.
And the comparison to the internet's effect on media is... I mean, he's not wrong that a lot more media got created. Individual pieces became less valuable along a power law. Business models changed radically. Whether that's good news for professional developers is another question.
The timing
It's January 2026. If Schillace's timeline is right, the tipping point happened around mid-2025, maybe August. That would line up with some of the model releases we saw around then. But he doesn't name anything specific, which makes it hard to pin down what exactly triggered this.
One commenter on the essay noted that as tools get better, expectations for "more capabilities" keep growing too. The demand curve isn't static.
Next step to watch: whether other senior technical leaders at major companies start echoing this. Or whether this is one executive's enthusiastic read on what's happening in his corner of Microsoft.