A growing body of research is converging on an uncomfortable conclusion for teams that bet big on AI coding assistants: writing code faster does not mean shipping software faster. The latest entry comes from T-Bank (formerly Tinkoff), whose AI4SDLC Research 2025 surveyed engineering teams and found that while a majority of developers now use AI for code generation, the bottleneck has simply migrated downstream, to review, integration, and release.
The trust problem
T-Bank's survey reports that 58% of engineers regularly use AI for generating or autocompleting code, and 64% say their productivity improved. So far, so predictable. But only 11% say they actually trust the output, while 49% explicitly do not.
Those numbers look dramatic until you stack them against the Stack Overflow 2025 survey, which found something strikingly similar across its 49,000 respondents worldwide. Trust in AI accuracy dropped to 29%, down from 40% the year before, even as adoption climbed to 84%. The most common complaint, cited by 66% of developers, was AI code that is "almost right": close enough to look plausible and wrong enough to cost time.
"The growing lack of trust in AI tools stood out to us as the key data point," Stack Overflow CEO Prashanth Chandrasekar said, which is a polite way of saying the industry has a verification crisis on its hands.
Where the jam actually is
Here is where the T-Bank data gets interesting. While 58% of engineers use AI for writing code, only 24% apply it to code review or optimization, and 42% never touch AI when dealing with legacy systems. The pipeline narrows fast once you move past the editor.
Data from Faros AI, which analyzed telemetry across more than 10,000 developers, puts concrete numbers on the jam: teams using AI heavily saw a 98% increase in pull request volume. PR review time went up 91%. Senior engineers now spend an average of 4.3 minutes reviewing each AI-generated suggestion, compared to 1.2 minutes for human-written code.
This is a textbook case of what operations researchers call a shifting bottleneck. Speed up one station on the assembly line and the queue just piles up at the next one. AI tools have effectively turned code review into the new rate-limiting step, and most organizations have done nothing to adapt.
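The shifting-bottleneck effect is easy to see in a toy queueing model. The sketch below uses made-up service times (not any of the surveyed teams' numbers) to simulate a two-stage code-then-review pipeline: halve the coding time without adding review capacity, and the average wait for review explodes.

```python
import random

def simulate(coding_minutes, review_minutes, tasks=1000, seed=0):
    """Toy two-stage pipeline: each task is coded, then reviewed.

    Returns the average time a task spent queued, waiting for review.
    Service times are exponential with the given means; the numbers are
    hypothetical and only illustrate the shifting-bottleneck effect.
    """
    rng = random.Random(seed)
    clock = 0.0          # when the coder finishes the current task
    review_free = 0.0    # when the reviewer becomes free
    total_wait = 0.0
    for _ in range(tasks):
        clock += rng.expovariate(1.0 / coding_minutes)  # task leaves the editor
        start_review = max(clock, review_free)          # may sit in the queue
        total_wait += start_review - clock
        review_free = start_review + rng.expovariate(1.0 / review_minutes)
    return total_wait / tasks

# Same reviewer capacity, twice the coding speed: the queue blows up.
slow_coding = simulate(coding_minutes=10, review_minutes=8)
fast_coding = simulate(coding_minutes=5, review_minutes=8)
assert fast_coding > slow_coding
```

With the faster coder, tasks arrive for review quicker than the reviewer can clear them, so the backlog grows without bound: the 91% jump in PR review time, in miniature.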
So what breaks the logjam?
The JetBrains 2025 ecosystem survey found that 85% of developers now use AI tools regularly. That is not an adoption problem. It is a process design problem. Teams optimized their workflows around the assumption that writing code was the hard part. It is not anymore, or at least it does not have to be.
Some companies are already trying to fix this. The shift-left approach, pushing testing and quality checks earlier in the cycle, has helped organizations reduce release cycles by an average of 67% according to Forrester's 2024 DevOps report. GitHub's own research from mid-2025 showed that running AI-powered reviews before opening pull requests eliminated entire categories of trivial issues. But most teams have not gotten there yet.
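The shift-left idea above amounts to running cheap, automated checks before a reviewer ever sees the change. A minimal sketch, assuming a local pre-push gate; the check commands themselves are placeholders to swap for your team's actual linter and test runner:

```python
import subprocess
import sys

def run_checks(checks):
    """Run each check command in order; stop at the first failure.

    Hypothetical pre-PR gate: catch trivial issues locally instead of
    spending reviewer time on them. 'checks' is a list of command lists,
    e.g. [["ruff", "check", "."], ["pytest", "-q"]] (assumed tools).
    """
    for cmd in checks:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print(f"{cmd[0]} failed; fix locally before opening the PR")
            return False
    return True

# Stand-in commands so the sketch runs anywhere; a real hook would invoke
# actual linters and tests here.
passing = run_checks([[sys.executable, "-c", "print('lint ok')"]])
failing = run_checks([[sys.executable, "-c", "raise SystemExit(1)"]])
assert passing and not failing
```

Wired into a pre-push hook or a CI stage that runs before the PR opens, a gate like this removes exactly the "trivial issue" category that GitHub's research says clogs review queues.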
The T-Bank research points toward a more radical conclusion, one the report frames bluntly: the next productivity jump will not come from AI that writes better code. It will come from agents that can reliably handle the full cycle, from idea through production. That is a much harder problem than autocomplete, and nobody has cracked it. The Stack Overflow survey found that 52% of developers either do not use AI agents at all or stick to simpler tools, with 38% having no plans to adopt them.
For now, the vibe-coding era has produced a strange inversion. Developers are writing more code than ever and shipping at roughly the same pace. The 2024 DORA report found that a 25% increase in AI adoption actually triggered a 7.2% decrease in delivery stability. More code, more problems, same release cadence.
T-Bank's full methodology and additional findings are available on the research site. The next update to the study has not been announced.