Anthropic published a labor economics paper on Thursday that should concern anyone who recently graduated into a white-collar field. The finding isn't that AI is firing people. It's that companies appear to have quietly stopped hiring them.
The new paper, authored by Anthropic economists Maxim Massenkoff and Peter McCrory, combines real-world Claude usage data with U.S. labor survey data to build what the researchers call an "observed exposure" measure: a job-level score that weights both theoretical AI capability and how those tasks actually show up in automated workflows today.
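To make the idea concrete, here is a minimal sketch of how an "observed exposure" style score might be computed. The names, weights, and task data are purely illustrative assumptions, not the paper's actual methodology: the real measure is built from Claude usage data joined to occupational task lists.

```python
# Hypothetical sketch of an "observed exposure" style score.
# All task data below is invented for illustration; the paper's
# actual inputs are Claude usage records and U.S. labor survey data.

def observed_exposure(tasks):
    """Score an occupation by the share of its tasks that an LLM both
    could theoretically perform AND actually appears to perform in
    observed usage data.

    Each task is a dict with:
      - 'capable': bool, theoretical AI capability for the task
      - 'usage_share': float in [0, 1], how often the task shows up
        in observed automated workflows
    """
    if not tasks:
        return 0.0
    covered = sum(1 for t in tasks if t["capable"] and t["usage_share"] > 0)
    return covered / len(tasks)

# Toy example loosely echoing the programmer vs. bartender contrast:
programmer = [
    {"capable": True, "usage_share": 0.6},   # draft boilerplate code
    {"capable": True, "usage_share": 0.4},   # debug error messages
    {"capable": True, "usage_share": 0.0},   # on-call incident response
    {"capable": False, "usage_share": 0.0},  # negotiate requirements
]
bartender = [
    {"capable": False, "usage_share": 0.0},  # mix drinks
    {"capable": False, "usage_share": 0.0},  # manage the bar floor
]

print(observed_exposure(programmer))  # 0.5
print(observed_exposure(bartender))   # 0.0
```

The key design point the sketch captures: a task only counts toward exposure if it registers in real usage, which is why occupations absent from the usage data score zero regardless of what AI could theoretically do.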
Who's in the crosshairs
Computer programmers sit at the top, with 75% of their tasks covered under the new measure. Customer service representatives and financial analysts also rank among the most exposed. At the other end, about 30% of the workforce, including cooks, dishwashers, lifeguards, and bartenders, has zero measurable exposure. Their tasks simply don't appear in Claude's usage data often enough to register.
The demographics of the high-exposure group are worth sitting with. Before ChatGPT launched, workers in the most exposed occupations were 16 percentage points more likely to be female, 11 percentage points more likely to be white, and earned roughly 47% more than workers in unexposed roles. Graduate degree holders make up 17.4% of the most exposed group but only 4.5% of the unexposed group, a nearly fourfold gap.
One note on framing: the instinct to call this bad news for "older" workers misreads the data. The hiring slowdown the paper identifies is concentrated specifically among workers aged 22 to 25.
The hiring freeze nobody announced
High-exposure jobs aren't generating layoff notices. What's changed is the front door. Job-finding rates for 22-to-25-year-olds entering exposed occupations dropped roughly 14% from pre-ChatGPT 2022 levels, while hiring into unexposed occupations held steady. The researchers flag this finding as "just barely statistically significant," which is honest and also means it should be read as a signal, not a verdict.
The paper explicitly notes that some of these young workers may be staying in existing jobs, pivoting to other fields, or returning to school rather than showing up as unemployed. The data can't fully distinguish between those scenarios yet, which is exactly why this paper exists: to build the measurement framework before the effects become obvious.
The adoption gap is the real story
The most structurally important finding gets the least attention. AI is nowhere near its theoretical ceiling. The paper lays out the gap between what large language models could theoretically handle, based on a widely cited 2023 paper by Eloundou et al., and what Claude actually does in practice. Even in Computer and Math occupations, where theoretical AI capability covers 94% of tasks, observed real-world usage covers only 33%.
The reasons are mundane: legal constraints, software integration requirements, human verification steps, institutional inertia. The paper gives a concrete example: AI could theoretically handle pharmacy prescription authorization, but doesn't, because regulatory and liability structures block it. This is not a skill problem. It's an adoption problem, and adoption tends to accelerate once it starts.
Anthropic says it plans to update this framework as new Claude usage data and employment survey results become available. The researchers flag one obvious next step: tracking how recent graduates with credentials in high-exposure fields are actually faring, a cohort the current data can't cleanly isolate yet.