Anthropic published results from what it calls the largest qualitative AI study ever conducted: 80,508 interviews across 159 countries, in 70 languages, all run by an AI interviewer built on Claude. The conversations happened over one week in December 2025, with every Claude.ai account holder invited to participate.
The company's research page lays out the findings in a slickly produced interactive format. But the most interesting data isn't in the top-line categories. It's in the contradictions.
What people say they want
Professional productivity topped the list at 18.8%, which is about as surprising as finding that office workers want faster email. The real texture came when Anthropic's Interviewer tool pushed past surface answers. People who started by talking about automating emails eventually admitted what they actually wanted: time to cook with their mothers, or to leave work early enough to pick up their kids from school.
Personal transformation (13.7%) and life management (13.5%) rounded out the top three. Financial independence came in at 9.7%, and societal transformation, covering things like curing diseases and fixing education, at 9.4%. Creative expression sat at the bottom at 5.6%, which feels low until you consider this was a self-selected group of Claude users, not a cross-section of the population.
81% of respondents said AI had already taken a concrete step toward their stated vision. That number deserves scrutiny. These are active Claude users talking to Anthropic's own AI about Anthropic's product. The selection bias is doing a lot of work here.
The fear that matters most
Unreliability and hallucinations topped the concerns at 26.7%, followed by job loss (22.3%) and loss of autonomy (21.9%). The average respondent named 2.3 separate fears. Only 11% expressed zero concerns.
But job anxiety was the strongest predictor of overall negative sentiment toward AI. Not hallucinations, not regulation gaps, not misinformation. The prospect of becoming economically irrelevant.
One quote from an Israeli lawyer stood out: "I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself?" Anthropic frames this as one of five recurring "tensions" where benefits and risks coexist in the same person. People who valued emotional support from AI were three times more likely to worry about becoming dependent on it.
Who's optimistic and why
67% of respondents rated their overall sentiment toward AI as positive. The geographic split is telling: Africa, Latin America, and South Asia skewed most optimistic. Europe and North America, most concerned.
Anthropic's analysis suggests emerging economies see AI as a way to leapfrog institutional failures: poor schools, expensive professionals, broken bureaucracies. An entrepreneur from Cameroon described reaching professional-level competence across cybersecurity, UX design, and marketing simultaneously. One task that would have taken him a month in his country, finding a payment platform, took AI 30 seconds.
In wealthier countries, the mood is different. The concern isn't access, it's erosion. Cognitive atrophy (16.3%) reflects a specific anxiety: that outsourcing thinking to AI might degrade the capacity to think at all. One German respondent captured it sharply: "AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it's exactly the other way around."
The methodology problem nobody's ignoring
Anthropic is upfront about the limitations, which is refreshing. The sample is entirely self-selected Claude users who volunteered for an AI-conducted interview about AI. That's three layers of selection bias before you even get to the questions. The company acknowledges it cannot generalize these findings globally.
The interviews were conducted by Claude itself, using a system prompt designed for adaptive conversation. Claude-powered classifiers then sorted 80,000+ transcripts into categories. So the tool being evaluated is also the tool conducting the interviews and the tool classifying the results. It's a closed loop, and Anthropic's own researchers seem aware of the circularity.
There's also a privacy angle. When Anthropic ran a smaller pilot with 1,250 published interviews in December 2025, a Northeastern University researcher de-anonymized 25% of the scientist subset using an off-the-shelf language model. The 80,508 transcripts from this larger study weren't published, but the episode raises questions about what "de-identified" means when people share detailed personal stories with an AI.
So what does it actually tell us?
Strip away the methodology concerns and some patterns hold. People want AI to handle drudgery so they can do things that matter to them, whether that's meaningful work or just being present with family. They're simultaneously grateful for the tool and worried it's changing them in ways they can't reverse. The optimism-pessimism divide maps more cleanly onto economic circumstance than onto any cultural or ideological axis.
None of this is shocking. But having 80,000 people say it in their own words, across 70 languages, at least gives the conversation some empirical grounding. Anthropic says it plans to use the Interviewer tool regularly on different topics going forward. The full methodology appendix is available as a PDF on the research page.