AI Research

USC Study Warns Chatbots Are Flattening How Humans Write and Think

A new review paper argues LLMs are eroding cognitive diversity across language, perspective, and reasoning.

Liza Chan
AI & Emerging Tech Correspondent
April 15, 2026 · 4 min read
[Image: Abstract visualization of diverse human silhouettes gradually merging into a single uniform shape, overlaid with faint digital text patterns]

Researchers at the University of Southern California argue that widespread chatbot use is eroding the differences in how people write, reason, and form opinions. The opinion paper, published in Trends in Cognitive Sciences in March, synthesizes existing research to make a case that billions of people funneling their thinking through the same handful of AI models is producing what the authors call "standardized expressions and thoughts across users."

The paper is led by Morteza Dehghani, a professor of psychology and computer science at USC's Dornsife College who directs the university's Center for Computational Language Sciences. His co-authors are PhD students Zhivar Sourati and Alireza Ziabari. An arXiv preprint has been available since mid-2025.

The creativity paradox

The most interesting tension in the paper isn't about writing style. It's about what happens to groups. Individual users, the authors note, tend to generate more ideas when working with an LLM than without one. But here's the catch: groups of people relying on chatbots collectively produce less original work than groups working without AI. A separate study in Science Advances found the same dynamic, calling it a kind of "social dilemma" where individual gains mask collective losses.

That finding is both counterintuitive and hard to dismiss. You get more output from each person, but the outputs converge. Everyone arrives at the same clever ideas, which means they aren't clever anymore.

Who's actually speaking?

Sourati frames the concern bluntly. "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," he said in a USC press release. That's a bolder claim than most homogenization arguments, and the paper doesn't have original experiments to back it up directly.

The authors point to training data as the root cause. LLM outputs, they argue, disproportionately reproduce the language, values, and reasoning patterns of Western, educated, industrialized, rich, and democratic (WEIRD) populations. That's the familiar training data bias problem, but the paper extends it: when those outputs then shape how millions of people write and argue, the bias doesn't just exist in the model. It migrates into human discourse.

Dehghani told CNN that he's heard of people using AI to decide who to vote for, which he called "quite scary." Whether that anecdote scales to a societal pattern is exactly the kind of question the paper raises but can't answer.

What this paper isn't

A disclaimer buried in the source material is worth pulling forward: this is not an empirical study with original experiments. It's a review and opinion piece that synthesizes existing research into a broader hypothesis about global cognitive homogenization. Some of its component claims, like stylistic narrowing in AI-assisted writing, have solid backing from separate studies. The bigger thesis about civilizational-scale thought convergence remains speculative.

The funding source is also worth noting. The research was supported by the Air Force Office of Scientific Research, which doesn't invalidate the work but does put it in the context of defense-funded interest in information environments and cognitive security.

The paper also argues that popular models favor linear reasoning approaches like chain-of-thought prompting, potentially crowding out more intuitive or abstract thinking styles. That's a plausible concern, but it's doing a lot of work without much empirical scaffolding.

So what now?

The authors' recommendation is that AI developers should incorporate genuine linguistic and cultural diversity into training data, not just random variation. They argue this would both preserve cognitive diversity in society and improve the models' own reasoning abilities. It's a reasonable-sounding prescription that runs directly into the economics of model training: curating diverse, high-quality multilingual data is expensive, and the companies with the resources to do it have shown limited interest.

The paper lands at a moment when the evidence base is growing. A 2025 study analyzing college admissions essays found that each additional human-written essay contributed more new ideas to the collective pool than each additional GPT-4 essay, with the gap widening as more essays were added. Yale students recently told CNN that seminar discussions feel flatter and more predictable as classmates increasingly parrot chatbot outputs in class.

Whether "chatbots are making us all think the same" or "chatbots reflect the homogenization that mass media and the internet were already producing" is a distinction the paper acknowledges but doesn't resolve. The evidence for narrowing is real. The causal story is still being written.

Tags: artificial intelligence, LLM, cognitive science, USC research, chatbot homogenization, cognitive diversity, AI bias, Trends in Cognitive Sciences
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.

