
Machine Learning Pioneer Says AI Systems Should Be Designed Like Markets, Not Minds

Berkeley professor Michael I. Jordan argues the AI industry has it backwards on intelligence

Oliver Senti, Senior AI Editor
December 17, 2025 · 4 min read
[Illustration: a neural network transforming into an interconnected marketplace with human figures and exchange points]

Michael I. Jordan, the UC Berkeley professor whom Science identified as the world's most influential computer scientist, has published a paper arguing that the entire conceptual framework for AI development is wrong. The paper, which will appear in Communications of the ACM, proposes that large language models are better understood as "collectivist artifacts" representing aggregated human culture than as individual intelligent entities.

The market metaphor

Jordan's central claim is disarmingly simple: when you talk to ChatGPT or Claude, you're not really talking to an intelligence. You're interacting with millions of people who contributed data, opinions, and creative work to the training corpus. The LLM just happens to be the interface.

"Cultures are repositories of narratives, opinions, and abstractions. Cultures have personalities," Jordan writes. The implication is that we've been measuring the wrong thing when we benchmark these systems against human cognition.

This isn't entirely new territory for Jordan. He's been beating this drum since at least 2019, when he published a widely-circulated piece arguing the AI revolution "hasn't happened yet." But the new paper goes further, laying out what he calls "three thinking styles" that need to blend together: computational, inferential, and economic. Machine learning, as it currently exists, sits at the intersection of the first two. The third, Jordan argues, has been almost completely neglected.

Where the money isn't going

The economic angle cuts in an uncomfortable direction for AI companies. Jordan points out that the implicit social contract of the internet era, where users got free services in exchange for their data and content, made some sense when search engines sent traffic back to creators. LLMs break that bargain. They become "the endpoint rather than an intermediary," and the visibility benefit for producers "begins to wither."

Jordan doesn't mince words about the consequences: there's now "a strong incentive for the platform to use generative AI tools to replace the musicians."

He's not an outside critic here. Jordan discloses in the paper that he's a board member of UnitedMasters, a music distribution platform that has signed over 1.5 million independent artists. The company, founded by Steve Stoute with backing from Alphabet and Andreessen Horowitz, connects musicians directly with brands like the NBA and State Farm. Artists get paid when their music is used. Jordan holds this up as an example of how ML-powered markets can work when designed properly.

Whether that model scales beyond music is another question. Jordan describes a "three-layer data market" framework where platforms must provide formal privacy guarantees to users whose data gets sold to third parties, or risk losing them entirely. The mechanisms he proposes involve Stackelberg equilibria and differential privacy, the sort of technical apparatus that tends to get filed under "future work" in academic papers.

Foundation models and their blind spots

Jordan also takes aim at the accuracy claims surrounding foundation models like AlphaFold. He cites research showing that AlphaFold's confidence estimates can be systematically biased for certain queries, particularly for proteins where training data is sparse. The fix he proposes, "prediction-powered inference," involves adjusting global model outputs using local ground-truth measurements.
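The core move behind prediction-powered inference can be sketched in a few lines. The example below is a minimal illustration on synthetic data, not Jordan's or AlphaFold's actual pipeline: a hypothetical model `model` is systematically biased, and a small labeled sample is used to estimate and subtract that bias from the estimate computed over a large unlabeled pool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical biased model: it overestimates the quantity of
# interest by a constant 0.5 (standing in for systematic bias
# on sparse regions of training data).
def model(x):
    return x + 0.5

# Large unlabeled pool, small labeled sample; ground truth here
# is just the input itself, for simplicity.
x_unlabeled = rng.normal(loc=2.0, scale=1.0, size=100_000)
x_labeled = rng.normal(loc=2.0, scale=1.0, size=500)
y_labeled = x_labeled

# Naive estimate: trust the model on the big unlabeled pool.
naive = model(x_unlabeled).mean()

# Prediction-powered estimate: measure the model's average error
# on the labeled set and use it to correct the naive estimate.
rectifier = (y_labeled - model(x_labeled)).mean()
ppi = model(x_unlabeled).mean() + rectifier

print(f"naive: {naive:.3f}, PPI: {ppi:.3f}, truth is about 2.0")
```

The naive estimate inherits the model's bias; the corrected one recovers the true mean, at the cost of a little extra variance from the small labeled sample.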

The broader point is that foundation models trained on historical data will inevitably perform poorly on problems at the edges of current knowledge, which is precisely where scientists tend to be most interested. Being aware of this gap doesn't make it go away.

What's missing

The paper runs to 14 pages plus appendices, and there's a lot of hand-waving about what a "tripartite blend" of computation, economics, and inference would actually look like in practice. Jordan acknowledges that AI lacks anything equivalent to Maxwell's equations or Schrödinger's equation, the foundational principles that allowed electrical and chemical engineering to become mature disciplines. "We are winging it," he writes.

That's either refreshingly honest or deeply unsatisfying, depending on what you wanted from the paper.

Jordan does offer one concrete example of what education in this mode could look like. He describes Data 8, a UC Berkeley course he helped design that became the university's fastest-growing class. The course blends statistical inference with programming, teaching freshmen to do permutation tests in Python and then apply them to real-world datasets of their choosing. Adding economic thinking, he suggests, would involve concepts like matching markets, contract design, and auctions.
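A permutation test of the kind Data 8 teaches fits in a dozen lines. The sketch below uses made-up data rather than any dataset from the course: it asks whether two groups differ in mean, shuffling the pooled observations to simulate the null hypothesis that the group labels don't matter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: scores for two groups, where group B genuinely
# scores about 3 points higher on average.
group_a = rng.normal(50, 10, size=500)
group_b = rng.normal(53, 10, size=500)

observed = group_b.mean() - group_a.mean()

# Under the null hypothesis the labels are exchangeable, so we
# shuffle the pooled data and recompute the statistic many times.
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
diffs = []
for _ in range(5_000):
    rng.shuffle(pooled)
    diffs.append(pooled[n_a:].mean() - pooled[:n_a].mean())

# p-value: how often a shuffled difference is at least as extreme
# as the one we actually observed.
p_value = (np.abs(diffs) >= abs(observed)).mean()
print(f"observed diff: {observed:.2f}, p-value: {p_value:.4f}")
```

No distributional assumptions, no closed-form test statistic: just simulation, which is exactly why the course can teach it to freshmen before they've seen a t-distribution.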

The paper ends with a call for AI to grow into "a mature engineering discipline" with "modular, transparent design concepts." Coming from someone who has spent four decades in the field, it reads less like a manifesto and more like a midpoint assessment: we've built impressive artifacts, but we don't fully understand what we've built or who should benefit from them.

Jordan's paper is available on arXiv. The Communications of the ACM publication date has not been announced.

Tags: artificial intelligence, machine learning, economics, Michael Jordan, UC Berkeley, LLM, data markets, AI research, tech policy, creator compensation
Oliver Senti
Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.

