
Duke's AI Framework Extracts Simple Equations from Chaotic Systems

A new approach combines deep learning with 1930s mathematics to find order in complexity, producing models a fraction the size of those from previous methods.

Oliver Senti, Senior AI Editor
December 23, 2025 · 3 min read
[Illustration: chaotic pendulum motion transformed into simplified mathematical equations]

Researchers at Duke University have developed an AI system that can derive compact mathematical descriptions from messy, high-dimensional data describing how systems evolve over time. The framework, published December 17 in npj Complexity, produced models that were in some cases more than 10 times smaller than those generated by earlier machine learning approaches while still delivering accurate long-term predictions.

The Koopman connection

The work builds on a mathematical insight that's been gathering dust for nearly a century. In 1931, Columbia mathematician Bernard Koopman demonstrated that nonlinear dynamical systems could be represented using linear operators, potentially simplifying their analysis. The catch: Koopman's linearization requires working in infinite dimensions, which isn't exactly practical.

The Duke team's contribution is figuring out how to get useful, finite approximations. Their framework analyzes time-series data from experiments, then uses deep learning combined with physics-inspired constraints to identify a small number of hidden variables that capture a system's essential behavior. The result is something that behaves mathematically like a linear system but still reflects real-world complexity.
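The core idea of a finite, data-driven approximation to the Koopman operator can be illustrated with dynamic mode decomposition (DMD), a classical (non-deep-learning) cousin of the Duke approach: fit a single linear operator that best maps each snapshot of a time series to the next. This is a generic sketch of that idea, not the team's framework:

```python
import numpy as np

# Illustrative sketch: a DMD-style linear fit, a classical finite-dimensional
# relative of Koopman approximation (NOT the Duke framework itself).
# Generate a damped-oscillator time series x_{t+1} = A_true @ x_t.
theta, decay = 0.2, 0.98
A_true = decay * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
X = np.empty((2, 200))
X[:, 0] = [1.0, 0.0]
for t in range(199):
    X[:, t + 1] = A_true @ X[:, t]

# Least-squares fit of a linear operator mapping each snapshot to the next:
# A = X' X^+  (pseudoinverse), the core computation of DMD.
A_fit = X[:, 1:] @ np.linalg.pinv(X[:, :-1])

# The fitted linear model predicts future states and exposes the dynamics
# through its eigenvalues (|lambda| < 1 in discrete time => decaying modes).
eigvals = np.linalg.eigvals(A_fit)
print(np.allclose(A_fit, A_true, atol=1e-8))  # operator recovered from data
print(np.abs(eigvals))                        # both magnitudes near 0.98
```

The Duke framework replaces this plain least-squares step with deep learning plus physics-inspired constraints, which lets it first discover the hidden variables in which the dynamics become (approximately) linear.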

"Scientific discovery has always depended on finding simplified representations of complicated processes," said Boyuan Chen, director of Duke's General Robotics Lab and the paper's senior author. Chen, who holds appointments in mechanical engineering, electrical engineering, and computer science, has been working toward what he calls "machine scientists" for automated discovery.

What it actually does

The team tested their framework on systems ranging from pendulums to neural circuits to climate models. Consider global temperature patterns: actual temperature varies continuously across space and time, creating an effectively infinite-dimensional problem. The AI compressed this into a relatively simple set of linear equations that still accurately predicted temperature fluctuations.

The 10x reduction in model size compared to previous approaches matters for interpretability. A compact model can be connected to existing scientific theory in ways that a sprawling one cannot.

"When a linear model is compact, the scientific discovery process can be naturally connected to existing theories and methods that human scientists have developed over millennia," Chen said in Duke's announcement.

Beyond prediction, the framework can identify attractors, the stable states where systems tend to settle. Sam Moore, the paper's lead author and a PhD candidate in Chen's lab, compared finding these structures to discovering landmarks in unfamiliar terrain.
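One reason linearity helps with finding such landmarks: for a linear model, stability can be read directly off the operator's spectrum. A hypothetical sketch (assuming a discrete-time linear model z_{t+1} = K z_t, not the paper's actual attractor-detection procedure):

```python
import numpy as np

# Hypothetical illustration: reading stability off a linear model's spectrum.
# For a discrete-time linear system z_{t+1} = K @ z_t, the fixed point at the
# origin is attracting exactly when every eigenvalue has magnitude below 1.
def is_attracting(K: np.ndarray) -> bool:
    return bool(np.max(np.abs(np.linalg.eigvals(K))) < 1.0)

K_stable = np.array([[0.9, 0.1],
                     [0.0, 0.5]])    # spectral radius 0.9: trajectories decay
K_unstable = np.array([[1.1, 0.0],
                       [0.0, 0.3]])  # eigenvalue 1.1: trajectories diverge

print(is_attracting(K_stable))    # True
print(is_attracting(K_unstable))  # False
```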

What remains unclear

The paper demonstrates the approach on several benchmark systems, but questions linger about scaling. Climate models and neural circuits are complex, but they're also well-studied systems with known physics. How the method performs on truly novel systems, where there's no ground truth to validate against, is harder to assess.

There's also the interpretability claim. The researchers emphasize that their compact models can connect to existing theory, which sounds appealing. But whether working scientists will actually find these derived equations useful for generating new physical insights, rather than just making predictions, remains to be seen.

The Koopman operator approach itself has limitations that the paper doesn't fully address. A single finite-dimensional linear model cannot represent dynamics that move between multiple isolated fixed points, which limits applicability for systems with genuinely complex attractor landscapes.

Next steps

The team is exploring how the framework could guide experimental design, actively selecting what data to collect to reveal a system's structure more efficiently. They're also looking at applying the approach to video, audio, and biological signals.

The research was supported by the National Science Foundation, Army Research Office, and DARPA's FoundSci and TIAMAT programs.

Tags: artificial intelligence, machine learning, physics, Duke University, scientific discovery, deep learning
Oliver Senti, Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.

