Baidu officially launched ERNIE 5.1 on May 9, 2026, with the model placing fourth globally on LMArena's Search Arena and first among Chinese models at 1,223 points. The company blog frames the result as a cost-efficiency milestone rather than a raw capability win.
The pitch: ERNIE 5.1 reportedly trained at roughly 6% of the pre-training cost of comparable frontier models, with total parameters compressed to about a third of ERNIE 5.0's and active parameters cut roughly in half. Baidu credits a mix of "multi-dimensional elastic pretraining," disaggregated asynchronous reinforcement learning, and agentic post-training. These are company-reported figures: there is no full technical report and no independent replication of the cost claim.
The lighter ERNIE 5.1 Preview, released April 30, scored 1,476 on the LMArena Text Arena, 13th globally and, according to Baidu's preview announcement, also first among Chinese models. A strong showing, but still well behind the GPT-5 and Gemini tier at the top.
Access is available through Baidu's Qianfan platform and ernie.baidu.com for enterprise developers and individual users. DeepSeek V4 has not yet been submitted to LMArena at scale, so the Chinese leaderboard picture is incomplete. Baidu has signaled a broader rollout tied to its Baidu Create 2026 conference.
Bottom Line
ERNIE 5.1 lands at fourth on LMArena's Search Arena with a 1,223 score, and Baidu claims it was trained at 6% the cost of comparable models, a figure no third party has yet checked.
Quick Facts
- LMArena Search Arena: 1,223 points, 4th globally, 1st in China
- LMArena Text Arena (Preview): 1,476, 13th globally
- Pre-training cost: ~6% of comparable models (Baidu-reported, unverified)
- Total parameters: ~1/3 of ERNIE 5.0; active parameters ~1/2
- Official launch date: May 9, 2026