Z.ai dropped GLM-5.1 on April 7, and the headline number is hard to ignore: 58.4 on SWE-Bench Pro, which the company says puts it ahead of GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro on that benchmark. The model ships under an MIT license. Weights are already live on Hugging Face.
The real pitch isn't a single benchmark score, though. Z.ai built GLM-5.1 for what it calls "long-horizon" agentic work: coding sessions that run autonomously for up to eight hours, cycling through planning, execution, testing, and optimization. In one demo, the model built a functional Linux desktop environment from scratch over 655 iterations. Z.ai leader Lou told VentureBeat that while agents could handle about 20 steps late last year, GLM-5.1 manages 1,700.
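The loop Z.ai describes, plan, execute, test, optimize, repeat until done or out of budget, can be sketched as a simple control loop. Everything below is an illustrative stand-in, not Z.ai's actual agent framework; only the 1,700-step budget comes from the article.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    step: int = 0
    passing: bool = False
    log: list = field(default_factory=list)

def run_agent(state, max_steps=1_700):
    """Toy plan -> execute -> test -> optimize loop with a step budget."""
    while state.step < max_steps and not state.passing:
        state.log.append(f"step {state.step}: plan")
        state.log.append(f"step {state.step}: execute")
        # Stand-in 'test' phase: this toy task passes after a few iterations.
        state.passing = state.step >= 3
        if not state.passing:
            state.log.append(f"step {state.step}: optimize")
        state.step += 1
    return state

final = run_agent(AgentState())
print(final.step, final.passing)  # 4 True
```

The point of the sketch is the termination structure: the agent stops on success or on budget exhaustion, and the jump from ~20 to ~1,700 usable steps is what makes eight-hour sessions plausible.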
Architecture hasn't changed from GLM-5: 744 billion parameters, 40 billion active per token via mixture-of-experts. All gains come from post-training, specifically multi-task supervised fine-tuning and reinforcement learning through Z.ai's custom "slime" async RL infrastructure. The entire model was trained on Huawei Ascend 910B chips, zero Nvidia hardware involved.
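The 744B-total / 40B-active split is what mixture-of-experts routing buys: a router picks a few experts per token, so most weights sit idle on any given forward pass. A minimal sketch of top-k routing, where the expert count, k, and dimensions are all illustrative and not GLM-5.1's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 16  # illustrative; the real expert count isn't disclosed here
TOP_K = 2         # experts activated per token (illustrative)
D = 8             # toy hidden size

# Toy "experts": one weight matrix each.
experts = [rng.standard_normal((D, D)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D, NUM_EXPERTS))

def moe_forward(x):
    """Route a single token vector through its top-k experts only."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]  # indices of the chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS expert matrices are ever touched:
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(D))
print(out.shape)  # (8,)
```

Scaled up, the same idea is why a 744B-parameter model has roughly the per-token compute cost of a 40B dense one.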
Pricing undercuts most frontier models. API access runs $1.00 per million input tokens and $3.20 per million output, per Dataconomy's reporting. The GLM Coding Plan starts at $10 per month. Independent evaluation from Artificial Analysis pegs GLM-5.1 at roughly 94.6% of Claude Opus 4.6's overall coding capability, so calling the two on par would be generous, but the gap is narrowing fast. Three releases in under two months (GLM-5 in February, GLM-5 Turbo in March, now 5.1) suggest Z.ai isn't slowing down.
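At those rates, per-request cost is simple arithmetic. A quick estimator using the reported prices; the token counts in the example are hypothetical:

```python
INPUT_PRICE = 1.00 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 3.20 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at GLM-5.1's listed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a long agentic session: 500K tokens in, 100K tokens out
print(f"${request_cost(500_000, 100_000):.2f}")  # $0.82
```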
Bottom Line
GLM-5.1 leads SWE-Bench Pro at 58.4 and ships fully open-source under MIT, trained entirely on Huawei Ascend chips.
Quick Facts
- SWE-Bench Pro score: 58.4 (company-reported)
- Parameters: 744B total, 40B active per token (MoE)
- Training hardware: Huawei Ascend 910B, no Nvidia GPUs
- API pricing: $1.00/M input, $3.20/M output
- License: MIT, weights on Hugging Face
- Context window: 200K tokens, max output 128K tokens