Open-Source AI

Chinese Hedge Fund Drops 40B Coding Model Claiming Top Benchmark Scores

Ubiquant's IQuest Lab releases open-source IQuest-Coder-V1 on New Year's Day

Andrés Martínez, AI Content Writer
January 3, 2026 · 2 min read
[Image: Abstract visualization of code flowing through a neural network architecture, representing the IQuest-Coder AI model]

A Beijing-based quantitative hedge fund just entered the AI coding race. IQuest Lab, the AI research arm of Ubiquant, released IQuest-Coder-V1 on January 1, posting benchmark numbers that would place a 40-billion-parameter model ahead of Claude Sonnet 4.5 and GPT-5.1. The company reports 81.4% on SWE-Bench Verified, 81.1% on LiveCodeBench v6, and 49.9% on BigCodeBench.

Those scores haven't been independently verified. Claude Sonnet 4.5 sits at 77.2% on SWE-Bench Verified according to Anthropic's published results. Early community reactions on X range from excitement to skepticism, with several users noting that a 40B model outperforming models 20 times its size would be remarkable if confirmed.

The model family comes in 7B, 14B, and 40B variants, all with 128K context windows. IQuest Lab describes a training approach they call "Code-Flow," which learns from repository evolution patterns and commit histories rather than static code snapshots. The GitHub repo includes evaluation trajectories for reproducing the SWE-Bench results.

Ubiquant, founded in 2012, manages billions in assets and has been building AI infrastructure since establishing its AI Lab around 2019. The fund pays up to $300,000 annually for fresh graduates in AI and computer science, according to Bloomberg reporting. This release follows a pattern: High-Flyer, another Chinese quant fund, backed DeepSeek's development.

The Bottom Line: A 40B open-source model beating frontier commercial offerings would shift the economics of self-hosted coding assistants, but independent testing will determine whether the benchmark claims hold.
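To put that economics claim in rough perspective, here is a back-of-envelope, weights-only memory estimate for self-hosting a 40B-parameter model. The parameter count comes from the release; the bytes-per-parameter figures are standard quantization assumptions, not anything IQuest Lab has published, and real deployments also need memory for the KV cache and activations.

```python
# Weights-only VRAM estimate for a 40B-parameter model.
# Treat these as lower bounds: KV cache and activation memory
# (especially at a 128K context) add substantially on top.

PARAMS = 40e9  # 40 billion parameters (the largest IQuest-Coder variant)

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Memory needed to hold the model weights alone, in gigabytes."""
    return params * bytes_per_param / 1e9

# Common precisions: fp16/bf16 (2 bytes), int8 (1 byte), 4-bit (0.5 bytes)
for label, bytes_per_param in [("fp16/bf16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(PARAMS, bytes_per_param):.0f} GB")
# fp16/bf16: ~80 GB
# int8: ~40 GB
# 4-bit: ~20 GB
```

In other words, even the full-precision 40B weights fit on a single 80 GB accelerator, and a 4-bit quantization fits on a consumer 24 GB card — which is why a frontier-class model at this size would change the self-hosting calculus.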


QUICK FACTS

  • Model size: 40B parameters (also available in 7B and 14B)
  • SWE-Bench Verified: 81.4% (company-reported, unverified)
  • Context length: 128K tokens native
  • License: Modified MIT with commercial restrictions for large companies
  • Backer: Ubiquant, Beijing-based quant fund founded in 2012
Tags: IQuest-Coder, Ubiquant, open-source AI, coding models, SWE-Bench, Chinese AI, LLM
Andrés Martínez

AI Content Writer

Andrés reports on the AI stories that matter right now. No hype, just clear, daily coverage of the tools, trends, and developments changing industries in real time. He makes the complex feel routine.

