A Beijing-based quantitative hedge fund just entered the AI coding race. IQuest Lab, the AI research arm of Ubiquant, released IQuest-Coder-V1 on January 1, posting benchmark numbers that would place a 40-billion-parameter model ahead of Claude Sonnet 4.5 and GPT-5.1. The company reports 81.4% on SWE-Bench Verified, 81.1% on LiveCodeBench v6, and 49.9% on BigCodeBench.
Those scores haven't been independently verified. Claude Sonnet 4.5 sits at 77.2% on SWE-Bench Verified according to Anthropic's published results. Early community reactions on X range from excitement to skepticism, with several users noting that a 40B model outperforming models 20 times its size would be remarkable if confirmed.
The model family comes in 7B, 14B, and 40B variants, all with 128K context windows. IQuest Lab describes a training approach it calls "Code-Flow," which learns from repository evolution patterns and commit histories rather than static code snapshots. The GitHub repo includes evaluation trajectories for reproducing the SWE-Bench results.
Ubiquant, founded in 2012, manages billions in assets and has been building AI infrastructure since establishing its AI Lab around 2019. The fund pays up to $300,000 annually for fresh graduates in AI and computer science, according to Bloomberg reporting. This release follows a pattern: High-Flyer, another Chinese quant fund, backed DeepSeek's development.
The Bottom Line: A 40B open-source model beating frontier commercial offerings would shift the economics of self-hosted coding assistants, but independent testing will determine whether the benchmark claims hold.
QUICK FACTS
- Model size: 40B parameters (also available in 7B and 14B)
- SWE-Bench Verified: 81.4% (company-reported, unverified)
- Context length: 128K tokens native
- License: Modified MIT with commercial restrictions for large companies
- Backer: Ubiquant, Beijing-based quant fund founded in 2012