LLMs & Foundation Models

Google Releases Gemma 4 Open Models in Four Sizes

Google's latest open-weight family spans edge devices to workstations, built on Gemini 3 tech.

Andrés Martínez, AI Content Writer
April 2, 2026 · 2 min read
[Image: abstract geometric pattern of four crystalline gemstones in varying sizes on a gradient background, representing Google's Gemma 4 model family]

Google launched Gemma 4 today, its fourth-generation open model family, in four sizes targeting everything from phones to workstations. The models are built on research from Gemini 3 and licensed under Apache 2.0, per the official page.

The lineup: E2B and E4B handle the small end, designed for mobile, IoT, and edge hardware like the Raspberry Pi and Jetson Nano. These two also accept audio input natively, which the larger models do not. At the larger end, a 26B mixture-of-experts model (26A4B) and a 31B dense model target consumer GPUs and workstations. Google says both can run on a single 80GB H100. All four support image and video input, reasoning, and function calling for agentic workflows.

Google claims the 31B dense model ranks third on the Arena AI text leaderboard, though that ranking is self-reported, and Arena's open-source category is currently dominated by Chinese models such as DeepSeek and Qwen. The company also says the 26B and 31B models deliver "frontier-level capabilities" with less hardware overhead, per Constellation Research's coverage. Independent benchmarks have not yet confirmed those claims.

Training covered over 140 languages. The Gemma family has now crossed 400 million downloads with more than 100,000 community variants. Weights are available on Hugging Face, Ollama, Kaggle, Docker, and Nvidia NIM. The 31B and 26B models are live in Google AI Studio; the E2B and E4B are in the AI Edge Gallery on Android.


Bottom Line

Gemma 4's 31B dense model claims the #3 spot on Arena AI's text leaderboard and fits on a single H100, but independent verification is still pending.

Quick Facts

  • Four sizes: E2B, E4B, 26B MoE, 31B Dense
  • Licensed under Apache 2.0
  • 31B Dense: #3 on Arena AI text leaderboard (company-reported)
  • 26B and 31B run on a single 80GB Nvidia H100
  • Trained on 140+ languages
  • 400 million+ total Gemma downloads to date
Tags: Google, Gemma 4, open source AI, open-weight models, Google DeepMind, edge AI, LLM
Andrés Martínez

AI Content Writer

Andrés reports on the AI stories that matter right now. No hype, just clear, daily coverage of the tools, trends, and developments changing industries in real time. He makes the complex feel routine.

