Open Models
Google Releases Gemma 4 — Four Open Models Under Apache 2.0
Google DeepMind launches four Gemma 4 variants built on the Gemini 3 architecture: E2B, E4B, a 26B MoE, and a 31B Dense — all with 256K context, native vision and audio, support for 140+ languages, and no commercial restrictions.
Google DeepMind released Gemma 4 on April 2, a family of four open models — E2B, E4B, a 26B Mixture-of-Experts, and a 31B Dense — built directly on the Gemini 3 architecture. All four support 256K context windows, native vision and audio processing, and more than 140 languages. The switch from prior Gemma licenses to Apache 2.0 is the headline policy move: previous commercial-use restrictions are gone, opening the models to enterprise deployment without licensing negotiation.
The 31B Dense model debuted at number three among open-weight models on the Arena AI leaderboard, a strong showing for this parameter class. Google is distributing all four variants through Hugging Face, Kaggle, Ollama, and Google Cloud, lowering the barrier for both research and production use. The 26B MoE variant is positioned as the efficiency pick for organizations running inference at scale, while the 31B Dense targets applications where reasoning depth outweighs cost concerns.
The Apache 2.0 relicensing is a deliberate competitive move in the open-model field. With Meta's Llama 4 still navigating benchmark credibility questions and Mistral Small 4 recently moving to the same license, the open-model tier now has three serious Apache 2.0 families. The combination of multimodal capability, long context, and permissive licensing gives Gemma 4 a compelling story for developers who previously had to choose between capability and freedom.