Volume 1, No. 33 Thursday, April 2, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Open Models

Google Releases Gemma 4 — Four Open Models Under Apache 2.0

Google DeepMind launches four Gemma 4 variants built on the Gemini 3 architecture: E2B, E4B, a 26B MoE, and a 31B Dense, all with 256K context, native vision and audio, support for more than 140 languages, and no commercial restrictions.

Google DeepMind released Gemma 4 on April 2, a family of four open models — E2B, E4B, a 26B Mixture-of-Experts, and a 31B Dense — built directly on the Gemini 3 architecture. All four support 256K context windows, native vision and audio processing, and inference across more than 140 languages. The switch from prior Gemma licenses to Apache 2.0 is the headline policy move: previous commercial-use restrictions are gone, opening the models to enterprise deployment without licensing negotiation.

The 31B Dense model entered the Arena AI leaderboard at number three among open-weight models at launch, a strong debut for a model in this parameter class. Google is distributing all four variants through Hugging Face, Kaggle, Ollama, and Google Cloud, lowering the barrier for both research and production use. The MoE variant at 26B is positioned as the efficiency pick for organizations running inference at scale, while the 31B Dense targets applications where reasoning depth outweighs cost concerns.

The Apache 2.0 relicensing signals a deliberate competitive response to the open-model field. With Meta's Llama 4 still facing benchmark-credibility questions and Mistral Small 4 recently consolidated under the same license, the open-model tier now runs three serious Apache 2.0 families deep. The combination of multimodal capability, long context, and permissive licensing gives Gemma 4 a compelling story for developers who previously had to choose between capability and freedom.


Industry Moves

Microsoft

Microsoft Launches MAI Model Family: Transcribe, Voice, and Image

Three in-house models — MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 — are now available through Microsoft Foundry and the MAI Playground. The release marks a deliberate push to reduce Microsoft’s reliance on third-party model providers. MAI-Transcribe-1 targets enterprise speech workflows; MAI-Voice-1 brings expressive text-to-speech with prosody control; MAI-Image-2 is aimed at visual content generation for enterprise applications. All three are accessible via API in Foundry.

NVIDIA

NVIDIA Releases Nemotron, Cosmos, Alpamayo, and Clara Model Families

A broad open model push from NVIDIA spanning agentic AI (Nemotron), physical and robotics AI (Cosmos and Isaac GR00T N1.5), autonomous vehicles (Alpamayo), and biomedical research (Clara). The releases mark NVIDIA’s most aggressive move into the model layer to date — hardware dominance is no longer the whole strategy. Nemotron targets multi-step agentic reasoning; Cosmos grounds physical simulation for robotics; Alpamayo addresses sensor fusion for autonomous driving; Clara accelerates drug discovery and genomics pipelines.

Benchmarks

GPT-5.4 “Thinking” Surpasses Human-Level Desktop Automation

OpenAI’s GPT-5.4 Thinking scored 75.0% on OSWorld-Verified — a 27.7-point jump over GPT-5.2 — and 83.0% on GDPVal, clearing the bar set by expert human performance on economically valuable tasks. The model can autonomously navigate file systems, browsers, and terminals across a wide variety of desktop workflows. OSWorld-Verified tests the model against real computer environments rather than synthetic benchmarks, making the score one of the most credible demonstrations of agentic capability to date.

Research

MIT Designs Proteins by Motion, Not Shape — and Detects Atomic-Scale Chip Defects

Two MIT teams published breakthroughs in the same week. The first group demonstrated protein design based on dynamic motion rather than static 3D structure, opening new paths to adaptive therapeutics and programmable biomaterials. A separate MIT team used AI to identify atomic-level defects in semiconductor manufacturing — a precision previously unachievable without destructive testing — offering a route toward significantly improved chip yield rates.

“More than fifty percent of U.S. adults believe AI is likely to cause significant harm within the next decade.” (Responsible AI Symposium 2026 polling, via NBC News)


Regulation

Responsible AI Symposium 2026: 38 U.S. States Have Now Passed AI Laws

Technology leaders, government officials, and academic researchers convened at the Responsible AI Symposium 2026, where new polling data sharpened the stakes: more than 50% of U.S. adults now believe AI is likely to cause significant societal harm. That number coexists with an accelerating legislative record — 38 states have passed AI laws covering elections, medical AI applications, and algorithmic discrimination in employment and lending.

The Trump administration’s executive order directing the Department of Justice to challenge state-level AI laws escalates an emerging conflict. Federal preemption advocates argue a patchwork of 50 different state standards will hobble U.S. AI competitiveness; state legislators counter that the administration’s preferred “light-touch” federal framework amounts to no regulation at all. The symposium closed without consensus, but the 38-state figure underscores that enforcement momentum now sits with the states regardless of federal posture.

Election integrity provisions passed in 19 states require disclosure when AI-generated content appears in political advertising. Medical AI laws in 14 states mandate clinical oversight before AI diagnostic tools are used in patient care. Algorithmic discrimination statutes in 11 states impose audit requirements on automated hiring and lending systems. Each body of law reflects different risk assumptions, different enforcement mechanisms, and different preemption vulnerabilities — a legal landscape that will take years of litigation to settle.


Quick Dispatches

Claude Code v2.1.89 Ships NO_FLICKER Rendering Engine

Viewport virtualization eliminates scroll flicker in large output windows. The new NO_FLICKER engine renders only visible content at any given scroll position, cutting reflow cost significantly on long agentic sessions.
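Claude Code’s actual rendering engine isn’t public, but the viewport-virtualization idea the note describes can be sketched in a few lines: given the scroll position and viewport height, compute only the range of lines that intersect the visible window (plus a small overscan buffer), and render nothing else. All names and parameters below are illustrative, not the product’s API.

```python
def visible_window(total_lines: int, line_height: int,
                   scroll_top: int, viewport_height: int,
                   overscan: int = 2) -> tuple[int, int]:
    """Return the (first, last) line indices a virtualized viewport should render.

    Only lines intersecting [scroll_top, scroll_top + viewport_height] are kept,
    padded by `overscan` lines on each side to hide pop-in while scrolling.
    """
    first = max(0, scroll_top // line_height - overscan)
    last = min(total_lines,
               (scroll_top + viewport_height) // line_height + 1 + overscan)
    return first, last


# A 10,000-line session scrolled to pixel 4,000 in a 600px viewport
# renders only ~35 lines instead of reflowing all 10,000.
first, last = visible_window(10_000, 20, 4_000, 600)
```

Because the render cost now depends on viewport height rather than total output length, reflow work stays constant no matter how long the agentic session runs, which is where the flicker reduction comes from.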

Copilot CLI Launches /fleet for Parallel Multi-Agent Orchestration

The new /fleet command ships simultaneously with v2.1.89, enabling users to spawn and coordinate multiple concurrent agent sessions from a single CLI interface. Targeted at complex multi-step workflows that benefit from parallelism.
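The /fleet internals aren’t documented in the release note; as a minimal sketch of the pattern, assuming nothing about Copilot’s implementation, here is the spawn-and-gather shape in Python’s asyncio, with `run_agent` as a hypothetical stand-in for a real agent session:

```python
import asyncio


async def run_agent(name: str, task: str) -> str:
    """Stand-in for one agent session; a real one would stream model calls."""
    await asyncio.sleep(0)  # yield to the event loop, as real I/O would
    return f"{name}: done({task})"


async def fleet(tasks: list[str]) -> list[str]:
    """Spawn one concurrent session per task and wait for all to finish."""
    sessions = [run_agent(f"agent-{i}", t) for i, t in enumerate(tasks)]
    # gather preserves input order, so results line up with tasks
    return await asyncio.gather(*sessions)


results = asyncio.run(fleet(["lint", "test", "docs"]))
```

The appeal of the pattern for multi-step workflows is that independent subtasks run concurrently while the caller still gets back one ordered result list to merge.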

Meta Pauses Mercor Data Contracts Under Investigation

Meta suspended data contracts with Mercor while investigating potential exposure of model-training data. The pause affects third-party annotators and data contractors working through Mercor’s platform. No disclosure has been made about what data may have been improperly accessed or shared.

GitHub Trending

Repo                        Language    Stars    Description
ultraworkers/claw-code      Rust        148.2K   Fastest repo to 100K stars; AI coding agent
siddharthvaddem/openscreen  TypeScript  ~18K     No-watermark video editor alternative to Screen Studio
block/goose                 Rust        ~45K     Open-source extensible AI agent by Block
EvanLi/Github-Ranking                   ~18K     Auto-updating daily GitHub stars and forks leaderboard
google/gemma.cpp            C++         ~22K     Lightweight C++ inference engine for Gemma models

Source: Trendshift • Star counts as of April 2, 2026