Volume 1, No. 32 Wednesday, April 1, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Security

Meta Halts Mercor Partnership After LiteLLM Supply Chain Breach Exposes AI Training Secrets

A March 24 supply chain attack on LiteLLM versions 1.82.7–1.82.8 compromised the $10 billion AI training-data platform serving Anthropic, OpenAI, and Meta — potentially exposing proprietary dataset strategies and labeling protocols.

Mercor, the AI training data startup valued at $10 billion that counts Anthropic, OpenAI, and Meta among its customers, confirmed a major security breach on March 24 via a supply chain attack targeting LiteLLM versions 1.82.7 and 1.82.8. Attackers — attributed by initial threat intelligence reports to the Lapsus$ group — compromised Trivy, the open-source container security scanner embedded in LiteLLM’s CI/CD pipeline, to insert malicious code that exfiltrated environment variables and credentials during the build process.

The scope of the breach is substantial. Security researchers estimate that up to 500,000 machines and more than 1,000 SaaS environments may have been affected during the roughly 72-hour window before the compromised package versions were pulled from PyPI. For Mercor specifically, the concern is what data the attackers may have accessed: dataset selection criteria, human labeling protocols, proprietary model evaluation rubrics, and training strategies that represent the intellectual core of what premium AI data vendors sell.

Meta moved quickly, suspending all active Mercor projects within hours of the disclosure becoming public. Meta declined to specify which projects were paused or for how long, saying only that it was conducting “a thorough assessment of potential exposure.” Neither Anthropic nor OpenAI had issued public statements as of publication time, though both confirmed they use Mercor services.

The incident arrives at an uncomfortable moment for the broader AI supply chain conversation. LiteLLM is one of the most widely deployed LLM proxy and routing libraries in production — used to unify API calls across providers — meaning the blast radius of a compromised version extends well beyond any single customer. Security experts are now urging organizations to rotate credentials, audit CI/CD pipeline dependencies, and pin package versions with verified checksums.
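The last of those recommendations can be made concrete. As a minimal sketch (the file name and digest below are hypothetical), this is the check that hash-pinning tools such as pip's `--require-hashes` mode perform before an artifact ever reaches the build environment:

```python
# Illustrative sketch: verify a downloaded package artifact against a pinned
# SHA-256 digest before installing. Artifact name and digest are hypothetical.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_pin(path: Path, pinned_digest: str) -> bool:
    """Accept the artifact only if it matches its pinned digest exactly."""
    return sha256_of(path) == pinned_digest.lower()
```

In practice the pinned digests live in a requirements file (e.g. `litellm==1.82.6 --hash=sha256:...`), so a tampered upload fails the install step instead of executing inside CI.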


Developer Tools

Claude Code v2.1.89 Ships NO_FLICKER Renderer; Copilot CLI Adds /fleet

Claude Code v2.1.89 introduces an experimental viewport-virtualizing terminal renderer, enabled via CLAUDE_CODE_NO_FLICKER=1, that eliminates the scroll jank and screen flicker that have plagued long-running agentic sessions. The release also adds mouse event support for in-terminal UI interactions, allowing tools built on Claude Code to respond to click and scroll events without requiring separate terminal multiplexers.

Meanwhile, GitHub Copilot CLI launched /fleet, an orchestrator command that decomposes a stated goal into parallel sub-agent tasks sharing a common filesystem. Early testers report significant throughput gains on multi-file refactors and test generation tasks. Both tools are converging on the same model: a single human instruction spawning coordinated agent swarms rather than single-shot completions.
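The fan-out pattern both tools share can be sketched generically. Assuming nothing about Copilot's internals (every name below is hypothetical), the core idea is a coordinator that splits a goal into named sub-tasks, runs them concurrently, and lets them communicate through a shared filesystem workspace:

```python
# Generic sketch of the orchestrator pattern described above. This is an
# illustration of the idea, not GitHub Copilot's or Claude Code's actual
# implementation; run_subtask is a stand-in for real agent work.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import tempfile


def run_subtask(workspace: Path, name: str, payload: str) -> Path:
    """A stand-in sub-agent: do some work, write output into the shared tree."""
    out = workspace / f"{name}.txt"
    out.write_text(payload.upper())  # placeholder for real agent output
    return out


def fleet(goal: str, subtasks: dict[str, str]) -> list[Path]:
    """Decompose `goal` into named sub-tasks and run them concurrently,
    all sharing one workspace directory."""
    workspace = Path(tempfile.mkdtemp(prefix="fleet-"))
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        futures = [
            pool.submit(run_subtask, workspace, name, payload)
            for name, payload in subtasks.items()
        ]
        return [f.result() for f in futures]
```

The shared-filesystem choice is what distinguishes this style from isolated sandboxed agents: sub-tasks can read each other's intermediate output, at the cost of needing conflict-avoidance conventions.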

Politics

Anthropic Files to Create AnthroPAC Amid AI Midterm Spending War

Anthropic registered “Anthropic PBC Political Action Committee” — AnthroPAC — with the Federal Election Commission, making it the latest AI lab to formalize political activity ahead of the 2026 midterms. The bipartisan PAC will be funded by voluntary employee contributions and is intended to support candidates who engage seriously with AI safety and governance issues, according to a company memo reviewed by reporters.

The filing comes as total AI industry political spending for the 2026 cycle now tops $300 million, with a pro-deregulation Super PAC aligned with Trump AI czar David Sacks already pledging $100 million. The midterms are shaping up as a referendum on two competing AI visions: unfettered acceleration backed by major labs, versus safety-conditional regulation backed by a coalition of academics and civil society groups.

Regulation

Colorado AI Act Takes Effect — First Compliance-Grade State AI Law Now Enforceable

Colorado’s SB 205, one of the most comprehensive U.S. state AI laws, became enforceable today — marking a milestone in the patchwork of state-level AI regulation that has advanced faster than federal action. The law covers both developers and deployers of high-risk AI systems used in consequential decisions about employment, credit, housing, healthcare, and education.

Covered entities must now maintain risk management documentation, conduct algorithmic impact assessments, disclose AI use to affected consumers, and implement bias mitigation strategies. Noncompliance can draw civil enforcement by the Colorado Attorney General. With 38 states having passed roughly 100 AI-related measures since 2023, compliance teams across the country are watching Colorado’s enforcement actions as a preview of what a federal regime might look like.

Funding

Anthropic Series G Closes at $30B; Annualized Revenue Approaches $19B

Anthropic’s Series G round has closed at a $30 billion valuation, sources confirm, as the company’s annualized revenue approaches $19 billion — still trailing OpenAI’s reported $25 billion-plus ARR but closing the gap faster than most analysts projected. The funding comes during a record quarter for global AI venture: Q1 2026 VC deal value hit $267.2 billion worldwide, its highest level ever recorded.

The raise arrives against a backdrop of complications: Anthropic is involved in an ongoing legal dispute with the Department of Defense over contract terms, and the newly filed AnthroPAC signals the company is moving to shape the regulatory environment more directly. With compute costs still the dominant expense line, the round is expected to flow primarily into infrastructure and model training.


Research

Google TurboQuant Presented at ICLR 2026 — 40% Hallucination Reduction

Google Research presented TurboQuant at the International Conference on Learning Representations 2026, addressing one of the most persistent problems in deploying large models at scale: the memory overhead introduced by vector quantization, the family of techniques used to compress model weights without significant accuracy loss. Prior vector quantization approaches introduced latency spikes and instability in longer reasoning chains; TurboQuant’s core contribution is an adaptive codebook scheme that reduces this overhead by approximately 38% while maintaining representation fidelity.
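For readers unfamiliar with the baseline technique: plain vector quantization replaces each weight row with an index into a small learned codebook. The sketch below shows only that baseline; TurboQuant's adaptive codebook scheme is not publicly specified at this level of detail, so nothing here should be read as the paper's method.

```python
# Baseline vector quantization of a weight matrix: learn a small codebook
# with a few k-means rounds, then store one codeword index per row.
# This illustrates the general technique, not TurboQuant's adaptive scheme.
import numpy as np


def fit_codebook(weights: np.ndarray, k: int, iters: int = 20,
                 seed: int = 0) -> np.ndarray:
    """Learn k codewords by running Lloyd's k-means on the weight rows."""
    rng = np.random.default_rng(seed)
    codebook = weights[rng.choice(len(weights), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each row to its nearest codeword (squared Euclidean distance).
        d = ((weights[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        # Recompute each codeword as the mean of its assigned rows.
        for j in range(k):
            members = weights[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook


def quantize(weights: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Compress: keep only the index of the nearest codeword per row."""
    d = ((weights[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)


def dequantize(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Reconstruct approximate weights from indices plus the codebook."""
    return codebook[indices]
```

The memory saving comes from storing one integer per row plus a k-by-d codebook instead of the full n-by-d matrix; the codebook and lookup machinery are exactly the overhead that adaptive schemes of the kind the paper describes aim to shrink.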

The more striking finding is behavioral: models using TurboQuant exhibit what the paper terms “deliberate pausing” — a learned response pattern where the model inserts structured pause tokens before answering complex logical problems. This emergent behavior appears to mirror the effect of chain-of-thought prompting without requiring explicit instruction. Across a standardized hallucination evaluation suite, TurboQuant-compressed models generated 40% fewer factual errors compared to 2025 baselines, a result the authors attribute to the pause mechanism giving the model additional compute cycles before committing to an answer.

Independent researchers reviewing the preprint have called the result “preliminary but intriguing,” noting that the hallucination reduction may be partially attributable to the evaluation suite design rather than solely to the architectural change. Google says TurboQuant will be integrated into future Gemini model versions.


In Brief

Open-Source AI Landscape Consolidates Around Six Competitive Labs

Six organizations now field genuinely competitive open-weight frontier models: Google (Gemma 4), Alibaba (Qwen 3.6), Meta (Llama 4), Mistral (Small 4), OpenAI (gpt-oss-120b), and Zhipu (GLM-5). The concentration marks a significant shift from 2024’s fragmented ecosystem, where dozens of labs released models of varying quality.

Responsible AI Symposium 2026 Convenes as Federal-State Law Conflict Heats Up

The Responsible AI Symposium 2026 opened today as tension between federal preemption efforts and state-level AI laws reaches its highest pitch yet. With 38 states having passed roughly 100 AI measures since 2023, legal experts warn of a compliance crisis for companies operating nationally. The symposium’s central question: whether federal inaction justifies state experimentation, or whether patchwork regulation ultimately harms the people it aims to protect.

GPT-5.4 Thinking Surpasses Human Experts on OSWorld Desktop Benchmark

OpenAI’s GPT-5.4 Thinking variant scored 75.0% on the OSWorld Desktop benchmark, surpassing human expert performance (estimated at 72.4%) on a suite of real-world computer use tasks involving web browsing, file management, spreadsheet manipulation, and application control. The result is the first publicly reported benchmark where an AI system scores above the human expert baseline on this evaluation.


GitHub Trending

Repo                                 Language    Stars   Description
ultraworkers/openclaw                Rust        210K+   Local AI gateway connecting LLMs to 50+ integrations
block/goose                          Rust        ~45K    Open-source extensible AI agent
langgenius/dify                      Python      130K    Open-source LLM application platform
caramaschiHG/awesome-ai-agents-2026  Markdown    ~18K    300+ AI agent resources, frameworks, and use cases
siddharthvaddem/openscreen           TypeScript  ~9K     Free product demo and screencast tool
n8n-io/n8n                           TypeScript  ~95K    Fair-code workflow automation with AI node support

Source: Trendshift • Star counts as of April 1, 2026