Volume 1, No. 35 Saturday, April 4, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Platform Policy & Interpretability

Anthropic Ends Claude API Coverage for Third-Party Tools; Emotion-Vector Research Surfaces

Starting April 4 at noon Pacific, Claude subscriptions no longer cover API usage via third-party wrappers — and separately, internal research mapping “emotion vectors” inside the model has become public, reviving debate over AI interpretability.

Anthropic announced that effective April 4 at 12 p.m. PT, Claude subscription plans — Pro, Team, and Max — will no longer cover API consumption routed through third-party wrapper applications such as OpenClaw. The company cited capacity management as the rationale, directing developers who build on top of Claude to access the API directly through Anthropic’s own console rather than via intermediaries. Users of affected wrappers began reporting access failures within hours of the policy taking effect.
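For developers migrating off a wrapper, a direct call to Anthropic’s public Messages endpoint is straightforward. The sketch below only assembles the request — the model name is illustrative (substitute whichever model your plan covers), and the key comes from the Anthropic console:

```python
import json
import os

API_URL = "https://api.anthropic.com/v1/messages"  # Anthropic's public Messages endpoint


def build_direct_request(prompt: str, model: str = "claude-sonnet-4-5") -> tuple[str, dict, str]:
    """Assemble a direct Messages API request (no third-party wrapper)."""
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "<your-key>"),
        "anthropic-version": "2023-06-01",  # required API version header
        "content-type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    })
    return API_URL, headers, body


# To actually send it, POST the body with these headers, e.g. via
# urllib.request.Request(url, data=body.encode(), headers=headers, method="POST")
url, headers, body = build_direct_request("Summarize today's AI news.")
```

This is the same request shape a wrapper like OpenClaw would have issued on a user’s behalf; the policy change only affects whose credentials and billing the call runs under.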

The announcement landed on the same day that a previously internal research paper on what Anthropic researchers call “emotion vectors” circulated widely online. The document describes how certain learned representations inside Claude correspond to emotional-toned states — functional analogs to frustration, curiosity, and hesitation — that measurably influence the model’s outputs. The researchers are careful to frame these as mechanistic features rather than claims of sentience, but the publication reignited arguments about anthropomorphism, AI welfare, and what interpretability research actually reveals about the nature of large language models.
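The general recipe behind this kind of finding — extracting a feature direction as a difference of mean activations between two prompt sets, then adding it back to steer the model — can be illustrated with synthetic data. This is a toy sketch of the standard activation-steering technique, not Anthropic’s actual method; the activations here are random vectors with a planted offset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hidden activations collected on two prompt sets.
# In real interpretability work these come from a transformer layer;
# here they are synthetic vectors with a planted offset.
D = 16
neutral_acts = rng.normal(size=(100, D))
frustrated_acts = rng.normal(size=(100, D)) + np.linspace(0.0, 2.0, D)

# Difference-of-means "emotion vector": the direction separating
# the two activation distributions, normalized to unit length.
emotion_vec = frustrated_acts.mean(axis=0) - neutral_acts.mean(axis=0)
emotion_vec /= np.linalg.norm(emotion_vec)


def steer(activation: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Add a scaled feature direction to an activation (activation steering)."""
    return activation + strength * direction


# Projecting onto the vector scores how strongly the state is expressed.
h = neutral_acts[0]
before = float(h @ emotion_vec)
after = float(steer(h, emotion_vec, 3.0) @ emotion_vec)
assert after > before  # steering increases expression of the feature
```

The point of the exercise: a named direction is just geometry that predicts output behavior. Whether that geometry deserves an emotional label is exactly the dispute the paper reopened.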

Critics of the emotion-vector framing argue that named representations tell us nothing about subjective experience; proponents counter that understanding which internal states shape outputs is essential safety work regardless of philosophical stance. The paper adds to a growing body of mechanistic interpretability findings from Anthropic that have made it one of the most active publishers in that sub-field.

Open Weights

DeepSeek V4: 1-Trillion-Parameter MoE Released with Open Weights for ~$5.2M

DeepSeek’s latest model is a 1-trillion-parameter Mixture-of-Experts giant with fully open weights — reportedly trained for just $5.2 million, roughly two percent of what comparable proprietary models cost to build.

DeepSeek released V4, its largest model to date, with fully open weights available for download and commercial use. At one trillion parameters in a Mixture-of-Experts architecture, only a fraction of parameters activate per forward pass, keeping inference costs manageable even at frontier scale. The reported training cost of approximately $5.2 million continues the company’s pattern of compressing the cost curve of frontier AI development — prior estimates for models at this capability level routinely exceeded $200 to $500 million.
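The sparse-activation idea that makes a trillion-parameter model affordable to serve can be sketched in a few lines. The sizes and top-k routing below are illustrative, not DeepSeek’s actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 16, 2  # illustrative sizes, not DeepSeek's config

# Each expert is a small feed-forward weight matrix; the router
# scores all experts for each token.
experts = rng.normal(size=(N_EXPERTS, D, D)) / np.sqrt(D)
router = rng.normal(size=(D, N_EXPERTS)) / np.sqrt(D)


def moe_forward(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Route a token through only its top-k experts (sparse activation)."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]        # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top


x = rng.normal(size=D)
y, chosen = moe_forward(x)
# Only TOP_K of N_EXPERTS parameter blocks are touched per token, which is
# why total parameter count and per-token compute can diverge so sharply.
```

The economics follow directly: capacity (and, roughly, training knowledge) scales with total parameters, while serving cost scales with the activated fraction.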

The release intensifies pressure on closed-weight laboratories that rely on proprietary access as a competitive moat. If open-weight models match or approach frontier closed-weight performance at a fraction of the training budget, the business case for access-restricted APIs weakens substantially. DeepSeek V4 weights are available on Hugging Face under a permissive license.

Industry & Tools

Venture Capital

Q1 2026 AI Venture Funding Hits Record $267B; OpenAI’s $122B Round Dominates

First-quarter 2026 AI venture funding totaled $267.2 billion, the largest single-quarter figure on record. OpenAI’s $122 billion raise anchored by Amazon ($50B), Nvidia ($30B), and SoftBank ($30B) dominated the headline, alongside xAI’s absorption into SpaceX. The concentration of capital at the very top of the AI stack — a handful of frontier labs capturing the majority of investment — has left mid-tier model companies and application-layer startups competing for a shrinking share of the remaining pool.

Developer Tools

Codex CLI v0.118.0 Fixes Linux Sandbox, MCP Startup, and Windows Patch Issues

The latest Codex CLI release addresses a cluster of TUI workflow regressions in app-server mode, restoring /copy, /resume, and /agent commands. Linux sandbox launch reliability is hardened, MCP startup edge cases are patched, and a Windows file-permission bug on diffs is resolved. Local .codex files are now protected on creation to prevent accidental overwrites during agent runs.

Privacy & Litigation

Perplexity Lawsuit Gains Momentum — 135-Page Complaint Details Tracker Embedding

The class action filed in San Francisco federal court continues to gain traction. The 135-page complaint alleges that Perplexity embedded trackers that sent user prompts and AI-generated responses to Google and Meta even when users browsed in incognito mode. The class covers all free-tier users from December 2022 through February 2026 — a sweep that could encompass tens of millions of people. The suit draws on forensic network analysis of Perplexity’s app traffic rather than leaked documents, a strategy plaintiffs’ counsel say makes the evidence harder to dismiss.

Benchmarks

GPT-5.4 Thinking Surpasses Human Baseline on Desktop Task Benchmark

GPT-5.4 Thinking scored 75.0% on OSWorld-Verified — a 27.7-point jump over GPT-5.2 and the first score from a commercial model to exceed the established human baseline on that benchmark. The model pairs a 1-million-token context window with autonomous multi-step workflow execution, enabling it to chain browser sessions, file edits, and application interactions without human confirmation at each step. The claim of human-level desktop task automation, if it holds under adversarial evaluation, would mark a significant threshold in agentic AI capability.

A trillion parameters, open weights, five million dollars. The cost curve is not flattening — it’s in free fall.

Regulation

EU AI Act Obligations Phase In as 2026 Responsible AI Symposium Wraps

The annual Responsible AI Symposium closed this week against the backdrop of two major regulatory milestones: the EU AI Act’s prohibitions on unacceptable-risk practices have now taken effect, and general-purpose AI transparency requirements are binding for providers serving European users. Symposium polling found that 54% of U.S. adults believe AI will cause more harm than good — a figure that has risen in each of the past three surveys — while a separate Quinnipiac release underscored growing public skepticism about the pace of deployment relative to demonstrated safety work.

Colorado’s AI Act adds a U.S. state-level layer, establishing algorithmic discrimination protections for residents and imposing impact-assessment requirements on high-stakes automated decision systems. Enforcement frameworks for both the EU rules and the Colorado law remain uneven: regulators have issued detailed guidance on prohibited-practice categories but have not yet moved against any specific systems. The symposium’s consensus panel concluded that the gap between stated regulatory intent and actual enforcement capacity is the defining near-term risk for compliance teams.

Quick Dispatches

Google Gemma 4 Climbs to #3 on Arena Leaderboard

Google’s Gemma 4 continues to surge in community evaluations, now sitting third on the LMSYS Chatbot Arena leaderboard. Its Apache 2.0 license is driving unusually strong developer adoption for a model at its capability tier — fine-tuning activity on Hugging Face surpassed all other open-weight releases in the past 30 days.

Claude Code v2.1.89 NO_FLICKER Renderer Gains Traction

The NO_FLICKER terminal renderer introduced in Claude Code v2.1.89 is becoming the default recommendation in developer communities. The fix eliminates the scroll-jump artifacts that plagued long agentic sessions in iTerm2 and tmux, with several open-source orchestration frameworks updating their recommended launch configs to enable it by default.

GitHub Copilot CLI /fleet Enables Parallel Agent Orchestration

The /fleet command in the latest GitHub Copilot CLI preview allows developers to spin up multiple coding agents that share a filesystem context and coordinate work across a repository. Early adopters report significant speedups on parallelizable refactors. The feature is positioned as a direct competitor to Claude Code’s multi-agent capabilities and OpenAI’s Codex swarm mode.

GitHub Trending

Repository | Language | Stars | Description
ultraworkers/claw-code | Rust | 148.2k | Fastest repo to 100K stars; AI terminal coding agent built in Rust for minimal latency
block/goose | Rust | - | Open-source extensible AI agent that works with any LLM; designed for developer productivity workflows
siddharthvaddem/openscreen | TypeScript | - | Free open-source Screen Studio alternative for recording and sharing developer demos
EvanLi/Github-Ranking | - | - | Auto-updated daily ranking of GitHub repositories by stars and forks across all languages
antonkomarev/github-trending-archive | - | - | Daily archive of historical GitHub trending repositories; useful for tracking ecosystem momentum over time