Volume 1, No. 58 Thursday, April 30, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


IPO & Valuation

Bloomberg: Anthropic Reviewing October IPO at $900B+ Valuation — World’s Most Valuable AI Startup

A May board meeting is expected to finalize terms on funding offers that would value the company above $900 billion — surpassing OpenAI’s $852B and setting up a potential October debut.

Bloomberg confirmed Wednesday evening that Anthropic is reviewing inbound funding offers at a valuation exceeding $900 billion, with a board meeting expected in May to finalize the direction. Sources cited by Bloomberg say an October 2026 initial public offering is under active evaluation — a timeline detail that had not previously appeared in reporting. The disclosure arrives a week after earlier valuations of roughly $50 billion circulated; the $900B-plus figure reflects how rapidly investor appetite has escalated since Claude Mythos’s benchmark dominance became undeniable.

If the October window holds and the offering lands at or above the reported valuation, Anthropic would overtake OpenAI — currently valued at approximately $852 billion following its own recent capital markets activity — as the world’s most valuable AI startup. That inversion, measured in market-cap terms, would be the clearest public signal yet that the frontier-model race has at least two viable poles: the OpenAI-Microsoft axis and the Anthropic-Amazon-Google axis.

Anthropic’s trajectory makes the valuation arithmetically plausible. Enterprise revenue has grown sharply on the back of Claude deployments at Freshfields, Palantir, and dozens of other named accounts; government contract momentum is building (see Policy section); and Mythos’s 64.7% score on Humanity’s Last Exam with tools — the highest benchmark number any lab has publicly posted — anchors the technical credibility story. The May board meeting is, by Bloomberg’s account, the next concrete decision gate. No comment from Anthropic as of press time.

Big Tech Earnings Follow-Up

The Capex Story Has a Number: $600B+ and Climbing

After a week of Q1 earnings calls, the AI infrastructure spending picture for 2026 now has hard numbers. Microsoft guided to approximately $190 billion in capital expenditures; Alphabet projected $180–190 billion (with leadership explicitly flagging 2027 as “significantly higher”); Meta raised its range to $125–145 billion; and Amazon disclosed approximately $100 billion or more. The combined total lands somewhere between $595 and $625 billion — with Amazon’s open-ended guidance able to push it higher still — a figure with no precedent in the history of corporate technology spending.
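The combined range follows directly from the four guidance figures. A quick sanity check, treating Amazon’s “approximately $100 billion or more” as a $100B floor on both bounds:

```python
# Sum the low and high ends of each company's 2026 capex guidance ($B).
# Amazon's open-ended "or more" is treated as a $100B floor on both bounds.
guidance = {
    "Microsoft": (190, 190),
    "Alphabet": (180, 190),
    "Meta": (125, 145),
    "Amazon": (100, 100),
}

low = sum(lo for lo, _ in guidance.values())
high = sum(hi for _, hi in guidance.values())
print(f"Combined 2026 capex guidance: ${low}B-${high}B")  # $595B-$625B
```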

Who Escaped Investor Punishment

Alphabet was the outlier: its shares rose after its earnings report, buoyed by a strong cloud quarter that suggested AI spending was already generating returns. Microsoft traded roughly flat after hours, a neutral verdict from markets that had priced in the capex expansion. Meta dropped more than 6 percent — the steepest one-day reaction — as analysts pushed back on the upper end of its guidance range. Amazon’s AWS-driven beat cushioned the market’s reaction to its own spending trajectory.

The common thread: investors are not questioning whether AI infrastructure spending is real, but whether the ROI timeline is near enough to justify the magnitude. Alphabet’s clean quarter answered that question for one quarter; the others have more to prove.

The Anxiety Index

The anxiety is structural, not cyclical. Each of the four companies has committed to capex trajectories that will require not just continued AI revenue growth, but AI revenue growth accelerating faster than the spend. At the current pace, the combined annual capex of these four companies alone will exceed total global defense R&D within two years.

Alphabet’s 2027 comment is the most telling detail in any of the calls: the company is signaling that the 2026 numbers, extraordinary as they are, are a waystation rather than a ceiling. The question for the rest of the year is whether the revenue side of that equation arrives on schedule.

Models & Research

The Research Wire

Phi-4-Mini-Reasoning proves small models can do math; NVIDIA opens quantum’s AI bottleneck; Mythos posts 64.7% on Humanity’s Last Exam; and a widely cited training result turns out to have had a bug.

Small Models, Big Math

Microsoft’s Phi-4-Mini-Reasoning (3.8B) Beats 7B and 8B DeepSeek Distills on MATH-500

Microsoft Research published the training recipe for Phi-4-Mini-Reasoning, a 3.8-billion-parameter model that outscores DeepSeek-R1-Distill-Qwen-7B by 3.2 points and DeepSeek-R1-Distill-Llama-8B by 7.7 points on MATH-500 — despite being smaller than both. The four-stage recipe combines large-scale chain-of-thought mid-training, supervised fine-tuning on high-quality CoT data, Rollout DPO, and RLVR. All training data is synthetic math generated by DeepSeek-R1: roughly one million problems totaling approximately 30 billion tokens. The paper confirms that careful recipe design can compensate for a smaller parameter count — a result with direct implications for on-device and edge AI deployments where model size is a hard constraint.
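The staging order is the load-bearing detail of the recipe. A minimal Python sketch of the sequence — the stage names are paraphrased from the paper’s description, and the trainer function here is a placeholder, not Microsoft’s code:

```python
# Hypothetical sketch of the four-stage Phi-4-Mini-Reasoning recipe.
# Each stage is a placeholder that records its name; the real stages are
# full training runs (CoT mid-training, SFT, Rollout DPO, RLVR).

def run_stage(model_log, stage_name):
    """Stand-in for one full training stage; appends the stage to the log."""
    return model_log + [stage_name]

def train_phi4_mini_reasoning(base_model="phi-4-mini-base"):
    stages = [
        "cot_mid_training",  # 1. large-scale chain-of-thought mid-training
        "sft_on_cot",        # 2. supervised fine-tuning on high-quality CoT data
        "rollout_dpo",       # 3. preference optimization over sampled rollouts
        "rlvr",              # 4. RL with verifiable (checkable) math rewards
    ]
    log = [base_model]
    for stage in stages:
        log = run_stage(log, stage)
    return log

print(train_phi4_mini_reasoning())
```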

Quantum Computing

NVIDIA Launches Ising: Open AI Models for Quantum’s Two Biggest Bottlenecks

NVIDIA announced Ising — the first family of open AI models purpose-built for quantum computing’s two core infrastructure problems: error-correction decoding and processor calibration. The models deliver up to 2.5x faster and 3x more accurate decoding compared with traditional methods, according to NVIDIA’s benchmarks. Early-access partners include Harvard University, Fermilab, Lawrence Berkeley National Laboratory, and IQM Quantum Computers. The models are open-source. The significance of Ising is not raw quantum performance but the removal of classical AI bottlenecks that have slowed practical quantum deployment — error correction and calibration being the two engineering steps that have most consistently limited real-world quantum utility.

Benchmark Watch

Claude Mythos Posts 64.7% on Humanity’s Last Exam With Tools — Largest Lead Yet Over Any Rival

Claude Mythos Preview reached 64.7% on Humanity’s Last Exam (with tools) this week — the highest posted score on any publicly tracked leaderboard for the benchmark. The nearest rivals: GPT-5.4 at 52.1%, Gemini 3 Pro Preview at 37.5%, and Claude Opus 4.7 at 34.4%. On SWE-bench Verified, Mythos stands at 93.9% while Opus 4.7 Adaptive reaches 87.6%. The results cap a week in which benchmark saturation on GPQA — with top models clustering between 92% and 94% — became a major community discussion point, with several researchers arguing that GPQA can no longer differentiate frontier systems and new evaluation instruments are needed.

Training Methods

SFT-Then-RL Beats Mixed-Policy — But Bugs in Widely Used Frameworks Had Hidden It

A newly circulated preprint re-evaluates mixed-policy optimization — running supervised fine-tuning and reinforcement learning simultaneously — against the sequential SFT-then-RL approach. The finding is significant less for the conclusion (sequential wins) than for its method: the researchers discovered bugs in widely used SFT training frameworks that had artificially inflated mixed-policy results in prior comparisons. After correcting the bugs, sequential SFT-then-RL consistently outperforms across tested conditions. The result is a reminder that benchmark comparisons between training paradigms can be confounded by implementation errors as much as by genuine algorithmic differences — and that the community’s shared code infrastructure deserves the same scrutiny as its results.
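The distinction between the two paradigms is purely one of update ordering. A toy Python sketch of the two schedules — not the preprint’s code, and the step counts are arbitrary:

```python
# Contrast the two training paradigms at the level of update ordering.

def sequential_schedule(sft_steps, rl_steps):
    """SFT-then-RL: all supervised updates first, then all RL updates."""
    return ["sft"] * sft_steps + ["rl"] * rl_steps

def mixed_schedule(total_steps):
    """Mixed-policy: interleave SFT and RL updates within a single run."""
    return ["sft" if i % 2 == 0 else "rl" for i in range(total_steps)]

print(sequential_schedule(3, 3))  # ['sft', 'sft', 'sft', 'rl', 'rl', 'rl']
print(mixed_schedule(6))          # ['sft', 'rl', 'sft', 'rl', 'sft', 'rl']
```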

Policy: April Closes

Regulation at the End of April

The White House moves to clear Anthropic’s federal path; the EU AI Act omnibus is effectively dead before August; and nineteen state AI laws passed in April alone.

Federal AI Access

White House Drafting Guidance to Let Federal Agencies Deploy Claude Mythos

The White House is drafting guidance that would authorize federal agencies to work directly with Anthropic, including access to Claude Mythos — the frontier cybersecurity and reasoning model currently under review at the Pentagon following a supply-chain risk designation in February. The guidance, if finalized, would effectively override that designation, reflecting a broader shift in the administration’s posture as Anthropic’s enterprise and government footprint expands. The move tracks Anthropic’s aggressive push into regulated verticals and government contracts, and would mark one of the most explicit federal endorsements of a specific frontier AI lab since the partnership with NIST.

EU AI Act

EU AI Act Omnibus Is Effectively Dead — August 2 Deadline Stands

Legal analysts at IAPP, Modulos, and DLA Piper have concluded that the EU AI Act omnibus reform package has no realistic path to implementation before the August 2, 2026 compliance deadline for prohibited AI practices. The May 13 trilogue is the last viable window before Parliament’s summer recess, but even a deal struck that day could not be transposed and published in time to defer the deadline under EU administrative rules. Companies are advised to comply with the original regulation as written. Modulos described the collapse of the reform track as “the most consequential regulatory non-event of the year” — a phrase that captures both the seriousness of what was attempted and the scale of the failure to deliver it.

State Legislation

Nineteen AI Laws Enacted Across U.S. States in April Alone — 2026 Total Now 25

Plural Policy’s April 2026 AI governance tracker records nineteen new AI laws enacted across U.S. states in April alone, bringing the 2026 total to 25. Notable enactments include Nebraska’s Conversational AI Safety Act; chatbot disclosure bills in Idaho and Oregon; Maine’s ban on AI-only therapy sessions; Alabama’s rules on AI coverage decisions in health insurance; and Hawaii’s bill restricting companion AI for minors. Utah Governor Spencer Cox has signed nine AI bills so far in 2026, eight of them in the final two weeks of April. The pace confirms that the White House preemption framework — which would override state-level AI laws deemed to create “undue burdens” — is racing against a legislative calendar moving faster than any federal intervention can match.

Enterprise

The Agent-First Enterprise

Adobe rebrands its entire Experience Cloud around persistent AI coworkers — and every major platform provider is already in the integration stack.

Adobe CX Enterprise

Adobe Rebrands Experience Cloud as CX Enterprise — Persistent AI “Coworkers” Orchestrate Marketing Workflows

Publicis, WPP, and Omnicom are standardizing on it; AWS, Anthropic, Google Cloud, Microsoft, and OpenAI are already integrated. GA expected within months.

Adobe announced on Wednesday that its Experience Cloud platform is being rebranded and restructured as CX Enterprise — a ground-up agent-first architecture centered on what the company calls “Coworkers”: persistent AI agents that autonomously orchestrate marketing workflows across data, content, and campaign systems. The Coworkers are not one-off task runners; they maintain state across sessions and are designed to handle end-to-end campaign execution without human hand-offs at each step.

The platform integrates with AWS, Anthropic, Google Cloud, Microsoft, and OpenAI — a remarkably broad stack that reflects both the competitive positioning of the underlying model providers and Adobe’s calculation that enterprise customers will not accept vendor lock-in at the AI layer. Publicis, WPP, and Omnicom — three of the four largest advertising holding companies in the world — are already standardizing on CX Enterprise, a signal of how rapidly the agentic marketing workflow layer is consolidating around a small number of platforms. General availability is expected within months.

Vertical AI

Vertical Wins — and a Counterweight

Specialized AI commands a 3–5x pricing premium over horizontal platforms. Meanwhile, Nature publishes evidence that top AI agents still trail experienced human researchers on complex tasks.

Vertical AI’s Structural Pricing Advantage

Analysis from Asanify and Recursive argues that vertical AI in finance (Rogo), legal, healthcare, and logistics can price 3–5x higher than horizontal model APIs while achieving enterprise lock-in the generalist platforms cannot match. The dynamic mirrors the enterprise SaaS evolution of 2010–2015, when vertical specialists outperformed horizontal CRMs despite lower headline capabilities: the depth of domain-specific workflow integration proved more valuable than breadth. Regulated verticals — where compliance, auditability, and integration with existing data systems are non-negotiable — represent the clearest near-term moat for AI companies willing to invest in the unsexy infrastructure work.

Nature: Human Scientists Still Outperform Top AI Agents on Complex Research Tasks

A Nature analysis published this week pushes back on the wave of “human-level” benchmark claims. Despite strong performance on structured benchmarks, the top AI agents trail experienced human researchers on complex multi-step research tasks requiring hypothesis generation, experimental design, and cross-domain reasoning. The finding is not a rejection of AI’s scientific utility — the paper acknowledges significant productivity gains on well-defined subtasks — but it challenges the narrative that frontier models have achieved general scientific reasoning parity. It lands as a timely counterweight in a week dominated by record-breaking benchmark numbers, alongside the GPQA saturation discussion, where top models cluster at 92–94% and can no longer be meaningfully differentiated.

GitHub Trending

Today’s Most-Starred Repositories
Repo                            Language   Stars     What it does
warpdotdev/warp                 Rust       ~41.4K    Warp terminal ships stable v0.2026.04.29 today — AI-native terminal with agent sessions and block-based history.
ultraworkers/claw-code          —          ~188K     Open-source Claude Code alternative built on the Claw agent framework.
mattpocock/skills               —          ~46K      TypeScript skills registry for AI coding agents — reusable, composable agent capabilities.
nexu-io/open-design             —          ~6.1K     Open-source Claude Design alternative with 71 brand system templates.
microsoft/VibeVoice             —          ~44.5K    Microsoft’s open voice-synthesis framework for agentic applications.
NousResearch/hermes-agent       —          ~117.7K   Hermes-based autonomous agent framework from Nous Research.
VoltAgent/awesome-agent-skills  —          —         Curated list of open-source agent skills and tool integrations for agentic systems.

Toolbox

April Closes With Updates to GitHub Copilot in Visual Studio and Claude Code

GitHub Copilot in Visual Studio — April Update

  • Cloud agent sessions now launch directly from the IDE agent picker, with automatic GitHub issue and pull request creation
  • New Debugger Agent validates proposed fixes against runtime behavior before suggesting them
  • Custom agents now support user-level definitions stored at %USERPROFILE%/.github/agents/
  • C++ Code Editing Tools (class hierarchy mapping, call chain navigation) ship as GA by default in this release
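The user-level agents location resolves to the home directory on any platform. A minimal Python sketch for scaffolding it — the update documents only the directory, so any file placed inside it is hypothetical here:

```python
import os

# Resolve %USERPROFILE%/.github/agents/ in a cross-platform way:
# os.path.expanduser("~") is %USERPROFILE% on Windows and $HOME elsewhere.
agents_dir = os.path.join(os.path.expanduser("~"), ".github", "agents")
os.makedirs(agents_dir, exist_ok=True)  # idempotent if the directory exists
print(agents_dir)
```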

Claude Code — Late-April Changelog

  • New ANTHROPIC_BEDROCK_SERVICE_TIER environment variable with default, flex, and priority modes
  • /resume now accepts a GitHub, GitLab, or Bitbucket pull-request URL as a search term
  • Expanded OpenTelemetry logging for observability and tracing pipelines
  • Reliability fixes across branch handling, model selection, Vertex AI integration, voice mode, shell commands, and image sizing
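The service-tier variable can be set per shell session before launching Claude Code; the value names come straight from the changelog:

```shell
# Route Claude Code's Bedrock traffic through the flex service tier.
# Valid values per the changelog: default, flex, priority.
export ANTHROPIC_BEDROCK_SERVICE_TIER=flex
echo "$ANTHROPIC_BEDROCK_SERVICE_TIER"
```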