Volume 1, No. 57 Wednesday, April 29, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Q1 Earnings & Capital Expenditure

Big Tech Earnings Signal a $630B AI Capex Supercycle

Alphabet posts 81% net income growth and Google Cloud surges 63% YoY; Microsoft’s AI revenue hits a $37B annualized run rate; Meta raises its 2026 capex forecast to $125–145B — and still drops after-hours.

Wednesday was earnings day for the three AI-heaviest names in the S&P 500, and the numbers rewrote Wall Street’s playbook in a single session. Alphabet reported Q1 net income growth of 81% year-over-year, driven by Google Cloud’s 63% revenue surge — its fastest quarter since the business was broken out as a separate segment. Shares rose roughly 7% after-hours, the company’s best post-earnings pop in three years.

Microsoft reported AI-related revenue at a $37 billion annualized run rate, up 123% year-over-year, as Copilot seat adoption accelerated across its enterprise base. The number marks the first time any single company has publicly reported AI revenue at that scale. Azure’s capacity constraints — a recurring refrain — remained the primary caveat.

Meta, meanwhile, raised the upper end of its 2026 capital expenditure forecast to $145 billion, citing accelerating infrastructure demand for its Llama-powered products and internal AI tooling. Investors responded by sending the stock down more than 6% after-hours, illustrating the tension at the heart of the current earnings season: the more a company spends on AI infrastructure, the more analysts worry about near-term return on capital. Taken together, the three companies’ combined 2026 capex guidance now implies a $630–650 billion AI infrastructure buildout for Big Tech alone — a figure that has roughly doubled in eighteen months.

Breaking — Venture Finance

Anthropic’s board is reviewing preemptive offers for a $50B round at a ~$900B valuation — which would make it the world’s most valuable AI startup, topping OpenAI. A decision is expected in May.

Venture Finance

Anthropic Weighs $50B Raise at $900B Valuation — Surpassing OpenAI

Bloomberg and TechCrunch report the board is reviewing offers that would value Anthropic at roughly $900 billion, up sharply from its last round. Annualized revenue has climbed to $30–40B; an IPO as early as October is also under discussion.

Anthropic’s board is actively reviewing preemptive term sheets for a new $50 billion financing round that would value the company at approximately $900 billion, according to TechCrunch and a parallel Bloomberg report published Wednesday. If completed at that figure, the round would vault Anthropic past OpenAI — most recently valued at $852 billion — as the world’s most valuable private AI company.

The fundraise reflects a dramatic acceleration in Anthropic’s commercial trajectory. The company’s annualized revenue is now estimated at $30–40 billion, up from roughly $9 billion at the close of 2025 — a roughly four-fold increase in under five months, driven by Claude’s expansion across enterprise, government, and developer markets. The board is expected to reach a decision in May. Sources familiar with the discussions also said a public offering as early as October is being weighed, though no bankers have been formally engaged.

The timing is notable: the raise is being discussed on the same day Alphabet reported Google Cloud’s 63% growth, a number that underscores how deeply Anthropic’s partnership with Google is embedded in its go-to-market. Amazon, which committed $4 billion to Anthropic in 2024, also remains a key infrastructure and distribution partner. A $900 billion valuation would represent one of the fastest private-company ascents in technology history — from a $4.1 billion Series B in 2023 to near-trillion-dollar territory in under three years.

Courts

Musk v. Altman, Day Three

OpenAI attorneys grilled Musk on the witness stand — challenging his nonprofit-mission argument, questioning his commitments at xAI, and producing what CNN called the trial’s most dramatic hour.

Trial Coverage

OpenAI Counsel Turns Tables on Musk in Heated Cross-Examination

Day three of the Musk v. OpenAI trial produced what legal analysts and courtroom reporters are calling the most dramatic session yet. OpenAI’s attorneys used cross-examination to challenge Musk’s characterization of the nonprofit’s founding mission, pressing him on internal communications showing that his own priorities shifted repeatedly between 2015 and his departure from the board in 2018. Counsel also questioned the mechanics of Musk’s $130 billion damages claim, noting that its methodology relies on contested valuations of OpenAI equity dating from before the alleged conduct occurred.

The sharpest exchange came when OpenAI attorneys asked Musk to reconcile his stated AI safety concerns with xAI’s own rapid commercialization of Grok — a line of questioning NPR described as exposing “deep tensions between Musk’s public safety posture and his private competitive behavior.” CNN’s courtroom correspondent noted the moment landed visibly with the jury. The trial continues Thursday with additional defense witnesses expected.

Cloud & Infrastructure

The Multi-Cloud Era Arrives

Amazon adds OpenAI models to Bedrock; Mistral ships Medium 3.5 with a modified MIT license and vibe-coding remote agents — all within 24 hours of Microsoft’s restructured OpenAI deal.

Amazon Web Services

AWS Adds OpenAI Models to Bedrock at San Francisco Event

At its “What’s Next with AWS” event in San Francisco, Amazon announced that GPT-5.4, GPT-5.5, Codex, and a new Bedrock Managed Agents service powered by OpenAI are entering limited preview on Amazon Bedrock. The announcement arrived precisely 24 hours after Microsoft’s restructured OpenAI partnership was finalized — underscoring that OpenAI has pivoted to a deliberate multi-cloud distribution strategy rather than exclusive Azure dependency. Amazon also unveiled Amazon Quick, an AI-powered work assistant desktop application targeting enterprise productivity workflows.

Open Weights

Mistral Ships Medium 3.5 — 128B Dense, Modified MIT, Four-GPU Deploy

Mistral released Medium 3.5 on Wednesday: a 128-billion parameter dense model with a 256,000-token context window, distributed under a modified MIT license that permits commercial use but carves out revenue-share exceptions for very large deployments. The model is deployable on as few as four consumer-grade GPUs via vLLM, SGLang, and Ollama, and is also available through NVIDIA NIM. Simultaneously, Mistral announced Vibe — a remote agent service for asynchronous cloud coding sessions, positioning the company directly against Cursor, Devin, and similar agentic coding platforms.

Developer Tools

Coding Tools & Infrastructure

Cursor opens its SDK to the public; llama.cpp goes native on Blackwell’s FP4 tensor cores; Claude Code ships two releases with new hook capabilities and an OAuth fix.

Cursor SDK

Cursor TypeScript SDK Enters Public Beta for Programmatic AI Agents

Cursor launched a public beta of its TypeScript SDK (npm install @cursor/sdk), enabling developers to build programmatic AI coding agents that run on the same runtime and models powering the Cursor IDE. Agents can execute locally or inside Cursor’s sandboxed cloud VMs, which spin up dedicated repository clones per session. Pricing follows Cursor’s standard token-based billing. Cursor also published a companion engineering post titled “Continually improving our agent harness,” detailing its eval-driven approach to iterating on agent reliability — an unusually transparent look at how a frontier coding assistant is engineered for consistency.

Local Inference

llama.cpp b8967 Dispatches Native NVFP4 to Blackwell Tensor Cores

Build b8967 of llama.cpp introduces native NVFP4 MMQ dispatch for NVIDIA Blackwell GPUs (sm_120 architecture), making it the first llama.cpp build to route Blackwell FP4 tensor core operations natively. Early benchmarks on the RTX 5090 with Qwen3.6-27B-NVFP4 report prefill improvements of 43–68% (averaging roughly 57%) compared to b8966. Non-Blackwell hardware still gains NVFP4 memory efficiency without the native core dispatch.
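The reported percentages are straightforward throughput ratios between builds. A minimal sketch of the arithmetic, with hypothetical tokens-per-second figures chosen only to land near the reported ~57% average (the brief does not publish the raw benchmark numbers):

```python
def prefill_improvement(new_tps: float, old_tps: float) -> float:
    """Percent change in prefill throughput between two builds."""
    return (new_tps / old_tps - 1.0) * 100.0

# Hypothetical prefill rates for b8966 vs. b8967 on an RTX 5090;
# the actual benchmark figures were not published in this brief.
print(round(prefill_improvement(4710.0, 3000.0), 1))  # 57.0
```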

Claude Code

Claude Code v2.1.121 Adds alwaysLoad MCP, PostToolUse Output Override

Claude Code shipped two releases this week. v2.1.121 introduces an alwaysLoad MCP option that bypasses tool-search deferral for frequently used servers; adds claude plugin prune for removing stale plugins; enables type-to-filter in /skills; and allows PostToolUse hooks to replace tool output via hookSpecificOutput.updatedToolOutput. A fullscreen scroll bug was also fixed. v2.1.123 patched an OAuth 401 retry loop that surfaced when CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1 was set.
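The output-override mechanism can be sketched as a small hook script. The hookSpecificOutput and updatedToolOutput names come from the release notes above; the hookEventName field, the event shape, and the redaction step are illustrative assumptions rather than Claude Code’s documented schema:

```python
import json
import re

def redact_keys(text: str) -> str:
    """Toy post-processing step: mask anything shaped like an API key."""
    return re.sub(r"sk-[A-Za-z0-9]{8,}", "[REDACTED]", text)

def post_tool_use_reply(updated: str) -> dict:
    """Build a hook reply that replaces the tool's output via the
    hookSpecificOutput.updatedToolOutput mechanism from v2.1.121.
    The hookEventName field is an assumption for illustration."""
    return {
        "hookSpecificOutput": {
            "hookEventName": "PostToolUse",
            "updatedToolOutput": updated,
        }
    }

# A PostToolUse hook would receive the tool event as JSON and answer
# with JSON like this on stdout:
original = "Read config.env: OPENAI_KEY=sk-abc123def456XYZ"
print(json.dumps(post_tool_use_reply(redact_keys(original)), indent=2))
```

In this pattern, the hook intercepts every tool result, rewrites it, and hands the sanitized version back to the model in place of the original.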

Security

AI Cyber Watch

Anthropic and OpenAI brief House Homeland Security on offensive model capabilities; CISOs warn that AI compresses months-long attack timelines into under an hour.

Capitol Hill

Anthropic and OpenAI Hold Classified Cyber Briefings With Congress

Anthropic and OpenAI held separate classified briefings with House Homeland Security Committee staff this week, presenting findings on the offensive cyber capabilities of Mythos and GPT-5.4-Cyber respectively. According to Axios’s “Behind the Curtain” column, the models are uncovering decades-old vulnerabilities “that humans have not found for years” and can execute lateral network movement “at lightning speed.” The sessions are among the first times Congress has received direct company briefings on the offensive potential of deployed AI systems.

Enterprise Security

AI Collapses Attack Timelines From Months to Minutes, CISOs Warn

A Marketplace deep-dive published Wednesday drew on interviews with CISO-level executives who say attacks that once required months of specialized research now execute in under an hour using AI tooling. The piece documents organizations whose threat surfaces have been fundamentally restructured in under twelve months. One executive was blunt about board accountability.

“If you’re a board member at a bank or hospital, you should be asking one question: What do we need to do to detect and contain an attack that unfolds in an hour?” — CISO quoted in Marketplace / NPR, April 29, 2026

Research & Transparency

Stanford AI Index 2026 Drops Hard

Transparency scores fell 18 points in a single year as Google, Anthropic, and OpenAI stopped disclosing training details for flagship models — even as capabilities crossed every human PhD-level benchmark.

Annual Report

Foundation Model Transparency Index Collapses From 58 to 40

Stanford’s Foundation Model Transparency Index — an annual measure of how openly leading labs disclose training data, compute, and methodology — dropped 18 points in a single year, falling from 58 to 40. Google, Anthropic, and OpenAI all stopped disclosing dataset sizes and training durations for their flagship 2025 models. Eighty of the 95 most notable models released in 2025 shipped without any public training code. The index’s authors describe the trend as a “transparency deficit” accelerating in direct proportion to commercial stakes.

The capability picture is the inverse of the transparency picture: frontier models now exceed human performance on PhD-level science and mathematics evaluations across multiple independent benchmarks. The report frames this as the central paradox of 2026 — as AI grows more powerful and consequential, the information available to researchers, regulators, and the public to evaluate those systems is shrinking, not growing.

Quick Hits

Funding & Labor Briefs

Rogo Closes $160M Series D for AI Investment Banking Platform

Kleiner Perkins led the round — with Sequoia, Thrive, Khosla, and J.P. Morgan Growth Equity participating — bringing total funding for Rogo’s Felix agent platform past $300M. Felix automates deal screening, CIM generation, and data room diligence for 35,000+ professionals at Rothschild, Jefferies, Lazard, and more than 250 institutions. The raise is a clear signal that agentic AI for regulated professional services is entering its scaling phase.

AI Won’t Kill Your Job — It Will Kill the Path to Your First One

Yale CELI research cited in Fortune shows entry-level roles disappearing fastest, not established mid-career positions. C.H. Robinson is handling 29% more volume with 30% fewer employees versus 2019; major banks report 20–60% productivity gains in junior analyst work. The piece calls for targeted policy around internship and apprenticeship pipelines — the rungs that no longer exist to climb.

White House Plans Workshops to Bring Anthropic Back Into Federal Fold

An Axios follow-up scoop reports the White House is organizing a series of classified workshops designed to bring Anthropic back into the federal AI procurement pipeline, following friction over Anthropic’s earlier refusal to sign certain Pentagon AI ethics waivers. The workshops would involve the National Security Council and DOD contracting staff.

One-Example RL Paper Challenges Need for Large RLVR Datasets

A new arXiv preprint argues that a single well-chosen training example — paired with a verifiable reinforcement learning reward signal — is sufficient to produce meaningful reasoning improvements in large language models. If the result holds up, the finding has significant implications for low-resource and highly domain-specific reasoning applications, where large curated RLVR datasets are difficult or impossible to assemble.
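The preprint’s exact setup is not detailed in this brief, but the core RLVR ingredient, a programmatically verifiable reward computed on a single example, can be sketched as follows (the answer-extraction rule and the worked arithmetic example are illustrative assumptions, not the paper’s method):

```python
import re

def verifiable_reward(completion: str, gold: str) -> float:
    """Binary RLVR-style reward: 1.0 iff the last number in the model's
    completion matches the reference answer exactly, else 0.0."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return 1.0 if numbers and numbers[-1] == gold else 0.0

# The single training example: one prompt whose answer can be checked
# mechanically supplies the entire reinforcement signal.
example = {"prompt": "What is 17 * 24? Think step by step.", "answer": "408"}

print(verifiable_reward("17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408",
                        example["answer"]))  # 1.0
print(verifiable_reward("I believe the answer is 406.",
                        example["answer"]))  # 0.0
```

Because the reward is computed by a checker rather than a learned model, even one example yields an unambiguous training signal, which is what makes the single-example claim plausible for domains where answers can be verified cheaply.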

GitHub Trending

Wednesday’s Most-Starred Repositories
ultraworkers/claw-code (~188K stars): Community fork and extension layer on top of Claude Code with additional slash commands and integrations.
NousResearch/hermes-agent (~117.7K stars): Open-source general-purpose agentic framework built on the Hermes model family, with tool-use and memory.
VoltAgent/awesome-design-md (~65.7K stars): Curated collection of AI-generated design-system documentation in Markdown, ready for agent ingestion.
TauricResearch/TradingAgents (~54.4K stars): Multi-agent framework for autonomous trading strategy research, backtesting, and live execution.
warpdotdev/warp (~41.4K stars): AI-native terminal with agentic command suggestions, Rust core, and collaborative sessions.
openai/symphony (~19.5K stars): OpenAI’s new multi-agent orchestration library for coordinating specialized sub-agents in complex pipelines.
multica-ai/multica (~18.5K stars): Multi-modal agent framework supporting vision, audio, and text tool-calling in a single context loop.