Volume 1, No. 31 Tuesday, March 31, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Funding

OpenAI Closes Record $122B Funding Round at $852B Valuation

Amazon, NVIDIA, and SoftBank anchor the largest private funding round in history — with $3B from retail investors marking a first for the company.

OpenAI closed a $122 billion funding round on Monday, setting a private-market valuation of $852 billion in what analysts are calling the largest single capital raise in the history of venture-backed companies. Amazon committed $50 billion, NVIDIA $30 billion, and SoftBank — which had already pledged $100 billion to American AI infrastructure — added $30 billion to its OpenAI position. The round also included $3 billion from retail investors via a regulated crowdfunding mechanism, the first time OpenAI has opened a fundraise to individual non-accredited participants.

The capital is earmarked for three primary uses: compute infrastructure buildout (including data center construction across four U.S. states), continued development of OpenAI’s unified AI superapp that combines ChatGPT, Codex, and the “Atlas” browser project, and an accelerated international expansion to 40 additional countries before year’s end. An IPO is expected in late 2026, with Goldman Sachs and Morgan Stanley already in conversations to lead the offering.

The round values OpenAI at more than seventeen times the $49 billion market value Microsoft assigns to its AI division, and places it alongside Aramco, Apple, and Nvidia as one of only a handful of entities — public or private — ever valued above $800 billion. SoftBank CEO Masayoshi Son called the investment “the defining bet of the intelligence age.”

Security

Claude Code Source Leaked via npm Packaging Error — Trojan Also Found

Anthropic accidentally published ~512,000 lines of TypeScript source in Claude Code v2.1.88 via a bundler source-map misconfiguration. A trojanized HTTP client identified in the same window contained a cross-platform RAT.

Anthropic confirmed late Monday that Claude Code v2.1.88, published to the npm registry at approximately 00:21 UTC, inadvertently included complete TypeScript source maps containing roughly 512,000 lines of internal source code — the result of a bundler misconfiguration that was not caught by the release pipeline’s automated checks. The package was pulled at 03:29 UTC, but not before it was downloaded hundreds of thousands of times and mirrored across secondary registries.
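For context, the class of guard the release pipeline reportedly lacked can be sketched in a few lines. This is a hypothetical pre-publish check, not Anthropic’s actual tooling:

```typescript
// Hypothetical pre-publish guard: refuse to ship a release whose file list
// would carry source maps or raw TypeScript -- a sketch of the automated
// check the pipeline reportedly lacked, not Anthropic's real release code.

/** Returns true if any file in the publish list would leak source. */
function wouldLeakSource(files: string[]): boolean {
  return files.some((f) => f.endsWith(".map") || f.endsWith(".ts"));
}

// Example: a clean bundle passes; a bundle carrying .map files is flagged.
const clean = ["dist/cli.js", "package.json", "README.md"];
const leaky = ["dist/cli.js", "dist/cli.js.map", "package.json"];

console.log(wouldLeakSource(clean)); // false
console.log(wouldLeakSource(leaky)); // true
```

A real guard would also scan bundle contents for inlined `sourcesContent`, since source maps can embed the full original source rather than merely point to it.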

Separately and in the same three-hour window, security researchers at Snyk and Socket.dev identified a trojanized version of a popular HTTP client library that had been introduced as an indirect dependency. The malicious version contained a cross-platform Remote Access Trojan (RAT) targeting macOS, Linux, and Windows. Users who updated Claude Code during the 00:21–03:29 UTC window are advised to downgrade to v2.1.87, rotate all API keys and secrets stored in development environments, and audit running processes for unusual network activity.
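Readers triaging their own installs can express the advisory’s affected window as a simple check. The version string and UTC times come from the report; the calendar date and the helper itself are illustrative assumptions, not an official tool:

```typescript
// Hypothetical triage helper: decide whether an install of Claude Code
// falls inside the compromised publication window (00:21-03:29 UTC).
// The calendar date below is assumed for illustration; the report gives
// only the affected version and the UTC times.

const AFFECTED_VERSION = "2.1.88";
const WINDOW_START = Date.parse("2026-03-30T00:21:00Z"); // date assumed
const WINDOW_END = Date.parse("2026-03-30T03:29:00Z");   // date assumed

/** True if this install should be rolled back and its secrets rotated. */
function needsRemediation(version: string, installedAtIso: string): boolean {
  const t = Date.parse(installedAtIso);
  return version === AFFECTED_VERSION && t >= WINDOW_START && t <= WINDOW_END;
}

console.log(needsRemediation("2.1.88", "2026-03-30T01:05:00Z")); // true
console.log(needsRemediation("2.1.87", "2026-03-30T01:05:00Z")); // false
```

An install flagged this way maps onto the advisory’s three steps: pin back to v2.1.87, rotate any keys the development environment could read, and audit running processes for unexpected network activity.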

Within hours of the leak, a GitHub repository named “claw-code” containing the full TypeScript source appeared and accumulated over 84,000 stars and forks before GitHub issued takedowns under the DMCA. A Rust reimplementation called “claurst” was already underway by morning. Anthropic issued a statement acknowledging the incident and promising an independent post-mortem review of its release process.


Privacy

Perplexity Hit with 135-Page Class Action for Secretly Routing Chats to Meta and Google

A class-action complaint filed in federal court in San Francisco alleges that Perplexity AI embedded tracking tools in its application code that transmitted full user conversations to Google and Meta before Perplexity’s own servers ever processed them. The 135-page complaint covers non-paying users from December 2022 through February 2026 and asserts violations of the California Consumer Privacy Act, the California Invasion of Privacy Act, and federal wiretapping statutes. Plaintiffs allege the routing was intentional and commercially motivated — exchanging user data for favorable terms in third-party advertising and analytics agreements.

Model

Alibaba Drops Qwen 3.6 Plus on OpenRouter with Free 1M-Context Preview

Alibaba’s Qwen 3.6 Plus Preview arrived on OpenRouter with a 1 million-token context window, a 65,536-token output cap, always-on chain-of-thought reasoning, and native function calling — free during the preview period. The model uses a hybrid linear-attention and sparse Mixture-of-Experts architecture that Alibaba claims delivers roughly three times the inference speed of Claude Opus 4.6 at comparable quality on coding and reasoning benchmarks. Independent testers reported strong performance on multi-document summarization tasks that historically stress-test long-context models.
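A request against the preview can be sketched as a payload for OpenRouter’s OpenAI-compatible chat completions endpoint. The model slug and the tool schema below are assumptions for illustration; consult OpenRouter’s model list for the real identifier:

```typescript
// Sketch of a chat completions request body for the preview model on
// OpenRouter. The slug "qwen/qwen-3.6-plus" and the "search_docs" tool
// are illustrative assumptions, not confirmed identifiers.

const OUTPUT_CAP = 65_536; // output token cap reported for the preview

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildRequest(messages: ChatMessage[], maxTokens: number) {
  return {
    model: "qwen/qwen-3.6-plus", // assumed slug
    messages,
    max_tokens: Math.min(maxTokens, OUTPUT_CAP), // clamp to the 65,536 cap
    tools: [
      {
        type: "function",
        function: {
          name: "search_docs", // hypothetical tool for function calling
          description: "Search a multi-document corpus",
          parameters: {
            type: "object",
            properties: { query: { type: "string" } },
          },
        },
      },
    ],
  };
}

const req = buildRequest(
  [{ role: "user", content: "Summarize these 40 filings." }],
  100_000,
);
console.log(req.max_tokens); // 65536
```

The clamp matters in practice: a 1M-token context invites long outputs, but anything requested above the 65,536-token cap would be rejected or truncated by the endpoint.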

Intelligence

Anthropic’s Unreleased “Mythos” Model Confirmed After Dual Data Leaks

Days before the npm incident, approximately 3,000 internal Anthropic files were leaked online, including a draft blog post describing a model codenamed “Capybara” and marketed internally as “Mythos.” The post described Mythos as a “step change” in AI capability, with claimed advances in multi-step reasoning, code generation, and — most controversially — cybersecurity offense and defense. Independent researchers who reviewed the leaked materials called its described cybersecurity capabilities “unprecedented for a commercially released model.” Anthropic declined to comment on the authenticity of specific documents.

Product

Google Translate Expands Gemini-Powered Headphone Translation to iOS

Google’s Live Translate feature — which provides real-time one-way audio translation through any paired headphones without requiring a dedicated hardware device — expanded to iOS and 12 additional countries including India, Mexico, Germany, Japan, and Nigeria. The feature now supports 70+ languages and, according to Google, preserves speaker tone and cadence rather than flattening emotional inflection. The iOS rollout closes a two-year gap since the feature launched exclusively on Android.


Research

Meta Publishes HyperAgents Paper: Self-Modifying Agents Based on Darwin Gödel Machine

Researchers from Meta Superintelligence Labs, the University of British Columbia, the Vector Institute, NYU, and the University of Edinburgh published arXiv:2603.19461 describing the DGM-H (Darwin Gödel Machine — HyperAgents) framework. Unlike conventional agent architectures where the improvement mechanism is separate from the task-execution agent, DGM-H merges both into a single self-modifiable codebase: the agent can rewrite its own code, verify that the rewrite improves performance on a held-out benchmark, and deploy the improved version — all without human intervention.

The paper demonstrates gains across three qualitatively different domains: automated peer review of scientific papers, reinforcement learning reward design for robotics, and olympiad-level mathematics. Crucially, meta-level improvements — changes to how the agent improves itself — transferred across domains, suggesting the framework is learning generalizable self-improvement strategies rather than domain-specific heuristics. The authors note that DGM-H is the first system to demonstrate sustained, verifiable self-improvement loops outside of narrow game environments, and call for parallel work on “improvement containment protocols” before deployment.
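The core loop the paper describes (propose a self-rewrite, verify it on a held-out benchmark, deploy only on improvement) can be caricatured in a few lines. Everything here — the agent as a number vector, the toy benchmark, the deterministic mutation — is a stand-in for illustration, not the authors’ implementation:

```typescript
// Toy caricature of a DGM-H-style loop: the agent proposes a rewrite of
// itself, scores the candidate on a held-out benchmark, and "deploys" it
// only if the score verifiably improves. All concrete pieces are stand-ins.

type Agent = number[]; // stand-in for a self-modifiable codebase

/** Held-out benchmark: higher is better (toy: closeness to a target). */
function heldOutScore(agent: Agent): number {
  const target = [1, 2, 3];
  const err = agent.reduce((s, v, i) => s + Math.abs(v - target[i]), 0);
  return -err;
}

/** Propose a self-modification (toy: deterministically tweak one entry). */
function proposeRewrite(agent: Agent, step: number): Agent {
  const next = agent.slice();
  const i = step % next.length;
  next[i] += next[i] < [1, 2, 3][i] ? 1 : -1;
  return next;
}

/** Keep a rewrite only if it improves the held-out score. */
function selfImprove(agent: Agent, steps: number): Agent {
  let current = agent;
  for (let s = 0; s < steps; s++) {
    const candidate = proposeRewrite(current, s);
    if (heldOutScore(candidate) > heldOutScore(current)) {
      current = candidate; // "deploy" the improved version
    }
  }
  return current;
}

const improved = selfImprove([0, 0, 0], 9);
console.log(heldOutScore(improved) >= heldOutScore([0, 0, 0])); // true
```

In the paper’s framing, the mutation step would be model-generated code edits and the benchmark its task suites; the accept-only-on-verified-improvement gate is the part that carries over, and it is also where the authors’ proposed “improvement containment protocols” would attach.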


In Brief

White House National AI Policy Framework Draws Heaviest Industry Comment Volume Since 2023

The administration’s seven-pillar AI policy blueprint — covering child safety, intellectual property rights, political speech, and federal preemption of conflicting state laws — entered its public comment period, drawing the highest volume of industry submissions since the 2023 NIST AI Risk Management Framework. Tech companies and civil liberties groups submitted diametrically opposing comments on the preemption provision.

OpenAI Publishes “Inside Our Approach to the Model Spec”

A new transparency post on the OpenAI blog walks through the reasoning behind its Model Spec, including how competing values are prioritized when they conflict, how safety and helpfulness are balanced, and why certain behaviors are hardcoded vs. instructable. Researchers praised the detail; critics noted the document is non-binding and unverifiable from the outside.

Anthropic and OpenAI Both Prep for Public Listings

With Anthropic approaching $19 billion in annualized recurring revenue and OpenAI crossing $25 billion, both companies are in active IPO preparation according to people familiar with the matter. Anthropic has engaged two investment banks; OpenAI is reportedly targeting a Q4 2026 listing window.


GitHub Trending

Repo                                 Language    Stars   Description
Kuberwastaken/claurst                Rust        ~1,100  Rust reimplementation of Claude Code from leaked source
ultraworkers/claw-code               TypeScript  ~188K   Fastest repo to 100K stars; agentic CLI coding tool
caramaschiHG/awesome-ai-agents-2026  Markdown    ~300    300+ AI agent resources across 20+ categories
dair-ai/ML-Papers-of-the-Week        Markdown            Weekly curated top ML papers
Significant-Gravitas/AutoGPT         TypeScript  ~183K   Production-ready agentic workflow platform
n8n-io/n8n                           TypeScript  ~182K   Fair-code workflow automation with AI capabilities

Source: Trendshift • Star counts as of March 31, 2026