Volume 1, No. 40 Friday, April 11, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Frontier Models

GPT-5.5 “Spud” Nears Release — Polymarket Puts 78% Odds on April Launch

OpenAI’s next model completed pretraining on March 24 and prediction markets are betting heavily that it ships before the month is out. The question is no longer whether, but what name it wears when it arrives.

OpenAI’s next frontier model, internally codenamed “Spud,” completed its pretraining run on March 24 and has entered the safety evaluation and red-teaming phase that typically precedes a public release. A Polymarket contract tracking whether the model ships before April 30 currently trades at seventy-eight cents — reflecting strong consensus among prediction-market participants that a launch is imminent. Sam Altman described the model as “very strong” and suggested it could “really accelerate the economy,” while co-founder Greg Brockman characterized it as the culmination of “two years of research.”

The naming remains an open question. Whether it ships as GPT-5.5 or leapfrogs to GPT-6 will depend on how its benchmark performance compares to the existing GPT-5 series. Industry observers note that the distinction matters less for capability than for marketing: the version number signals to enterprise buyers and developers how much of a generational jump to expect, and OpenAI has historically been deliberate about when it increments the major version. Regardless of the label, the model is expected to represent a significant step forward in reasoning, multilingual fluency, and agentic task completion.

The timing is notable. With Anthropic’s Mythos revelations dominating security headlines this week and DeepSeek V4 still rolling out, a Spud release would ensure that no single lab dominates the narrative for long. The frontier-model release cadence has compressed from roughly annual to quarterly, and the market’s confidence in an April launch suggests that cadence is still accelerating.

Copyright & Litigation

Anthropic Copyright Settlement Update — Six Authors Opt Out, File Individual Suits

The largest AI copyright settlement in history hits a procedural snag as six authors reject the class deal and pursue individual claims against every major lab, seeking $150,000 per title per defendant.

The Authors Guild published its April update on the $1.5 billion Bartz v. Anthropic class-action settlement — the largest AI copyright settlement to date — and the news is mixed. While the vast majority of class members have not opted out, six authors have formally rejected the settlement and filed individual lawsuits against Anthropic, OpenAI, Google, Meta, xAI, and Perplexity AI. Each plaintiff is seeking $150,000 per copyrighted title per defendant, a damages framework that could result in payouts far exceeding what the class settlement offers on a per-author basis: an author with ten titles, for instance, could seek up to $9 million across the six defendants.

The opt-outs are strategically significant even if they represent a small fraction of the class. Individual suits allow plaintiffs to pursue discovery against each defendant separately, potentially unearthing details about training data practices that a class settlement would have kept sealed. The final approval hearing for the Bartz settlement is scheduled for May 14, and the presiding judge will weigh whether the opt-out rate is low enough to proceed. Legal analysts expect approval, but the individual suits will continue regardless, creating a parallel litigation track that could produce binding precedent on whether training on copyrighted text constitutes fair use.

Analysis & Research

Geopolitics & AI

DeepSeek V4 Still Awaited — Testing China’s AI Ambitions

DeepSeek released the open weights for V4 — a roughly one-trillion-parameter Mixture-of-Experts model — on April 4, but full public deployment has not materialized as of today. Analysts are watching the rollout closely as a litmus test of China’s ability to train and deploy frontier-class models under tightening U.S. export controls on advanced chips. The gap between the weight release and full deployment suggests either infrastructure scaling challenges or a deliberate phased approach to managing compute demand.

Energy & Efficiency

Tufts Neuro-Symbolic Breakthrough Cuts AI Energy 100x, Boosts Accuracy

Researchers at Tufts University developed a neuro-symbolic approach that combines neural networks with classical symbolic reasoning to dramatic effect. On Tower of Hanoi planning tasks, the hybrid system achieved 95% success versus 34% for conventional neural approaches, while slashing training time from over 36 hours to 34 minutes and reducing energy consumption to roughly 1% of standard systems. With AI currently consuming over 10% of U.S. electricity, the efficiency implications are substantial if the technique generalizes beyond planning benchmarks.

Culture & Society

USC Warns AI Tools May Be Homogenizing Global Culture

A study from USC’s Yalda Daryani and Morteza Dehghani, published in Policy Insights from the Behavioral and Brain Sciences, argues that popular large language models reflect what the researchers term a WHELM perspective — Western, high-income, educated, liberal, and male. As these systems reach hundreds of millions of weekly users worldwide, the authors warn that AI creates a self-reinforcing cycle: models trained on dominant cultural outputs generate content that further narrows the range of perspectives users encounter, gradually eroding cultural diversity at global scale.

Business

OpenAI Forecasts $2.5B in Ad Revenue for 2026

OpenAI is projecting $2.5 billion in advertising revenue for 2026 and up to $100 billion annually by 2030, as it takes early steps toward a potential public listing. The company’s total annualized recurring revenue has surpassed $25 billion, driven by subscription growth and API usage. The ad revenue projection signals a strategic shift: OpenAI is positioning ChatGPT not just as a productivity tool but as an attention surface that can be monetized like search and social media before it. Whether users will tolerate ads in their AI assistant remains an open question.

Two years of research, a codename from the produce aisle, and a Polymarket contract at seventy-eight cents. The next model is always imminent, but this time the betting markets believe it.

Week in Review

From Mythos to Emergency Bank Briefings — The Week AI Moved at Crisis Speed

No AI capability disclosure has ever moved from lab announcement to central bank emergency meeting as fast as this week’s arc. On Monday, Anthropic revealed Mythos — an internal capability finding whose details remain partially classified but which triggered an immediate chain reaction across governments and financial institutions. By Tuesday, Project Glasswing had entered public discourse as defense analysts connected the capability to national security applications. Wednesday brought a pitched battle at the Pentagon over whether to blacklist the technology or accelerate its adoption. By Thursday, Federal Reserve Chair Powell and Treasury Secretary Bessent had convened an emergency briefing with global banking leaders on AI-driven cybersecurity threats.

By Friday, international regulators were scrambling to coordinate a response. The speed of escalation — from technical disclosure to central bank crisis meeting in four business days — has no precedent in the history of AI development. It signals that policymakers now treat certain AI capabilities not as abstract future risks but as immediate threats to financial infrastructure, placing them in the same category as sovereign cyberattacks and systemic banking crises.

Quick Dispatches

Global Banks on Alert as Regulators Scramble on AI Cyber Risks

International regulators and banking supervisors are coordinating an urgent response to AI-driven cybersecurity threats following this week’s Mythos revelations. Central banks in the EU, UK, and Japan have issued preliminary guidance asking systemically important institutions to audit their AI exposure by month’s end.

Z.ai Releases GLM-5.1 — #1 Open-Source, #3 Global

Z.ai (formerly Zhipu AI) released GLM-5.1, which immediately claimed the top spot among open-source models and the third position globally across major benchmarks. The model is reportedly capable of autonomous eight-hour operation while iteratively refining its own strategies — a significant step in sustained agentic reasoning.

UN AI Panel Begins Work on First Global Impact Assessment

The 40-member Independent International Scientific Panel on AI has begun work on its first comprehensive global impact assessment. The initial report is due at the Geneva summit on July 6–7. A public input portal is open through April 30, inviting submissions from civil society, industry, and academia.