Volume 1, No. 56 Tuesday, April 28, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Breaking — Brussels

The second political trilogue on the EU AI Act Omnibus has collapsed after roughly 12 hours of talks, leaving the original August 2, 2026 high-risk compliance deadline in force. A third trilogue is now scheduled for May 13.

EU Regulation

EU AI Act Omnibus Trilogue Collapses; August 2026 Deadline Stands

After 12 hours of negotiations, the European Parliament, Council, and Commission failed to bridge the gap over whether AI embedded in medical devices, machinery, and vehicles falls under the AI Act or sector-specific rules — pushing a final deal to at least mid-May.

The second political-level trilogue on the EU AI Act Omnibus ended without agreement on Tuesday, following roughly 12 hours of negotiations between representatives of the European Parliament, Council of the EU, and the European Commission. The sticking point — how AI systems integrated into regulated products such as medical devices, industrial machinery, and connected vehicles should be classified — proved irreconcilable in this session.

The core dispute pits member states, which want sector-specific product-safety frameworks to remain the primary regulatory layer for embedded AI, against Parliament, which insists that AI components in high-stakes hardware must carry the full compliance obligations of the AI Act regardless of the host product’s regulatory category. The Commission has proposed a compromise that would apply AI Act requirements only to “autonomous” AI decision-making functions in regulated products, but neither side found the formulation sufficient on Tuesday.

The immediate consequence is that the original August 2, 2026 deadline for compliance with AI Act high-risk system rules remains operative. Companies that banked on an Omnibus revision relaxing or deferring those obligations must now plan for the existing framework, at least through a third trilogue scheduled for May 13. Industry groups in Brussels warned that continued uncertainty will chill near-term product launches, particularly for medical AI startups that had deferred compliance investments pending the Omnibus outcome. IAPP’s analysis notes that even a May 13 deal would leave almost no runway for secondary legislation needed before August 2.

Business

OpenAI Missed Its Own Revenue and User Targets Repeatedly in Early 2026, Sending Chip Stocks Down 3–4%

The Wall Street Journal reports OpenAI fell short of internal goals for ChatGPT weekly active users and monthly revenue several times as Gemini and Claude gained share — while CFO Sarah Friar warned the board privately about compute spending. Altman and Friar jointly denied the characterization.

OpenAI has missed its own internal targets for ChatGPT weekly active users and monthly revenue multiple times during early 2026, according to a Wall Street Journal report that CNBC confirmed Tuesday. The company had set a goal of reaching one billion weekly active users by year-end; the report suggests the trajectory has repeatedly undershot that pace as Google’s Gemini and Anthropic’s Claude erode ChatGPT’s dominant share of AI assistant usage.

CFO Sarah Friar reportedly warned the board in private sessions about the pace of compute spending relative to revenue, a tension that analysts have flagged given OpenAI’s multi-billion-dollar infrastructure commitments in the Stargate joint venture. Markets reacted sharply: Oracle fell roughly 4%, while AMD and Broadcom each dropped 3–4% on Tuesday as investors recalibrated expectations for AI infrastructure demand. The moves represent some of the steepest single-session losses for the chip supply chain in months.

CEO Sam Altman and CFO Friar issued a joint statement Tuesday denying the report’s framing, describing OpenAI’s trajectory as “strong” and declining to provide specific figures. The denial did not immediately stem the equity slide. The report lands at a delicate moment — OpenAI is in the late stages of restructuring from a nonprofit to a public benefit corporation, a process that sets the stage for an eventual IPO. Any sign of revenue-growth difficulties could complicate that narrative ahead of institutional roadshows.

Courtroom

Musk v. Altman, Day Two

Elon Musk takes the stand and accuses OpenAI’s leadership of “looting the nonprofit” — entering the company’s founding charter into evidence and recounting a falling-out with Larry Page over what it means to be pro-human.

Trial Coverage

Musk Testifies OpenAI Leaders Violated Founding Charter — Recalls Page Calling Him a “Speciesist”

Elon Musk took the stand for the second day of trial in his lawsuit against OpenAI, accusing CEO Sam Altman and co-founder Greg Brockman of violating the company’s founding charter by converting what he described as a public-benefit mission into private enrichment. Musk’s lawyers entered the original charter into evidence; its language stated OpenAI would “seek to create open source technology for the public benefit” and was “not organized for the private gain of any person.”

Under examination, Musk recounted a rupture with Google co-founder Larry Page that he said motivated his early involvement with OpenAI. When Musk raised concerns about the existential risks of superintelligent AI, Page accused him of being a “speciesist” for favoring humans over a potential digital superintelligence. Musk testified the exchange convinced him that Google’s leadership would not treat AI safety as a genuine priority, prompting him to co-found OpenAI as a counterweight. He described the current OpenAI as a betrayal of that founding vision — now serving, in his characterization, the financial interests of Altman, Brockman, and Microsoft rather than humanity at large.

“They violated the founding charter. OpenAI was not organized for the private gain of any person — and now it is.”

— Elon Musk, testimony, Day Two, Musk v. Altman et al.

Open Weights & Orchestration

The Open-Source Surge

Poolside ships Laguna XS.2 for local agentic coding and open-sources its pool terminal agent; Mistral launches Temporal-powered Workflows; Multiverse Computing compresses Qwen3 to 0.3B; Qdrant Cloud adds GPU-accelerated indexing.

Model Release

Poolside Releases Laguna XS.2 — First Open-Weight Agentic Coder That Runs Locally on Mac

Poolside released Laguna XS.2 (33B total parameters, 3B active via MoE, Apache 2.0), which it bills as the first open-weight agentic coding model that runs locally on a Mac with 36 GB of unified RAM via Ollama — targeting autonomous agentic coding workflows without a cloud dependency. Its sibling, Laguna M.1 (225B/23B active, proprietary), posts 68.2% on SWE-bench Verified and 72.5% on its lite split, the highest scores Poolside claims for any model in its tier. Both models are available free via the Poolside API and OpenRouter for a limited period.

Open-Source Tooling

Poolside Open-Sources pool — Terminal Agent That Is Both ACP Server and ACP Client

Alongside Laguna, Poolside open-sourced pool — a terminal coding agent that simultaneously acts as an ACP (Agent Communication Protocol) server for editor integrations and an ACP client for composing multi-agent pipelines. The same environment is used internally for agent RL training, making the release a rare look at the scaffolding behind a commercial frontier coding model. The repository is live on GitHub under an open license.

AI Orchestration

Mistral Workflows Enters Public Preview — Temporal-Powered Durable AI Pipelines Inside Mistral Studio

Mistral Workflows is now in public preview — a durable AI orchestration layer built on Temporal, the same workflow engine used by Netflix, Stripe, and Salesforce to run millions of daily executions. Enterprise customers ASML, ABANCA, and CMA-CGM are already running production pipelines. The integration lives inside Mistral Studio, giving developers visual pipeline authoring without managing Temporal infrastructure directly.

Edge & On-Device

Multiverse Computing Releases LittleLamb 0.3B Family — Three Ultra-Compact Models That Beat Their Qwen3 Source

Multiverse Computing released three open-source LittleLamb 0.3B models — general, tool-calling, and mobile variants — compressed roughly 50% from Qwen3-0.6B using its CompactifAI tensor-network compression framework. Despite their size, the 0.3B variants outperform the original Qwen3-0.6B on the Humanity’s Last Exam benchmark. The models target edge, on-device, and agentic use cases where latency and memory are the primary constraints.

Vector Database

Qdrant Cloud Adds GPU-Accelerated HNSW Indexing, Multi-AZ Clusters, and Structured Audit Logs

Qdrant Cloud shipped three enterprise features on Tuesday: GPU-accelerated HNSW index builds delivering 4x faster construction times, multi-availability-zone clusters with a 99.95% SLA, and structured JSON audit logging for compliance-sensitive deployments. The GPU indexing improvement is particularly significant for large-scale RAG pipelines where nightly index rebuilds have been a bottleneck as corpus sizes grow into the billions of vectors.

Policy & Geopolitics

Policy & State of Play

Florida’s House kills DeSantis’s AI Bill of Rights; Google signs a classified Pentagon AI pact; the White House drafts guidance to bypass an Anthropic risk flag; Chatham House warns of a “securitized and fragmented” multipolar AI landscape.

State Legislation

Florida House Speaker Kills DeSantis AI Bill of Rights on Day One of Special Session

House Speaker Daniel Perez refused to bring Florida’s AI Bill of Rights to the House floor on the opening day of a special legislative session, effectively killing it. The Senate had gaveled in and advanced a companion bill — requiring companion-chatbot disclosures, parental consent for minors, and bans on non-consensual AI likenesses — but the House declined to act, citing deference to the Trump administration’s executive order on federal AI preemption. Florida lawmakers focused instead on a redistricting agenda, leaving the AI bill dead for the session.

Defense AI

Google Signs Classified Pentagon AI Deal; 700+ Employees Object

Google has approved Gemini for “any lawful government purpose” in classified military networks, including mission planning and weapons targeting, according to Bloomberg. More than 700 Google employees sent a letter to CEO Sundar Pichai opposing the decision. Google joins OpenAI and xAI among AI companies cleared for classified Department of Defense use — a shift that represents a significant departure from the company’s 2018 refusal to renew its Project Maven drone-AI contract following employee protests.

Federal AI Policy

White House Drafts Guidance to Route Around Anthropic’s Pentagon Risk Flag for New Models

The White House is drafting guidance that would allow federal agencies to bypass Anthropic’s supply-chain risk designation for new models — including access to Claude Mythos — reflecting shifting national security priorities as Anthropic’s enterprise and government footprint expands. The guidance, reported by Axios, would instruct agencies on acceptable risk-mitigation steps that satisfy procurement rules without requiring Anthropic to remove or revise its own risk flags for the models in question.

Geopolitics

Chatham House: Defence AI Surge Could Fracture the US–China AI Duopoly

A new Chatham House report argues that surging dual-use and defence AI investment by European, Southeast Asian, and Gulf states could allow them to develop competitive niches and reconfigure what has functioned as a US–China AI duopoly. The report warns, however, that the likely result is a “multipolar but securitized and fragmented” AI landscape rather than genuine democratization, as export controls, military integration, and classification regimes segment the global AI ecosystem along security-alliance lines.

Quick Hits

Briefs

Critical Unpatched RCE Flaw in Hugging Face LeRobot (CVE-2026-25874)

A CVSS 9.3 remote code execution vulnerability exists in Hugging Face’s LeRobot robotics library due to unsafe pickle deserialization over unencrypted gRPC. Privately reported in December 2025, the flaw remains unpatched as of publication; a fix is planned for v0.6.0. Researchers caution against connecting LeRobot nodes to untrusted networks in the interim.
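The class of bug here is generic to pickle rather than specific to LeRobot: deserializing an untrusted byte stream lets the sender name an arbitrary importable callable to be invoked during loading. A minimal stdlib-only sketch of the mechanism — the payload calls the harmless os.getcwd as a stand-in for an attacker command, and none of these names are from the LeRobot codebase:

```python
import os
import pickle

class Payload:
    """A pickle payload: __reduce__ tells the unpickler to call an
    arbitrary importable callable with chosen arguments during loading."""
    def __reduce__(self):
        # A real exploit would return something like (os.system, ("...",));
        # os.getcwd is used here as a harmless stand-in.
        return (os.getcwd, ())

blob = pickle.dumps(Payload())   # what an attacker would send over the wire
result = pickle.loads(blob)      # merely loading the bytes runs the call

# The "deserialized object" is actually the callable's return value,
# proof that code executed inside pickle.loads().
print(result == os.getcwd())    # → True
```

This is why the interim advice is network isolation: any peer that can deliver bytes to the deserializer gets to pick the callable, and no later validation of the resulting object can undo the call.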

Even Best LLMs Score ~9% on Deep Scientific Literature Discovery (AutoResearchBench)

Beijing Academy of AI’s AutoResearchBench tests models on identifying non-obvious relevant literature across 3M+ arXiv papers. Even the best performers — including DeepResearch and Gemini 3.1 Pro — score roughly 9%, exposing a critical gap between chatbot performance and genuine scientific research utility. The arXiv preprint is circulating ahead of peer review.

GLEAN Wins ICLR “Agentic AI in the Wild” Best Paper for Clinical Diagnosis Verification

Guideline-Grounded Evidence Accumulation (GLEAN), a clinical diagnosis verification agent, won Best Paper at the ICLR 2026 “Agentic AI in the Wild” workshop. On MIMIC-IV, GLEAN surpassed the best baseline by 12% AUROC and achieved a 50% reduction in Brier score, demonstrating significant advances in AI-assisted clinical decision-making with verifiable evidence grounding.
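For context on the second metric: the Brier score is the mean squared error between predicted probabilities and 0/1 outcomes, so a “50% reduction” means halving that value. A small illustration with invented numbers, not GLEAN’s data:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted probability and the 0/1 outcome.
    0 is perfect; a constant 0.5 prediction scores 0.25; lower is better."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Two hypothetical verifiers on the same four cases (1 = diagnosis confirmed):
outcomes = [1, 0, 1, 0]
hedging = brier_score([0.60, 0.40, 0.55, 0.45], outcomes)  # vague probabilities
sharper = brier_score([0.90, 0.10, 0.85, 0.15], outcomes)  # confident and right

print(round(hedging, 5), round(sharper, 5))  # → 0.18125 0.01625
```

Unlike AUROC, which only measures ranking, the Brier score rewards calibrated probabilities — which is why clinical-decision papers typically report both.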

GitHub Trending

Today’s Most-Starred Repositories

  • mattpocock/skills (#1 today, +7,357): Matt Pocock’s TypeScript / web dev learning skills — the day’s viral breakout repo.
  • forrestchang/andrej-karpathy-skills (~96.6K total, +2,937): Curated archive of Andrej Karpathy’s educational skills and lecture materials.
  • microsoft/VibeVoice (~44.5K total, +1,676): Microsoft’s real-time voice interaction layer for agentic applications.
  • Alishahryar1/free-claude-code (+1,500): Terminal and VSCode proxy giving free-tier access to Claude Code; a persistent trending presence.
  • ComposioHQ/awesome-codex-skills (+1,225): Curated list of MCP tools and skills optimized for use with OpenAI Codex agents.
  • Z4nzu/hackingtool (+1,007): All-in-one ethical-hacking toolkit bundling hundreds of pentest utilities; a perennial top-10.
  • Fincept-Corporation/FinceptTerminal (+951): Open-source, Bloomberg-style terminal for financial data and AI-assisted market analysis.
  • HunxByts/GhostTrack (+942): Open-source location tracking and OSINT tool for security research and red-team exercises.

Toolbox

Claude Code v2.1.122 and Copilot CLI v1.0.39: Tuesday’s Developer Drops

Claude Code v2.1.122

Two notable quality-of-life improvements in Tuesday’s build:

  • ANTHROPIC_BEDROCK_SERVICE_TIER environment variable sets Amazon Bedrock service tier (default / flex / priority) without modifying config files
  • Pasting a PR URL into /resume search now finds the session that originally created that PR, supporting GitHub, GitHub Enterprise, GitLab, and Bitbucket URL formats

GitHub Copilot CLI v1.0.39

ACP integration deepened further:

  • ACP toggle for allow-all permission mode — removes per-tool confirmation prompts in trusted sessions
  • Four new ACP session slash commands: /compact, /context, /usage, /env
  • ctrl+x→b keybind backgrounds the current running task without cancelling it