Volume 1, No. 39 Thursday, April 10, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Cybersecurity & Financial Regulation

Powell and Bessent Summon Bank CEOs to Emergency Briefing on Mythos Cyber Risk

Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent convened an urgent meeting with the heads of America’s largest banks to warn that AI-driven cyberattack capabilities may have outpaced the financial system’s defenses.

The Federal Reserve Chair does not summon bank CEOs on a Thursday morning unless the threat is already inside the building. Jerome Powell and Treasury Secretary Scott Bessent did exactly that this week, calling the chief executives of Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo to an emergency cybersecurity briefing focused on Claude Mythos — the Anthropic model whose autonomous capabilities have dominated headlines since its restricted release. JPMorgan’s Jamie Dimon was the only major bank chief absent from the meeting.

The briefing’s core message, according to people familiar with the discussion, was blunt: current financial-sector cyber defenses were designed for human-speed adversaries and may be structurally inadequate against AI systems capable of identifying and exploiting vulnerabilities at machine speed. Regulators walked attendees through classified threat assessments describing potential attack vectors — from automated social engineering of bank employees to real-time exploitation of zero-day vulnerabilities in trading infrastructure — that Mythos-class models could theoretically execute with minimal human oversight.

The meeting signals a new phase in the government’s response to frontier AI capabilities. Where previous regulatory actions focused on model access and export controls, this intervention targets the downstream institutions that would bear the consequences of an AI-enabled breach. Several attendees reportedly left the session requesting follow-up technical briefings from their own security teams, and at least two banks have since initiated emergency reviews of their AI-facing threat models.

Export Controls & Geopolitics

DeepSeek Accused of Training V4 on Smuggled Nvidia Blackwell Chips

Reports allege the Chinese AI lab sourced export-banned Nvidia Blackwell GPUs through a smuggling network to train its next-generation model, reigniting fierce debate over whether U.S. chip controls are enforceable.

The Information reported this week that DeepSeek trained its upcoming V4 model on Nvidia Blackwell-architecture GPUs — chips that have been explicitly banned from export to China under U.S. Commerce Department rules since late 2024. According to the report, the hardware was sourced through an intermediary smuggling ring and is physically located in a data center in Inner Mongolia. Nvidia issued a statement calling the claims “far-fetched,” but declined to comment on specific supply-chain tracking measures.

The allegations strike at the heart of Washington’s semiconductor containment strategy. The entire logic of export controls rests on the assumption that physical chokepoints in the chip supply chain can effectively throttle adversarial AI development. If Blackwell-class hardware can reach Chinese labs through gray-market channels at training-relevant scale, the policy achieves little beyond raising costs and alienating allied chip buyers who face the same licensing bureaucracy. Congressional hawks are already calling for expanded enforcement powers, while industry voices argue the episode proves that supply-side controls are a leaky vessel for managing a demand-side problem.

Analysis & Industry

Labor & Economics

Goldman Sachs: AI Is Eliminating 16,000 U.S. Jobs Per Month, With Gen Z Taking the Brunt

New Goldman Sachs research quantifies what the labor market has been feeling for months: AI substitution is destroying roughly 25,000 American jobs per month while augmentation recovers only about 9,000, producing a net loss of approximately 16,000 positions every thirty days. The study finds entry-level workers under 30 are absorbing the heaviest impact — a one standard-deviation increase in AI exposure widens the wage gap between entry-level and experienced workers by approximately 3.3 percentage points.
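The substitution-versus-augmentation arithmetic cited above can be checked in a few lines. This is a minimal illustration using only the figures quoted from the Goldman Sachs study; the annualized number is an extrapolation for context, not a claim from the report.

```python
# Figures as cited in the article (illustrative only).
jobs_destroyed_per_month = 25_000   # losses attributed to AI substitution
jobs_recovered_per_month = 9_000    # gains attributed to AI augmentation

net_monthly_loss = jobs_destroyed_per_month - jobs_recovered_per_month
annualized_loss = net_monthly_loss * 12  # simple extrapolation, assumes a steady rate

print(net_monthly_loss)  # 16000
print(annualized_loss)   # 192000
```

At the cited pace, the net effect compounds to roughly 192,000 positions over a year, assuming the monthly rate holds steady.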

The findings complicate the prevailing narrative that AI is primarily a productivity enhancer rather than a job eliminator. For Generation Z, the first cohort to enter the workforce in the age of capable AI coding assistants and automated customer service, the data suggests the traditional career ladder — start at the bottom, learn on the job, advance — is being sawed off at the lower rungs.

Cybersecurity

Experts Say the AI-Driven Hacking Era Has Already Arrived

Even if Anthropic succeeds in keeping Mythos under strict access controls, security researchers warn the genie is already out of the bottle. Fortune and NBC News analysis found that AI-enabled mass vulnerability exploitation — using language models to discover, weaponize, and deploy exploits at scale — is already accessible through a combination of open-weight models, fine-tuning, and tool-augmented agents. The Mythos scare is a symptom, not the disease.

Developer Tools

84% of Developers Now Use AI Coding Tools Daily; Only 29% Trust the Output

A sweeping JetBrains survey of professional developers reveals a paradox at the center of the AI coding revolution: adoption is nearly universal, but trust remains strikingly low. As of April 2026, 84% of developers report using AI coding assistants daily — Claude Code, Cursor, GitHub Copilot, and Google Antigravity among the most popular — yet only 29% say they trust AI-generated code enough to ship it to production without significant manual review.

The trust gap suggests the current generation of AI coding tools has succeeded as a drafting aid but not yet as an autonomous contributor. Cursor’s trajectory to $2 billion in annual recurring revenue by early 2026 demonstrates that developers will pay handsomely for speed even when they don’t fully trust the output — a business model built on acceleration rather than delegation.

“The Fed Chair does not summon bank CEOs for a hypothetical. When Powell calls, the threat is real.” — Bloomberg, April 10, 2026

Regulation

China Issues Comprehensive AI Ethics Governance Guidelines

China’s Ministry of Industry and Information Technology, joined by nine other government departments, issued sweeping Administrative Measures for AI Ethics that took effect April 3. The rules require every university, research institution, and company conducting AI research to establish an internal ethics committee with binding oversight authority. High-risk projects — defined broadly to include foundation model training, autonomous decision systems, and biometric applications — must undergo mandatory government-led expert review before deployment.

The new framework introduces a dual-track governance model: “algorithm filing” (requiring registration of AI systems with regulators) combined with “ethical evaluation” (requiring pre-deployment review of high-risk applications). The approach mirrors elements of the EU AI Act but goes further in mandating institutional ethics infrastructure rather than relying primarily on external auditing. Western observers note the irony of China’s government — which faces its own criticism over AI-enabled surveillance — positioning itself as a global leader in AI ethics governance, though the substantive requirements are among the most detailed any government has issued to date.

AI Model Revenue Race

Estimated Annual Recurring Revenue — April 2026
Company      Model / Platform    Est. ARR    Change
Anthropic    Claude              $30B        +233% from EOY 2025
OpenAI       ChatGPT / API       $25B
Cursor       AI Code Editor      $2B
Perplexity   Search / Agents     $450M       +50% in one month

Quick Dispatches

U.S. AI Chip Export Push Hits Bureaucratic Bottleneck

The Trump administration’s plan to flood allied markets with American AI chips is running into its own government. Licensing backlogs, staff attrition, and internal disputes at the Bureau of Industry and Security have slowed approvals to a crawl, undermining the strategic goal of ensuring allied nations buy American rather than turning to alternatives. The bureaucratic gridlock threatens to turn an export push into an export trickle at precisely the moment global demand for AI compute is surging.

Hoodline

Transparency Coalition Tracks 19 New AI Laws Passed in Q1 2026

The Transparency Coalition’s quarterly legislative tracker counts 19 new AI-related laws enacted at state and federal levels in the first quarter of 2026. Highlights include the federal GUARDRAILS Act — which directly challenges the Trump administration’s AI Preemption Executive Order — and the Youth AI Privacy Act establishing guardrails for AI systems interacting with minors. The pace of legislation has accelerated sharply from 2025, when the full-year total was 23.

Transparency Coalition