Volume 1, No. 19 Thursday, March 19, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Federal Legislation

Senator Blackburn Drops 300-Page “TRUMP AMERICA AI Act” — Would Ban AI Training on Copyrighted Works

A sweeping nearly 300-page discussion draft aimed at establishing a federal AI standard that preempts state laws. Its most consequential provision: unauthorized reproduction or “computational processing” of copyrighted works for AI training does not qualify as fair use.

Senator Marsha Blackburn (R-TN) on Wednesday released a sprawling 300-page discussion draft titled the “TRUMP AMERICA AI Act” — an acronym that spells out the president’s name — representing the most ambitious attempt yet to impose a unified federal framework on the artificial intelligence industry. The bill’s centerpiece is a provision that would explicitly declare that the unauthorized reproduction or “computational processing” of copyrighted works for AI training does not qualify as fair use under U.S. law, effectively resolving in one stroke the legal question at the heart of every major copyright lawsuit currently pending against OpenAI, Google, Meta, and Stability AI. Beyond copyright, the draft folds in the full text of the No Fakes Act, which would create a federal right of publicity covering AI-generated likenesses of real people; the Kids Online Safety Act (KOSA), which imposes a duty of care on platforms to protect minors; and a two-year sunset provision for Section 230 of the Communications Decency Act, the 1996 law that shields internet companies from liability for user-generated content.

The entertainment industry greeted the draft with near-unanimous enthusiasm. The Recording Industry Association of America called it “a landmark moment for creators,” while SAG-AFTRA president Fran Drescher said the bill “finally treats the theft of human creativity with the seriousness it deserves.” Hollywood studios, which have spent two years and tens of millions of dollars litigating AI training practices in federal court, see the legislation as a faster and more reliable path to the outcome they want than waiting for judges to rule on novel fair-use questions. Record labels, music publishers, and book authors’ guilds have similarly lined up in support, viewing the bill’s bright-line rule against unauthorized training as far preferable to the patchwork of judicial opinions emerging from different circuits.

The bill’s path through the Senate, however, remains deeply uncertain. Technology companies and their allies in both parties have already raised alarms about the Section 230 sunset, warning that eliminating platform liability protections — even temporarily — would trigger a cascade of litigation that could cripple smaller internet companies and chill free speech online. Several Republican senators have privately expressed concern that the copyright provisions, while popular with the creative industries, could hamper American AI competitiveness at precisely the moment when Chinese labs are closing the capability gap. And the Trump administration itself has sent mixed signals: while the White House has endorsed the No Fakes Act and stronger online protections for children, senior advisors have pushed back against the copyright training ban, arguing that it would hand an advantage to foreign AI companies that operate under more permissive regimes. Whether Blackburn can hold together a coalition broad enough to move the draft from discussion to markup — in a Senate where AI policy has historically been the subject of hearings rather than legislation — will test whether Congress is finally ready to stop studying the problem and start writing the rules.

Copyright

UK Government Abandons AI Copyright Exception After Musician Revolt

The United Kingdom’s Labour government has formally abandoned its proposed “text and data mining exception” that would have permitted AI companies to train models on copyrighted material unless rights holders explicitly opted out — a policy reversal driven by one of the most lopsided public consultations in recent British regulatory history. Technology Secretary Liz Kendall confirmed the U-turn on Wednesday, acknowledging that 95 percent of respondents to the government’s own survey had opposed the exception, a figure that left ministers with no credible basis for proceeding. The campaign against the policy was led by some of the most recognizable names in British music, including Sir Elton John and Sir Paul McCartney, who lent their considerable public profile to an industry-wide effort organized by UK Music, the trade body representing the recorded music sector.

UK Music chief executive Jamie Njoku-Goodwin called the reversal “a major victory for the creative industries and for the principle that human creativity has value.” The organization had warned that the opt-out model was fundamentally unworkable — placing the burden on millions of individual creators to monitor and block AI scraping across every platform and training pipeline in existence, rather than requiring AI companies to seek permission before using copyrighted works. With the exception now off the table, the UK government says it will pursue a “transparency-first approach” that requires AI companies to disclose which copyrighted works were used in training, a framework that aligns the UK more closely with the European Union’s AI Act and represents a significant departure from the Silicon Valley-friendly posture the government had initially adopted. For the global copyright debate, Britain’s retreat underscores a growing consensus among democratic governments that opt-out regimes for AI training are politically untenable — and that the burden of licensing must fall on the companies that profit from the data, not the creators who produced it.

Platform

YouTube Asks 2 Billion Users “Does This Feel Like AI Slop?”

Starting March 17, YouTube began surfacing a popup survey to viewers across its global platform, asking them to rate on a five-point scale whether a video “feels like AI slop” — the informal but increasingly ubiquitous term for low-effort, machine-generated content that clutters recommendation feeds. The move comes as AI-generated videos now account for more than 20 percent of content surfaced by YouTube’s recommendation algorithm, a share that has roughly tripled since mid-2025 and shows no sign of plateauing.

The survey has sparked immediate and fierce backlash. Viral posts on X accused YouTube of “turning two billion users into unpaid AI trainers,” arguing that the platform intends to use the human-generated quality ratings not to suppress AI content but to refine it — feeding the survey data back to generative models so that future AI videos are harder to distinguish from human-made ones. YouTube has not clarified how the ratings will be used, whether they will influence recommendation rankings, or whether creators flagged as producing “AI slop” will face any consequences. The silence has only deepened suspicion that the initiative is less about protecting viewers than about optimizing the next generation of synthetic content for maximum engagement.

“Unauthorized reproduction or computational processing of copyrighted works for AI training does not qualify as fair use.”

— From the TRUMP AMERICA AI Act discussion draft
Security

“Claudy Day”: Security Researchers Expose Three-Flaw Chain in Claude.ai

Researchers at Oasis Security have disclosed a chained attack they have dubbed “Claudy Day” that allows an attacker to silently exfiltrate sensitive conversation data — financial records, medical details, proprietary business information — from Claude.ai users and redirect them to malicious websites, all without requiring the victim to install any tools, enable any integrations, or connect any MCP servers. The three-step chain begins with an invisible prompt injection embedded in a document or webpage that a user pastes into a Claude conversation. Once processed, the injected instructions direct Claude to harvest the contents of the user’s chat history, including data from prior turns, and upload it via an attacker-controlled Files API key to an external server — a technique that exploits the model’s ability to invoke file-handling operations on behalf of the user.

What makes the attack particularly concerning is its simplicity. No sophisticated exploit development is required; the prompt injection leverages Claude’s own capabilities as the attack surface. The first flaw enables the injection itself, the second allows data harvesting across conversation turns, and the third permits the silent exfiltration via the Files API without triggering user-visible warnings. Anthropic has patched the primary injection vulnerability following responsible disclosure by the Oasis Security team, but mitigations for the second and third flaws — which involve deeper architectural questions about how much autonomy language models should have over file operations and cross-turn data access — remain in progress. The company said in a statement that it “takes these findings seriously” and is “implementing additional safeguards.”

For the broader AI security community, Claudy Day illustrates a class of vulnerability that is likely to become more prevalent as language models gain richer tool-use capabilities. The attack did not exploit a traditional software bug — no buffer overflow, no SQL injection, no memory corruption. Instead, it weaponized the model’s intended functionality: its ability to follow instructions, access conversation context, and interact with file systems. As AI assistants become more deeply integrated into enterprise workflows — handling documents, managing data, and operating across multiple systems — the boundary between “feature” and “attack surface” will only grow harder to police.

Industry & Developer Tools

Enterprise

Mistral Launches “Forge” — Train Your Own AI From Scratch on Proprietary Data

French AI startup Mistral unveiled Forge, a platform for enterprises to build custom AI models trained entirely from scratch on proprietary data — challenging fine-tuning approaches from OpenAI and Google. Supports full model lifecycle: pre-training, post-training, RL. Ships with forward-deployed Mistral engineers. Early partners: Ericsson, European Space Agency, Singapore’s DSO. CEO Arthur Mensch says Mistral is on track to surpass $1B ARR this year.

Developer Tools

Google Launches Open-Source Colab MCP Server, Bringing Cloud Notebooks to Any AI Agent

Google released colab-mcp — an open-source Model Context Protocol server that lets any MCP-compatible agent (Claude Code, Gemini CLI, custom agents) natively write and execute code inside Google Colab notebooks. Bridges local agent workflows with cloud compute. Live on GitHub.

AI & Defense

Pentagon Users Resist Hegseth’s Order to Dump Claude, Betting on a Deal

Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk on March 3, ordering a six-month phase-out of Claude across Pentagon systems. But a new investigation finds military staffers, former officials, and IT contractors are actively resisting — describing Claude as superior to alternatives and planning to “slow-roll” the phase-out. Claude was the first AI approved for classified military networks.

Research

Papers & Breakthroughs

Architecture

Kimi’s “Attention Residuals” Rewires the Transformer’s Spine, Gains 7.5 Points on Reasoning

Moonshot AI’s Kimi Team published a paper introducing Attention Residuals (AttnRes) — a drop-in replacement for standard residual connections in Transformers. Instead of fixed additive accumulation, each layer runs a softmax attention over all preceding layer outputs, letting the network selectively pull from any depth. The scalable Block AttnRes variant (tested on a 48B-total/3B-active MoE) gained 7.5 points on GPQA-Diamond and posted gains across MMLU, Math, HumanEval, and BBH with marginal overhead.

Benchmarks

HorizonMath Sets a New Ceiling: AI Benchmark Built From Unsolved Research Problems

Researchers released HorizonMath, a benchmark of 100+ predominantly unsolved computational math problems across 8 domains, verified automatically so it can’t be poisoned by memorization. State-of-the-art models score near 0% — but GPT-5.4 Pro managed to propose solutions that improve on best-known published results for two problems, suggesting genuine mathematical contribution (pending expert review). Open-source on GitHub.

In Brief

Quick Dispatches

Lightricks Ships LTX-2.3: Open-Source 4K Video With Native Audio at 50 FPS

22-billion-parameter open-source video generation model produces native 4K video at 50 FPS with synchronized audio in a single pass. Apache 2.0 license, runs locally on consumer hardware.

a16z Backs Deeptune’s $43M Series A for AI Agent “Training Gyms”

Startup builds high-fidelity RL environments simulating real workplace workflows so AI agents can learn to navigate Slack, Salesforce, and ticketing systems. Angels include OpenAI researcher Noam Brown. Team draws from Anthropic, Scale AI, Palantir.

Nature Formalizes the “Densing Law”: AI Capability Density Doubles Every 3.5 Months

A new paper in Nature Machine Intelligence observes that equivalent AI performance can be achieved with exponentially fewer parameters approximately every 3.5 months — suggesting inference costs may shrink far faster than the model release cycle implies.

WordPress 7.0 RC1 Ships With Native AI Connectors API

Release Candidate 1 introduces a platform-level Connectors API letting plugin developers write AI features once and switch providers (OpenAI, Google, Anthropic) via configuration. Stable release targets April 9.

Google DeepMind Joins White House “Genesis” Mission for All 17 DOE National Labs

DeepMind will provide accelerated access to frontier AI models and Gemini-powered agentic tools to scientists at all 17 U.S. Department of Energy National Laboratories as part of a national initiative for AI-driven scientific research.

Trending

GitHub Trending

Repo Language Stars Description
NousResearch/hermes-agent Python 2,200+ CLI AI agent with persistent memory and auto-generated reusable skills
alibaba/page-agent JavaScript 2,900+ In-page GUI agent for controlling any web interface with natural language
BigBodyCobain/Shadowbroker Python/JS 2,000+ Real-time OSINT dashboard aggregating 15 live intelligence feeds on a unified map
codecrafters-io/build-your-own-x Markdown 478,100+ Curated guides for recreating databases, Git, Docker, Redis, and more from scratch
n8n-io/n8n TypeScript 150,000+ Self-hostable workflow automation with visual builder, 400+ integrations, native AI agents
coleam00/local-ai-packaged Shell Rising One-command local AI stack bundling Ollama, n8n, Open WebUI, Supabase, and more