Election Integrity
AI Deepfake Attack Ads Hit U.S. Midterms With No Federal Guardrails
Virginia Republicans released fabricated video of Governor Spanberger; Senate Republicans published a deepfake of Texas candidate James Talarico. Reuters documents widespread AI-generated disinformation across the 2026 midterm cycle — and there is no federal law to stop it.
A Reuters investigation published March 28 documents the systematic deployment of AI-generated attack ads across the 2026 midterm cycle. The Virginia Republican Committee released a deepfake video of Governor Abigail Spanberger making statements she never made, while Senate Republicans published fabricated footage of Texas Democratic nominee James Talarico. The technology has matured faster than any regulatory response: no federal law constrains AI-generated content in political advertising, leaving only a patchwork of state laws that vary widely in scope and enforcement.
The implications go beyond individual campaigns. When voters cannot distinguish authentic footage from AI fabrications, the epistemic foundations of democratic participation erode: candidates can dismiss real footage as fake, and fake footage can be presented as real. The window between now and November is shrinking, and every week without federal action is another week in which the tools become cheaper, more convincing, and more widely deployed.