Federal Legislation
Senator Blackburn Drops 300-Page “TRUMP AMERICA AI Act” — Would Ban AI Training on Copyrighted Works
A sweeping, nearly 300-page discussion draft aims to establish a federal AI standard that preempts state laws. Its most consequential provision: unauthorized reproduction or “computational processing” of copyrighted works for AI training does not qualify as fair use.
Senator Marsha Blackburn (R-TN) on Wednesday released a sprawling, nearly 300-page discussion draft titled the “TRUMP AMERICA AI Act” — an acronym that spells out the president’s name — representing the most ambitious attempt yet to impose a unified federal framework on the artificial intelligence industry. The bill’s centerpiece is a provision that would explicitly declare that the unauthorized reproduction or “computational processing” of copyrighted works for AI training does not qualify as fair use under U.S. law, effectively resolving in one stroke the legal question at the heart of every major copyright lawsuit currently pending against OpenAI, Google, Meta, and Stability AI. Beyond copyright, the draft folds in the full text of the No Fakes Act, which would create a federal right of publicity covering AI-generated likenesses of real people; the Kids Online Safety Act (KOSA), which would impose a duty of care on platforms to protect minors; and a two-year sunset provision for Section 230 of the Communications Decency Act, the 1996 law that shields internet companies from liability for user-generated content.
The entertainment industry greeted the draft with near-unanimous enthusiasm. The Recording Industry Association of America called it “a landmark moment for creators,” while SAG-AFTRA president Fran Drescher said the bill “finally treats the theft of human creativity with the seriousness it deserves.” Hollywood studios, which have spent two years and tens of millions of dollars litigating AI training practices in federal court, see the legislation as a faster and more reliable path to the outcome they want than waiting for judges to rule on novel fair-use questions. Record labels, music publishers, and authors’ guilds have similarly lined up in support, viewing the bill’s bright-line rule against unauthorized training as far preferable to the patchwork of judicial opinions emerging from different circuits.
The bill’s path through the Senate, however, remains deeply uncertain. Technology companies and their allies in both parties have already raised alarms about the Section 230 sunset, warning that eliminating platform liability protections — even temporarily — would trigger a cascade of litigation that could cripple smaller internet companies and chill free speech online. Several Republican senators have privately expressed concern that the copyright provisions, while popular with the creative industries, could hamper American AI competitiveness at precisely the moment when Chinese labs are closing the capability gap. And the Trump administration itself has sent mixed signals: while the White House has endorsed the No Fakes Act and stronger online protections for children, senior advisors have pushed back against the copyright training ban, arguing that it would hand an advantage to foreign AI companies that operate under more permissive regimes. Whether Blackburn can hold together a coalition broad enough to move the draft from discussion to markup — in a Senate where AI policy has historically been the subject of hearings rather than legislation — will test whether Congress is finally ready to stop studying the problem and start writing the rules.