By a vote of 409–2, the House passed the Take It Down Act, a bill designed to combat deepfake porn, or nonconsensual sexual content generated by artificial intelligence.
The law makes it illegal to create or distribute explicit deepfake images or videos without the subject’s consent. It also requires online platforms to remove flagged content within 72 hours of notification.
Victims will also be able to sue creators, distributors, or platforms that ignore takedown requests. Lawmakers say the legislation is long overdue given how quickly AI-generated imagery has advanced.
Praised as a historic step toward safeguarding digital privacy and human dignity, the bill has the backing of President Trump and an unusually broad bipartisan coalition.
Advocates stress that children, women, and public figures are disproportionately targeted by deepfake porn, which frequently causes serious social and psychological harm.
“This is about drawing a line,” said one of the bill’s sponsors. No one, they added, should wake up to find their face on a phony pornographic video that has gone viral online without their consent.
Just two lawmakers opposed the bill, citing concerns about free-speech implications and potential government overreach. Supporters counter that the bill strikes a careful balance between platform responsibility and privacy rights.
The Senate is expected to take up the Take It Down Act in the coming weeks; with executive backing and strong bipartisan momentum, it is likely to pass.
If enacted, the bill would significantly change how the United States handles digital exploitation and the abuse of artificial intelligence.