States Forge Ahead with Deepfake Porn Legislation Amid Federal Delays
The United States Needs Legislation Against Deepfake Pornography, and Several States Are Taking the Initiative
While federal laws against deepfake pornography make their way slowly through Congress, states are stepping up to address the issue themselves. Thirty-nine states have introduced a patchwork of legislation aimed at deterring the creation of nonconsensual deepfakes and penalizing those who produce and distribute them.
Earlier this year, Alexandria Ocasio-Cortez, a Democratic representative who has herself been targeted by nonconsensual deepfake videos, introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act, known as the Defiance Act. The legislation would allow people depicted in nonconsensual deepfake pornography to sue, provided they can show the images or videos were created without their permission. In June, Ted Cruz, a Republican senator, introduced the Take It Down Act, which would require online platforms to remove both revenge pornography and nonconsensual deepfake pornography.
While these initiatives have bipartisan support, moving federal legislation through both chambers of Congress and getting it signed into law can be a lengthy process. State governments and local officials can act more quickly, and they are trying to do so.
To date, 39 states have proposed laws targeting nonconsensual deepfakes. Of these, 23 have enacted such laws, four have legislation still under consideration, and nine have rejected their proposals.
In a recent development, the office of San Francisco City Attorney David Chiu announced legal action against 16 popular websites known for enabling the creation of AI-generated explicit content. Chiu acknowledged the promise of generative AI but warned of its negative impacts and its potential for misuse by bad actors. "We must unequivocally state that such acts do not represent innovation but constitute sexual abuse," Chiu said in a statement from his office.
The lawsuit marked another effort to tackle the escalating problem of unauthorized deepfake adult content.
Ilana Beller, organizing manager at Public Citizen, a group that tracks nonconsensual deepfake legislation and shared its findings with WIRED, points out a common misunderstanding. "It's not only famous individuals who are victims of this issue," she notes. "Many ordinary people are encountering these situations as well."
Information from Public Citizen indicates that 23 states have enacted laws against unauthorized deepfake content. "The problem is widespread, prompting state lawmakers to address it," Beller notes. "Additionally, there's a keen interest among legislators to implement laws regarding AI due to the rapid pace at which this technology is evolving."
WIRED reported last year that deepfake pornographic content is on the rise, with studies suggesting that up to 90 percent of all deepfake videos are pornographic, most of them created without the consent of the women they depict. Yet Kaylee Williams, a researcher at Columbia University who has been tracking nonconsensual deepfake legislation, notes that lawmakers seem more concerned with politically motivated deepfakes than with this far more widespread problem.
"She notes that a greater number of states are focused on safeguarding the integrity of elections rather than addressing concerns related to private image issues."
Matthew Bierlein, a Republican in the Michigan state legislature and one of the sponsors of the state's package of bills addressing nonconsensual deepfakes, says his interest in the issue grew out of his efforts to legislate against deepfakes in politics. He aimed to categorize the use of undisclosed political deepfakes as a campaign finance violation and to mandate disclaimers informing the public. Along the way, Bierlein collaborated with Penelope Tsernoglou, a Democrat in the legislature, who played a significant role in advancing the nonconsensual deepfake legislation.
When unauthorized deepfake videos of Taylor Swift dominated the news in January, Bierlein believed it was an opportune moment to act. He expressed confidence that Michigan could set an example for the Midwest as a pioneer on the issue, pointing to the state's advantage of having a full-time legislature and well-compensated staff, a setup uncommon in many other states. "We recognize this problem extends beyond Michigan's borders. However, change often begins at the state level," he noted. Bierlein hoped that if Michigan addressed the issue successfully, neighboring states like Ohio, Indiana, or Illinois might follow suit, simplifying enforcement efforts.
The consequences for producing and disseminating nonconsensual deepfakes differ greatly from state to state. According to Williams, the US approach to the issue is extremely erratic. "There seems to be a recent misunderstanding that laws are being enacted nationwide. What's actually happening is a surge in the proposal of such laws," she explains.
Some states allow both civil and criminal proceedings against offenders, while others offer recourse in only one or the other. A new law in Mississippi, for example, focuses specifically on protecting minors. Over the past year, there has been a notable increase in cases of middle and high school students using generative AI tools to create and distribute explicit content featuring their peers, predominantly female students. Meanwhile, some statutes are being revised to cover adults, as lawmakers modernize existing laws that prohibit the distribution of revenge pornography.
Williams points out that while there is widespread agreement on the clear immorality of creating nonconsensual deepfakes involving minors, the situation becomes more complex with nonconsensual deepfakes of adults. The ethical considerations in these instances are less clear-cut. Often, legal measures and bills introduced demand evidence of malicious intent behind the creation and distribution of such deepfakes, aiming to demonstrate that the perpetrator intended to cause harm to the individual depicted.
Navigating the patchwork of state laws can be especially challenging online, according to Sara Jodka, a lawyer who focuses on privacy and cybersecurity. "Without being able to identify someone from their IP address, how are you supposed to establish their identity, or even demonstrate what they intended to do?"
Williams points out that when it comes to creating unauthorized deepfakes of celebrities or public personalities, many creators do not view their actions as harmful. "They often justify it by claiming it's fan-made content," she explains, "arguing that it's a form of admiration and attraction towards the individual."
According to Jodka, while state legislation is a step in the right direction, its effectiveness in addressing the problem is likely to be limited. She argues that only a nationwide ban on nonconsensual deepfakes would enable the cross-border investigations and prosecutions needed to deliver real justice and accountability, pointing to the limitations states face in pursuing cases that cross state or national lines. As a result, she believes the instances in which these laws can be effectively applied will be rare and highly specific.
However, Bierlein notes that many state lawmakers are unwilling to wait for the federal government to tackle the problem. He is particularly worried about the use of nonconsensual deepfakes in sextortion schemes, which the FBI says are becoming more frequent. In 2023, a teenager from Michigan took his own life after scammers threatened to share his (authentic) private images online. "Things progress at a snail's pace at the federal level, and if we were to wait for their intervention, it might take much longer," Bierlein says.