Balancing Act: The ACLU’s Crusade Against Deepfake Regulation and the Fight for Free Speech


The ACLU Defends Your Constitutional Freedom to Create Deepfakes

On the morning of Election Day, you check your phone and find a disturbing video of your state capitol in chaos. Plumes of smoke rise from the building, and in other clips gunfire can be heard in the background. At first you consider staying away from the polls for your own safety. Only later do you learn that the footage was fabricated with artificial intelligence.

A friend reaches out, deeply upset. A stranger has used her likeness in sexually explicit deepfakes that are now spreading across various platforms. When she went to law enforcement, they told her to hire a lawyer, but the legal warnings she has sent out have had no effect.

You are a well-known performer, and a leading technology firm approached you to lend your voice to its latest artificial intelligence assistant. You declined. When the chatbot was unveiled, the public noticed its striking vocal resemblance to yours. Your distinctive voice has been replicated without your permission, and another party is profiting from it.

As AI-generated forgeries flood the online world, it's becoming apparent that soon it won't just be high-profile figures like Scarlett Johansson with such stories to tell. In response, lawmakers throughout the United States have enacted close to a dozen laws, and proposed many more, aimed at governing the use of AI to create these imitations. But the push for legal restrictions is meeting opposition from an unexpected quarter. Civil liberties organizations, led by the American Civil Liberties Union and its state affiliates, are developing a legal strategy to limit, or potentially overturn, a number of these new regulations. At the core of their argument is the claim that the U.S. Constitution protects Americans' right to create deepfakes of one another.

"Whenever there's a surge of proposals aimed at setting rules for emerging technologies sweeping through all 50 state legislatures, not to mention countless local laws, it's inevitable that a significant portion will miss the mark in how they're drafted," Brian Hauss, a leading attorney specializing in speech, privacy, and technology for the ACLU, shared with me. "Therefore, I'm absolutely certain," he continued, "that there will be a plethora of legal challenges to these initiatives as they start to take effect."

This legal battle could set up a painful collision with the growing effort to control AI, and it may leave society in a messy position: forced to tolerate a certain degree of machine-made impersonation.

First, discard the idea that AI possesses any rights of its own. It simply does not. According to Hauss, AI "serves as a tool, akin to a toaster or any other non-living object." "However," he added, "when I utilize AI to express something to the world, my First Amendment rights come into play."

Similarly, a sign that reads “Thank God for Dead Soldiers” receives no special legal status. But when members of the Westboro Baptist Church display that message at a military funeral, they enjoy the same constitutional protections as anyone else, and however offensive the message, those rights cannot be taken away. (After the church was ordered to pay $5 million for picketing a Marine's funeral, the judgment was reversed on appeal; when the case reached the Supreme Court, the ACLU filed a brief supporting the church's right to protest, and the court ultimately sided with the church.)

Once a form of legal expression is out there—be it a sign at a demonstration or a malicious deepfake video about someone in your community—the principles of First Amendment law impose stringent restrictions on the circumstances and reasons for which the government is allowed to prevent it from being seen by others. “Envision a scenario in which the government doesn't control who has the right to speak but instead controls who has the opportunity to hear,” says Cody Venzke from the ACLU’s National Political Advocacy Department. “Both of those freedoms must coexist.” This concept is often described as the “right to listen.”

By these standards, many of the AI policies and regulations that have won bipartisan support across the country fail to pass constitutional muster. And there are a lot of them.

Last summer, the Federal Election Commission began weighing whether an existing rule against deceptive misrepresentation applies to "intentionally misleading AI-driven campaign advertisements." In a letter to the FEC, the ACLU cautioned that the rule should be applied narrowly, only to deepfakes whose creators can be clearly shown to have intended to mislead the public, not to any deepfake that might deceive some viewers. (The FEC has yet to make a determination.)

In October 2023, President Biden signed a sweeping executive order on artificial intelligence that directs the Department of Commerce to establish guidelines for watermarking AI-generated content. Biden said the public is entitled to know whether the audio they hear or the video they watch was created or altered by AI. The ACLU and other civil liberties groups have reservations about mandatory labeling: they question its efficacy, since malicious actors can bypass such measures, and they argue it compels individuals to disclose information they might prefer to keep private. By way of parallel, requiring AI-generated content to be labeled is a bit like requiring comedians to announce "This is a parody!" before imitating a politician.

State legislatures have seen a surge of activity. In January of this year alone, lawmakers introduced 101 deepfake-related bills, according to BSA, a trade group representing the software industry. One bill from Georgia stands out: it would criminalize producing or disseminating a deepfake intended to sway an election. The proposal has put lawyers and advocates at the local ACLU affiliate in a difficult position.

"Throughout its history, the Georgia branch of the ACLU has been a major supporter of voting rights," Sarah Hunt-Blackwell, who advocates for First Amendment policies at the nonprofit, shared with me. Not long before the legislation was discussed in the legislature, primary voters in New Hampshire were targeted with phone calls featuring a fabricated voice of Joe Biden, telling them not to go to the voting booths. Hunt-Blackwell expressed that this was "highly alarming."

After discussions with the national ACLU office, the team concluded that suppressing and over-punishing false political speech posed the greater danger. While the group supports narrower rules against misinformation about the timing and location of elections, which it views as a form of voter suppression, it argues that people have a constitutional right to spread falsehoods through AI, just as they have the right to spread falsehoods in writing or out loud at political gatherings. "Politics has largely been composed of falsehoods," a senior ACLU official told me.

During a presentation to the Georgia Senate Judiciary Committee on January 29, Hunt-Blackwell advocated for the removal of criminal sanctions from the proposed legislation and requested exemptions for journalistic entities interested in redistributing deepfakes as a component of their news coverage. The legislative session in Georgia concluded before any further action could be taken on the bill.

More deepfake legislation is meeting the same resistance. In January, members of Congress introduced the No AI FRAUD Act, which would give individuals rights over their own image and voice, allowing people depicted in deepfakes, and their heirs, to sue the creators and distributors of those fakes. The bill is meant to protect people from unauthorized use in both pornographic content and artistic imitations. Soon afterward, the ACLU, the Electronic Frontier Foundation, and the Center for Democracy and Technology filed written opposition.

In conjunction with multiple organizations, they contended that the legislation might be applied to stifle a wide range of expressions beyond merely unlawful discourse. The possibility of being embroiled in legal action, according to the letter, might deter individuals from engaging with the technology for activities safeguarded by the constitution, like satire, parody, or expressing personal views.

In comments given to WIRED, the initiator of the legislation, Congresswoman María Elvira Salazar, highlighted that "the No AI FRAUD Act clearly acknowledges the First Amendment rights to free speech and public expression." Congresswoman Yvette Clarke, the proponent of a similar measure mandating that deepfakes depicting actual individuals must be identified, conveyed to WIRED that modifications have been made to the bill to exempt satirical and parody content.

In conversations with WIRED, ACLU policy experts and lawyers made clear that they do not oppose narrowly tailored regulations targeting nonconsensual deepfake pornography. But they note that existing harassment laws already provide a fairly robust framework for addressing the problem. Jenna Leventoff, a senior policy counsel at the ACLU, told me that while there may be edge cases current law cannot reach, in most cases the laws already on the books are adequate.

However, not all legal experts agree on this matter. According to Mary Anne Franks, a law professor at George Washington University and a prominent supporter of stringent regulations against deepfakes, the idea that existing laws are sufficient to tackle this issue is flawed. In an email to WIRED, she pointed out, "The critical shortcoming of the argument that we already possess the necessary laws is evidenced by the rapid increase in such abuses without a proportionate rise in criminal prosecutions." Franks further explained that, in cases of harassment, prosecutors are required to convincingly demonstrate that the accused had a clear intention to harm a specific individual—a challenging standard to meet, especially when the perpetrator may not have any personal acquaintance with the victim.

"A recurring complaint from victims of this abuse is that they have no clear legal recourse, and they are uniquely positioned to know," Franks said.

As of now, the ACLU has not initiated legal action against any governmental bodies concerning regulations on generative AI. While officials from the organization have not disclosed if they are gearing up for a lawsuit, they, along with various local branches, have made it clear that they are closely monitoring developments in the legislative arena. Leventoff conveyed to me their readiness to respond promptly to any emerging issues.

The ACLU alongside various organizations acknowledge the severe consequences of generative AI misuse, ranging from spreading false political narratives to creating non-consensual explicit imagery and stealing the creativity of artists. Their involvement in such matters isn't to support the objectionable material. According to Hauss, their stance often involves defending speech they personally do not agree with. The primary concern for these groups is to halt what they perceive as a perilous drift away from constitutional protections. Hauss highlighted the potential danger in having laws that permit the suppression of deepfakes, questioning how such laws could be exploited by authoritarian figures to censor legitimate discourse about themselves.

In the previous year, the ACLU alongside various groups dedicated to protecting civil liberties co-signed a letter expressing their disagreement with a bipartisan Senate proposal that aims to hold social media companies accountable for the generative AI content they host, particularly deepfakes. The letter highlighted concerns that loosening the current protections that prevent companies from being liable for the content on their platforms could pave the way for states to initiate lawsuits against these companies for hosting non-AI related content. It referenced a legislative bill proposed in Texas the prior year, which would criminalize the act of hosting information about accessing medication for inducing abortions online. Should both the federal and state proposals be enacted, social media platforms could face legal actions for hosting content related to abortion. This could occur simply if a user employs AI support for posting content, such as using ChatGPT for drafting a tweet or creating an image with DALL-E. The letter further suggested that even widely used and basic features like autocomplete and autocorrect could be categorized under the Senate proposal’s broad definition of generative AI.

For similar reasons, the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) are skeptical of expanding so-called "publicity rights," which are meant to protect artists against AI-generated imitations. The groups argue that such rights could be exploited by the rich and powerful to quash unwelcome speech. A "Saturday Night Live" sketch impersonating Tom Cruise is legally protected, for instance, but under some of these proposals, someone making deepfake videos of Tom Cruise could face legal liability, a precedent the civil liberties groups find troubling. In March, Tennessee passed the ELVIS Act, which makes it illegal to replicate musicians' voices using artificial intelligence. The ACLU has not taken an official position on the law, but staff members who spoke with WIRED were skeptical of the idea that using creative works to train platforms such as ChatGPT or DALL-E constitutes copyright infringement.

The ACLU has a long winning record in free speech cases, and its opposition to AI rules should temper hopes that runaway AI can be reined in through legal means alone. Several civil liberties advocates I spoke with agreed that better education and stronger media literacy may be a more effective defense against AI deception than legislation and litigation. But is that enough?

Throughout history, society has tolerated a degree of offensive, often worthless, and at times harmful speech as the price of protecting the kind of discourse that sustains transparency and democracy. But the rise of technologies that can mass-produce deceptive speech, combined with algorithms that amplify divisive and inflammatory content, has a growing number of people asking whether extending the same legal protections to AI-generated speech as to human expression might ultimately backfire. "We are facing a unique challenge unlike any we've encountered before," Franks said.

With that in mind, it's important to recognize that we are still in the early stages. There's a possibility that through advocacy and regulatory intervention, a middle ground that has historically balanced the principles of free speech could be reestablished. Yet, there is also a risk that forthcoming disputes may corner us into making a choice between two undesirable outcomes. If we were to adopt a strict interpretation of the First Amendment, it could lead to a scenario where we are powerless against the digital equivalent of highly offensive protests at sensitive events whenever we are online. Conversely, even a slight adjustment to the principles governing free speech could empower future administrations with the unprecedented authority to determine which speech is considered truthful or worthwhile, and which is not.

